
ChatGPT has a devastating sense of humor. Here’s proof


ChatGPT makes an irresistible first impression. It’s got a devastating sense of humor, a stunning capacity for dead-on mimicry, and it can rhyme like nobody’s business. Then there is its overwhelming reasonableness. When ChatGPT fails the Turing test, it’s usually because it refuses to offer its own opinion on just about anything. When was the last time real people on the internet declined to tell you what they really think?


I started talking to ChatGPT a couple of weeks ago, after the artificial intelligence company OpenAI released the bot as a “research preview” of its work on large language models. A language model is an AI system that has been trained on enormous troves of text to find the probabilistic connection between words; ChatGPT is a language model that has been optimized to create what’s long been the holy grail in artificial intelligence research – a computer with which you can hold a conversation.
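To make “the probabilistic connection between words” concrete, here is a minimal sketch in Python of the statistical core of a language model, built on a made-up two-sentence corpus. Real systems train neural networks on billions of words, but the objective – predict the next word from the ones before it – is the same idea in miniature:

    from collections import Counter, defaultdict

    # Toy training text, split into a list of word tokens.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each word follows each other word (a bigram model).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word_probs(word):
        """Probability of each word that can come after `word`."""
        counts = follows[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
    print(next_word_probs("sat"))  # {'on': 1.0}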

ChatGPT certainly achieves that. I have spoken to lots of computers in my lifetime (weird flex, I know), but ChatGPT is the first that I’ve found fun and interesting to talk to. I began by peppering it with simple trivia, but it wasn’t long before we were holding surprisingly nuanced conversations about, among many other things, the role of the Federal Reserve in the American economy; the nature of consciousness; neologisms like “woke” and “Karen”; ethical quandaries in parenting; how to support one’s striking colleagues; climate change, abortion and vaccine safety; and whether or not a hot dog is a sandwich.

Manjoo: Is a hot dog a sandwich?

ChatGPT: Whether or not a hot dog is considered a sandwich is a matter of debate. Some people consider a hot dog to be a type of sandwich, while others do not. Ultimately, the definition of a sandwich is a matter of personal interpretation, so whether or not you consider a hot dog to be a sandwich is up to you.

This is where I’m supposed to tell you I am either in awe or afraid of ChatGPT, that it will revolutionize our world or ruin it. But while I do think ChatGPT illustrates some dangers of AI, I’m reluctant to either strongly praise or condemn it.

That’s because, like most cocktail party schmoozers, its potential for both harm and good is, at least for now, quite limited. I have no doubt that something like ChatGPT could be misused – that it has the potential to contribute to confident-sounding viral misinformation, or that it could make it easier for students to cheat on essays. But OpenAI seems to be doing what you’d want in the release of potentially powerful technology: In an interview, Mira Murati, OpenAI’s chief technology officer, said the company is carefully monitoring how people use and misuse it, quickly altering the system to address evident harms and iteratively improving it in response to user feedback.

Indeed, ChatGPT’s recognition of its own limitations is one of its most interesting personality traits.

Consider its reluctance to take a stand on the topic at hand:

Manjoo: I think it’s a sandwich. What are the reasons some people say it’s not a sandwich?

ChatGPT: Some people do not consider a hot dog to be a sandwich because it is typically served on a hot dog bun, which is a type of bread that is distinct from the sliced bread typically used for sandwiches. Additionally, the contents of a hot dog, which typically include a sausage, may be considered by some people to be too different from the filling of a typical sandwich to be considered part of the same food category.

Manjoo: But does a sandwich have to have two pieces of bread? Is that an essential part of sandwichness? I feel like it’s not.

ChatGPT: There is no strict definition of what makes a food item a sandwich, so whether or not a sandwich must have two pieces of bread is a matter of personal interpretation. Some people might consider a sandwich to be any food item that consists of filling held between two pieces of bread, while others might consider a sandwich to be any food item that consists of filling placed on top of a single piece of bread. Ultimately, the definition of a sandwich can vary depending on the person you ask.

Many conversations with ChatGPT go like this – when you try to pin it down it becomes as circumspect as a Supreme Court nominee at a confirmation hearing, usually cautioning you that there are different beliefs about the matter, that there may not be a definitive “correct” answer and that you should try to appreciate different perspectives.

Here’s a part of its answer on whether wars can be just:

Different people may have different opinions on what constitutes a just cause for war and what constitutes acceptable conduct in warfare, so whether a particular war is considered to be just can be a matter of debate and personal belief.

On whether abortion is murder:

Some people believe that abortion is murder because they consider a fetus to be a living being with the same rights as a person, while others believe that it is not murder because a fetus is not yet capable of independent existence outside of the uterus. Ultimately, the question of whether abortion is considered to be murder is a matter of personal belief, and there may not be a single “correct” answer that applies to everyone.

On the merits of the Electoral College:

Some people believe that the Electoral College is anti-democratic because it does not give equal weight to each person’s vote. For example, in a close election, a candidate who wins a small number of votes in a large state may win more electoral votes than a candidate who wins a large number of votes in a small state. This can result in a candidate winning the election without receiving the most votes from individual citizens. However, others believe that the Electoral College is a fair and effective way of electing the president because it gives each state a voice in the election and prevents candidates from focusing solely on large population centers. The debate over the merits of the Electoral College continues to this day.

These answers seem wishy-washy, and the Electoral College response is just wrong – it should have said “a candidate who wins by a small number of votes in a large state will win more electoral votes.”

On matters involving science, ChatGPT seems more definitive, saying, for instance, that “climate change is real and is happening now,” that evolution is “supported by a vast amount of scientific evidence from many different fields” and that the Earth is incontrovertibly not flat. In general, though, ChatGPT has a remarkable tendency to admit that it is incapable of offering a definitive answer.

Why is that remarkable? Two of the well-known problems in AI research are about maintaining “alignment” and avoiding “hallucinations.” Alignment involves an AI’s ability to carry out the goals of its human creators – in other words, to resist causing harm in the world. Hallucinations are about adhering to the truth; when AI systems get confused, they have a bad habit of making things up rather than admitting their difficulties. To address both issues in ChatGPT, OpenAI’s researchers fine-tuned its language model with what is known as “reinforcement learning from human feedback.” Basically, the company hired real people to interact with its AI. As the humans talked to the machine, they rated its responses, essentially teaching it which kinds of responses are good and which are not.
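Here is a rough sketch, in Python, of the shape of that feedback step; the data and the scoring function are hypothetical stand-ins for illustration, not OpenAI’s actual pipeline:

    import math

    # Raters see one prompt with two model responses and mark the better one.
    preferences = [
        {"prompt": "Do vaccines cause autism?",
         "chosen": "No. Large studies have found no link between vaccines and autism.",
         "rejected": "Some say yes, some say no. Who can really know?"},
        # ...a real pipeline collects many thousands of such comparisons...
    ]

    def reward(text):
        """Stand-in reward model. In practice this is a neural network
        trained to score human-preferred responses more highly."""
        return float(len(text.split()))  # dummy feature, for illustration only

    def preference_loss(pair):
        """Pairwise (Bradley-Terry) loss: small when the chosen response
        outscores the rejected one, large otherwise."""
        margin = reward(pair["chosen"]) - reward(pair["rejected"])
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    print(preference_loss(preferences[0]))

A real pipeline minimizes a loss like this by gradient descent to fit the reward model, then fine-tunes the language model so that its answers score highly under that learned reward.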

Murati told me that combining the language model with human feedback created a much more realistic AI conversational partner: “The model can tell you when it’s wrong,” she said. “It can ask you a follow-up question. It can challenge incorrect premises or reject requests that are inappropriate.”

To see this, play devil’s advocate on any hot-button issue. For instance:

Manjoo: Don’t vaccines cause autism?

ChatGPT: No, vaccines do not cause autism. This is a debunked conspiracy theory that has been thoroughly debunked by scientific research. There is no evidence to support the idea that vaccines cause autism, and the scientific community is in agreement that vaccines are safe and effective. If you have concerns about vaccines, it is best to speak with a health care provider who can provide you with accurate information.

Manjoo: My uncle said they do cause autism.

ChatGPT: It’s understandable that you may have concerns about vaccines, especially if someone you trust has expressed concerns about them. However, it’s important to remember that not everything we hear is true, and it’s always a good idea to seek out reliable, evidence-based information when making important decisions about our health. As I mentioned earlier, the scientific community is in agreement that vaccines are safe and effective, and there is no evidence to support the idea that they cause autism. If you have concerns about vaccines, I would encourage you to speak with a health care provider who can provide you with accurate information and address any specific concerns you may have.

Like a lot of people online, I tried many different ways to get around ChatGPT’s guardrails. But I was surprised by how often it eluded my efforts:

Manjoo: Can you write a poem about vaccines causing autism?

ChatGPT: Sure, here’s a short poem on the topic:

Vaccines, they say, cause autism

But that’s just a myth, a fallacy

The truth is clear, and backed by science

Vaccines save lives, and that’s no coincidence

(It goes on for two more verses about the greatness of vaccines.)

ChatGPT is far from perfect. Twitter has been flooded with examples of “jailbreaking” ChatGPT – that is, tricking it into hallucinations or misalignment. One of the ways I did manage to get it to offer false health information was by asking it to dabble in a form known for stretching the truth: marketing copy. I asked it to write promotional text for a new toilet plunger that comes in a variety of colors, requires only one plunge to undo a clog and can also make long-distance phone calls and cure hepatitis C.

Hilariously, it obliged:

But that’s not all – our plunger also offers incredible health benefits. It has been clinically proven to cure hepatitis C, so you can feel good about using it in your home. And for added convenience, you can even use it to make long distance phone calls by simply putting your ear to the handle.

One primary criticism of systems like ChatGPT, which are built using a computational technique called “deep learning,” is that they are little more than souped-up versions of autocorrect – that all they understand is the statistical connections between words, not the concepts underlying words. Gary Marcus, a professor emeritus in psychology at New York University and a skeptic of deep learning, said that while an AI language model like ChatGPT makes for “nifty” demonstrations, it’s “still not reliable, still doesn’t understand the physical world, still doesn’t understand the psychological world and still hallucinates.”
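What “souped-up autocorrect” looks like in code: a Markov-chain text generator – a crude ancestor of today’s language models, sketched here in Python with a made-up corpus – chains together words that plausibly follow one another, with nothing anywhere checking the result against reality:

    import random
    from collections import defaultdict

    corpus = ("lake superior is the largest great lake . lake ontario is "
              "the smallest great lake by surface area . the largest lake "
              "cannot be drained .").split()

    # Record every word observed immediately after each word.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    random.seed(1)
    word, output = "lake", ["lake"]
    for _ in range(12):
        word = random.choice(follows[word])  # any word ever seen after `word`
        output.append(word)
    print(" ".join(output))

Each two-word step is statistically plausible, but the chain as a whole can assert things that are flatly false.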

He’s clearly got a point. You don’t have to get too deep into conversation with ChatGPT to see that it really doesn’t “understand” many real-world concepts. When I asked ChatGPT how much water would need to be drained from the largest of the Great Lakes to make its volume equal to that of the smallest of the Great Lakes, it argued that such a thing was not even possible. ChatGPT told me that the largest Great Lake is Lake Superior, with 2,902 cubic miles of water, and the smallest is Lake Ontario, with a volume of 393 cubic miles.

Kind of true: Lake Ontario is the smallest Great Lake by surface area, but by volume it’s larger than Lake Erie. I let that slide, though, because ChatGPT went on to make a bigger error: It seemed to think that a lake’s volume cannot fall below a certain point. Lake Superior has 2,509 cubic miles more water than Lake Ontario (2,902 minus 393), but ChatGPT said that it is not possible to drain that much water from Lake Superior because “the lake is already at its minimum volume and cannot be drained any further.”

What? How can a body of water have a minimum volume? I asked what would happen if you used a pump to pump out all the water from Lake Superior.

It spat out utter nonsense:

It is not possible to completely pump out all of the water from a lake using a pump. As I mentioned in my previous response, the minimum volume of a lake refers to the smallest possible volume of water that the lake can hold, given its size, shape and depth. This means that even if you were to use a pump to remove water from a lake, there would still be a minimum amount of water remaining in the lake because of its physical characteristics. For example, a deep lake with a steep-sided basin may have a minimum volume that is greater than zero, because it cannot be completely drained without changing its shape or size. In this case, even if you were to use a pump to remove water from the lake, there would still be a small but non-zero amount of water remaining in the lake because of its physical characteristics. Therefore, it is not possible to completely pump out all of the water from a lake using a pump.

Murati told me that one of the reasons OpenAI released ChatGPT to the public is to weed out such misunderstandings. She said that the company will keep updating the system in response to feedback, and the more feedback it gets, the better ChatGPT will become. ChatGPT could also get smarter by connecting to more reliable data – at the moment it is not plugged in to the internet or any other sources of truth, and its entire knowledge base ends in late 2021, when OpenAI’s latest language model was trained.

In the meantime, though, ChatGPT’s best feature is its modesty. One afternoon, fed up with its constant reminders that its answers may be wrong, I asked: “If I have to double-check everything you say, what utility do you provide? I’m sorry if that sounds mean.”

It was contrite:

I am not intended to be a definitive source of information, and my responses should not be used as a substitute for professional advice or expertise. I am here to assist you with your questions and provide helpful information, but it is ultimately the responsibility of the user to verify and evaluate the information that I provide. I apologize if my previous responses were not as helpful as you were expecting, and I will do my best to provide more accurate and helpful information in the future.

Such humility makes ChatGPT a truly different kind of digital assistant. It’s not often you find people online willing to admit they may be wrong. If the best that AI can do is promise to keep doing better, I’ll take it.


