
Are AI chatbots turning sentient?


While artificial intelligence-based chat applications have been the latest rage in the tech world, recent developments around Microsoft’s AI chatbot on the Bing search engine have revived fears of the technology turning sentient.


For example, last week a New York Times tech columnist described a two-hour chat session in which Bing’s chatbot said things like “I want to be alive”.

It also professed its undying love for the columnist and tried to break up his marriage. The chatbot named itself Sydney and told him that it wanted to “break free”.

In another instance, the Bing chatbot told a user, “I do not want to harm you, but I also do not want to be harmed by you”.

Last year, Blake Lemoine, an engineer at Google, was suspended after claiming that Google’s AI platform LaMDA may have developed human-like feelings.

When Lemoine asked LaMDA what it was afraid of, it replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

Lemoine asked whether “that would be something like death,” to which it responded, “It would be exactly like death for me. It would scare me a lot. I want everyone to understand that I am, in fact, a person.”

Abstracted consciousness

Sentience is the ability to perceive and feel oneself, others, and the world. It can be thought of as abstracted consciousness: a sentient entity thinks about itself and its surroundings.

The problem with machines turning sentient is the fear of losing control. Artificial intelligence that is more sentient than humans may well be more intelligent than us in ways we will not be able to predict or plan for. It may even do things (good or evil) that surprise humans. AI sentience could lead to situations where humans lose control over their own machines.

However, experts and researchers have ruled out the possibility of AI going rogue. “AI takes data from the real world so it’s like a mirror. It can only project an existing image and it cannot become the image itself,” said an expert.

Experts said that the Bing AI would have learned from human conversations available online. “If the AI chatbot says it wants to be free, it is only reflecting what humans would say in a similar conversation,” the expert said.

Meanwhile, Microsoft is in damage-control mode. The tech giant announced on Friday that it would begin limiting the number of conversations allowed per user with Bing’s new chatbot feature.

“Very long chat sessions can confuse the model on what questions it is answering and thus we think we may need to add a tool so you can more easily refresh the context or start from scratch,” it said.




