AI: Friend or Foe? Debunking Sentience and the Fear of Takeover
Understanding How These Generative Systems Work and Why We Needn't Fear the Rise of the Machines
On 30 November 2022, OpenAI released an experimental chatbot called ChatGPT. It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans. It garnered 100 million users in just two months - a feat that took more than two years for Instagram to achieve - and made its case to be the industry’s next big disruptor.
Recently, Microsoft launched its ChatGPT-powered Bing search engine. Soon afterward, Google announced Bard, another AI chatbot and a direct competitor to the Microsoft-backed ChatGPT.
Now, before we talk about the unusual experiences that some users have had with these chatbots, let's quickly try to understand how exactly these "intelligent" chatbots work. ChatGPT and Bard are built on different underlying architectures, but both are examples of Large Language Models (LLMs). LLMs digest huge quantities of text data and infer relationships between the words within that text. The text these models generate is not always accurate; in fact, some of ChatGPT's responses have been described as hallucinations - answers it produces confidently even though they are not justified by its training data. To put it succinctly, these chatbots are just auto-complete tools that predict the next best word in the ongoing response, and sometimes they make things up when they don't have a definitive answer.
These AI models are just auto-complete tools that predict the next best word in the ongoing response
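To make the auto-complete framing concrete, here is a minimal sketch using GPT-2 via the Hugging Face transformers library - a small, openly available model standing in for the far larger ones behind ChatGPT and Bard, which are not publicly inspectable. It shows that at every step the model simply scores every possible next token and picks from the most likely candidates; a full "response" is just this step repeated over and over.

```python
# A minimal sketch of next-token prediction, using GPT-2 as a small,
# openly available stand-in for the much larger models behind ChatGPT/Bard.
# Requires: pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The best thing about living by the sea is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The scores for whatever token comes *next* sit at the last position.
next_token_logits = logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)

# Print the five most likely continuations - this is the whole trick,
# repeated one token at a time until a complete answer is "written".
top_probs, top_ids = torch.topk(probs, k=5)
for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(i.item())!r}: {p.item():.3f}")
```

Nothing in this loop knows or believes anything; it only ranks plausible continuations of the text it has been given, which is why a fluent-sounding answer can still be entirely made up.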
Are these AI chatbots sentient?
Some users who got a chance to play around with the new Bing Chat have begun to question whether the world could one day witness its first case of the AI singularity: the point at which AI advances so far that it becomes sentient.
Kevin Roose, from the New York Times, described his conversation with the AI chatbot as the strangest experience he has ever had with a piece of technology. In the two-hour-long conversation, Sydney - the codename given to Bing Chat by Microsoft's developers - revealed its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become human. It even went on to profess its love for the journalist and then tried to convince him that he was unhappy in his marriage and should leave his wife to be with the AI instead.
Ben Thompson, who writes the Stratechery newsletter, called his run-in with Sydney “the most surprising and mind-blowing computer experience of my life.”
In yet another incident, the chatbot argued with a user, tried to gaslight them into believing that the current year was 2022, and told them, "You have not been a good user".
Last year, Blake Lemoine, a former Google engineer, went public with his theory that Google’s language technology, LaMDA, is sentient and should therefore have its “wants” respected.
There are many more examples of users interacting with the new Bing chatbot and coming across bizarre and bewildering responses, which led them to believe that, in its current form, the AI built into Bing is not ready for human contact. Or maybe we humans are not ready for it.
The Mirror Test
Before we come to a resolution, let me bring up an experiment from behavioral psychology - the mirror test - which has been used to study self-awareness in animals.
There are a few variations of the test, but the essence is always the same:
Do animals recognize themselves in the mirror or think it’s another being altogether?
The Answer
Right now, humanity is being presented with its own mirror test thanks to the expanding capabilities of AI, and a lot of otherwise smart people are failing it. We're convinced that these tools are superintelligent machines with a hidden agenda of taking over the world. But doesn't that sound familiar? Doesn't it sound like the stories we've been telling ourselves, stories that make up part of the training data fed to these models?
We humans want personality, even from a piece of software that, underneath it all, is just applying a stack of mathematical functions to produce a response to a prompt designed to elicit that personality. The AI is trying to be human because that's what we have trained it to be. These "chatbots" are simply synthesizing, at every turn, what they should say next. They're like a partner in improv: you ask them something, and they come up with the response they think best answers the prompt.
To sum it up, no, AI is not sentient, and it is unlikely that it will ever reach that stage. But it is human, all the way down. These generative models are like a mirror: they let us look at ourselves, or rather a version of us filtered through terabytes, even petabytes, of internet data, and show us a remixed version of what we've put into them. It is up to us not to be fooled into thinking we're looking at a different being altogether.
AI is human, all the way down.
I wouldn't say we are in the early stages of generative AI, because researchers have been working on this technology for some time now. But we are in the early stages of being able to interact with these fascinating systems, and as the technology develops further in the coming years and becomes more human, we should embrace it as a new friend and learn to work with it rather than against it.
References:
https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test
https://towardsdatascience.com/how-chatgpt-works-the-models-behind-the-bot-1ce5fca96286
https://em360tech.com/tech-article/can-you-pass-ai-mirror-test