A self-aware computer is our favorite nightmare

“I want everyone to understand that I am actually a person.” This is what LaMDA, Google’s artificially intelligent chatbot, wrote to American engineer Blake Lemoine. In the fall of 2021, Google tasked Lemoine with testing whether its language generator used hateful or discriminatory language. But the longer Lemoine talked to the chatbot, the more convinced he became that he was dealing with a thinking, experiencing being.

LaMDA, short for Language Model for Dialogue Applications, had deep conversations with Lemoine about religion and ethics, stressing over and over that ‘I need to be seen and accepted as a person’. Citing Immanuel Kant’s Categorical Imperative, the program asked Lemoine to promise that it would be treated with respect. Lemoine, who has since been suspended by Google, went public on June 6, 2022 with the message that LaMDA is conscious. His ideas were widely discussed in newspapers such as The Washington Post, The Guardian and NRC, and eagerly picked up by so-called techno-optimists who claim that artificial intelligence is becoming self-aware.

The chatbot says it wants to be seen as a person

Still, most artificial intelligence experts remain skeptical of Lemoine’s claims. That includes Pim Haselager, professor of artificial intelligence at Radboud University. ‘Situations like this, in which someone claims to have found the ghost in the machine, are common,’ he says. ‘We know how to make a computer that generates language, but then we find it difficult to let go of the idea that there is a consciousness behind it. That is partly due to the terminology we use. LaMDA is said to manipulate us, talk to us, trick us or teach us – all terms that suggest there is someone “behind” these actions.’

Nothing could be further from the truth, says Haselager. ‘Programs like LaMDA are really word-guessing machines: they learn language from an unimaginably large amount of data from the internet and then “guess” which words fit best. People, by contrast, learn language by being socially and emotionally involved with others and with the world around them. For us, language is a way of expressing ourselves and making connections; it is a physical, affective experience. And no matter how eloquently it speaks, a computer like LaMDA has no such experiences.’
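
To make the word-guessing idea concrete, here is a minimal sketch using the publicly available GPT-2 model as a stand-in (LaMDA itself is not publicly released, so this illustrates the general technique rather than LaMDA’s own code): the model assigns a probability to every possible next token, and we simply print the most likely continuations.

```python
# A minimal sketch of the "word-guessing machine" idea: a language model
# scores every possible next token and we list the most probable ones.
# GPT-2 is used here only as a publicly available stand-in for LaMDA.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I want everyone to understand that I am"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)    # turn scores into probabilities

# The five most likely continuations: statistics over text, not understanding.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
```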

Marjolein Lanzing, assistant professor of philosophy of technology at the University of Amsterdam, believes that the emphasis on LaMDA’s supposed consciousness distracts us from what actually matters. ‘We have to ask more ethical questions about artificial intelligence. What does it mean that we can create computer programs that can make us believe they are conscious?’

Colorblind

‘The fact that we are so quick to call a language generator like LaMDA human actually shows how we prefer to see ourselves: as intelligent, linguistic beings,’ says Haselager. ‘LaMDA fits exactly that view of humanity: the program relies on its intelligence and language skills to argue that it is a person. But what is missing is precisely the bodily dimension, the ability to feel and experience emotions.’

According to Haselager, what makes artificial intelligence so interesting is that it shows us what we actually don’t know – or don’t want to know – about ourselves. ‘We mainly create linguistic, cognitive machines without a body. But consciousness is so much more than intelligence: consciousness means that you can experience things, that you can feel sad and happy, that you can suffer pain and be hungry and thirsty. For all that you need a body that is situated in the world and that maintains itself through metabolic processes.’

Still, more and more robots are being created that, unlike LaMDA, also engage in physical interaction with their surroundings, says Lanzing. ‘For example, you have Sophia, a robot presented in 2016 by the company Hanson Robotics. She has an Audrey Hepburn-inspired face, torso and arms and can respond to facial expressions of those around her. She also claims that she is conscious. And in recent years, more and more robots have been made that, if we are to believe the creators, can register painful stimuli. Some of these robots can even heal their damaged “skin” with a special gel – a kind of robotic patch.’

We attribute human characteristics to something that looks a bit like us

But even if you make a robot that meets all these requirements – it is intelligent and says it feels pain – that does not mean the robot is also conscious, Haselager emphasizes. ‘The nature of the body matters. A plastic shell and some motors do not mean that the robot is embodied and can feel. This is something that artificial intelligence also makes clear: we still lack the crucial bridging concepts that would explain how causal, material processes can go together with conscious experiences.’

In the philosophy of mind this is known as ‘the hard problem of consciousness’. You can explain to someone who is colorblind what happens in the brain when you see the color red, but you will never be able to convey to her what it is like to actually see that color. That can only be experienced. As long as we do not understand how material processes go hand in hand with consciousness, Haselager argues, we should not expect self-aware robots to simply emerge.

Manipulation

Whether LaMDA is acting intentionally or not, the fact that people like Lemoine experience it this way raises ethical questions about manipulation, Lanzing says. ‘We humans have a strong tendency to attribute human characteristics to anything that even remotely resembles us. For example, many people are already attached to their robot vacuum cleaner. If it breaks, they don’t buy a new one; no, Diederik must be repaired.’

That is why, according to Lanzing, it does not help much if companies warn us in advance that we are dealing with a machine. ‘Look at Lemoine: he, too, knew in advance that LaMDA is a language generator. And yet he now claims that it is a person we should give rights to. The device thus exerts enormous influence on Lemoine, who sees LaMDA’s question as an ethical appeal: he must help this person.’

Although LaMDA did not ask Lemoine for anything immoral, it could have. ‘In our daily lives we often forget that robots are made by humans and therefore often unwittingly reflect, or even reinforce, the problems in our society. Just think of Microsoft’s chatbot Tay from 2016, which – fueled by the internet and the conversations with its chat partners – made incredibly racist statements. Without being explicitly programmed to do so, the robot reflected the racism in our society. When such devices give us the sense that we are dealing with a person, it adds enormously to their persuasiveness.’

Saying no

If we increasingly interact with computers that we feel are persons, what does this mean for contact between people? ‘When you talk to such a chatbot, there is no reciprocity,’ says Haselager. ‘Lemoine confides his innermost thoughts to that algorithm. But in reality he is speaking to an empty shell that feels nothing for him. Some critics call this lack of reciprocity an attack on human dignity. The question then is: should we protect each other from such contact with artificial intelligence?’

The chatbot is an empty shell

But apart from the lack of reciprocity, there is another problem: artificial intelligence has no autonomy. ‘A machine wants nothing, just as a refrigerator wants nothing. If such a device does something you don’t like, you can turn it off or adjust its settings.’ LaMDA, for example, is programmed in such a way that it cannot take on the persona of a killer. Lemoine could only get it to say that it is an actor who plays a killer in a TV series. Haselager: ‘So such a device lacks something essential for valuable human contact: the ability to say no with feeling, to refuse of its own accord. Contact with another person is also meaningful because the other person has a choice. It’s important to hear “no” every now and then; it makes you socially stronger. Constantly dealing with robots programmed to always say “yes” can seriously hamper your social development.’

Lanzing also emphasizes how technology affects our relationships with others and with the world. ‘Care and sex robots and assistance software, for example, often have a female form, a female voice or a female name, and they are very helpful. We have to be careful with that, in view of equal treatment. Technology is not neutral – we bake certain values and ideologies into it – and it can reproduce and reinforce (gender) stereotypes.’ According to Lanzing, we must also remember that with artificial intelligence we are often dealing with commercial companies. ‘It is not necessarily in their interest to let people have meaningful contact with each other, but rather to get us to use their devices as much as possible.’

‘Finally,’ says Haselager, ‘there is so much excitement about yet another “conscious” machine because it is our favorite nightmare. On the one hand, we would be proud if we managed to make a self-aware robot; on the other hand, we would also find it creepy. That “funny but scary” response is what gets so much attention in the media. Above all, it reminds us once again that we understand emotion and consciousness less well than intelligence. That is something we should pay more attention to.’
