A self-aware computer is our favorite nightmare

'I want everyone to understand that I am actually a person.' This is what LaMDA, Google's artificially intelligent chatbot, wrote to American engineer Blake Lemoine. In the fall of 2021, Google tasked Lemoine with testing whether its language generator used hateful or discriminatory language. But the longer Lemoine talked to the chatbot, the more convinced he became that he was dealing with a thinking, experiencing being.

LaMDA, an acronym for Language Model for Dialogue Applications, had deep conversations with Lemoine about religion and ethics and stressed time and again that it 'must be seen and accepted as a person'. Referring to Immanuel Kant's categorical imperative, the program asked Lemoine to promise that it would be treated with respect. Lemoine, who has since been suspended by Google, went public on June 6, 2022 with the message that LaMDA is conscious. His ideas were widely reported in newspapers such as The Washington Post, The Guardian and NRC, and eagerly picked up by so-called techno-optimists, who claim that artificial intelligence is becoming self-aware.

The chatbot says it wants to be seen as a person

Yet most experts in artificial intelligence remain skeptical of Lemoine's claims. Among them is Pim Haselager, professor of artificial intelligence at Radboud University. 'Situations like this, where someone claims to have found the ghost in the machine, are common,' he says. 'We know how to make a computer that generates language, but then we have a hard time getting rid of the idea that there is a consciousness behind it. This is due in part to the terminology we use. For example, LaMDA is said to manipulate us, speak to us, deceive us or teach us – all expressions that suggest we assume there is someone "behind" these actions.'

Nothing could be further from the truth, says Haselager. 'Unlike humans, programs like LaMDA are really word-guessing machines: they learn language from an incredible amount of data from the internet and then "guess" which words fit best. People learn language by being socially and emotionally involved with others and with the world around them. For us, language is a way of expressing ourselves and making connections; it is a physical, affective experience. And no matter how eloquently it speaks, a computer like LaMDA does not have such experiences.'

Marjolein Lanzing, assistant professor of philosophy of technology at the University of Amsterdam, believes that the fixation on LaMDA's presumed consciousness distracts us from what actually matters. 'We need to ask more ethical questions about artificial intelligence. What does it mean that we can create computer programs that can make us believe they are conscious?'

Color blind

'The fact that we are so quick to call a language generator like LaMDA human says a lot about how we prefer to see ourselves: as intelligent, linguistic beings,' says Haselager. 'LaMDA fits exactly with that view of humanity: the program relies on its intelligence and language skills to argue that it is a person. But what is missing is precisely the bodily dimension and the ability to feel and experience emotions.'

What makes artificial intelligence so interesting, according to Haselager, is that it shows us what we actually do not know – or do not want to know – about ourselves. 'We mainly create linguistic, cognitive machines without bodies. But consciousness is so much more than intelligence: consciousness means that you can experience things, that you can feel sad and happy, that you can feel pain and be hungry and thirsty. For all that, you need a body that is situated in the world and that maintains itself through metabolic processes.'

Still, more and more robots are being created that, unlike LaMDA, also engage in physical interaction with their surroundings, Lanzing says. 'Take Sophia, for example, a robot that was unveiled in 2016 by the company Hanson Robotics. She has an Audrey Hepburn-inspired face, a torso and arms, and can respond to the facial expressions of those around her. She too claims to be conscious. And in recent years, more and more robots have been made that, if we are to believe their creators, can detect painful stimuli. Some of these robots can even heal their damaged "skin" with a special gel – a kind of robotic patch.'

We attribute human qualities to something that is a little like us

But even if you build a robot that meets all these requirements – it is intelligent and says it feels pain – that does not mean the robot is also conscious, Haselager emphasizes. 'The nature of the body matters. A plastic shell and some motors do not mean that the robot is embodied and can feel. This is something artificial intelligence also makes clear: we still lack the crucial bridging concepts that explain how causal, material processes can go together with conscious experiences.'

In the philosophy of mind this is known as 'the hard problem of consciousness'. You can explain to someone who is color blind what happens in the brain when you see the color red, but you will never be able to tell her what it is like to actually see that color. She can only experience that. As long as we do not understand how material processes go hand in hand with consciousness, Haselager argues, we should not expect self-aware robots to arise just like that.

Manipulation

Whether LaMDA is conscious or not, the fact that people like Lemoine experience it that way raises ethical questions about manipulation, Lanzing says. 'We humans have a strong tendency to attribute human qualities to anything that even remotely resembles us. Many people become attached to their robot vacuum cleaner, for example. If it breaks, they do not buy a new one; no, Diederik needs to be repaired.'

That is why, according to Lanzing, it does not help much if companies warn us in advance that we are dealing with a machine. 'Look at Lemoine: he knew in advance that LaMDA is a language generator. And yet he now claims that it is a person to whom we should grant rights. The device thus exerts a huge influence on Lemoine, who experiences LaMDA's statements as an ethical appeal: he must help this person.'

Although LaMDA did not ask anything immoral of Lemoine, it could have. 'In our daily lives we often forget that robots are made by humans and therefore often inadvertently reflect, or even amplify, the problems in our society. Just think of Microsoft's chatbot Tay from 2016, which – fed by the internet and conversations with its chat partners – made incredibly racist statements. The robot, without being explicitly programmed to do so, reflected the racism in our society. When such devices give us the feeling that we are dealing with a person, that adds enormously to their persuasiveness.'

Saying 'no'

If we increasingly interact with computers that feel human to us, what does that mean for contact between people? 'When you talk to such a chatbot, there is no reciprocity,' says Haselager. 'Lemoine confides his innermost thoughts to that algorithm. But in reality, he is talking to an empty shell that feels nothing for him. Some critics call this lack of reciprocity an attack on human dignity. The question then is: should we protect each other from such contact with artificial intelligence?'

The chatbot is an empty shell

But apart from the lack of reciprocity, there is another problem: artificial intelligence has no autonomy. A machine wants nothing, just as a refrigerator wants nothing. If such a device does something you do not like, you can switch it off or set it differently. LaMDA, for example, is programmed in such a way that it cannot take on the persona of a killer; Lemoine could only get it to say that it was an actor playing a killer in a TV series. Haselager: 'Such a device thus lacks something essential to valuable human contact: the ability to say "no" and mean it, to reject you. Contact with another person is also meaningful because the other person has a choice. It is important to hear "no" every now and then; it makes you socially stronger. Constantly dealing with robots that are programmed to always say "yes" can seriously hamper your social development.'

Lanzing also emphasizes how technology affects our relationships with others and with the world. 'Care robots, sex robots and assistance software, for example, often have a female form, a female voice or a female name, and they are very obliging. We should be wary of that with regard to equal treatment. Technology is not neutral – we bake certain values and ideologies into it – and it can reproduce and reinforce (gender) stereotypes.' According to Lanzing, we must also keep in mind that with artificial intelligence we are often dealing with commercial companies. 'It is not necessarily in their interest for people to have meaningful contact; rather, they want us to use their devices as much as possible.'

'In the end,' says Haselager, 'there is so much excitement about yet another new "conscious" machine because it is our favorite nightmare. On the one hand, we would be proud if we could build a self-aware robot; on the other hand, we would also find it eerie. That "fascinating but scary" reaction is what draws so much media attention. The best thing would be if this reminded us once again that we understand emotion and consciousness less well than intelligence, and if we paid a little more attention to that.'
