David Deutsch on the risks of AI

In conversation with David Deutsch

What will the quantum computer mean for artificial intelligence?

There is currently a lot of talk about ongoing digitization, especially about AI and the role of that technology in society. The discussion regularly turns to quantum computers and quantum algorithms as well. What will the quantum computer mean for artificial intelligence?

‘There are certain functions that quantum algorithms can perform vastly more efficiently than any classical algorithm. But for the moment I think their use will be limited to special purposes, the development of medicines or games, for example. That may be important, but I don’t expect quantum algorithms to play a central role in AI or artificial general intelligence, AGI.’

‘AI must be obedient; it must do what it is programmed to do. Whereas a human is fundamentally disobedient.’

Why not?

‘To explain, it helps to start from the fact that AI and AGI are not only different from each other, but almost opposites. The AIs we know are AIs that, for example, diagnose diseases, play chess or run huge factories. They are given objective functions that they are designed to maximize. The AI must obey: it must do the things it is programmed to do. They can do that better than humans, because humans are fundamentally disobedient, especially people who are creative. When a human plays chess, he or she calculates the moves completely differently than a chess computer does. Where the computer is able to look at billions of possibilities, a human is limited to a few hundred. Another difference is that a human is able to explain something. He or she can write a book afterwards, for example about how he or she became world champion. But a computer program that beats that world champion cannot write such a book, because it does not know what it has done. It just followed a series of programmed rules.’
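To make that contrast concrete, here is a minimal illustrative sketch (my own, not an example from the conversation) of the ‘obedient’ pattern described above: a program that mechanically enumerates every possibility and scores each one with a fixed, pre-programmed rule. For brevity it plays a toy take-away game rather than chess, but the search pattern is the same in spirit.

```python
# Illustrative sketch only (not from the interview): exhaustive minimax search
# over a toy game. The program "plays" by enumerating every continuation and
# scoring it with a fixed, programmer-supplied rule -- it cannot do otherwise.

def minimax(pile, maximizing):
    """Toy Nim: players alternately remove 1-3 stones; taking the last stone wins.
    Returns +1 if the maximizing player wins with best play, else -1."""
    if pile == 0:
        # Fixed scoring rule: the player who just moved took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2, 3) if take <= pile]   # every possible move
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move whose exhaustive-search score is best for the mover."""
    return max((take for take in (1, 2, 3) if take <= pile),
               key=lambda take: minimax(pile - take, maximizing=False))

print(best_move(10))  # with 10 stones the search recommends taking 2
```

The point of the sketch is only that every ‘decision’ here is the mechanical consequence of rules the programmer wrote down in advance; nothing in it resembles a player explaining afterwards why a move was good.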

‘The AGI program must be able to give unexpected answers’

And an AGI? How does it differ from AI?

‘We expect an AGI to behave in a way that cannot be specified in advance. Because if you could specify it, you would already have the answer. The AGI program must therefore be able to provide unexpected answers. Answers to questions we didn’t even know how to ask.’

Are people thinking fast enough?

Research has shown that people process information at a cognitive level at a rate of about 50 bits per second. That is not much. Is that a limitation of human thinking? Is that why we play chess completely differently than computers do, and also solve completely different problems?

‘Processes in the human brain run somewhat in parallel, so I don’t think it is as slow as 50 bits per second. But indeed, it is much less than a billion bits per second. We humans are therefore not able to see and tick off all the possibilities. But you don’t have to, because we are able to arrive at the right insight through understanding. A chess player looks at the chessboard, tries to understand the situation and then speculates. I am a follower of the philosopher Karl Popper. He taught us that science, and thinking in general, is a matter of speculation and criticism. But no computer program can do that at the moment.’

Do you think we can get explainable AI at a level where people can understand that explanation?

This is in line with the current discussion of ‘explainable AI’. Many people believe that if we develop AI for serious high-risk applications, then that AI should be able to explain itself. Do you think we can get explainable AI at a level where people can understand that explanation?

‘Yes, but I don’t think this is a step in the direction of AGI. On the contrary, it is just another step in the other direction. For suppose a computer program could say that it has diagnosed cancer, indicate which X-ray image it used and explain what it detected in that image. That can certainly be done in a way that we as humans can understand. But so far, an AI cannot indicate that it thinks there is a new disease, and also explain the development of that disease, as a human doctor would. AI cannot create anything new. It can arrive at new implications, but it does so on the basis of existing knowledge that has been put into it.’
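As a hedged aside (my own illustration, not something discussed in the interview): one common way a classifier can ‘indicate what it detected in the image’ is occlusion sensitivity, in which each region is blanked out in turn and the drop in the model’s score marks the regions it relied on. The stand-in ‘model’ below is hypothetical.

```python
# Illustrative sketch only: occlusion-sensitivity map for an image classifier.
# Large values in the returned heat map mark regions the score depends on.
import numpy as np

def occlusion_map(image, score_fn, patch=8):
    """For each patch-sized region, measure how much the model's confidence
    drops when that region is blanked out. `score_fn` is any function that
    maps a 2-D image array to a scalar confidence."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # blank out one region
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy usage with a stand-in "model" that simply scores one bright corner:
demo = np.zeros((32, 32))
demo[:8, :8] = 1.0
print(occlusion_map(demo, score_fn=lambda img: img[:8, :8].sum()))
```

Such a map answers ‘where did the score come from’ in terms humans can inspect, but, in line with the point above, it does not generate any new explanatory theory about the disease itself.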

Speaking of creating new knowledge: there are examples where AI has discovered new antibiotics.

‘I think that happens by matching chemical properties with the properties of cells or bacteria. It is not really explanatory knowledge.’

And there are several examples of how AI is able to reduce complex patterns to laws. Can we call it explanatory knowledge?

‘No. Although it appears that way, on closer inspection you will find that knowledge about those situations has been pre-programmed as well, perhaps not consciously. It is very easy for a programmer to put knowledge in, for example about which data is relevant and which is not. In this way, you are already pointing the program towards an answer. But physicists don’t do that when they discover a new law. They discover the concepts behind the unknown.’
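A small illustration of that point (my sketch; the data and variable names are hypothetical): the programmer’s decision about which columns of the data count as relevant is itself knowledge put into the program before any fitting happens, and it frames whatever ‘law’ the fit can find.

```python
# Illustrative sketch only: the programmer's knowledge enters before any
# learning happens, simply by declaring which data counts as relevant.
import numpy as np

# Hypothetical raw measurements: each row is (temperature, pressure, day_of_week).
raw = np.array([
    [300.0, 101.3, 2],
    [310.0, 105.0, 5],
    [320.0, 108.9, 1],
])

# The "relevance" decision is pre-programmed, not discovered by the machine:
RELEVANT_COLUMNS = [0, 1]            # keep temperature and pressure
features = raw[:, RELEVANT_COLUMNS]  # day_of_week is declared irrelevant here

# A least-squares fit then "finds" a linear relation between pressure and
# temperature, but only within the frame the programmer already set up.
slope, intercept = np.polyfit(features[:, 0], features[:, 1], deg=1)
print(f"pressure ≈ {slope:.3f} * temperature + {intercept:.1f}")
```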

Many people say that AI should show moral behavior. Do you think AI can learn by interacting with humans?

‘The same applies here: the real content of the moral behavior that the AI performs will have been put in by the programmer. It’s a bit like training a dog. A dog cannot create explanatory knowledge either. A dog is trained to follow certain moral patterns. And it’s actually fantastic that dogs can be trained that way. Machines can arguably do this even better than dogs. But merely obeying rules is not moral behavior. Such a machine cannot make moral judgments itself; it can only carry out the moral judgments of others. Like a slave.’

Do you want these systems to be slaves? Or do you want them to disobey and do whatever they want in those roles?

And what about self-driving cars, for example, or autonomous responses to cyber attacks, which is also intelligent software? Do you want these systems to be slaves? Or do you want them to disobey and do whatever they want in those roles?

‘In the case of self-driving cars, we could look at guide dogs for the blind. They must also assess traffic and people. They must assess certain threats and ignore other signals. And such a guide dog does all that better than a human. A person is much less committed to such a task, so sooner or later he or she will make a mistake. It’s amazing what guide dogs can do. It’s a harder problem for self-driving cars, but it’s pretty much the same problem.’

‘The possibilities are endless. Every year I am amazed by the power of AIs’

What advice do you have for people who want to make AI applications responsibly?

‘The possibilities are endless. Every year I am amazed by the power of AIs. And I don’t think the risks are apocalyptic. The risks are similar to the risks of any new technology. The first time a steam locomotive was demonstrated to the public, an MP was killed. People were not used to something that could pass other objects so quickly. And fast back then meant 15 miles an hour. So yes, we need to be careful with AI. It is important to realize that AIs are not perfect, and that they are not completely governed by rules, simply because we don’t know what rules to give them. We must be aware that these are applications that look as if they do not need supervision, while it is very important, especially at the beginning, to supervise them properly and intervene immediately when errors occur. The first version of an AI will not be as good as the tenth. But when it is mature, such an AI will make fewer mistakes than humans, and we can trust it.’
