‘There is always something missing in AI’

Can you teach a computer not to lie unless the Gestapo is at the door? And how does a machine know it is not dealing with ‘ordinary’ police officers – the ones who, in democratic societies, are said to be your best friend?

Well, it should still be possible, says professor of artificial intelligence Jan Broersen. It is actually a fairly simple example of an exception that overrides a general rule. “There are already formal reasoning systems that can handle rules which, in certain situations, must also be breakable. There are far more complicated examples, whole systems of interacting rules. But in theory, all of that can be programmed.”
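The kind of exception-handling Broersen describes can be sketched in a few lines. The following is a minimal illustration only – the rule names and the priority scheme are invented here, not taken from his work: a general prohibition on lying is overridden by a higher-priority rule when the asker is the Gestapo.

```python
# Illustrative only: a general rule overridden by a higher-priority exception.
# Each rule is (name, condition, verdict, priority); the highest-priority
# applicable rule decides.
RULES = [
    ("do_not_lie",         lambda s: True,                         "tell_truth", 0),
    ("protect_the_hidden", lambda s: s.get("asker") == "gestapo",  "lie",        1),
]

def verdict(situation):
    """Verdict of the highest-priority rule whose condition holds."""
    applicable = [r for r in RULES if r[1](situation)]
    return max(applicable, key=lambda r: r[3])[2]

print(verdict({"asker": "friend"}))   # tell_truth: the general rule applies
print(verdict({"asker": "gestapo"}))  # lie: the exception overrides it
```

The point of the sketch is that “breaking a rule” need not be an anomaly: it is itself rule-governed, which is what makes it programmable.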

The extent to which this enables ‘moral’ computers – automated systems that learn to make their own choices, and to take responsibility for them – is the focus of Broersen’s research. He gave his inaugural lecture at Utrecht University at the end of March. Trained as a mathematician in Delft, he will use logic to try to develop a moral calculus for AI systems.

Such ‘deontic logic’ (from the Greek deon: that which is fitting or obligatory) is badly needed, he believes, in a society that makes ever greater use of artificial intelligence. “Everyone talks about AI and ethics, but hardly anyone does anything about it. At best it is treated as a social or legal issue. Do we really want self-driving cars on the road? How do we regulate that legally? When are the manufacturers responsible?”

All very important, says Broersen, but why not also ask whether moral responsibility itself can be programmed into AI systems? “I want to know: how do we operationalize existing ethical theories and put them into a machine, so that it knows how to handle situations where moral considerations matter?”

Suppose you can save the ten people by pushing another person onto the rails

Explain: how do you put ethics into machines?

“It starts, of course, with the very different moral theories. Philosophers have been working on these for ages. Some theories are easier to imagine automating than others. Take utilitarianism: roughly speaking, the view that you should do whatever brings the greatest benefit or happiness to as many people as possible. That is a fairly quantitative approach, which you can capture in an application relatively easily. It also fits how computer scientists already think about intelligence: as a procedure for choosing actions from a range of options, given a clear goal.”
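The utilitarian procedure he describes – choose the action with the best aggregate outcome – is indeed easy to render as code. A toy illustration, with invented options and welfare numbers:

```python
# Illustrative only: utilitarian action choice as a maximization procedure.
def expected_welfare(outcome):
    """Sum the welfare changes an outcome causes, one entry per person."""
    return sum(outcome.values())

def choose_action(options):
    """Pick the action whose predicted outcome maximizes total welfare."""
    return max(options, key=lambda a: expected_welfare(options[a]))

options = {
    "do_nothing": {f"person_{i}": -10 for i in range(10)},  # ten people harmed
    "divert":     {"bystander": -10},                       # one person harmed
}
print(choose_action(options))  # divert  (total -10 beats total -100)
```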

Then the ‘trolley problem’ arises. Do you let a runaway trolley run into ten people, killing them, or do you pull the switch to divert it onto another track where only one person is killed?

“Yes, and there are many variants. They show that for many people utilitarianism is an unsatisfactory moral theory. Is morality really such a simple calculation: ten victims or one? People also distinguish between doing nothing and actively intervening, such as pulling the switch. That has implications for how we assign responsibility. And suppose you can save the ten people by pushing another person onto the rails. That is something else entirely. We sense that intuitively – and no, a computer does not.”
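One way to see why this feels unsatisfying: in a naive outcome-only calculation (sketched below with invented cases), pulling the switch and pushing a person come out identical, because only the victim count enters the score.

```python
# Illustrative only: a pure body-count calculation cannot distinguish
# diverting the trolley from pushing a person onto the rails.
def utilitarian_score(victims):
    return -len(victims)

def best_action(case):
    """Pick the action with the fewest victims, nothing else."""
    return max(case, key=lambda a: utilitarian_score(case[a]))

ten = [f"person_{i}" for i in range(10)]
switch_case = {"do_nothing": ten, "pull_switch": ["bystander"]}
push_case   = {"do_nothing": ten, "push_person": ["bystander"]}

# In both cases the one-victim action wins by the same margin; the
# doing/allowing distinction people feel never appears in the numbers.
print(best_action(switch_case), best_action(push_case))
```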

Rules provide direction, but one must also be able to break them depending on the context

So what is possible?

“The moral calculus I advocate does not think primarily in terms of desired outcomes – such as the minimum number of victims – but in terms of following rules. Moral behavior is rule-based behavior. Which rules apply in a given situation, and which rule should take precedence over another? That is also how we teach children to behave. Rules provide direction, but you must also be able to break them depending on the context. It must be possible to operationalize this in a formal system.”
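Context-dependent precedence of this kind can be given formal shape. In the toy example below, the norms and the ordering by specificity are illustrative assumptions, not Broersen’s actual calculus: the most specific applicable norm is the one in force.

```python
# Illustrative only: several norms interact; the most specific applicable
# norm (the one with the most matching conditions) takes precedence.
NORMS = [
    ({},                                                "keep_promise"),
    ({"emergency": True},                               "help_first"),
    ({"emergency": True, "helping_causes_harm": True},  "keep_promise"),
]

def in_force(context):
    """Action prescribed by the most specific norm whose conditions hold."""
    matching = [(cond, act) for cond, act in NORMS
                if all(context.get(k) == v for k, v in cond.items())]
    return max(matching, key=lambda n: len(n[0]))[1]

print(in_force({}))                   # keep_promise: the default
print(in_force({"emergency": True}))  # help_first: the emergency overrides it
print(in_force({"emergency": True, "helping_causes_harm": True}))
# keep_promise again: a still more specific norm restores the default
```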

Yet in your inaugural lecture you are skeptical about the possibility of strong AI, machines with ‘real’ human intelligence. Why? You cite Wittgenstein in support. But he also says: we follow rules blindly, without thinking about it. A computer can do that too, right?

“I still think something is missing, namely the source of morality. For us, that source is reason and community; both make us human. We learn and test our moral insights and intuitions against one another; we interpret rules, we add nuance to them. Machines do not have that. You can program a great deal, but not such moral intuitions.”

Can they not develop those themselves through learning?

“No, in my view such a system will always lack something. We are still the ones who decide how a rule should be interpreted. In the end, a machine does nothing but follow the instructions we have put into it. That also holds for the machines we use now. It cannot be ruled out that it will one day be possible. Take quantum computing: automation informed by quantum mechanics. If we come to understand intelligence and moral choice in quantum terms, it will be a different story. But that is highly speculative; the field is wide open and we still understand very little of it. Personally I am a non-determinist: I believe reality is not completely determined by natural law. But I do believe we are ultimately machines. Just not the kind of machines we now call computers.”

That car does not understand what to do when fellow road users honk their horns

More practically: how can ethical logic help the tax authorities prevent a new benefits scandal?

“I do not think that had much to do with AI. Statistical correlations were simply drawn in a way we do not find desirable. You feed in cases, ‘fraud yes/no’, and then such a computer starts to learn. It looks for connections between characteristics of people or files, and you can no longer turn that off. You can also do it differently and set rules in advance about how a computer may search. Then you can program it so that it weighs certain characteristics differently or leaves them out altogether. As it stands, you do not know how the system learns – nor can you correct it. Well, afterwards, in the House of Representatives.”
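The alternative he sketches – rules set in advance about what the system may search over – can be illustrated minimally: strip the disallowed characteristics before any learning takes place. The field names below are invented, not taken from any real tax system.

```python
# Illustrative only: fix in advance which characteristics a learner may
# weigh, instead of letting it correlate on anything it finds.
DISALLOWED = {"nationality", "second_nationality", "postcode"}

def filter_features(case):
    """Drop the features the rules forbid the system to use."""
    return {k: v for k, v in case.items() if k not in DISALLOWED}

case = {"income": 30000, "nationality": "X", "postcode": "1234", "claims": 2}
print(filter_features(case))  # {'income': 30000, 'claims': 2}
```

With such a filter in place, the question of which correlations are permissible is answered before training rather than afterwards in parliament.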

The self-driving car has eyes, a memory, can make choices. Is that not strong AI?

“No, I do not think so. That car cannot have emotions or see meaning the way we do, not with our current computers. It will not understand what to do when fellow road users start reacting to a sign on it that says ‘honk if you’re happy’, as you see in America. You can of course program something into it, but the behavior will still be different. Afterwards, yes – you can teach it that afterwards.”
