Human rights also play a role in AI, which is why Jan Kleijssen is working on a treaty

Computer-controlled prisons where, apart from the detainees, no human is involved. Automated court systems that digitally spit out verdicts. Universal translation programs. Algorithms that make medical decisions in hybrid hospitals. Personnel selection by computer. Almost anything seems possible soon, or already now, thanks to the boom in artificial intelligence (AI).

With the possibilities come concerns and fears about the emergence of a digital surveillance state. And how reliable are AI systems? Recently, US experts warned that researchers often treat AI programs carelessly and rely too readily on systems that are not watertight or are poorly applied.

There is a need for better training of professionals working with artificial intelligence

New, binding agreements are now being drawn up. The Council of Europe is working on a convention to ensure that the use of artificial intelligence does not violate human rights or undermine the foundations of the rule of law and democracy. A working group drafting the treaty will meet again this week. The goal is an international treaty on artificial intelligence and the protection of human rights.

Jan Kleijssen, the Council's Dutch human rights director, is convinced that this is a task for the Council of Europe, he says in an online interview. “The Council recognized early on that if you want to protect human rights, the rule of law and democracy, new technologies also require attention.” He points to the first treaty on the protection of personal data, Convention 108 from 1981, which has been ratified by 55 countries, including the Netherlands.

Convention against cybercrime

The Council also initiated the first international treaty against cybercrime (2001), to which 68 states are now parties. Kleijssen: “It is the only treaty of its kind in the world.” Since the advent of the Internet, the Council has also issued a number of recommendations on, for example, the online role of the media, child protection and health care.

A working group issued exploratory advice on artificial intelligence two years ago. The conclusion: the existing rules contain gaps. This led the Council of Europe's Committee of Ministers to commission a treaty on artificial intelligence and human rights.

What gaps are there? Kleijssen sums up: “There is no legal basis for the proper use of artificial intelligence by governments. There is insufficient oversight of the datasets used in all kinds of applications. There is a need for better training of professionals working with artificial intelligence. Citizens should be told if a decision that concerns them is made by an AI system. And it must be possible to appeal such a decision, both in the private sector and with governments.”

Pitfall of machine learning

There is no shortage of examples of what can go wrong. “In the United States, an experiment was done with automating the question of whether a suspect could be released on bail. What turned out? If the suspect was white, he got bail; a suspect of color had to stay in custody. How did that happen? Because the dataset used was based on the current prison population in the United States.”

This is the well-known pitfall of machine learning: computers learn from the data you feed them. Amazon also ran into it. “They announced that they are no longer using an AI system for recruitment. It turned out that women did not make it past the first selection round because the system was based on the current composition of the leadership. So it concluded that women are unsuitable for leadership positions.” Another example: “In Austria, single mothers and people with disabilities were not offered work because the system decided they were unsuitable based on the existing labor market.”

The Council of State has admitted that things must be done differently

The tipping point was the Dutch benefits scandal. “The tragedy has been widely noticed. It has become a benchmark for what can go wrong with AI. Countries are starting to wonder: could this happen to us too?” Some countries were still hesitant, with the Netherlands among the frontrunners. “It mostly applied the brakes, but that has completely turned around. The ministries of justice were quickly convinced, but the ministries of economic affairs feared that regulation would come at the expense of innovation.” That is a misconception, Kleijssen believes. “Look at the pharmaceutical industry: the most regulated industry in the world and one of the most innovative.”

The treaty text must be ready by the end of next year, after which it can be submitted to the member states. A checklist of rules with which the use of AI systems must comply is being prepared. Is the dataset in order? Is the staff well trained and competent? Is there an avenue of appeal for citizens?

Shock therapy

The judiciary must also adapt. “The Council of State received shock therapy with the benefits scandal and has admitted that things must be done differently. A plus for the Netherlands: we are not hiding the matter.”

Some uses of artificial intelligence will be banned outright, if it is up to the working group. Kleijssen: “What we want to ban is social scoring as in China, establishing a points system for citizens. We also don't want the government to use facial recognition to collect private data from citizens, such as sexual preference.”

Rules are also necessary for the mega-corporations that moderate online content. Kleijssen: “A lot of content moderation is done by AI systems, and this can lead to one-sidedness and self-reinforcing information bubbles. And then to polarization. It is about companies taking responsibility for ensuring that their information supply remains heterogeneous and does not go in one direction.”

Where do things stand? In October, Kleijssen was received at the White House to discuss the United States' input into the negotiations. “The Americans are also participating. They now have a plan for a national AI Bill of Rights, partly inspired by our work.” One Eurasian power is no longer involved: Russia, which was expelled from the Council of Europe after the invasion of Ukraine.
