We are too quick to assume that computers can do it better

Control room with surveillance cameras. Image: Martijn Beekman

Surveillance technology has long since ceased to be the exclusive toy of dictatorial regimes seeking to keep their people under control. Doctors, employers and neighbors use it too. It is therefore right that authorities of all kinds sound the alarm to prevent excesses. Meanwhile, we seem to have forgotten to ask the most important question: do computers that keep watch really make society better?

Surveillance dictatorship, surveillance capitalists – anyone following the debate on digital surveillance technology encounters warnings from parties such as Amnesty International, the Brookings Institution and the American thinker Shoshana Zuboff. And those warnings find a political audience. The European Commission, for example, proposes to ban real-time biometric identification of people in public spaces by the police, as well as other dangerous uses of artificial intelligence.

All this commotion is good and necessary. The scandal surrounding the Israeli Pegasus software, for example, showed that many governments use surveillance technology to spy on journalists and human rights lawyers. But the debate is one-sided. Attention is focused on preventing the worst of the worst, while surveillance technology poses another problem as well: its benefits are often hard to prove.

That insight is crucial. If the effectiveness of surveillance software is unproven, the question is no longer just whether the technology goes too far. The question becomes whether monitoring software is worth its social price at all.

Because surveillance technology is used in many areas (e.g. the labor market, education, crime detection), a great deal of research has been done on how well it works. Yet proving the usefulness of surveillance technology is not so simple. Take political micro-targeting: political parties pay Facebook to show users targeted advertisements selected by algorithms. Does it work? We actually do not know.

On the one hand, research from Northwestern University in the United States questions the effectiveness of such advertisements. In fact, it appears hard to prove that Facebook’s advertising tools persuade people any better than traditional advertising methods.

On the other hand, research from the University of Amsterdam shows that political micro-targeting certainly has an effect. Not because left-wing voters are persuaded to vote for the right, but because they are encouraged to vote for the left even more.

This discussion is not surprising. Thousands of factors drive the choices we make and the behaviors we exhibit, from our genetic makeup to our Monday morning mood. What exactly determines our choices is difficult to predict, and therefore difficult to influence.

What is surprising is that countless actors use surveillance technology whose effectiveness has not been scientifically proven. Lured by the glowing promises of digitalisation, political parties, companies and citizens unleash AI systems on society – often without being able to prove that these systems are actually good at anything. That insight should change our view of digital surveillance. Why should we use surveillance technology if it may not help solve societal challenges at all?

Remember that surveillance technology always comes at a price. If you feel monitored, you behave differently. Digital systems can be hacked, and although the digital world is decades old, it is unfortunately becoming less secure rather than more. In addition, surveillance technology affects our autonomy. When a computer decides which candidates are invited to a job interview, it lightens the employer’s workload, but it also reduces the employer’s autonomy to choose.

It is worth letting all of this sink in. If surveillance technology almost always costs privacy, security and autonomy, it must be worth that price. A careful ethical analysis is therefore desirable and necessary.

A sober perspective is required. We certainly do not need to throw all surveillance applications overboard, but we should critically evaluate them for their added value to society. There are other ways to make our streets safer, spread messages and hire people. If AI systems can sharpen our work and judgment, they are welcome – but we should not assume that existing practices are less successful than automated ones. The reverse may well be the case.

Linda Kool is a researcher on the digital society at the Rathenau Institute.
Jurriën Hamer is a philosopher, lawyer and researcher on surveillance technology at the Rathenau Institute.

Right to privacy – System Risk Indication

Last year, the Hague District Court ruled that the System Risk Indication (SyRI), used by the government to detect fraud involving benefits, allowances and taxes, was in breach of Article 8 of the European Convention on Human Rights (ECHR), the right to privacy. Among other things, the legislation provided no insight into how the risk scans were validated, making it impossible to verify whether SyRI worked correctly. The law’s social significance therefore did not outweigh the invasion of privacy.

Selection on the labor market – AI-driven assessments

Employers increasingly use digital monitoring tools, for example to assess job applicants: a monitoring instrument may pre-select promising candidates. However, the Rathenau Institute’s report Working on Value shows that the effectiveness of various AI-driven assessments is insufficiently proven, and that it is insufficiently clear how these systems should be controlled. A new recruitment code requires algorithms to be validated and transparent, but detailed rules are lacking.

Policy based on hope – Digitization of government tasks

The European Commission’s scientific service, the Joint Research Centre (JRC), conducted a large-scale study on the digitization of government tasks in the EU, including the use of AI predictions in health care, crime prevention and education. The researchers concluded that governments generally view innovations too optimistically and base their policies more often on hope than on hard empirical evidence. They advised governments to be more realistic and to recognize the complexity of digital innovations.
