AI’s iPhone Moment | De Tijd

The fact that the supposedly almighty Google is again calling on its founders for help shows what more and more people are discovering, to their surprise, on ChatGPT: artificial intelligence is experiencing a breakthrough that threatens to sweep many people off their feet.

For years, the European Commission fought in vain against Microsoft’s monopoly, until Apple and Google broke it. Now the same seems to be happening to the latest tech giant, thanks to ChatGPT’s phenomenal breakthrough. The panic is so great that, according to the New York Times, Google has called its founders Sergey Brin and Larry Page back to the table to help think things through.

Indeed, it is no exaggeration to say that we are experiencing an “iPhone moment” with artificial intelligence, even if Apple haters will find the term ill-chosen. The fact is that Apple managed to turn the phone into a pocket computer more powerful than the computers that sent Apollo 11 to the moon. It created a new technological ecosystem in which everyone built apps for that smartphone, and it changed the concept of ‘phone’ so thoroughly that today’s youth do everything with it except make calls.

The system is powerful. Enter a question about the dangers of AI’s rapid breakthrough for society, and after a few seconds a Dutch text of 844 characters rolls out, free of grammatical and syntactic errors.

There are several dangers, the chatbot says about itself, after which it lists five. Unemployment, because AI takes over human tasks. Bias, because the AI system has been trained on data that may contain prejudices. Breaches of privacy. Security risks, such as military cyberattacks. And social isolation, because in the long term people risk losing contact with other people. Ask the chatbot whether there are also specific dangers for Belgium, and it points to the legal situation under the GDPR regulation on personal data protection.

This means a new reckoning is beginning in every area. At the World Economic Forum in Davos last week, AI was seen as a dangerous weapon in the hands of dictators. Schools fear the end of verifiable homework. Parents fear that their children, already made depressed by social media, lazy by Wikipedia and aggressive by video games, will now also outsource their writing skills.

Anyone with an office job must wonder how quickly mediocrity will lead to dismissal. In Europe, the question is once again being raised as to why this technology is coming from the United States – and this time not even through the military budgets of the Pentagon research agency DARPA. Europe should also ask itself whether, as with GDPR, it should perhaps again take on the task of regulating what is made elsewhere in the world. And even the almighty Google has doubts about its technological hegemony.

A new form of literacy must be taught, as with previous breakthroughs. There was once a debate about whether you could cite a Wikipedia page. Today, most people know it is not the end point of an inquiry but the starting point of one – sources, talk page and Wikipedia versions in other languages included. One of the great unknowns of AI, for instance, is which sources and texts the bot bases its answers on. And how to check those sources. And whether the chatbot can credibly reveal them itself.

And again, this opens a new chapter in whether technology is amoral. Is it a tool or a weapon? Are we as users being pushed in one of two directions?

For example, we once thought Facebook was reconnecting people who had lost touch, until we realized it was also connecting the Russian government with naïve American voters. A similar challenge looms now that we can outsource even more of our thinking to an external brain.

Should we be afraid of this? “That’s understandable,” the chatbot replies. “But it’s important to remember that it also offers a lot of positive opportunities.” Finding the latter will be a job for humans, not chatbots.
