‘We ask each other the wrong questions about ChatGPT’

Interview | by Michael Bakker

20 January 2023 | It is pointless to fend off technology like ChatGPT; it is better to think about how you can use it, advises Professor Bart Wernaart (Fontys University of Applied Sciences). Whether students use such an AI application as a tool, sparring partner or source of inspiration often does not matter. Better to teach them when they should use it and when they shouldn’t, he argues.

Bart Wernaart, Associate Professor of Moral Design Strategy at Fontys University of Applied Sciences.

Bart Wernaart is a lecturer in Moral Design Strategy at Fontys University of Applied Sciences and winner of the first Melanie Peters Prize from the Rathenau Institute (more on that later). As a researcher deeply engaged with new technologies and ethics, he follows the discussion on ChatGPT with interest, and he sees how that discussion could be improved. “We’re asking ourselves the wrong questions,” he says.

Does it matter if something is made with ChatGPT?

“ChatGPT is a technology that lets you create something that is not your own work. So within degree programs there is a fear that a student will hand in an assignment written not by herself but by the technology,” the lecturer says, outlining the problem. “But there is a deeper question behind this. Does it matter whether something is real or not? Does it matter whether something is man-made or not?”

Anyone who answers ‘yes’ immediately has a problem: the difference between text written by a human and text written by a computer is almost impossible to detect. You can search endlessly for that difference, or develop technology that can find it, but then you will always be lagging behind the facts, Wernaart observes. “Administrators sometimes have a tendency to regulate away the negative consequences of new technology from the top down. But innovation is always faster than regulation.”

Technology vs. Technology – at the expense of creativity

The tendency to use technology to detect automatically generated texts can also mean that the very human creativity people want to protect is suppressed, warns Wernaart.

“Technology can only register whether something has existed before or not. Truly new ideas will therefore always be flagged as ‘strange’ by technology. If I use technology to detect technology, as is now being proposed for ChatGPT, a technological cat-and-mouse game ensues in which the next Galileo Galilei is automatically written off.”

What do we want students to learn?

So what should higher education do? It makes much more sense to accept that something like ChatGPT exists and to look for a good way to deal with it, says Wernaart. Besides, it is not all doom and gloom: a student can use the chatbot to cheat on a test, but ChatGPT can also act as a sparring partner or source of inspiration.


“Make sure you’re equipping people on the front lines to handle it responsibly,” says Wernaart. “Teach students when they can use such a chatbot for learning and inspiration, and especially teach them when to invest in themselves by not using such a tool for a while.”

Educators will therefore need to think carefully about what they want to teach their students. “Is it important that a student does everything himself? If so, then you need to act. If not, then it doesn’t matter in what way a student uses such a chatbot.”

The conversation about ChatGPT should be about how it is used

Back to the deeper question: does it matter whether something is real or not? “We draw a sharp distinction between fake and not fake,” explains Wernaart, “but fake does not automatically mean that something is false. The same applies the other way around. Suppose we have a deepfake video of a politician playing football. That does not mean the politician never actually plays football, or that the deepfake cannot reflect something that actually happens.”

Humans now have the unique opportunity to pull themselves up by technology and thereby become better.

Some time ago there was an uproar when an artificial intelligence application that did not itself come up with anything new won a painting competition. “What does it really mean that people like such a highly refined collage of existing images?” asks Wernaart. “Who cares if it wins a prize? There are also computers that can beat the best chess player. So what? It gives the chess master opposition against which he can improve.”

People now have the unique opportunity to pull themselves up by technology and thereby become better, says Wernaart. “You have to keep thinking carefully about the use of that technology. If you use it to influence elections or manipulate the news, then your goal is not legitimate. That is what the conversation about ChatGPT should be about: are you using technology to make something better? If you use it to make something demonstrably better, it doesn’t matter whether that something is real or fake. If you use it to make something worse, it does.”

ChatGPT is not ethically neutral

It is not the authenticity of a technological artifact that matters most, but the morality with which the technology is loaded and applied. “Technology is never ethically neutral; it has a certain moral charge. The person pulling the trigger of a gun may have all sorts of intentions, but a gun-heavy society is de facto many times more violent than a society without guns. The mere fact that those weapons exist makes society ethically different,” says Wernaart.

A chatbot like ChatGPT also has an ethical charge, and one that may be far removed from the intention with which the software was designed. “The designer of such a bot probably did not build it to help students fake their assignments. You will always have to check whether the intended ethical charge still corresponds to the ethical charge of the technology itself. And does it correspond to what we as a society want? If the answer is ‘yes’ twice, we can use the technology to make something better.”

A social medium such as Facebook is a good example of the opposite, says Wernaart. The designers of social media in their current form intended to sell as many ads as possible, matched as closely as possible to the user. “That inadvertently created the familiar social media bubbles, and a few steps later you have the storming of the Capitol.”

Better to think about how you can use it

It’s no use fighting it; better to think about how you can use it, is the message from the Eindhoven lecturer. “If I want to teach a student to write for themselves, I shouldn’t use such a chatbot, but if I want to inspire or guide them, it may not matter at all if I do.”
