The chatbot ChatGPT is rapidly gaining fame and popularity. The program is a neat showcase of what artificial intelligence can do. But criminals are also looking at how they can use artificial intelligence for more serious cyberattacks. “It’s naive to think that people with bad intentions aren’t involved in this,” says Dave Maasland of cybersecurity firm ESET Netherlands.
At the end of last year, ChatGPT appeared seemingly out of nowhere, as people on social media shared witty and eye-catching excerpts from conversations with the chatbot. The program has been in the news a lot lately because students are using the AI to outsource their homework: ChatGPT can independently write entire papers and essays.
But the program can do much more. For example, it can help developers spot errors in their code. “AI will offer added value in all sectors,” says cybersecurity expert Stijn Rommens. “As long as we use it for good.”
But according to Maasland, it’s naive to think that malicious parties aren’t also taking an interest in ChatGPT. “Cybercriminals are lazy and want to make a quick buck as efficiently as possible,” he says.
No code red yet
Cybersecurity company Check Point recently warned of rapidly growing interest in ChatGPT among criminals. Tips for malicious use of the chatbot are already being shared on hacker forums.
However, there are currently no known attacks that were set up using ChatGPT, say both Maasland and Check Point’s Zahier Madhar. According to them, it may take some time before the effects of ChatGPT become visible.
Cybersecurity experts are therefore not yet calling it a code red. The tool is new, but the goals haven’t changed: criminals still want to steal data or money.
Still, there are concerns in the cybersecurity world, says Maasland. “This development could democratize cybercrime.” By that he means that large numbers of people gain access to tools for committing cyberattacks in one fell swoop. “I think tools like ChatGPT could be the start of a new arms race between attackers and defenders.”
Help with malware and phishing
“Suddenly everyone can program,” says Madhar. “People with a little technical knowledge can get ChatGPT to write code. They can even ask the chatbot for an explanation if they can’t figure it out.”
ChatGPT can even help write phishing emails. By convincing people in such an email to click on a link or file, attackers can gain access to victims’ computers. If you phrase the request cleverly, the chatbot delivers a ready-made text in the name of a courier company. All the attacker has to add himself is a piece of malicious software, but the program can help with that too.
The chatbot can, for example, help draw up a step-by-step plan for developing malicious software. Madhar himself wrote a script with it that allowed him to bypass antivirus programs. “The speed at which I managed to do it is bizarre.”
ChatGPT continues to evolve. Its maker, OpenAI, says it does its best to make the chatbot reject inappropriate requests. “We also use techniques to issue warnings about certain types of unsafe or harmful content, but these can occasionally go wrong,” the company says.
AI also helps fight cybercrime
The safeguards in ChatGPT are quite easy to bypass. And because the barrier to entry is so low, this could ultimately lead to many more cyberattacks, says Maasland. According to him, the most important thing is for companies and organizations to prepare themselves. “A lot of companies still don’t look at their security software, or update it too rarely. There’s a lot to be gained there.”
But AI will not only be abused by cybercriminals; the same tools can also be used to ward off attacks. “We used to look at specific files if we wanted to stop a virus; now we look at behavior,” says Maasland.
When a certain type of behavior is observed on a network or computer, an AI can tell with high confidence whether something malicious is going on. “Compare it to this: if someone walks around a house with a balaclava and a crowbar, chances are they want to break in,” Maasland explains.
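To make that idea concrete, here is a minimal sketch in Python of what behavior-based scoring means, as opposed to matching known malicious files. Every detail here is hypothetical and invented for illustration: the event names, the suspicion weights, and the alert threshold. Real security products are far more sophisticated than this.

```python
# Illustrative sketch of behavior-based detection: instead of checking
# whether a file matches a known virus signature, score what a process
# actually *does*. All event names, weights, and thresholds are made up.

from dataclasses import dataclass

@dataclass
class Event:
    process: str
    action: str  # e.g. "rename_file", "disable_backup", "open_document"

# Hypothetical weights: how suspicious each action is on its own.
SUSPICION_WEIGHTS = {
    "open_document": 0,       # perfectly normal behavior
    "rename_file": 1,         # common alone, but mass renames hint at ransomware
    "contact_unknown_host": 10,
    "disable_backup": 25,     # rarely legitimate
}

ALERT_THRESHOLD = 50  # hypothetical cut-off for raising an alert

def score_process(events: list[Event], process: str) -> int:
    """Sum the suspicion weights of all observed actions of one process."""
    return sum(SUSPICION_WEIGHTS.get(e.action, 0)
               for e in events if e.process == process)

if __name__ == "__main__":
    observed = [
        Event("invoice.exe", "open_document"),
        Event("invoice.exe", "disable_backup"),
        # Forty file renames in a row: the "balaclava and crowbar" pattern.
        *[Event("invoice.exe", "rename_file") for _ in range(40)],
    ]
    score = score_process(observed, "invoice.exe")
    if score >= ALERT_THRESHOLD:
        print(f"ALERT: invoice.exe scored {score}, behavior looks malicious")
```

The point of the sketch is that no single action is damning; it is the combination of behaviors, like disabling backups and then renaming files en masse, that pushes the score over the threshold, just as the balaclava plus the crowbar does in Maasland’s analogy.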
Experts emphasize that ChatGPT is not a bad program in itself; it’s about how you use it, says Rommens. “AI doesn’t have self-awareness yet and has to be programmed in a certain way to make decisions and do work. So ultimately there are still people at the controls deciding what happens with it.”