Whether it is writing essays or analyzing data, ChatGPT can be used to lighten a person’s workload. That goes for cybercriminals too.
Sergey Shykevich, a lead ChatGPT researcher at the cybersecurity company Check Point, has already seen cybercriminals use the AI’s power to create code that can be used in a cyberattack.
His team began studying the potential for AI to lend itself to cybercrime in December 2022, shortly after ChatGPT’s public launch. Using the AI’s large language model, they created phishing emails and malicious code. As it became clear ChatGPT could be used for illegal purposes, Shykevich told Insider, the team wanted to see whether their findings were merely “theoretical” or whether they could find “the bad guys using it in the wild.”
Because it’s hard to tell whether a harmful email delivered to someone’s inbox was written with ChatGPT, his team turned to the dark web to see how the application was being used.
On December 21, they found their first piece of evidence: cybercriminals were using the chatbot to create a Python script that could be used in a malware attack. The code had some errors, Shykevich said, but much of it was correct.
“What is interesting is that these guys that posted it had never developed anything before,” he said.
Shykevich said that ChatGPT and Codex, an OpenAI service that can write code for developers, will “allow less experienced people to be alleged developers.”
Misuse of ChatGPT — which is now powering Bing’s new, already troubling chatbot — is worrying cybersecurity experts, who see the potential for chatbots to aid in phishing, malware, and hacking attacks.
Justin Fier, director of Cyber Intelligence & Analytics at the cybersecurity company Darktrace, told Insider that the barrier to entry for phishing attacks is already low, but that ChatGPT could make it simple for people to efficiently create dozens of targeted scam emails — as long as they craft good prompts.
“For phishing, it is all about volume — imagine 10,000 emails, highly targeted. And now instead of 100 positive clicks, I’ve got 3,000 or 4,000,” he said, referring to the hypothetical number of people who might click on a phishing email, which is designed to trick users into giving up personal information, such as banking passwords. “That’s huge, and it’s all about that target.”
In early February, the cybersecurity company BlackBerry released a survey of 1,500 information technology professionals, 74% of whom said they were worried about ChatGPT aiding in cybercrime.
The survey further indicated that 71% believed ChatGPT may already be in use by nation-states to attack other countries through hacking and phishing attempts.
Shishir Singh, chief technology officer of cybersecurity at BlackBerry, said: “It’s been well documented that people with malicious intent are testing the waters but, over the course of this year, we expect to see hackers get a much better handle on how to use ChatGPT successfully for nefarious purposes.”