Hackers use ChatGPT to write malicious code that steals data

According to a new report, hackers are using ChatGPT, an AI-powered chatbot that replies to queries with human-like responses, to build malicious programmes that can steal user data.


Experts from Check Point Research (CPR) have uncovered the very first evidence of hackers utilizing ChatGPT to build malicious software.


In underground hacking communities, threat actors are building “infostealers” and encryption tools and facilitating fraud schemes.


The researchers issued an advisory about hackers’ rapidly growing interest in using ChatGPT to scale up and teach illicit activity.


“Cybercriminals are finding ChatGPT attractive. In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point,” said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.


ChatGPT can be put to both good and bad uses. For instance, it can help developers write a program.

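By way of illustration, here is a minimal sketch of that legitimate, developer-assist use of ChatGPT through OpenAI's API. It assumes the official openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable; the model name and prompt are purely illustrative and are not taken from the CPR report.

    # A minimal sketch (not from the CPR report): asking ChatGPT for coding help
    # via OpenAI's API. Assumes the official openai Python SDK (v1.x) is installed
    # and an OPENAI_API_KEY environment variable is set; model and prompt are
    # illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": "Write a Python function that reads a CSV "
                                        "file and returns its rows as dictionaries."},
        ],
    )

    # The generated code comes back as plain text in the first choice.
    print(response.choices[0].message.content)

CPR's point is that the same request-and-response loop gives an attacker an equally convenient starting point when the prompt asks for malicious functionality instead.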

On a popular underground cybercrime forum, a thread entitled “ChatGPT – Benefits of Malware” appeared on December 29.


The thread’s author revealed that he had been experimenting with ChatGPT to recreate malware strains and techniques documented in articles and research papers about common malware.

“While this individual could be a tech-oriented threat actor, these posts seemed to be demonstrating to less technically capable cybercriminals how to utilise ChatGPT for malicious purposes, with real examples they can immediately use,” the report noted.


On December 21, a threat actor posted a Python script, which he said was his “first script ever.”


After another cybercriminal commented that the code’s structure resembled OpenAI-generated code, the hacker confirmed that OpenAI had given him a “nice (helping) hand to finish the script with a nice scope.”


This might imply that aspiring cybercriminals with little to no coding experience could also use ChatGPT to build malicious programmes and gradually become fully fledged cybercriminals with the necessary skills and knowledge, the report said.


“Although the tools that we analyse are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools,” Shykevich said.


OpenAI, the creator of ChatGPT, is reportedly seeking new funding at a valuation of nearly $30 billion.


Microsoft has previously invested $1 billion in OpenAI and is currently promoting applications of ChatGPT for addressing practical problems.


Introduced by OpenAI in November 2022, ChatGPT is a chatbot built on the company’s GPT-3.5 series of large language models and fine-tuned with both supervised and reinforcement learning techniques.