Norwegian citizen Arve Hjalmar Holmen recently discovered something shocking: ChatGPT had created a completely false story claiming he murdered his own children.
When Holmen asked ChatGPT for information about himself, the AI tool responded with disturbing, entirely fabricated claims. According to a complaint filed Thursday by digital rights group Noyb, ChatGPT falsely stated that Holmen was “a convicted criminal who murdered two of his children and attempted to murder his third son” and had been sentenced to 21 years in prison.
None of this was true. Yet the AI mixed these horrible lies with accurate personal details about Holmen, including the correct number and gender of his children and his hometown.
“The fact that someone could read this output and believe it is true, is what scares me the most,” Holmen said in a press release from Noyb.
Noyb Files GDPR Complaint Against OpenAI Over False Information in ChatGPT
Noyb (None of Your Business), a European Union digital rights organization, has filed a formal complaint with Norway’s data protection authority, Datatilsynet.
The group claims OpenAI, the company behind ChatGPT, violated the General Data Protection Regulation (GDPR) by creating and sharing false information about a real person.
Under GDPR rules, people have the right to correct inaccurate personal information about themselves. However, according to Noyb, OpenAI has previously argued that it cannot correct information in its system—it can only block certain outputs.
“Adding a disclaimer that you do not comply with the law does not make the law go away,” said Kleanthi Sardeli, a data protection lawyer at Noyb. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”
The complaint seeks an order requiring OpenAI to delete the false information about Holmen and fine-tune its AI model to prevent similar inaccurate results in the future. Noyb also wants authorities to impose a fine on OpenAI to discourage similar violations.
This isn’t the first time ChatGPT has created false, damaging stories about real people. An Australian mayor previously threatened to sue after ChatGPT wrongly claimed he went to prison. A law professor was falsely linked to a made-up sexual harassment scandal, and a radio host sued OpenAI over fake embezzlement charges generated by the AI.
European Regulators Target ChatGPT Over Data Accuracy
ChatGPT apparently no longer repeats the false claims about Holmen. A recent update allows ChatGPT to search the internet for information about people when asked, rather than generating responses from its training data alone. However, Noyb believes the false information still exists within ChatGPT’s internal data, which would continue to violate GDPR rules.
Fixing the problem completely might be difficult. According to Noyb, if ChatGPT has used false information about Holmen in its training process, the only way to ensure complete removal might be to retrain the entire AI model.
OpenAI has already faced consequences in Europe for GDPR violations. In 2023, Italy temporarily banned ChatGPT following a data breach that exposed user conversations and payment information, and the country’s data protection authority later fined OpenAI €15 million. To restore service in Italy, OpenAI had to give users a way to request corrections to their personal data when it is processed inaccurately.
European regulators continue to increase scrutiny of AI companies. In 2023, the European Data Protection Board launched a ChatGPT task force to investigate data privacy concerns and possible enforcement actions after users reported similar false and potentially defamatory outputs.
The outcome of this complaint could force OpenAI to make significant changes to how ChatGPT handles personal information in Europe, potentially requiring technical modifications to its AI systems to comply with European privacy law.