OpenAI is facing another ChatGPT privacy complaint in Europe, this time over allegations that the AI chatbot generated defamatory and entirely false claims about an individual. Privacy advocacy group Noyb has taken legal action on behalf of a Norwegian citizen who was shocked to discover that ChatGPT falsely claimed he had been convicted of murdering two of his children and attempting to kill the third.
The disturbing allegations, which had no basis in reality, raise serious concerns about the accuracy and accountability of AI-generated information. While OpenAI has acknowledged that ChatGPT can sometimes produce misleading or incorrect responses, privacy advocates argue that disclaimers are not enough to absolve the company of its legal responsibilities under the General Data Protection Regulation (GDPR).
ChatGPT’s Hallucinations Under GDPR Scrutiny
Noyb’s latest ChatGPT privacy complaint highlights a fundamental legal issue: the right under the GDPR to correct or remove false personal data. The law mandates that any personal data collected, stored, or generated about individuals must be accurate, and that individuals must be able to rectify incorrect information.
Joakim Söderberg, a data protection lawyer at Noyb, stated:
“The GDPR is clear. Personal data has to be accurate. If it’s not, users have the right to have it changed to reflect the truth. Simply showing a disclaimer that ChatGPT may generate incorrect information isn’t sufficient.”
GDPR violations can result in severe penalties, including fines of up to €20 million or 4% of a company’s global annual turnover, whichever is higher. OpenAI has previously faced GDPR scrutiny: Italy’s privacy watchdog temporarily banned ChatGPT in 2023 over data protection concerns, forcing OpenAI to change how it informs users about data processing.
However, privacy regulators across Europe have since taken a cautious approach as they work out how best to regulate generative AI. A previous ChatGPT privacy complaint, filed in Poland in 2023, is still under investigation, illustrating how slowly these cases can move.
The Case That Shocked a Community
The Norwegian citizen at the center of this case, Arve Hjalmar Holmen, discovered that ChatGPT fabricated a highly specific and damaging false history about him. The chatbot claimed he had been convicted of child murder and sentenced to 21 years in prison.

While the core claim was entirely false, some details in the AI-generated response were eerily accurate, such as the number of his children and his hometown, raising questions about how AI models blend real and fabricated information.
Noyb investigated the issue but found no evidence suggesting ChatGPT confused him with another person. A spokesperson for the organization explained:
“We checked newspaper archives and other sources to ensure this wasn’t a mistaken identity issue. There is no explanation for why the AI generated this entirely fabricated and defamatory claim.”
This case isn’t an isolated incident. In recent years, ChatGPT has made similarly false accusations, including claims that an Australian politician was involved in a bribery scandal and that a German journalist was a child abuser.

The Dangers of AI Hallucinations
Large language models such as ChatGPT generate text by predicting the most likely next words, based on patterns learned from vast training datasets. When those datasets contain gaps, the models can “hallucinate”: they produce plausible-sounding but false information.
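To make that mechanism concrete, the toy Python sketch below mimics a single step of next-token sampling. The candidate words and their scores are invented for illustration, and real models operate over vocabularies of tens of thousands of tokens, but the underlying principle is the same: the sampler weighs statistical likelihood, not factual truth.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores ("logits") for candidate continuations of a
# prompt like "The defendant was ...". These numbers are made up for
# illustration; they are not taken from any real model.
candidates = ["acquitted", "convicted", "released", "sentenced"]
logits = [1.2, 2.8, 0.5, 2.1]

probs = softmax(logits)

# Sample the next word in proportion to its probability. Nothing here
# checks whether the chosen word makes the sentence true, only whether
# it is statistically likely given the training data.
next_word = random.choices(candidates, weights=probs, k=1)[0]

for word, p in zip(candidates, probs):
    print(f"{word}: {p:.2f}")
print("Sampled continuation:", next_word)
```

A model with no reliable text about a particular person fills the gap the same way, picking whichever continuation its training data makes most probable, which is how a fluent but entirely false biography can emerge.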
While OpenAI has added disclaimers warning users that ChatGPT may produce inaccurate responses, Noyb argues that this does not absolve the company of legal responsibility. Under GDPR, companies handling personal data must ensure its accuracy and provide users with mechanisms to correct false information.
Kleanthi Sardeli, another data protection lawyer at Noyb, emphasized:
“Adding a disclaimer does not make the law disappear. AI companies cannot simply hide false information from users while continuing to process incorrect data internally.”
What Happens Next?
Noyb has filed the ChatGPT privacy complaint with Norway’s data protection authority, hoping the regulator will investigate OpenAI’s practices. However, OpenAI’s recent restructuring, which placed its European operations under its Ireland-based division, could complicate enforcement.
A similar complaint filed in Austria in April 2024 was transferred to Ireland’s Data Protection Commission (DPC), where it remains under review. The DPC has been slow to act, leading to frustration among privacy advocates who argue that delayed enforcement allows AI companies to continue operating without proper oversight.
Despite these challenges, Noyb remains determined to hold OpenAI accountable for ChatGPT’s hallucinations, particularly when they cause serious reputational harm. As AI continues to evolve, regulators worldwide will face mounting pressure to ensure that AI-generated content does not violate individuals’ rights.
For now, the key question remains: Will OpenAI be forced to implement stronger safeguards to prevent AI-generated falsehoods? Or will ChatGPT’s hallucinations continue to pose a legal and ethical minefield for AI companies?