The prominent artificial intelligence (AI) developer OpenAI is at the center of a new privacy complaint filed by a data rights advocacy group in Austria.

On April 29, Noyb filed the complaint, alleging that OpenAI has failed to correct false information produced by its generative AI chatbot, ChatGPT. The group said this failure could put the company in breach of privacy rules in the European Union.

According to the group, the complainant in the case, an unnamed public figure, asked OpenAI’s chatbot for information about himself and was consistently given incorrect information.

OpenAI allegedly refused the public figure’s request to correct or erase the data, saying it wasn’t possible. It also refused to reveal information on its training data and where it was sourced.

Maartje de Graaf, a Noyb data protection lawyer, commented on the case in a statement saying:

“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”


Noyb took its complaint to the Austrian data protection authority requesting that it investigate OpenAI’s data processing and how it ensures the accuracy of personal data that its large language models (LLMs) process. 

De Graaf said that “it’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals.”

Noyb, also known as the European Center for Digital Rights, operates from Vienna, Austria, with the goal of launching “strategic court cases and media initiatives” that support the European Union’s General Data Protection Regulation (GDPR).

This is not the first time that chatbots have been called out in Europe by either activists or researchers.

In December 2023, a study from two European nonprofit organizations found that Microsoft’s Bing AI chatbot, since rebranded as Copilot, was providing misleading or inaccurate information about political elections in Germany and Switzerland.

The chatbot provided inaccurate answers on candidate information, polls, scandals and voting, while also misquoting its sources.

Another instance, though not specific to the EU, involved Google’s Gemini AI chatbot, whose image generator produced “woke” and inaccurate imagery. Google apologized for the incident and said it would update its model.
