Italy Blocked ChatGPT Over Data Protection Concerns

The Italian data protection authority, the Garante, has ordered OpenAI to stop processing local users’ data, effectively blocking ChatGPT’s service in the country, citing concerns about the company’s compliance with the European Union’s General Data Protection Regulation (GDPR). OpenAI has 20 days to respond to the order, which also points to the apparently unlawful processing of people’s data and the lack of any system to prevent minors from accessing the technology. The GDPR applies whenever EU users’ personal data is processed; it requires entities handling that data to protect it adequately and grants Europeans a suite of rights over their data, including the right to rectification of errors.


Just two days after an open letter called for a pause in the development of advanced generative AI models until regulators can catch up, the Italian data protection authority has ordered OpenAI to cease processing people’s data locally with immediate effect. The Garante is concerned that ChatGPT is breaching the European Union’s General Data Protection Regulation (GDPR) and has opened an investigation. The order blocks ChatGPT specifically over concerns that OpenAI has processed people’s data unlawfully and that there is no system in place to prevent minors from accessing the technology.

OpenAI, based in San Francisco, has been given 20 days to respond to the order. Failure to comply could bring significant penalties: under the EU’s data protection regime, breaches can draw fines of up to 4% of annual global turnover or €20 million, whichever is greater. It is a timely reminder that some countries already have laws on the books that apply to cutting-edge AI.
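That penalty ceiling is a simple maximum-of-two calculation. The snippet below is only an illustrative sketch of how the cap works; the turnover figure is invented and the helper function is ours, not anything drawn from the order or from OpenAI’s filings.

```python
# Illustrative sketch of the GDPR upper fine tier (Article 83(5)):
# the cap is the greater of EUR 20 million or 4% of worldwide annual turnover.

def gdpr_fine_ceiling(annual_turnover_eur: float) -> float:
    """Return the maximum fine possible under the upper GDPR tier."""
    flat_cap = 20_000_000                        # EUR 20 million
    turnover_cap = 0.04 * annual_turnover_eur    # 4% of annual global turnover
    return max(flat_cap, turnover_cap)

# Hypothetical example: with EUR 1 billion in annual turnover,
# 4% works out to EUR 40 million, which exceeds EUR 20 million and so applies.
print(gdpr_fine_ceiling(1_000_000_000))  # 40000000.0
```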

It is worth noting that OpenAI has no established presence in the EU, which means there is no lead supervisory authority for the company and any member state’s data protection authority can take action under the GDPR if it sees potential risks to local users. Since Italy has now acted, other countries may well follow.

GDPR Issues

OpenAI’s large language models process the personal data of EU users, which brings them within the scope of the GDPR. That much is evident from the fact that ChatGPT can generate biographies of named individuals in the region. OpenAI has remained tight-lipped about the training data behind the latest version of the technology, GPT-4, but earlier models were trained on data scraped from the internet, including platforms such as Reddit. So if you have been at all active online, the bot has most likely ingested your name.

Furthermore, ChatGPT has been shown to produce completely false information about individuals, fabricating details that do not appear in its training data. That raises further GDPR concerns, since the regulation grants Europeans rights over their data, including the right to have inaccuracies corrected. Yet it is unclear how someone could ask OpenAI to rectify erroneous statements the bot makes about them. This is just one example of how ChatGPT’s output could affect EU users’ data protection rights.

The GDPR also sets rules on data breaches: entities processing personal data must safeguard it adequately and must promptly notify the relevant supervisory authority, generally within 72 hours, when a significant breach occurs.

A more significant concern, however, is the legal basis OpenAI relied upon to process Europeans’ data in the first place. The GDPR allows several possible bases, including consent and public interest, but the vast amount of data required to train large language models complicates the legality question, with the Garante highlighting the “mass collection and storage of personal data.”

The GDPR also mandates the principles of transparency and fairness, as well as data minimization. Yet OpenAI, now a for-profit company, does not appear to have informed the people whose data it has repurposed to train its commercial AI, which could pose a significant problem for the company.

If OpenAI has processed Europeans’ data unlawfully, data protection authorities (DPAs) could order the offending data deleted. It remains unclear, however, whether that would force the company to retrain models built on unlawfully obtained data, as existing laws are struggling to keep up with cutting-edge technology.
