
ChatGPT Fined Over Personal Data Misuse

The Italian data protection authority has imposed a multimillion-euro fine on OpenAI. The authority believes the ChatGPT operator has not been transparent about how personal data is used. OpenAI intends to fight the fine.

The Italian data protection authority has fined ChatGPT operator OpenAI 15 million euros. The authority said that the company processed users’ personal data without an adequate legal basis to train artificial intelligence (AI).

According to the Italian regulator, the US company violated the principle of transparency and the associated information obligations towards users.

Say Goodbye to Meeting Chaos

Try our secure AI meeting assistant to manage meeting notes, agendas, and tasks effortlessly. Sign up today for an AI meeting platform designed with data privacy at the core. Perfect for industries that demand privacy and confidentiality, such as legal, finance, and defense.

What OpenAI Says About the Personal Data Fine

OpenAI described the decision as “disproportionate” and announced an appeal. The investigation, which began in 2023, also concluded that OpenAI had not set up an adequate age verification system to prevent children under 13 from being exposed to inappropriate AI-generated content, the authority said.

It also ordered OpenAI to launch a six-month campaign in Italian media to inform the public about how ChatGPT works – especially about the collection of data from users and non-users to train algorithms.

ChatGPT and European Data Protection Laws

The Italian authority, known as the Garante (Garante per la Protezione dei Dati Personali), is considered one of the most active regulators in the European Union in assessing whether AI platforms comply with EU data protection laws.

Last year, as Handelsblatt reports, the authority briefly banned the use of ChatGPT in Italy over alleged violations of EU data protection rules. The service was reinstated after OpenAI addressed the issues.

READ MORE: OpenAI Chat: Security Considerations

OpenAI said the authority had acknowledged its approach to protecting privacy in AI. “But this fine is almost twenty times the turnover we generated in Italy during the period in question.”

The regulator said it calculated the amount of the fine taking into account OpenAI’s “cooperative attitude”. This in turn suggests that the fine could have been even higher.

Under the EU’s General Data Protection Regulation (GDPR), which has applied since 2018, companies that violate the data privacy rules face fines of up to 20 million euros or four percent of their global annual turnover, whichever is higher.
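To make that ceiling concrete, here is a small, purely illustrative Python sketch (the function name and the turnover figure are hypothetical, and this is not legal guidance):

```python
def gdpr_max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious GDPR infringements:
    EUR 20 million or 4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover:
print(f"EUR {gdpr_max_fine_eur(2_000_000_000):,.0f}")  # EUR 80,000,000
```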

A Brief History of European Disputes against ChatGPT

ChatGPT, developed by OpenAI and launched publicly in November 2022, quickly attracted worldwide attention for its ability to generate human-like text.

Alongside its popularity, ChatGPT also faced scrutiny from European regulators concerned about data privacy, potential misinformation, and overall compliance with existing legislation. Below is a brief overview of key disputes and regulatory developments involving ChatGPT and European legislators.

Early Concerns about Possible Data Privacy Violations

Shortly after ChatGPT rose to prominence, European data protection agencies began evaluating whether its operations complied with the General Data Protection Regulation (GDPR).

As we mentioned, in March 2023, Italy’s data protection authority ordered the temporary suspension of ChatGPT services in Italy due to alleged violations of GDPR principles.

The authority cited concerns over how user data was collected, processed, and stored. OpenAI responded by implementing additional privacy disclosures and age-gating measures.

The fine against the company amounts to 15 million euros. Photo: REUTERS

Pressure on ChatGPT by EU Data Regulators

Following Italy’s decision, data protection authorities in several European countries, such as France, Spain, and Germany, began examining ChatGPT’s compliance.

The European Data Protection Board (EDPB) launched a dedicated task force in April 2023 to foster cooperation and information-sharing among member states. Regulators indicated particular interest in:

  • Transparency regarding data collection
  • Legal bases for processing personal data
  • Measures for preventing minors from using ChatGPT
  • Safeguards against disinformation or harmful content

Although no immediate EU-wide ban materialized, the coordinated scrutiny underscored a growing willingness among European legislators to enforce strict privacy and consumer protection standards on AI tools.

Keep Your Meetings and Conversations Secure

90% of your meeting data leaks online. Want to change that? We offer familiar features such as AI meeting notes and transcripts, wrapped in ironclad data privacy. Get started with an AI assistant that protects your data.

The European AI Act and Potential Future Disputes

Separately from GDPR concerns, the European Commission has been working on comprehensive AI legislation, commonly referred to as the AI Act.

This proposed regulatory framework categorizes AI systems based on perceived risk levels and imposes requirements related to transparency, accountability, and fundamental rights. Generative AI services, including ChatGPT, could face additional obligations, such as:

  • Mandatory labeling of AI-generated content
  • Strict data governance and auditing requirements
  • Robust mechanisms to address biases and misinformation

As negotiations continue among EU institutions, observers anticipate that platforms like ChatGPT will be directly affected by the final text of the AI Act.

OpenAI has signaled readiness to adapt its models to comply with future regulations but has also expressed concerns about balancing innovation with regulatory compliance.

LEARN MORE: California AI Safety Bill SB 1047: What You Should Know

What Worries Europeans About ChatGPT’s Data Privacy Approach

Despite OpenAI’s efforts to address data privacy concerns—such as updating policies and enabling users to delete conversations—uncertainties remain. Key questions revolve around how ChatGPT and similar large language models handle:

  • Consent and informed data usage
  • Data retention policies
  • Accountability for content that violates individual privacy or national laws

European regulators continue to monitor these issues, and more formal investigations or guidelines could emerge. Some lawmakers argue that stronger legal frameworks are needed to hold AI providers accountable for issues ranging from misinformation to potential discrimination.

Others caution that overly restrictive rules could stifle innovation and hamper European competitiveness in AI development.

Privacy Is Not an Option

Did you know that your meetings are leaking private information? You need a secure AI meeting platform you can trust. At Eyre Meet, encryption and meeting data protection are included by default. What happens in your meeting is your business.

Final Thoughts

Given the EU’s history of robust consumer and data protection laws, it is likely that ChatGPT and similar AI services will see heightened oversight as legislative discussions progress.

However, the practical outcome will depend on how strictly member states interpret and enforce new or existing regulations. In the short term, platforms offering large language models may face additional compliance hurdles, including thorough data impact assessments and explicit user-consent mechanisms.

In the long term, aligning with European rules could pave the way for greater trust in AI services—assuming companies like OpenAI can demonstrate unwavering commitment to data security and transparency.

READ NEXT: Web Scraping: What Is It and How to Safeguard Your Data

FAQ

Does ChatGPT collect personal data?

ChatGPT processes user input to provide answers, and it may inadvertently receive personal data if users include it in their prompts. OpenAI, the company behind ChatGPT, indicates that it employs filtering and data retention policies to minimize storing personally identifiable information.

How does ChatGPT use the data it collects?

OpenAI utilizes user input primarily to generate responses and to improve the model’s performance. According to OpenAI documentation, some data may be analyzed for research or system improvement, but personally identifying details should be removed or masked to protect user privacy.

What are the potential risks of sharing personal data with ChatGPT?

One risk is the unintentional exposure of sensitive information if users include private details in their prompts. While OpenAI employs security measures, no system is immune to data breaches or misuse. Another concern is the possibility of long-term storage of user input that could be accessed or retrieved in the future. Regulators, including those in the European Union, stress that organizations must adhere to data protection regulations such as the GDPR to mitigate these risks.

Is ChatGPT compliant with GDPR and other data protection laws?

OpenAI has taken steps to meet GDPR requirements, including providing clearer data usage disclosures and options for users to manage or delete their conversation histories. However, some European regulators have questioned whether these measures are fully sufficient. Italy’s data protection authority temporarily banned ChatGPT in 2023, prompting the company to enhance its privacy safeguards.

Can users request the deletion of their personal data?

Yes, under GDPR and similar regulations in other regions, users have the right to request the deletion or removal of personal information. OpenAI offers methods to request the erasure of data associated with a user’s account or usage. Users seeking deletion of specific content can contact the support team and reference their rights under data protection laws.

How can users protect their personal data when using ChatGPT?

The simplest step is to avoid sharing sensitive information, such as financial or health data, in ChatGPT prompts. Regularly review your conversation history and take advantage of any built-in privacy settings, including the option to delete past sessions. Keep your account credentials secure, and if you have concerns about data usage or storage, consult OpenAI’s privacy policy or contact their support team.
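For readers who want to put that advice into practice, here is a minimal, hypothetical sketch of client-side redaction: stripping obvious identifiers from a prompt before it leaves your machine. The patterns are deliberately simplistic, the helper is not an OpenAI feature, and real PII detection requires far more care.

```python
import re

# Illustrative patterns only; they are applied in order, so the IBAN rule
# runs before the broader phone-number rule.
REDACTION_PATTERNS = [
    ("IBAN", re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")),
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("PHONE", re.compile(r"\+?\d[\d\s().-]{7,}\d")),
]

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers with placeholders before the prompt is sent anywhere."""
    for label, pattern in REDACTION_PATTERNS:
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact(
    "Contact Anna at anna.rossi@example.com or +39 345 678 9012 "
    "about invoice IT60X0542811101000000123456."
))
# -> Contact Anna at [EMAIL REDACTED] or [PHONE REDACTED] about invoice [IBAN REDACTED].
```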

Sources

  • BBC (2023) ‘ChatGPT briefly banned in Italy’ [Online] Available at: bbc.com
  • Politico (2023) ‘EU privacy regulators scrutinize ChatGPT’ [Online] Available at: politico.eu
  • TechCrunch (2023) ‘The AI Act and its implications for generative models’ [Online] Available at: techcrunch.com
Author Profile
Julie Gabriel

Julie Gabriel wears many hats—founder of Eyre.ai, product marketing veteran, and, most importantly, mom of two. At Eyre.ai, she’s on a mission to make communication smarter and more seamless with AI-powered tools that actually work for people (and not the other way around). With over 20 years in product marketing, Julie knows how to build solutions that not only solve problems but also resonate with users. Balancing the chaos of entrepreneurship and family life is her superpower—and she wouldn’t have it any other way.
