Generative AI is in the news, as usual. One of the biggest stories lately, however, is how the practices of AI providers like OpenAI may violate user privacy.
This, of course, is a big no-no for jurisdictions like the EU.
Here, we’re dipping into the world of AI to talk about the latest complaint against OpenAI and how this speaks to privacy and GDPR compliance issues.
The Tense History of ChatGPT in the EU
One of the big unknowns for users of ChatGPT (and non-users who might work with platforms in partnership with OpenAI) is how training data is collected and used to power the large language models (LLMs).
Some concerns over these unknowns have repeatedly arisen in the EU, where GDPR is the law of the land. In 2023, the Garante (Italy's data protection authority) opened a GDPR enforcement action against OpenAI, claiming that the company unlawfully processed consumer data to power the service. The Italian authority briefly banned the service in that country until OpenAI gave users more control to opt out of model training and implemented age-based access controls for the tool.
Now, the privacy advocacy group noyb has filed a GDPR complaint against OpenAI, claiming that ChatGPT generates provably false information about individuals through "AI hallucinations," and that the organization can neither explain where this information comes from nor correct it. The complaint argues that this violates GDPR by denying EU citizens their data privacy rights.
Why Is AI a Threat to Data Privacy?
It's not entirely clear how AI threatens data privacy beyond OpenAI's own practices and its relationship with consumers. But generative AI's capacity to collect, analyze, and generate insights from vast amounts of data creates several significant privacy concerns:
- Mass Surveillance: AI can process large volumes of data from various sources, such as CCTV cameras, smartphones, and other devices, enabling mass surveillance. This raises significant privacy concerns as it can lead to the monitoring of individuals without their consent.
- Data Breaches: AI systems require large datasets to train on, often including sensitive personal information. The storage and processing of such data increase the risk of breaches, where personal information can be accessed unlawfully.
- Profiling and Decision Making: AI can analyze data to create detailed profiles of individuals, potentially leading to privacy invasions. These profiles can be used for targeted advertising, credit scoring, or even predictive policing, which might not always be transparent or fair.
- Inference of Sensitive Information: AI can infer additional personal details from seemingly innocuous data. For example, machine learning models can predict sensitive attributes like sexual orientation, political affiliation, or health status from patterns in data that are not explicitly related to these attributes.
- Manipulation and Control: The insights gained from data analysis can be used to manipulate behaviors and opinions. This is particularly concerning when AI-driven personalization algorithms determine the kind of information individuals are exposed to online, potentially infringing on privacy and autonomy.
- Lack of Anonymity: AI’s ability to re-identify individuals from anonymized datasets through sophisticated data matching techniques challenges the effectiveness of traditional privacy protection methods like data anonymization.
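The last point, re-identification, is worth making concrete. The classic technique is a linkage attack: joining a supposedly anonymized dataset to a public one on quasi-identifiers such as ZIP code, birth year, and gender. The sketch below uses entirely invented records and field names to illustrate the idea; it is not drawn from any real dataset or incident.

```python
# Hypothetical linkage (re-identification) attack sketch.
# All records, names, and fields below are invented for illustration.

anonymized_health_records = [
    {"zip": "02138", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

public_voter_roll = [
    {"name": "Alice Example", "zip": "02138", "birth_year": 1985, "gender": "F"},
    {"name": "Bob Example", "zip": "02139", "bir_year".replace("bir", "birth"): 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def reidentify(records, public_data):
    """Match 'anonymized' records to named individuals using only quasi-identifiers."""
    matches = []
    for record in records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        for person in public_data:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((person["name"], record["diagnosis"]))
    return matches

# Each match links a real name back to a diagnosis the "anonymized"
# dataset was supposed to protect.
print(reidentify(anonymized_health_records, public_voter_roll))
```

No names, no direct identifiers, and yet every record is re-identified. This is why GDPR treats true anonymization as a high bar and still classifies pseudonymized data as personal data.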
Why Are OpenAI and ChatGPT an Issue for GDPR?
OpenAI, like any organization operating in or serving clients in the European Union, must comply with GDPR.
Here are some potential areas where AI technologies, including those developed by companies like OpenAI, might face challenges with GDPR compliance:
- Data Minimization and Purpose Limitation: GDPR requires that data collection be kept to the minimum necessary and only for specific, explicit, and legitimate purposes. AI systems, which often rely on large datasets to improve their accuracy and functionality, might inadvertently collect more data than necessary or use it for purposes not initially disclosed.
- Right to Explanation: Under GDPR, individuals can receive explanations of decisions made by automated systems that significantly affect them. AI systems, particularly those using complex algorithms like deep learning, can be inherently opaque, making it difficult to provide clear and understandable explanations.
- Data Subject Rights: GDPR provides individuals with extensive rights over their data, including the right to access, correct, delete, or transfer their data. Fulfilling these rights can be challenging in AI systems, especially when data is integrated and processed in complex and distributed ways.
- Automated Decision-Making: GDPR restricts purely automated decision-making with legal or similar significant effects on individuals. Ensuring that AI systems comply with these provisions, including providing appropriate human oversight and intervention, can be complex.
- International Data Transfers: AI developers often need to transfer data across borders for processing or development purposes. GDPR requires that such transfers only occur under specific conditions that ensure data protection. Adhering to these rules while managing a globally distributed data processing network can be challenging.
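One common mitigation for the data minimization and transfer challenges above is pseudonymization: strip fields that aren't needed and replace direct identifiers with keyed tokens before data enters a training pipeline. The sketch below is a minimal illustration; the field names, record shape, and key handling are assumptions, not a description of any particular vendor's pipeline.

```python
# Minimal pseudonymization + data minimization sketch (illustrative only).
import hashlib
import hmac

# In practice the key would be stored and rotated separately from the data.
SECRET_KEY = b"rotate-and-store-this-separately"

# Data minimization: only these fields survive; everything else is dropped.
ALLOWED_FIELDS = {"user_id", "query_text"}

def pseudonymize(record: dict) -> dict:
    """Drop unneeded fields and replace the direct identifier with a keyed hash."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    token = hmac.new(SECRET_KEY, minimized["user_id"].encode(), hashlib.sha256)
    minimized["user_id"] = token.hexdigest()[:16]
    return minimized

raw = {"user_id": "alice@example.com", "query_text": "hello", "ip": "203.0.113.7"}
print(pseudonymize(raw))  # email replaced by a token; IP address dropped entirely
```

Note that under GDPR, pseudonymized data is still personal data, since the key holder can reverse the mapping; this reduces risk but does not remove the data from GDPR's scope.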
Does GDPR Compliance Rule Out Using AI?
The short answer is no.
The longer answer is that businesses using generative AI in GDPR jurisdictions must follow the same privacy and processing rules as in any other situation. Closed-system, vetted AI models that support privacy remain viable technologies under GDPR.
More importantly, it's still up to the organization to ensure that consumers can exercise their rights over their data: to access it, and to have it corrected or deleted as needed.
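In application terms, those data subject rights map to concrete operations a system must support. The sketch below is a hypothetical, in-memory stand-in for whatever systems actually hold personal data; the class and method names are assumptions chosen to mirror the GDPR rights they implement.

```python
# Hypothetical sketch of servicing GDPR data subject requests.
# The in-memory dict stands in for real data stores; names are illustrative.

class DataSubjectStore:
    def __init__(self):
        self._records: dict[str, dict] = {}

    def save(self, subject_id: str, data: dict) -> None:
        self._records[subject_id] = data

    def access(self, subject_id: str) -> dict:
        """Right of access (Art. 15): return everything held on the subject."""
        return dict(self._records.get(subject_id, {}))

    def rectify(self, subject_id: str, updates: dict) -> None:
        """Right to rectification (Art. 16): correct inaccurate data."""
        self._records.setdefault(subject_id, {}).update(updates)

    def erase(self, subject_id: str) -> bool:
        """Right to erasure (Art. 17): returns True if data was deleted."""
        return self._records.pop(subject_id, None) is not None

store = DataSubjectStore()
store.save("user-42", {"email": "old@example.com"})
store.rectify("user-42", {"email": "new@example.com"})
print(store.access("user-42"))
print(store.erase("user-42"))
```

The hard part in production is not the interface but the reach: access and erasure must cover every copy of the data, including backups, logs, and, where technically feasible, training pipelines, which is precisely where complaints like noyb's put pressure on AI providers.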
Lazarus Alliance and A.ITAM: AI for Compliance
We’re moving forward with an innovative, secure, proprietary AI system that streamlines compliance and technical writing through our cloud platform, Continuum GRC. To learn more, contact us.
- FedRAMP
- StateRAMP
- NIST 800-53
- FARS NIST 800-171
- CMMC
- SOC 1 & SOC 2
- HIPAA, HITECH, & Meaningful Use
- PCI DSS RoC & SAQ
- IRS 1075 & 4812
- ISO 27001, ISO 27002, ISO 27005, ISO 27017, ISO 27018, ISO 27701, ISO 22301, ISO 17020, ISO 17021, ISO 17025, ISO 17065, ISO 9001, & ISO 90003
- NIAP Common Criteria – Lazarus Alliance Laboratories
- And dozens more!