When ‘Anonymous’ Isn’t Anonymous: AI and the Shifting Ground Beneath Privacy Law

By Ruarri Fairweather
Published: June 6, 2025

Introduction

Artificial intelligence has become a transformative force in business, government, and society. However, its rapid advancement is quietly undermining a foundational assumption of privacy law: that de-identified data is safe.

Thanks to increasingly powerful AI tools—many freely available—data that was once considered anonymised or pseudonymised can now be reverse-engineered to re-identify individuals with high confidence. This development is placing significant strain on existing privacy frameworks, which were never designed with such computational capabilities in mind.

In this article, we explore how AI is reshaping the boundaries of personal information, the legal and operational risks that emerge as a result, and the key conversations and reforms that need to happen now.

The Problem: AI Has Changed the Definition of “Identifiable”

Summary

Privacy laws globally draw a critical distinction between “personal information” (which is protected) and information that has been de-identified, anonymised, or pseudonymised (which may fall outside legal protections).

However, AI technologies, including machine learning and large language models, can now combine seemingly harmless datasets to identify individuals indirectly. Facial recognition, biometric profiling, geolocation triangulation, voice pattern analysis, and cross-dataset correlation can all lead to successful re-identification.

This means that even data stripped of names or direct identifiers can still pose significant privacy risks in the hands of AI.
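To make the cross-dataset correlation risk concrete, here is a minimal, hypothetical sketch of a classic linkage attack in Python. All datasets, column names, and values are invented for illustration; real attacks apply the same join logic at scale, with fuzzy matching across many more sources.

```python
# Minimal sketch of a linkage (re-identification) attack. All data,
# column names, and values below are hypothetical illustrations.
import pandas as pd

# A "de-identified" dataset: direct identifiers removed, but
# quasi-identifiers (postcode, birthdate, sex) retained.
clinical = pd.DataFrame({
    "postcode":  ["2000", "3056", "4810"],
    "birthdate": ["1984-03-07", "1991-11-22", "1979-06-02"],
    "sex":       ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

# A public dataset (e.g. an electoral-roll extract) carrying the same
# quasi-identifiers alongside names.
public = pd.DataFrame({
    "name":      ["A. Nguyen", "B. Rossi", "C. Okafor"],
    "postcode":  ["2000", "3056", "4810"],
    "birthdate": ["1984-03-07", "1991-11-22", "1979-06-02"],
    "sex":       ["F", "M", "F"],
})

# An exact join on the quasi-identifiers re-attaches names to diagnoses.
reidentified = clinical.merge(public, on=["postcode", "birthdate", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Notably, this exact-match join needs no AI at all; modern models simply make the matching step far more tolerant of noisy, partial, or unstructured data.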

Who does this impact?

All organisations collecting, using, or sharing data—particularly those using AI systems internally or through third-party platforms.

Potential level of impact:

High

Legal Frameworks Are Lagging Behind Technological Reality

Summary

Australia:

The Privacy Act 1988 (Cth) turns on whether an individual is identified or “reasonably identifiable” from information. That threshold is becoming outdated in light of AI’s capabilities.

EU (GDPR):

The General Data Protection Regulation (GDPR) makes a distinction between personal data and anonymous data. However, under Recital 26, data is only considered anonymous if it cannot be re-identified by any means “reasonably likely to be used.” As AI makes re-identification more likely, datasets once thought compliant may now fall under full GDPR scope.

United States:

The U.S. lacks a comprehensive federal privacy law, but state-level laws (e.g., the CCPA/CPRA in California, the CPA in Colorado) follow similar principles. The CCPA defines personal information broadly, but also allows for the use of “deidentified” data under specific standards. AI may undercut these standards by making re-identification more feasible.

In all three jurisdictions, AI is outpacing the legal assumptions that underpin current privacy exemptions and safe harbours.

Who does this impact?

Data custodians, AI developers, legal teams, compliance officers, and organisations relying on de-identified datasets (for product training, analytics, or research).

Potential level of impact:

Medium to High

What This Means in Practice: Real-World Consequences

Summary

The risks aren’t just theoretical. Re-identification via AI can have tangible consequences:

  • Privacy breaches: Organisations may unknowingly hold or disclose data that qualifies as personal information.
  • Regulatory action: GDPR fines can reach €20 million or 4% of global annual turnover, whichever is higher; under the CCPA, statutory damages and class actions are increasing.
  • Reputational damage: Discovery that “anonymous” data is actually traceable back to individuals can erode public trust.
  • Ethical concerns: Misuse of AI to identify individuals (especially for targeting, profiling, or exclusion) raises broader societal issues.

Even where breaches are avoided, regulatory scrutiny and public perception are becoming more sensitive to the power of AI over data.

Who does this impact?

Private and public sector organisations, tech platforms, data brokers, research institutions, and marketers.

Potential level of impact:

High

What Needs to Change: Reform, Awareness, and Accountability

Summary

Several steps should now be on the agenda:

  • Update legal definitions: Jurisdictions must redefine what “de-identified” means in light of AI’s re-identification capabilities.
  • Revise thresholds of identifiability: Adopt a forward-looking interpretation of the means of identification “reasonably likely to be used”, per GDPR Recital 26.
  • Embed technical controls: Invest in modern privacy-enhancing technologies such as differential privacy, synthetic data, and federated learning (a minimal differential-privacy sketch follows this list).
  • Develop AI-specific governance: Traditional privacy programs are insufficient. Risk assessments should include AI model exposure to personal data.
  • Create shared standards and ethics frameworks: Particularly in the U.S., sectoral coordination can help fill federal gaps.
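As an illustration of the first of those technologies, below is a minimal, hypothetical sketch of the Laplace mechanism, one standard differential-privacy technique. The dataset, epsilon value, and function name are invented for this example; a production deployment would use a vetted library and a carefully chosen privacy budget.

```python
# Minimal sketch of the Laplace mechanism, one common differential-privacy
# technique: noise calibrated to the query's sensitivity masks any single
# individual's contribution. Parameter values here are illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values, epsilon: float, sensitivity: float = 1.0) -> float:
    """Answer a count query with Laplace noise added.

    A count changes by at most 1 when one person is added or removed,
    so its sensitivity is 1. Smaller epsilon = stronger privacy, more noise.
    """
    true_count = len(values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical usage: publish a noisy count instead of the exact one.
records = ["r1", "r2", "r3", "r4", "r5"]
print(dp_count(records, epsilon=0.5))
```

The design point worth noting is that the noise scale depends only on the query’s sensitivity and epsilon, not on the data itself, so the published answer reveals a mathematically bounded amount of information about any one individual.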

Who should lead this?

Regulators, legislators, privacy professionals, data scientists, and organisational leadership.

Potential level of impact:

Global

What Next?

AI has forever changed the landscape of data and privacy. What was once unidentifiable may now be readily identifiable. This requires more than technical fixes—it demands a rethinking of how we define, protect, and respect privacy in an AI-powered world.

Policymakers, regulators, and organisations alike must grapple with the uncomfortable truth: anonymity is no longer guaranteed.

Getting Help

If your organisation uses AI, relies on de-identified datasets, or shares information externally, now is the time to review your privacy governance, legal risk exposure, and data handling practices.

We help clients navigate AI and data governance risks across jurisdictions—combining legal insight, technical understanding, and practical frameworks for compliance and innovation.

Get in touch to explore how we can support your team in managing these evolving challenges.
