Did Figma Use User Data for AI Training?

AI Copyright Disputes and the Legal Rights of Training Data (2026)

AI copyright disputes and legal rights over AI training data are no longer abstract or theoretical issues.
In January 2026, a lawsuit involving Figma alleged that customer-created design data was used for AI training without explicit consent. At the same time, Hollywood actors launched coordinated campaigns against unauthorized AI training.

Together, these cases raise a fundamental question:

Who owns data used to train AI models, and where are the legal limits?

This article focuses not on speculation, but on what creators, designers, and users can realistically claim as legal rights in 2026.


The Core Issues in the Figma AI Lawsuit (January 2026)

In January 2026, a class-action lawsuit was filed against Figma, a widely used collaborative design platform.

The central allegation was straightforward:

👉 User-created design files were used for AI training without explicit, informed consent.

Why This Case Matters

This lawsuit is significant because it goes beyond standard terms-of-service disputes.

  • It directly challenges the legal nature of AI training itself

  • It potentially involves commercial client work and proprietary designs

  • It exposes the unresolved boundary between “product improvement” and “AI model training”

Key Points Raised by Users

  1. No separate consent for AI training purposes

  2. Risk of secondary infringement through style replication

  3. No compensation framework for data contributors


Why Hollywood Actors Are Pushing Back Against AI Training

In parallel, Hollywood creators took action. Actors and writers publicly opposed AI systems trained on their faces, voices, and performance styles without permission.

The movement was led by SAG-AFTRA, which framed AI training as a labor rights issue rather than a purely technical one.

Core Demands of the Hollywood Campaign

  • Explicit prior consent before AI training

  • Fair compensation for digital replication or reuse

  • Transparency regarding training scope, duration, and usage

Their message was clear:

“AI training is not just a technology issue. It is a labor and rights issue.”


When Can Creators Demand Compensation? (2026 Standards)

The most common question creators ask is simple:

If my data was used for AI training, do I have a right to compensation?

Conditions Where Compensation Is Actively Debated

  • Copyright-protected work: designs, images, videos, scripts, or other original creations

  • Identifiability: a recognizable person, brand, or distinctive style

  • Commercial AI use: training tied to monetized AI products

When substantial similarity between AI outputs and original works can be demonstrated, compensation claims become legally viable.


The “Prior Use Defense” Debate: “We Already Used It”

Many companies rely on what is often called the prior use defense.

The argument is simple:

“The data was publicly available and already used for training in the past, so no liability exists.”

Why This Argument Is Increasingly Challenged (2026)

  • Public availability ≠ unlimited AI training permission

  • AI training involves replication, transformation, and reuse

  • Legal harm often occurs when AI outputs enter the market, not when training happened

As a result, the prior use defense functions less as a shield and more as the starting point of legal disputes.


What Creators and Users Should Check Right Now

📌 Key Terms to Look for in Platform Policies

  • “AI training”

  • “Machine learning”

  • “Model improvement”

  • “Secondary use of user content”
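The checklist above can be turned into a quick self-audit. Below is a minimal sketch (not affiliated with any platform) that scans a terms-of-service text for the flagged phrases; the term list and the sample policy snippet are illustrative assumptions, not real policy language.

```python
# Hypothetical checklist: phrases that often signal AI-training clauses
# in platform terms of service (taken from the list above).
AI_TRAINING_TERMS = [
    "ai training",
    "machine learning",
    "model improvement",
    "secondary use of user content",
]

def flag_policy_clauses(policy_text: str) -> list[str]:
    """Return the checklist terms that appear in a policy document."""
    text = policy_text.lower()
    return [term for term in AI_TRAINING_TERMS if term in text]

# Example: a made-up policy snippet, for illustration only
sample_policy = (
    "We may use uploaded content for model improvement and "
    "machine learning research as described in Section 4."
)
print(flag_policy_clauses(sample_policy))
# → ['machine learning', 'model improvement']
```

A hit does not prove your data was used for training, only that the policy reserves the right; flagged clauses are the ones worth reading closely or asking the platform about.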

📌 Practical Steps You Can Take

  • Check whether AI training opt-out options exist

  • Separate client or confidential work from general uploads

  • Request data usage records if disputes arise


One Question You Should Not Ignore

The central issue of the AI era is this:

👉 Should rights disappear simply because technology makes something possible?

The Figma lawsuit and Hollywood campaigns suggest the answer is no.
AI training data rights are moving from a legal gray zone into a negotiation and compensation framework.


Next Article in This Series

How far does the law allow AI training on user data?

→ The next article will explain the legal boundaries of AI training data in 2026.