AI Pulse

AI Copyright Law: The Battle for the Soul of Creativity

A 3,000-word analysis of the legal precedents defining 2025. From The New York Times vs. OpenAI to the fight over "Fair Use" in training data.

Legal Policy Desk
24 min read

The $100 Billion Question

If an AI scans 1,000 paintings by Van Gogh to learn how to paint, is it "learning" or "infringing"? In 2025, the answer to that question is being decided in courtrooms in New York, London, and Brussels. The stakes are nothing less than the future of the human creative economy.

As we move through the middle of the decade, the "Move Fast and Break Things" era of AI data scraping has hit a wall of lawyers. This is a comprehensive guide to the lawsuits, the rulings, and the new "Licensing Economy" that is reshaping how AI is built.


1. The "Fair Use" War: NYT vs. OpenAI

The most consequential lawsuit of 2024–2025 is The New York Times vs. OpenAI.

  • The NYT Argument: OpenAI "stole" millions of copyrighted articles to train its models. When a user asks ChatGPT for the news, the model can produce "near-verbatim" replicas of NYT reporting, bypassing the paywall and siphoning off the paper's traffic.
  • The OpenAI Defense: They argue that training is "Fair Use." Just as a human reporter reads other newspapers to understand the world, an AI reads to learn the "structure of language."
  • The 2025 Turning Point: Several judges have ruled that while "summarizing" a news article is likely legal, "regurgitating" it verbatim is a clear copyright violation. This has forced OpenAI to implement "Hallucination-as-a-Safety-Feature," intentionally preventing the model from quoting long passages of copyrighted text.

2. Who Owns the Output? The "Human Spark" Rule

In 2025, the US Copyright Office has maintained its controversial stance: works generated entirely by AI, without human authorship, cannot be copyrighted.

The Thaler Ruling

In a landmark case first decided in 2023 and upheld on appeal in 2025, developer Stephen Thaler tried to copyright an image generated by his AI system, the "Creativity Machine." The courts ruled that copyright protection is reserved for works of "Human Authorship."

  • The Result: If you generate a logo with Midjourney, you can use it, but you don't "own" it. Your competitor could steal that logo, and you would have no legal ground to sue them for copyright infringement.
  • The "Creative Control" Grey Area: In 2025, lawyers are arguing about how much human input is needed. If you write 100 prompts, edit the image in Photoshop, and spend 10 hours "guiding" the AI, does that count as a "Human Spark"? So far, the courts are leaning toward "No."

3. The Shift to the "Licensing Economy"

Sensing the legal winds changing, the major AI players have pivoted and started paying for data.

  • OpenAI x Axel Springer: A landmark deal signed in late 2023, in which OpenAI paid "tens of millions" to use content from Politico and Business Insider.
  • Adobe Firefly: Adobe’s major competitive advantage in 2025 is that its AI is "Clean": it was trained only on Adobe Stock images the company already held rights to, along with openly licensed and public-domain content. For enterprise companies (like Nike or Disney) that cannot risk infringement claims, Firefly is often the safest choice.
  • Reddit & Stack Overflow: Both platforms locked down their APIs in 2023 and began signing multimillion-dollar data-licensing deals in 2024, having realized that their user-generated data was the "Crude Oil" of the AI age.

4. The EU AI Act and "Copyright Transparency"

As discussed in our EU AI Act Guide, the European Union now requires AI providers to publish "sufficiently detailed summaries" of the content used for training.

  • The End of the Secret Model: As of August 2025, general-purpose AI models can no longer be placed on the European market as "Black Boxes" whose training data is unknown: providers must disclose a summary of the content used to train them. This opens the door for artists to scan these summaries and sue if they find their work was used without a license.

5. Derivative Works and "Digital Twins"

A new legal frontier in 2025: Voice and Likeness. With the rise of "Voice Cloning" (see our Deepfakes Guide), celebrities like Scarlett Johansson have taken legal action against AI companies over "Soundalike" voices.

  • The Tennessee ELVIS Act: Passed in 2024, this is the first US state law to explicitly protect an individual’s voice and likeness from AI replication. Similar federal legislation is expected by late 2025.

Conclusion: The Death of the "Public Web"?

The result of these legal battles is a "Closing" of the internet. In the 2000s and 2010s, information was free and open to scrape. In 2025, major websites are putting up "Do Not Scrape" signs, most visibly by blocking AI crawlers in their robots.txt files, as sketched below.
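These "Do Not Scrape" signs are usually plain robots.txt directives aimed at known AI crawlers such as GPTBot (OpenAI), CCBot (Common Crawl), or Google-Extended. The Python snippet below is a minimal sketch, using the standard library's robotparser and a hypothetical example.com, of how anyone can check whether a site still lets those crawlers in.

```python
from urllib import robotparser

# Minimal sketch: check whether a site's robots.txt still admits common AI crawlers.
# "example.com" and the article URL below are hypothetical placeholders.
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended", "anthropic-ai"]

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

for agent in AI_CRAWLERS:
    allowed = rp.can_fetch(agent, "https://example.com/news/some-article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Run against major publishers, a check like this increasingly comes back "blocked", which is exactly the closing of the web that these lawsuits have accelerated.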

We are moving toward a "Walled Garden" Internet, where high-quality data is locked behind billion-dollar licenses. For the average user, this means AI will become more expensive but more accurate. For the creators, it is a desperate attempt to claw back value in a world where the "Machine" can copy anything in a millisecond.

The battle for the soul of creativity is just beginning. By 2030, we will know if "Human Content" is a protected luxury or an obsolete relic.
