Anthropic settles lawsuit over using authors' books to train Claude
Anthropic, the company behind the Claude AI model, just settled a big lawsuit with US authors who said their books were used—without permission—to teach the AI.
This deal ends a legal fight that could have exposed Anthropic to massive financial penalties, and it puts a spotlight on how tech companies use creative work to build smarter bots.
Back in June 2025, a judge ruled that Anthropic's use of some copyrighted books for training was "fair use," but that copying many pirated books into its library definitely wasn't okay. That move put the company at risk of massive penalties.
The case was led by authors like Andrea Bartz, but now the trial is off thanks to this settlement.
What's fair game when training AI with copyrighted stuff?
This is a big moment for anyone following how AI learns from human creativity.
The confidential settlement highlights ongoing debates about what's fair game when training AI with copyrighted stuff—especially if it's pirated.
It's another reminder that as AI gets smarter, figuring out what's fair to creators is only getting trickier.