Anthropic quietly settles lawsuit over AI trained on pirated books
Training AI on copyrighted books was deemed fair use by the court

Aug 27, 2025, 12:20 pm

What's the story

Anthropic, an artificial intelligence (AI) company, has settled a class action lawsuit with a group of US authors. The authors accused Anthropic of copyright infringement by using their works, including pirated books, to train its Claude AI models. The settlement was announced in a filing with the Ninth Circuit Court of Appeals on Tuesday.

Legal proceedings

Settlement specifics undisclosed

The specifics of the settlement remain undisclosed, and Anthropic has yet to comment on the matter. The case, Bartz v. Anthropic, revolves around the use of books as training data for large language models. A previous ruling had deemed Anthropic's use of these books fair use. However, because many of the books were pirated copies, the company still faced potentially hefty financial penalties in this case.

Company stance

Anthropic welcomed earlier ruling

Despite the legal challenges, Anthropic had welcomed the earlier ruling, calling it a win for generative AI models. The company had said, "We believe it's clear that we acquired books for one purpose only—building large language models—and the court clearly held that use was fair." This statement came after a federal judge ruled in June that training its chatbot Claude on millions of copyrighted books did not violate copyright law.

Accusations

Authors claimed 'large-scale theft'

The authors, Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, had claimed in their lawsuit last year that Anthropic's practices constituted "large-scale theft." They alleged that the San Francisco-based company was "seeking to profit from strip-mining the human expression and ingenuity behind each one of those works."