Humane Ingenuity
September 5, 2025

Will a Landmark AI Settlement Make Authors Feel Whole?

The remuneration from Bartz v. Anthropic may not provide what writers really want: respect, recognition, and readers

by Dan Cohen

A black and white photo of library shelves in partial darkness
Hannah Travis, Kennedy Library, CC BY-NC 2.0

A landmark settlement was just unveiled in a seminal lawsuit, Bartz v. Anthropic, that may shift the relationship between human creators and the artificial intelligence that has learned from them. Anthropic, the company that makes the AI chatbot Claude, was sued a year ago by three authors whose books Anthropic used to train its chatbot. Although the judge hearing the case, William Alsup, found that using books to train AI was fair use due to its transformative nature—a critical ruling—he also thought that Anthropic may have violated copyright law by downloading millions of those books from the darker corners of the web, and let that portion of the complaint proceed toward a trial. A few weeks ago, Judge Alsup further decided that the three writers named in the lawsuit could represent the authors of millions of books—that is, they were a class, and the case instantly became a much more momentous class action lawsuit.

Given the high statutory damages associated with each copyright violation and the large number of volumes involved—at up to $150,000 per violation, potentially an astronomical sum north of $100 billion that could put Anthropic out of business—a settlement suddenly became much more likely. We now know that Anthropic has agreed to pay at least $1.5 billion in compensation to the writers of approximately 500,000 books. (The number of qualifying books may grow, in which case the pool will grow accordingly; per-book compensation will likely be reduced by attorneys' fees, which could claim up to 25% of the total.)

This is perhaps the first of many sizable payments from AI vendors to authors, performers, and artists. At first blush, the conclusion of Bartz v. Anthropic in the little guy’s favor seems like a clear loss for the big guy: an industry that has mined the words, images, and sounds of others to create a technology that can extrude words, images, and sounds about anything, on demand, has finally been brought to heel.

* * *

The excitement of such a victory among those who write for a living is understandable. The AI industry is perhaps the most skilled player in Silicon Valley’s game of moving fast, breaking things, and not asking for forgiveness. Anthropic, OpenAI, Meta, and other large AI purveyors have been hoovering up any content they can find to train their models, much of it on the web but some of it in harder-to-reach places, like books. Over 40 lawsuits against these companies have swiftly followed. The scale of the Bartz settlement, with the judge in the case affirming that the authors of millions of books have a common cause, seems like it could usher in a rebalancing of power and money between AI companies and the creative individuals who write novels and songs, or who act, sing, or paint.

Sometimes, however, the financial settlement of a lawsuit, even one as eye-popping as this one, is as good for the defendants as it is for the plaintiffs. That very well may be the case here. The AI industry has been on a spending spree that has rivaled the boom times of previous transformational technologies, from the railroads to the internet, with trillions being spent on the latest chips, massive data centers, and gigantic pay packages that dwarf the compensation of even the most successful authors. Put in the context of AI capital expenditures for 2025 alone, forecast to top $300 billion, what’s a few billion here or there for the raw materials to feed the machines, divvied up among millions of creators? Money continues to surge into AI to fund its expansion; just a few days ago, Anthropic received $13 billion in fresh funding that valued the company at a staggering $183 billion.

With numbers like these, it’s easy to see that settling copyright lawsuits, especially those over acquisition indiscretions rather than over fair use itself, may quickly become a marginal additional cost of doing business in the world of artificial intelligence. And, as an important corollary, the size of that enormous ante will keep others from challenging the big AI companies at the playing table. Small AI startups will not have the cash to buy in, and may fear losing a fair use case, thus entrenching the handful of existing incumbents. More idiosyncratic and noncommercial uses for AI may also be curtailed. Researchers in universities who want to explore the frontiers of AI, for instance, won’t be able to acquire a large number of recent creative works, and may be stuck using works that are in the public domain—that is, old. We may be left with an unintended consequence of what seems like an unalloyed win for authors in Bartz: that the very companies that have been most inconsiderate about using materials crafted by writers will be strengthened by a regime in which a billion-dollar check is the entry fee for developing the best AI.

* * *

For authors, what might this post-Bartz future look like, dominated by transactional arrangements with the largest AI vendors? I have gotten a preview. Recently the publisher of one of my books asked me and other authors on their list if we would be willing to license our works for AI training in exchange for compensation. According to the publisher, a university press that is refreshingly flexible in publishing arrangements and honest about the state of the market, the anticipated payment will be less than $100. Another one of my book publishers has estimated $25 in royalty income per title for AI training this year, with possibly a bit more in subsequent years. A third book, and a number of articles I have written, are in one of the sketchy online libraries that Anthropic downloaded, likely making me a member of the Bartz class action lawsuit and thus in line for an additional one-time payment. (You can check whether you are in the Bartz class using Alex Reisner’s search tool at The Atlantic. Ithaka S+R maintains a Generative AI Licensing Agreement Tracker if you are curious about additional potential compensation.)

Needless to say, I will be unable to retire on my AI-training or copyright-violation rewards, which raises a difficult, broader question: in this unsettling age of accelerating artificial intelligence, will mere cash payments, even a significant one like the check from Bartz, truly make authors whole? Many authors would undoubtedly love to receive extra pay for their books, however modest. Being a writer is hard work, and the thousands of hours it takes to write a book mean that you earn less than the minimum wage on any release that falls short of a bestseller. Every dollar earned from writing seems worth savoring.

But I suspect that most authors, including those named in Bartz v. Anthropic, are after something more than remuneration: recognition for their ideas and narratives, respect for their human endeavor and unique ways of expressing themselves, and more readers who value them as thinkers and communicators. The issue with AI is not that it uses the text of writers to train its models, which are, after all, just a bunch of inscrutable numbers and math that only AI researchers can appreciate. It is that these models are then used to generate new text that competes with the work of authors, whether they write novels, news stories, or blogs, and, worse, does so without crediting them. Human writers are used to other writers digesting and incorporating their work, and even adopting their ideas, but in return, they expect attribution. This is a long tradition that AI breaks. The Bartz settlement will allow this experiment in competing artificial authorship to continue apace, and perhaps even safeguard its future.

* * *

What might full compensation for authors look like, beyond the occasional token payment? Credit, often embodied in citations, is a good place to start. Wouldn’t it be nice if AI properly cited books and other cultural artifacts? With retrieval-augmented generation (RAG), AI chatbots can now include links to the websites they have scanned and learned from. Yet there is currently little functionality in the leading AI bots approaching the formal networks of associations familiar to any user of a library: the footnotes, endnotes, and bibliographies that ground our knowledge in a firm array of human-authored and -vetted work. Given that many books may be merged to produce a summary output on a topic, it may be difficult to open the black box of an LLM to extract citations or to highlight specific volumes or articles, but this is an area where AI companies could devote more targeted research and develop new user experiences that recognize more rigorously the origins of our understanding.
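The citation-grounding idea behind RAG can be sketched in miniature. Everything below is an illustrative invention, not any vendor’s API: real systems retrieve with vector embeddings and have an LLM draft the answer, but the principle of returning sources alongside generated text is the same.

```python
# A toy sketch of retrieval-augmented generation (RAG) with citations.
# The corpus, keyword-overlap scoring, and answer template are all
# hypothetical stand-ins for embedding search and an LLM-drafted summary.

CORPUS = [
    {"title": "Democracy in America", "author": "Tocqueville",
     "text": "township government and civic associations in america"},
    {"title": "The Victorian Internet", "author": "Standage",
     "text": "the telegraph transformed communication and commerce"},
]

def retrieve(query, corpus, k=1):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(terms & set(d["text"].split())),
                    reverse=True)
    return scored[:k]

def answer_with_citations(query, corpus):
    """Draft a placeholder answer and append footnote-style citations."""
    sources = retrieve(query, corpus)
    body = f"Summary of what the retrieved sources say about: {query}."
    notes = [f"[{i + 1}] {d['author']}, {d['title']}"
             for i, d in enumerate(sources)]
    return body + "\n" + "\n".join(notes)

print(answer_with_citations("civic associations in america", CORPUS))
```

The footnote list at the end is the point: because retrieval happens outside the model, the sources are known at answer time and can be surfaced like a bibliography, rather than excavated from the model’s weights after the fact.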

Rather than acting as a substitution for books, AI could do more to encourage the full reading of texts written by humans. As Google Books did two decades ago—also by digitizing books without permission, it should be noted—AI has the potential to vastly improve the search interfaces that guide us through the grand library of human writing. As teachers, students, and AI companies themselves have begun to realize, summarization and other AI shortcuts lead to poorer learning outcomes and worse cognitive fitness overall. More handoffs from AI discovery to long-form writing would be a welcome improvement. One possible approach, Anthropic’s relatively new Model Context Protocol (covered in this newsletter earlier), allows Claude to scan collections for volumes and documents of relevance to a query, but keeps those collections separate from the AI itself. Claude and other AI chatbots could do more through decentralized mechanisms such as this to point the reader to original sources that should be engaged with in a deeper, more focused way.

It is worth pressing on these ideas since the apparent win for authors in Bartz is shakier when we remember that Judge Alsup ruled that it was fair use to train a model on the physical books that Anthropic bought and scanned. The downloads from the dark web, and their storage in a library for future training, were what struck him as potentially illegal. It is not inconceivable that the AI companies with the means to do so will simply acquire books legally and train future models on them. Google, through its Books project, has already digitized millions of books from the shelves of library partners, and has no need to drop by—or to buy outright—the local bookstore. If other AI companies follow suit, authors may not be so happy about the compensation they received during what, in retrospect, was a relatively brief transactional phase.


Read more:

  • AI and Libraries, Archives, and Museums, Loosely Coupled

    A new framework provides a way for cultural heritage institutions to take advantage of the technology with fewer misgivings, and to serve students, scholars, and the public better

  • Books are Big AI's Achilles' Heel

    AI companies may have the money and the data centers, but they are badly in need of what humble libraries have in abundance
