Readers and Robots

October 15, 2025

Last month, I came across an article in the Chronicle of Higher Education, “What a Landmark AI Settlement Means for Authors,” in which Dan Cohen outlines the recent Bartz v. Anthropic case, a lawsuit that is helping shape how artificial intelligence learns from the creative work of humans. It was one of those reads that stuck with me, sparking questions whenever the topic of AI pops up — which, on our campus, is often!

In short, novelist Andrea Bartz discovered her books had been used to train Anthropic’s AI chatbot, Claude. In her own New York Times piece, “I Sued Anthropic, and the Unthinkable Happened” (Sept. 29, 2025), Bartz described the horror of finding that technology had “reduced decades of intense work to text files gobbled up by algorithms in a fraction of a second.”

Bartz and other authors took on Anthropic in a class action lawsuit with an army of lawyers and ultimately settled for $1.5 billion, compensating the authors (or, at least, the publishers) of approximately half a million books. Initially, Anthropic faced potential damages of $150,000 per violation, an amount that could have exceeded $100 billion and put the company out of business. One could argue that such penalties would have been justified – papers have been retracted, tenure revoked, and academic integrity called into question for less.

From a library perspective, I care about fair use, access, and attribution — the principles that make knowledge sharing possible. When teaching research skills, I emphasize the importance of giving credit where it’s due. As AI continues its “learning,” shouldn’t it also be citing its sources? Ideally, AI would behave like a responsible student: acknowledging its references and leading readers back to the original texts rather than replacing them.

In several of my library orientation classes, I demonstrate how to evaluate AI-generated information. We pull up content created by AI and then try to trace and evaluate the sources — if they even exist! Some LLMs (Google Gemini) include citations, while others (ChatGPT) lack any real attribution. When prompted for sources, they hallucinate or credit sources such as Reddit in ways that (should) raise more questions than answers.

As AI becomes a regular tool in all forms of information gathering, how do we keep the human voice at the center? It’s a question that I’ve been considering as I watch students use AI to edit their papers and use it myself to polish messages, emails, and, in complete transparency, this very post!

What is the role of the library in this? Historically, (most) librarians have not resisted technology but guided its use with care. While many still yearn for the days of old card catalog drawers and magazine files, our work has always been about connecting people with trustworthy information, however it evolves. If AI represents a new kind of reader, perhaps librarians can help it become a better one — one that recognizes where its knowledge comes from and that scholarship is a conversation, one we are clearly going to be navigating together.

I’d also like to use this final space to continue making connections – Beaver County Library System LIBRARYCON ’25 is happening this Saturday on our campus! Whether you’re nearby or in the mood for a road trip, consider joining us in celebrating libraries with authors, illustrators, live entertainment, costume contests, therapy bunnies, local organizations, Cryptids, food trucks, local vendors, readers, (probably!) robots, and SO MUCH MORE!
