In the latest skirmish in the lawsuit brought by Sarah Silverman and other authors against ChatGPT maker OpenAI, OpenAI submitted a new decision from a California federal court in support of its attempt to dismiss the Silverman plaintiffs’ claims. According to OpenAI, that other court rejected theories and claims nearly identical to those Silverman asserts against OpenAI. If the court hearing Silverman’s claims agrees, copyright holders looking to sue AI companies in the future may find themselves facing long odds on certain claims.

The new California decision cited by OpenAI comes in the wake of a similar decision in a case involving an AI image generator. Like the court in that image-generator case, the court in the new decision cited by OpenAI dismissed most of the plaintiffs’ copyright and other claims, although it did so with leave to amend all but one state-law negligence claim. The court rejected as “nonsensical” the plaintiffs’ argument that large language models (or LLMs) “are themselves infringing derivative works,” holding that “[t]here is no way to understand the [LLMs] themselves as a recasting or adaptation of any of the plaintiffs’ books.” Similarly, the court rejected the notion that “every output of the [LLMs] is an infringing derivative work,” stating that “the complaint offers no allegation of the contents of any output, let alone of one that could be understood as recasting, transforming, or adapting the plaintiffs’ books. Without any plausible allegation of an infringing output, there can be no vicarious infringement.”

The court continued by holding that “[t]he plaintiffs are wrong to say that, because their books were duplicated in full as part of the [large language model] training process, they do not need to allege any similarity between [the AI] outputs and their books to maintain a claim based on derivative infringement.” Instead, the court held that “because the plaintiffs would ultimately need to prove” at trial that “the outputs . . . are similar enough to the plaintiffs’ books to be infringing derivative works,” the plaintiffs need to “adequately allege it at the pleading stage.” The court in the new decision also dismissed the plaintiffs’ state-law claims as preempted by the Copyright Act. As in the AI image case, however, the court in the new decision did not dismiss the plaintiffs’ claim “alleging that the unauthorized copying of the plaintiffs’ books for purposes of training [large language models] constitutes [direct] copyright infringement.”

As we’ve previously written, Silverman and her fellow plaintiffs have argued that the LLM underlying ChatGPT is an infringing derivative work, and that every ChatGPT output is necessarily infringing because ChatGPT was trained on copyrighted materials. If the court agrees with the courts in the AI image case and in the new decision cited by OpenAI, however, those theories will not fly. Instead, in order to claim that OpenAI is liable for vicarious infringement, Silverman and her compatriots would have to allege that ChatGPT’s outputs are substantially similar to their copyrighted works. Doing so might be a challenge: even if a user asks ChatGPT to produce text “in the style of Sarah Silverman,” ChatGPT’s response may not remotely resemble any passage from Silverman’s book.   

The court in the Silverman v. OpenAI case may or may not agree with these other courts. If it does, that would not necessarily doom all of Silverman’s claims. The fact that Silverman and her co-plaintiffs may have difficulty pleading vicarious infringement would not preclude them from pursuing claims of direct infringement based on the training of ChatGPT’s underlying large language model. Both the AI image decision and the new decision allowed similar direct infringement claims to stand. Moreover, like the plaintiffs in the other cases, Silverman and her fellow authors will likely have an opportunity to amend their other claims. 

Bottom line: courts are leaning in favor of the AI companies so far, but the battle is just beginning.