We’ve all heard about so-called AI “hallucinations”: instances in which AI programs like ChatGPT make up “facts” that are not true. For example, lawyers have gotten in trouble for citing fake AI-generated court cases, as we wrote about here and here. But could the creator of the AI platform itself also be held accountable?


As we’ve previously written, the rise of generative AI has led to a spate of copyright suits across the country. One major target of these suits has been OpenAI. Actor/comedian Sarah Silverman and author Paul Tremblay are among the plaintiffs who have brought suit in California, while authors George R.R. Martin, John Grisham, and others have filed in New York. The lawsuits allege that OpenAI used the plaintiffs’ creative content without permission to train OpenAI’s generative AI tool, in violation of the U.S. Copyright Act. OpenAI moved to dismiss the majority of claims in the Silverman and Tremblay cases on several bases: (1) the Copyright Act does not protect ideas, facts, or language; (2) the plaintiffs cannot show that outputs from OpenAI’s large language model (“LLM”) tool are substantially similar to the original content used to train the tool; and (3) any use of copyright-protected content by OpenAI’s tool constitutes fair use and is thus immune from liability under the Act. Yesterday, the plaintiffs hit back, noting that OpenAI has not moved to dismiss the “core claim” in the lawsuits: direct infringement.

We previously wrote about the widely publicized Southern District of New York case, Mata v. Avianca, Inc., in which lawyers submitted papers citing non-existent cases generated by the artificial intelligence program ChatGPT. The judge overseeing the matter held a lengthy, and tense, hearing on June 8, 2023, before a packed courtroom, and then issued a decision on June 22, 2023, sanctioning the lawyers involved. The case has grabbed attention by highlighting some of the real risks of using AI in the legal profession, but its primary lessons have nothing to do with AI.

The June 8 Hearing

On June 8, 2023, the judge in the Mata case held a hearing on the issue of whether to sanction two of the plaintiff’s lawyers, and the law firm at which they worked, for their conduct. The courtroom was filled to capacity, with many would-be observers directed to an overflow courtroom to watch a video feed of the hearing.

As set forth in our prior update, the plaintiff’s first lawyer submitted an affirmation on March 1, 2023, in opposition to the defendant’s motion to dismiss. The affirmation was written by the second lawyer and contained citations to non-existent cases. The defendant pointed out in a March 15 filing that it could not find these cases, and the Court issued an order on April 11 directing the plaintiff’s lawyer to submit an affidavit attaching the identified cases. The first lawyer did so on April 25 (attaching some of the “cases” and admitting he could not find others), but did not reveal that all of the identified cases had been obtained via ChatGPT. Only after the Court issued a further order on May 4, directing the lawyer to show cause as to why he should not be sanctioned for citing non-existent cases, did the first lawyer finally reveal the involvement of the second lawyer and the role of ChatGPT in the preparation of the submissions.

You may have recently seen press reports about lawyers who filed and submitted papers to the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist. The lawyers in that case used the generative artificial

The integration of artificial intelligence (AI) into the legal field has brought about numerous advancements, revolutionizing the way lawyers approach research and case preparation. However, recent incidents have raised concerns about the reliability and ethical implications of relying solely on AI models for legal research.

The New York Case – A Cautionary Tale

In a