As we’ve previously written, the rise of generative AI has led to a spate of copyright suits across the country. One major target of these suits has been OpenAI. Actor and comedian Sarah Silverman and author Paul Tremblay are among the plaintiffs to bring suit in California, while authors George R.R. Martin, John Grisham, and others have filed in New York. The lawsuits allege that OpenAI used the plaintiffs’ creative content without permission to train its generative AI tool in violation of the U.S. Copyright Act. OpenAI moved to dismiss the majority of claims in the Silverman and Tremblay cases on several bases: (1) the Copyright Act does not protect ideas, facts, or language; (2) the plaintiffs cannot show that outputs from OpenAI’s large language model (“LLM”) tool are substantially similar to the original content used to train the tool; and (3) any use of copyright-protected content by OpenAI’s tool constitutes fair use and is therefore shielded from liability under the Act. Yesterday, the plaintiffs hit back, noting that OpenAI has not moved to dismiss the “core claim” in the lawsuits: direct infringement.

Continue Reading Famous Authors Clap Back at OpenAI’s Attempt to Dismiss Claims Regarding Unauthorized Use of Content for Training LLM Models

Pat Muffo, Partner in Seyfarth’s Intellectual Property practice, presented the “AI for Patent Attorneys: Opportunities and Challenges” session as part of myLawCLE’s “AI for Lawyers: A practical guide to generative AI, copyright, privacy, and more” webinar series. The webinar session is available on demand, beginning Thursday, August 31, 2023, through the myLawCLE website.


Several U.S. courts are addressing lawsuits brought by artists alleging that AI-generated art infringes the artists’ copyrights in their artwork. In one of those cases, a California federal judge recently indicated that he would dismiss the bulk of the plaintiffs’ complaint while giving them a chance to re-plead their claims. A written decision from the court is forthcoming, and it could be an important one for plaintiffs and defendants alike in current and future AI-related copyright cases.

In Andersen, et al. v. Stability AI Ltd., et al., Case No. 3:23-cv-00201-WHO (N.D. Cal.), three artists—Sarah Andersen, Kelly McKernan, and Karla Ortiz—brought suit against Stability AI Ltd., Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. Plaintiffs alleged that Stability AI “copied and scraped” billions of images to train an AI tool called “Stable Diffusion.” These images allegedly included those originally created by the plaintiff artists. Meanwhile, the other two defendants created programs allowing users to access Stability AI’s tool, which generates images in response to text prompts entered by users. Plaintiffs asserted that the defendants’ conduct resulted in, among other things, copyright infringement of the plaintiffs’ artwork. Plaintiffs also argued that the defendants engaged in vicarious copyright infringement by permitting their users to enter text prompts that resulted in infringing images.

Continue Reading California Court Casts Doubt on Copyright Claims Relating to AI Images

We previously wrote about Mata v. Avianca, Inc., the widely publicized Southern District of New York case involving lawyers who submitted papers citing non-existent cases generated by the artificial intelligence program ChatGPT. The judge overseeing the matter held a lengthy, and tense, hearing on June 8, 2023, before a packed courtroom, and then issued a decision on June 22, 2023, sanctioning the lawyers involved. The case has grabbed attention by highlighting some of the real risks of using AI in the legal profession, but its primary lessons have nothing to do with AI.

The June 8 Hearing

On June 8, 2023, the judge in the Mata case held a hearing on whether to sanction two of the plaintiff’s lawyers, and the law firm at which they worked, for their conduct. The courtroom was filled to capacity, with many would-be observers directed to an overflow courtroom to watch a video feed of the hearing.

As set forth in our prior update, the plaintiff’s first lawyer submitted an affirmation on March 1, 2023, in opposition to the defendant’s motion to dismiss. That affirmation, which was written by the second lawyer, contained citations to non-existent cases. In a filing on March 15, the defendant pointed out that it could not find those cases, and the Court issued an order on April 11 directing the plaintiff’s lawyer to submit an affidavit attaching the identified cases. The first lawyer did so on April 25, attaching some of the “cases” and admitting he could not find others, but did not reveal that all of the identified cases had been obtained via ChatGPT. Only after the Court issued a further order on May 4, directing the lawyer to show cause why he should not be sanctioned for citing non-existent cases, did the first lawyer finally reveal the involvement of the second lawyer and the role of ChatGPT in the preparation of the submissions.

Continue Reading Update on the ChatGPT Case: Counsel Who Submitted Fake Cases Are Sanctioned

You may have recently seen press reports about lawyers who submitted papers to the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist. The lawyers in that case used the generative artificial intelligence program ChatGPT to prepare those submissions.

The integration of artificial intelligence (AI) into the legal field has brought about numerous advancements, revolutionizing the way lawyers approach research and case preparation. However, recent incidents have raised concerns regarding the reliability and ethical implications of relying solely on AI models for legal research.

The New York Case – A Cautionary Tale

In a

Last week, four federal agencies issued a joint statement expressing their apprehension regarding the use of AI for discriminatory or anticompetitive purposes and outlining their plans for regulation. This comes on the heels of Elon Musk requesting a “pause” in AI development and meeting with Senator Chuck Schumer to guide the statutory framework

Don’t worry, machines haven’t completely replaced humans as artists—at least, not yet. But the U.S. Copyright Office is considering the possibility.

The Copyright Office recently declared that it will not grant copyright protection to AI-generated works, upholding its longstanding rule that non-human authors cannot own copyright. At the same time, the Office is well aware that

The Supreme Court yesterday declined to hear a case brought by a computer scientist whose “invention” was in fact created by artificial intelligence. Stephen Thaler was appealing a Federal Circuit decision that interpreted the Patent Act to require a human “inventor” for purposes of obtaining a patent. The invention at issue was conceived of by an artificial intelligence system rather than a human.