
Thanksgiving is the start of the holiday season, a beloved time of year when family and friends gather over delicious meals to share and create memories and to express love and gratitude. Thanksgiving also commences the annual Thanksgiving showdown, when silent (or not so silent) competitiveness emerges in the kitchen and beyond.

Activate: Feast

In a relatively scathing opinion finding the plaintiffs’ Complaint “defective in numerous respects,” a district court judge has thrown out most of the claims a group of artists asserted against AI platforms that allegedly used the artists’ copyrighted works without permission. The order in Andersen et al. v. Stability AI Ltd. provides an important preview of courts’ tolerance for AI-related copyright lawsuits moving forward—including a similar class action filed by actor/comedian Sarah Silverman and other authors.

As we previously wrote, the Andersen case relates to “Stable Diffusion,” an AI platform that generates images in response to user prompts. According to Plaintiffs, Stable Diffusion scraped the internet to copy and store billions of copyrighted images, without consent or licenses, to train the program. (For another good summary of the case and the claims, check out this post from The Fashion Law.) Continue Reading Some Stability For AI Defendants: Judge Dismisses All But One Claim in Andersen et al. v. Stability AI Ltd., et al.

As our colleagues reported in this Seyfarth Shaw Legal Update, President Biden signed a comprehensive Executive Order addressing AI regulation across a wide range of industries and issues. Intellectual property is a key focus. The Order calls on the U.S. Copyright Office and U.S. Patent and Trademark Office to provide guidance on IP risks and related regulation to address emerging issues related to AI. Continue Reading White House Directs Copyright Office and USPTO to Provide Guidance on AI-Related Issues

The latest briefing in Silverman v. OpenAI reads like that old R.E.M. song, “It’s the End of the World as We Know It.” OpenAI has responded to the Plaintiffs’ claims that OpenAI’s popular platform ChatGPT has infringed their copyrights with disaster-laden references to Michael Jordan and “the future of artificial intelligence.”

As we’ve previously written

We’ve all heard about so-called AI “hallucinations,” when AI programs like ChatGPT make up “facts” that are not true.  For example, lawyers have gotten in trouble for citing fake AI-generated court cases, as we wrote about here and here. But could the creator of the AI platform itself also be held accountable?

Conservative radio

As we’ve previously written, the rise of generative AI has led to a spate of copyright suits across the country. One major target of these suits has been OpenAI. Actor/comedian Sarah Silverman and author Paul Tremblay are among the plaintiffs to bring suit in California, while authors George R.R. Martin, John Grisham, and others have filed in New York. The lawsuits allege that OpenAI used the plaintiffs’ creative content without permission to train OpenAI’s generative AI tool in violation of the U.S. Copyright Act. OpenAI moved to dismiss the majority of claims in the Silverman and Tremblay cases on several bases: (1) the Copyright Act does not protect ideas, facts, or language; (2) the plaintiffs cannot show that outputs from OpenAI’s large language model (“LLM”) tool are substantially similar to the original content used to train the tool; and (3) any use of copyright-protected content by OpenAI’s tool constitutes fair use, and thus is immune from liability under the Act. Yesterday, Plaintiffs hit back, noting that OpenAI hasn’t moved to dismiss the “core claim” in the lawsuits—direct infringement. Continue Reading Famous Authors Clap Back at OpenAI’s Attempt to Dismiss Claims Regarding Unauthorized Use of Content for Training LLM Models

Several U.S. courts are addressing lawsuits brought by artists alleging that AI-generated art infringes on copyrights held by the artists for their artwork. In one of those cases, a California federal judge recently indicated that he would dismiss the bulk of the plaintiffs’ complaint, while giving them a chance to re-plead their claims. A written decision from the court is forthcoming, and that decision could be an important one for plaintiffs and defendants alike in current and future AI-related copyright cases.

In Andersen, et al. v. Stability AI Ltd., et al., Case No. 3:23-cv-00201-WHO (N.D. Cal.), three artists—Sarah Andersen, Kelly McKernan, and Karla Ortiz—brought suit against Stability AI Ltd., Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. Plaintiffs alleged that Stability AI “copied and scraped” billions of images to train an AI tool called “Stable Diffusion.” These images allegedly included those originally created by the plaintiff artists. Meanwhile, the other two defendants created programs allowing users to access Stability AI’s tool, which generates images in response to text prompts entered by users. Plaintiffs asserted that the defendants’ conduct resulted in, among other things, copyright infringement of the plaintiffs’ artwork. Plaintiffs also argued that the defendants engaged in vicarious copyright infringement by permitting their users to enter text prompts that resulted in infringing images. Continue Reading California Court Casts Doubt on Copyright Claims Relating to AI Images

We previously wrote about the widely publicized Southern District of New York case involving lawyers who submitted papers citing non-existent cases generated by the artificial intelligence program ChatGPT, Mata v. Avianca, Inc. The judge overseeing the matter held a lengthy, and tense, hearing on June 8, 2023, before a packed courtroom, and then issued a decision on June 22, 2023, sanctioning the lawyers involved. The case has grabbed attention by highlighting some of the real risks of using AI in the legal profession, but the case’s primary lessons have nothing to do with AI.

The June 8 Hearing

On June 8, 2023, the judge in the Mata case held a hearing on the issue of whether to sanction two of plaintiff’s lawyers, and the law firm at which they worked, for their conduct. The courtroom was filled to capacity, with many would-be observers directed to an overflow courtroom to watch a video feed of the hearing.

As set forth in our prior update, the plaintiff’s first lawyer submitted an affirmation on March 1, 2023, in opposition to the defendant’s motion to dismiss. The affirmation was written by the second lawyer and contained citations to non-existent cases. Thereafter, the defendant pointed out in a March 15 filing that it could not find these cases, and the Court issued an order on April 11 directing the plaintiff’s lawyer to submit an affidavit attaching the identified cases. The first lawyer did so on April 25 (attaching some of the “cases,” and admitting he could not find others), but did not reveal that all of the identified cases were obtained via ChatGPT. Only after the Court issued a further order on May 4 directing the lawyer to show cause as to why he should not be sanctioned for citing non-existent cases did the first lawyer finally reveal the involvement of the second lawyer and the role of ChatGPT in the preparation of the submissions. Continue Reading Update on the ChatGPT Case: Counsel Who Submitted Fake Cases Are Sanctioned

You may have recently seen press reports about lawyers who filed and submitted papers to the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist. The lawyers in that case used the generative artificial