Gadgets, Gigabytes, & Goodwill Blog editors Lauren Leipold and Owen Wolfe co-authored an article, “Rules for use of AI-generated evidence in flux,” in Reuters and Reuters’ Westlaw Today. The Seyfarth attorneys discussed how generative AI prompts and outputs are discoverable in litigation, even those that were part of a pre-suit investigation, and how the parameters around the use of such AI-generated evidence remain in flux.

Sarah Silverman and her fellow author plaintiffs are fighting a judge’s recent order requiring them to disclose the prompts and outputs they used in preparation for filing their class action lawsuit against ChatGPT owner OpenAI. The judge is giving OpenAI until July 24 to respond to the plaintiffs’ argument that the material should be shielded.

The class of plaintiff authors seeking to hold OpenAI liable for copyright infringement has faced yet another setback. The U.S. District Court for the Northern District of California has knocked out the majority of their claims, refusing to accept the blanket allegation that “every output of the OpenAI Language Model is an infringing derivative work.” However, the court has allowed the plaintiffs another chance to cure many of the deficiencies in their pleadings, so the battle is not yet over.

As we’ve previously reported, named plaintiffs including Paul Tremblay, Sarah Silverman, and Michael Chabon have filed class action lawsuits against several companies associated with popular Large Language Model tools like ChatGPT. The lawsuits claim that because the defendants copied their original works of authorship to use as training material for the LLMs, the AI companies are liable under the federal Copyright Act and various state tort laws. For a quick recap of the theories they are asserting, check out our recent AI Update.
Continue Reading The Latest Chapter in Authors’ Copyright Suit Against OpenAI: Original Pleadings Insufficient

In the latest skirmish pitting Sarah Silverman and other authors against ChatGPT-maker OpenAI, OpenAI submitted a new decision from a California federal court in support of its attempt to dismiss the Silverman plaintiffs’ claims. According to OpenAI, that other court rejected theories and claims that are nearly identical to Silverman’s claims against OpenAI. If the court hearing Silverman’s claims agrees, copyright holders looking to sue AI companies in the future may find themselves facing long odds on certain claims.

The new California decision cited by OpenAI comes in the wake of a similar decision in a case involving an AI image generator. Like the court in that image-generator case, the court in the new decision dismissed most of the plaintiffs’ copyright and other claims, although it did so with leave to amend all but one state-law negligence claim. The court rejected as “nonsensical” the plaintiffs’ argument that large language models (or LLMs) “are themselves infringing derivative works,” holding that “[t]here is no way to understand the [LLMs] themselves as a recasting or adaptation of any of the plaintiffs’ books.” Similarly, the court rejected the notion that “every output of the [LLMs] is an infringing derivative work,” stating that “the complaint offers no allegation of the contents of any output, let alone of one that could be understood as recasting, transforming, or adapting the plaintiffs’ books. Without any plausible allegation of an infringing output, there can be no vicarious infringement.”
Continue Reading “The Plaintiffs Are Wrong”: OpenAI Submits New Authority in Attempt to Knock Out Sarah Silverman’s Claims

The latest briefing in Silverman v. OpenAI reads like that old R.E.M. song, “It’s the End of the World as We Know It.” OpenAI has responded to the Plaintiffs’ claims that OpenAI’s popular platform ChatGPT has infringed their copyrights with disaster-laden references to Michael Jordan and “the future of artificial intelligence.”

As we’ve previously written …

We previously wrote about the widely publicized Southern District of New York case, Mata v. Avianca, Inc., involving lawyers who submitted papers citing non-existent cases generated by the artificial intelligence program ChatGPT. The judge overseeing the matter held a lengthy and tense hearing on June 8, 2023, before a packed courtroom, and then issued a decision on June 22, 2023 sanctioning the lawyers involved. The case has grabbed attention by highlighting some of the real risks of using AI in the legal profession, but the case’s primary lessons have nothing to do with AI.

The June 8 Hearing

On June 8, 2023, the judge in the Mata case held a hearing on whether to sanction two of the plaintiff’s lawyers, and the law firm at which they worked, for their conduct. The courtroom was filled to capacity, with many would-be observers directed to an overflow courtroom to watch a video feed of the hearing.

As set forth in our prior update, the plaintiff’s first lawyer submitted an affirmation, written by the second lawyer, in opposition to the defendant’s motion to dismiss on March 1, 2023; the affirmation contained citations to non-existent cases. Thereafter, in a March 15 filing, the defendant pointed out that it could not find these cases, and the Court issued an order on April 11 directing the plaintiff’s lawyer to submit an affidavit attaching the identified cases. The first lawyer did so on April 25 (attaching some of the “cases” and admitting he could not find others), but did not reveal that all of the identified cases were obtained via ChatGPT. Only after the Court issued a further order on May 4, directing the lawyer to show cause why he should not be sanctioned for citing non-existent cases, did the first lawyer finally reveal the involvement of the second lawyer and the role of ChatGPT in the preparation of the submissions.
Continue Reading Update on the ChatGPT Case: Counsel Who Submitted Fake Cases Are Sanctioned

You may have recently seen press reports about lawyers who submitted papers to the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist. The lawyers in that case used the generative artificial intelligence program ChatGPT.

The integration of artificial intelligence (AI) into the legal field has brought about numerous advancements, revolutionizing the way lawyers approach research and case preparation. However, recent incidents have raised concerns regarding the reliability and ethical implications of relying solely on AI models for legal research.

The New York Case – A Cautionary Tale

In a …

If there is anything movies like The Terminator have shown us, it’s that AI systems might one day become self-aware and wreak havoc. But until Skynet becomes self-aware, let’s enjoy the AI toy that is quickly becoming a part of our daily lives. Some Samsung employees recently discovered that playing with AI models like ChatGPT may have unexpected consequences. These employees used ChatGPT for work and shared sensitive data, such as source code and meeting minutes. The incident was labeled a “data leak” out of fear that ChatGPT would disclose the data to the public once it was trained on that data. In response, many companies took action, banning or restricting access to ChatGPT or creating data disclosure policies for its use.

First, let’s talk about ChatGPT’s training habits. Although ChatGPT does not currently train on user data (its last training session was in 2021), its data policy for non-API access says OpenAI may use submitted data to improve its AI models. Users are warned against sharing sensitive information, as specific prompts cannot be deleted. The data policy for API access is different: customer data is not used for training or tuning the model, but it is kept for up to 30 days for abuse and misuse monitoring. API access refers to access via ChatGPT’s API, which developers can integrate into their applications, websites, or services; non-API access refers to accessing ChatGPT via the website. For simplicity, let’s focus on non-API access. We’ll also assume ChatGPT has not been trained on user data yet, but, like Sarah Connor warning us about Judgment Day, we know it’s coming. Our analysis will focus mainly on ChatGPT; as noted below, it may change based on a given chatbot’s usage policy.
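To make the API/non-API distinction concrete, here is a minimal sketch of what API access looks like in practice, using OpenAI’s official Python client. The model name and prompt are illustrative assumptions, not details from the incident above, and the comments simply restate the data policy as summarized in this post.

```python
# A minimal sketch of "API access" (as opposed to using the ChatGPT website),
# assuming the official `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable. Per the policy summarized above, data sent through
# the API is not used for training, but may be retained for up to 30 days
# for abuse and misuse monitoring.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        # Whatever goes in a prompt leaves your control, so treat it like
        # any other outbound disclosure: no source code, no meeting minutes.
        {"role": "user", "content": "Summarize our public press release."},
    ],
)

print(response.choices[0].message.content)
```

Either way, the practical point is the same: a company policy governs what employees may place in the prompt, whether they reach the model through the API or through the website.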

This situation brings to mind the classic philosophical question: If a tree falls in a forest and no one’s around to hear it, does it make a sound? In our AI-driven world, we might rephrase it as: If we share our secrets with an AI language model like ChatGPT, but the information remains unused, does it count as trade secret disclosure or public disclosure of an invention?
Continue Reading Spilling Secrets to AI: Does Chatting with ChatGPT Unleash Trade Secret or Invention Disclosure Dilemmas?