As we’ve previously written, the rise of generative AI has led to a spate of copyright suits across the country. One major target of these suits has been OpenAI. Comedian and actor Sarah Silverman and author Paul Tremblay are among the plaintiffs to bring suit in California, while authors George R.R. Martin, John Grisham, and others have filed in New York. The lawsuits allege that OpenAI used the plaintiffs’ creative content without permission to train OpenAI’s generative AI tool in violation of the U.S. Copyright Act. OpenAI moved to dismiss the majority of claims in the Silverman and Tremblay cases on several bases: (1) the Copyright Act does not protect ideas, facts, or language; (2) the plaintiffs cannot show that outputs from OpenAI’s large language model (“LLM”) tool are substantially similar to the original content used to train the tool; and (3) any use of copyright-protected content by OpenAI’s tool constitutes fair use, shielding it from liability under the Act. Yesterday, the plaintiffs hit back, noting that OpenAI hasn’t moved to dismiss the “core claim” in the lawsuits—direct infringement.

Continue Reading Famous Authors Clap Back at OpenAI’s Attempt to Dismiss Claims Regarding Unauthorized Use of Content for Training LLM Models

Last month, Congress was essentially “abducted” by the testimony of Air Force veteran David Grusch. He boldly asserted that the government is playing a galactic game of hide and seek with unidentified aerial phenomena (UAP) (or UFO) technology. Grusch further claimed that the U.S. government has a secretive crash retrieval program and suggested that the U.S. has obtained bodies of extraterrestrial origin.

While many have met his claims with skepticism, others argue that this could be just a glimpse into the earth-shattering revelation that we are not the sole inhabitants of the universe. Grusch’s bold revelations have ignited speculation that governments worldwide, alongside contractors, might be reverse-engineering UAP technologies for defense. This raises the question: Are we on the brink of a UAP technological arms race?

Could patent applications serve as a tangible avenue for identifying ET’s blueprints?

Continue Reading Unraveling the UAP Enigma: Are Patents the Gateway to Alien Tech?

The Federal Circuit partially rejected the long-held assumption that the trademark applicant bears the burden of proving that third-party marks were in use when determining the strength of the applicant’s mark. The panel, led by Judge Dyk, found that when determining the conceptual strength of trademarks, “absent proof of non-use [of registered marks], use …

The deal market reached historic levels in recent years, with record-setting merger and acquisition activity in 2021. Markets have since cooled, with capital becoming harder to find. But any company preparing to sell within the next five years should consider the more common IP issues that arise during the legal due diligence process.

IP Ownership

Several U.S. courts are addressing lawsuits brought by artists alleging that AI-generated art infringes on copyrights held by the artists for their artwork. In one of those cases, a California federal judge recently indicated that he would dismiss the bulk of the plaintiffs’ complaint, while giving them a chance to re-plead their claims. A written decision from the court is forthcoming, and that decision could be an important one for plaintiffs and defendants alike in current and future AI-related copyright cases.

In Andersen, et al. v. Stability AI Ltd., et al., Case No. 3:23-cv-00201-WHO (N.D. Cal.), three artists—Sarah Andersen, Kelly McKernan, and Karla Ortiz—brought suit against Stability AI Ltd., Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. Plaintiffs alleged that Stability AI “copied and scraped” billions of images to train an AI tool called “Stable Diffusion.” These images allegedly included those originally created by the plaintiff artists. Meanwhile, the other two defendants created programs allowing users to access Stability AI’s tool, which generates images in response to text prompts entered by users. Plaintiffs asserted that the defendants’ conduct resulted in, among other things, copyright infringement of the plaintiffs’ artwork. Plaintiffs also argued that the defendants engaged in vicarious copyright infringement by permitting their users to enter text prompts that resulted in infringing images.

Continue Reading California Court Casts Doubt on Copyright Claims Relating to AI Images

The new social media platform Threads was launched on July 5, 2023. Reports indicate that within the first day of launch, more than 30 million users had signed up. The app is designed for text-based conversations instead of photo updates. As users rush to join the platform, brands should also prioritize claiming accounts in order …

We previously wrote about the widely publicized Southern District of New York case involving lawyers who submitted papers citing non-existent cases generated by the artificial intelligence program ChatGPT, Mata v. Avianca, Inc. The judge overseeing the matter held a lengthy, and tense, hearing on June 8, 2023, before a packed courtroom, and then issued a decision on June 22, 2023, sanctioning the lawyers involved. The case has grabbed attention by highlighting some of the real risks of using AI in the legal profession, but the case’s primary lessons have nothing to do with AI.

The June 8 Hearing

On June 8, 2023, the judge in the Mata case held a hearing on the issue of whether to sanction two of plaintiff’s lawyers, and the law firm at which they worked, for their conduct. The courtroom was filled to capacity, with many would-be observers directed to an overflow courtroom to watch a video feed of the hearing. 

As set forth in our prior update, the plaintiff’s first lawyer submitted an affirmation on March 1, 2023, in opposition to the defendant’s motion to dismiss; the affirmation, written by the second lawyer, contained citations to non-existent cases. Thereafter, the defendant pointed out in a March 15 filing that it could not find these cases, and the Court issued an order on April 11 directing the plaintiff’s lawyer to submit an affidavit attaching the identified cases. The first lawyer did so on April 25 (attaching some of the “cases” and admitting he could not find others), but did not reveal that all of the identified cases were obtained via ChatGPT. Only after the Court issued a further order on May 4 directing the lawyer to show cause as to why he should not be sanctioned for citing non-existent cases did the first lawyer finally reveal the involvement of the second lawyer and the role of ChatGPT in the preparation of the submissions.

Continue Reading Update on the ChatGPT Case: Counsel Who Submitted Fake Cases Are Sanctioned