The integration of artificial intelligence (AI) into the legal field has brought numerous advancements, revolutionizing how lawyers approach research and case preparation. However, recent incidents have raised concerns about the reliability and ethical implications of relying on AI models for legal research without independent verification.
The New York Case – A Cautionary Tale
In a recent incident, a lawyer in New York relied on ChatGPT for legal research. The model supplied case citations, which the lawyer incorporated into a brief. Opposing counsel, however, could not locate the cited cases and requested verification. When the lawyer pressed ChatGPT to confirm the citations, the cases ultimately proved to be fabricated. The Southern District of New York responded by issuing an order to show cause, potentially leading to sanctions against the lawyer for citing nonexistent cases.
Judge Starr’s Order and Its Implications
In response to the New York case and concerns about AI-generated legal content, Judge Starr of the Northern District of Texas issued a standing order requiring lawyers to certify either that no portion of any filing was drafted by generative AI, or that any language drafted by AI — including quotations, citations, paraphrased assertions, and legal analysis — was checked for accuracy by a human being before submission to the court.
Judge Starr also expressed skepticism about the use of AI in legal briefing, stating that “legal briefing is not one of [AI’s] uses in the law.” The judge argued that AI models can produce inaccurate or irrelevant information, often referred to as “hallucinations,” and lack the allegiance to a client that an attorney possesses.
Balancing the Concerns and Advantages of AI in Legal Research
It is worth taking a more nuanced view of AI’s capabilities and potential benefits in the legal field. While the New York incident highlights the risks of relying on AI-generated content without verification, AI tools can still play a valuable role in legal research and brief writing. Models such as ChatGPT can help attorneys sift through vast amounts of legal information, provide initial insights, and assist in crafting more persuasive arguments.
Moreover, AI models have the potential to enhance the clarity and coherence of legal briefs. They can suggest alternative wording, identify potential weaknesses in arguments, and aid in organizing complex legal analyses. By leveraging the strengths of AI, lawyers can augment their research capabilities and improve the overall quality of their work.
Where Do We Go From Here?
Judge Starr’s order emphasizes the need for human supervision and verification of AI-generated content before submission to the court. This serves as an important reminder that attorneys hold ultimate responsibility for the accuracy and integrity of their legal filings.
Additionally, ongoing efforts to improve the reliability and transparency of AI models are necessary. Developers should work toward minimizing “hallucinations” and ensuring that models are well trained and regularly updated with accurate legal information. Transparent documentation of AI-generated outputs and their potential biases can also help lawyers critically evaluate the information they receive.