In the past few months, the use of AI has rapidly spread from education to the legal profession. From a user perspective, the economic appeal of AI-generated works is undeniable. Lawyers and firms that effectively leverage emerging AI technologies will be able to provide their services at a reduced cost, with greater efficiency, and with higher odds of favorable outcomes in litigation.
Consider one of the most time-consuming tasks in litigation: distilling the most important, relevant information from a vast collection of documents produced during discovery. AI can significantly accelerate this process, doing work in seconds that might take even the most productive lawyers days or weeks to complete. There are, however, limits to what lawyers can, or should, rely on AI technology for.
In a recent headline-making case, a New York lawyer used ChatGPT for assistance with writing an affirmation in opposition to a motion to dismiss, to his and a colleague’s peril. The firm of attorneys Peter LoDuca and Steven Schwartz had been suing the Colombian airline Avianca on behalf of Roberto Mata, who claimed he was injured on a flight to John F. Kennedy International Airport in New York City. In an affirmation responding to Avianca’s motion to dismiss, plaintiff’s counsel cited more than half a dozen non-existent cases, including “Varghese v. China Southern Airlines,” “Martinez v. Delta Airlines,” and “Miller v. United Airlines.”
When neither counsel for Avianca nor the court could locate the cited cases, the court ordered LoDuca, who was counsel of record and had signed the offending filing, to show cause why he should not be sanctioned. LoDuca submitted an affidavit indicating that he personally had not performed any of the research or written the affirmation. Instead, Schwartz had done the research and writing; LoDuca then signed and filed the affirmation because Schwartz was not admitted to practice in the United States District Court for the Southern District of New York. Schwartz, in turn, filed an affidavit indicating that he had “consulted the artificial intelligence website Chat GPT in order to supplement the legal research” for the filing. Schwartz further attested that he had relied on ChatGPT and was unaware that its contents could be false.
The judge was not moved and sanctioned the attorneys and their firm for submitting false filings to the court. Although the judge “only” imposed $5,000 in monetary sanctions, the publicity surrounding the case has perhaps irreparably damaged the attorneys’ reputations with the bench and bar.
This cautionary tale is a reminder that AI tools, while useful, are just that: tools. AI is not a substitute for thoughtful and reliable writing and advocacy. AI generators are typically trained by analyzing vast databases and synthesizing information, and the information they produce may or may not be accurate or consistent. Ordinary users of an AI generator will likely have little to no idea what algorithms or source databases were used to train the system. This case is also a reminder that lawyers should always verify information and citations before submitting them to a court, as an adversary and the judge will almost certainly check the sources. Lawyers’ ethical obligations to clients, adversaries, and courts still require that lawyers, not AI, take responsibility for maintaining the integrity of the judicial system. Failing to heed these lessons puts lawyers at risk of severe penalties, including liability for legal malpractice, suspension, or even disbarment.
StraightforWARD Legal Advice:
Legal professionals with questions about AI and professional liabilities should contact Ross G. Currie at (215) 647-6604 or rcurrie@thewardlaw.com.