NextFin News - Gordon Rees Scully Mansukhani LLP, one of the largest law firms in the United States, has issued a formal apology to a federal bankruptcy judge after a court filing was found to contain multiple fabricated citations generated by artificial intelligence. The incident, which culminated in a public reprimand for a former senior counsel at the firm, highlights the escalating friction between the legal profession’s push for efficiency and the inherent risks of "hallucinations" in generative AI models. While the firm itself avoided formal sanctions, it has already reimbursed more than $55,000 in legal fees to opposing counsel to mitigate the fallout from the erroneous submissions.
The case, presided over by U.S. Bankruptcy Judge Christopher Hawkins in Alabama, centered on filings related to the Jackson Hospital bankruptcy. According to court records, the firm initially denied using AI in its research process before later admitting that a senior attorney had relied on unverified, machine-generated output. Judge Hawkins noted that while the firm took "reasonable steps" to address the systemic risks after the error was discovered, the individual attorney’s failure to verify the citations constituted a breach of professional and ethical duties. The firm has since implemented a mandatory cite-checking policy specifically designed to catch AI-generated fabrications before they reach a judge’s desk.
Stephen Wu of Silicon Valley Law Group, a legal technology analyst who has long advocated a cautious, "human-in-the-loop" approach to legal automation, suggests that this case is a bellwether for the industry. Wu, who often leans toward strict regulatory oversight of AI in high-stakes litigation, emphasizes that an attorney’s "nondelegable duty" to verify their own work remains the ultimate safeguard. He argues that the Gordon Rees incident proves that even Am Law 100 firms—those with the deepest pockets and most robust IT infrastructures—are not immune to the pitfalls of large language models. This perspective is increasingly common among legal ethics experts, though it contrasts with the more aggressive "AI-first" stance taken by some legal tech startups, which claim that human error is a greater risk than machine hallucination.
The financial implications of these errors are becoming quantifiable. Beyond the $55,000 reimbursement in the Jackson Hospital case, the reputational damage to a firm of Gordon Rees’s stature—one employing roughly 1,800 lawyers—is significant. Nor is this an isolated event: judges in Illinois and South Carolina have issued similar reprimands over the past year. The recurring nature of these incidents suggests that the legal industry’s current honor system for AI usage may be insufficient. Some judges have begun requiring AI disclosures for every filing, a move that has met resistance from practitioners who argue it adds unnecessary administrative burden to an already strained system.
The tension lies in the economics of modern law. Clients are increasingly unwilling to pay for hours of manual case law research that they believe can be performed in seconds by a computer. However, the Gordon Rees apology serves as a reminder that the cost of a "hallucination" often far exceeds the savings of the initial automation. As firms continue to integrate these tools, the focus is shifting from whether to use AI to how to build redundant verification layers that can survive the scrutiny of a skeptical bench. The era of "trust but verify" has arrived in the courtroom, with the emphasis heavily weighted on the latter.