An Indiana attorney is to be fined $15,000 for submitting legal briefs containing fictitious case citations generated by ChatGPT. The fine, recommended by US Magistrate Judge Mark J. Dinsmore, is the largest yet imposed for this kind of AI-related legal misconduct.
Lawyer Sanctioned for Fabricated Case Citations Generated by AI
Rafael Ramirez, representing HoosierVac LLC in a case involving a retirement fund, included made-up case citations in three legal briefs filed in October 2024. When the court could not locate these cases, Judge Dinsmore ordered Ramirez to explain his actions.
“Citing to a case that simply does not exist is something else again,” Judge Dinsmore said in his December ruling. “Mr. Ramirez provides no explanation for how a case citation fabricated out of whole cloth ended up in his brief. The most obvious explanation is that Mr. Ramirez employed an AI-generative tool to assist him in drafting his brief and neglected to verify the citations.”
Ramirez later admitted to using generative AI but claimed he was unaware of the technology’s ability to invent fictional cases. He conceded that he had violated Federal Rule of Civil Procedure 11, which requires lawyers to ensure their claims are grounded in fact or are likely to have evidentiary support after further investigation.
Despite this experience, Ramirez continues to use AI tools, though he says he has since taken legal education courses on the use of AI in law. Judge Dinsmore deemed that insufficient, labeling Ramirez’s conduct “particularly sanctionable” due to his “failure to meet that most elementary of requirements” of the practice of law.
AI Hallucinations Lead to Heavy Sanctions for Attorney
The judge imposed $5,000 in sanctions for each of Ramirez’s three briefs containing fabricated cases and referred him to the chief judge for possible further disciplinary proceedings.
Judge Dinsmore was particularly critical of Ramirez for providing incompetent representation and for making false statements to the court.
This case reflects a growing pattern of AI misuse in legal practice. In June 2023, a Manhattan federal judge fined two attorneys and their law firm $5,000 for submitting AI-hallucinated legal research produced by ChatGPT. More recently, in January 2025, Wyoming attorneys cited nine AI-hallucinated cases in a lawsuit against Walmart and Jetson Electric Bikes over a hoverboard fire.
Legal experts warn that such incidents underscore the dangers of uncritical reliance on AI in the practice of law. Generative AI can be useful for research and drafting, but without the critical judgment and ethical responsibility that inform legal practice, it is of little use.
Bar associations across the country are now developing best practices for ethical AI use in law. Most require human review of all AI-generated content, particularly citations and case references.
“The technology is fast-moving, yet lawyers’ essential ethical responsibilities are not altered,” said legal technology expert Sarah Chen. “Verifying and fact-checking aren’t optional, whether content is produced by an associate or generated by AI.”
As courts increasingly confront the role of AI in the practice of law, this case sends a clear message: software does not excuse lawyers from living up to their professional duties. For Ramirez and others like him, the lesson comes at a sobering cost, one that should encourage lawyers to use AI more cautiously and critically in the future.