A Utah attorney has learned an expensive lesson about artificial intelligence after being sanctioned by the state’s court of appeals for submitting legal documents containing fabricated case citations generated by ChatGPT.
Richard Bednar ran afoul of the court when the Utah Court of Appeals discovered that a brief he filed contained references to court cases that simply don’t exist. The case has become another cautionary tale about the risks of relying on AI tools without proper verification.
ChatGPT Hallucinations Lead to Fabricated Legal Citations in Court Filing
The trouble began when Bednar and co-counsel Douglas Durbano filed what appeared to be a routine petition for interlocutory appeal. However, when the opposing counsel reviewed the document, they quickly spotted something amiss. Several case citations looked suspicious, and further investigation revealed they were completely fabricated.
“It appears that at least some portions of the Petition may be AI-generated, including citations and even quotations to at least one case that does not appear to exist in any legal database,” the respondent’s counsel noted in court documents. The fake citations could only be found in ChatGPT responses, not in any legitimate legal database.

One of the most glaring examples was “Royer v. Nelson,” a case that exists nowhere except in the AI system’s imagination. The brief also included references to real cases that had nothing to do with the legal issues at hand.
Attorney Takes Responsibility for AI-Generated Fabricated Citations
When confronted with the evidence, Bednar didn’t try to deny what had happened. He acknowledged the errors and apologized for the mistake. During an April hearing, both Bednar and his attorney accepted full responsibility for the fabricated legal authorities that came from ChatGPT.
The explanation revealed a concerning breakdown in legal oversight. According to Bednar’s account, an unlicensed law clerk, a recent law school graduate, had written the brief using ChatGPT. The problem was that Bednar failed to independently verify the accuracy of the citations before filing the document with the court.
The law clerk responsible for the error was subsequently terminated from the firm. Durbano, the co-counsel, was not involved in creating the problematic petition.
The Court’s Response
The Utah Court of Appeals took a measured but firm stance on the incident. While acknowledging that AI can be a useful legal research tool, the court emphasized that attorneys cannot abdicate their fundamental responsibility to ensure accuracy.
“We agree that the use of AI in the preparation of pleadings is a legal research tool that will continue to evolve with advances in technology,” the court stated. “However, we emphasize that every attorney has an ongoing duty to review and ensure the accuracy of their court filings.”
The court made it clear that Bednar “fell short of their gatekeeping responsibilities as members of the Utah State Bar” by submitting documents with fake precedents.
The sanctions imposed on Bednar were substantial and multifaceted. He was ordered to pay the opposing party’s attorney fees for both the petition and the hearing. He must also reimburse his own client for time billed in preparing the defective filing and for appearing at the follow-up hearing.
As part of his punishment, Bednar must make a $1,000 donation to “And Justice for All,” a Utah legal nonprofit.
Bednar also offered to pay any other related attorney costs in order to “make amends” for the problem.
An Emerging Issue around ChatGPT
Incidents like this are becoming more common as increasingly sophisticated AI software reaches the market. While such tools can aid legal writing and research, their output must be closely checked by human attorneys to avoid exactly this kind of embarrassing and expensive error.
The incident serves as a reminder that, for all its abilities, artificial intelligence can confidently provide erroneous information. For lawyers, the ramifications of such an error extend far beyond embarrassment: they can include sanctions, fines, and lasting damage to a reputation.
As the technology keeps improving, the legal field will need to develop better practices for integrating these tools without compromising the accuracy and integrity the justice system requires.