Attorneys in the U.S. are increasingly submitting legal briefs created with the help of AI, and these documents sometimes include fake case citations. Courts are responding with record numbers of penalties, and AI is becoming so integrated into legal tools that rules requiring disclosure of its use may no longer be effective.
Summary
- Court sanctions against lawyers for AI-generated briefs containing fictitious citations surged last year, and the rate is still climbing: one researcher tracking the trend recorded 10 cases from 10 different courts in a single day.
- A federal court may have set a new record last month when it ordered an Oregon lawyer to pay $109,700 in sanctions for a filing riddled with AI-generated errors, while the Nebraska and Georgia supreme courts questioned attorneys over hallucinated case citations.
- Nippon Life Insurance Company of America sued OpenAI in March, alleging that a woman used ChatGPT as a legal adviser to produce frivolous lawsuits, a charge OpenAI called meritless.
Lawyers in the U.S. are increasingly submitting legal briefs drafted with the help of AI, and those briefs keep turning up fake citations. As a result, courts are issuing record numbers of penalties. AI is also becoming so embedded in legal software that some experts believe rules requiring disclosure of AI use are no longer workable. An NPR investigation published April 3rd shows that penalties for AI-originated errors have risen sharply since 2025 and continue to climb in 2026. The trend has serious implications well beyond the legal profession, including for the cryptocurrency industry, whose legal defenses depend on accurate, reliable briefs.
The numbers keep rising
Damien Charlotin, a researcher at HEC Paris who tracks legal penalties for mistakes made by AI, says the flow of cases isn't slowing down: he recently logged ten cases from ten different courts in a single day. According to Charlotin, the problem arises because AI is highly capable yet still makes errors, including inventing citations. A recent example involved lawyers for MyPillow CEO Mike Lindell, who were each fined $3,000 for submitting legal documents containing fake references.
Last month, a federal court ordered a lawyer in Oregon to pay a hefty $109,700 in fines and court costs. This isn't an isolated incident; state supreme courts are addressing similar misconduct as well. In February, Nebraska's highest court questioned an Omaha attorney about made-up legal citations and referred him for potential disciplinary action. A similar episode unfolded in Georgia in March. "It's surprising to see lawyers continuing these practices, especially given the recent media coverage," noted Carla Wale, associate dean at the University of Washington School of Law.
Why disclosure rules won’t work
Several courts now ask lawyers to identify any parts of their filings created with the help of artificial intelligence. Joe Patrice, a legal journalist with Above the Law, believes these rules won't stay practical for long: AI is becoming so common in legal work that lawyers would have to label essentially everything as "AI-assisted," making the rule pointless. The way law firms bill is also speeding up AI adoption. Because AI can produce drafts in a fraction of the time that hourly billing assumes, firms are under pressure to change how they charge clients, and Patrice suggests this pressure may lead lawyers to file AI-generated content without carefully checking it.
The Justice Department recently changed its approach to prosecuting cryptocurrency developers, accepting in part the argument that writing code is not itself illegal unless someone intends to use it for criminal purposes. Drawing that line demands careful legal analysis of intent, precisely the kind of reasoning AI-generated briefs often get wrong. A recent case in Texas was dismissed in part because the judge cited a DOJ memo laying out these standards for prosecuting developers, showing how the quality of legal reasoning in cases touching AI and related technologies directly shapes the rules the entire crypto industry operates under.
The OpenAI lawsuit
Artificial intelligence is now facing legal challenges that go beyond botched citations. In March, Nippon Life Insurance Company of America sued OpenAI, claiming a woman used ChatGPT for legal advice and, as a result, filed frivolous lawsuits against the insurer; the suit argues that OpenAI is essentially practicing law without a license. OpenAI told NPR the claims were completely unfounded. Wale, for her part, doesn't believe AI will replace lawyers; rather, the lawyers who learn to use it effectively will be the ones who succeed. She sees that as the future of the legal profession.
2026-04-04 02:40