Is there "missing AI misuse"?
What five AI misuse cases in Iowa and Sen. Grassley's notes to federal judges teach us.
Catching up on AI in Iowa 2025
I am working my way through AI-related disciplinary cases for attorneys and pro se litigants in chronological order, but I need to jump ahead to provide an update for Iowa. There have been multiple AI disciplinary cases in Iowa, all since July 2025. Additionally, Iowa's Senator Chuck Grassley, Senate Judiciary Committee Chairman, wrote to two federal judges in July 2025 regarding their alleged use of generative artificial intelligence (AI) to draft court orders with little to no human verification.
In July, the first case (Iowa Supreme Court Attorney Disciplinary Board v. Royce D. Turner) involved a disbarred attorney who cited hallucinated cases while seeking reinstatement. In August, the second (Luke v. State) was the first involving a pro se litigant citing hallucinated cases. In September, the third (Turner¹ v. Garrels) and fourth (Nelson v. Navient) involved additional pro se litigants citing hallucinated cases; Nelson v. Navient was the first in federal court. In October, the fifth (In the Int. of R.A.) was the first involving a practicing attorney citing hallucinated cases in an active matter, unlike the attorney Turner, who was already disbarred.
Also in July, Senate Judiciary Chairman Sen. Grassley wrote to two federal judges regarding their alleged misuse of generative AI. Grassley's oversight inquiry follows public reporting that U.S. District Judge Henry T. Wingate of the Southern District of Mississippi and U.S. District Judge Julien Xavier Neals of the District of New Jersey issued court orders containing serious factual inaccuracies, prompting allegations of AI use.
Judge Wingate’s response regarding Jackson Federation of Teachers, et al. v. Lynn Fitch, et al.
Judge Wingate explained that Perplexity was used in drafting the incorrect material. The quoted passage below reinforces two important points I’ve made recently in my writing. First, AI tools hallucinate, including Perplexity (despite false claims to the contrary in Perplexity’s advertising for its new Comet browser). Second, junior employees’ AI misuse can get senior employees in trouble, so it is important to have a policy and training in place for everyone.
In the case of the Court’s Order issued July 20, 2025, a law clerk utilized a generative artificial intelligence (“GenAI”) tool known as Perplexity strictly as a foundational drafting assistant to synthesize publicly available information on the docket. The law clerk who used GenAI in this case did not input any sealed, privileged, confidential, or otherwise non-public case information.
Judge Neals’s response regarding In re CorMedix Inc. Securities Litigation
Judge Neals noted that ChatGPT was misused by an intern. As noted above from Judge Wingate’s response, it is clear that oversight of junior employees and written generative AI policies are both essential.
As referenced in the Senator’s letter, a “temporary assistant,” specifically, a law school intern, used CHATGPT to perform legal research in connection with the CorMedix decision. In doing so, the intern acted without authorization, without disclosure, and contrary to not only chambers policy but also the relevant law school policy. My chambers policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders. In the past, my policy was communicated verbally to chambers staff, including interns. That is no longer the case. I now have a written unequivocal policy that applies to all law clerks and interns, pending definitive guidance from the AO through adoption of formal, universal policies and procedures for appropriate AI usage […] I would be remiss if I did not point out as well that the law school where the intern is a student contacted me after the incident to, among other things, inform me that the student had violated the school’s strict policy against the use of GenAI in their internships.
AI Everywhere
I frequently point out that many legal research tools also have AI features now. Although the AI misuse mentioned above involved generalist consumer AI tools (Perplexity and ChatGPT), there are AI features in legal tools like Westlaw and LexisNexis too. The Administrative Office (AO) letter to Sen. Grassley also makes this point. “With the increasing use of AI platforms such as OpenAI’s ChatGPT and Google Gemini, and integration of AI functions in legal research tools, AI use has become more common in the legal landscape.”
My hypothesis: Missing AI misuse
The five Iowa cases were all from 2025, as was the Senate oversight of the two judges. These cases were all identified by third parties, not self-reported by the individuals responsible for misusing the AI. With so many people adopting generative AI without understanding its risks and limitations, it seems highly unlikely that in Iowa only five people have misused this new technology and that 100% of misuse was caught.
Further, if federal judges with teams that are meant to conduct multiple stages of review can still end up publishing early drafts with unreviewed AI errors, how much more does this apply to solo firms, small firms, and pro se litigants? Rather, it seems more likely that there have been additional material AI-generated errors that have not yet been identified. Perhaps the errors were not identified because they would not have changed the outcome of the case. Perhaps the errors did change the outcome and will eventually be identified on appeal, as in the Georgia state case Shahid v. Essam.
In refusing the wife’s motion to reopen the case so that it could be defended, the trial judge relied on two fictitious cases which were presented to the court by the husband’s lawyer. Those cases were cited in the order.
Therefore, attorneys may benefit from learning about generative AI, even if they have no intention of using it. Learning the ways it can go wrong may aid attorneys faced with other parties’ AI misuse, including opposing counsel, pro se litigants, expert witnesses, or even judges.
¹ Not the same Turner.

