Two U.S. federal judges have acknowledged that members of their staff used artificial intelligence tools to help draft recent court rulings — documents later criticized for containing factual errors.
The admission came in response to an inquiry by U.S. Senate Judiciary Committee Chairman Chuck Grassley, who sought explanations after lawyers flagged inaccuracies in the rulings.
In letters released on Thursday, Judge Henry Wingate of Mississippi and Judge Julien Xavier Neals of New Jersey said their staff had used AI tools — including OpenAI’s ChatGPT and Perplexity — without proper authorization or disclosure.
Judge Neals said a law school intern used ChatGPT for research in a securities case, leading to a decision being “released in error” before it was withdrawn. He has since introduced a written AI policy and stricter review procedures.
Judge Wingate revealed that a clerk used Perplexity to “synthesize publicly available information” for a civil rights ruling, which was later replaced after “a lapse in human oversight.”
Grassley commended the judges for their transparency but called for stronger judicial safeguards on AI use.
“Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law,” he said.
AI misuse in courtrooms has become a growing concern, with several lawyers across the U.S. fined or sanctioned for submitting briefs containing fabricated or inaccurate information generated by AI tools.