US senator asks judges if they used AI in withdrawn court rulings

Oct 6 (Reuters) – U.S. Senate Judiciary Committee Chairman Chuck Grassley on Monday asked two federal judges to answer questions about whether artificial intelligence was used to prepare recent orders that contained “substantive errors”.
Grassley, an Iowa Republican, sent letters to U.S. District Judge Julien Xavier Neals in New Jersey and U.S. District Judge Henry Wingate in Mississippi. In July, the two judges withdrew written rulings in separate, unrelated lawsuits after lawyers in the cases said the rulings contained factual inaccuracies and other serious errors.
Grassley asked the judges whether and how they, their law clerks or court staff used generative AI or automated tools to prepare orders in the cases.
He also asked them to explain the “human drafting and review” done before issuing the orders, the cause of the errors, and measures their chambers have taken to guard against similar errors in the future.
The letters noted that lawyers have increasingly faced scrutiny from judges across the country for apparent misuse of AI. Judges have levied fines or other sanctions in dozens of cases over the past few years after lawyers failed to vet the output the technology generated.
“No less than the attorneys who appear before them, judges must be held to the highest standards of integrity, candor, and factual accuracy,” Grassley wrote. “Indeed, Article III judges should be held to a higher standard, given the binding force of their rulings on the rights and obligations of litigants before them.”
Representatives from Neals' and Wingate's chambers did not immediately respond to requests for comment.
Wingate, in Jackson, Mississippi, replaced an order in July that he issued in a civil rights lawsuit, after lawyers for the state said in a court filing that it contained “incorrect plaintiffs and defendants” and included allegations that were not in the complaint.
He later declined to explain the original ruling beyond saying it contained "clerical errors referencing improper parties and factual allegations," and that he issued a new opinion after correcting the mistakes. He also declined to make the original, faulty ruling available on the public docket after a request from lawyers for the state.
Neals, in Newark, New Jersey, withdrew a ruling he issued in a securities lawsuit after defense attorneys told the court that the decision made factual errors and included quotes lawyers said were not in the cited cases.
A person familiar with the circumstances in the New Jersey case had previously told Reuters that research produced using artificial intelligence was included in a draft decision that was inadvertently placed on the public docket before a review process.
A temporary assistant had prepared the research, the person had said, adding that the court’s chambers has a strict policy against the unauthorized use of AI to support opinions.
In both cases, the judges did not say in court filings how the apparent errors were included in their decisions.
Grassley’s letters asked why the original rulings were removed from the court dockets and whether the judges will restore them “to preserve a transparent history of the court’s actions in this matter.”