Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023. (AP Photo/Richard Drew)
Two personal injury attorneys were fined $5,000 on Thursday for using fake cases and quotes generated by ChatGPT in court documents and then lying about it in open court. The Manhattan judge who imposed the fine left it to the conscience of attorneys Steven A. Schwartz and Peter LoDuca to decide whether the bogus record warranted a personal apology to the judges.
Schwartz submitted an affidavit to the court explaining that he had used the artificial intelligence program ChatGPT to “supplement the legal research” when drafting the documents.
LoDuca, the attorney of record in the case, signed a brief filed with the court containing citations to cases that did not exist. Schwartz, an attorney with three decades of experience who drafted the document, did not sign the filing because he is not admitted to practice in federal court.
Schwartz later told the judge that the program “has proven to be unreliable.” Even at the time, however, Schwartz had doubts about the reliability of the AI-generated case law, but instead of researching the matter himself, he again turned to ChatGPT to ask whether the cases were real.
“Many harms flow from the submission of fake opinions,” wrote U.S. District Judge Kevin Castel, a George W. Bush appointee.
Castel listed a litany of “misdeeds” that could result from even innocent error:
The opposing party wastes time and money exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on genuine legal precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as the authors of the bogus opinions, and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American justice system. And a future litigant may be tempted to defy a judicial ruling by dishonestly claiming doubt about its authenticity.
The lawyers in this case, however, had not been so innocent, according to Castel. By the judge’s findings, the lawyers “doubled down” on the fake citations and only began to “dribble out the truth” much later – after the court had already put sanctions on the table.
To make matters worse, as the ChatGPT saga began to unfold in court filings, LoDuca submitted a document stating that he would be on vacation and could not respond quickly regarding the veracity of the citations. This claim also turned out to be false.
Castel summarizes his discovery:
Mr. LoDuca’s statement was false and he knew it was false at the time he made the statement. When questioned by the Court during the sanctions hearing, Mr. LoDuca admitted that he was not out of the office on vacation.
Later, LoDuca admitted that he made the statement regarding a vacation to cover for Schwartz, who was out of the office and needed more time to prepare for the upcoming false citation hearing.
“The effect of the lie was to conceal Mr. Schwartz’s role in preparing the March 1 Affidavit and April 25 Affidavit and to conceal Mr. LoDuca’s lack of a significant role in confirming the truthfulness of the statements in his affidavit. This is proof of Mr. LoDuca’s subjective bad faith.”
The judge also looked closely at the cases generated by ChatGPT for the two lawyers – and found them to be grossly insufficient. One case showed “flaws in style and reasoning” that would not normally be found in an appeal decision.
“Its legal analysis is gibberish,” Castel wrote.
Castel also took issue with Schwartz’s attempt to downplay ChatGPT’s role in his legal research. The judge included the following colloquy in his sanctions order:
THE COURT: But ChatGPT did not complement your research. That was your research, wasn’t it?
Mr. SCHWARTZ: That is correct. It has become my last resort. So I guess that’s correct.
Castel wrote that characterizing ChatGPT as a supplement was “a deceptive attempt to mitigate his actions by creating the false impression that he had done other significant research on the matter and was not relying exclusively on an AI chatbot, when in truth and in fact that was the only source of his substantive arguments.”
The judge also called out Schwartz’s skepticism of ChatGPT’s case law. The court documents included smartphone screenshots in which Schwartz asked ChatGPT, “Is Varghese a real case” and “Are the other cases you provided fake?”
ChatGPT replied that it had provided “real” authorities that could be found through Westlaw, LexisNexis, and the Federal Reporter.
Finding that the lawyers’ conduct had been in “bad faith,” Castel imposed sanctions on the lawyers under Rule 11. Under Castel’s order, the lawyers must pay a $5,000 penalty to the court for their misconduct.
Castel also ordered the lawyers to inform their client and every judge whose name was falsely invoked in the filings. However, Castel declined to order an apology, writing, “The Court will not require an apology from the sanctioned attorneys because a compelled apology is not a sincere apology. Any decision to apologize is left to them.”
You can read the full Sanctions Order here.
Do you have a tip we should know? [email protected]