A lawyer used ChatGPT to write a legal brief that was completely 'bogus'

Educational institutions have raised ethical concerns that students could use artificial intelligence (AI)-powered chatbots like ChatGPT to write their research papers or essays, undermining the academic integrity of the work.

Even so, one would assume that anyone using the chatbot for research would check the veracity of the results it produced. That was not the case for a bar-certified lawyer, Mr. Steven A. Schwartz.

In an embarrassing turn of events, a New York-based lawyer has landed himself in hot water with a Manhattan federal court after he used ChatGPT to do legal research for a case, reported The New York Times.

All would have gone well for Mr. Schwartz had the citations and judicial decisions mentioned in the brief not been a figment of ChatGPT’s imagination.

The lawsuit had been filed by a New York-based law firm, Levidow, Levidow & Oberman, on behalf of their client Roberto Mata, who sued the airline Avianca alleging he was injured by a serving cart during a flight.

After Avianca filed a motion to dismiss the case, Mr. Schwartz, who has been a practicing attorney for the past 30 years, submitted a 10-page brief citing similar cases from the past. But neither the lawyers representing Avianca nor the judge himself could find the cases cited in the brief.

“The Court is presented with an unprecedented circumstance,” wrote U.S. District Judge Kevin Castel in a show cause notice. “A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases.”

Of the citations submitted, “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” the notice, dated May 4, 2023, added.

There's a name for the 'bogus' information provided by ChatGPT

Developers call it 'hallucination.' It's when generative AI tools like ChatGPT spew out false answers to prompts. These answers aren't real and do not match the data the model has been trained on. A 'hallucinating' AI tool could, for example, create false news reports or list lawsuits that do not exist.
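To illustrate why that matters, here is a minimal sketch, assuming the current OpenAI Python SDK, an illustrative model name, and a hypothetical prompt, of what a research query like Mr. Schwartz's might look like in code. Nothing in the call checks whether the cases the model names actually exist; that verification has to happen outside the model, for example against a real legal database.

# Minimal sketch (assumed model and prompt) of querying a chat model with the
# OpenAI Python SDK. The reply is fluent generated text, not a verified search
# result, so any case citation it returns must be checked independently.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model name assumed for illustration
    messages=[
        {
            "role": "user",
            "content": (
                "List court cases about airline liability for injuries "
                "caused by serving carts, with citations."
            ),
        }
    ],
)

# The model returns whatever text is statistically plausible; nothing here
# guarantees that the cited cases are real.
print(response.choices[0].message.content)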

Mr. Schwartz replied to the court notice on May 25, admitting to and apologizing for using OpenAI’s chatbot for his research. He said in his reply: “That your affiant (Schwartz) greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

Interestingly, Schwartz was not directly representing the client. It was, in fact, another lawyer at Levidow, Levidow & Oberman, Peter LoDuca, who had asked Schwartz to help him out with the citation brief. Schwartz added in his reply: “That Peter LoDuca, Esq. had no role in performing the research in question, nor did he have any knowledge of how said research was conducted.”

He also attached screenshots of his queries and responses provided by ChatGPT as proof that he “had no intent to deceive this Court nor the defendant.”

The court has called for a hearing on June 8 to discuss sanctions against Mr. LoDuca.
