ChatGPT Can Be an Evil Genie

Lawyers Sanctioned for Using ChatGPT

by Bill Honaker, The IP Guy

ChatGPT is like an evil genie: you're granted your wish, then deeply regret asking. For example, you wish to live forever… but forget to ask to stay young.

The recent case involving New York lawyers who used ChatGPT to write their brief is one of those real-life wishes gone wrong. They filed a brief citing cases that supported their position, but the cases didn't exist; the AI had simply made them up. When I first heard about the case, I thought, "That's embarrassing, but it's new technology. They were probably just exploring how it works and made a mistake." But there was much more to it.

I read the Court's 48-page opinion sanctioning the lawyers. They were fined $5,000 and had to write apologies to each of the judges to whom the AI had falsely attributed the fictional cases. I was surprised the sanctions were so low. While the court held that the lawyers acted in bad faith, the law firm disagreed, saying they had made a good-faith mistake: they did not know that a piece of technology could make up cases out of whole cloth. But when you read the facts, it seems the judge got it right and probably should have sanctioned them even more.

There were two attorneys involved, Steven Schwartz and Peter LoDuca, of the firm Levidow, Levidow & Oberman. Judge Castel seemed most upset that the attorneys did not come clean and simply admit what they had done. Judge Castel wrote:

But if the matter had ended with Respondents coming clean about their actions shortly after they received the defendant’s March 15 brief questioning the existence of the cases, or after they reviewed the Court’s Orders of April 11 and 12 requiring production of the cases, the record now would look quite different. Instead, the individual Respondents doubled down and did not begin to dribble out the truth until May 25, after the Court issued an Order to Show Cause why one of the individual Respondents ought not be sanctioned.

The defendant's lawyers looked for the cases, couldn't find them, and advised the court. The court also couldn't find the cases and ordered the respondents to provide copies. For any lawyer, red flags would be flying high.

These lawyers later testified that they didn't have access to commonly used research tools, so they went back to ChatGPT to find the missing cases. Schwartz had done the initial research on ChatGPT. After the Court ordered them to produce the cases, Schwartz asked ChatGPT to confirm that the cases were real and to provide copies. ChatGPT confirmed they were real and supplied excerpts (again, all fake). These were provided to the Court as proof that the cases were real.

But all of it was hallucination, the word used when AI makes things up. What my Mom called lies. LoDuca further compounded their problems by asking for an extension to respond, saying he was on vacation, which was later found to be, well, a hallucination, or, as the Court called it, like my Mom would have, a lie.

Two lessons can clearly be gleaned from this train wreck. First, when you make a mistake, come clean. When their opponent indicated that the cases didn't exist, these guys should have dropped everything to find them and correct the mistake. If they had done so, they wouldn't have ended up in numerous articles, including in the New York Times.

The other lesson is that ChatGPT makes things up. It hallucinates wildly. It doesn't think; it just generates words to make your wish (your prompt) come true. Check out my previous article for other issues with ChatGPT.

Bill Honaker, The IP Guy

About the Author:

Bill Honaker, "The IP Guy," is a former USPTO Examiner, a partner with Dickinson Wright, and author of the forthcoming book, Invisible Assets – How to Maximize the Hidden Value in Your Business. To download a sample chapter, click here.

To get answers to your questions, click here. To schedule a time to talk, you can access my calendar by clicking here, email Bill@IPGuy.com, or call me at 248-433-7381.

 
