A New York federal judge has set a precedent in holding a lawyer accountable for the misuse of AI in drafting court filings. The case, which involved a toy company suing merchants over alleged trademark infringement and false advertising, was terminated due to the lawyer's repeated use of fake citations generated by an AI program.
The lawyer, Steven Feldman, had been using various AI tools to review and cross-check citations, but he failed to catch his own errors. The judge ruled that, despite multiple warnings, Feldman repeatedly and brazenly violated Rule 11, which requires attorneys to verify the cases they cite.
Feldman's use of AI in drafting court filings was criticized by the judge, who said it was "extremely difficult to believe" that an AI did not generate the overwrought prose in his filings. The judge accused Feldman of dodging the truth and evading her questions during a hearing.
In her ruling, the judge noted that Feldman's research methods were "redolent of Rube Goldberg," suggesting that he was relying on overly complicated and artificial means to conduct legal research. She also criticized Feldman for failing to fully accept responsibility for his mistakes and for not appreciating the gravity of the situation.
As a result of the ruling, Feldman's client must turn over any stolen goods in their remaining inventory and disgorge profits. The court has also issued an injunction barring additional sales of stolen goods and requiring refunds for customers who bought them.
The case highlights the need for greater transparency and accountability in the use of AI in legal research. While AI can be a useful tool, it must not replace human judgment and due diligence.