AI's growing role in the legal profession is inevitable. However, the recent incident of U.S. lawyers using the AI chatbot ChatGPT to generate fictitious case citations underscores the critical need to understand, respect, and responsibly use this technology.

In recent years, artificial intelligence has offered promising advancements to the legal profession. AI's ability to manage vast amounts of legal data can streamline legal research, making it both efficient and comprehensive. But it is worth asking: does the adoption of AI absolve lawyers of their responsibility to verify the credibility and authenticity of their sources?

Misunderstanding of AI's Capabilities

The law firm, Levidow, Levidow & Oberman, argued that it made a "good faith mistake," having failed to realize that AI could fabricate cases. This defence signals a grave misunderstanding of the nature of AI, particularly in legal settings. Generative tools like ChatGPT do not look up verified facts; they produce plausible-sounding text based on patterns in their training data, and they cannot distinguish factual output from fabricated output. Shouldn't lawyers, then, be wary of relying on such tools without a thorough vetting process?

Debunking the Argument of 'Good Faith Mistake'

The 'good faith mistake' defence raises a controversial question: can ignorance of a technology's capabilities be a valid defence in an age of digital proficiency? Ignorance of the law has never been a viable defence in court; one could argue the same should apply to technology, especially when it plays a crucial role in the case at hand.

The Judicial Standpoint

Judge P. Kevin Castel stated that while using AI to assist in legal practice is not inherently improper, lawyers have an ethical responsibility to ensure the accuracy of their filings. The ruling not only takes a firm stand against the misuse of AI but also emphasizes the need for lawyers to exercise due diligence in their professional conduct. Could this be a turning point in how AI usage is governed within legal practice?

Repercussions and Learning Points

The $5,000 fine imposed on the lawyers is a stern reminder of the professional responsibility legal practitioners bear, even in the face of advancing technology. More importantly, it marks a significant moment in the narrative around AI's role in the legal profession. As technology progresses, does the responsibility of legal practitioners grow with it?

As we embrace AI's role in the legal profession, it is critical to remember that technology is a tool, not a substitute for professional judgment and responsibility. How can we ensure that this incident serves as a lesson for the profession to better understand and responsibly use AI in practice? It should prompt broader conversations, clear guidelines, and educational initiatives around the ethical use of AI in law. After all, the credibility of our legal system depends on it.