Practising Law With Artificial Intelligence
- October 31, 2025
- Clayton Rice, K.C.
Generative artificial intelligence is a specialized field of artificial intelligence that uses generative models to produce text and other forms of data. Sometimes called GenAI, these models learn the patterns and structures of vast amounts of data and then use that information to produce new data based on natural language requests or prompts. GenAI tools are now used in fields as diverse as health care, the financial sector and justice systems. This year has produced a catalogue of court rulings worldwide exposing the dangers of trusting AI tools without verification.
1. Introduction
On September 26, 2025, the Alberta Court of Appeal released its decision in Saroya v Reddy, a civil dispute involving undertakings, in which a unanimous panel advised Calgary lawyer Christopher Souster of Nimmons Law Office that “the lawyer whose name appears on the filed document bears ultimate responsibility for the material’s form and contents”. (here) The factum filed by Mr. Souster contained “references to fabricated case authorities”, often called hallucinations in the world of artificial intelligence. Mr. Souster explained that, when issues about the factum were raised, he contacted the contractor he had hired and was assured that “a large language model was not used.” It appears, however, that what the contractor told Mr. Souster may not have been true. The ruling presents an opportunity to discuss the notice issued by the Alberta courts to the public and the legal profession regarding the integrity of court submissions; the decisions of Canadian courts on the use of GenAI in legal research; and related publications by the Law Society of Alberta.
2. Background
In an article titled Attorneys – Track AI Hallucination Cases With This New Tool, published by Forbes on July 18, 2025, digital forensics expert Lars Daniel discussed a new database maintained by French data scientist and lawyer Damien Charlotin of the HEC Paris Innovation & Entrepreneurship Institute in Paris, France. The AI Hallucination Cases database tracks documented cases where lawyers “submitted artificial intelligence-generated fake legal citations to courts worldwide.” (here) The database contained over two hundred documented cases at that time. More recently, in an article titled Mistake-filled legal briefs show the limits of relying on AI tools at work, published by The Associated Press on October 30, 2025, business reporter Cathy Bussewitz said Mr. Charlotin’s catalogue contains “at least 490 court filings in the past six months”. (here) “The database illustrates that this is more than just a rare glitch,” Mr. Daniel said. “It’s a systematic problem affecting attorneys across all practice areas and experience levels.”
If you are a lawyer, let me first tell you what you should not do if your brief catches fire in court because an AI hallucination was camouflaged in the weeds. On October 1, 2025, Judge Joel M. Cohen of the Supreme Court of the State of New York, in Mineola, New York, released the ruling in Ader v Ader involving an application by the executor of an estate for sanctions against the defendant for including inaccurate citations and quotations in a brief that was produced by an AI tool. (here) The defendant’s attorney, Michael Fourte, then submitted another brief explaining the use of AI which was also written with a large language model. (here) “In other words, counsel relied upon unvetted AI – in his telling, via inadequately supervised colleagues – to defend his use of unvetted AI,” an incredulous Judge Cohen said. In making a costs award, including attorney’s fees, Judge Cohen reminded us all that lawyers prejudice their clients and do a disservice to the court and the profession when they fail to check their work. “[C]ounsel’s duty of candor to the Court cannot be delegated to a software program,” he said.
3. Notice Issued by Alberta Courts
On October 6, 2023, the Alberta courts issued a notice to the public and the legal profession titled Ensuring the Integrity of Court Submissions When Using Large Language Models. (here) The notice does not prohibit the use of AI tools and specifically recognizes that “emerging technologies often bring both opportunities and challenges, and the legal community must adapt accordingly.” The notice endorses three principles – caution, reliance and verification. First, lawyers and litigants are urged to exercise caution when referencing legal authorities or analysis derived from large language models. Second, for all references to case law, statutes or commentary, it is essential that parties rely exclusively on authoritative sources such as court websites, commercial publishers or public services such as CanLII. Third, AI-generated submissions must be verified with “meaningful human control.” Verification can be achieved through cross-referencing with reliable legal databases to ensure citations hold up to scrutiny. The three principles recur throughout the case law and the publications of the Law Society of Alberta.
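The verification principle lends itself to partial automation. What follows is a minimal sketch, in Python, of how a law office might flag draft-factum citations that cannot be confirmed in CanLII before a filing goes out the door. It assumes a CanLII API key, and the endpoint path, query parameter and response field shown are illustrative assumptions rather than CanLII’s documented interface, which should be confirmed against the API documentation.

```python
# Minimal sketch: flag draft-factum citations that cannot be confirmed in
# CanLII. Assumes a CanLII API key; the endpoint path, query parameter and
# response field below are illustrative assumptions, not CanLII's documented
# interface.
import re

import requests

API_KEY = "YOUR_CANLII_API_KEY"  # hypothetical placeholder

# Neutral-citation pattern, e.g. "2025 ABCA 320" (year, court code, number)
CITATION = re.compile(r"\b(\d{4})\s+([A-Z]{2,6})\s+(\d{1,5})\b")


def extract_citations(factum_text: str) -> list[str]:
    """Pull neutral citations out of the draft factum."""
    return [" ".join(match) for match in CITATION.findall(factum_text)]


def appears_in_canlii(citation: str) -> bool:
    """Ask CanLII whether the citation resolves to a reported decision.

    The endpoint and response shape are assumptions for illustration only.
    """
    resp = requests.get(
        "https://api.canlii.org/v1/caseBrowse/en/",  # assumed endpoint
        params={"api_key": API_KEY, "search": citation},  # assumed parameter
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("cases"))  # assumed response field


if __name__ == "__main__":
    draft = open("factum.txt", encoding="utf-8").read()
    for cite in extract_citations(draft):
        if not appears_in_canlii(cite):
            print(f"UNVERIFIED: {cite} (confirm against an authoritative source)")
```

A script of this kind can only flag candidates for review. Satisfying the “meaningful human control” standard still requires a lawyer to read each authority and confirm that it says what the factum claims it says.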
4. The Emerging Case Law
On May 20, 2025, Justice F.L. Myers of the Ontario Superior Court of Justice issued the ruling in Ko v Li stemming from an order requiring a senior lawyer, Jisuh Lee, to show cause why she should not be held in contempt of court for including “fake precedent court cases” in the applicant’s factum. (here) Justice Myers agreed with Justice D.M. Masuhara of the British Columbia Supreme Court who held in Zhang v Chen that citing fake cases “is an abuse of process” and can lead to “a miscarriage of justice.” (here) One of the issues involved Ms. Lee’s failure to certify in the factum under the Ontario Rules of Civil Procedure that she was “satisfied as to the authenticity of every authority cited in the factum.” Justice Myers observed that the Ontario Civil Rules Committee enacted the requirement “to bring home to all lawyers the need to check and not to trust factums [sic] generated by AI or by others.” Ms. Lee’s failure arose when she “signed, delivered and used the factum without ensuring that the cases were authentic”. Ms. Lee apologized and acknowledged that the factum was prepared using ChatGPT. She successfully purged her contempt and Justice Myers held the public interest would not be served by proceeding with the show cause hearing.
On June 24, 2025, Justice Catharine Moore of the Federal Court released the ruling in Hussein v Canada endorsing the view of Justice Myers in Ko that “it is not the use of AI itself that is the concern here.” In Hussein, Justice Moore said “the real issue is not the use of generative artificial intelligence but the failure to declare that use.” (here) On May 7, 2024, the Federal Court had issued a practice direction titled Notice to the Parties and the Profession on the Use of Artificial Intelligence in Court Proceedings that requires disclosure of the use of GenAI. The rationale underpinning the disclosure rule is that opposing counsel and the court will be on notice and “can do the necessary due diligence.” The practice direction further states that the inclusion of the disclosure declaration, in and of itself, will not attract an adverse inference by the court. “The Court acknowledges the significant benefits of artificial intelligence, particularly in busy practices where cost efficiencies are being sought and is not trying to restrict its use,” Justice Moore said. “The concern is that there be some protection against the documented potential deleterious effects of its use.”
In Saroya, the Alberta Court of Appeal emphasized that counsel and self-represented litigants “should not expect leniency” where they fail to adhere to the requirements of the notice issued by the Alberta courts. In most situations, courts will likely consider remedies under the Rules of Court but may also consider initiating contempt proceedings or a referral to the Law Society of Alberta. In this case, the panel invited the parties to provide further submissions on whether the appellant’s lead counsel should be directed to pay a costs award. In R v Chand, however, Justice J.F. Kenkel of the Ontario Court of Justice adopted a remedial approach where erroneous case citations were used in defence counsel’s written submissions. Justice Kenkel required counsel to prepare new submissions without using GenAI, although he also indicated there would be a “discussion” at the end of the trial about how the initial submissions were prepared. (here) It is important to emphasize that the Alberta notice does not contain a disclosure requirement and the Saroya court was silent on the issue. Mr. Souster, however, told Canadian Lawyer magazine that he has changed his practice and now includes an “assurance” in his court filings that they were not prepared by an AI tool. (here)
5. Law Society of Alberta Publications
The resources available on the Law Society of Alberta website include The Generative AI Playbook, Why a Generative AI Use Policy?, Gen AI Rules of Engagement for Canadian Lawyers and Generative AI and Technological Competence: Quick Tips for Alberta Lawyers. I will comment only on The Generative AI Playbook, which was the only Law Society resource cited in Saroya. (here) The Playbook begins with a reminder that Rule 3.1-2 of the Code of Conduct and its Commentary require all members of the profession to “develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice and responsibilities.” (here) I will give you four of the top twelve recommendations made in the Playbook.
- Protect Client Confidentiality: GenAI tools do more than just respond to your prompts. They also use your data to train and improve themselves. There is no guarantee that they will keep your information confidential. While you may be able to opt out of your data being used for training purposes, given GenAI’s rapidly developing capabilities, you should not submit confidential information as part of a prompt (see the redaction sketch following this list).
- Protect Client Privacy: In 2023, the Privacy Commissioner of Canada announced that it had launched an investigation into OpenAI because of a complaint alleging the collection, use and disclosure of personal information without consent. (here) Understand how GenAI systems use the data you input. Don’t enter sensitive or personal information you would not want published on the internet. Don’t submit prompts that could undermine public trust in the legal system if they were disclosed.
- Use But Verify: For all GenAI’s remarkable capabilities, research conducted by GenAI remains problematic. It may contain hallucinations, putting your reputation and your clients’ interests in jeopardy. It may not be up to date. It may rely on biased or incomplete source material. Use the technology when appropriate but always verify the outputs through analysis and fact-checking.
- Experiment with GenAI: Innovate. GenAI represents a new and powerful technology that you can use in a multitude of ways to improve the client experience, the delivery of legal services and the practice of law. Be on the lookout for new ways to improve all of these. But know your limits.
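On the confidentiality and privacy recommendations, one practical safeguard is to redact obvious identifiers before a prompt leaves the office. The sketch below is a crude, illustrative approach using regular expressions; the file-number pattern is an assumption, and no automated filter is a substitute for a lawyer’s judgment about what a prompt reveals.

```python
# Minimal sketch: strip obvious client identifiers before a prompt leaves
# the firm. Regex redaction is crude and illustrative; it will miss
# context-dependent identifiers and is no substitute for lawyer judgment.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "[FILE_NO]": re.compile(r"\b\d{4}[-\s]?\d{5,}\b"),  # assumed file-number format
}


def redact(prompt: str, client_names: list[str]) -> str:
    """Replace known client names, then pattern-matched identifiers."""
    for name in client_names:
        prompt = re.sub(re.escape(name), "[CLIENT]", prompt, flags=re.IGNORECASE)
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt


print(redact("Email jane.doe@example.com re file 2025-00412 for Jane Doe.",
             ["Jane Doe"]))
# prints: Email [EMAIL] re file [FILE_NO] for [CLIENT].
```

Pattern-based redaction will miss context-dependent identifiers, such as a description of facts unique to one client, which is why the Playbook’s advice remains not to submit confidential information at all.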
The Playbook emphasizes that, while opportunities for the use of GenAI are numerous, it comes with “key risks” to confidentiality and reliability. Diligence and staff training prior to use are necessities.
6. Conclusion
ChatGPT, developed by OpenAI, exploded onto the scene in 2022 and became the fastest-growing consumer application in history. While it appears to be “the most recognized AI tool for lawyers”, it is not the only option available. (here) Microsoft Copilot uses GenAI to draft emails and presentations. It can increase efficiency in the preparation of client communications and can also create basic legal documents. Casetext CoCounsel, built for the legal profession, uses AI for legal research and to assist in drafting documents. Sana is an AI platform designed to help organizations retrieve knowledge across internal documents. Harvey is an AI legal assistant designed to help automate legal tasks. But the guideline for using all of these AI tools is the same – use the technology to improve efficiency and productivity but verify the outputs through analysis and fact-checking.
