A British Columbia Supreme Court judge has reprimanded a lawyer for citing two fake, AI-generated cases, known as "hallucinations," in a legal filing and has ordered her to compensate opposing counsel for their time. In his Feb. 26 ruling, Justice David Masuhara ordered lawyer Chong Ke to personally compensate the lawyers representing her client's ex-wife, saying it was "appropriate" for Ms. Ke to pay them for the time it took to discover that the cases she planned to reference had been invented by ChatGPT, an AI chatbot that generates text in response to user prompts. Although Ms. Ke withdrew the AI-generated cases once she realized they were fake, Justice Masuhara said he was troubled by the incident and emphasized the importance of competence in the selection and use of any technology tools, including those powered by AI.
Ms. Ke represents Wei Chen, a multi-millionaire businessman, in his divorce from Nina Zhang, who lives with their three children in West Vancouver. Last December, the court ordered Mr. Chen to pay $16,062 a month in child support after his annual income was assessed at $1-million. Before that ruling, Ms. Ke had filed an application seeking permission for Mr. Chen's children to travel to China. The notice of application cited one case in which a mother was permitted to take her seven-year-old child to India for six weeks, and another granting a mother's application to travel with her nine-year-old child to China for four weeks to visit her parents and friends. The error came to light after Ms. Zhang's lawyers asked for copies of the cases because they could not locate them by their citation numbers.
In court, Ms. Ke expressed deep remorse, acknowledged she had not understood the risks of using an AI program, and apologized to the court and opposing counsel for her error. Opposing counsel sought special costs for abuse of process, but Justice Masuhara declined, finding Ms. Ke sincere in her apology. This is not the first time a lawyer has been sanctioned over the use of ChatGPT. In a similar case in the United States, a judge imposed sanctions on two New York lawyers for submitting a legal brief containing fictitious case citations generated by ChatGPT, finding that they had acted in bad faith and made false and misleading statements to the court.