9 May, 2024

The Law Society of Ontario (LSO) — the entity that governs how lawyers and paralegals operate in the province — has issued a white paper on lawyer use of artificial intelligence. The 31-page report includes a short getting-started checklist and best-practice tips.
This white paper is a must-read for any lawyer or paralegal who might use AI in their practice. The overarching principle is that AI should be used as a tool and not as a replacement for professional expertise and judgment. Anyone using AI in a business setting might find the checklist and tips starting on page 19 useful.
AI risks
The requirements and suggestions are not surprising, and they track AI issues we have been following.
Risks discussed in the white paper include:
- Confidentiality breaches caused by AI tools retaining and using inputs.
- AI hallucinations, such as fabricated court decisions.
- Embedded bias in the output.
- AI chatbot conversations being construed as legal advice.
AI best practices
Best practices include:
- Creating a firm policy on AI use, including when and how it can be used.
- Doing due diligence on tools before using them.
- Getting AI training.
- Verifying output.
- Understanding privacy and security settings.
- Understanding what the tool does with input.
- Being careful when using chatbots.
- Ensuring AI decision-making aligns with human rights legislation.
The white paper includes factors to consider when deciding whether to disclose to clients that AI is being used.
It also cautions non-lawyers that using AI to provide legal services to the public would be contrary to the Law Society Act.
My thoughts
The risks of AI use, of course, depend on what one is doing with it. For example, using AI to create an image for something like this blog post carries virtually no risk. However, using AI to review or draft documents carries a high risk.
In my view, there are two top risks for lawyers using AI. One is the risk of breaching confidentiality or privacy by having an AI tool review documents or by inputting sensitive information into prompts. The other is relying on AI output without verifying its accuracy.
It is inevitable that someone in a law firm will experiment with AI tools. If they don't understand the risks and issues, the result could at the very least be embarrassing for the firm. Even if a law firm does not want to embrace or experiment with AI tools, it should consider advising lawyers and staff on what not to do with them.
David Canton is a business lawyer and trademark agent at Harrison Pensa with a practice focusing on technology, privacy law, technology companies and intellectual property. Connect with David on LinkedIn and Twitter.
AI-generated image: ©moon – stock.adobe.com