11 April, 2024

Artificial intelligence is everywhere. Some predict it will be looked back on as more impactful than the industrial revolution. Trying to sort out what AI can and can't do, how to use it, and how to navigate the ethical and legal issues around it can be overwhelming. It has the potential to make many things better and easier — but also to disrupt the way we do things and even take jobs away. Many countries are scrambling to pass laws to regulate its use.
This post sets out a few high-level things to keep in mind.
AI hallucinations
AI text output is notorious for being inaccurate. It tends to hallucinate — that is, to state fabricated information with confidence. Be skeptical of anything AI tells you. AI chatbots can give bad advice, and there have been several incidents where lawyers filed briefs with courts containing fictitious case law invented by an AI tool.
Nefarious purposes
AI-generated images and video can be misleading or damaging. Take the high road and don’t create them for nefarious purposes. Don’t accept that everything you see is real, especially if published by someone with a cause or an axe to grind.
AI ownership
Courts and copyright offices have so far generally taken the position that purely AI-generated output is not owned by anyone, not even the person who prompted the AI to create it. On the one hand, using AI to create images or video is a good way to avoid the risk of inadvertently using someone else's copyrighted work. On the other hand, if you use AI to create something that might have value to you or your business, be aware that you may not be able to assert copyright or ownership over it.
Content risk
Many AI tools retain the document, question, or task that you input. If your input includes personal, confidential, or proprietary information, you might be violating privacy laws or confidentiality obligations, or putting the protection of your own IP at risk. If you want to use AI for that kind of task, pay close attention to the platform's terms and privacy promises, and avoid those that retain your input.
AI laws
AI laws are emerging and will take time to get sorted out and come into force. If you are creating AI tools, take some time to investigate where those laws are headed and design compliance into the tool as best you can. Be especially cautious if your AI might make decisions that affect people or include controversial tech like facial recognition.
Victim recourse
If you find yourself the victim of false AI-generated images or text, you may have some remedies. If, for example, the content appears on social media, it is worth checking the platform's publication rules. If you report the content using the platform's tools, the platform might remove it.
AI policies
Businesses and organizations should consider implementing AI policies to control rogue employee use of AI tools in ways that might be embarrassing or cause harm. If your business or organization uses AI tools to create and publish material, consider disclosing that fact.
Keep up to date
To keep on top of legal and ethical issues around AI, download our ebook, AI: Issues in Artificial Intelligence, containing an updated compilation of posts we have written on the subject. And sign up for our weekly Techlaw newsletter. For the record, this post was written by a human without AI assistance. The image accompanying it was, however, created using Gemini — a generative artificial intelligence chatbot developed by Google.
David Canton is a business lawyer and trademark agent at Harrison Pensa with a practice focusing on technology, privacy law, technology companies and intellectual property. Connect with David on LinkedIn and Twitter.