15 February, 2023

Artificial Intelligence and machine learning tools are coming at us so fast it’s like drinking from a fire hose. The dawn of AI and the analog-to-digital conversion that enabled it will be considered as significant as the Industrial Revolution. Both bring new opportunities, disruption, fear, uncertainty, and change. A significant difference, though, is the speed at which this is happening.
To quote from a Wired article, “artificial intelligence is here. It’s overhyped, poorly understood, and flawed but already core to our lives—and it’s only going to extend its reach.”
AI has been in development and in use for some time. It drives things like recommendation engines for online shopping and video providers, and facial recognition to unlock our phones. The recent launch of free, publicly available tools like Dall-E to create images and ChatGPT to create text has brought AI into the public eye.
Microsoft and Google recently announced they are including AI in their search engines and other products. There is a huge amount of competition in AI-assisted search as well as AI content creation.
Machine-learning AI
AI in its current form is not intelligent. It is machine learning, meaning it looks at large databases of existing material and generates results in response to natural language queries based on what it learned. Dall-E, for example, learns from a database of images of art and photographs. ChatGPT learns from a database of written material.
It is more like a parrot than Data from Star Trek.
Nobody asked if our images, information, writing, and art could be used by AI to learn about and create potentially competing works. Scraping online images of our faces for use in facial recognition tools has been banned for privacy reasons. Despite our willing publication of photos of our faces, we didn’t consent to them being used to create an online police lineup. Might similar logic be applied to AI use of our content for learning purposes?
Much art and writing is derivative in that it is influenced by what the creator has seen and read before. That's what style is all about. But laws and ethical rules kick in when that morphs into copyright violations and plagiarism. Are these AI tools nothing more than "high-tech plagiarism"?
Are AI copyright and plagiarism issues like privacy and practical obscurity? Before the web, many things were considered public information. Take documents that used to exist only as paper records in a courthouse: few would take the time and effort to find and look at them, but putting them online makes them available to anyone with a browser. Practical obscurity effectively hid from view things that were considered public. The notion of those things being public had to be reconsidered once they became easy to access. In that context, what should private or confidential information really mean? Do we have to similarly rethink the notions of ethics, copyright, and derivative works when AI is creating things based on pre-existing works?
AI ownership
Attempts to attribute ownership, authorship, or invention for patents and copyright to AI or non-human creators have not been successful. Does that mean anything generated by an AI tool is in the public domain and free for anyone to use or copy? Should we have some sort of AI watermark or disclosure obligation when publishing AI-generated works? How much human manipulation and skill must be added to AI-generated art or text before that human is considered the creator, author, or owner?
AI output sounds confident and convincing. But it is notoriously inaccurate and unreliable. Even Google produced a wrong answer when it debuted its product. Is AI text output destined to be obtuse, specious, and inaccurate? Are we going down the path of lowering writing to the standard of a supermarket tabloid?
We already have a huge problem with misinformation, confirmation bias, and the lack of a desire to base decisions and opinions on facts. Is AI going to make it worse? How do we prevent this? How do we tell what is nonsense?
Is the confident misinformation AI presents the nature of the beast, or is it in part because the material it learns is of dubious quality? After all, garbage in, garbage out. Can the misinformation issue be corrected? AI output can read like link bait articles found on the web or social media that have no substance, don’t provide what they promised, or rely on sensationalism to attract attention. Perhaps AI has just learned what humans, unfortunately, do well?
Lots of questions
How do we deal with malicious uses of AI, such as using it to create malware or to impersonate people for fraudulent purposes?
How do we deal with the issues of AI bias and transparency?
Will AI replace [insert your job here]?
How will AI change how I do my job, and how fast? How do we learn the new skill of prompt drafting?
AI tools will support, supplant, or replace things people do. At what point will it become negligent for humans to perform certain tasks or make certain decisions without using AI tools that can do them better, more accurately, and faster?
Will AI ever become sentient? If so, will it be like Data from Star Trek or Skynet from the Terminator? How do we control and direct that?
There is a myriad of AI ethical frameworks. The EU and Canada are contemplating legislation to govern AI. Legislators don't understand most tech/internet/social media issues, and IMHO get most legislation that touches them horribly wrong. So how can they possibly legislate AI issues in a way that prevents harm but doesn't get in the way of progress?
Impact on web search
Will our online material have to focus less on SEO (search engine optimization) and more on being optimized for AI queries?
In a future where AI chat supplants search and web traffic goes down, how do you promote yourself and your products? How does social media fit into this? Will AI chat tools effectively unite social media platforms?
Now if only these questions could be answered at the same speed AI is developing.
David Canton is a technology and AI lawyer at Harrison Pensa with a practice focusing on technology, privacy law, technology companies and intellectual property. Connect with David on LinkedIn and Twitter.
Image credit: ©phonlamaiphoto – stock.adobe.com