Harrison Pensa LLP

23 August, 2023

Privacy risks of AI for ID verification

The use of facial recognition software for ID verification is controversial for several reasons. A recent report by the Calgary-based legal advocacy organization Justice Centre for Constitutional Freedoms details the privacy risks of using AI-based technology for identity verification. It points out that while using facial recognition and other biometrics to authenticate people is tempting, doing so comes with serious privacy issues.

There are two fundamentally different ways to use facial recognition or other biometric-based technology that knows who we are.

ID verification

The first is to verify that we are who we say we are. In other words, that this really is David trying to enter his locked office door or access his bank account. This kind of use raises many issues that need to be considered and dealt with, but they can be addressed.

Identification

The second is simply to identify us. For example, a video feed of a public space that recognizes that David walked by this spot at 5:00 PM on Thursday. That is much more invasive than mere verification. Some countries have embraced this technology on the rationale that it lets law enforcement identify who committed a crime.
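The difference between the two modes can be made concrete in code. The sketch below is purely illustrative: the embeddings are random stand-ins (in a real system they would come from a face-recognition model), and the names, threshold, and helper functions are all hypothetical. The key point is structural: verification consults only the one claimed template (1:1), while identification searches every enrolled template (1:N).

```python
import numpy as np

# Hypothetical 128-dimensional face embeddings. Random stand-ins here;
# a real system would produce these with a face-recognition model.
rng = np.random.default_rng(0)
enrolled = {name: rng.standard_normal(128) for name in ["david", "alice", "bob"]}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, claimed_name, threshold=0.6):
    """1:1 verification: is this probe who it claims to be?
    Only the claimed person's template is consulted."""
    return cosine(probe, enrolled[claimed_name]) >= threshold

def identify(probe, threshold=0.6):
    """1:N identification: who, if anyone, is this probe?
    Every enrolled template is searched -- far more privacy-invasive,
    since it requires holding a gallery of everyone's biometrics."""
    scores = {name: cosine(probe, t) for name, t in enrolled.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The privacy asymmetry falls out of the structure: `verify` can work against a single template the person knowingly enrolled, while `identify` only works if someone has assembled a searchable database of many people's biometrics.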

England is an example of a country that has embraced massive CCTV surveillance networks. That seems odd for a country with one of the toughest private-sector privacy laws in the world. And it is not the only country with this disconnect: tough on private-sector use of personal information, yet insisting that the state needs massive surveillance powers.

Canadian perspective

In Canada, Privacy Commissioners have weighed in on this issue, finding that Clearview AI violated privacy law by scraping our images from the internet and selling facial recognition services to police. They have also published guidance on how police use of facial recognition should be limited.

The problem

Facial recognition using artificial intelligence to identify people is notoriously inaccurate, especially when dealing with people who are not white. It has a serious embedded bias problem, and many jurisdictions have banned its use for that reason. The trouble is the undercurrent: the assumption that it will be fine to use once the bias/accuracy issue is sorted out.

The longer-lasting problem, though, is not accuracy. Assuming the bias/accuracy problem can be fixed, the bigger problem is privacy.

Think of it this way. Police can only collect and retain our fingerprints if we are charged with a serious crime. So why should they be able to take our images or other biometric information from various sources without our knowledge or consent and use that to track our every move?

David Canton is a business lawyer and trademark agent at Harrison Pensa with a practice focusing on technology, privacy law, technology companies and intellectual property. Connect with David on LinkedIn and Twitter.

Image credit: ©Sergey Nivens – stock.adobe.com
