European citizens may soon have protections most Americans lack: control over the use of their face recognition data.
Facial recognition tech remains largely unregulated, and we’ve seen what police and other government operators can get away with in the absence of any meaningful rules. A few cities in the U.S. have outright banned its use by city agencies, but globally, it remains a wild west of unbridled surveillance. The European Commission reportedly intends to counter this unjust reality.
The EU has regulation in the works that will give citizens more power over how their facial recognition data is used, senior officials told the Financial Times. The plan will reportedly limit “the indiscriminate use of facial recognition technology” by both companies and public authorities. It will also give citizens the right to know when their facial recognition data is being used, according to a source who spoke with the Financial Times.
According to the report, the restrictions on face recognition tech are part of a broader plan to address the use of artificial intelligence and to “foster public trust and acceptance” in this type of technology. A document obtained by the Financial Times stated that the intention is to “set a world-standard for AI regulation” with “clear, predictable and uniform rules … which adequately protect individuals.”
“AI applications can pose significant risks to fundamental rights,” the document reportedly states. “Unregulated AI systems may take decisions affecting citizens without explanation, possibility of recourse or even a responsible interlocutor.”
There have been a number of reports on how facial recognition tech has been inaccurate, misused, and abused, ranging from the tremendously dumb to the unsettlingly disturbing and, in some cases, the life-endangering. Which is oftentimes the point.
The reported EU plan to target “indiscriminate” usage of facial recognition tech in public areas would extend to both public and private entities. Officials and private companies have mostly been able to continue to deploy this technology in flawed and unethical ways because there are very few explicit laws and legal requirements for transparency that would effectively limit or end their usage.
The EU’s plans are reportedly still in their early stages, so it’s unclear what the exact parameters of the regulation will be. Still, sweeping legislation drafted to address technology that, unfortunately, is already being irresponsibly utilized is progress. The U.S. has yet to consider such a far-reaching plan; however, we’ve seen progress here as well, with three cities instituting bans on the technology and others considering similar prohibitions.
It’s also unclear if a massive surveillance system can coexist with individual rights to privacy and data. Sure, granting individuals the right to know exactly how their biometric data is being used is an important step toward transparency, but if they don’t like what they learn, what recourse will they have?
And while some regulation is better than no regulation, some human rights advocates and ethical technologists might argue that simply putting limitations on a technology that can easily target vulnerable communities is not enough—that instead, we should ban it outright.
Not all privacy experts believe that the technology needs to be banned everywhere forever; appropriate use cases may exist. But while we’re seeing a litany of abuses, and until it’s proven that this technology does more good than harm, it shouldn’t be used, especially by powerful government agencies and private corporations. It’s an argument the EU should consider as it decides how to ensure this invasive technology isn’t abused. Ultimately, its most ethical deployment may be no deployment at all.