Facial Recognition: The People Push Back

  • October 15, 2020
  • Heather Ferg

Fully automated identification technology can undermine personal privacy and public anonymity in an instant. Combined with other tools of mass surveillance, it bridges the gap between the physical and digital worlds. Despite serious privacy and human rights concerns, the use of facial recognition software has spread across the globe with breathtaking speed. It is widely used in both the public and private sectors, often without the knowledge, let alone consent, of average citizens. This post reviews the basics of how facial recognition works, the breadth of its implementation and some of the concerns associated with its unrestricted expansion.

1. The Basics

A category of biometric software, facial recognition systems use computer algorithms to isolate numerous details of a person’s face (called nodal points), such as the distance between the eyes and the shape of the nose. These details are converted to a mathematical representation or map that is stored as a “face print” or “facial template” and then used for comparison against a live capture or other digital image in a database. When the police have a photo of a person of interest (sometimes called a probe photo), it can be fed into the program and the algorithm will produce a “match” drawn from database(s) of known people. Some systems rank matches based on a probability score (which can vary depending on how the confidence parameters are set), while others purport to return positive identifications. A system that calculates probability ranks the potential matches in order of the likelihood of an actual match. (See here for a more detailed explanation.)
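
To make that matching step concrete, below is a minimal, hypothetical sketch of probability-style ranking against a database of stored templates. The 128-dimensional vectors, the cosine-similarity measure, the threshold value and all names are illustrative assumptions rather than a description of any real vendor's system:

```python
import numpy as np

def rank_matches(probe, database, threshold=0.6):
    """Rank stored face templates by similarity to a probe template.

    probe:     1-D vector, the "face print" extracted from the probe photo
    database:  dict mapping identity -> stored template vector
    threshold: minimum similarity to count as a candidate match
               (this plays the role of the "confidence parameter" above)
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {name: cosine(probe, vec) for name, vec in database.items()}
    # Keep only candidates above the confidence threshold, best first.
    candidates = [(n, s) for n, s in scores.items() if s >= threshold]
    return sorted(candidates, key=lambda x: x[1], reverse=True)

# Toy usage with random 128-dimensional templates (a real system derives
# these vectors from nodal points or a neural network, not random data).
rng = np.random.default_rng(0)
db = {f"person_{i}": rng.normal(size=128) for i in range(5)}
probe = db["person_2"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(rank_matches(probe, db))
```

Lowering the threshold surfaces more candidates but also more false matches, which is why the confidence settings mentioned above matter so much, particularly in the law enforcement context.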

2. The Breadth

The use of facial recognition technology has proliferated in recent years at a stunning rate. It is used by the public sector in areas such as immigration, national security and domestic law enforcement. Countries around the world have implemented systems that rely upon or are complemented by facial recognition software. The widespread installation of cameras has been a large part of the “Smart Cities” trend in municipal development and, as Tamir Israel of the Canadian Internet Policy and Public Interest Clinic explains in a new report, facial recognition technology is “fuelling an unprecedented level of surveillance” worldwide. In the key findings of the report, Mr. Israel explains that facial recognition now provides a fully automated, surreptitious and powerful means of identifying otherwise anonymous individuals. Once identified, the software can “pervasively link them to rich digital profiles” and provide whoever may be deploying the technology with “a means of mapping digital functionality to the physical world […]” (at p. 1).

In 2016, researchers at the Center on Privacy & Technology at Georgetown University in Washington, D.C. reported that law enforcement face recognition networks included over 117 million American adults (here). At least five major police departments – including agencies in Chicago, Dallas, and Los Angeles – either claimed to run real-time face recognition off street cameras, bought technology that can do so, or expressed a written interest in buying it. In January 2020, the New York Times reported (here) that in the previous year, over 600 law enforcement agencies had begun using Clearview AI, which runs on a database of more than three billion images scraped from the internet.

Facial recognition is used extensively in Asia, particularly in China and Hong Kong. In a 2018 article published by The Atlantic on the Chinese surveillance state (here), Anna Mitchell and Larry Diamond reported that 100% of Beijing is covered by security cameras, and the nation was expected to have 450 million cameras in place by 2020. As reported in The Guardian last year (here), facial scans are now mandatory for all Chinese mobile phone users.

Other countries use facial recognition to complement less sophisticated systems. In Argentina, the government maintains a national database called the National Register of Fugitives and Arrests to track those suspected of criminality. According to the MIT Technology Review (here), the database is kept in the form of a plain text spreadsheet readily accessible on the internet. In Buenos Aires, the database is used to feed a facial recognition system that pulls images from the country’s photo identification registry. The system monitors citizens in real time and scans for suspected criminals at the city’s train stations. This process has recently come under fire after an investigation by Human Rights Watch (here) revealed that the database publishes the private information of children (an international human rights violation) and then subjects them to facial recognition targeting.

In the private sector, facial recognition technologies are sold (in a largely unregulated fashion) for a wide variety of applications. For example, FaceFirst, a California AI company, cheerfully boasts at least 21 ways that facial recognition technology “is currently being used to make the world safer, smarter and more convenient.” They include:

  • Preventing retail crime by instantly identifying “known shoplifters;”
  • Providing targeted advertising by scanning faces while people look at things like gas station screens;
  • Diagnosing diseases;
  • Protecting police when they unwittingly stop wanted murderers for traffic violations;
  • Recognizing VIPs at sporting events so they can be offered free swag;
  • Stopping toilet paper thieves and limiting the distribution of toilet paper in public restrooms (dispensers can be programmed not to release more toilet paper to the same person until 9 minutes have gone by);
  • Tracking church attendance to better target donors and identify who to “reach out” to (i.e. nag) to get them to come to church more often; and
  • Finding lost pets. (here)

3. The Concerns

The privacy implications of the widespread use of the technology are obvious. Having one’s biometric data collected and then stored is inherently invasive. Where the data is shared or “repurposed,” the problem is multiplied. There are concerns about accuracy, particularly for people who are not white males (here), and the consequences of misidentification, especially in the law enforcement or national security contexts, can be devastating. There is an overall lack of transparency and, without regulation, there is nothing to guard against discriminatory implementation and the increased mass surveillance and over-policing of marginalized people.

4. The Push Back

Signs are emerging that citizens and municipalities are not content to have their public spaces indiscriminately scanned and scrutinized by anyone who can afford some software and a camera. Three examples are briefly discussed below.

(a) San Diego, California

Last month, the mayor of San Diego ordered that the 3,000 cameras embedded in the city’s “Smart Streetlights” be turned off. As reported in the San Diego Union-Tribune (here), controversy arose last year when the people of San Diego learned that their new streetlights had cameras in them. The latest order came in response to public outcry after the city planned to give exclusive camera access and management to the police without any public oversight. The cameras will remain off until the city can draft policies that ensure transparency, oversight and accountability.

(b) Portland, Oregon

On September 9, 2020, the city of Portland approved municipal ordinances banning the use of facial recognition technologies by city bureaus and private entities. The public space ordinance (here) recognized the right of citizens and visitors to enjoy public spaces with a reasonable assumption of privacy and anonymity, and that indiscriminate use of the technology would “degrade civil liberties.” It also acknowledged that while such technologies may have laudable benefits, the risks of misidentification and misuse are always present. Surveillance (including facial recognition) “must be transparent, accountable and designed in ways that protect personal and collective privacy […]”. The ban covers any automated or semi-automated process that assists in identifying, detecting or characterizing an individual, or in capturing information about them, based on their face.

(c) Cardiff, Wales

In a recent landmark ruling (here), the Court of Appeal in London held that the public deployment of facial recognition technology by the South Wales police breached Article 8 of the European Convention on Human Rights. Article 8, which applies to all member states of the Council of Europe, guarantees “the right to respect for […] private and family life”. The ruling represents an important step in the protection of privacy in public places.

In September 2019, a man named Ed Bridges challenged the use of facial recognition technology by the South Wales police after his image was captured twice: once while Christmas shopping in a busy marketplace and once at a peaceful anti-arms protest held outside a local arena. The police were scanning the areas using a live automated facial recognition technology called “AFR Locate”. The technology was capable of scanning 50 faces per second and capturing the associated facial biometrics. The police had stationed AFR-equipped cameras at the two locations and programmed them to scan for people on watch lists. The watch lists used photos taken from police databases and included people who were wanted on warrants, unlawfully at large, vulnerable or in need of protection. The watch lists also included people suspected of committing crimes, people whose presence at a particular event caused “particular concern” (presumably for the police) and those of “possible interest” for “intelligence purposes” (para 13).
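
The judgment does not describe how AFR Locate works internally, but the screening step it contemplates can be sketched hypothetically: extract a template for every face in a video frame, compare each against the watch list, and flag anything above a confidence threshold. Every name, the cosine-similarity measure and the threshold below are illustrative assumptions, not details of the actual system:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two face template vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_frame(frame_templates, watchlist, alert_threshold=0.8):
    """Compare each face template extracted from one video frame against
    every watch list entry; return any above-threshold hits for review.

    frame_templates: list of 1-D vectors, one per face detected in the frame
    watchlist:       dict mapping person-of-interest name -> reference template
    """
    hits = []
    for face in frame_templates:
        for name, ref in watchlist.items():
            score = cosine(face, ref)
            if score >= alert_threshold:
                hits.append((name, score))
    # In this sketch, templates that match nothing are simply dropped;
    # what a real deployment retains or deletes is a policy decision.
    return hits
```

At 50 faces per second, every passer-by is biometrically processed in this way whether or not they appear on any list, which is part of what made the deployment contentious.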

Mr. Bridges was unaware of the cameras and did not consent to his image being captured. He argued that the use of AFR was not “in accordance with the law” under Article 8(2). While he was unsuccessful in the Divisional Court, the appellate ruling held that his privacy rights had been breached because of the manner in which the watch lists were populated (anyone of interest to the police could be included) and because the police had failed to determine whether the software had inherent racial or sex-based biases (paras 124 and 164). According to a statement reported by The Guardian shortly after the decision was released (here), the police did not plan to appeal and viewed the ruling as “a judgment that we can work with.”

5. Conclusion

As COVID-19 has spread across the world, the widespread use of non-medical masks and face coverings has posed challenges to the mass use of facial recognition technology. Some programs simply don’t work: Apple’s Face ID will not recognize a user who is wearing a mask, and in May 2020 Apple introduced a passcode workaround as part of iOS 13.5.

Others are apparently having more success. As tech writer Jeremy Kahn reported (here), a number of companies claim to have software capable of carrying out facial recognition despite the use of masks. They include Amazon’s Rekognition, China’s SenseTime, Russia’s NtechLab and Corsight in Israel. However, the level of accuracy in identifying mask wearers remains unclear, particularly when the data sets used for comparison are composed of people who are not wearing masks.

As the facial recognition industry scrambles to adapt, the right to anonymity in public spaces is at the forefront of public consciousness. It appears the people have started to push back.
