Just a Face in the Crowd
- November 6, 2016
- Clayton Rice, K.C.
1. Introduction
Do you have a right to privacy in the image of your own face? Is there a difference in privacy interest between the digital image of your face taken by a government agency that regulates drivers’ licences and one taken by a security camera in a public parking garage? Is there a distinction between the hallway of a condominium building and an airport terminal or border crossing? Can the police use technology to continuously scan the faces of pedestrians walking by a street surveillance camera in real time? What are the constitutional implications under s. 8 of the Canadian Charter of Rights and Freedoms and the Fourth Amendment to the Constitution of the United States?
These questions arise from the burgeoning use of face recognition systems by state agencies to collect biometric data. Biometrics involves the collection of personal characteristics that are used to identify and label individuals. The identifiers that are catalogued are generally physiological rather than behavioural. A face recognition system is a computer application designed to identify an individual from a digital image or video source. It is similar to other biometrics such as fingerprint and iris recognition. The computer application is designed to achieve identification by comparing a digital image to those in a facial database. Biometric characteristics are part of the core of personal information that must receive a measure of constitutional protection.
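The comparison step can be sketched in a few lines of Python. This is a toy illustration only, not any vendor's actual algorithm: it assumes each face image has already been reduced to a numeric feature vector (an "embedding"), and the names, vectors, and matching threshold below are invented for the example.

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=1.0):
    """Return the closest enrolled identity, or None if no match is near enough."""
    # Find the enrolled vector with the smallest distance to the probe image.
    name, vec = min(database.items(), key=lambda item: euclidean(probe, item[1]))
    return name if euclidean(probe, vec) <= threshold else None

# Enrolled database of labelled feature vectors (invented data).
database = {
    "person_a": [0.1, 0.9, 0.3],
    "person_b": [0.8, 0.2, 0.5],
}

print(identify([0.12, 0.88, 0.31], database))  # a close match to person_a
print(identify([9.0, 9.0, 9.0], database))     # no one within the threshold
```

The privacy concern discussed in this article turns on where that `database` comes from and how large it is: the same matching loop runs whether the enrolled photos are mugshots of arrestees or the driver's-licence photos of an entire province.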
There is nothing new about the collection of biometric data by the state. In Canada, the acquisition of fingerprints on arrest is governed by the Identification of Criminals Act, RSC, 1985, c. I-1. And warrants for the seizure of bodily substances for DNA analysis may be granted by provincial court judges under s. 487.05(1) of the Criminal Code. These are examples of narrowly targeted seizures of biometric data that have passed constitutional scrutiny in Canada. What I am concerned with here is the harvesting and use of biometric digital imagery by facial recognition systems that is unregulated. The invasion of privacy is twofold: (a) the collection of biometric digital imagery and (b) the retention of it in unknown databases for unknown periods of time for unknown purposes.
2. Face Recognition Technology
On October 18, 2016, the Center on Privacy & Technology at Georgetown University Law Center in Washington, D.C. published a watershed report titled The Perpetual Line-Up: Unregulated Police Face Recognition In America authored by Clare Garvie, Alvaro Bedoya and Jonathan Frankle. The researchers compared the use of face recognition technology to a “perpetual line-up” where innocent citizens are vacuumed into law enforcement investigations without their consent and, in many instances, without their knowledge. It didn’t take long for posts to appear on the blogs of the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF). Here are some of the key findings summarized in the report, at pp. 2-4:
- Law enforcement face recognition networks include over 117 million American adults – and may soon include many more. Roughly one in two American adults has had their photograph searched this way.
- Major police departments are exploring real-time face recognition on live video surveillance cameras. Real-time face recognition lets police continuously scan the faces of pedestrians walking by a street surveillance camera. At least five major police departments – including agencies in Chicago, Dallas, and Los Angeles – either claimed to run real-time face recognition off street cameras, bought technology that can do so, or expressed an interest in buying it.
- Law enforcement face recognition is unregulated and in many instances out of control. No state has passed a law comprehensively regulating police face recognition.
- Most law enforcement agencies are not taking adequate steps to protect free speech. Of the 52 agencies that were found to use (or have used) face recognition, the researchers found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech.
- Police face recognition will disproportionately affect African Americans. Due to disproportionately high arrest rates, systems that rely on mugshot databases likely include a disproportionate number of African Americans. Despite these findings, there is no independent testing regime for racially biased error rates. In interviews, two major face recognition companies admitted that they did not run these tests either.
- Agencies are keeping critical information from the public. Of 52 agencies, only four have a publicly available use policy. And only one agency, the San Diego Association of Governments, received legislative approval for its policy.
- Major face recognition systems are not audited for misuse. Maryland’s system, which includes the license photos of over two million residents, was launched in 2011. It has never been audited. Only nine of 52 agencies indicated that they log and audit their officers’ face recognition searches for improper use. Of those, only one agency, the Michigan State Police, provided documentation showing that their audit regime was actually functional.
In a post to the EFF blog titled Memo to the DOJ: Facial Recognition’s Threat to Privacy is Worse Than Anyone Thought dated October 18, 2016, Dave Maass wrote: “While we do give up a small amount of privacy when we walk around in public, we must preserve our ability to blend in as just a face in the crowd.” In a post to the ACLU blog titled ACLU Urges Justice Department To Investigate Police Use Of Face Recognition, also on October 18, 2016, legislative counsel Neema Singh Guliani was quoted as saying: “Half of all adults in the country are in government face recognition databases, yet the vast majority of law enforcement agencies using this technology lack clear policies, audits to ensure accuracy, and transparency.” And Clare Garvie, one of the authors, was reported as describing the lack of oversight this way: “Face recognition is a powerful technology that requires strict oversight. But those controls by and large don’t exist today. With only a few exceptions, there are no laws governing police use of the technology, no standards ensuring its accuracy, and no systems checking for bias. It’s a wild west.” (See: Ava Kofman. Study: Face Recognition Systems Threaten The Privacy Of Millions. The Intercept. October 18, 2016)
The wild west comment by Ms. Garvie is reflected in Key Finding 3 of the report, at p. 2, which is particularly disturbing: “By tapping into driver’s license databases, the FBI is using biometrics in a way it’s never done before. Historically, FBI fingerprint and DNA databases have been primarily or exclusively made up of information from criminal arrests or investigations. By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans. This is unprecedented and highly problematic.” [Emphasis in original] (See also: Kaveh Waddell. Half of American Adults Are in Police Facial-Recognition Databases. The Atlantic. October 19, 2016)
3. A Call For Legislation
The report concluded with these questions and a proposed model face recognition bill. First, the questions, at p. 72:
“Are we comfortable with a world where anyone with a driver’s license is automatically enrolled in a virtual, perpetual line-up? Are we comfortable with a world where the government can find anyone, at any time, by scanning the faces of people on the sidewalk? Are we comfortable with a world where this technology is less accurate on African Americans, yet more likely to be used to try to identify them?
Technology will not wait for us to answer these questions. Neither will law enforcement. Yet state legislatures and Congress have not passed a single law to comprehensively regulate police use of face recognition – and the Supreme Court has never formally recognized a right to privacy in public. With little to guide them, most – though not all – police departments have not taken adequate steps to rein in this surveillance technology.”
Second, the model face recognition bill, at p. 102, is tailored to either Congress or a state legislature:
- the federal bill would control all federal and state law enforcement (1) access to all arrest photo databases and driver’s license and ID photo databases, and (2) use of real-time face recognition;
- the state bill would control (1) state law enforcement access to arrest photo databases, (2) state and federal law enforcement access to the driver’s license and ID photo databases maintained by that state, and (3) state law enforcement use of real-time face recognition within the state.
The legal regime in Canada is different. First, unlike the Supreme Court of the United States, the Supreme Court of Canada has recognized a general right to privacy under s. 8 of the Charter of Rights. Second, a reasonable expectation of privacy in Canadian constitutional law is not compromised by disclosure of information to a third party by virtue of the restricted purpose doctrine. This means that, for example, when a digital image is provided to a provincial licensing authority to obtain a driver’s licence, an individual’s right to privacy in that image is not relinquished for all other purposes. Third, the constitutional right to privacy in Canada incorporates the doctrine of privacy as anonymity and thus includes public places. (See: R v Dyment, [1988] 2 SCR 417 per La Forest J., at pp. 429-30; R v Wise, [1992] 1 SCR 527 per La Forest J., at paras. 71 and 83; and R v Spencer, [2014] 2 SCR 212 per Cromwell J., at para. 44)
The case law is minimal. In most instances facial recognition technology crops up as simply part of the factual background. However, it has been held that Alberta’s universal photograph requirement for issuance of drivers’ licences is a reasonable limit on religious freedom under s. 1 of the Charter. The photographs are maintained in the province’s “facial recognition data bank”. It appears that similar databanks are maintained by other provinces. And the Canada Passport Order (2004) provides for the use of “biometric facial recognition technology” in Canadian passports. The issuance of passports in Canada is governed by Crown prerogative. (See: Alberta v Hutterian Brethren of Wilson Colony, [2009] 2 SCR 567 per McLachlin C.J., at para. 4; R v Wainwright, 2012 BCPC 123 per Chaperon P.C.J., at para. 4; and, Khadr v Canada, [2007] 2 FCR 218 per Phelan J., at para. 83)
Very little is therefore known about the use of face recognition technology in Canada. On November 3, 2014, the Calgary Police Service became the first law enforcement agency in Canada to use software to compare mugshots with crime scene photographs and video when it acquired NeoFace from NEC Corporation of America. According to Insp. Rosemary Hawkins, the technology would not be used to identify members of the general public on the street. “It will be used to identify subjects involved in criminal activity under police investigation,” she said, “and the image searched against our mugshot database, which holds photos of people that have been processed on charges.” That was two years ago. (See: Facial recognition software to aid Calgary police in future investigations. CBC News. November 3, 2014; and, Jim Bronskill. Canada to test facial-recognition technology at various border locations. Global News. January 8, 2016)
Because the extent of the acquisition and retention of biometric face recognition imagery in Canada is unknown, the issue should not be left to developments in constitutional litigation, even if it is clear that warrantless seizure or retention would violate s. 8 of the Charter. Litigation leaves too much uncertainty – and for too long. As the authors said in the Georgetown report, at p. 72: “Technology will not wait for us to answer these questions. Neither will law enforcement.” It is time for Parliament to act. A bill is overdue.