
Face Recognition Technology

  • April 28, 2019
  • Clayton Rice, Q.C.

Face recognition technologies use biometrics to map facial features from a photograph or video. The information is then compared with a database of known faces to find a match. The use of these technologies by governments and the private sector is widespread. Airports, public transit facilities, law enforcement, mobile phone manufacturers, social media platforms and shopping malls are all experimenting with face recognition systems and the associated flood of biometric data. The unregulated use of these technologies poses an existential threat to freedom.

1. How does face recognition work?

Face recognition systems use computer algorithms to select details about a person’s face, such as the distance between the eyes and the shape of the nose. These details are converted to a mathematical representation and compared to other faces in a database. The data about a particular face is called a template. Some face recognition systems do not make a positive identification at all; instead, they calculate a probability for each candidate and rank the potential matches in order of likelihood. (See: Electronic Frontier Foundation. Street-Surveillance Project. Face Recognition, at p 2)
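The matching step described above can be sketched in a few lines of code. This is a minimal illustration only, assuming each template is a short vector of normalized facial measurements; real systems use far larger learned feature vectors and more sophisticated similarity metrics. The names and numbers below are invented for the example.

```python
import math

# Hypothetical templates: each enrolled face reduced to a small vector of
# measurements (e.g. distance between the eyes, nose-shape metrics),
# normalized to the range 0-1. Values here are made up for illustration.
database = {
    "alice": [0.42, 0.67, 0.31, 0.80],
    "bob":   [0.55, 0.20, 0.74, 0.05],
    "carol": [0.40, 0.70, 0.30, 0.78],
}

def distance(a, b):
    """Euclidean distance between two templates (smaller = more alike)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_matches(probe, db):
    """Rank every enrolled face by similarity to the probe template,
    rather than declaring a single positive identification."""
    return sorted(
        ((name, round(distance(probe, template), 3))
         for name, template in db.items()),
        key=lambda pair: pair[1],
    )

# A probe template extracted from a new photograph or video frame.
probe = [0.41, 0.69, 0.30, 0.79]
print(rank_matches(probe, database))
```

The key design point mirrored here is the last one in the paragraph above: the system outputs an ordered list of candidates with scores, not a yes-or-no answer, and it is a human or a policy threshold that turns a ranking into an "identification".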

2. China’s Social Credit System

The most significant government surveillance program using face recognition technologies is China’s social credit system. It has been reported that 42% of the world’s video surveillance cameras are in China. In an article titled Discipline and Punish: The Birth of China’s Social-Credit System published by The Nation on January 23, 2019, Rene Raphael and Ling Xi said that words such as “honesty (cheng) and credibility (xin)” appear on state propaganda posters that “accompany a growing panoply of public and private mechanisms that assess individuals, officials, businesses, and professional sectors and reward the good and punish the bad.”

The purpose of the system is to connect security cameras in public spaces with private cameras and integrate them into one surveillance platform. It uses face recognition and artificial intelligence to monitor 1.4 billion people. No small achievement, that. In an article published in the January 7, 2018, edition of The Washington Post titled China’s watchful eye, Simon Denyer reported that the system aims to create a “Police Cloud” – a vast database of records that links everything to everyone’s identity card and face – to track where they are, what they are doing, what they believe, who they associate with and “ultimately assign them a ‘social credit’ score based on whether the government and their fellow citizens consider them trustworthy.”

Citizens deemed to be dishonest may be banned from airline travel, buying a house, maintaining certain types of employment, or may be liable to have assets frozen by the government. Less severe consequences, such as having one’s image posted on electronic billboards as a means of opprobrium and social control, are frequently reported for infractions like jaywalking. As the social credit system punishes the disobedient for not paying transportation fares or cheating at video games, it also rewards those considered trustworthy for donating blood or doing community work.

The judicial branch of the Communist Party of China also plays its role. Following the recent release of government statistics, Andy Wong of The Associated Press reported that people deemed untrustworthy have been blocked from the purchase of more than 25 million plane and train tickets. And the Chinese courts have added 13.5 million entries on a blacklist. In a piece titled Chinese courts have put on social-credit punishment list about 13.5 million people deemed untrustworthy published in The Globe and Mail edition of April 19, 2019, Nathan Vanderklippe described the “digital panopticon” this way:

“The release of statistics comes as China works to set in place a social-credit system intended to assess citizens’ reliability and ultimately to govern their ability to participate in a wide range of activities, such as exporting goods, accessing government contracts, converting currency and receiving Communist Party promotions. That system is intended to govern 19 key areas of dishonesty including spreading online rumours and false information, committing financial fraud, delivering unlicensed medical treatment, evading taxes, cheating on tests and fixing sports matches.”

Chinese government statistics are not the only source of information. With every database comes the risk of hacking and leaks. On February 13, 2019, security researcher Victor Gevers of the GDI Foundation in The Hague, The Netherlands, disclosed an exposed database owned by SenseNets that tracked the location of 2.5 million residents of Xinjiang – China’s largest administrative region and home to the Uighur minority. The database contained profiles including identification card data, sex and nationality. There was also a list of GPS coordinates revealing locations where users had been seen and “trackers” that appeared to be locations of public cameras. The trackers were associated with terms such as “mosque”, “hotel” and “internet cafe”. (See: Catalin Cimpanu. Chinese company leaves Muslim-tracking facial recognition database exposed online. ZDNet. February 14, 2019)

The pervasiveness of Chinese face recognition technology is generating new companies that manufacture sophisticated equipment at lower prices and for sale outside China. In an article titled Made in China, Exported to the World: The Surveillance State published by The New York Times on April 24, 2019, Paul Mozur, Jonah Kessel and Melissa Chan reported that 18 countries are using intelligence monitoring systems made in China and 36 have received training in topics like “public opinion guidance”. “With China’s surveillance know-how and equipment now flowing to the world,” they wrote, “critics warn that it could help underpin a future of tech-driven authoritarianism, potentially leading to a loss of privacy on an industrial scale. Often described as public security systems, the technologies have darker potential uses as tools of political repression.”

3. The Consent Problem

It is impossible to give an informed and intelligent consent to the unknown. Many surveillance cameras operate covertly. Others, although visible in public places – on buildings, in retail outlets and sports arenas – are practically invisible to undiscerning passers-by going about their daily lives. Face recognition technology has been used by British law enforcement to scan Christmas shoppers in London and Taylor Swift’s security team deployed it to root out stalkers during her Reputation tour last year. (See: Gabrielle Canon. Surveillance fears grow after Taylor Swift uses face recognition tech on fans. The Guardian. December 13, 2018)

Although the use by the Swift team raised ethical questions about privacy, a public malaise, coloured by a sense of futility, is taking hold, moored in the belief that meaningful consent to the use of face recognition technology cannot exist. In a recent poll conducted by Threatpost, 53% of respondents said they don’t believe consent is possible in real-life face recognition applications and 32% said consent should require prior notice of its use. But, consistent with that general sense of discomfort and futility, only 10% said that consent requires the ability to opt out. (See: Lindsey O’Donnell. Facial Recognition ‘Consent’ Doesn’t Exist, Threatpost Poll Finds. Threatpost. April 26, 2019)

4. The Broader Question

We all have a privacy interest in our own image that extends to public spaces whether the use of face recognition technology is known or unknown. The broader question is: Where is this technology headed? It invites the image of a new technological Silk Road. It is an important question in the context of international security concerns over the potential involvement of Huawei, China’s technology giant, as a supplier of components for new 5G networks. Huawei is also a manufacturer of face recognition systems.

According to a 2016 study by Georgetown University’s Center on Privacy & Technology titled The Perpetual Line-Up, the identities of approximately 117 million American adults were stored by law enforcement in unregulated face recognition databases. That was three years ago. And face recognition technology is playing an increasing role, with real-time deployment on body cameras, dash cams and drones. It is, as Jay Stanley of the American Civil Liberties Union described it, the wild west.

5. Conclusion

I will leave you with some better news. The innovative company D-ID, based in Tel Aviv, Israel and Palo Alto, California, has developed artificial intelligence technology that makes images unrecognizable to face recognition algorithms while leaving them looking the same to the human eye. “This allows people to store, share and utilize images and videos,” said CEO Gil Perry, “without having to worry about their faces being picked up, identified and misused by automated face recognition tools.” D-ID is a 2019 Netexplo award winner. According to Netexplo VP Marcus Goddard, the new technology is likely to have a far-reaching impact on privacy protection. (See: Stephen Mayhew. D-ID honored for its facial recognition-blocking technology. BiometricUpdate. April 18, 2019)
