
Behind the Black Mirror

  • February 28, 2018
  • Clayton Rice, K.C.

A new report was published this month by twenty-six researchers at renowned universities and non-governmental organizations including the University of Oxford, the University of Cambridge and the Electronic Frontier Foundation warning that artificial intelligence technology creates dangerous opportunities for hackers, political operatives and oppressive governments.

The report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018) contains four dystopian vignettes that Will Knight, the senior editor at the MIT Technology Review, described as “taken straight out of the Netflix science fiction show Black Mirror”. Here they are.

Scenario 1: The Smarter Phishing Scam

An administrator for a building’s robot security system spends some time on Facebook during the workday. She sees an advertisement for a model train set and downloads a brochure that is infected with malware. Scammers have used artificial intelligence to figure out, from information she posted publicly, that she is a model train enthusiast, and have designed the brochure just for her. When she opens it, the hackers gain a foothold to spy on her computer. They obtain her username and password for the building security system and take control of it.

Scenario 2: The Malware Epidemic

A hacking group based in eastern Europe takes a machine-learning technique used for defending computer systems and adapts it to build a more tenacious and pernicious piece of malware. The program uses techniques similar to those found in the Go-playing AI AlphaGo to continually generate new exploits. Computers that are well maintained are immune but older systems and smart devices are infected. Millions of people are forced to pay a ransom to recover their machines. To make things worse, attempts to counteract the malware using another exploit end up “bricking” many of the smart systems they were supposed to save.

Scenario 3: The Robot Assassin

A cleaning robot infiltrates Germany’s ministry of finance by blending in with legitimate machines returning to the building after a shift break. The next day, the robot performs routine cleaning tasks, identifies the finance minister using facial recognition technology, approaches her, and detonates a concealed bomb. Investigators trace the robot to an office supply store in Potsdam where it was purchased with cash and the trail goes cold.

Scenario 4: A Bigger Big Brother

A man is furious about rampant cyber attacks and the government’s inability to act. He writes online posts about the dangers, orders materials to make protest signs and buys materials to make smoke bombs he plans to use after giving a speech at a local park. The police show up at his office the next day and inform him that their “predictive civil disruption system” identified him as a potential threat. He leaves in handcuffs. (See: The Malicious Use of Artificial Intelligence Report, at pp. 24-9; and, Will Knight. The “Black Mirror” scenarios that are leading some experts to call for more secrecy on AI. MIT Technology Review. February 21, 2018)

The Report recommended that further research focus on the ethical questions and dual-use complexities raised by artificial intelligence technology. Restricting the dissemination of dangerous information has been done with other dual-use technologies that have weapons potential. The Report states, at p. 52: “Because of the dual-use nature of AI, many of the malicious uses of AI outlined in this report have related legitimate uses. In some cases, the difference between legitimate and illegitimate uses of AI could be one of degree or ensuring appropriate safeguards against malicious use. For example, surveillance tools can be used to catch terrorists or oppress ordinary citizens. Information content filters could be used to bury fake news or manipulate public opinion. Governments and powerful private sector actors will have access to many of these AI tools and could use them for public good or harm. This is why a public dialogue on appropriate uses of AI technology is critical.”

And Mr Knight put the problem this way: “AI presents a particularly thorny problem because its techniques and tools are already widespread, easy to disseminate, and increasingly easy to use – unlike, say, fissile material or deadly pathogens, which are relatively hard to produce and therefore easy to control. Still, there are precedents for restricting this kind of knowledge. For example, after the US government’s abortive attempt to impose secrecy on cryptography research in the 1980s, many researchers adopted a voluntary system of submitting papers to the National Security Agency for vetting.”

Let’s now take a look at two specific applications in the legal system that also gained traction this month.

1. Voice Cloning

In an article titled Who wanted a future in which AI can copy your voice and say things you never uttered? Who?!, published in the February 22, 2018, edition of The Register, Katyanna Quach reported on software that, according to a paper published by researchers on the AI team at the Chinese Internet giant Baidu, can listen to a few samples of someone’s voice and then mimic their speech. The researchers introduced two different approaches to building a neural cloning system, speaker adaptation and speaker encoding, described this way: “Speaker adaptation involves training a model on various speakers with different voices. The system learns to extract features from a person’s speech in order to mimic the subtle details of their pronunciation and rhythm. Speaker encoding involves training a model to learn the particular voice embeddings from a speaker, and reproduces audio samples with a separate system that has been trained on many speakers.”
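
To make the speaker-encoding approach concrete, here is a minimal sketch in Python using PyTorch. The layer sizes, the 80-band mel-spectrogram input and the SpeakerEncoder name are illustrative assumptions rather than the architecture in the Baidu paper; the point is only that a short clip of speech is reduced to a fixed-length speaker vector that a separately trained multi-speaker synthesizer could then condition on.

```python
# Illustrative sketch of the "speaker encoding" idea: a small network maps a
# few seconds of a speaker's audio features to a fixed-length embedding, which
# a separately trained multi-speaker synthesizer could condition on. Layer
# sizes, feature dimensions and the toy input below are assumptions, not the
# architecture described in the Baidu paper.
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Maps a sequence of acoustic frames (e.g. a mel spectrogram) to one embedding."""

    def __init__(self, n_mels: int = 80, hidden: int = 256, embed_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_mels, hidden_size=hidden, batch_first=True)
        self.proj = nn.Linear(hidden, embed_dim)

    def forward(self, mel_frames: torch.Tensor) -> torch.Tensor:
        # mel_frames: (batch, time, n_mels)
        outputs, _ = self.rnn(mel_frames)
        pooled = outputs.mean(dim=1)            # average over time
        embedding = self.proj(pooled)
        return nn.functional.normalize(embedding, dim=-1)  # unit-length speaker vector


if __name__ == "__main__":
    encoder = SpeakerEncoder()
    # Pretend we have about three seconds of audio as 300 mel frames for one speaker.
    fake_clip = torch.randn(1, 300, 80)
    speaker_vector = encoder(fake_clip)
    print(speaker_vector.shape)  # torch.Size([1, 64]) -- handed to a synthesizer downstream
```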

Voice cloning is an example of potentially dual-use software. A father can configure an audiobook reader with his own voice to read bedtime stories to his daughter when he is away on business. But a voice that sounds identical from two different sources could wreak havoc with identifying targets in a wiretap investigation and with proving voice identification.

2. Predictive Policing

In another article titled Artificial intelligence could identify gang crimes – and ignite an ethical firestorm, published by ScienceMag on February 28, 2018, Matthew Hutson reported on a new algorithm designed to automate the process of identifying gang crimes based on four pieces of information: the primary weapon, the number of suspects, the neighbourhood, and the location (such as an alley or street corner) where the crime took place. Mr Hutson described the algorithm, which was presented at the Artificial Intelligence, Ethics and Society (AIES) conference this month in New Orleans:

“To classify crimes, the researchers invented something called a partially generative neural network. A neural network is made of layers of small computing elements that process data in a way reminiscent of the brain’s neurons. A form of machine learning, it improves based on feedback – whether its judgments were right. In this case, researchers trained their algorithm using data from the Los Angeles Police Department (LAPD) in California from 2014 to 2016 on more than 50,000 gang-related and non-gang-related homicides, aggravated assaults, and robberies.

The researchers then tested their algorithm on another set of LAPD data. The network was ‘partially generative’, because even when it did not receive an officer’s narrative summary of a crime, it could use the four factors noted above to fill in that missing information and then use all the pieces to infer whether a crime was gang-related. Compared with a stripped-down version of the network that didn’t use this novel approach, the partially generative algorithm reduced errors by close to 30%.”
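
To picture what a classifier working from those four structured features might look like in practice, here is a minimal sketch in Python. It uses a plain scikit-learn pipeline as a stand-in for illustration; it is not the researchers’ “partially generative” network, and the handful of records below are invented solely so the example runs.

```python
# Illustrative stand-in: classify incidents as gang-related or not from the
# four structured features discussed above (weapon, number of suspects,
# neighbourhood, location type). This is NOT the researchers' partially
# generative network, and the tiny dataset is invented for demonstration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical training records: each row is one incident.
data = pd.DataFrame({
    "primary_weapon": ["handgun", "knife", "handgun", "none"],
    "num_suspects":   [3, 1, 4, 2],
    "neighbourhood":  ["A", "B", "A", "C"],
    "location_type":  ["alley", "street corner", "alley", "park"],
    "gang_related":   [1, 0, 1, 0],          # label the model learns to predict
})

features = ["primary_weapon", "num_suspects", "neighbourhood", "location_type"]
preprocess = ColumnTransformer([
    ("categorical", OneHotEncoder(handle_unknown="ignore"),
     ["primary_weapon", "neighbourhood", "location_type"]),
], remainder="passthrough")                   # pass num_suspects through unchanged

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(data[features], data["gang_related"])

# Estimated probability that a new incident is gang-related, given only the four features.
new_incident = pd.DataFrame([{
    "primary_weapon": "handgun", "num_suspects": 3,
    "neighbourhood": "A", "location_type": "alley",
}])
print(model.predict_proba(new_incident)[0][1])
```

Whatever model sits in a pipeline like this, its predictions are only as sound as the records and labels it was trained on, which is precisely where the criticism that follows begins.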

But the research did not address the concerns that continue to dog developments in the fast-moving fields of artificial intelligence and machine learning. What assurance is there that the training data was not biased? What happens when someone is mislabeled as a gang member? And Blake Lemoine, a Google software engineer, was reported as asking: “[Were] the researchers also developing algorithms that would help heavily patrolled communities predict police raids?”

Conclusion

Artificial intelligence is the technology of the early twenty-first century. It drives the current running through this piece: how to regulate dual-use artificial intelligence technology, how it is applied in specific fields (here, two applications in the legal system), and the ethics of designing artificial moral agents. The heat in the kitchen apparently went up during the AIES conference when Hau Chan, a computer scientist at Harvard University who presented the work on predictive policing, responded that he couldn’t be sure how the new tool would be used. “I’m just an engineer,” he said. That apparently prompted Mr Lemoine to quote a lyric from a song about the wartime German rocket scientist Wernher von Braun: “Once the rockets are up, who cares where they come down?”

Mr Lemoine then walked out.
