Deepfakes

  • September 18, 2019
  • Clayton Rice, K.C.

A deepfake is a synthetic human image or recording produced using artificial intelligence. The technique superimposes existing images onto source material using a machine learning architecture called a generative adversarial network (GAN). Generative algorithms are getting so good that synthesized media will soon become indistinguishable from reality. In an unregulated legal environment, deepfakes raise questions of privacy rights and free speech.
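
For readers curious about the mechanics, the sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to produce synthetic samples while a discriminator learns to tell them from real ones, and each network improves by competing against the other. It is a minimal illustration on toy data, assuming the PyTorch library; real deepfake systems use far larger convolutional networks trained on facial imagery, and the model sizes and hyperparameters here are illustrative only.

import torch
import torch.nn as nn

latent_dim = 16

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Discriminator: scores how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(64, 2) + 3.0                  # stand-in for real images
    fake = generator(torch.randn(64, latent_dim))    # synthesized samples

    # 1) Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()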

1. Why is this scary?

In an article titled In the Deepfake Era, Counterterrorism Is Harder, published in The Atlantic on September 11, 2019, Amy Zegart of the Hoover Institution at Stanford University discussed recent examples of deepfake videos that show how convincing, and how widely available, they have become. A doctored video of U.S. House Speaker Nancy Pelosi that made her appear drunk went viral on Facebook. When Facebook refused to take it down, a technology startup posted a deepfake of Mark Zuckerberg on Instagram bragging about his power to rule the world. “Imagine this for a second: one man with total control of billions of people’s stolen data,” the fake Zuckerberg said. “Whoever controls the data, controls the future.”

The scary part is not restricted to photographs and videos. The first known use of deepfake audio to impersonate a voice was recently reported in a cyber heist case. An executive at an energy firm based in Britain thought he was talking to his boss when the voice on the line was actually an AI-based fake. The fraudulent call resulted in the transfer of $243,000. (See: Catherine Stupp. Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. The Wall Street Journal. August 30, 2019)

“The potential for deep fake deceptions in global politics gets scary very quickly,” Zegart said. “Imagine a realistic-seeming video showing an invasion, or a clandestine nuclear program, or policy makers discussing how to rig an election. Soon, even seeing won’t be believing.” Zegart went on to argue that a “wholesale reimagining of intelligence for a new technological era” is missing. “In the future,” she concluded, “intelligence will increasingly rely on open information collected by anyone, advanced code and platforms that can be accessed online for cheap or for free, and algorithms that can identify how American intelligence agencies can gain and sustain the edge while safeguarding civil liberties in a radically different technological landscape.”

2. Manipulating Evidence

In a report titled Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence (2019), published by Data & Society, authors Britt Paris of Library and Information Science at Rutgers University and Joan Donovan of the Kennedy School at Harvard University argue that deepfakes sit within a long history of media manipulation, and that the problems they pose are unlikely to be fixed by technology alone. “[T]he relationship between media and truth has never been stable,” they assert. “There has always been a politics of evidence around audio visual media – how evidence changes and is changed by its existence in cultural, social, and political structures.”

Paris & Donovan argue that “objective documentation of truth” is a 19th century legal construct. Relying on Laws of Men and Laws of Nature: The History of Scientific Expert Testimony in England and America (2007) by Tal Golan, they emphasize how photographic evidence slowly became admissible in court on a case-by-case basis from the 1850s onward. In the 19th century, witness testimony was the preferred source of proof, and written historical records were taken as fact by historians. Oral testimony and written records were viewed as “less mysterious than newer technologies, such as photography.” Because image-capture technology was distrusted, judges and juries were given “transparency into the process of image capture” through expert testimony.

Visual evidence is not only interpreted in the courtroom but also in the field of journalism. Paris & Donovan argue that, in journalism, visual evidence plays a key role in constructing public opinion. Journalists also serve as experts in the politics of evidence by “deciding how to frame media as representative of truth to the public.”

During the first Gulf War in the early 1990s, broadcast journalism defined how the West perceived the conflict. “News media turned the Gulf War into a fight between evenly matched opponents,” Paris & Donovan state, “but did not mention or show the uneven damage and death toll: 20,000 dead Iraqi soldiers and an estimated 75,000 civilian casualties, compared with 286 deaths of American soldiers.” Referring to The Gulf War Did Not Take Place (1993) by media theorist Jean Baudrillard, they argue that “US media mythologized the conflict through the stylized misrepresentation of events, justifying a colonization effort by a country with a strong and well-resourced military. These images were real images. What was manipulative was how they were contextualized, interpreted, and broadcast around the clock on cable television.”

Two technological developments set the stage for what Paris & Donovan call “cheap fakes”: the rise of social media platforms and the camera feature of mobile phones. “The combination of mobile cameras and social media,” they contend, “radically expanded the possibilities of who could distribute photographic media, to whom, and at what speeds.” Cheap fake techniques include photoshopping, lookalikes, decontextualization, and the speeding or slowing of moving images, a technique illustrated in the sketch below.
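
To make concrete how little skill a cheap fake demands, the following sketch slows a clip simply by rewriting its frames at a reduced frame rate, the same basic manipulation reportedly behind the Pelosi video. It is a minimal example assuming the OpenCV library; the file names are hypothetical.

import cv2

reader = cv2.VideoCapture("input.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Writing the same frames at 75% of the original frame rate makes
# speech sound slow and slurred without altering a single pixel.
writer = cv2.VideoWriter(
    "slowed.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    fps * 0.75,
    (width, height),
)

while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(frame)

reader.release()
writer.release()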

In a piece titled Photoshop for Democracy, published in MIT Technology Review on June 4, 2004, cultural scholar Henry Jenkins analyzed a number of online communities that used image manipulation software and social media to make “participatory culture [become] participatory government.” He referred to an image in which the faces of George W. Bush, Colin Powell and Donald Rumsfeld were grafted onto the Three Stooges, and argued that this use of Photoshop is a meaningful way for the public to hold elected officials accountable.

Paris & Donovan adopt Jenkins’ argument about Photoshop as illustrative of an important component of the history of evidence. “While the already-dominant shapers of discourse centralize power through expert interpretation,” they assert, “members of the public can redistribute that power when they have the ability to spread expressive messages at larger scales, with less expert oversight than ever before.” The image of the Three Stooges drives the point home. The manipulated image does not make an evidentiary claim. It is like a political cartoon or a protest sign – an expression of political belief. Image manipulation tools like Photoshop therefore have real consequences for individuals only when expression cannot be clearly distinguished from evidence.

3. Conclusion

Deepfakes are not going away, and it is unlikely that the problems associated with image manipulation will be solved solely by legislation or the courts.

There can be no question in Canadian law that there is a reasonable expectation of privacy in the image of one’s own face. Although courts have held that a photograph may not rise to the level of core biographical information, or represent the “ultimate essence” of an individual’s “personal being”, a photograph has been held to be “personal information” under the Privacy Act, RSC 1985, c P-21, s 3. Yet it is common for law enforcement to obtain photographs contained in government databases without judicial authorization. (See e.g., R v Flintroy, 2018 BCSC 1692 per Williams J. at para 25)

In an article titled AI can’t protect us from deepfakes, argues new report, posted to The Verge on September 18, 2019, Zoe Schiffer quoted David Greene, Civil Liberties Director at the Electronic Frontier Foundation, who emphasized that fake videos can have important uses in political commentary, parody, and anonymizing people who need identity protection. “If there’s going to be a deepfakes law,” Greene said, “it needs to account for free speech.”

