
Machine Learning and Mass Manipulation

  • June 15, 2021
  • Heather Ferg

What are the consequences when machine learning techniques are brought to bear on the individual? Not only is large-scale manipulation possible, it can be achieved with relative ease. By manipulating what people are exposed to, content providers can achieve targeted influence on an individual’s emotional state, political leanings and beliefs about the world at large. Emotional manipulation can be weaponized in large-scale campaigns that intrude on the only space more sacred than the home: the smartphone.

1. Introduction

In my last two posts to On The Wire, I explained how algorithms work and how machine learning is used in all manner of data processing. In Machine Learning: Privacy in Processing, I explained how data is gathered and interpreted. In The Algorithms Shaping Our World, I covered how deep learning techniques expose connections and correlations invisible to human analysts. Predictive models know us better than we know ourselves. In this instalment, I will discuss some of the more glaring examples of what can happen when these tools are brought to bear on individuals.

2. Emotional Manipulation

The ability of content providers to impact a user’s emotional state by simply tweaking the algorithms gained significant attention in 2014 when the story broke of a now-infamous Facebook study. The company manipulated the feeds of almost 700,000 unknowing users to see whether it could influence their emotions. It could.

Facebook’s content is primarily presented to a user on the “News Feed” component of their homepage. As Facebook contains more content than any one person could ever view, the materials displayed or omitted are filtered by ranking algorithms. The algorithms are designed to maximize user engagement.
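
To make the mechanism concrete, here is a minimal sketch of engagement-driven ranking. The feature names and weights are illustrative assumptions for this post, not Facebook’s actual model, which is proprietary and far more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_like_prob: float     # model's estimate that this user will "like" the post
    predicted_comment_prob: float  # estimate that the user will comment
    predicted_share_prob: float    # estimate that the user will share

def engagement_score(post: Post) -> float:
    # Hypothetical weights: interactions that keep people on the platform
    # longer count for more.
    return (1.0 * post.predicted_like_prob
            + 3.0 * post.predicted_comment_prob
            + 5.0 * post.predicted_share_prob)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed shows the highest-scoring posts first; everything else is
    # effectively omitted.
    return sorted(candidates, key=engagement_score, reverse=True)
```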

In the study (here), researchers manipulated the extent to which people were exposed to positive or negative emotional content in their News Feeds. Three million posts were analyzed. They found that when people were exposed to positive emotional content, they posted more positive content of their own. When they were exposed to negative content, they posted more negative content themselves. People exposed to fewer emotional posts (either positive or negative) were less expressive.
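
As a rough illustration of how such a manipulation and its outcome measure might look, consider the sketch below. The study classified posts using word-count software (LIWC); the tiny word lists, the omission logic and the function names here are simplified assumptions for illustration only.

```python
import random

# Toy stand-ins for the positive/negative word lists used to classify posts.
POSITIVE_WORDS = {"happy", "love", "great", "wonderful"}
NEGATIVE_WORDS = {"sad", "hate", "awful", "terrible"}

def is_positive(text: str) -> bool:
    return any(word in POSITIVE_WORDS for word in text.lower().split())

def reduced_positivity_feed(posts: list[str], omit_fraction: float) -> list[str]:
    """Experimental condition: each emotionally positive post has some chance
    of being omitted from the user's News Feed."""
    return [p for p in posts
            if not (is_positive(p) and random.random() < omit_fraction)]

def positivity_rate(own_posts: list[str]) -> float:
    """Outcome measure: the share of a user's own subsequent posts that are positive."""
    return sum(is_positive(p) for p in own_posts) / len(own_posts) if own_posts else 0.0
```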

The study termed the phenomenon “emotional contagion” and concluded that “the emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks, and providing support for previously contested claims that emotions spread via contagion through a network” (here).

While the idea that our moods are impacted by the material we are exposed to is not particularly controversial, the study was unnerving. It was conducted without the knowledge or consent of users and demonstrated the ease with which small changes in content consumption can impact subsequent behaviour.

3. Large-Scale Individualized Targeting

The ability to perform mass manipulation through individualized targeting is perhaps best illustrated by two scandals that plagued the 2016 American presidential election: (1) the Cambridge Analytica scandal and (2) the Russian interference campaign. These examples show how the insights gleaned from an individual’s data can be weaponized to achieve the ends of whoever wishes to leverage them.

(a) The Cambridge Analytica Scandal

Cambridge Analytica was a British consulting firm that harvested the Facebook data of millions of users in order to build psychological profiles that could be used to target voters and influence their behaviour. User data was gathered using a free “personality quiz” that granted a third-party provider access to a user’s Facebook profile. The third party scraped the user’s Facebook information and that of their friends. For each person who took the quiz (approximately 320,000), they were able to scrape the data of at least 160 other people (here). The data was gathered under the guise of “academic use” and shared with Cambridge Analytica. Once the company had this set of training data, it processed it for psychological insights and then built the algorithms necessary to profile and target millions (here).
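
The profiling step can be sketched in a few lines. The data shapes and numbers below are stand-ins, and the approach (a regression from page likes to a personality trait, trained on quiz-takers and then applied to everyone else) follows published academic work on predicting traits from Facebook likes rather than Cambridge Analytica’s actual pipeline, which was never released.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in training data: one row per quiz-taker, one column per Facebook page,
# with a 1 wherever that person liked that page. In reality there were roughly
# 320,000 quiz-takers and thousands of page-like features.
quiz_likes = rng.integers(0, 2, size=(1_000, 500)).astype(float)
# Labels come from the quiz itself, e.g. each person's "openness" score.
openness = rng.normal(size=1_000)

# Learn to predict the trait from likes alone...
model = Ridge(alpha=1.0).fit(quiz_likes, openness)

# ...then apply the model to the scraped friends who never took the quiz.
friend_likes = rng.integers(0, 2, size=(200, 500)).astype(float)
predicted_openness = model.predict(friend_likes)

# Targeting step: address a tailored ad only to the profiles scoring highest
# on the trait the message is written to exploit.
target_audience = np.argsort(predicted_openness)[-50:]
```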

One of the early reports on the scandal was Harry Davies’ 2015 story in The Guardian (here). Mr. Davies reported that U.S. Senator Ted Cruz’s presidential campaign was using psychological data harvested from Facebook users and sounded the alarm on the ethical and privacy concerns raised by the situation.

In 2016, McKenzie Funk covered the same story in his New York Times piece Cambridge Analytica and the Secret Agenda of a Facebook Quiz (here). Mr. Funk explained that for several years Cambridge Analytica had been using Facebook as a tool to build psychological profiles of approximately 230 million adult Americans. Their personality quiz results were correlated with their real names and other data. He reported that, at that point, the company claimed to have as many as 3,000 to 5,000 data points on each person, such as age, income, debt, hobbies, criminal histories, purchases, religious leanings, health concerns, gun ownership and car ownership (here).

At the time of Mr. Funk’s article, Cambridge Analytica had been hired by the Trump campaign and Facebook was selling an advertising product known as the “dark post” – a news feed post seen only by the users being targeted. Former President Trump’s digital team was thus able “to serve different ads to different potential voters, aiming to push the exact right buttons for the exact right people at the exact right times” (here). Dark posts could be used to drum up fear, reinforce prejudices or inflame already contentious debates. In the 2016 election, the Trump campaign used the technique to target 3.5 million black voters in swing states in an attempt to discourage them from voting (here).

In 2018, former Cambridge Analytica employee and whistle-blower Christopher Wylie disclosed the details of the company’s tactics in simultaneously released interviews with The Guardian (here) and The New York Times (here). Facebook’s market capitalization dropped by over $100 billion in the days that followed (here). It recovered just fine.

(b) Invisible Russian Influence

The fact of Russian interference in the 2016 U.S. presidential election is now well known (here). Less well known are the techniques that were used to gently sway hearts and minds in a coordinated attack on the fabric of American civil society. During the election campaign, Russia’s Internet Research Agency created thousands of social media accounts and used them to bolster Trump’s candidacy and sow social and political discord in America. They created online groups, purchased ads and disseminated content that reached millions of people.

In 2018, a cache of Facebook ads purchased and run by the Internet Research Agency was published by the U.S. Congress. As concisely summarized by Olivia Solon and Julia Carrie Wong in a piece for The Guardian (here), the ads dealt with highly contentious issues and came from accounts designed to look like they promoted American ideals or values.

For example, “Blue Lives Matter” messaging was used to disseminate a photo showing a black man holding a white police officer in a chokehold while another black man threatened to stab the officer with the tip of an American flag. Pro-Confederate messaging contained statements that the U.S. Civil War was not about slavery, that Confederate black soldiers were treated better than those in the Union army, and that statues of Confederate generals should be “defended” (here). There were pro-Beyoncé and anti-Beyoncé rallies scheduled for the same date and time. Patriotic pages promoted material designed to smear presidential candidate Hillary Clinton. Importantly, the messaging was carefully targeted. For example, the anti-Beyoncé rally was promoted only to people who had studied to become police officers, whose job titles matched a list of law enforcement or military titles, or who worked as 911 dispatchers. The anti-immigrant group Stop AI (“Stop All Invaders”) targeted people who had shown an interest in Syria (here).

Black voters were heavily targeted. As reported by Wired (here), the Instagram account @blackstagram amassed over 300,000 followers and 28 million reactions to its content. It focused on increasing distrust in democracy and discouraging voter turnout for Ms. Clinton.

In terms of scope, a report commissioned by the U.S. Senate Intelligence Committee (here) found that Russian-driven propaganda spanned over 10.4 million tweets and 1,000 videos uploaded to YouTube, and reached 126 million people on Facebook and over 20 million users on Instagram. In order to collect personal information, trolls took to selling merchandise such as LGBT sex toys and American patriotic-themed artwork (here).

4. Our Phones As Our Homes

These types of propaganda campaigns exploit human vulnerability. By learning how to push the right buttons at the right times, any third party with enough money to buy ads can use emotional triggers to nudge people into whatever real-world behaviours align with their agendas. How we think about this (and where we place responsibility) starts with how we think about our smartphones. Using machine learning to refine psychological manipulation techniques is more than a new ethical challenge. It is an unregulated, often invisible intrusion into the very place we live.

In a new study from University College London detailed in the book The Global Smartphone (here), anthropologists set out to understand the consequences of smartphones for people around the world and gain a better understanding of what a smartphone actually is (p. 4). Eleven researchers embedded themselves in ten different societies for sixteen months. They focused on how older adults used their devices and how users experienced their phones.

The researchers concluded that our smartphones function as more than just “phones”. They are places of digital refuge where we live much of our interior lives. They described smartphones as “transportal homes” which are “perhaps the first object to challenge the house itself (and possibly also the workplace) in terms of the amount of time we dwell in it while awake” (p. 219). We are, they write, “always ‘at home’ in our smartphone. We have become human snails carrying our home in our pockets” (p. 219).

The smartphone has replaced much human interaction, often abruptly. The authors term this “the death of proximity” and describe it in the following terms:

“Most people become annoyed when they are sitting with someone in a restaurant and their companion in effect disappears from their company, becoming instead absorbed in their smartphone. What has happened is that the individual has, in effect, gone home. They can use this portal to zone out from the place where they are sitting, to return to a home in which they can carry out many familiar activities, from finding entertainment to organising their schedule or messaging friends or relatives through text and visual media. Previously we entirely respected the right of somebody to take their leave and go back to their own private house. However, it is disturbing when someone who appears to be sitting next to us has, to all intents and purposes, abruptly retreated to some other place from which we are excluded without saying goodbye. They may remain in our physical company, but they have disengaged.” (pp. 219-220)

In addition to elegantly capturing the unforgivable rudeness rampant in virtually all modern interactions, this passage underscores the importance of the smartphone as a unique opportunity to influence someone when their guard is down. We go to our phones for comfort, connection and distraction. We are not simply addicted to our phones. For many, they are the place where we most meaningfully live our lives. This is especially true for older users. The smartphone is a lifeline to friends, news, medical care, children and grandchildren.

When smartphones and other digital devices are viewed in this light, they cannot be easily dismissed as tools of modern vice that we have become “addicted” to; rather, they are intimate spaces worthy of protection. Viewed in this light, using machine learning to refine psychological manipulation can be construed as an intrusion into the sanctity of the home.

5. Conclusion

The 2016 election examples demonstrate that machine learning comes with problems that do not admit of easy solutions. Social media platforms have made some gains in transparency and cracked down on propaganda bots, but the manner in which invisible influences can nudge user behaviours remains a fundamental issue at the core of our society.

The central role of smartphones in our lives heightens the need to guard against their use as tools to exploit human fear, ignorance and weakness. As I will argue in my next post, if human autonomy is to be meaningfully preserved, we must emphasize digital literacy and take both freedom of thought and the right to cognitive liberty seriously.
