Technology Is Not Neutral
- July 28, 2018
- Clayton Rice, K.C.
Following the Cambridge Analytica scandal and Europe’s General Data Protection Regulation (GDPR), Facebook’s stock price took a 20% tumble this week. In an article titled Does Facebook’s plummeting stock spell disaster for the social network? published in the July 26, 2018, edition of The Guardian, Olivia Solon identified “decelerating revenue growth, fuelled by user base stagnation in Europe and the US” as the core problem. Amid the concerns about privacy, misinformation and election interference, one question looms: What is techlash?
Since the London-based magazine The Economist first used the term, it has acquired momentum among pundits and polemicists. It is generally used to describe a backlash against the big five tech companies – Facebook, Amazon, Apple, Google (Alphabet) and Microsoft. The rebellion comes from uneasy users concerned about digital surveillance and data protection, and from governments concerned about a loss of power to corporations that generate revenue larger than the gross domestic product (GDP) of some countries. Why? Because, as former Google design ethicist Tristan Harris recently said – technology is not neutral. (See: Eve Smith. The Techlash against Amazon, Facebook and Google – and what they can do. The Economist. January 20, 2018; Mark Richards. Techlash: What is it? And What does it Mean for the Future? CLNews. February 12, 2018; and, Jonathan Vanian. How Data Privacy Blunders and Conspiracy Theories Helped Fuel the ‘Techlash’. Fortune. July 18, 2018)
The solution has always been this – to protect our freedom, we must control our data. In a piece titled Tech-Lash Galore published by WIRED on April 30, 2018, Nathan Gardels described the solution in terms of controlling our fate. “Controlling our fate,” he wrote, “is not only a technical matter. It entails critically detaching from the social media to which many are becoming reliant.” Mr Gardels went on to give us this extract from Homo Deus (2017) by Yuval Noah Harari who described “dataism” as the new god on which we are dependent:
“If you have a problem in life, whether it is what to study, whom to marry or whom to vote for, you don’t ask God above or your feelings inside, you ask Google or Facebook. If they have enough data on you, and enough computing power, they know what you feel already and why you feel that way. Based on that, they can allegedly make much better decisions on your behalf than you can on your own.”
Add the fuel of ‘fake news’ to dataism and techlash erupts because the truth is not out there. Once viewed as saviours of democracy, the tech behemoths are increasingly seen as threats to truth. In an article titled Dawn of the techlash published in the February 11, 2018, edition of The Guardian, Rachel Botsman wrote this about fake news: “[It] has become a game of accusation and counter-accusation. If it started out as a useful identifier of misinformation, it is now an unhelpful catch-all term hurled at all kinds of uncomfortable truths a president, say, might not like. Likewise, many people, overwhelmed by the pace of change and the sheer amount of knowledge available, are beating a retreat to media echo chambers.”
Ms Botsman goes on to give us her definition of trust. As she said – it is a tricky thing, both to define and measure. In The Resolution of Conflict (1973) Morton Deutsch wrote: “Trust involves the delicate juxtaposition of people’s loftiest hopes and aspirations with their deepest worries and darkest fears.” Ms Botsman put it this way: “Trust is the remarkable force that pulls you over that gap between certainty and uncertainty; the bridge between the known and the unknown. And that’s why my definition of it is simple: trust is a confident relationship with the unknown.” But, as Richard Edelman commented in the Executive Summary of the Edelman Trust Barometer: Annual Global Study (2018) at p 4: “[W]e now have a world without common facts and objective truth” where “media has become the least-trusted global institution for the first time”.
Yet, Facebook maintains that it is not a media company and only a “neutral technology pathway” for users to stay connected. Ms Botsman describes Facebook’s position as misconceived and dangerous. “It is a media company,” she wrote, “with enormous influence in shaping someone’s worldview about whom to trust. And it is profit-driven.” Separating truth from fiction will get even more difficult in a world populated with artificial intelligence. We will have to forensically question everything we see, hear or read. Some developments are staggering in impact – real and potential. Let’s take a brief look at two of them.
On March 18, 2018, a milestone was reached in the history of autonomous motor vehicles. A pedestrian in Tempe, Arizona, was killed by a driverless car owned by Uber – with an emergency backup driver behind the wheel. Video of the accident released by law enforcement is available online from CNN and NBC. As the vehicle approached the pedestrian, it showed no sign of slowing down. It was the industry’s first fatality associated with self-driving technology. (See: Daisuke Wakabayashi. Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam. The New York Times. March 19, 2018)
Let’s use a hypothetical. A driverless car is travelling along a residential street at the posted speed limit. A child runs out from behind a parked car. Engaging the brakes will not avoid an accident. There is no room to veer to the right, and turning left will bring the car into an oncoming cyclist. What decision does the driverless car make? Based on what considerations? And who programmed it to make the choice?
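The answer to “who programmed it to make the choice?” can be made concrete. A minimal sketch – with every maneuver name and harm score invented purely for illustration, and bearing no resemblance to how any real autonomous-vehicle system actually works – shows how a programmer’s assumptions get baked into the decision:

```python
# Hypothetical sketch only: the maneuver names and harm scores below are
# invented for illustration. Real autonomous-driving planners are vastly
# more complex, but the point stands: someone chose the scoring.

def choose_maneuver(options):
    """Return the maneuver with the lowest estimated harm score."""
    return min(options, key=lambda o: o["harm"])

# Each harm estimate below was assigned by a programmer's model --
# and each assignment is a moral judgment, not a neutral fact.
options = [
    {"name": "brake", "harm": 0.9},       # likely strikes the child
    {"name": "veer_right", "harm": 1.0},  # no room: hits parked cars
    {"name": "veer_left", "harm": 0.7},   # endangers the oncoming cyclist
]

print(choose_maneuver(options)["name"])  # -> veer_left
```

The car does not “decide” anything; it minimizes a number someone chose. Change the harm score assigned to the cyclist and the outcome changes – which is precisely why the technology cannot be called neutral.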
On July 18, 2018, more than 2,400 specialists in artificial intelligence signed a pledge, organized by the Future of Life Institute in Boston, declaring they will not participate in the development or manufacture of robots that can identify and attack people without human oversight. The intention is to deter military firms and nations from building lethal autonomous weapons systems (LAWS). On July 19, 2018, the pledge was announced at the International Joint Conference on AI in Stockholm. (See: Ian Sample. Thousands of leading AI researchers sign pledge against killer robots. The Guardian. July 18, 2018)
The pledge states: “[W]e the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.”
Yes, Mr Zuckerberg, technology is not neutral.