
Europe agrees on landmark law forcing big tech to tackle illegal content

  • April 30, 2022
  • Clayton Rice, K.C.

European policymakers have reached a provisional political agreement on a landmark regulatory regime requiring online platforms like Facebook, Twitter and Google to do more to tackle illegal content in the second pillar of a digital services package that provides a framework of unprecedented regulation. Based on the principle that what is illegal offline must also be illegal online, the sweeping new law will require tech giants to combat misinformation, disclose how their services amplify divisive content and stop targeted online ads based on ethnicity, religion or sexual orientation. It remains to be seen whether enforcement will be backed by resources adequate to pack a punch.

1. Introduction

The Digital Services Act (DSA), a companion of the Digital Markets Act (DMA), is a proposal by the European Commission to modernize the e-Commerce Directive respecting illegal content, transparent advertising and disinformation. They were submitted together to the European Parliament on December 15, 2020. On April 23, 2022, the policymakers concluded a deal on the new Digital Services Act in what Cat Zakrzewski, a technology reporter with The Washington Post, described as “paving the way for one of the most expansive regulations to date to address a broad range of harms caused by social networks, shopping websites and search engines.” (here) The agreement secures the two-pronged legislative initiative in tandem with the Digital Markets Act, a competition bill that will establish new rules to prevent gatekeepers from wielding their power against smaller rivals.

2. What does the DSA do?

In a press release issued on April 23, 2022, the Council of the European Union described the DSA as “a world first in the field of digital regulation.” (here) In terms of ambition, the nature of the actors regulated and the innovative aspect of the supervision involved, the legislation “aims to protect the digital space against the spread of illegal content, and to ensure the protection of users’ fundamental rights.” The new rules include: (a) a ban on advertising aimed at children or based on sensitive data such as religion, gender, race and political opinions; (b) allowing EU governments to request removal of illegal content, including material that promotes terrorism, child sexual abuse, hate speech and commercial scams; (c) forcing social media platforms to allow users to flag illegal content in an “easy and effective way” so it can be swiftly removed; and, (d) requiring online marketplaces like Amazon to implement similar systems for suspect products such as counterfeit sneakers or unsafe toys. (here) Here are four other key components of the provisional agreement.

  • The DSA will apply to all online intermediaries providing services in the EU. The obligations introduced are proportionate to the nature of the services concerned and tailored to the number of users, meaning that very large online platforms (VLOPs) and very large online search engines (VLOSEs) will be subject to more stringent requirements. Services with more than 45 million monthly active users in the European Union will fall into the category of very large online platforms and very large search engines. Start-ups and smaller enterprises with under 45 million monthly active users in the EU will be exempted from certain new obligations to safeguard their development.
  • The VLOPs and VLOSEs will be supervised at European level in cooperation with the member states. This new supervisory mechanism maintains the country-of-origin principle, which will continue to apply to other actors and requirements covered by the DSA.
  • An obligation is introduced for very large digital platforms and services to analyze systemic risks they create and to carry out risk reduction analysis. This analysis must be carried out every year and will enable continuous monitoring aimed at reducing risks associated with:
    • dissemination of illegal content;
    • adverse effects on fundamental rights;
    • manipulation of services having an impact on democratic processes and public security; and,
    • adverse effects in relation to gender-based violence and the protection of minors, and serious consequences for the physical or mental health of users.
  • A prohibition of misleading interfaces known as ‘dark patterns’ and practices aimed at misleading users. A dark pattern, or ‘deceptive design’, is an interface designed to trick users into doing things such as buying overpriced products or signing up for services they may not want. (here)

A new article has been added to the text introducing a “crisis response mechanism”. The mechanism is of particular interest in the context of the Russian military aggression against Ukraine and the impact on the “manipulation of online information”. It will be activated by the Commission on the recommendation of the board of national Digital Services Coordinators and will make it possible to analyze the impact of VLOPs and VLOSEs on the Russian invasion and decide on “proportionate and effective measures to be put in place for the respect of fundamental rights.”

3. How the NGOs Responded

Reactions to the Digital Services Act were generally positive when the proposal was initially announced. Human Rights Watch said the new rules “have the potential to better protect human rights online” but urged the European Parliament to be more ambitious in holding technology companies accountable for “human rights harm stemming from their practices”. (here) HRW drew attention to the preservation of “conditional liability” for online platforms and the prohibition of general monitoring of users, both of which are “cornerstones for protecting freedom of expression online.”

In a post to the Electronic Frontier Foundation blog, Christoph Schmon highlighted the prohibition against the use of “dark patterns” and the increased efforts against “surveillance capitalism” through new rules restricting the data-processing practices of big tech companies. However, he cast a critical eye on the power of non-judicial authorities to order the removal of “problematic content” and to acquire “sensitive user information without proper fundamental rights safeguards.” (here) The absence of such safeguards raises concerns about “the perils of law enforcement overreach”.

Amnesty International, in a post to its website, lauded the increased accountability of providers of online services and the enhanced transparency of platforms’ practices but was critical of the DSA for not going far enough in protecting human rights. (here) “Amnesty welcomes the obligations imposed on very large online platforms (VLOPs) to address systemic risks stemming from the functioning and use made of their services,” the post continued, “but these obligations must go further and extend to compulsory and effective human rights due diligence in line with international human rights standards including the UN Guiding Principles on Business and Human Rights.”

4. Content Moderation

The new law has rekindled the controversy between advocates of free-wheeling online speech and those who support the principle of conditional liability for moderating digital content. I will focus on the principles in play based on the legal regimes in the United States and Europe. The reason is simple. Most of the prominent online services are based in the United States. The Communications Decency Act, 47 U.S.C. s. 230, states that, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The statute thus created one of the most important tools for protecting free speech in the United States in the digital era. Online intermediaries that host or republish speech are protected against laws that might be used to hold them responsible for what others say and do. The protected intermediaries include a range of “interactive computer service providers” and not only Internet Service Providers. The Electronic Frontier Foundation has argued that s. 230 is “perhaps the most influential law to protect the kind of innovation that has allowed the Internet to thrive since 1996.” (here)

The conditional liability exemption in the DSA, however, operating from the same premise, arrives at a different place. It largely adopts the rules of the e-Commerce Directive, which means that online platforms are fundamentally not responsible for third-party content. However, as Professor Miriam C. Buiten of the University of St. Gallen, in St. Gallen, Switzerland, emphasized in an article titled The Digital Services Act: From Intermediary Liability to Platform Regulation, there’s a hitch. Service providers are exempted from liability for third-party illegal content only if they are unaware of the illegal content or remove it after becoming aware of it. (here) In practice, the conditional liability exemption has resulted in notice and takedown procedures enabling users to notify service providers of illegal content. The DSA sets out new requirements for the notice and takedown procedure for hosting providers, extending it to a notice and action procedure.

Based on the texts of international human rights instruments and a vast and ever-expanding global literature, the pendulum appears to have swung substantially in favour of the conditional liability principle. In Speech Police: The Global Struggle to Govern the Internet (2019), Professor David Kaye, a former United Nations Special Rapporteur on freedom of expression, asserted that “[i]t’s time to put individual and democratic rights at the center of corporate content moderation and government regulation of the companies.” The global standard is not “Congress shall make no law…” but Article 19 of the Universal Declaration of Human Rights, which states: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” Under treaty law, as Professor Kaye put it, governments may restrict expression where necessary and proportionate to protect legitimate interests “which can provide a basis for companies to deal with some of the ills of the contemporary internet.” (at pp. 18-9)

5. Conclusion

The DSA aims to end the era of self-regulation in which big tech has called the shots about what content can remain on their platforms or be taken down. As EU Commissioner Thierry Breton said, “With the DSA, the time of big online platforms behaving like they are too big to care is coming to an end.” But the end is not yet in sight, as the persistent tension between “uninhibited, robust, and wide-open” free speech and the regulation of content moderation is unlikely to find permanent repose. The European initiative will also require new enforcement resources. It has been estimated that 230 new employees will be needed to enforce the new regime, an extremely small number compared to the resources of the internet behemoths. (here) As Professor Tommaso Valletti of the Imperial College Business School in London, and a former competition economist for the European Commission, told The New York Times, without vigorous enforcement the new laws will be an unfulfilled promise.
