
The Risks of European Content Removal Laws on Freedom of Speech



Publication date: 18.07.2024


In the era of social media, a handful of digital giants can shape public opinion. However, new European laws aimed at curbing online "hate speech" may end up restricting freedom of speech. While legislators defend these measures as essential for a safer internet, a question arises: are social media platforms removing lawful speech? This article highlights the main findings of The Future of Free Speech’s report on the risks to free expression online.


 

Europe's Expanding Regulatory Net

 

The European Union's Digital Services Act (DSA) and Germany's Network Enforcement Act (NetzDG) are two examples of European digital regulations designed to combat the spread of illegal content online. The 2017 NetzDG required social media companies to swiftly delete unlawful content, including hate speech and defamation, or risk hefty fines. The DSA is now superseding such national laws.

 

Highest Rates of Legally Permissible Content Removal

 

Germany, with strict content removal rules such as the NetzDG, shows the highest share of legally permissible content being removed: 99.7% of removed comments on Facebook and 98.9% on YouTube were found to be legal. To avoid harsh sanctions, platforms take precautionary steps and over-remove content, a practice that can seriously harm free speech.

Sweden and France likewise show high, though somewhat lower, rates of legally permissible content being deleted. In Sweden, 94.6% of removed comments on Facebook and YouTube were found to be legitimate, compared to 92.1% and 87.5%, respectively, in France. These figures, drawn from a sample of almost 1.3 million comments, underscore the widespread problem of over-removal driven by regulatory requirements.

 

Effects of Excessive Content Moderation

 

This over-removal has far-reaching consequences. Removing legally permissible speech in order to comply with digital legislation undermines both the public's trust in social media platforms as forums for free speech and the fundamental right to freedom of expression. Most of the deleted comments were general expressions of opinion, such as support for a controversial candidate, that contained no hate speech, insults, or unlawful content and frequently broke no laws or community guidelines. The result is a worrying trend in which free speech is sacrificed to overly cautious moderation.


 

Over-moderation may stem from public pressure by the media and civil society, from attempts to avoid the disproportionate fines attached to existing rules, or from the significant expansion of platforms' own hate speech policies. Platforms may also adopt strict moderation guidelines of their own accord to protect their reputations or avoid controversial associations.

 

To justify strict internet regulations like the DSA, European authorities have asserted that social media platforms are rife with unlawful hate speech. The Future of Free Speech’s report, however, presents data showing that the majority of comments removed from platforms are not unlawful: between 87.5% and 99.7% of removed content, depending on country and platform, was legally permissible.

 

These findings show how urgently digital laws and content moderation guidelines need to prioritize the right to free speech and access to information.

 

Preserve the Right to Freedom of Speech

 

Legislators and content moderators need to understand that overly strict rules can achieve the opposite of what is intended, stifling free speech and weakening the foundations of democracy. The very minority voices these restrictive measures are meant to protect may see their own content removed.

 

Instead, we must create an online space where diverse viewpoints can coexist, with policies that restrict genuinely harmful content while allowing legitimate political conversation to continue, even when it is unpleasant or divisive. Regulators and platforms should tighten and refine the criteria for content removal so that truly dangerous material can be targeted more accurately.

 

To its credit, the DSA requires platforms to provide clearer content moderation criteria and more comprehensive, transparent user appeal procedures. This could lessen the chilling effect and restore public confidence in social media as a forum for free and open discussion. European policymakers should reconsider the effects of current digital laws and stay alert to any unforeseen consequences of the DSA's implementation.

 

The report's findings serve as a sobering reminder that the fundamental human right to free expression must not be sacrificed in the pursuit of a safer online environment. Even though platforms are not bound by international human rights law, policymakers, platforms, and civil society should ensure that content moderation practices protect users without silencing the voices that are essential to our democracy.


 
