Current events in France have once again demonstrated the pervasive and disruptive societal impact of social media. Largely as a result of the extensive use of social media, the Yellow Vest Movement has evolved into a devastating political force within just weeks. The movement has reached proportions not seen since the riots of 1968. It has caused the government to back down on strategic reform and even forced the president to retreat.
Unfortunately, social media has also been used to intimidate the public, members of the movement itself, MPs, and law enforcement. Calls for physical violence and incitement abound. In no time, one of Europe’s most stable democracies has been thrown into institutional chaos, and the specter of populist incitement looms more dangerously than ever. It is against this backdrop that we urgently need a serious debate on the roles and obligations of social media in democracy, or even on “Net Accountability” in general.
Hate Speech and Fake News regulation is expanding across the globe. The German law (NetzDG), which entered into force on January 1, 2018, is the prime example. It has inspired several countries to crack down on harmful content and to reinforce their legal arsenals. All of these initiatives, commendable as they may be, highlight the importance of developing a coherent and workable accountability standard for the tech community.
Criminalizing certain content and holding intermediaries (social media, search engines, platforms…) accountable must satisfy basic legal principles of complicity or “secondary liability,” and in particular the principle of personal accountability. This means that the intermediary must have intended to be complicit in the commission of the punishable act (“mens rea”) or somehow be associated with it. There is no crime without the intention to commit it.
The principles of personal responsibility and the requirement of an intentional element are found in various foundational legal instruments, for instance the Declaration of the Rights of Man and of the Citizen (France, 1789), the European Convention on Human Rights, the U.N. International Covenant on Civil and Political Rights, and the Warsaw Convention on the Prevention of Terrorism, to mention just a few. The problem, of course, is that the intermediary would in fact be held accountable (or co-accountable) for content of which it is not the author.
One may attempt to develop an accountability standard based on the theory of negligence or organizational fault (“systemic,” as in the German law). Another avenue is that of “material support,” found for instance in the Patriot Act. One may also decide that complicity arises from the moment the intermediary acquires knowledge of the illicit content. Given the nature of the technology, this would in practice mean constructive, as opposed to actual, knowledge; the intermediary would not have to associate itself with the author’s intention. Whatever the legal basis for indirect liability, it is key that legislative initiatives carefully consider these choices and build in the necessary safeguards in order to avoid a regime of “objective or strict liability.”
Apart from prescribing clear procedures allowing for such constructive intent, another problem arises: it is often extremely difficult to determine whether impugned content is in fact illicit, as contextual and linguistic challenges abound. Perhaps, in order to address these difficulties of qualification while respecting essential principles of law, one should envisage remedies other than penal sanctions, better adapted to the particular nature of these questions.
Among the solutions that we propose is the creation of an “Internet Ombudsman,” a recommendation made before the Parliamentary Assembly of the Council of Europe. This Ombudsman would assist intermediaries in qualifying content. In addition to limiting the chilling effect on free speech, it would allow for an accountability standard based on the intermediary’s conscious decision not to follow the Ombudsman’s recommendation.
This solution would allow the actors of the Internet, users and intermediaries alike (social media, search engines, hosts, etc.), to obtain in good faith a recommendation on whether content is licit or illicit. The recommendation would not be binding (it would in no way constitute an order or a judgment), but if the intermediary voluntarily decided to follow it, the intermediary would no longer be exposed to any risk of criminal or civil proceedings. Conversely, an intermediary that chose to disregard the recommendation would remain exposed.
Retaining the accountability of social media or other intermediaries without such guidance seems unjustified. Even very experienced judges may have great difficulty qualifying content. In addition to the risk of violating fundamental principles of law, Hate Speech (or “incitement”) regulation could become counterproductive: the intermediaries, or even the authors, could come to be seen as victims of censorship or as “martyrs,” on top of a “Streisand effect.”
Hate Speech regulation is often combined with “Fake News” regulation. Upon closer examination of these initiatives, it transpires that they almost always amount to a prohibition against offensive or defamatory (i.e., false) statements about political figures. Thus, contrary to the case law of the European Court of Human Rights, “public figures” would in fact enjoy better protection under these laws than ordinary citizens.
Under the pretext of “protecting democracy,” we would in fact protect a tiny part of the population (those who govern us), while leaving the rest with no effective legal remedy to defend their reputation and dignity on the Internet, despite the nefarious consequences of false information for the professional and social lives of thousands of people. These initiatives constitute a violation of the principle of equality before the law.
For all these reasons, such regulation would probably be struck down by constitutional courts, the European Court of Human Rights, or the United Nations Human Rights Committee.
Fake News regulation is a clear example of a limit that must not be overstepped. Net accountability must not become an alibi for censorship.