YSEC Volume II (2021)
Triangulating Freedom of Speech: Business, Social Rights, and the Freedom of Speech in a Digital Age
Addresses the challenges associated with upholding freedom of speech
Deals with the interplay between freedom of speech and social and economic rights
Combines contributions from leading international academics, practitioners, and policy makers in their respective fields
This volume addresses the challenges associated with upholding freedom of speech where it conflicts with social rights, such as respect for private and family life, and with economic rights, such as the freedom to conduct a business or the rights of free movement.
In today's networked world, technology shifts happen faster than most people realise. Some of these shifts have made us all potentially powerful: media powerful. We used to sit in silence in front of newspapers and TV screens while the world was explained to us by the few “who always knew what was right and wrong”. Today, thanks to the Internet, social media, and Web 2.0, we can not only share our own thoughts with everyone in a more self-determined way, but also take part in public debate and even co-organise it ourselves. Of course, the Internet is not the counter-model to the communication (power) structures of the past. Gains in communicative self-determination are threatened by algorithmisation, platformisation, and value extraction from self-created private markets in data capitalism. Nevertheless, when it comes to mass communication by individuals, the technology of the Internet, social media, and Web 2.0 arguably offers more potential for self-determination than the "old media" ever did.
The empowerment of the individual challenges the old “grand speakers”, who suddenly detect “fake news”, echo chambers, and filter bubbles everywhere on the Internet. Internet-based communication allegedly keeps us from the “one truth”; as if newspaper hoaxes, propaganda, and narrow-mindedness were inventions of the Internet. The current heated debate about “fake news”, copyright, and “upload filters” shows that we are unsure how to deal with the newer and more complex phenomena of Internet-based speech. This is in no small part because an important benchmark – our constitutional compass – is still strongly rooted in the past. Constitutions change far more slowly than technology. Societal changes pull constitutional changes along; but what about normative content control?
Already there are demands for “old-times clarity”: truth filters on social media platforms, enormous liability sums for platforms that encourage (over-)thorough clean-ups and, if some had their way, even an “Internet truth advisory board”. At the same time, it is equally true that private individuals “regulate”: they decide what is found on the Internet and who may post on a platform. Accounting for all the interests at play and striking a "fair" balance, one that avoids an overbearing public or private curator through over- or under-regulation, is a complex matter. The authors of this volume not only offer reflections in their highly topical contributions, but also set out their understanding of what amounts to a "fair balance" within the larger frame of freedom of speech in a digital age.
New technologies have had both a positive and a negative impact on freedom of speech, constitutional rights, and democratic processes. The impact was positive in the early stages of the Internet's development, and particularly in the early stages of Web 2.0, when the Internet was designed in a more participative and cooperative manner. In recent years, however, hierarchical processes of information and data organisation have emerged through large technology companies that monopolise the distribution of information and opinion and act as the new mediators between users and the public sphere. Freedom of speech is now constrained by these mediators: large technology companies that control communicative processes. This paper analyses the role these new mediators are assuming, taking into account their impact on freedom of speech and on the configuration of the public sphere in democratic systems.
Two elements stand out with the new mediators: the dialectic of freedom of speech is shifting from the public to the private sphere, and from the state to the global sphere. Together, these two elements fuel the power of the new mediators and weaken the state's capacity for regulation and control. Yet in the ecosystems developed by technology companies, the new mediators exercise a power that is not strictly private, since they occupy and monopolise a public sphere. In the environment created by the new mediators, freedom of expression becomes a mere commercial product: information and opinion are transformed into ephemeral merchandise organised by the algorithms of Internet applications, which decide their impact on, and incidence in, the public sphere.
These algorithms were created for an economic purpose and promote fake news and radicalisation in order to attract public attention and thus generate greater profit. By promoting fake news in democratic contexts (without trying to impose a specific narrative, as in dictatorial ones), the new mediators generate a destructive tension about reality. Instead of contributing, as the traditional media do, to the social construction of reality, or, as in dictatorships, to its reconstruction according to the interests of the dominant oligarchy, they are causing the “destruction” of reality, that is, of a shared social perception of reality.
Among the many measures that can be adopted, those related to competition law stand out, with institutional measures through regulators that may avert an even greater concentration of power. Rather than restrictions, however, it is openness that is desirable: open technology that puts an end to the closed-off, hierarchical nature of applications. Telephone communication, for example, is open, allowing a plurality of operators to interconnect and making global communication possible; the same is true for e-mail servers. Communication applications that are currently closed off (WhatsApp and Telegram, for example) should likewise be open, interoperable, and managed by a plurality of operators.
Freedom of speech in the digital era comes with a number of open issues, including the role of regulation, of markets, and of the public sphere. In many of these issues, tensions of a constitutional and social nature resurface. This is also the case with some recently introduced EU policy initiatives and legal rules applicable to digital speech. While they seem to lay the ground for EU-wide rules on some important issues, their reach may remain limited without a constitutional and social understanding of the matters at hand. It therefore appears beneficial in many ways to take a wider outlook on these topics, including on the constitutional and social ramifications of regulating freedom of expression today. This article does so, joining these threads together in its analysis and thereby aiming to contribute a broader view of an evolving domain with global implications.
Anna Aurora Wennäkoski (2021)
Internet speech provides opportunities for democratic discourse but has also proven to harm democracy by amplifying disinformation, harassment, and extremism. Regulating power in the digital world challenges traditional understandings of freedom of expression and might require a legal response at the constitutional level. This article explores how internet speech and freedom of expression have been addressed in three constitutional reform processes commenced after the 2008 financial crisis, in Iceland, Ireland, and Norway. In all three cases, the novel or emerging problems involving internet speech, and the power of internet platforms in particular, were missed by constitutional reformers, while the positive aspects of internet speech were embraced and granted constitutional protection. The experiences highlight, among other things, the importance of the timing of constitutional reform: reformers necessarily focus mostly on problems of the past, and the timing of a "constitutional moment" may not be optimal for addressing what will become pressing problems. Reformers are also constrained, or perceive themselves to be constrained, by international law and by traditional constitutional doctrine, under which the state is the principal threat to fundamental rights; as a result, the power of private entities, including internet platforms, goes unaddressed, while the global scale of internet speech, far beyond the territorial jurisdiction of constitutional law, presents further complexities.
Until very recently, AI-generated content, or more precisely machine learning (ML) generated content, was still the stuff of science fiction. A recent series of important inventions gave AI the power of creation: Variational Autoencoders (VAEs) in 2013, Generative Adversarial Networks (GANs) in 2014, and Generative Pre-trained Transformers (GPT) in 2018. Synthetic products based on generative ML are useful in diverse fields of application; for example, generative ML can be used for the synthetic resurrection of a dead actor or a deceased loved one. But is speech generated by a machine protected by the right to freedom of expression in Article 10 ECHR? In contrast to a tool like a pen or a typewriter, ML can be such a decisive element in the generative process that the resulting speech is no longer (indisputably) attributable to a human speaker. I first discuss whether ML-generated utterances fall within the protective scope of freedom of expression (Article 10(1) ECHR). After concluding that this is the case, I look at specific complexities raised by ML-generated content in terms of limitations to freedom of expression (Article 10(2) ECHR). The first set of potential limitations that I explore are those following from copyright, data protection, privacy, and confidentiality law; some types of ML-generated content could potentially circumvent these limitations. Secondly, I study how new types of ML-generated content can create normative grey areas in which the boundaries between constitutionally protected and unprotected speech are not easy to draw, discussing two such types: virtual child pornography and fake news/disinformation. Thirdly, I argue that the nuances of Article 10 ECHR are not easily captured in an automated filter, and I discuss the potential implications of the arms race between automated filters and ML-generated content.
Transnational digital platforms have contributed greatly to freedom of expression, not least through easy access to information. However, they have also amplified infringements of private life, such as the non-consensual distribution of nudity, sexual activity, and fake pornography.
We may look to the European Court of Human Rights (ECtHR), which has dealt extensively with balancing freedom of expression against the protection of private life. The Court has developed a set of criteria for balancing the two rights. The criteria are not perfect, but they are workable, and they include factors such as the ‘contribution to a debate of general interest’ and ‘the methods involved’ in collecting and distributing information, including pictures.
However, companies, including transnational digital platforms, are not legally bound by international human rights law; only states are. To address this, the UN has developed the Guiding Principles on Business and Human Rights. These Principles do not create legal obligations, but they do set out responsibilities, including the duty to address the adverse human rights effects of business activities. So far, the transnational digital platforms have done little, if anything, to address private life infringements. Recently, Facebook has indeed established an Oversight Board and declared its commitment to the UN Guiding Principles. This is a step in the right direction but patently insufficient, as it addresses only infringements of freedom of expression and not other human rights infringements, such as violations of private life.
Sten Schaumburg-Müller (2021)
This chapter focuses on platforms’ protection against (unjustified) interference with the free drafting of house rules, viewed through the lens of European fundamental rights protection. It discusses the difference in protection between two fundamental rights in the European Charter of Fundamental Rights: article 16’s freedom to conduct a business and article 17’s right to property. Each article’s subject of protection (“the essence of the right”) is mapped by analysing the CJEU’s case law from 1974 and 1979 respectively until 2020, and the outcome of this analysis is applied to the process of running a platform. The analysis shows that article 16, rather than article 17, covers platforms’ house rule-drafting. However, it is unlikely that restrictive measures will interfere with the essence of article 16. Measures limiting house rule-drafting can therefore be justifiable.
To be justifiable, such measures must live up to the principle of proportionality, so the question subsequently arises whether potential measures do so. Because no standardised test exists, and to avoid a purely normative answer to that question, the Unfair Contract Terms Directive’s unfairness test is used as interpretational guidance. It is concluded that measures limiting a platform’s contractual freedom can, and most likely will, be justifiable in order to protect platform users’ freedom of expression. That opens the door for future legislation.
Outside the direct realm of platforms, this chapter demonstrates that the freedom to conduct a business has been reduced to an empty shell, or rather, a shell that has never been inhabited. Apart from the situation where an undertaking would be able to demonstrate that a proposed measure would mean the end of the business, the freedom to conduct a business does not provide any effective protection.
In the wake of the 2020 presidential election, incumbent Donald Trump voiced dubious allegations of vote fraud. In response, the social media giants Twitter and Facebook took the dramatic step of attaching corrective statements to the U.S. president’s posts, highlighting their power over core political speech. In his 2014 article ‘Old-School/New-School Speech Regulation’, the influential American legal scholar Jack Balkin argued that today speech is often suppressed by private actors controlling the digital infrastructure, rather than directly by governments.
In a follow-up article, ‘Free Speech in the Algorithmic Society’, Balkin suggests that there is a triangular relationship between governments, corporations, and end-users. My contribution discusses this triangulation in the context of electoral speech. In recent years, the threat of foreign interference in elections has become a widely discussed issue in established democracies. These discussions have specifically highlighted the role of social media and have led to increasingly aggressive policing of election-related content on major platforms such as Facebook and Twitter.
This chapter suggests that the securitization of elections in established democracies occurs primarily through cooperation between private actors and governments. The apparent exclusion of end-users is a worrying tendency: it obscures the distinction between established democracies and competitive authoritarian regimes, ultimately undermining the very democratic legitimacy that needs to be secured. I argue that, given the role of social media companies in the current environment, their obligations go beyond traditional notions of corporate social responsibility. These companies have de facto assumed the role of a regulator effecting election securitization in online speech. I suggest that such (self-)regulation can only be proportionate if internationally recognized standards of free elections are taken into account.
The chapter discusses possible interactions between free speech and antitrust. Antitrust law is typically associated with economic phenomena and competitive market conditions. However, in the ongoing debate over the role of Big Tech, claims have been made that antitrust enforcement should also be concerned with the exercise of free speech. The chapter shows that while such claims can be seen as controversial, they are not fundamentally ill-conceived: antitrust law used to interact with free speech and used to focus more on safeguarding political values, not merely efficiency. Since the free speech narrative is more present in the United States, the chapter covers the US perspective, but it ultimately aims to link free speech and EU competition law, drawing a framework of possible interactions. The main conclusion is that while more interaction between free speech and antitrust was seen in the past in the United States, EU competition law can in fact be more flexible when it comes to accommodating the interests of free speech.
Globalisation leads to technology diffusion, which is inevitably linked to the sale of sensitive surveillance technologies. States and the private sector are close collaborators in the market for digital surveillance tools, which form part of a new generation of disruptive technologies. As new technologies filter information, they have the potential to limit freedom of speech or violate the right to privacy. New means to mitigate their misuse and proliferation, including instruments outside the box of traditional export control regimes, should therefore be considered. This chapter explains the role of surveillance technologies and their effects in suppressing the right to privacy and the freedom of speech in a digital age. Next, it identifies the current legal regimes of export control and their limits under the Wassenaar Arrangement. In this context, it also considers the latest EU developments with regard to the Dual-Use Export Regulation and business due diligence. It then turns to private self-regulation and the need to comply with the OECD Guidelines for Multinational Enterprises and the UN Guiding Principles on Business and Human Rights, and calls for more standard-setting built on their principles.
The issue of investor nationality has been one of the most problematic issues in investment law since its inception. Indeed, the drafters of the ICSID Convention were well aware of this and, although they finalised a Convention that does not include a definition of investment, they did include a detailed definition of ‘national of a Contracting State’ in Article 25(2). The nationality of the investor is in fact a crucial factor in determining the rules applicable to the protection of the foreign investment, as well as in establishing the jurisdiction of any arbitral tribunal called to settle a dispute between an investor and the state hosting the investment. Furthermore, questions about investor nationality underlie problems of treaty and forum shopping in the system of investment law. For these reasons, The Nationality of Corporate Investors under International Investment Law is a timely contribution to the study of this problem.
This short essay reviews Nicolas M. Perrone’s Investment Treaties and the Legal Imagination: How Foreign Investors Play by Their Own Rules and reflects on how this book contributes to the debate on the origins of international investment law and the role of investors in shaping such an unbalanced international legal regime.