E. Mantovani (LSTS), R. Sabbadini (guest author), A. Kumar (LSTS) and P. De Hert (LSTS) ponder the impact of algorithms on free speech and suggest a three-pronged plan of study.
Today, a significant share of our social interactions, and of the way we present ourselves to the world, takes place on online platforms. This evolution, coupled with the advent of Artificial Intelligence (AI) and big data applications that play an increasingly crucial role in the functioning of online platforms, raises new questions and challenges concerning the effective protection of European values and of the fundamental rights of citizens-users, particularly those enshrined in articles 10 and 11 of the Charter of Fundamental Rights of the European Union: freedom of thought and freedom of expression, the very freedoms that, according to the Eurobarometer, more than 4 in 5 Europeans cherish the most.
The possibility of being banned, suspended or otherwise penalised on social media platforms on the basis of an opinion has introduced a new form of social exclusion, a 'digital ostracism' that carries increasingly heavy social and economic costs for those affected: losing friends on Facebook, followers on Twitter, likes on Instagram, views (and revenues) on YouTube, journal article downloads, as well as the ability to engage in respectful, abuse-free dialogue with others. These costs are heightened by the apparent semi-permanence of the resulting stains on reputation, which are recorded and difficult to expunge over time (Elford, 2021). There can also be very serious real-world consequences, most notably the loss of employment or the curtailment of one's employment prospects. Moreover, in terms of democracy and governance, the 'chilling effect' on freedom of expression produced by the fear of being banished from social platforms for non-conforming opinions risks encouraging trends towards majoritarianism, unilateralism, nationalism and populism. As social psychologist Jonathan Haidt observed of American society, in Europe too the policing of opinion on social media by both the Left and the Right, with each side punishing the more nuanced thinkers on its own team, increases polarisation and makes compromise between the two sides more difficult.
Does AI interfere with free speech? It most probably does. Given the sheer amount of data uploaded by social media users (e.g. 500 hours of video uploaded to YouTube every minute in 2020), social media platforms increasingly rely on AI and machine learning algorithms to make decisions about banning, suspending or penalising users. Facebook deleted 26.9 million pieces of content for violating its Community Standards on 'hate speech' in the first quarter of 2020, roughly 17 times the 1.6 million instances of hate speech deleted in the last quarter of 2017. More than 97% of the purged hate speech in the fourth quarter of 2020 was identified by AI, as reported by Jacob Mchangama in his latest book (Mchangama, 2022).
How does AI interfere with free speech? At first sight, relying on automated processes to decide whether someone should be allowed to continue their activity on social media platforms may suggest that such processes are fairer and more objective than human-based ones. Yet the fraught issue of the actual understanding and transparency of automated decision-making must be investigated thoroughly: the 'right to an explanation' (art. 22 GDPR) and the right to 'meaningful information about the logic involved' in automated decisions (arts. 13-15 GDPR) are key for any user seeking rectification and redress for decisions that led to their exclusion from social media platforms.
Starting from these premises, a number of research questions spring to mind:
- How effective are AI applications in the balancing act between the need to eliminate illegal content from social media platforms and the need to guarantee the users’ freedom of expression?
- How free to express their opinions do social platform users feel they are? How free are they in actuality?
- What role do algorithms play in the decision-making process leading to the banishment from social media platforms or to penalising measures such as suspension, demonetisation or the ‘invisibilisation’ of one’s content?
- What protection does current and proposed legislation provide to those who feel their right to freedom of expression has been violated or risks being violated by an algorithm-based decision?
- Is it possible to completely eliminate the political biases of the programmers when designing an algorithm or an AI? If that is possible, is it desirable?
- Can algorithm-based decision-making be realistically expected to be more transparent than it currently is?
- How can citizens protect their right to freedom of expression should algorithms and machine learning prove intrinsically intractable in terms of transparency?
To answer these questions, we believe that a series of activities mobilising empirical social research, computer science and legal research is necessary. First, one would need to collect information, currently unavailable, on (i) users' awareness of the use of algorithms and machine learning on social media platforms, and (ii) users' perception of their enjoyment of freedom of expression online, e.g. whether or not they feel the need to self-censor when posting on social media platforms. In addition to desk research, it would also be crucial to run a survey aimed at the general public and a 'sentiment' analysis of messages posted on social media, to gauge how the public reacts to cases of users being de-platformed for controversial, yet legal, opinions. To be meaningful, a randomised survey aimed at the general public would require a large dataset, e.g. around 50,000 replies in at least 5 EU countries (roughly 10,000 respondents per country would keep the margin of error of a simple random sample around ±1 percentage point at a 95% confidence level).
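Purely as an illustration of what the sentiment analysis step could look like, the sketch below scores a few hypothetical reactions to a de-platforming case with the off-the-shelf VADER analyser shipped with NLTK; the example posts and the neutrality thresholds are our own assumptions, not elements of the proposal.

```python
# Minimal sketch of the sentiment-analysis step, using NLTK's VADER.
# The posts below are invented for illustration; a real study would
# ingest large volumes of messages collected via platform APIs.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download of the VADER lexicon

# Hypothetical public reactions to a user being de-platformed
posts = [
    "Banning this account was long overdue.",
    "This is censorship, plain and simple. Shameful decision.",
    "Unsure about the suspension: the opinion was controversial but legal.",
]

analyser = SentimentIntensityAnalyzer()
for post in posts:
    # 'compound' is VADER's normalised sentiment score in [-1, 1]
    score = analyser.polarity_scores(post)["compound"]
    # +/-0.05 is the threshold conventionally used with VADER
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:>8} ({score:+.2f}): {post}")
```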
Second, after learning about concrete cases and social perceptions, it is necessary to establish whether, and to what extent, AI plays a part in penalising users. The problem in investigating how algorithms impact the free speech of platform users is that, on the one hand, the exact functioning of social media algorithms is a very well-guarded secret and, on the other, that for the most advanced forms of ML-based algorithms the internal logic may be extremely challenging to describe in terms that humans in general, let alone lay people, can understand. A way around these difficulties could be to do something vaguely akin to 'correspondence tests', i.e. to run a social experiment designed to reveal the logic behind expression-based penalisations, ideally run unbeknownst to the targeted platforms.
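Again purely as a sketch, and assuming a matched-pair design in which near-identical posts differ only in the attribute under study (say, political slant), the snippet below shows how the outcomes of such a correspondence test could be analysed: a McNemar-style exact test on the discordant pairs checks whether one variant is penalised significantly more often than the other. All data here are fabricated for illustration, and a real experiment would of course require ethical and legal review.

```python
# Sketch of analysing a hypothetical correspondence test: each matched
# pair contains two near-identical posts differing only in one attribute,
# and we record whether each variant was penalised. Data are invented.
from scipy.stats import binomtest

# One tuple per matched pair: (variant_A_penalised, variant_B_penalised)
pairs = [
    (True, False), (True, False), (False, False), (True, True),
    (True, False), (False, False), (True, False), (False, True),
]

# McNemar-style exact test: only discordant pairs carry information
a_only = sum(1 for a, b in pairs if a and not b)  # A penalised, B not
b_only = sum(1 for a, b in pairs if b and not a)  # B penalised, A not

# Under the null hypothesis of no differential treatment, discordant
# outcomes should split 50/50 between the two variants
result = binomtest(a_only, a_only + b_only, p=0.5)
print(f"A-only: {a_only}, B-only: {b_only}, exact p-value: {result.pvalue:.3f}")
```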
A third avenue of research needs to engage legal scholars directly, ideally with the support of public institutions such as Data Protection Authorities (DPAs), in order to collect and analyse the regulatory and governance frameworks applicable to AI and big data applications on online platforms, at the European and national levels, and to assess the effectiveness of the monitoring and control protocols of established and planned legislation.
We believe that the combined results of these three research avenues would lead to a final set of policy recommendations, particularly needed should current or planned legislation prove unable to guarantee the fundamental right to free speech.
18 October 2022
Disclaimer:
This post is based on a project proposal the authors are planning to elaborate and submit in the near future. Readers interested in the subject are welcome to contact the authors for more details.