
Censorship to defend democracy

Misinformation, propaganda, and biased narratives are increasingly recognised as a major source of risk in the 21st century (World Economic Forum 2024). Russian interference in the 2016 US presidential election marked a pivotal moment, raising awareness of foreign influence in democratic processes.

From Russia’s ‘asymmetric warfare’ in Ukraine to Chinese influence over TikTok, autocracies are weaponising information, shifting their effort from outright repression to controlling narratives (Treisman and Guriev 2015, Guriev and Treisman 2022).

Recent work (Guriev et al 2023a, 2023b) highlights two major policy alternatives democracies could adopt to counter this threat. One strategy relies on top-down regulatory measures to control foreign media influence and the spread of misinformation. The other addresses the issue at the individual level with media literacy campaigns, fact-checking tools, behavioural interventions, and similar measures.

The first approach is particularly challenging in a democratic context due to the inherent trade-off between curbing misinformation effectively and upholding free speech as a core principle of the democratic order.

Recent actions, such as the Protecting Americans from Foreign Adversary Controlled Applications Act passed by the US Congress in 2024, the EU’s Digital Services Act (European Union 2023), the German NetzDG (Müller et al 2022, Jiménez Durán et al 2024), or Israel’s ban on Al Jazeera’s broadcasting activities, demonstrate that democratic governments see large-scale policy interventions as a necessary and viable tool to counteract the spread of misinformation.

At the same time, the rising importance of social media as a news source adds a new layer of complexity to effective regulation. In contrast to traditional forms of media such as newspapers, radio, or TV, which have a limited number of senders, social media is characterised by a large number of users who act as producers, spreaders, and consumers of information – changing roles fluidly and thereby making it harder to control the flow of information (Campante et al 2023).

To shed light on the effects of censorship in democracies, our recent work (Caesmann et al 2024) examines the EU ban on two Russian state-backed outlets, Russia Today and Sputnik. The EU implemented the ban on 2 March 2022 to counteract the spread of Russian narratives in the context of the 2022 Russian invasion of Ukraine.

The unprecedented decision to ban all activities of Russia Today and Sputnik was implemented virtually overnight, affecting all their channels, including online platforms.

We investigate the effectiveness of the ban in shifting the conversation away from narratives with a pro-Russian government bias and misinformation on Twitter (now X) among users from Europe. To do this, we leverage the fact that the ban on Russia Today and Sputnik was implemented in the EU while no comparable measure was taken in non-EU European countries such as Switzerland and the UK.

We build on recent advancements in natural language processing (Gentzkow and Shapiro 2011, Gennaro and Ash 2023) and measure each user’s opinion on the war by assessing their proximity to two narrative poles: pro-Russia and pro-Ukraine. We create these poles by analysing more than 15,000 tweets from accounts related to the Ukrainian and Russian governments.

We transform these tweets into vectors representing the average stance of pro-Russian and pro-Ukrainian government tweets, and use these vectors as the two poles of slant in the Twitter conversation on the war.
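To fix ideas, here is a minimal sketch of how such narrative poles can be built by averaging tweet embeddings. The embedding model and the example tweets are illustrative assumptions, not the paper’s actual pipeline.

```python
from sentence_transformers import SentenceTransformer

# Assumed off-the-shelf embedding model; the paper's exact NLP pipeline may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Illustrative stand-ins for the >15,000 government-account tweets.
ru_gov_tweets = [
    "The special military operation is proceeding according to plan.",
    "NATO expansion left Russia no choice in the Donbas.",
]
ua_gov_tweets = [
    "Russia's unprovoked invasion is an act of aggression.",
    "Ukraine is defending its territory against the occupiers.",
]

# Each narrative pole is the average embedding of that side's tweets.
ru_pole = model.encode(ru_gov_tweets).mean(axis=0)
ua_pole = model.encode(ua_gov_tweets).mean(axis=0)
```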

Figure 1 illustrates the content differences in government tweets, with keyword frequencies in Russian government tweets in purple (on the left) and in Ukrainian government tweets in orange (on the right). Keywords like ‘aggression’ and ‘invasion’ are used predominantly by Ukrainian accounts to frame the conflict as an invasion, while the Russian narrative describes it as a ‘military operation’.

Other keywords like ‘occupy’, ‘defence’, ‘NATO’, ‘West’, ‘nazi’, and ‘Donbas’ further highlight the distinct narratives of each side. These terms underline the slant in government content, making them effective benchmarks for our measurement.

Figure 1 Keyword frequencies in Russian and Ukrainian government tweets
Note: Russian government tweets in purple (on the left) and Ukrainian government tweets in orange (on the right).

Next, we collect more than 750,000 tweets on the conflict in the four weeks around the implementation of the ban. We compute a slant measure for each tweet by calculating its proximity to the Russian pole relative to the Ukrainian one, centring it at zero. This measure takes negative values when the tweet leans towards the Ukrainian pole and positive ones for the Russian pole.
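Continuing the sketch above (reusing model, ru_pole, and ua_pole), one simple way to operationalise this measure is the cosine similarity of a tweet to the Russian pole minus its similarity to the Ukrainian pole; the paper’s exact functional form may differ.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def slant(text: str) -> float:
    """Proximity to the Russian pole relative to the Ukrainian pole.

    Positive values lean towards the pro-Russian government pole,
    negative values towards the pro-Ukrainian pole; zero is neutral.
    """
    v = model.encode(text)  # model, ru_pole, ua_pole from the sketch above
    return cosine(v, ru_pole) - cosine(v, ua_pole)

print(slant("The invasion is an unprovoked act of aggression."))  # expected < 0
```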

Figure 2 plots the time series of the raw average slant of users in the countries affected by the ban (in blue) and in those not affected (in orange). Our measure of media slant captures the dynamics of the online discussion.

Until the invasion, the conversation was moving increasingly towards the pro-Ukrainian pole. The beginning of the invasion coincides with mounting pro-Russian activity, most likely capturing the intense online campaign that flooded Europe and pushed the EU to a swift reaction.

Overall, the raw data already suggests an effect of the ban on the spread of pro-Russian government content; we observe a growing divergence in the average slant between EU and non-EU countries after the ban is implemented.


Figure 2 Average slant of users in EU and non-EU countries over time
Note: EU includes FR, DE, IT, IE, AT; non-EU includes UK, CH

To estimate the causal effect of the ban more systematically, we use a difference-in-differences strategy, comparing users located in the EU (Austria, France, Germany, Ireland, and Italy), who were affected by the ban, to users located in non-EU countries (Switzerland and the UK), who were not subject to a comparable ban during our study period.
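As a rough illustration of this estimating strategy, the sketch below runs a two-group, two-period difference-in-differences on synthetic data. The variable names (slant, eu, post, user_id) are assumptions; the paper’s actual specification works with the real tweet-level data and is richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic user-day panel standing in for the real data.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "user_id": rng.integers(0, 500, n),  # for clustered standard errors
    "eu": rng.integers(0, 2, n),         # 1 if the user is located in the EU
    "post": rng.integers(0, 2, n),       # 1 on/after the ban on 2 March 2022
})
# Build in a negative treatment effect so the example estimate is interpretable.
df["slant"] = 0.05 - 0.03 * df["eu"] * df["post"] + rng.normal(0, 0.1, n)

# The eu:post interaction is the difference-in-differences estimate.
did = smf.ols("slant ~ eu * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["user_id"]}
)
print(did.params["eu:post"])
```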

First, we focus on users who had previously interacted directly with the two banned outlets (by following, retweeting, or replying). Figure 3 shows the results of this analysis and indicates an immediate and sizeable effect of the ban, leading to a reduction in pro-Russian slant among users affected by the policy.

Our estimates suggest that the ban reduced the average slant of these ‘interaction users’ by 63.1% relative to the pre-ban mean, with no clear pre-trends before the ban. In the paper, we show that this effect is most pronounced among users who were most extreme before the ban.

A closer investigation into the temporal pattern suggests that the effect fades over time. While there is an immediate effect after the ban, the difference in average slant between users affected by the ban and those not affected closes within a few days of implementation, even within the short time horizon of our study.
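One way to trace such fading is an event-study variant of the regression above, interacting EU status with day-relative-to-ban dummies. Extending the synthetic panel from the previous sketch, rel_day is a hypothetical column (days since 2 March 2022), not the paper’s variable.

```python
# Hypothetical relative-day column, for illustration only.
df["rel_day"] = rng.integers(-14, 15, n)

# Interact EU status with day dummies; patsy's default reference here is the
# earliest day, whereas in practice one would omit the day before the ban.
es = smf.ols("slant ~ eu * C(rel_day)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["user_id"]}
)

# The eu:C(rel_day) coefficients trace the EU/non-EU slant gap day by day;
# a fading effect shows up as post-ban interactions shrinking back towards zero.
print(es.params.filter(like="eu:"))
```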

We further study the indirect effects of the ban on users who did not directly interact with the banned outlets. We find that the ban also reduced pro-Russian slant among these ‘non-interaction users’.

However, this happens to a lower degree, resulting in a decrease of approximately 17.3% from pre-ban slant levels, in contrast to the 63.1% observed among interaction users.

Notably, we find a reduction in the share of pro-Russian retweets driving this effect. This finding suggests that the ban deprives non-interaction users of slanted content that they are able and willing to share.

Our results show that the ban had some immediate impact, particularly on those users who interacted with the banned outlets before the ban implementation. However, this effect fades quickly and is muted in its reach to indirectly affected users.

In the final step of our analysis, we investigate the mechanisms that might have compensated for the ban’s effect, effectively re-balancing the supply of pro-Russian slanted content. This part of our study focuses on users identified as suppliers of slanted content.

We provide suggestive evidence that the most active suppliers increased their production of new pro-Russian content in response to the ban, thereby counteracting its overall effectiveness.

Our analysis complements insights from studies investigating small-scale policy interventions targeting the individual user (Guriev et al 2023a, 2023b) by studying the effects of a large-scale policy alternative: governmental censorship of media outlets. Specifically, we provide evidence that censorship in a democratic context can affect content circulated on social media.

However, there seem to be limits to the effectiveness of such measures, reflected in the short-lived effect of the ban and its more limited impact on users who are only indirectly affected by the policy.

Our study points to the crucial role of other suppliers who are filling the void created by censoring core outlets. This reflects the changed nature of media regulation in the context of social media, where many users can create and spread information at low costs.

The ability and willingness of other users to step in seem to limit the effectiveness of large-scale regulatory measures targeting big outlets. Successful policy interventions need to account for these limits in the context of social media.

References

Caesmann, M, J Goldzycher, M Grigoletto, and L Gschwent (2024), “Censorship in democracy”, Department of Economics Working Paper 446, University of Zurich.

Campante, F, R Durante, and A Tesei (eds) (2023), The political economy of social media, CEPR Press.

European Union (2023), “Digital Services Act – Application of the risk management framework to Russian disinformation campaigns”.

Gennaro, G, and E Ash (2023), “Emotion and reason in political language”, The Economic Journal 133(650): 904.

Gentzkow, M, and JM Shapiro (2011), “Ideological segregation online and offline”, Quarterly Journal of Economics 126(4).

Guriev, S, and D Treisman (2022), Spin dictators: The changing face of tyranny in the 21st century, Princeton University Press.

Guriev, S, T Marquis, E Zhuravskaya, and E Henry (2023a), “Curtailing false news, amplifying truth”, CEPR Discussion Paper 18650.

Guriev, S, E Henry, T Marquis, and E Zhuravskaya (2023b), “Evaluating anti-misinformation policies on social media”, VoxEU.org, 10 December.

Jiménez Durán, R, K Müller, and C Schwarz (2024), “The effect of content moderation on online and offline hate: Evidence from Germany’s NetzDG”, SSRN.

Müller, K, C Schwarz, and R Jiménez Durán (2022), “The effect of content moderation on online and offline hate”, VoxEU.org, 23 November.

Treisman, D, and S Guriev (2015), “The new authoritarianism”, VoxEU.org, 21 March.

World Economic Forum (2024), Global risks report 2024.

This article was originally published on VoxEU.org.