Tijana Kovac
AI-generated image from Midjourney. Sourced via The Spectator.
Amid continued displays of far-right violence in the UK, social media has yet again proved itself to be a space where extremist ideologies thrive.
Social media platforms (SMPs) have become indispensable tools for connecting with others, accessing information, and participating in global conversations. However, they also offer terrorists the opportunity to exploit ‘likes’, ‘retweets’, and ‘hashtags’ to disseminate (usually sanitised) messages to broad audiences, incite violence, recruit, fuel polarisation within societies, and coordinate attacks.
Although this threat is well-recognised, global responses to regulate SMPs remain fragmented, primarily due to the absence of a unified international approach to social media governance.
The Duality of Social Media
SMPs like Facebook, X, and YouTube have revolutionised communication, becoming essential hubs for news and political dialogue, especially among younger generations. However, SMPs also offer terrorists an efficient, cost-effective means to bypass geographic constraints and engage millions in an interactive, two-way exchange. For example, at its peak, Islamic State posted around 90,000 pro-IS tweets daily on X, generating significant global exposure and interaction.
The situation has become more complex with the rise of Generative AI, which enables the creation of hyper-realistic yet fake content designed to manipulate public opinion, spread misinformation, and sow confusion. This tactic has been observed in the Russia-Ukraine and Israel-Palestine wars, deceiving users and provoking emotional responses.
While some scholars argue that the link between consuming extremist content and engaging in violence is not straightforward, the impact of such messaging cannot be dismissed. Messaging inescapably influences consumers—this has been the premise of advertising since its inception. Moreover, emerging research increasingly links terrorists’ exploitation of offline vulnerabilities to online radicalisation.
This convergence of traditional terrorist tactics with cyber strategies presents significant national and international security threats.
Regulatory Challenges in the Digital Space
The absence of a universally accepted definition of terrorism complicates regulatory efforts as countries and political entities apply varying definitions.
In theory, terrorism is “the unlawful use or threat of violence against persons or property to further political or social objectives.” In practice, however, the label can be applied to any unfavourable action taken in response to state policies. This subjectivity, coupled with the diverse actors that may fall under terrorism’s umbrella—such as guerrilla groups or insurgents—results in definitions that are either too broad or too narrow.
The global nature of SMPs further complicates the issue, raising complex legal questions about whether states or companies should manage the threat. These challenges are evident in the differing regulatory models of the United States (US) and the European Union (EU).
The ‘Free and Open Internet’ Approach: A Terrorist’s Dream
The US lacks a comprehensive law regulating social media. The country faces unique challenges due to its strong emphasis on free speech, enshrined in the First Amendment of its Constitution.
Section 230 of the Communications Decency Act complicates attempts to regulate SMPs by providing broad immunity to tech companies. The Act was intended to protect “interactive computer service[s]” from being treated as publishers of third-party content. This legal protection is unique to the US and reflects the principle of the internet being a “marketplace of ideas.” While this approach allows diverse viewpoints to flourish on the internet, it has enabled terrorist content to proliferate online and companies to evade accountability. Attempts to regulate this space, like creating narrowly defined exceptions within Section 230, have faced significant resistance due to concerns over free speech.
An Interventionist Approach: A Step Towards Accountability
Contrastingly, the EU has taken a bold, interventionist stance on regulating SMPs through its Terrorist Content Regulation (TCR), in force since 2022. The TCR aims to curb the spread of terrorist propaganda through referral systems, mandating that platforms remove state-flagged content within one hour and imposing financial penalties on companies that fail to comply.
While the TCR addresses important security issues, it has ignited significant concerns about free expression. Its broad definition of “terrorist content” may lead to the removal of legitimate material—as occurred when Google removed over 100,000 videos related to the Syrian conflict between 2012 and 2018, causing a loss of vital evidence.
Other measures, like the EU Internet Forum’s voluntary system, fail to work at scale, as major digital platforms are usually headquartered outside the EU’s legal competence, often in the US or China.
These challenges highlight broader barriers to achieving global regulation, despite the progress made toward meaningful cooperation at the regional level.
Self-Regulation is Not a Solution
Without international cohesion, social media governance has largely relied upon self-regulation. Even with AI-assisted takedowns, the sheer volume of content poses complications for SMPs. A 2017 evaluation showed that Facebook removed only 28.3% of flagged content within 24 hours, Twitter 19.1%, and YouTube 48.5%.
Facebook’s reliance on fact-checking and flagging—which other platforms like TikTok have since adopted—to combat fake news and hate speech is insufficient, especially for users who seek out misinformation to confirm their biases. Moreover, such systems can easily be overwhelmed by malicious users who report genuine content as fake.
The challenge is not just technological. SMPs exist to generate profit, and consequently avoid strict content moderation to prevent losing users to less-regulated alternatives. This profit-driven logic leaves SMPs with little incentive to prioritise public safety without financial or operational consequences, rendering self-regulation ineffective at curbing terrorist content.
The Way Forward: A Unified Global Approach
Without coherent international legislation or a competent international regulatory body, internet governance remains fragmented and inefficient. States must set aside competing national interests and political priorities to develop a unified global framework to counter terrorist materials online. Until then, social media regulation will remain timid, allowing terrorists to exploit its gaps.
Tijana Kovac is a Master of International Relations student at Monash University, Australia, double specialising in Political Violence & Counter-Terrorism and International Diplomacy & Trade, currently completing her master’s thesis on social media governance.