How to Plan and Execute a Coup
The attempted coups in the United States in January 2021 and Brazil in January 2023 share a key element: In both cases, insurgents have skillfully moved across various social media platforms to create and spread narratives and mobilize their followers. A playbook for staging a coup is emerging, and governments and platforms have been slow to respond. More cooperation and transparency are required. While the European Union has at least created an appropriate legal framework, implementation will be challenging.
Brazil on January 8, 2023; the United States on January 6, 2021: two insurrections that reached the heart of national power. In both countries, the supporter base shared important characteristics. Its members adhered to mutually sympathetic ideologies, created a narrative about a stolen election and the need for an armed insurrection or coup, and, of course, engaged in actual violent mobilization. Much of the real action – and the real points of commonality – was online. And that matters.
In Brazil, plotters tried to hide their tracks, but a team of researchers has assembled a coherent story. They spotted evidence of coordinated planning on Telegram, Twitter, Facebook, Instagram, Gettr, and YouTube between January 1 and January 10, 2023. There is evidence that Bolsonaristas and Bolsonaro allies employed a hybrid social media strategy – hybrid in the sense that it was partly offline but also involved major social media platforms alongside less moderated alt tech platforms like Gettr, Parler, or Truth Social.
Analyses of the run-up to the storming of the Capitol two years earlier reveal parallels. ProPublica and The Washington Post studied about 650,000 Facebook posts leading up to the events of January 6, 2021. Brookings traced the impact of podcasts used to fan the flames. Together with a report by the Election Integrity Partnership that studied how the narrative of the “stolen election” turned into the #StopTheSteal movement, these analyses provide a clear picture of the patterns revealed by following digital footprints across platforms.
Insurgents in both countries evidently used the same online playbook on how to plan a coup online and execute it in the offline world: A small core group of plotters plans and organizes the various phases using encrypted messenger services; they then amplify their narrative and galvanize and radicalize users via major social media platforms; next, they mobilize their followers offline; and finally, they hide whatever tracks they left online. This use of social media is a game-changer when it comes to mobilizing insurrections because the platforms are built on algorithms designed for virality, engagement, and access to millions of users.
Avoiding Content Moderators Was Easier Than It Should Have Been
Plotters have exhibited considerable knowledge of how social media “ecosystems” work. They have managed to catapult fringe content from smaller, less moderated alt tech platforms to major social media platforms, all the while avoiding its removal. By leveraging the specific features of each platform, they have successfully ensured that different groups of users see and engage with specific content. This requires plotters to be able to navigate individual content moderation policies across a wide range of platforms. They also need to know the platforms’ real red lines on flagging and removing content to exploit their inconsistency.
YouTube, Twitter, and Facebook were comparatively good at flagging content related to election denialism in Brazil. These platforms had adopted stricter policies after the storming of the Capitol in 2021 and were resolved to clamp down on misinformation tied to the narrative of a stolen election. Yet Bolsonaristas benefited from the fact that content moderators lacked local knowledge. Given Brazil’s past as a military regime from 1964 to 1985, it was negligent of the platforms to fail to act on viral posts calling for a military coup.
The failure of US social media platforms to attune to local political contexts abroad is not new. It was highlighted by two Facebook whistleblowers, Sophie Zhang (in 2020) and Frances Haugen (in 2021), but still has not been fixed. In 2021, only nine percent of users of the major social media platforms used English. Yet the platforms concentrated an estimated 87 percent of their content moderation resources on countering English-language misinformation.
Insurgents Use Multiple Platforms to Mobilize Online
In both countries, insurgents built their social media success on mastery of more than one platform. Coordinating efforts across various types of platforms and exploiting the lack of coherence between them was key. What stands out in both Brazil and the United States is how skillfully insurgents navigated between platforms through the stages of planning, amplifying, and finally mobilizing. The plotters used strategies to leverage the various functions of internet platforms to maximum effect, bringing fringe topics such as the call for a military coup into the mainstream and mobilizing potential insurgents.
Bolsonaristas amplified their narrative, claiming that election fraud made their cause righteous and action necessary. To drive home their message, they used a QAnon-style decoding of events to support their mission. Offline events were organized as well, such as the massive trucker strike that lasted several days and the Bolsonarista camps set up in front of military barracks. In combination, these offline and online events created a drumbeat for the riots in Brasília on January 8, 2023.
The strategy used in Brazil in particular implies a high level of understanding of platform dynamics and their algorithms. The first stage involves using partially encrypted private messenger apps (like WhatsApp and Telegram) for the tactical planning of the coup. The use of these apps is widespread, complex content can be shared with large groups, and there is no content moderation. According to Statista, the penetration of WhatsApp is 99 percent in Brazil. Telegram is installed on 60 percent of smartphones.
The second stage is designed to amplify the dominant narrative – in both cases, the call for a military coup. This was achieved by cross-posting and deploying formats that encouraged online engagement. Influencers and bots then amplified the call for a military intervention, boosting specific posts for maximum exposure. Next, platform algorithms did what they were designed to do: recommend content based on engagement and virality. These strategies were highly efficient and successful in turning a narrative into a movement.
Regulators and Platforms Are Missing the Problem Under Their Noses
Governments and platforms are speculating about another coup attempt and its chances of success as if this were an abstract puzzle. They need to recognize that a playbook for planning a coup – one that maximizes on- and offline capabilities to amplify a cause and push for mobilization – is available, and that it has been employed twice already. Insurgents are learning the lessons. We are not. There are no general content moderation policies that all platforms adhere to diligently. The failure is not due to a lack of data on the insurgents: Even though the plotters deleted their digital footprints after their failed attempt in Brazil, those traces have been reconstructed.
Nevertheless, the platforms continue to obfuscate. While most of them adapted their content moderation policies after the January 6 events in Washington, they are not sufficiently transparent about the effectiveness of their interventions, as a scorecard developed by New America’s Open Technology Institute has shown. Without access to platform data, there is no way to test for impact. The only way to improve content moderation is to combine local human expertise with machine learning to create content moderation models capable of flagging highly problematic local content.
Platforms also have a blind spot when it comes to acknowledging the long-term impact of these insurgencies. They avoid responsibility by treating the coup attempts almost as if they were closed files. Yet, multidirectional platform strategies across platforms can continue to have a devastating impact on the quality of public debate, as we have seen in both the United States and Brazil. The two plots may not have delivered a successful coup, but they certainly contributed to catapulting fringe topics to the front and strengthening anti-democratic sentiment.
Regulators and Industry Must Link Up Platforms and Countries
Content moderation teams need to work with regional experts who can provide more nuanced insights into different contexts. The idea is to design hybrid (human and machine learning) content moderation models that flag harmful content in multiple languages. This requires a shift in current resource allocation and priorities toward developing such new practices.
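To make the idea concrete, here is a minimal sketch of what such a hybrid pipeline could look like. All names, patterns, and thresholds are illustrative assumptions, not a description of any platform’s actual system: locale-specific rules curated by regional experts run first, and a stand-in for a multilingual classifier handles the rest.

```python
# Illustrative sketch of hybrid (expert rules + ML) content moderation.
# All patterns, thresholds, and function names are hypothetical.
import re
from dataclasses import dataclass


@dataclass
class Verdict:
    flagged: bool
    reason: str


# Rules supplied by regional experts, keyed by locale.
# Local phrasing (e.g., Portuguese calls for military intervention)
# is exactly what generic English-centric models tend to miss.
EXPERT_RULES = {
    "pt-BR": [re.compile(r"\bintervenção militar\b", re.IGNORECASE)],
    "en-US": [re.compile(r"\bstop the steal\b", re.IGNORECASE)],
}


def ml_score(text: str) -> float:
    """Stand-in for a multilingual classifier that would return the
    probability that a post calls for political violence. A real
    system would invoke a trained model here."""
    violent_terms = ("coup", "storm the", "invadir")
    hits = sum(term in text.lower() for term in violent_terms)
    return min(1.0, 0.4 * hits)


def moderate(text: str, locale: str, threshold: float = 0.7) -> Verdict:
    # 1. Expert rules catch locally salient phrasing first.
    for pattern in EXPERT_RULES.get(locale, []):
        if pattern.search(text):
            return Verdict(True, f"expert rule ({locale})")
    # 2. The ML score covers the long tail; in practice, borderline
    #    scores would be routed to human reviewers rather than
    #    decided automatically.
    score = ml_score(text)
    if score >= threshold:
        return Verdict(True, f"model score {score:.2f}")
    return Verdict(False, "below threshold")
```

The design point is the ordering: cheap, expert-maintained local rules act as a safety net for exactly the context that a centrally trained model lacks, while the model generalizes beyond the rule list.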
While Brazil is not drafting new legislation to govern social media, its response has been swift: detaining and arresting protesters, issuing judicial orders to remove extremist content related to the far-right invasion, and pursuing those suspected of undermining Brazil’s democracy. In the United States, prosecution of insurgents is still underway. The January 6 Commission report has been released, but there has been no successful attempt at passing legislation that would require social media companies to disclose more data about content moderation. The hard truth is that in the United States, decisions on content moderation and containing the spread of harmful content are left to the social media platforms, and their track record of applying content moderation policies vigorously has been severely tempered by their fear of right-wing backlash.
Enter the European Union’s new Digital Services Act (DSA), which came into force on November 16, 2022, and will apply across the EU from February 17, 2024. It provides a framework that can be used to watch out for multidirectional platform strategies and includes provisions allowing vetted researchers access to platform data. However, this extends only to very large platforms with at least 45 million users in the EU. Smaller alt tech platforms are exempt; and if there is one takeaway from the digital coup playbook, it is the absolute need to monitor platforms of all types, regardless of their primary use and size.
To identify and monitor patterns, the EU will need to establish clear authorities and roles in coordination with national authorities. According to the DSA, the EU can rely on the assistance of NGOs, civil society, and vetted researchers to identify evolving online risks and study how algorithms work. It will need to make a point of ensuring that content moderation practices are updated to incorporate local human expertise for improved efficacy. Additional efforts are needed during election cycles: At that time, platforms need to be in contact with dedicated national authorities and researchers to share more nuanced insights into flagged narratives. This should include providing information on the timeline for adapting content moderation procedures as well as on the rate of error regarding flagged narratives.
According to the letter of the law, the EU will have most of the tools needed to regulate and work with platforms to improve accountability and minimize the risk of violent insurgencies. The new Digital Services Act reflects the realization that platforms are inadequate at self-governance. Yet Europe’s democratic resilience will depend on how efficiently the law is implemented and enforced.
This DGAP Memo was published on January 31, 2023.