Memo

September 6, 2023

The Transformative Role of AI in Reshaping Electoral Politics

Germany is increasingly caught up in the global competition between autocratic and democratic systems – including in how both sides instrumentalize technological leadership. Germany has thus supported the EU’s attempt to become a regulatory superpower, ensuring that high, liberal standards are applied to fields such as generative AI. As governments digitalize their services and AI increasingly shapes the electoral process, European policymakers need to realize that trust in new technologies is essential to political legitimacy. Given the decisive elections that will be held worldwide in 2024, they must act now to maintain this trust.

The launch of ChatGPT has put generative AI in the spotlight. This foundational technology has spurred a race among tech companies toward rapid development and release. As firms such as OpenAI rush to be the first to disrupt various sectors, they act before considering the social implications of their moves, incurring ethical debt. The results can be seen in the algorithmic reinforcement of biases, in exploitation, and in the first signs of displaced jobs. Tech companies also face expensive legal challenges, e.g., lawsuits over data use or misuse, copyright infringement, and privacy violations.

The European Union has chosen to become the global bastion of digital privacy, boasting data protection standards far stricter than those of the United States or the People’s Republic of China. Anticipating the disruptive character of AI, EU officials have crafted what is hailed as the most comprehensive legislation on artificial intelligence to date: the AI Act. Currently in its final trilogue phase, the act seeks to regulate AI applications according to their inherent risks, with the ambition of making the EU the world leader in trustworthy AI. Germany was a key proponent of defining high-risk AI systems and advocated for subjecting such systems to a risk assessment before market introduction. Leaders of German industry have, however, voiced concern that the AI Act could stifle innovation and investment if it tips into overregulation. For the EU, the challenge lies in striking a balance: attracting investment and fostering innovation hubs without scaling back safeguards for its citizens.

This paper explores one of the key sectors about to be disrupted by generative AI, one with both economic and political implications for the EU: the electoral process. Because this technology is poised to reshape one of democracy’s pivotal mechanisms, it is imperative for both voters and institutions to anticipate the changes it will drive. AI will revolutionize campaign strategies, supercharge the automation of electoral procedures, and enable the wholesale creation of content. And this is happening now. Moreover, more than 70 elections will take place in 2024, including in the United States, Taiwan, and India, as well as to the European Parliament. Against this background, the EU must define what “success” means in this sector and in the AI era in general. In the end, the EU could lead by spearheading the effort to delineate the applications for which AI can and should be used.

AI in Campaigning

Politicians worldwide have swiftly embraced generative AI in their campaigning efforts, drawn by its user-friendliness and cost-effectiveness. With only minimal prompts, for example, it enables real-time responses to campaign developments. A limited study by journalist Nicolas Kayser-Bril of politicians’ Instagram feeds across multiple countries found that image generators currently play the largest role and are predominantly wielded by right-leaning parties. Serious ethical problems may arise when the strategies behind AI-influenced political messaging are covert, non-transparent, or even illegal.

In the United States and Germany – and, thus, likely everywhere else – politicians harness generative AI to draft speeches, strategize schedules, manage campaigns, identify potential voters, or engage constituents through chatbots that emulate politicians. AI also shows potential for tracking voter engagement and pinpointing likely supporters. It increases productivity by streamlining the creation of tailored political messages, using A/B testing to refine messaging for optimal impact. AI-fueled messaging empowers campaigns to deliver personalized – and potentially fake – content to diverse voters, tailored to their predicted susceptibility to specific arguments. This resembles what Cambridge Analytica did by evaluating data collected on Facebook during the 2016 US presidential election, but it is more sophisticated and far more extensive.
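The mechanic behind such A/B testing is mundane. The following Python sketch is purely illustrative: the message variants, counts, and thresholds are invented for this example and do not describe any real campaign tool. It simply compares the response rates of two messages with a two-proportion z-test and switches to the stronger one.

```python
from math import sqrt

# Hypothetical A/B test of two campaign message variants.
# All counts are invented for illustration only.
sent_a, replies_a = 10_000, 420   # variant A: messages sent, positive responses
sent_b, replies_b = 10_000, 510   # variant B: messages sent, positive responses

p_a, p_b = replies_a / sent_a, replies_b / sent_b
p_pool = (replies_a + replies_b) / (sent_a + sent_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se  # how many standard errors apart the two rates are

print(f"response rates: A={p_a:.1%}, B={p_b:.1%}, z={z:.2f}")
if abs(z) > 1.96:  # roughly a 95% confidence threshold
    winner = "B" if p_b > p_a else "A"
    print(f"Variant {winner} performs measurably better; the campaign switches to it.")
else:
    print("No clear winner yet; keep testing.")
```

What makes this politically sensitive is not the statistics but the fact that the “winning” message can be selected per voter segment, at scale, and without disclosure.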

Although many potential dimensions of generative AI remain largely unknown, they will be felt in the elections of 2024. One possible scenario: a start-up, state actor, or covert interest develops an AI tool designed to maximize the chances of a specific candidate winning with few – or no – ethical restrictions. According to professors Archon Fung and Lawrence Lessig, such a tool could deploy automated microtargeting to affect voter behavior on a scale of millions by using reinforcement learning to generate a succession of messages that become increasingly likely to change voter sentiment.
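To make Fung and Lessig’s scenario concrete, here is a minimal, purely illustrative Python sketch of the reinforcement-learning logic they describe: an epsilon-greedy bandit that keeps sending whichever message variant has elicited the strongest response so far. The message texts, the simulated voter response, and all parameters are invented assumptions, not a description of any existing system.

```python
import random

# Hypothetical message variants a campaign tool might test; texts are invented.
messages = ["Variant A: economy-focused appeal",
            "Variant B: security-focused appeal",
            "Variant C: identity-focused appeal"]
shown = [0, 0, 0]      # how often each variant was sent
positive = [0, 0, 0]   # how often it elicited the desired reaction

def simulated_voter_response(variant: int) -> bool:
    """Stand-in for real engagement data; the true rates are unknown to the tool."""
    hidden_rates = [0.03, 0.05, 0.08]  # invented for the simulation
    return random.random() < hidden_rates[variant]

EPSILON = 0.1  # explore 10% of the time, otherwise exploit the best-known variant
for _ in range(10_000):
    if random.random() < EPSILON or sum(shown) == 0:
        choice = random.randrange(len(messages))                 # explore
    else:
        rates = [positive[i] / shown[i] if shown[i] else 0.0
                 for i in range(len(messages))]
        choice = max(range(len(messages)), key=lambda i: rates[i])  # exploit
    shown[choice] += 1
    if simulated_voter_response(choice):
        positive[choice] += 1

for msg, s, p in zip(messages, shown, positive):
    rate = p / s if s else 0.0
    print(f"{msg}: sent {s}, response rate {rate:.1%}")
```

The algorithm itself is textbook material; the danger lies in pointing it at millions of individually profiled voters without transparency or consent.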

Given the above, disclosing the nature and scope of the AI systems used in elections needs to be made mandatory. Before such systems are employed, they must undergo extensive testing to guard against data breaches, vulnerabilities, and similar risks. In Germany, members of almost all major parties use AI campaign management systems that gather data to improve communication among party members, supporters, and voters. While this data is anonymized, the identities of individuals can still be triangulated – a grave problem that a benevolent hacker made public in order to expose such vulnerabilities.
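The triangulation risk can be illustrated with a minimal, hypothetical Python example: even after names are stripped, joining an “anonymized” campaign record with publicly available data on a few quasi-identifiers (postal code, birth year, gender) can often single out one individual. All records below are invented.

```python
# Hypothetical illustration of re-identification by triangulation.
# Both datasets are invented; no real data is used.

anonymized_campaign_record = {
    "postal_code": "10115", "birth_year": 1987, "gender": "f",
    "issue_interest": "energy prices",  # the sensitive attribute the campaign stored
}

public_register = [  # e.g., scraped profiles or a leaked membership directory
    {"name": "Person 1", "postal_code": "10115", "birth_year": 1987, "gender": "f"},
    {"name": "Person 2", "postal_code": "10115", "birth_year": 1990, "gender": "m"},
    {"name": "Person 3", "postal_code": "80331", "birth_year": 1987, "gender": "f"},
]

quasi_identifiers = ("postal_code", "birth_year", "gender")
matches = [p for p in public_register
           if all(p[k] == anonymized_campaign_record[k] for k in quasi_identifiers)]

if len(matches) == 1:
    print(f"Re-identified: {matches[0]['name']} is likely interested in "
          f"{anonymized_campaign_record['issue_interest']!r}.")
else:
    print(f"{len(matches)} candidates match; more auxiliary data would narrow it down.")
```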

AI Influences Voter Sentiment

During elections, we have already seen fake news driven by mere text or photoshopped pictures. Today, the accessibility and user-friendly nature of generative AI tools have ushered in a democratized era of advanced disinformation, empowering virtually anyone to become a content creator. As a result, we are now encountering cascades of audio, voice, and visual content crafted automatically by AI in numerous languages. Generative AI exacerbates misinformation and election interference on a global scale – a boon for foreign influence campaigns and opportunistic scammers. In tandem, AI-driven voice synthesis and deepfakes could be used to orchestrate fabricated visuals such as counterfeit scenes of ballot manipulation or disrupted polling stations. This fake content could then be amplified by distribution through seemingly reputable news sites. Social media ecosystems extend the reach of misinformation, unleashing further turmoil. Women in politics, for example, have been depicted in high-resolution explicit videos synthesized by AI with the goal of suppressing their voices.

As we experience the wholesale creation of content in upcoming elections, social media platforms remain the bottleneck: the distribution of generated posts depends on engagement, platform algorithms, high-reach accounts, bots, ads, and the like. Consequently, the German government needs to secure the implementation of the EU’s Digital Services Act, which mandates mechanisms for reporting disinformation. It also needs to support initiatives that incentivize companies to adopt technology that labels generated content, e.g., through digital watermarking.
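Digital watermarking means embedding a machine-readable provenance signal in the content itself. The simplified Python sketch below hides a short label in the least significant bits of raw pixel bytes; real-world schemes for AI-generated media (and provenance standards such as C2PA) are far more robust to compression and cropping, so this is only a toy illustration of the principle, with an invented “image” and label.

```python
# Toy least-significant-bit watermark: embeds a short provenance label
# into raw pixel bytes. Didactic sketch only; production watermarking
# uses statistical schemes that survive editing and re-encoding.

def embed_label(pixels: bytearray, label: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in label for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for this label"
    marked = bytearray(pixels)
    for idx, bit in enumerate(bits):
        marked[idx] = (marked[idx] & 0xFE) | bit  # overwrite the lowest bit
    return marked

def extract_label(pixels: bytes, n_bytes: int) -> bytes:
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

fake_pixels = bytearray(range(256)) * 4   # invented stand-in for image data
label = b"AI-generated"
marked = embed_label(fake_pixels, label)
print(extract_label(marked, len(label)))  # -> b'AI-generated'
```

Labels of this kind only help, of course, if platforms actually check for them at distribution time, which is why enforcement of the Digital Services Act matters.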

How Democratic Governments Use AI

Governments worldwide are embarking on the integration of AI through small-scale pilot projects, primarily deploying narrow AI systems tailored for specific tasks. Examples include streamlining government hearings, enhancing citizen services through chatbots, searching archives, or evaluating political projects.

In the United States, election officials have begun streamlining automated electoral processes, driven by the potential for heightened efficiency across thousands of jurisdictions. Generative AI could, for example, be used to support redistricting in ways that enhance equity and representation, potentially putting an end to gerrymandering. Yet initial assessments point to potential biases against minorities within AI systems. Such biases could hamper matching names to individuals on existing voter rolls or verifying signatures, eroding democratic integrity.

Policymakers and election officials must keep such assessments in mind. If the perceived capabilities of AI in industrial and scientific domains are transferred uncritically to the political arena, the general public may come to believe that AI has the capacity to override democratic processes. This narrative could erode faith in elections – the linchpin for managing political contention in democratic systems and granting parties access to governmental power within institutional confines – and engender skepticism toward their outcomes.

It Boils Down to Trust

Building trust is key for protecting the legitimacy of democracy and electoral processes. To this end, officials in the EU and its member states should encourage public discussion about the potential risks and benefits of generative AI. At the very least, everyone needs to understand the role that this technology can play in shaping opinions during elections. Merely classifying election-related tools as high-risk – as prescribed by the EU’s AI Act – is not enough. Building trust requires education; the development of competencies, rules, and a regulatory framework; and the involvement of civil society – all of which are envisaged in the German government’s AI strategy.

Furthermore, AI has the potential to help safeguard democracy’s credibility. It can, for example, model intricate dynamics and outcomes that factor in diverse political landscapes, contextual nuances, and various scenarios. This capability could unveil avenues for novel forms of deliberation and participation within democratic structures that could become part of structural electoral reforms.

In July 2023, seven leading AI companies took a significant step by signing voluntary commitments on AI security and trust. However, these commitments lack a timeline for implementation. Especially given the number of upcoming elections in 2024, this is reckless. The looming specter of AI tools designed to sway voter sentiment has left numerous experts deeply concerned. Even Sam Altman, the CEO of OpenAI, expressed his unease in a tweet about AI’s potential to impact elections by combining personalized 1:1 persuasion with high-quality generated media.

We can already see unsettling examples of authoritarian regimes exploiting surveillance technologies with the goal of maintaining social order and power. Consequently, we cannot afford to assume that AI will largely be used to help democratic governments map voter concerns and support superior decision-making and governance. While democratic discourse should help determine the role that AI will play in our system of electoral representation, national regulation needs to be strengthened and definitions narrowed to ensure that loopholes will not weaken our liberal democracy.

Bibliographic Information

Muñoz, Katja. “The Transformative Role of AI in Reshaping Electoral Politics.” DGAP Memo 4 (2023). German Council on Foreign Relations. September 2023. https://doi.org/10.60823/DGAP-23-39234-en.

DGAP Memo No. 4, September 6, 2023, 3 pp.
