7  Artificial intelligence and democracy

7.1 Artificial intelligence in politics and society

The success and widespread deployment of artificial intelligence (AI) have raised awareness of the technology’s economic, social, and political consequences. The most recent step in AI development – the application of large language models (LLMs) and other transformer models to the generation of text, image, video, or audio content – has come to dominate the public imagination of AI and has accelerated this discussion. But to assess AI’s societal impact meaningfully, we need to look closely at the workings of the underlying technology and identify the areas where it comes into contact with the fields we are interested in.

AI has become a pervasive presence in society. Recent technological advances have allowed for broad deployment of AI-based systems in many different areas of social, economic, and political life. In the process, AI has had – or is expected to have – a deep effect on each area it touches. We see examples in discussions about algorithmic shaping of digital communication environments and the associated deterioration of political discourse;1 the flooding of the public arena with false or misleading information enabled by generative AI;2 the future of work and AI’s role in the replacement of jobs and related automation-driven unemployment;3 and AI’s impact on shifting the competitive balance between autocracies and democracies.4 With these developments, AI has also begun to touch on the very idea and practice of democracy.

This makes AI, with its workings, applications, and effects, an important topic for political science.5 But since many of AI’s applications and their society-wide consequences still lie in the future, political science struggles to address the associated questions. This chapter provides students with a framework for contributing to the ongoing discussion about the role of AI in society.

The chapter starts with a non-technical introduction to AI and the conditions for its successful application. It then presents a set of important areas of democracy where AI is beginning to be used and to have effects. The map of these areas can serve as a conceptual framework for students interested in future work on AI and democracy.6

7.2 What is artificial intelligence?

The success and widespread use of artificial intelligence (AI) have increased awareness of its economic, social, and political impacts. The idea of a powerful machine intelligence has inspired far-reaching expectations and fears regarding the potential or threats associated with AI, ranging from economic growth7 and post-human transcendence8 to a downright menace to human existence.9 Large language models (LLMs) and other transformer models that enable the automated creation of text, image, video, or audio content currently dominate the public imagination, and their associated successes and innovations are accelerating this discussion.10 But the discussion of AI and its impacts is broader and goes back well before the recent wave of technological innovations and commercial applications.

In his history of the research field of artificial intelligence, Nils J. Nilsson defines AI as

[…] that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.

Nilsson (2010), p. xiii.

This definition provides a broad tent for scientific, engineering, and commercial endeavors affiliated with making intelligent machines or learning about intelligence through machines. The sustained scientific quest for artificial intelligence began roughly in the nineteen-fifties, bringing psychologists, engineers, and computer scientists together. These efforts started as attempts to better understand human intelligence by trying to recreate it in machines while at the same time getting machines to perform tasks traditionally assigned to humans, such as conversation, reasoning tasks, or playing board games. In the early days of the field, from the nineteen-fifties to the nineteen-nineties, scientific and commercial efforts in artificial intelligence were dominated by knowledge-based approaches that tried to teach machines about the world, be it the meaning of words, grammar, or expert knowledge in specific subfields. Approaches like these started out highly popular among scientists and funders but lost steam when the promised results failed to materialize. The associated frustrations, decreased scientific interest, and drying up of funding opportunities have become known as the AI winter.11

But each winter ends with the onset of spring.12

The first signs of an AI spring appeared in the nineteen-nineties. Increased computing power and the ever-growing availability of large data sets documenting ever more aspects of human life and society, both accompanying the digital transformation, led to growing interest in neural network models. Neural networks are a family of computational models inspired by the workings of the human brain. In a very simplified account, the brain consists of networks of interconnected neurons. Each neuron receives stimuli. Once these incoming stimuli pass a certain threshold, the neuron sends out an electrical signal that itself serves as an input to other connected neurons. This model of the brain concentrates on its information-processing characteristics, in which interconnected neurons accept and process information and, through their interconnections, achieve stunning feats of translating information into perception, knowledge, or action.

Artificial neural networks follow the same logic in their architecture and functioning. Artificial neurons accept numerical inputs and translate them into a single output value. These artificial neurons are arranged in networks with multiple layers. A first input layer accepts unprocessed signals and passes them on to a series of so-called hidden layers. Each hidden layer accepts the outputs of the layer above it, processes them, and transmits them to the next layer, until a final output layer is reached that provides the result of the model. Learning in networks with many such layers is called deep learning; it enables machines to make data-driven predictions and decisions.13
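To make this layered architecture concrete, here is a minimal sketch of a feedforward pass in Python. The layer sizes, random weights, and input vector are invented for illustration; a real system would learn its weights from training data rather than draw them at random.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def relu(x):
    # A simple activation: a neuron passes a signal on only above a threshold of zero.
    return np.maximum(0.0, x)

# Invented layer sizes: 4 input features, two hidden layers with 8 neurons each, 1 output.
layer_sizes = [4, 8, 8, 1]
weights = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input vector through the input, hidden, and output layers."""
    activation = x
    for w, b in zip(weights[:-1], biases[:-1]):
        activation = relu(activation @ w + b)      # hidden layers process and pass signals on
    return activation @ weights[-1] + biases[-1]   # the output layer returns the model's result

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))    # one output for an invented input
```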

Deep learning has made machines highly efficient at automated pattern recognition and decision making without relying on knowledge or theory. This purely data-driven approach to artificial intelligence has produced spectacular results in many different contexts, including computer vision, machine translation, medical diagnosis, robotics, and voice recognition.14 It has also been successfully applied to predicting possible but as yet unknown biological or chemical compounds15 and to strategic action in game play.16 A recent advance in deep learning is the transformer model.17 Transformer models serve as the foundation of highly popular applications, such as ChatGPT or Midjourney, that allow the autonomous generation of text, image, and video content.18 In fact, deep learning and its commercial applications have been so successful that deep learning has become nearly synonymous with artificial intelligence, thereby reducing the field’s richness and varied heritage to a limited set of data-driven models and approaches.

But while these models clearly are highly successful and adaptable to surprisingly rich and varied contexts and tasks, should we accept their success as evidence for a deeper, human-level intelligence?

7.3 Narrow artificial intelligence versus artificial general intelligence

Let’s go back to Nilsson’s definition of AI:

Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.

Nilsson (2010), p. xiii.

This definition is very useful since it covers different aspects and goals of artificial intelligence research and development. At the same time, the definition also points to an inherent tension within the field. For example, what accounts for “function[ing] appropriately”, what counts as “foresight”, and – perhaps most controversially – what is the minimum requirement for meaningfully speaking of “function […] in its environment”?

Nilsson consciously takes a broad and open approach to these questions:

According to that definition, lots of things – humans, animals, and some machines – are intelligent. Machines, such as “smart cameras”, and many animals are at the primitive end of the extended continuum along which entities with various degrees of intelligence are arrayed. At the other end are humans, who are able to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories.

Nilsson (2010), p. xiii.

This approach works well for charting the development of the field of artificial intelligence and covering its varied points of origin and developmental paths. But for understanding the larger societal effects of artificial intelligence it is limiting, if not misleading. Such a broad account of what it means to be intelligent allows commentators to conflate highly specific feats of data-driven pattern recognition, prediction, and decision making with more general and substantively different expressions of intelligence – such as creativity, embodied knowledge, or abduction. Impressive but highly specific feats – such as winning against humans in board games like chess or Go – are used to infer the subsequent replacement of human decision making in other contexts. This is a category error. It is true that there has been undeniable and impressive progress of machines in one narrow category of intelligence: pattern recognition, prediction, and decision making based on the automated analysis of large data sets. But this does not automatically translate into progress in the development of a broader, general intelligence.

This surprisingly widespread fallacy gives rise to what Erik J. Larson calls the “Myth of Artificial Intelligence”:

The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time – that we have already embarked on the path that will lead to human-level AI, and then superintelligence.

Larson (2021), p. 1.

This myth is dangerous since it ignores the crucial differences between the type of intelligence found in current AI-empowered systems and human-level general intelligence:

The myth of AI insists that the differences are only temporary, and that more powerful systems will eventually erase them. […] the myth assumes that we need only keep “chipping away” at the challenge of general intelligence by making progress on narrow feats of intelligence, like playing games or recognizing images. This is a profound mistake: success on narrow applications gets us not one step closer to general intelligence. […] As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking low-hanging fruit. The jump to general “common sense” is completely different, and there’s no known path from the one to the other.

Larson (2021), pp. 1-2.

It matters when evidence of successful applications of artificial intelligence to specific, narrowly defined tasks is used to project the imminent emergence of a general artificial intelligence that will replace human decision making in all walks of life. Jumping from the observation that AI-assisted homes can autonomously order produce from the internet to fears that a Skynet-level AI threatening all life on earth is just around the corner is a category error and logical fallacy. And it is one that is particularly tempting and dangerous in the discussion of the likely societal impact of and regulatory answers to artificial intelligence. This does not mean that artificial intelligence has not made great strides and in turn made philosophers and cognitive scientists rethink crucial tenets of what exactly constitutes intelligence. But by conflating the two different categories of intelligence, we run the risk of misdiagnosing the nature, impact, and trajectory of artificial intelligence.

One way to account for these differences is to differentiate between narrow artificial intelligence and artificial general intelligence (AGI). Narrow AI refers to systems developed for a specific, singular, or limited task.19 Narrow AIs can be very successful at the tasks they are developed for, but they fail at other tasks. In contrast, an artificial general intelligence would be a machine with the same cognitive and intellectual capacities as a typical human. It would be able to perform different tasks equally well, even those it was not explicitly trained for. For example, an AGI would be able to hold a conversation in natural language, solve problems in different areas, perceive the world and its position in it, and reason about it. While the public imagination focuses on AGI, scientific research and commercial applications focus nearly exclusively on narrow AI.20

This disconnect between research activity and public imagination is important, as it leads to a fundamental misunderstanding between discursive expectations and technological reality. The computer scientist and philosopher Brian Cantwell Smith proposes one way of remaining aware of these differences. In The Promise of Artificial Intelligence, Smith emphasizes the difference between two intelligence tasks: reckoning and judgment. By reckoning he refers to calculation tasks that current instances of AI are highly successful at, like those discussed above:

[…] the representation manipulation and other forms of intentionally and semantically interpretable behavior carried out by systems that are not themselves capable […] of understanding what it is that those representations are about – that are not themselves capable of holding the content of their representations to account, that do not authentically engage with the world’s being the way in which their representations represent it as being. Reckoning […] is a term for the calculative rationality of which present-day computers […] are capable.

Smith (2019), p. 110.

In contrast, Smith uses the term judgment for the sort of understanding

[…] that is capable of taking objects to be objects, that knows the difference between appearance and reality, that is existentially committed to its own existence and to the integrity of the world as world, that is beholden to objects and bound by them, that defers, and all the rest.

Smith (2019), p. 110.

The difference between these forms of intelligence is not just a matter of terminology or of purely academic interest. By employing machines adept at reckoning for tasks that require judgment, societies risk having machines calculate and initiate decisions based on representations of the world rather than the world itself, and without any commitment to the consequences of these decisions. Conversely, focusing on the staggering successes of machines at reckoning tasks might over time lead to a devaluation of judgment, the task machines are not successful at, thereby settling on a critically reduced account of what human intelligence is and should be about. Keeping the difference between reckoning and judgment in mind is therefore crucial in the discussion of what artificial intelligence can do and for what tasks it is employed.

For the discussion of the impact of artificial intelligence on democracy, it is important to be precise about what kind of artificial intelligence we are talking about. While it is easy to extrapolate far-reaching impacts of an imagined artificial intelligence on aspects of democracy, this is probably not the best use of our time. Instead, we will focus on the impact of aspects of artificial intelligence already in evidence: the power of data-driven predictions and its impact on specific aspects of democracy. But first, it pays to look closely at the preconditions for the successful application of artificial intelligence.

7.4 Conditions for the successful application of artificial intelligence

The successful application of artificial intelligence depends on a set of preconditions. Some are obvious. For example, to be successful, AI needs to be able to access some digital representation of its environment, either through sensors mapping the world or through the input of existing data. Where these representations are difficult to come by or data are scarce, as in many areas of politics, AI will not be successful. Other preconditions are not so obvious. For example, for AI to produce helpful results, the underlying connections between inputs and outputs must be stable over time. This points to two problems: unobserved temporal shifts between variables (Lazer et al., 2014) and the dangers of relying on purely correlative evidence without the support of causal models (Pearl, 2019; Schölkopf et al., 2021).

More important still, especially with respect to democracy, is that normatively speaking the past must provide a useful template for the future. Change is a crucial feature of societies, especially the extension of rights and the participation of previously excluded groups. Over time, many societies strive to decrease discrimination and increase equality. In fact, many policies are consciously designed to break with past patterns of discrimination. AI-based predictions and classifications based on past patterns risk replicating systemic inequalities and even structural discrimination (Bolukbasi et al., 2016; Christian, 2020; S. Mitchell et al., 2021).

Problems that share these characteristics can be found in many areas, such as the digital economy, commerce, digitally mediated social interactions, robotics, and sports. AI has proven highly successful in these areas. But few problems in politics and in democracy more broadly share these characteristics. This limits the application of AI in society and, accordingly, its impact on democracy.

7.4.1 Machine readable

For the successful application of artificial intelligence to a given problem, there needs to be a machine-readable representation available; the problem, its constitutive features, and the outcomes of interest need to be documented in data. As we have seen in Chapter 3, digital technology has contributed to a steady growth in data and increased the areas of the world and social life captured by it. But, at the same time, we have encountered a series of challenges in the translation of the world into data, challenges that can limit the uses and usefulness of AI.

Various fields have been made accessible to AI-enabled systems through different forms of continuous data collection and digitalization. This includes the collection of data and measurements through sensors in digital devices, such as smart phones, smart cars, or dedicated measurement devices. It also includes the collection of digital trace data documenting people’s interactions with digital services, such as Google, Amazon, Facebook, or X. Additionally, traditional data sets are digitized and made accessible to machines. These data sources allow AI-enabled systems to support drivers, run smart devices or grids, shape digital information environments, translate texts, or deploy ads.

But the growing availability of digital or digitized information sometimes hides the fact that many areas of social or political life do not lend themselves to digital representation. The use of AI-enabled systems in these arenas might therefore be – and potentially remain – limited. For an example, let’s look at voting.

Example: Data about voting and voters

One foundational democratic activity is voting. Voting is the ultimate expression and mechanism of democratic self-rule, allowing people to choose the representatives by whom they are governed. Naturally, it is an activity that parties want to learn about, predict, and influence. This is a problem that parties and their consultants try to solve through data, algorithms, and, more recently, AI.21 To assess the power of AI in this arena, we first have to examine what kind of relevant information is available to parties.

Countries vary with regard to the information they allow parties to access and collect about voters and voting behavior.22 The country with the richest provision of voter data is probably the USA. States collect and make available information about registered voters. Details vary between states, but official voter information can include name, address, gender, race, and whether a person voted in previous elections. This information is highly valuable for political parties and campaigns in modeling whom to contact.

The state thus provides campaigns with highly reliable data about crucial demographic characteristics as well as outcomes of interest: party support and election turnout. This gives modelers a promising foundation for predicting party support and election turnout for people about whom they have only demographic information and no information about the outcomes of interest.
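As a stylized illustration of this kind of modeling (not any actual campaign’s system), the following sketch trains a simple classifier on invented voter-file-style records with known turnout and applies it to a voter for whom only demographics are available. The field names, records, and use of scikit-learn are assumptions for illustration only.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Invented voter-file-style records: demographics plus a known outcome (past turnout).
voter_file = pd.DataFrame({
    "age":        [23, 45, 67, 34, 52, 71, 29, 60],
    "party_reg":  ["dem", "rep", "dem", "none", "rep", "dem", "none", "rep"],
    "voted_last": [0, 1, 1, 0, 1, 1, 0, 1],
})

features = ["age", "party_reg"]
model = make_pipeline(
    ColumnTransformer([("party", OneHotEncoder(), ["party_reg"])], remainder="passthrough"),
    LogisticRegression(),
)
model.fit(voter_file[features], voter_file["voted_last"])

# Predict a turnout probability for a voter we only have demographic information about.
new_voter = pd.DataFrame({"age": [40], "party_reg": ["none"]})
print(model.predict_proba(new_voter[features])[0, 1])
```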

But the quality of these models depends on the available data. In his book Hacking the Electorate,23 the political scientist Eitan Hersh analyzes the precision of data-driven targeting by Barack Obama’s presidential campaigns in 2008 and 2012, probably still the campaigns with the most sophisticated use of data and models to date. He shows that the campaigns were only able to target likely Democratic voters relatively precisely in states where the official voter file provided data on party registration and election turnout. In states where this information was not available, the campaigns struggled to target those voters successfully, even with rich and varied information on voters available from commercial data vendors or social media companies. The success of data-driven targeting in the US therefore depends on crucial information being collected and provided by the state, not on varied but only tangentially connected information. Without the availability of this information, the use of AI-enabled systems will be limited here as well.24

This example shows that legislative and political choices shape what data about voting is available to modelers and that modelers cannot simply use other potentially connected information to reliably infer data on phenomena that are not measured by design. Unless the underlying legal conditions change and political actors choose to document and make accessible information about voters, the uses of AI to predict vote choices will remain limited.

We can expect the availability of machine-readable data covering political processes to grow. Sanders & Schneier (2021), for example, present an instructive account of how this could come about. The authors offer a thought experiment on some of the opportunities AI provides for politics; one example they discuss is predicting the success of a bill based on known interventions by lobbyists or constituents. Their paper shows the preconditions for, but also the opportunities of, the increasing application of artificial intelligence in politics. At the same time, it illustrates the considerable efforts necessary by many actors to bring this about.

In the effort to make ever more areas of politics and social life machine readable, an inherent tension quickly emerges between the interest in making these areas more predictable and concerns about making them too predictable. Let’s stay with voting to illustrate this tension.

There is a legitimate interest in allowing parties to contact voters. The competition between different political viewpoints and proposed actions manifested in parties is crucial for democracy. At the same time, we do not want parties to get too good at this. We do not want parties to be able to clandestinely contact only those people who are easy to mobilize or persuade; we want them to openly compete with their arguments for broad support. Nor do we want a sustained imbalance in technical campaigning capabilities to emerge between parties, potentially predetermining elections. This tension between the increasing opportunities provided by making new areas of political and social life machine readable, and thereby available to artificial intelligence, and larger societal values and rights extends to other areas of politics as well. We can therefore expect the machine-readable representation of politics to remain limited by design, in consequence also limiting the uses of AI-enabled systems.

7.4.2 Abundance

One can think of artificial intelligence as lowering the costs of prediction. In their book Prediction Machines, the management and marketing scholars Ajay Agrawal, Joshua Gans, and Avi Goldfarb define prediction as:

[…] the process of filling in missing information. Prediction takes information you have, often called “data,” and uses it to generate information you don’t have. In addition to generating information about the future, prediction can generate information about the present and the past.

Agrawal et al. (2018/2022), p. 32.

For prediction to work, the predicted entity needs to happen often. Scarce events or outcomes cannot be reliably predicted; they can only be guessed at. So, for artificial intelligence to matter in a field, outcomes of interest need to happen often and need to be documented by data.

For artificial intelligence to automatically provide analysts or academics with reliable predictions of an outcome of interest, it needs vast amounts of data covering the outcome variable of interest and its potential predictors. Take Google for example.25

Example: Google

In its search business, Google receives millions of search queries each minute from all over the world. Google also knows which displayed result a user clicked after entering specific terms, and thereby which search result appeared to be relevant to the user initiating the search. Both the outcome variable – clicking on a seemingly relevant link – and the input variable – search terms – are available to Google in abundance and therefore offer a fruitful object for AI-based predictions. By automatically identifying patterns in the past behavior of users – connecting, for example, their search terms, search history, or location with the search results they subsequently clicked on – Google can predict which results will be of high relevance for future users who exhibit similar behavioral patterns.
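A toy sketch of this pattern (not Google’s actual system): tally which results users who entered a given query clicked in the past, and rank candidate results for the next user by those counts. The log entries and URLs are invented for illustration.

```python
from collections import Counter, defaultdict

# Invented click log: (query entered, result the user clicked).
click_log = [
    ("jaguar speed", "wikipedia.org/Jaguar"),
    ("jaguar speed", "wikipedia.org/Jaguar"),
    ("jaguar speed", "jaguar.com"),
    ("jaguar price", "jaguar.com"),
    ("jaguar price", "jaguar.com"),
]

# Tally which result users clicked after entering each query.
clicks_by_query = defaultdict(Counter)
for query, clicked_result in click_log:
    clicks_by_query[query][clicked_result] += 1

def rank_results(query, candidates):
    """Order candidate results by how often past users with this query clicked them."""
    counts = clicks_by_query[query]
    return sorted(candidates, key=lambda result: counts[result], reverse=True)

print(rank_results("jaguar speed", ["jaguar.com", "wikipedia.org/Jaguar"]))
```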

By correctly predicting which results a user is looking for when using specific terms or showing a specific behavior, Google can beat its competitors by providing users with the relevant information while sorting out the irrelevant. This is already a nice feature in search. But this capability develops its full commercial potential for the company in the display of ads, supposedly targeted to the interests or needs of people using the service. By reliably predicting which ads to display to whom, Google has a powerful selling proposition for ad customers. At the same time, it does not lose its search users through overly annoying or irrelevant ad display. Of course, the example is highly simplified, but it illustrates the kinds of problems for which AI-based systems offer powerful solutions. Similar patterns hold for the display of ads by Google, Facebook, or Amazon, or the recommendation of products on Amazon or Netflix.

In contrast to these examples, many outcomes that are of interest in politics or democracy remain scarce – even in a big data world. Again, let’s take voting behavior. In most democracies, voting for a specific office takes place at evenly spaced temporal intervals. Many democratic countries, for example, elect their heads of state or government only every four or five years. This makes voting a sparse activity and therefore difficult to predict. While each and every one of us uses a search engine multiple times per day, we vote only every couple of years. Accordingly, vote choice is an outcome variable much scarcer than those for which machine prediction has proven stunningly useful.

While automatically predicting people’s vote choice might be elusive, other electioneering tasks might turn out to be more promising.26 For example, Barack Obama’s presidential campaigns in 2008 and 2012 modeled the likelihood that people would donate a specific amount to the campaign after receiving an email asking for donations. The size of the campaigns’ email lists, reportedly running into the millions, the frequency of donation asks, the frequency of small donations, and the campaigns’ ability to run frequent experiments make this a task in electioneering well suited to the use of artificial intelligence.

While many areas in politics might not come with abundant outcomes, creative actors can reformulate specific tasks or elements in ways that lend themselves better to AI-enabled systems. These new approaches to well-known tasks, such as donation collection in campaigns, can open up politics to creative uses of AI and sometimes even shift the balance of power toward actors willing and able to engage in this reformulation.

7.4.3 Stable connections between variables

For data-driven inference, it is important that the relationship between predicting and predicted variables remains stable between training data and the time of deployment in the wild. This is a temporal problem: Does a phenomenon’s future reliably resemble its past? Or, is the future a foreign country, where they do things differently? But, more generally, it is also a problem of theory-free inference of information without the provision of an underlying causal mechanism.

Example: Google Flu Trends

One famous example of the failure of data-driven prediction is Google Flu Trends.27 Only a few years ago, Google Flu Trends was a popular example of the power of data-driven prediction. It was an online service run by Google that used the occurrence of topical search terms in specific locations to predict local flu outbreaks. For a while, the service was surprisingly precise and quicker than official statistics. But only a few years in, the service’s quality was found to deteriorate quickly.

In a forensic account of the episode, Lazer et al. (2014) identified a shift in the function of Google’s search field as a likely culprit, breaking the previously identified link between search terms and the flu. By suggesting search terms to users based on their initial input, Google changed the behavior of users, which in turn negatively impacted the inference of missing information based on this input. Google changed the relationship between the information its models tried to predict and the information that was available to them.
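The following stylized sketch (not the actual Flu Trends model; all numbers are invented) shows the general mechanism: a predictor is fitted while search volume tracks flu cases, the search interface then inflates searches independently of illness, and the model’s error grows once the learned relationship breaks.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Training period: flu-related searches are roughly proportional to true flu cases.
cases_train = rng.uniform(100, 1000, size=200)
searches_train = 5.0 * cases_train + rng.normal(0, 50, size=200)
slope, intercept = np.polyfit(searches_train, cases_train, deg=1)

# Deployment period: suggested search terms inflate flu-related searches
# independently of actual illness, breaking the learned relationship.
cases_new = rng.uniform(100, 1000, size=200)
searches_new = 5.0 * cases_new + 2000 + rng.normal(0, 50, size=200)

error_before = np.mean(np.abs(slope * searches_train + intercept - cases_train))
error_after = np.mean(np.abs(slope * searches_new + intercept - cases_new))
print(f"mean error before shift: {error_before:.1f}")
print(f"mean error after shift:  {error_after:.1f}")
```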

Another problem lies in the exclusively data-driven inference of information. Especially in data-rich contexts, correlations between variables abound. A correlation may indicate an unobserved causal link that remains stable over time; in this case, predicting missing information based on available information found to be correlated in the past is feasible. But a correlation might also be the outcome of a random fluctuation – present one moment, gone the next. In this case, prediction would produce meaningless results. To know which correlations are meaningful and which are not, social science uses theories that provide testable hypotheses about why various indicators should be linked. This allows the careful modeling and testing of links between variables and their predictive power. Causal reasoning and causal inference attempt to determine which correlations can be treated as meaningful predictors and which are probably better ignored.28

Of course, this does not mean that artificial intelligence should only look for connections its programmers thought of. This would mean missing out on the very real opportunities of data-driven discovery and inference through AI. Still, simply relying on connections identified by machines is just as limiting. Instead, people and AI need to interact meaningfully. This means that people have to critically interrogate the output and the process through which AI inferred information. Here, causal reasoning provides an important reality check on patterns identified automatically through data-driven procedures.

7.4.4 Continuing past inequalities

An additional challenge to the use of artificial intelligence in broader societal contexts is the question of unwanted bias in the outcomes of data-driven models. One crucial element in societies is change, especially the extension of rights and the inclusion of different groups with regard to their participation in society and the workplace. Over time, many societies strive to decrease discrimination and increase equality. In this regard, future behavior toward people and options afforded to them should not resemble the past. In fact, many policies are consciously designed to break with past patterns of discrimination. The use of artificial intelligence and purely data-driven prediction – at least in its current form – has proven a challenge to these goals.

AI-based learnings are inherently conservative. By relying on patterns found in past data, AI will pursue tasks in ways that were successful in the past but might no longer be appropriate,29 either due to unobserved shifts between inputs and outputs30 or due to a shift in values and norms making past learnings obsolete.31 This makes AI-supported shaping of decisions and environments conservative.

Using data documenting people’s characteristics, behavior, and trajectories in the past to infer future behavior and trajectories risks replicating systemic inequalities and even structural discrimination.32 For example, Bolukbasi et al. (2016) found that a prominent model underlying many services relying on automated natural language processing showed consistent evidence of gender bias. The model featured many biased associations: when presented with the word pair "father" and "doctor", for example, it completed the input "mother" with "nurse". Why did it do this? By examining the statistical relationships between vectors of words representing their co-occurrences in a large corpus of news articles, the authors found that the combination "mother" and "nurse" statistically resembled the pairing "father" and "doctor".
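The following toy sketch illustrates the analogy arithmetic behind such findings. The three-dimensional vectors are invented for illustration; real word embeddings are trained on large text corpora, have hundreds of dimensions, and encode the biases present in those corpora.

```python
import numpy as np

# Invented 3-dimensional "embeddings"; real models learn these from co-occurrence data.
vectors = {
    "father": np.array([ 1.0, 0.9, 0.1]),
    "mother": np.array([-1.0, 0.9, 0.1]),
    "doctor": np.array([ 0.8, 0.1, 0.9]),
    "nurse":  np.array([-0.9, 0.1, 0.9]),
    "judge":  np.array([ 0.1, 0.2, 0.8]),
}

def most_similar(target, exclude):
    """Return the vocabulary word whose vector is closest (cosine similarity) to target."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude), key=lambda w: cosine(vectors[w], target))

# "father is to doctor as mother is to ...?"
query = vectors["doctor"] - vectors["father"] + vectors["mother"]
print(most_similar(query, exclude={"doctor", "father", "mother"}))  # -> "nurse"
```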

By relying on statistical relationships between words that document the gender inequality of a society’s past, the output of AI-enabled systems risks reinforcing those inequalities in the future. This is true even if a society consciously tries to intervene through policies designed to counter such biases and to establish more equal and less discriminatory behaviors and structures.

Other cases of accidental AI bias following a similar logic include the over-policing of areas traditionally strongly associated with recorded crime33 or sensors and classification programs not recognizing women or members of racial minorities typically underrepresented in training data.34 The growing uses of artificial intelligence in many areas – such as healthcare, policing, judicial sentencing, or the roll-out of social services – have raised awareness of this inherent limitation and associated potential dangers in the application of artificial intelligence.

7.5 Artificial intelligence and democracy

AI’s recent successes and its broad deployment in many areas of social, economic, and political life have begun to raise questions regarding whether and how AI impacts democracy.35 The idea and practice of democracy are highly contested concepts with competing accounts of great nuance. The associated discussions within political theory are highly productive and successful in identifying different normative, procedural, or structural features and consequences within our understanding of democracy.36 Still, for the purposes of getting a handle on the impact of AI on democracy, we need to reduce this rich discussion to a few important – if sometimes contested – features of democracy.

We focus our discussion on four contact areas between AI and democracy at different analytical levels:

  • At the individual level, AI impacts the conditions of self-rule and people’s opportunities to exercise it.
  • At the group level, AI impacts equality of rights among different groups of people in society.
  • At the institutional level, AI impacts the perception of elections as a fair and open mechanism for channeling and managing political conflict.
  • At the systems level, AI impacts competition between democratic and autocratic systems of government.
Level          Area of impact
individual     self-rule
group          equality
institutional  elections
system         competition between systems

7.6 Artificial intelligence and self-rule

One tenet of democracy is that governments should be chosen by those they will serve. Such self-rule is a normative idea about legitimizing the temporal power of rulers over the ruled and a practical idea that distributed decision making is superior to other more centralized forms of decision making or rule by experts.37 AI impacts both the ability of people to achieve self-rule and the perceived superiority of distributed decision making over expert rule in complex social systems, highlighting potential limits to self-rule in several ways.

7.6.1 Shaping information environments

The legitimacy of self-rule is closely connected with the idea of people being able to make informed decisions for themselves and their communities. This depends at least in part on the information environment in which they are embedded.38 AI affects these informational foundations of self-rule directly. This includes how people are exposed to and can access political information, can voice their views and concerns, and how these informational foundations potentially increase opportunities for manipulation.39

As we have already seen in Chapter 4, algorithmic shaping of digital information environments based on people’s inferred information preferences or predicted behavioral responses has raised particularly strong concerns.40 Key among these is that people will be exposed only to information with which they are likely to agree, thus losing sight of the other political side. Empirical findings suggest that these fears may be overblown.41 In fact, in digital communication environments people may encounter more political information about the other side and that they disagree with than in other information environments. This can be a problem, especially for political partisans, because it increases the salience of political conflict.42 But the degree to which this mechanism is driven by AI or might even be lessened through specific algorithm design remains as of now unknown.

Going further, several authors have diagnosed various ill effects of digital communication environments on information quality and political discourse, some AI-driven and others independent of AI.43 While clearly important, these diagnoses risk overestimating the quality of prior information environments and the role of information for people in their exercise of self-rule. In fact, critiques of the quality of media in democracies abounded well before digital media became prevalent.44

In addition, most people do not follow the news closely, do not hold strong political attitudes, and do not perform well when tested on their political knowledge.45 They seem to rely on informational shortcuts or on social structures to exercise self-rule.46 Hence, these mechanisms can also be expected to mediate the impact of AI-driven shaping of information environments. To assess AI’s impact fully, research needs to consider not only information environments but must also look at whether and how AI affects the structural and social factors that mediate the impact of political information on self-rule.

It does not appear that AI-driven shaping of digital information environments inevitably leads to a deterioration of access to information necessary for people to exercise their right to self-rule. Nevertheless, there is much opaqueness in the way digital communication environments are shaped. The greater the role of these environments in democracies, the greater the need for assessability of the role of AI in their shaping.47 We also need regular external audits of the effects of AI on the information visible on online platforms, especially the nature and kind of information that is algorithmically promoted or muted.

7.6.2 Economics of news

AI might also come to indirectly impact the creation and provision of relevant political information by changing the economic conditions of news production. For one, recent successes in the development of transformer models suggest that AI might soon be used by media providers to automatically generate text, image, or video content. This might accelerate existing trends toward automated content generation in news organizations.48 This puts pressure on journalists, who might see routine tasks shift toward AI-enabled systems, but also on news organizations, which might face a new set of ultra-low-cost competitors specializing in automatically generated news content. This potentially increases pressure on journalists’ salaries as well as on the audiences and profits of news companies, intensifying existing pressures on news as a business.49

Additionally, AI reconfigures the way news and political information are accessed by the public. Search engines like Bing and Google are experimenting with large language models (LLMs) to provide users with automatically generated content in reaction to search queries instead of links to content provided by news and information providers. This limits monetization opportunities for small or mid-sized media organizations without strong brand identity and loyalty, which in the past could generate traffic through query-based referrals from search engines or social networking sites. These new limitations on monetization opportunities might lead to a decline in the coverage of politics or in the number of news organizations. This in turn would limit the total amount and diversity of information available to people for reaching informed decisions. It would hit hardest political outsiders and challengers, who rely on smaller information providers for coverage. The decline in monetization opportunities for news will thus likely strengthen existing institutions, media brands, and associated power relations.50

Additionally, public perceptions of digital communication environments as being dominated by AI-generated content – some of it correct, some of it actively misleading, some of it accidentally misleading – might contribute, among parts of the population, to an increased valuation of select news organizations whose processes of news production and quality assurance they have come to trust. These news brands might thus find themselves strengthened by an increase of AI-generated content in open communication environments or in the coverage by cost-cutting competitors. Of course, this expectation only holds if these news brands are seen as providing added value over AI-generated content.

It is also important to remember that this AI-driven turn to specific news brands is only likely to hold for audience members who engage with news and politics, demand accurate information, and are interested in politics. These will likely be socio-economically well-resourced and politically engaged people.51 Others might be content with free or automatically generated content. This is likely to reinforce an informational divide between politically interested and disinterested audiences that has already grown following the switch from a low-choice mass media environment to high-choice digital communication environments.52 In countries without strong public broadcasters, like the US, this divide will also run along economic lines, allowing those able to pay for news to access high-quality, curated, and quality-checked information, while leaving those not able (or willing) to pay to the noisy, (partially) automated, and contested free digital information environment. Over time, this might mean that socio-economic divides decide (or are seen to decide) people’s ability to come to informed political decisions.

7.6.3 Speech

AI, though, does not only impact access to information; it also affects the expression of opinions, interests, and concerns in digital communication environments. With digital communication environments increasingly becoming arenas for the expression of voice, the surfacing of concerns, and the construction of political identities, this is an important element in AI’s shaping of the conditions for self-rule.

The perceived ability of AI to classify content has put it at the forefront of the fight against harmful digital speech and misinformation. AI is used broadly by tech companies to classify user content in order to stop it from being published or to flag it for moderation.53 Details of the applied procedures, their successes, and their error rates are opaque to outsiders, making it difficult to assess the breadth of AI’s uses and its effects on speech. This is problematic: harmful speech and misinformation are both difficult categories for classification. Neither category is objective or stable, and both require interpretation as meaning shifts across contexts and time. This makes them difficult to identify with automated, data-driven AI and risks the suppression of legitimate political speech.

Additionally, the technical workings of AI impact the type of speech that becomes visible in AI-shaped spaces. By learning typical patterns within a given set of cases, AI will lean toward averages. For AI-enabled shaping and summarizing of speech or political positions, this favors common positions, concerns, and expressions. Outsider and minority positions, concerns, and expressions will, in unadjusted AI-shaped communication environments, be submerged and remain invisible. AI would thus negatively impact the ability of a society to become visible to itself in the public arena, lower democracies’ information-processing capacities, and strengthen the status quo.54

Still, there are few alternatives to AI-based moderation given the sheer volume of content being published in digital communication environments,55 which makes it important to gain a better understanding of AI-based moderation’s workings and effects. Accordingly, AI-based moderation needs assessability provided by platforms and external audits to ensure its proper workings.

AI-based moderation, however, is not only a risk. Scholars and commentators have long pointed to the limits of large-scale political deliberation imposed through inefficiencies in information distribution, surfacing of preferences, and coordination of people. AI may improve on some of these inefficiencies by predicting individual preferences, classifying information, and shaping information flows.56 This in turn might open up opportunities for new deliberative and participatory formats in democracies, thereby strengthening and vitalizing democracy.

It is important to remain aware of both the risks and the opportunities AI provides for moderating speech and surfacing concerns in digital communication environments. AI can contribute to creative solutions to some of the technical challenges underlying successful self-rule. But if it is to do so, we need to know more about its actual uses, effects, and risks. This demands greater transparency from digital platforms and continued vigilance and attention from civil society.

7.6.4 Manipulation

Artificial intelligence could also negatively impact individual informational autonomy by predicting the reactions of people to communicative interventions. This could allow professional communicators to reach people in exactly the right way to shift opinions and behavior. Sanders & Schneier (2021) present a thought experiment that illustrates how lobbyists might use AI to predict the likelihood of success of bills they introduce to legislators. While still far from realization, their example shows interested parties employing AI to increase the resources available to them and potentially to target interventions aimed at influencing people to behave in ways beneficial to those same parties. AI can also be used to generate messages aimed at persuading people, with early working papers indicating that interventions designed by LLMs have persuasive appeal.57 Similarly, LLMs are currently used by academics and campaign professionals to simulate reactions and attitudes of prototypical voters for message testing and research.58 To be sure, the precision and validity of these approaches currently remain in doubt.

Fears also exist regarding people encountering targeted communicative interventions in digital communication environments. By predicting how people might react to an advertisement, digital consultancies could use AI to tailor interventions to influence people. The British consultancy firm Cambridge Analytica, which claimed to be able to predict which piece of information displayed on Facebook was necessary to get people to behave in ways beneficial to its electoral clients, provided a first taste of this problem. While the company’s claims have been debunked,59 the episode speaks to the perception of AI’s power to manipulate people at will, as well as the willingness of journalists and the public to accept wildly exaggerated claims about the power of digitally enabled manipulation irrespective of contradicting evidence.

Recent advances in transformer models have opened new avenues for potential manipulation through the automated production of text or images.60 There are legitimate uses of these models, as well as nefarious ones. For instance, they facilitate the automated generation of content based on raw information or event data, as found in sports coverage or the stock market.61 This is largely unproblematic, since AI translates information from one form of representation – such as numerical or event data – into another – such as a narrative news article.

More problematic are cases in which AI does not simply translate one representation of information into another, but generates content based on prompts and past patterns. Examples include text or image responses to textual prompts in the form of questions or instructions. AI has no commitment to the truth of an argument or observation; it only imitates their likeness as found in past data. Today’s AI is committed only to the representation of the world, an object, or an argument available to it, not to the world, object, or argument as such.62 Thus, AI output taken at face value cannot be trusted because it is not necessarily true, only plausible.

More problematic still is the chance that future AI could be used to produce fake information at scale. This could take the form of targeted fakes aimed at misleading people, or of flooding information environments with masses of unreliable or misleading AI-generated content. This would dilute information environments, making it more difficult for people to access crucial information and/or making information appear untrustworthy. But while evocative, the mere opportunity of creating more unreliable information might not directly translate into this automatically generated disinformation reaching audiences or persuading them.63

Somewhat counterintuitively, a mass seeding of automated misinformation might also contribute to the strengthening of professional news and information curation discussed above. When the prevalence of unreliable or misleading information in digital communication environments becomes evident, the premium on reliable information rises. Accordingly, professional, reliable, and impartial news sources might see a reversal of fortune compared with the economic and ideational challenges of the last twenty years. In this way, automated misinformation at scale might turn out to strengthen intermediary institutions that provide information in democracies.

It is important to note that these uses of AI are still projected and may not come to pass, given limits of the underlying technology, the development of efficient countermeasures, and/or the persistence of mediating structures that limit the effects of information overall. But in light of recent technological advances, these uses have come to feature strongly in the public imagination and demand critical reflection by social and computer scientists.

7.6.5 Expert rule

Support for self-rule is also closely connected with the assessment of expert rule being limited in complex social systems. Expertise is important, but has limited predictive power in complex societies, and the decentralized decision making and preference surfacing of self-rule, while imperfect, are seen as superior for settling on collectively binding decisions.64 As we have seen in Chapter 3, the growing availability of data in ever more domains, coupled with new analytical opportunities offered by AI, have raised hopes for new predictive capabilities in complex societies.65 AI not only highlights the weaknesses of people making political decisions, but also increases the power of experts.

AI brings new opportunities in the modeling and prediction of societal, economic, ecological, and geopolitical trends, promising to provide experts with predictions of people’s behavior in reaction to regulatory or governance interventions. While the actual quality of these approaches is still open to question, they have strong rhetorical and legitimizing power. They increase the power of experts, who – sometimes actually and sometimes rhetorically – rely on AI-supported models to ground their advice on how societies should act considering major societal challenges. This apparent increase in the power of experts to guide societies in responding to challenges can reduce the option space available for democratic decision making, shifting the question from whether people can to whether they should decide for themselves. In this, AI could induce a transition from self-rule to expert rule and thereby weaken democracy.

7.6.6 Power of technology companies

AI also increases the power of companies over the public and even over states. While the theoretical breakthroughs in the current wave of AI began at universities, it is companies that lead in their practical application, further development, and broad rollout.66 Over time, the power to innovate and critically interrogate AI may shift from public to commercial actors, weakening AI oversight and regulation by democratically legitimated institutions. These challenges can be clearly seen in attempts by both the US and EU at getting to grips with regulating AI development and uses.67

There is also the issue of economic and political power. AI has allowed companies such as Google and Amazon to dominate multiple economic sectors.68 Governments have also begun to rely on AI-based service providers to support executive functions such as policing and security. The result is a growing government dependence on AI companies and an opaque transfer of knowledge from governments to these service providers. Add to this power over AI-enabled information flows and governance over political speech, and AI companies hold central positions in democracies, potentially negatively influencing the abilities of people for self-rule. This shows the importance of effective government and civil society oversight of companies that provide AI and those that employ AI to ensure that the foundations of meaningful self-rule hold as societies begin to rely more on AI-supported systems.

7.7 Artificial intelligence and equality

Democracy depends on people having equal rights to participation and representation.69 While this ideal is imperfectly realized and strongly contested in practice,70 democracies are in an ongoing struggle to extend rights to formerly excluded groups. AI’s reliance on data documenting the past risks subverting this process and instead continuing past discrimination into the future, thereby weakening democracy.

By predicting how people will behave under various circumstances based on observations from the past, AI differentiates between people based on criteria represented in data points. This risks reinforcing existing biases in society and even porting socially, legally, and politically discontinued discriminatory patterns into the present and future.71 This makes continuous observation and auditing of AI implementations crucial. The associated problems resemble those we have already encountered in the discussion of algorithms in Chapter 4.

People’s visibility to AI depends on their past representation in data. AI has trouble recognizing those who belong to groups underrepresented in the data used to train it. For example, minorities not traditionally represented in data sets will remain invisible to computer vision,72 and historically underrepresented groups will not be associated with specific jobs and thereby risk discrimination in AI-assisted hiring procedures.73
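To make this mechanism concrete, the following is a minimal sketch of the kind of per-group audit that continuous observation of AI systems could involve. The data is synthetic and all group sizes, features, and decision boundaries are invented for illustration; the sketch only shows how a model trained mostly on a majority group can perform systematically worse for an underrepresented group.

```python
# Illustrative sketch with synthetic data: a classifier trained mostly on a
# majority group can perform systematically worse for an underrepresented one.
# Group sizes, the single feature, and decision boundaries are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, boundary):
    """Simulate one group: a single feature whose true decision boundary differs."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > boundary).astype(int)
    return x, y

# Training data: 950 examples from the majority group, 50 from the minority group.
x_maj, y_maj = make_group(950, boundary=0.0)
x_min, y_min = make_group(50, boundary=0.5)
model = LogisticRegression().fit(np.vstack([x_maj, x_min]),
                                 np.concatenate([y_maj, y_min]))

# Audit: evaluate each group separately on fresh data.
for name, boundary in [("majority", 0.0), ("minority", 0.5)]:
    x_test, y_test = make_group(2000, boundary)
    print(f"{name} group accuracy: {model.score(x_test, y_test):.2f}")
```

In this toy setup, the model fits the majority group’s pattern almost perfectly while misclassifying a noticeable share of the minority group. Real audits work on real systems and protected attributes, but the underlying logic of comparing error rates across groups is the same.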

This general pattern is highly relevant to democracy: for example, systematic invisibility of specific groups means they would be diminished in any AI-based representation of the body politic and in predictions about its behavior, interests, attitudes, and grievances. Accordingly, already disenfranchised people could face further disenfranchisement and discrimination in the rollout of government services and in the development of policy agendas based on digitally mediated preferences and voice, or heightened persecution from the state security apparatus.

AI also makes some people more visible. Historically marginalized groups will be overrepresented in crime records, negatively impacting group members in AI-based approaches to policing or sentencing.74 In countries like the US, where voting rights are withheld from felons to varying degrees depending on state jurisdiction, systematic biases in AI-supported policing and sentencing might over time come to systematically bias the electorate against historically disenfranchised groups.75 Additionally, AI-based approaches can have a profound effect on electoral redistricting.76 In sum, AI could reinforce structural inequality and discrimination by continuing patterns found in historical data even as a society tries to enact more equal, less discriminatory practices.

Extrapolating from this, we can expect subsequent AI-based representations of public opinion, the body politic, and AI-assisted redistricting to be biased against groups marginalized in the past. Different degrees of visibility to AI could increase the democratic influence of some groups and decrease that of others. For instance, AI might contribute to an increase of resources for the already privileged by making their voices, interests, attitudes, concerns, and grievances more visible and accessible to decision makers. AI might use the preferences of visible groups in predictions about political trends and policy impact while ignoring those of less visible groups.

AI can also have adverse effects on the labor market. While in principle firms could invest in automation to allow workers to pursue new tasks and thereby increase the value of their labor, it appears that firms do so mostly to lower their own labor costs by substituting AI for tasks previously performed by human labor.77 This substitution of capital for labor lowers workers’ bargaining power and income, which in turn threatens to increase economic inequality and weaken workers’ collective bargaining power. Consequently, this could also lower workers’ political influence and representation.78

What type of labor is affected by AI-based technological progress, though, is uncertain. Automation traditionally substitutes for routine human tasks and thus affects mostly low-skilled workers.79 But subsequent waves of AI innovation have shown that routine tasks underlie many professions, including white-collar and knowledge work long perceived as immune to automation. The impact of AI in changing the political fortunes of workers might thus concern larger groups in the economy than traditional forms of automation did. This can already be seen in the current discussion about the impact of large language models and generative AI on the creative and software industries, which until now seemed exempt from the dangers of automation-driven job replacement. These emerging fault lines became evident in the Hollywood writers’ strike of 2023, in which screenwriters demanded contractual protections against studios’ use of AI for writing tasks, and in the actors’ strike of the same year, which demanded control over the use of actors’ likenesses by AI models.80

At the same time, AI can help aging societies complete substitutable work tasks and concentrate the shrinking labor force on currently non-substitutable tasks, thereby maintaining productivity levels in the face of growing demographic pressures in several developed economies.81 But realizing AI’s economic potential for societies means ensuring that the resulting gains are broadly shared and do not only benefit a narrow elite. For prosperity gains from digital technology in particular, this link seems to be broken. This raises the concern that elites will capture AI-enabled gains while most people face only automation-driven economic risks.82 This would increase inequality in society and weaken democracy. This potentially dangerous development puts the specifics of AI’s implementation and its public and regulatory oversight into focus.

AI clearly touches on equality within democracies. Inequalities might arise in the allocation of opportunities and state services by AI-based systems, in people’s visibility and representation within AI-based systems, and in the provision or withdrawal of economic opportunities for people whose job tasks can be replaced with AI. These are, therefore, important areas for further interrogation and, if necessary, regulatory intervention.

7.8 Artificial intelligence and elections

Democracies rely on elections, which channel and manage political conflict by providing factions the opportunity to gain power within an institutional framework. But for this to work, each faction must perceive a very real opportunity to win power in future elections. Otherwise, why bother with elections? Why not choose a different way to gain power?83 In the words of Adam Przeworski, democracy is a system of organized uncertainty:84

Actors know what is possible, since the possible outcomes are entailed by the institutional framework; they know what is likely to happen, because the probability of particular outcomes is determined jointly by the institutional framework and the resources that the different political forces bring to the competition. What they do not know is which particular outcome will occur. They know what winning or losing can mean to them, and they know how likely they are to win or lose, but they do not know if they will lose or win.

Przeworski (1991), p. 12–13.

AI applications promise to offset this organized uncertainty about who will lose and who will win elections. Ideas about correctly predicting elections or the behavior of voters go back to the early days of the computer age, the 1950s. In his 1955 short story Franchise,85 science fiction author Isaac Asimov has the computer system Multivac calculate election results based on the input of one specifically chosen person. In the late 1950s, a set of eclectic scientists and engineers founded the Simulmatics Corporation86 with the goal of predicting human behavior and supporting political campaigns based on computer models. Their client roster included John F. Kennedy, later President of the United States. More recently, the presidential campaigns of Barack Obama used data-driven models to predict the likely behavior of voters.[^obama-predicts]

While we can discuss the degree to which each of these examples qualifies as AI, in each we encounter the idea of actors using available information to infer unknown outcomes. This can happen on the individual level, where available information might include attitudes revealed in a survey, attitudes inferred from behavior or choices – such as buying a specific brand of consumer good, driving a specific car, or donating money – or documented behavior – such as turning out to vote – and is used to infer future behavior, such as vote choice, the decision to turn out to vote, or the willingness to donate money. Alternatively, it can happen on the system level, taking aggregate information – such as the state of the economy or general approval ratings – to predict the outcome of an election without modeling individual behavior. In other words, artificial intelligence might contribute to lowering uncertainty about who will win or lose in democratic elections.
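To illustrate the individual-level logic, here is a deliberately simplified sketch of a turnout-propensity score. The signals, weights, and voter records are invented for illustration; real campaign models are trained on large voter files and are considerably more complex.

```python
# Minimal, purely illustrative sketch of an individual-level propensity model.
# Signals, weights, and voter records are invented; this is not any campaign's
# actual model, only the general structure of such scores.
import math

# Assumed (hypothetical) weights: how strongly each observed signal is taken
# to indicate that a person will turn out to vote.
WEIGHTS = {
    "voted_last_election": 2.0,   # documented past behavior
    "donated_before": 1.0,        # documented choice
    "survey_interest_high": 0.8,  # attitude revealed in a survey
}
BIAS = -1.5  # baseline: weakly engaged people are assumed unlikely to turn out

def turnout_propensity(person: dict) -> float:
    """Combine observed signals into a 0-1 turnout score via a logistic function."""
    score = BIAS + sum(w for key, w in WEIGHTS.items() if person.get(key))
    return 1 / (1 + math.exp(-score))

voters = [
    {"name": "A", "voted_last_election": True, "donated_before": True},
    {"name": "B", "survey_interest_high": True},
    {"name": "C"},  # no signals observed: the model knows little about this person
]

for v in voters:
    print(v["name"], round(turnout_propensity(v), 2))
```

The point of the sketch is only the structure of the inference: observed signals about the past are combined into a probability about behavior that has not yet happened.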

But these approaches remain limited in predicting individual voters’ behavior. While the voting behavior of committed partisans can be predicted with some probability87 – at least in two-party systems – predicting the behavior of people who are only weakly involved with politics is much harder. People do not always vote, and when they do, the context can vary greatly. Their vote choices are – as we have seen – for the most part not available to modelers, which makes the automated prediction of vote choice a problem for which AI is not well suited. The uncertainty of election victories will thus remain alive for the foreseeable future. But campaigns can develop other relevant data-driven models, such as someone’s probability of voting or donating money,88 which could give them a competitive advantage.89 At the same time, the fate of Obama’s successor as Democratic nominee, Hillary Clinton, in 2016 and the victory of Republican nominee Donald Trump show that even a seemingly decisive predictive advantage like the one developed by Obama and his team is hard to maintain over time. Also, any such advantage is likely fleeting, given the broad availability of AI-based tools and campaign organizations learning from others’ successes and failures.90

Firms and governments might also seek to use AI to predict election outcomes or the electorate’s mood swings and possibly intervene. Campaigns or parties might simply have too little data, too little computing power, or too little talent to capitalize on the opportunities of AI. But this is not necessarily true for large companies developing AI in other areas, or for governments able to use – or coerce – the services of these companies. Still, these corporate or government efforts are limited by the same challenges raised above. Even so, the public impression of this capability might be enough to undermine and delegitimize elections and give election losers a pretext to challenge results rather than conceding.

Cambridge Analytica’s supposed role in the United Kingdom’s Brexit vote and the 2016 U.S. presidential election previewed some of the challenges ahead. While there is little indication that data-based psychological targeting was widely used or had sizable effects, these episodes still loom large in the public imagination as an example of AI’s perceived power in election manipulation.91 We can expect widespread AI use in economic, political, and social life to shift people’s expectations of its uses and abuses in electioneering, irrespective of its actual uses or inherent limitations.

Overall, AI’s direct impact on elections seems limited given the relative scarcity of the predicted activity – voting. While indirect effects are possible through potential opportunities for competitive differentiation, it is doubtful that this can translate into a consistent, systemic shift of power, given the broad availability of AI tools. More likely is the indirect impact mentioned above: that by transposing expectations regarding AI’s supposed powers from industry and science to politics, the public may come to believe that AI is actually able to offset the organized uncertainty of democratic elections. This alone could weaken public trust in elections and acceptance of election results. It is thus important to keep organized uncertainty alive in the face of AI, not weaken it through irresponsible and fantastical speculation.

7.9 Artificial intelligence and the autocratic competition to democracies

AI also affects the relationship between democracy and other systems of governance, such as autocracy, which some have argued has an advantage in the development and deployment of AI.92 Leaving aside deeper normative considerations for a moment, on a purely functional dimension democracies are often seen as superior to autocracies or dictatorships because of their better performance as information aggregators and processors. AI might conceivably offset this functional superiority.

Governments all over the world face a shared challenge: they must decide on a course of action best suited to society or their interests based on expected outcomes. This means collecting available information about the state of society or the consequences of specific actions, feeding it into implicit or explicit models of how the world works, and adjusting one’s actions accordingly. Here, democracies are seen to have a competitive advantage over autocracies or dictatorships.93 By allowing free expression, a free and inquisitive press, and competition between factions and even within governmental groups, democracies have structural mechanisms in place that surface information about society, the actions of bureaucracies, or the impact of policies, so that political actors can react and reinforce or countermand a course of action.

Autocracies and dictatorships do not have the same mechanisms in place. By controlling speech and the media, they restrict information flows considerably, often leaving governments in the dark about local situations, the preferences of the public, the behavior of and corruption within their bureaucracies, and ultimately the consequences of their policies. Democracy has thus been seen to allow for better information acquisition and processing than more centralized approaches to governance, such as autocracies and dictatorships. The underlying mechanism is akin to the better performance of the market system compared to centralized planning with regard to economic outcomes. There has now been some debate over whether AI allows autocracies or dictatorships to overcome this disadvantage.

In democracies, companies and governments face limits to AI deployment or pervasive data collection about people’s behavior. In autocracies, they have more leeway. A close connection between the state and firms developing and deploying AI in autocracies creates an environment of permissive privacy regulation that provides developers and modelers with vast troves of data, allowing them to refine AI-enabled models of human behavior. Add centrally allocated resources and training of large numbers of AI-savvy engineers and managers, and some expect the result to be a considerable competitive advantage in developing, deploying, and profiting from AI-supported systems. This may allow for asymmetric developmental progress in AI, state capacity, economic benefits, and potentially even military prowess favoring autocracies over democracies.94

To be sure, the operative word is may. The differential power of AI for autocracies is currently a matter of speculation, not proof. Accordingly, strong critiques of this position are emerging.95 Still, while far from settled, this question is a valuable one to discuss. So let us turn to the place where these arguments have been spelled out most explicitly: China.

China is seen by some as providing a context better suited to the large-scale deployment of AI than Western democracies.96 Reasons include the indiscriminate and large-scale collection of data and the state’s willingness to support companies – which in any case remain under strong state control – in pursuing AI in force. Over time, China would therefore be the country in which broad data collection and AI-based prediction of people’s behavior become pervasive and ubiquitous. This would allow the authoritarian government to use AI as an information gathering and processing mechanism, potentially offsetting the informational deficits that come with the absence of free expression and a free press.

While you might not state your preferences or dissatisfaction with the government in an official survey, running automated text analysis on your online chat protocols might unearth your true state of mind. While bureaucracies might keep you in the dark about the true state of the economy under your policies, an automated analysis of local payment patterns might indicate a surprising uptick or slowdown in the economy.

Large scale data collection and predictive analytics through AI might thus help autocrats and dictators to learn as much about their people and the effects of their policies as democracies, perhaps even more as they can use these tools more pervasively and efficiently. Whether this is a realistic expectation is contested, but the implementation of China’s Social Credit System is widely seen as an attempt by China’s government to capitalize on this potential.

Example: China’s Social Credit System

China’s Social Credit System is currently the most ambitious society-wide scoring system.97 The goal is to automatically track people and their behavior across multiple societal and economic domains in order to establish their degree of compliance with rules. Behavior conforming with domain-specific rules is rewarded with positive scores. Conversely, behavior conflicting with rules is punished by taking points away. Examples of negative behavior vary but reportedly can include playing loud music or eating on public transport, crossing against red lights, or cheating in online games. People with an overall positive score can find themselves fast-tracked in credit applications, while those with negative scores can find themselves excluded from public services. Scores are assigned automatically based on conforming or deviant behavior captured by sensors, such as public surveillance cameras, or by other digital devices. This system provides the state with a vast pool of data on its people, potentially allowing it to infer opinions, predict unrest, learn about the effects of its policies, and shape individual behavior.
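As a purely illustrative toy model of the scoring mechanism described above – not a description of the actual system, whose inner workings remain opaque – consider the following sketch. All behaviors, point values, and thresholds are invented.

```python
# Toy illustration of a rule-based scoring scheme of the kind described above.
# Behaviors, point values, and thresholds are invented; this is not the actual
# Social Credit System, only the general shape of such a mechanism.
POINT_RULES = {
    "volunteering": +5,
    "paying_bills_on_time": +2,
    "eating_on_public_transport": -3,
    "crossing_against_red_light": -5,
}

def update_score(score, observed_behaviors):
    """Add or subtract points for each behavior captured by sensors or records."""
    for behavior in observed_behaviors:
        score += POINT_RULES.get(behavior, 0)  # unlisted behaviors leave the score unchanged
    return score

def consequences(score):
    """Map the aggregate score to rewards or sanctions (thresholds are invented)."""
    if score >= 10:
        return "fast-tracked credit application"
    if score <= -10:
        return "excluded from some public services"
    return "no special treatment"

score = update_score(0, ["paying_bills_on_time", "volunteering", "crossing_against_red_light"])
print(score, consequences(score))
```

The sketch makes the political point visible in code: whoever defines the rule table and the thresholds defines which behaviors are rewarded or sanctioned across society.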

But before we become too enamored of – or scared by – this vision of total surveillance and control, we should not forget the tendency of people to learn about and undermine attempts at surveillance and government control.98 Although stories about advertised features and supposed successes of China’s Social Credit System currently abound, it is best to remain skeptical about its actual workings and precision.

Still, the Social Credit System illustrates the point made by Lee (2018). He expects that China, as an autocracy, is currently better placed than Western democracies to capitalize on the potential of AI. Sooner or later, this might translate from pure engineering power into soft or cultural power. Once AI-enabled Chinese digital platforms provide users with better entertainment or commercial opportunities than Western platforms, they will capture their user base and assume a central role in cultural, commercial, or political life in countries well beyond China. An early example of this is the Chinese video platform TikTok.99

Western democracies might be able to shrug off the fact that personalized music clips or funny videos are distributed through digital infrastructures hosted in China rather than the US. But AI-driven power shifts can also happen in other, more crucial areas.

AI might provide people living in autocracies with greater cultural, economic, or health-related opportunities.100 Some might see these benefits as a worthwhile tradeoff against some individual freedoms, leading to strengthened public support for autocracies and state control. Differential opportunities in realizing the potential of AI might thus reinforce tendencies already evident in countries facing economic, cultural, or security crises. Particularly at a time when democracies increasingly find themselves internally challenged over the opportunities they provide people, the potential of AI to asymmetrically favor autocracies represents an obvious challenge to democracies – if realized.

Going further, AI is a technology increasingly discussed in military and security circles.101 While its normative role and functional potential in these areas are heavily contested, the growing concerns in these circles point to the broad perception that AI could facilitate democracies falling behind autocracies.

Over time, differential trajectories in the development and deployment of AI in democracies and autocracies may emerge. If the assumption holds that autocracies share a greater affinity with AI and can profit more from it than democracies, AI could lead to a power shift between systems and thus weaken democracy.

7.10 Artificial intelligence and democracy: The road ahead

While many AI applications still lie in the future, we are already starting to see AI’s impact on democracy. True, many of AI’s future uses and effects remain speculative. But it is important that political science engages early with AI and helps observe, evaluate, and guide its implementation. This includes AI’s uses in politics and government as well as its regulation and governance. This chapter has provided a set of areas and problems through which to examine the impact of AI on democracy. The broad contact areas of self-rule, equality, elections, and competition between systems can serve as topical clusters for future work. At the same time, future work will provide more fine-grained accounts and advance theories that explain patterns of use and effect.

Social scientists need to consider AI in their analysis of the features, dangers, and potentials of contemporary democracy. In doing so, they need to reflect on the inner workings and domain-specific effects of the underlying technology. At the same time, computer scientists and engineers need to consider the consequences for democracy in AI development and deployment. This means focusing not only on the analysis of the technology itself, but also considering its embeddedness in economic, political, and social structures that mediate its effects for better or worse. This makes the analysis of AI’s impact on democracy an important area of future interdisciplinary work.

The quality of the analysis of AI’s effects on democracy depends on specificity regarding the type of AI, how it functions, the conditions for its successful deployment, and the aspect(s) of democracy it touches. Narratives about an unspecified, super-powered AGI and its supposed impact on society may make for stimulating reading, but they offer little for the analysis of actual effects on society or democracy. In fact, interested parties can use the discussion of AGI and its supposed extinction-level dangers as a smokescreen, distracting public and regulatory attention from more mundane but crucial questions of AI governance, regulation, and the societal distribution of AI-driven gains and risks.

Although AI is often discussed as a danger or threat, it may also provide opportunities to offset some of the contemporary challenges to democracy. Thinking openly about the application of AI in democracy could provide some relief from these challenges. Conscious design choices and transparent audits can help ameliorate dysfunctions and uncover biases.

In general, AI’s impact depends on implementation and oversight by the public and regulators. For this, companies, regulators, and society need to be explicit and transparent about what economic, political, or societal goals they want to achieve using AI and how its specific workings can propel or inhibit this pursuit. By nature, this discussion combines normative, mechanistic, and technological arguments and considerations. It is important not to be sidetracked by grandiose, but ultimately imaginary, visions of an AGI, but instead focus on specific instances of narrow AI, their inner workings, uses in specific areas of interest, and effects. This includes the discussion of both potentially positive as well as negative effects.

AI is unlikely to impact many aspects of democracy directly. Nevertheless, public discourse is likely to continue to focus on threats, manipulation, and expected power shifts. This discourse and these expectations have the potential to shape public attitudes toward AI and its impact on democracy strongly, irrespective of their factual basis. Perceived effects can matter more strongly than actual effects. Researchers have a responsibility not to fan the flames of discourse with speculation, but instead remain focused on AI’s actual workings and effects.

There are many promising avenues for future scientific work on the impact of AI on democracy. Here, it is important to combine insights from different fields. Purely technological accounts risk overestimating AI’s impact in social systems, given their boundedness and the role of social structures. Accounts coming purely from the social sciences risk misrepresenting the actual workings of existing AI and thereby misattributing its consequences.

The impact of AI on democracy is already unfolding. Its systematic, interdisciplinary examination and discussion need to proceed as well. This means it is high time for political scientists to add their voice and perspective to the ongoing debate about the impact of AI on democracy.

7.11 Further Reading

For a helpful account of the workings of artificial intelligence see M. Mitchell (2019).

For an interesting discussion of artificial intelligence and its dependence on representations of the world see Smith (2019).

For a deeper discussion of artificial intelligence and democracy see Jungherr (2023).

For a more extensive discussion of artificial intelligence and the public arena see Jungherr & Schroeder (2023).

7.12 Review questions

  1. Please define the term artificial general intelligence (AGI).

  2. Please define the term narrow AI.

  3. Please discuss how, according to Smith (2019), reckoning and judgment differ as forms of intelligence.

  4. Please discuss which four features of democracy are contact areas for the impact of AI on democracy.

  5. Please discuss how AI might and might not lower the “organized uncertainty” of democracy.

  6. Please discuss how AI might and might not impact elections.

  7. Please discuss how AI might or might not negatively impact the informational autonomy of people.

  8. Please discuss how AI might strengthen expert rule.

  9. Please discuss how AI might come to increase inequality in societies.

  10. Please discuss the impact of AI on the competition between democratic and autocratic systems of government.


  1. See Kaye (2018).↩︎

  2. See Krebs et al. (2022).↩︎

  3. See Acemoglu & Johnson (2023), Brynjolfsson & McAfee (2016), Frey (2019).↩︎

  4. See Filgueiras (2022), Lee (2018).↩︎

  5. See Risse (2023).↩︎

  6. This chapter is in part based on the articles Jungherr (2023) and Jungherr & Schroeder (2023). Some of the material presented here is adapted from these earlier sources.↩︎

  7. See Brynjolfsson & McAfee (2016).↩︎

  8. See Hanson (2016).↩︎

  9. See Bostrom (2014).↩︎

  10. For the technological foundations of transformer models see Brown et al. (2020), Vaswani et al. (2017). For a non-technical introduction to ChatGPT, one of the most popular current AI applications, see Wolfram (2023).↩︎

  11. For a recent chronological account of the development of artificial intelligence from its past into its present see Woolridge (2020). For a personal history of the early days of artificial intelligence and the recollections of some of its pioneers see McCorduck (2004). For a narrative history of the people, companies, and feuds in this recent phase in the history of artificial intelligence see Metz (2021). For a broader history of the field see Nilsson (2010).↩︎

  12. For a non-technical introduction to current approaches and techniques in artificial intelligence see M. Mitchell (2019). For a highly popular technical introduction to artificial intelligence and current challenges and techniques see Russell & Norvig (1995/2021).↩︎

  13. For an easy to follow introduction to neural nets and deep learning see Kelleher (2019). For a technical introduction and deeper discussion see Goodfellow et al. (2016) or Prince (2023). For a practical hands-on introduction to programming deep learning models see Trask (2019).↩︎

  14. See LeCun et al. (2015).↩︎

  15. See Chow et al. (2018), King et al. (2009), Schneider (2018).↩︎

  16. For Go see Silver et al. (2016); Silver et al. (2017). For Go, Shogi, and Chess see Silver et al. (2018). For diplomacy see FAIR et al. (2022).↩︎

  17. See Parmar et al. (2018), Vaswani et al. (2017).↩︎

  18. See Brown et al. (2020), Ramesh et al. (2022).↩︎

  19. See M. Mitchell (2019), p. 45f.↩︎

  20. Woolridge (2020), p. 42.↩︎

  21. For overviews of the use of data and models in US campaigning see Hersh (2015), Issenberg (2012), Nickerson & Rogers (2014).↩︎

  22. For an overview of how different conditions impact the uses of data by campaigns and the views of campaigners on data see Dommett et al. (2024).↩︎

  23. See Hersh (2015).↩︎

  24. For the limits of data-driven approaches in other campaign contexts see Jungherr (2016).↩︎

  25. For more on Google see Levy (2011). For more on how Google and other digital media companies are using AI see Metz (2021).↩︎

  26. For instructive discussions of the limits and promises of predictive data analysis in election campaigns see Hersh (2015), Nickerson & Rogers (2014), Sides & Vavreck (2014).↩︎

  27. For a detailed discussion of the failure of Google Flu Trends see Lazer et al. (2014).↩︎

  28. For the pitfalls of theory-free prediction see Jungherr et al. (2017), Jungherr (2019). For discussions of causal modeling see Imbens & Rubin (2015), Pearl (2009), Morgan & Winship (2015).↩︎

  29. See Vela et al. (2022).↩︎

  30. See Lazer et al. (2014).↩︎

  31. See Bender et al. (2021).↩︎

  32. For a foundational discussion of biases in computer systems see Friedman & Nissenbaum (1996). For a discussion of biases found in automatically trained language models see Caliskan et al. (2017). For critical accounts for the use of artificial intelligence in broad societal contexts in face of various biases see Ferguson (2017), Eubanks (2018). For attempts at mitigating inherent biases in machine learning see Barocas et al. (2023).↩︎

  33. See Ferguson (2017).↩︎

  34. See Buolamwini & Gebru (2018).↩︎

  35. This section is strongly based on and slightly adapted from Jungherr (2023).↩︎

  36. See Dahl (1998), Guttman (2007), Landemore (2012), Przeworski (2018), Tilly (2007).↩︎

  37. See Dahl (1998), Landemore (2012), Landemore & Elster (2012), Schwartzberg (2015).↩︎

  38. See Jungherr & Schroeder (2022).↩︎

  39. See Jungherr & Schroeder (2023).↩︎

  40. See Kaye (2018).↩︎

  41. See Flaxman et al. (2016), Kitchens et al. (2020), Scharkow et al. (2020).↩︎

  42. See Settle (2018).↩︎

  43. See Bennett & Livingston (2021).↩︎

  44. See Keane (2013).↩︎

  45. See Converse (1964), Lupia & McCubbins (1998), Prior (2007), Zaller (1992).↩︎

  46. See Achen & Bartels (2016), Kuklinski & Quirk (2000), Lodge & Taber (2013), Popkin (1991).↩︎

  47. See Jungherr & Schroeder (2023).↩︎

  48. See Diakopoulos (2019).↩︎

  49. See Nielsen (2020).↩︎

  50. See Jungherr & Schroeder (2023).↩︎

  51. See Prior (2018), Schlozman et al. (2018).↩︎

  52. See Prior (2017).↩︎

  53. See Douek (2021); Kaye (2018).↩︎

  54. See Jungherr & Schroeder (2023).↩︎

  55. See Douek (2021).↩︎

  56. See Landemore (2024).↩︎

  57. See Bai et al. (2023).↩︎

  58. See Bisbee et al. (2023), Horton (2023), Kim & Lee (2023).↩︎

  59. See Jungherr et al. (2020), 124–130.↩︎

  60. See Brown et al. (2020), Ramesh et al. (2022).↩︎

  61. See Diakopoulos (2019).↩︎

  62. See Smith (2019).↩︎

  63. For a critical look at why generative AI might not lead to a new age of powerful disinformation see Simon et al. (2023).↩︎

  64. See Dahl (1998), Lindblom (2001).↩︎

  65. See Kitchin (2014).↩︎

  66. See Ahmed et al. (2023), Metz (2021).↩︎

  67. See Kretschmer et al. (2023).↩︎

  68. See Bessen (2022), Brynjolfsson et al. (2023).↩︎

  69. See Dahl (1998).↩︎

  70. See Phillips (2021), Young (2002).↩︎

  71. See Eubanks (2018), Mayson (2019), Mehrabi et al. (2022), S. Mitchell et al. (2021), Obermeyer et al. (2019).↩︎

  72. See Buolamwini & Gebru (2018).↩︎

  73. See Caliskan et al. (2017).↩︎

  74. See Chouldechova (2017), Christian (2020), Ferguson (2017).↩︎

  75. See Aviram et al. (2017).↩︎

  76. See Cho & Cain (2020).↩︎

  77. See Acemoglu & Restrepo (2019).↩︎

  78. See Acemoglu (2024), Gallego & Kurer (2022).↩︎

  79. See Acemoglu & Restrepo (2022b), Frey (2019).↩︎

  80. See Wilkinson (2023).↩︎

  81. See Acemoglu & Restrepo (2022a).↩︎

  82. See Acemoglu & Johnson (2023).↩︎

  83. See Przeworski (2018).↩︎

  84. Przeworski (1991), p. 13.↩︎

  85. Asimov (1955).↩︎

  86. For the story of the Simulmatics Corporation see Lepore (2020). But note how Lepore is much better at picking holes in past visions of predictability than in today’s.↩︎

  [^obama-predicts]: For background on the prediction efforts of the Obama campaigns of 2008 and 2012 see Issenberg (2012).↩︎

  87. See Hersh (2015), Nickerson & Rogers (2014).↩︎

  88. See Hersh (2015), Issenberg (2012), Nickerson & Rogers (2014).↩︎

  89. For more on different uses of AI in election campaigns see Foos (2024), Jungherr et al. (2024).↩︎

  90. See Kreiss (2016).↩︎

  91. Jungherr et al. (2020), 124–130.↩︎

  92. See Filgueiras (2022), Lee (2018).↩︎

  93. For a fictional illustration of the information challenge to autocracies and dictatorships see Spufford (2010). For academic discussions of the information challenge to autocracies and dictatorships see Wintrobe (1998), Kuran (1995), Gregory & Markevich (2002), Wallace (2022). On the informational strengths of democracy see Lindblom (1965), Ober (2008). For market systems and information see Lindblom (2001). For the challenges of control in organizations and government organizations see Little (2020).↩︎

  94. See Filgueiras (2022), Lee (2018).↩︎

  95. For strong critiques of the supposed autocratic AI advantage see Farrell et al. (2022), Yang & Roberts (2023).↩︎

  96. For the supposed special fit of China to AI innovation see Lee (2018). For China’s use of data and AI in social planning and control see Pan (2020), Ding et al. (2020). For the development of AI in autocracies see Filgueiras (2022).↩︎

  97. On China’s Social Credit System see Liang et al. (2018), Creemers (2018), Brussee (2023). For the export of Chinese and Russian approaches to social control through AI see Síthigh & Siems (2019), Weber (2019), Morgus (2019).↩︎

  98. For tactics to avoid state surveillance or governance attempts more broadly see Scott (2009).↩︎

  99. For the challenge of countries to rely on digital infrastructures developed and maintained in other countries see Jungherr & Schroeder (2022).↩︎

  100. For scenarios of the future impact of AI on culture, health, and innovation see Diamandis & Kotler (2020), Lee & Quifan (2021).↩︎

  101. For the role of AI in international security and conflict see Goldfarb & Lindsay (2022), Buchanan & Imbrie (2022).↩︎