4.3. The impact of artificial intelligence on democracy: Elections, informational autonomy, equality, and power shifts within and between societies

In the last two sections, we have discussed what type of artificial intelligence we are currently experiencing in the wild and the preconditions for this AI to be deployed in narrowly defined computational areas. We now turn to how this connects to democracy.

Unsurprisingly, the discussion of and literature on democracy is vast and multifaceted. Discussions range from philosophical foundations, normative ideals, and historical expressions to empirical variations and legal and procedural questions. But three important characteristics of democracy consistently feature in these discussions:

  • Free and fair elections as a process to establish the basis for collectively binding decision making;

  • Belief in the ability of people to make decisions for themselves about societal and political questions;

  • Equality of people with regard to representation and rights.

Each of these characteristics is touched to different degrees by developments in the field of AI. This impact is already felt in the current applications of narrow AI. Accordingly, we should discuss these questions without resorting to science fiction scenarios of an imaginary artificial general intelligence.

AI raises questions with regard to the integrity of elections as an adjudication process for conflicts between political factions. In an age of perceived predictability of people's political attitudes and behavior, can there be free and fair elections in which each faction conceivably might rise to power?

Is it still plausible that people are able to make political and societal decisions? For one, are information environments shaped by AI based on people's preferences still adequate to the task of creating informed publics able to form political opinions according to their interests? Going further, are experts armed with AI-generated scenarios about the future of complex issues, such as the climate, pandemics, geopolitical conflict, or finance, not better decision makers than people following their passions and their interests? How does democratic decision making hold up in this new environment?

Can we still meaningfully speak of equality of rights and representation among people, if AI-based systems discriminate against minorities or the underprivileged? How do the known biases inherent in AI systems translate to democratic politics?

Even more fundamentally, what does equality even mean when AI contributes to massive power imbalances between the companies running and developing AI and everyone else, including the government?

To some commentators, AI might also provide an opportunity for autocracies to get a leg up on democracies in the detection and solution of societal and political challenges. Traditionally, democracies were seen to be better than autocracies at soliciting information about the state of their societies or the effects of interventions. This informational benefit was seen as one reason why democracies were able to outperform autocracies. AI might offset this benefit and allow autocracies to overtake democracies.

Admittedly, answering these questions involves a certain amount of speculation. As discussed in the previous sections, much current AI is still in a preliminary and rather limited usage mode. In fact, AI might never transcend this state. But to get a sense of how AI might impact democracy going forward, this is where we can start.

4.3.1. Artificial intelligence and elections

One important strength of democracies is the role of elections as an adjudication process for political conflict between different factions in society. By providing factions the very real opportunity to win or lose power in regular elections, democracies channel political conflict institutionally. But for this channeling of conflict to hold, each faction must believe it has a real chance to win power in future elections. Otherwise, why bother with elections? Why not choose a different way to gain power? In the words of Adam Przeworski, democracy is a system of "organized uncertainty":

"Actors know what is possible, since the possible outcomes are entailed by the institutional framework; they know what is likely to happen, because the probability of particular outcomes is determined jointly by the institutional framework and the resources that the different political forces bring to the competition. What they do not know is which particular outcome will occur. They know what winning or losing can mean to them, and they know how likely they are to win or lose, but they do not know if they will lose or win."

[Przeworski, 1991], pp. 12-13.

AI applications promise to offset this organized uncertainty about who will win and who will lose elections. Ideas of being able to correctly predict elections or the behavior of voters go back to the early days of the computer age, the 1950s. In his 1955 short story Franchise, science fiction author Isaac Asimov has the computer system Multivac calculate election results based on the input of one specifically chosen person. In the late 1950s, an eclectic set of scientists and engineers founded the Simulmatics Corporation with the goal of predicting human behavior and supporting political campaigns through computer models. Its client roster included the future President of the United States John F. Kennedy. More recently, the presidential campaigns of Barack Obama used data-driven models to predict the likely behavior of voters. While we can discuss the degree to which each of these examples qualifies as AI, in each we encounter the idea of using available information to infer unknown outcomes. This can happen on the individual level, where available information (attitudes revealed in a survey, attitudes inferred from behavior or choices such as buying a specific brand of consumer good, driving a specific car, or donating money, or documented behavior such as turning out to vote) is used to infer future behavior (vote choice, the decision to turn out to vote, or the willingness to donate money). Alternatively, it can happen on the system level, taking aggregate information, such as the state of the economy or general approval ratings, to predict the outcome of an election without modeling individual behavior. In other words, artificial intelligence might contribute to lowering the uncertainty about who will win or lose in democratic elections.
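To make the individual-level logic concrete, the following is a minimal sketch of such a voter model, assuming hypothetical features (past turnout, donation history, issue agreement) and fully synthetic data; real campaign models draw on far richer voter files.

```python
# Minimal sketch of individual-level voter modeling. All features, data,
# and coefficients are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per voter: past turnout, donation history, and a
# survey-based issue-agreement score in [0, 1].
n = 1_000
X = np.column_stack([
    rng.integers(0, 5, n),    # elections voted in over the last decade
    rng.exponential(20, n),   # total past donations (USD)
    rng.uniform(0, 1, n),     # issue agreement with the candidate
])

# Synthetic "ground truth": turnout propensity rises with past participation.
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.01 * X[:, 1] + 1.5 * X[:, 2] - 2.5)))
y = rng.random(n) < p         # observed turnout (True/False)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new voter: frequent past voter, small donor, moderate agreement.
new_voter = np.array([[4, 15.0, 0.5]])
print(f"Predicted turnout probability: {model.predict_proba(new_voter)[0, 1]:.2f}")
```

The same template extends to vote choice or donation propensity; the point is not the specific algorithm but the inference of unobserved future behavior from observed traces.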

We can see the promise of AI in this context as a continuation of earlier promises connected with data-driven campaigning. Barack Obama's presidential campaigns in 2008 and 2012 still serve as the high-water mark in the use of data-driven practices in political campaigns. The Obama campaign was very vocal about collecting large amounts of data on potential voters and campaign supporters. These data were then used to construct models for organizing the campaign's voter outreach and resource collection. The campaign built models of the likelihood that specific voters would vote for Obama, turn out to vote, or react positively to campaign outreach. It also built models predicting how to maximize the contributions of its supporters, for example with regard to donations. The campaign also ran a turnout model that predicted the outcome of both elections with greater precision than the news networks and professional pollsters. The Obama campaign can therefore serve as an example of what data-driven solutions in political campaigning can provide and where problems emerge.

For one, predicting the behavior of specific voters is difficult and unlikely to be solved through purely data-driven approaches. As discussed, voting is a sparse activity. People vote comparatively seldom, and the contexts of these vote choices vary greatly. Predicting this behavior automatically seems an elusive goal. The uncertainty about who will win elections will thus remain alive for the foreseeable future.

Still, the Obama campaigns also demonstrate that campaign organizations can successfully develop data-driven models of behavior of great relevance to the campaign, such as the probability of turning out to vote or donating money. This might fall short of predicting the vote outright but can still be helpful to a campaign. AI-based procedures might thus help campaigns gain an advantage over competitors. At the same time, the subsequent fate of Obama's successor as Democratic nominee, Hillary Clinton, in 2016 and the victory of the Republican nominee Donald Trump show that even a seemingly decisive predictive advantage like the one developed by Obama and his team is hard to maintain over time. Any AI-based advantage might therefore be fleeting and accordingly might not contribute to a decisive lowering of the organized uncertainty of electoral politics in democracies.

Leaving parties and campaign organizations for a moment, AI might conceivably also affect the ability of powerful companies or governments to predict the outcome of elections or mood swings of the public. Campaigns or parties might simply have too little data, not enough computing power, or not enough talent to capitalize on the opportunities of AI. But this is not necessarily true for large companies developing AI in other areas, or for governments able to use the services of these companies or to coerce them. While the nature of the task still makes vote choice an unlikely target for successful AI-based prediction, the public impression that these actors are able to predict, or even intervene in, the outcome of elections might be enough to undermine and delegitimize elections. The successes of AI in various societal and political fields might thus lend credence to narratives of widespread uses of AI even in areas where it objectively has little purchase.

Overall, the impact of AI on elections therefore seems limited, given the inherent scarcity of the predicted activity, voting. Indirect effects are possible by providing parties or campaign organizations with opportunities for competitive differentiation. Whether this can translate into a consistent and systemic shift of power between parties or political actors is doubtful, though, given the broad availability of AI tools. More likely is another sort of indirect impact. By transposing expectations regarding the supposed powers of AI from industry and science to politics, the public might come to expect that governments or companies providing AI solutions are able to predict the political behavior of people or the outcome of elections. So while AI might not be able to offset the organized uncertainty regarding the prospective outcome of elections, it might give the impression of being able to. This alone might be enough to weaken public trust in elections and the acceptance of election results by factions deeming themselves at a systematic disadvantage. Consequently, it is important to keep organized uncertainty alive in the face of AI and not to weaken it through irresponsible and fantastical speculation.

4.3.2. Artificial intelligence and informational autonomy

One tenet of democracy is that those who are ruled by a government should also be the ones who choose it. This idea of self-rule is both a normative idea about legitimizing the temporal power of rulers over the ruled and a practical idea about distributed decision making being superior to more centralized forms of decision making or rule by experts. AI directly impacts both the normative and the practical considerations.

A crucial precondition for people making decisions in democracies is their ability for free expression and their access to information. Without the expression of opinions, interests, and concerns in public, there is no representation of the body politic and its respective factions. Just as important is the opportunity for people to keep themselves informed, either habitually by regularly following the news or incident-driven in moments of great personal concern or public scandal. Without access to an information environment that provides critical coverage of political elites and societal trends and offers multiple viewpoints representing different societal groups or political factions, democratic decision making by the people lacks its basis. Both free expression and access to information are potentially impacted by AI.

How can people decide for themselves if they lose the ability for free expression in algorithmically moderated information environments?

The moderation of speech in digital information environments is a topic of growing importance. In light of the use of digital media in support of extremist movements all over the world, as a platform for discriminatory or hateful language targeted at individuals, in the propaganda efforts of countries intent on shaping public opinion abroad, and in the distribution of misleading health information during the Covid-19 pandemic, the moderation of speech online is at the top of the agenda of regulators and public commentators. As the legal scholar Evelyn Douek has pointed out, to deal effectively with the sheer volume of content, this moderation will increasingly rely on artificial intelligence to identify and moderate harmful speech [Douek, 2021].

While it is easy to see the value in automatically identifying and suppressing instances of discriminatory or hateful speech and of misleading information that endangers the stability of societies or people's health, this is a slippery slope. Even if the intention is not to use these practices to regulate political speech, the probabilistic nature of AI-based content moderation means that the borders might be more blurred than envisioned. More troubling still, as Warner [2002] has shown, factions outside the political mainstream and minorities have in the past consciously chosen forms of expression that were offensive to mainstream discourse or dominant publics in order to manifest their challenge to the status quo. These forms of expression are potential targets of AI-based moderation of offensive speech. This would translate into the suppression of relevant political speech.
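A minimal sketch may help illustrate why the borders blur. The toxicity scorer below is a hypothetical stand-in for a trained model, not any platform's actual system; what matters is that moderation reduces to a probability compared against a threshold, so offensive-sounding political speech can land on the same side of the line as harassment.

```python
# Minimal sketch of threshold-based moderation. The scoring function is a
# hypothetical stand-in for a trained toxicity model, not a real API.
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    text: str
    toxicity: float   # estimated probability that the text is harmful
    removed: bool

def hypothetical_toxicity_score(text: str) -> float:
    # Stand-in for a statistical model: scores texts by blocklist hits.
    # Real models are more subtle, but share the failure mode shown below.
    blocklist = {"scum", "traitors", "vermin"}
    hits = sum(word.strip(".,!?").lower() in blocklist for word in text.split())
    return min(1.0, 0.2 + 0.4 * hits)

def moderate(text: str, threshold: float = 0.5) -> ModerationDecision:
    score = hypothetical_toxicity_score(text)
    return ModerationDecision(text, score, removed=score >= threshold)

# A harassing message and a provocative protest slogan can both clear the
# threshold: the border between them is probabilistic, not sharp.
for msg in ["You are vermin and traitors.",
            "They call us vermin? We are the people!"]:
    d = moderate(msg)
    print(f"removed={d.removed} p={d.toxicity:.2f} :: {d.text}")
```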

In the beginning, the exclusion of offensive political speech might contribute to a general feeling of accomplishment. But this does not solve political conflict. Instead, it suppresses and hides it. In the mid- to long term, this risks exacerbating conflict and pushing it outside the realm of democratic competition. This will weaken democracy. AI-based moderation needs transparency and external audits in order to establish that it does not suppress political expression by outside factions or minorities deemed offensive by a majority. But as the example above has shown, this might not be enough. Given the political nature of offensive speech, instead of relying on automated moderation and suppression, open societies might have to accept a certain degree of unruliness in digital communication environments. Instead of automated guardians, we should rely more on the epistemic vigilance of publics and the strength of public discourse to weed out bad ideas and discriminatory actors.

How can people decide for themselves if they lose the ability to inform themselves?

The growing importance of digital platforms as access points to news and information has raised concerns about the impact of algorithmically shaped information environments. Probably the best known of these is the filter bubble, put forward by the political activist Eli Pariser [Pariser, 2011]. Pariser worried about algorithms putting only information in front of people that supported their political worldview, thereby reinforcing preexisting beliefs instead of challenging people with competing opinions or worldviews. If true, the filter bubble would force people into algorithmic cages that hide potentially relevant information from them. Over time, people would therefore lose the ability to form grounded political opinions and to make informed political decisions.

Luckily, empirical studies have found little evidence that algorithmically shaped information environments contribute to a meaningfully reduced diversity in the information people are exposed to. Still, concerns remain. For one, algorithms might pick up on signals that increase our likelihood to click, comment, or share, and thereby increase our exposure to emotionally stimulating information. As the political psychologist Jaime Settle has shown, this might be information that we disagree with, that stimulates us to voice outrage or disgust, and that increases the salience of political conflict [Settle, 2018].
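A small sketch of engagement-optimized ranking illustrates this dynamic, under the assumption, made here purely for illustration, that predicted engagement is higher for emotionally charged content; the posts and probabilities are invented.

```python
# Minimal sketch of engagement-based feed ranking. Scores are invented.
from typing import NamedTuple

class Post(NamedTuple):
    text: str
    p_click: float    # predicted probability of a click
    p_share: float    # predicted probability of a share

def engagement_score(post: Post) -> float:
    # A typical engagement objective: weighted sum of predicted interactions.
    return 1.0 * post.p_click + 3.0 * post.p_share

feed = [
    Post("City council publishes budget report", p_click=0.02, p_share=0.01),
    Post("Outrage as rival party 'betrays' voters", p_click=0.12, p_share=0.08),
    Post("Local library extends opening hours", p_click=0.03, p_share=0.01),
]

# Ranking by engagement pushes the emotionally charged item to the top,
# regardless of its civic value.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.text}")
```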

There are different and partly contradictory ways in which algorithmically shaped information environments might deteriorate the ability of people to encounter the information they need to form political opinions and come to meaningfully grounded decisions. While the actual effects of algorithmically shaped information environments are heavily contested among scientists, we undoubtedly need greater transparency about the algorithms employed in shaping information environments. This includes regular external audits that document the effects of AI on the information visible on online platforms and on the kinds of information that are amplified or muted.

How can people decide for themselves if AI helps communicators to become better at manipulating them or the political process?

If we switch our perspective from the structures people use for expression or information, there is another area where AI might negatively impact individual informational autonomy. AI might allow communicators to predict the reactions of people to their communicative interventions. In an ideal world, at least from the perspective of professional communicators, this would enable them to contact us in exactly the right way to shift our opinion or behavior. Sanders and Schneier [2021] present a thought experiment illustrating how AI might be used by lobbyists to predict the likelihood of success for bills introduced to legislators. While currently far from realization, their example shows the logic by which AI can be used by interested parties to increase the resources available to them and potentially to intervene in targeted ways to get people to behave in ways beneficial to them. If true, AI would thus enable interested parties to manipulate people into acting against their better interests and weaken the case for the fitness of people to decide their political fates themselves. A first taste of this conflict came in the discussion about the role of the British consultancy firm Cambridge Analytica in the success of Donald Trump. The consultants claimed to be able to predict which piece of information displayed on Facebook was necessary to get people to behave in ways beneficial to their clients. By now, the company's claims have been sufficiently debunked. But the episode shows the perceived power of AI to shift people's electoral behavior and the subsequent distrust in the ability of people to decide in their own interest.
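To illustrate the resource logic of such a thought experiment, here is a minimal sketch, with invented bills and probabilities, of how an interested party might concentrate its lobbying budget where a model predicts the largest shift in passage probability per dollar; this is a reading of the general mechanism, not Sanders and Schneier's actual model.

```python
# Minimal sketch of AI-guided lobbying resource allocation. The passage
# probabilities and costs are invented; in the thought experiment, a model
# trained on legislative data would supply them.
bills = [
    # (name, P(pass) without intervention, P(pass) with intervention, cost in USD)
    ("Bill A", 0.10, 0.15, 50_000),
    ("Bill B", 0.45, 0.70, 80_000),
    ("Bill C", 0.90, 0.92, 30_000),
]

def uplift_per_dollar(bill):
    name, p_base, p_lobbied, cost = bill
    return (p_lobbied - p_base) / cost

# Concentrate resources where predicted influence per dollar is highest:
# the close race (Bill B), not the lost cause (A) or the safe bet (C).
for name, p_base, p_lobbied, cost in sorted(bills, key=uplift_per_dollar, reverse=True):
    print(f"{name}: +{p_lobbied - p_base:.2f} passage probability for ${cost:,}")
```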

But AI does not only highlight weaknesses in people making political decisions for themselves; it also increases the power of experts. This shifts the question from whether people can decide for themselves to whether they should.

AI increases the power of experts to develop predictions. Be it in the modeling of societal, economic, ecological, or geopolitical trends, AI increases the powers of experts, at least symbolically, as a manifestation of their incontestable authority. Based on these scenarios, experts advise governments on the options available for action. In consequence, this reduces the option space available for democratic decision making. We see this in areas as diverse as climate change, global pandemics, and finance. In these cases, experts present AI-supported scenarios predicting which course of action presumably leads to the envisioned goals. Never mind that action based on such predictions often leads to results different from those expected. AI-based scenarios enable experts to present a new case for why democratic decision making should be limited to an option space predetermined by them, often presented with the TINA claim: "There is no alternative!"

There might be good reasons to follow the AI-supported predictions of experts. But their AI-powered claim to knowledge about the future should not be accepted on faith. If AI-based predictions are used to limit the option space for democratic decision making, they need to be presented transparently and be open to external audits of their underlying models. Also, their output cannot automatically translate into policy. Experts do not rule in democracies, even if they might be right; they need politicians who advocate policies based on their advice and try to gain majorities. This process might take longer and seem frustrating to those who agree on the way forward, but it ensures the societal support necessary in democracies. Should AI shift this balance of power in the direction of expert rule, this would be detrimental to democracy.

These examples show different ways in which the growing use of AI might deteriorate the informational autonomy of people, which underlies their ability to form political opinions and make the political decisions necessary in a democracy. At the same time, there are various avenues forward with regard to transparency and external audits that might help AI strengthen the quality of information available to people in democracies and thereby support them in their democratic decision making. By contrast, the alternative approach of treating AI as a black box and hiding AI-enabled decision making behind a facade of efficiency and convenience would weaken democracy. The danger lies not in AI as such, but in the way we choose to use it.

4.3.3. Artificial intelligence and equality

Democracy depends on people's equality of rights. For meaningful self-government, people must have equal rights to representation and voting and receive equal treatment by the law. While the actual degree of equality among people remains a consistent point of contention, the principle of equal rights is the foundation of any democracy. This makes it an important point of focus in the examination of how artificial intelligence impacts democracy. And here we find another area where AI might end up weakening democracy.

AI is not a technology of equality; it is a technology of differentiation. By predicting how different people will behave under different circumstances based on observations from the past, AI differentiates between people based on criteria documented in data points. This opens it up to the risk of reinforcing existing biases in society and even porting socially, legally, and politically discontinued discriminatory patterns from the past into the present and future. How implementations of AI enforce biases and result in discriminatory practices is therefore a crucial subject for continuous observation and auditing.

One area where AI potentially reinforces inequality is the differential visibility of people, depending on their past representation in data sets. By now, various studies have shown that AI has trouble recognizing people who are underrepresented in the data used to train its models. This includes instances where AI-based face recognition failed to identify women or non-Caucasian racial groups. While these studies refer to AI-based vision, we can transpose this general pattern to other, democratically more relevant contexts. For example, the systematic invisibility of specific groups means that they would be underrepresented in any AI-based representation of the body politic or in predictions about its behavior, interests, attitudes, or grievances.

Conversely, artificial intelligence also makes specific people more visible. For example, studies examining the use of AI in policing or in the calculation of recidivism risks show that algorithms risk reinforcing historical biases found in the data. For one, specific areas predominantly housing economic or racial minorities that in the past were associated with higher rates of crime tend to be assigned a higher police presence for the deterrence of crime. As a result, more criminal activity gets recorded in these areas, as police are more likely to pick it up there than in areas where they do not patrol. Accordingly, through an automatically assigned higher police presence, these areas will continually show heightened crime occurrences, simply because there is someone there to notice them. Similar crimes happening elsewhere with a lower police presence might go unrecorded, further lowering the chance of police being sent there. If uncorrected, over time this leads to increasing discrimination against areas and groups historically associated with higher crime rates. Historical biases and the results of discriminatory practices recorded in official records might, if unchecked, lead an AI to reinforce said biases even if a society is trying to move on and to enact more equal and less discriminatory practices.
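The feedback loop described here can be made concrete in a few lines. The following toy simulation, with invented numbers, gives two areas identical true crime rates but allocates the patrol according to previously recorded crime; a small historical bias then compounds into a large recorded gap.

```python
# Toy simulation of the predictive-policing feedback loop. Both areas have
# the same true crime rate; only recorded crime differs. Numbers are invented.
import random

random.seed(1)

TRUE_CRIME_RATE = 0.3            # identical in both areas
recorded = {"A": 2, "B": 1}      # slight historical bias against area A

for day in range(1000):
    # Allocate the single patrol proportionally to past recorded crime.
    share_a = recorded["A"] / (recorded["A"] + recorded["B"])
    patrolled = "A" if random.random() < share_a else "B"
    # Crime occurs in both areas, but is only recorded where police patrol.
    for area in ("A", "B"):
        if random.random() < TRUE_CRIME_RATE and area == patrolled:
            recorded[area] += 1

# Despite identical true rates, recorded crime diverges, and with it any
# "data-driven" patrol allocation based on these records.
print(recorded)
```

One standard correction, hinted at in the de-biasing literature, would be to adjust recorded counts for differential detection probability before feeding them back into the allocation.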

By making different people visible in different contexts, AI leads the state to treat people unequally, bringing the very real risk of a continuation of discriminatory practices from the past. There are attempts at de-biasing AI, but this remains an area that necessitates continuous attention and auditing. If treated uncritically, AI might reinforce structural inequality and discrimination by continuing trends visible in data.

Conversely, AI might increase the resources of already privileged individuals, for example by making their voices, interests, attitudes, concerns, and grievances visible and accessible to decision makers through automated dashboards, or by relying on these visible positions in predictions about trends and policy impact while ignoring the voices of minorities invisible to AI. Through different degrees of visibility to AI, some groups of people might therefore find their democratic influence systematically increased while others find it lowered. This would be a clear weakening of democracy through an undermining of the principle of equality.

Leaving the perspective of individuals and groups, AI also increases inequality between the power of commercial companies and the state. The bulk of AI development and application is led by commercial companies. True, the theoretical breakthroughs behind the current wave of AI started in university labs. But the practical application of these breakthroughs, like the Go-playing AI AlphaGo or the Jeopardy!-playing Watson, and their further development and broad roll-out happened through commercial companies. Metz [2021] charts the nexus between academics and companies. His account shows that universities still manage to train excellent people interested in and able to work with and further develop AI. But universities no longer appear to be the places where these people find the most exciting positions for research and development. Instead, this happens in the commercial labs of companies like Google, Facebook, or Baidu. Over time, the power to innovate or even to critically interrogate AI shifts from public actors, like universities, to commercial actors. This also weakens the oversight of the development and implementation of AI by democratically legitimated institutions like parliaments, governments, or regulators.

Beyond the question of oversight, there is also the question of economic and political power. Through AI-based business models, companies like Google or Amazon have come to dominate multiple economic sectors and become near monopolists. Additionally, governments have started to rely on AI-based service providers like Palantir in support of their executive functions, such as policing or security. This creates dependencies on these companies and an opaque transfer of knowledge from governments to the providers of AI-based services.

AI thereby potentially contributes to a shift of power within societies between different actors, benefitting companies developing AI while leading to a relative loss of economic and political power among other companies or governments. These developments might further weaken the balance of power in democratic societies by increasing systemic inequalities between the companies running and developing AI and everyone else, including governments.

4.3.4. Artificial intelligence and power shifts between societies

So far, we have talked about the effects that AI can develop within democracies. But there is another dimension we need to consider. AI potentially also impacts the standing of democracy as a system of government relative to others, such as autocracies. Leaving aside normative considerations for a moment, in the past democracies were seen to be superior to other forms of government, such as autocracies or dictatorships, due to their superior performance as information aggregators and processors.

Governments all over the world face a shared challenge: they must decide on a course of action best suited to society or their interests based on expected outcomes. This means collecting and feeding available information about the state of society or the consequences of specific actions into implicit or explicit models of how the world works and adjusting one's actions accordingly. Here, democracies are seen to have a competitive advantage over autocracies or dictatorships. By allowing free expression, a free and inquisitive press, and competition between factions and even within governmental groups, democracies have structural mechanisms in place that surface information about society, the actions of bureaucracies, or the impact of policies, so that political actors can react and reinforce or countermand a course of action. Autocracies and dictatorships do not have the same mechanisms in place. By controlling speech and the media, they restrict information flows considerably, often leaving governments in the dark with regard to local situations, the preferences of the public, the behavior of or corruption in their bureaucracies, and ultimately the consequences of the policies they pursue. Democracy has thus been seen to allow for a better information acquisition and processing performance than more centralized approaches to governance, such as autocracies and dictatorships. The underlying mechanism is akin to the better performance of the market system compared to centralized planning with regard to economic outcomes. There has now been some debate over whether AI allows autocracies or dictatorships to overcome this disadvantage, especially with regard to China.

China is seen to provide a context more suited to the large-scale deployment of AI than Western democracies. Reasons for this include the indiscriminate and large-scale collection of data and the state's willingness to support companies, which in any case remain under strong state control, in pursuing AI in force. Over time, China could therefore become the country in which broad data collection and AI-based prediction of people's behavior become pervasive and ubiquitous. This would allow the authoritarian government to use AI as an information gathering and processing mechanism, potentially offsetting the challenges it faces in information gathering from the absence of free expression or a free press.

While you might not state your preferences or dissatisfaction with the government in an official survey, running automated text analysis on your online chat protocols might unearth your true state of mind. While bureaucracies might keep you in the dark about the true state of the economy under your policies, an automated analysis of local payment patterns might indicate a surprising uptick or slowdown in the economy.
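As a minimal sketch of the second example, with entirely synthetic numbers, a simple baseline comparison over aggregate payment volumes is enough to flag a slowdown that official reporting might hide.

```python
# Minimal sketch of flagging an economic slowdown from aggregate payment
# volumes. The data and threshold are synthetic and purely illustrative.
import statistics

# Daily payment volume index: stable around 100, then a slowdown sets in.
volumes = [101, 99, 102, 100, 98, 103, 100, 99, 88, 84, 81, 79]

WINDOW = 7
baseline = statistics.mean(volumes[:WINDOW])
spread = statistics.stdev(volumes[:WINDOW])

for day, v in enumerate(volumes[WINDOW:], start=WINDOW):
    z = (v - baseline) / spread
    if z < -2:  # more than two standard deviations below the baseline
        print(f"Day {day}: volume {v} signals a slowdown (z = {z:.1f})")
```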

Large-scale data collection and predictive analytics through AI might thus help autocracies and dictatorships learn as much about their people and the effects of their policies as democracies do, perhaps even more, as they can use these tools more pervasively and efficiently. Whether this is a realistic hope is contested, but the implementation of China's Social Credit System is widely seen as an attempt by China's government to capitalize on this potential.

China's Social Credit System is currently the most ambitious society-wide scoring system. The goal is to automatically track people and their behavior across multiple societal and economic domains in order to establish their degree of compliance with rules. Behavior conforming with domain-specific rules is rewarded with positive scores. Conversely, behavior conflicting with rules is punished by taking points away. Examples of negative behavior vary but reportedly can include playing loud music or eating on public transport, crossing against red lights, or cheating in online games. People with an overall positive score can find themselves fast-tracked in credit applications, while those with negative scores can find themselves excluded from public services. Scores are assigned automatically based on conforming or deviant behavior captured by sensors, such as public surveillance cameras, or by other digital devices. This system provides the state with a vast pool of data on its people, potentially allowing it to infer opinions, predict unrest, learn about the effects of its policies, and shape individual behavior.
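A minimal sketch of the scoring logic described here follows, with invented domains, rule weights, and thresholds; it does not describe the actual implementation of China's system, only the general mechanism of domain-specific rewards and punishments feeding an aggregate score.

```python
# Minimal sketch of a domain-based citizen scoring scheme. All weights and
# thresholds are invented for illustration and describe no real system.
from collections import defaultdict

RULE_WEIGHTS = {
    "eating_on_transit": -5,
    "loud_music_on_transit": -5,
    "red_light_violation": -10,
    "cheating_in_online_game": -15,
    "on_time_loan_payment": +10,
    "volunteering": +20,
}

class CitizenScore:
    def __init__(self, base: int = 1000):
        self.score = base
        self.by_domain = defaultdict(int)

    def record(self, event: str, domain: str) -> None:
        # Sensor-reported events adjust both the total and the domain score.
        delta = RULE_WEIGHTS[event]
        self.score += delta
        self.by_domain[domain] += delta

    def fast_track_credit(self) -> bool:
        return self.score >= 1020     # hypothetical privilege threshold

    def excluded_from_services(self) -> bool:
        return self.score < 980       # hypothetical sanction threshold

person = CitizenScore()
person.record("red_light_violation", domain="traffic")
person.record("on_time_loan_payment", domain="finance")
print(person.score, person.fast_track_credit(), person.excluded_from_services())
```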

But before we become too enamored of, or scared by, this vision of total surveillance and control, we should not forget the tendency of people to learn about and undermine attempts at surveillance and government control. Although stories about the advertised features and supposed successes of China's Social Credit System currently abound, it is best to remain critical as to its actual workings and precision.

Still, the Social Credit System underlines a point made by Lee [2018]. He expects that China, as an autocracy, is currently better placed than Western democracies to capitalize on the potentials of AI: the close connection between the state and the companies developing and deploying AI provides an environment of permissive privacy regulation, supplying AI developers with vast troves of data and allowing them to refine models of human behavior. Add to this centrally allocated resources and the centrally encouraged training of large numbers of AI-savvy engineers and managers, and the result is a considerable competitive advantage in developing, deploying, and profiting from AI-supported systems. Sooner or later, this will translate from pure engineering power into soft or cultural power. Once AI-enabled Chinese digital platforms provide users with better entertainment or commercial opportunities than Western platforms, they will capture their user base and assume a central role in cultural, commercial, or political life in countries well beyond China. An early example of this is the Chinese video platform TikTok.

Now, Western democracies might be able to shrug off the fact that personalized music clips or funny videos are distributed through digital infrastructures hosted in China rather than the US. But AI-driven power shifts can also happen in other, more crucial areas. One example is the question of whether European publics might have to forgo procedures for prolonging and improving life based on the AI-enabled manipulation of DNA through CRISPR, due to restrictive European regulation of biotech, AI, and data use. Going further, AI is a technology increasingly discussed in military and security circles. While its current and future workings and effects in these areas are heavily contested, the growing concerns in these circles point to a broad perception of the dangers for democracies of falling behind autocracies in this regard. Scenarios like these illustrate the real pain points that emerge once an AI implementation and development gap opens up between democracies and autocracies.

This brief discussion shows that AI's impact on democracy is not limited to internal features. Instead, over time, differential trajectories in the development and deployment of AI in democracies and autocracies might emerge. If the assumption holds that autocracies and dictatorships have a greater affinity to AI and can offer it a more promising environment than democracies, AI might lead to a power shift between systems and weaken democracy in turn.