4.2. Conditions for the successful application of artificial intelligence

In the previous section, we established that we leave it to other, more imaginative people to worry about the consequences of sentient machines or artificial general intelligence. Instead, going forward, we will focus on the uses and effects of narrow AI on politics and democracy. To do so, we first need to figure out what exactly AI changes in these areas. In other words, what is AI good at? What becomes cheaper or easier to do? And for which types of problems does this work?

4.2.1. Predictions

In their book Prediction Machines, the management and marketing scholars Ajay Agrawal, Joshua Gans, and Avi Goldfarb ask a similar question but with a focus on AI's impact on business and the economy [Agrawal, Gans, and Goldfarb, 2018]. Their way of answering this question is also helpful for us. Being good economists, they ask: what type of task becomes cheaper through artificial intelligence?

Their answer: prediction.

The authors define prediction as:

"[...] the process of filling in missing information. Prediction takes information you have, often called "data," and uses it to generate information you don’t have. In addition to generating information about the future, prediction can generate information about the present and the past."

[Agrawal, Gans, and Goldfarb, 2018] p. 32

Prediction lies at the heart of many traditional businesses. Take credit card companies, for example. Credit card companies acquire information about people interested in receiving credit. They use this information to assign each applicant a score expressing their likelihood of paying back or defaulting on said credit. This score is based on the behavior and characteristics of the company's past clients. By testing which piece or combination of available information predicted an outcome of interest among past clients, the company can infer the piece of information it is missing for new clients: Will or won't they pay back their credit?
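To make this concrete, here is a minimal sketch of credit scoring as a prediction problem. The client data is invented, and a simple logistic regression stands in for whatever proprietary models credit card companies actually use:

```python
# Minimal sketch of credit scoring as prediction (invented data, not any
# company's actual model): learn from past clients, infer for a new one.
from sklearn.linear_model import LogisticRegression

# Past clients: [income in 1,000s, years employed, number of existing debts]
X_past = [[35, 2, 3], [80, 10, 0], [22, 1, 4], [60, 7, 1], [28, 3, 5], [95, 12, 0]]
y_past = [1, 0, 1, 0, 1, 0]  # 1 = defaulted, 0 = paid back

model = LogisticRegression().fit(X_past, y_past)

# New applicant: the missing information is whether they will default.
new_applicant = [[45, 4, 2]]
print(model.predict_proba(new_applicant)[0][1])  # estimated default probability
```

The pattern is general: available information about past cases in, an estimate of the missing information about a new case out.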

There are many other business models or tasks that follow the same logic, for example fraud detection or the calculation of insurance premiums. In the past, these predictions were expensive to perform: Data was scarce and expensive to collect, and its analysis demanded specialized knowledge. Today, data is abundant and easy to collect at scale, and the outputs of automated analyses are broadly available. In the terms of Agrawal, Gans, and Goldfarb [2018], artificial intelligence has made prediction cheap. Accordingly, business models relying on inferring missing information from available information become more profitable. But this cheapening also contributes to an extension of prediction into new areas of economic and social life.

The declining cost of prediction means that new tasks in new fields are approached as prediction problems. Agrawal, Gans, and Goldfarb [2018] offer a series of examples, one being machine translation. In the past, translation was seen as a task for trained specialists with deep knowledge of both the source and target languages and cultures. One stunning achievement of contemporary artificial intelligence is the automated translation of text without understanding, based simply on the numerical representation of words and their mutual statistical relationships in different languages. The most well-known example is the online service Google Translate. By representing the relationships between words in different languages as multidimensional vectors, a neural network can predict the likelihood of words co-occurring in a sentence in the target language—the missing information—by examining the co-occurring words in the source language—the available information.
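The following toy sketch illustrates the flavor of this logic with a handful of hand-made word vectors in a shared space. Real systems such as Google Translate learn vastly larger representations from data and predict whole sentences, but the core move—treat the target-language word as missing information and pick the candidate whose vector best matches—is the same:

```python
# Toy illustration (hand-made vectors, not a real translation model): words
# from two languages live in one shared vector space; "translation" becomes
# predicting the target-language word closest to the source word's vector.
import numpy as np

shared_space = {
    # English                              # German
    "dog":   np.array([0.9, 0.1, 0.0]), "Hund": np.array([0.88, 0.12, 0.02]),
    "house": np.array([0.1, 0.9, 0.1]), "Haus": np.array([0.12, 0.88, 0.09]),
    "tree":  np.array([0.2, 0.1, 0.9]), "Baum": np.array([0.18, 0.11, 0.92]),
}

def cosine(a, b):
    # Similarity of two word vectors, independent of their length.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def translate(word, candidates):
    # Fill in the missing information: the most likely German counterpart.
    return max(candidates, key=lambda c: cosine(shared_space[word], shared_space[c]))

print(translate("dog", ["Hund", "Haus", "Baum"]))  # -> Hund
```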

Low costs and abundance therefore make prediction, and through it artificial intelligence, important not only to actors and tasks that in the past relied on inferring missing information from available information but also to new actors and new tasks that did not previously use this approach. On a larger societal scale, this has made prediction explicit in areas in which it previously played, at best, an implicit role. Examples include predictive policing, the use of recidivism scores in judicial sentencing decisions, predictive diagnoses in healthcare, or the prediction of career paths in recruitment. The reach and consequences of applications using artificial intelligence thus go far beyond narrowly computational applications. Accordingly, their inner workings and biases have grown into an important area of critical discussion and interrogation.

By being cheap and abundant, prediction—the filling in of missing information based on information available at the time—and through it artificial intelligence becomes a common feature of social life in ever more areas. This opens many new opportunities but also brings risks. For one, prediction always comes with a degree of uncertainty. Information inferred from prior information has a varying probability of being correct. Probabilistic reasoning does not come naturally to people but has to be trained. While past practitioners of prediction had to be trained in statistics to perform it, the outputs of prediction machines are readily available to each and every one of us. This means that many people will have to assess the value of these outputs with or without such training. Add to this that most such outputs do not come with a warning sign:

"Please be aware: predicted scores or outcomes fall within the following range."

Instead, the uncertainty in the outputs of artificial intelligence predominantly comes neatly packaged away in a nice box, suggesting certainty where there are in fact varying degrees of probability. Training in probabilistic reasoning and in the construction and use of the outputs of competing models thus becomes a core competency in a society shaped by automated prediction machines—or, in other words, artificial intelligence.
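A small sketch (invented numbers) of what this packaging hides: a classifier's hard labels look equally certain, while the underlying probabilities are not:

```python
# Two new cases can receive the same confident-looking label even though the
# model's underlying probability estimates differ sharply (invented data).
from sklearn.linear_model import LogisticRegression

X = [[1], [2], [3], [8], [9], [10]]
y = [0, 0, 0, 1, 1, 1]
model = LogisticRegression().fit(X, y)

for case in [[5.6], [10.0]]:
    label = model.predict([case])[0]          # the neatly packaged box
    prob = model.predict_proba([case])[0][1]  # the uncertainty inside it
    print(f"input={case[0]}: label={label}, P(label=1)={prob:.2f}")
# Both cases may come back labeled 1, but one prediction sits barely above
# a coin flip while the other is near certain.
```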

More fundamentally, there are limits to this new abundance in prediction. In fact, it only applies to specific tasks, namely those with outcomes that share a clear set of features:

  • Information needs to be machine readable;

  • Predicted outcomes need to happen very often;

  • Their tomorrows need to resemble their yesterdays; and

  • We agree that their tomorrows should resemble their yesterdays.

More on this now.

4.2.2. Machine readable

On the most basic level, for artificial intelligence to matter in a field, information about outcomes and contributing factors needs to come in machine readable format. This is clearly the case in fields where sensors or devices produce or capture digitally encoded information about contextual conditions or behavior. The growing availability of digital or digitized information sometimes hides the fact that many areas of social or political life are poorly documented.

For example, in the USA, a country known for its rather permissive data privacy laws, states collect and make available information about registered voters. The details vary between states, but these official records can include name, address, gender, race, and whether a person voted in a previous election. This information is highly valuable for political parties and campaigns in modeling whom to contact. But as Hersh [2015] has shown, the success of these models depends on the quality and resolution of the data provided by the states. The availability of machine readable data is therefore a precondition for the use and success of artificial intelligence.

The availability of machine readable data covering political processes or outcomes is bound to grow. Sanders and Schneier [2021] present an instructive account of how this translation could come about and how, through it, the opportunities for the automated prediction of missing information in politically relevant processes increase. One example they discuss is predicting the success of a bill based on known interventions by lobbyists or constituents. Their paper shows the preconditions as well as the opportunities for the increasing application of artificial intelligence in politics. At the same time, it illustrates the conscious efforts by many actors necessary to bring this about.

In the effort to make ever more areas of politics and social life machine readable, an inherent tension quickly emerges between the interest in making these areas more predictable and people's privacy concerns. Let's stay with voting to illustrate this tension. There is a legitimate interest in allowing parties to contact voters. The competition between different political viewpoints and proposed actions, manifested in parties, is crucial for democracy. At the same time, we do not want parties to get too good at this. We do not want parties to be able to clandestinely contact only people who are easy to mobilize or persuade; we want them to openly compete with their arguments for broad support. We also do not want a sustained imbalance in technical campaigning capabilities to emerge between parties, potentially predetermining elections. This tension between the increasing opportunities provided by making new areas of political and social life machine readable—and thereby available to artificial intelligence—and larger societal values and rights extends to other areas as well. The tension is inherent, and the associated conflicts are legitimate.

4.2.3. Abundant outcomes

For artificial intelligence to automatically provide analysts or academics with reliable predictions of an outcome of interest, it needs vast amounts of data covering the outcome variable of interest and its potential predictors. Take Google for example.

In its search business, Google receives millions of search queries each minute from all over the world. Google also knows which displayed result a user clicked after entering specific terms, thereby learning which search result appeared relevant to the user initiating the search. Both the outcome variable—clicking on a seemingly relevant link—and the input variable—search terms—are available to Google in abundance and therefore offer a fruitful object for AI-based predictions. By automatically identifying patterns in the past behavior of users—connecting, for example, their search terms, search history, or location with the search results they subsequently clicked on—Google can predict which results will be highly relevant to future users who exhibit similar behavioral patterns.

By correctly predicting which results a user is looking for when using specific terms or showing specific behavior, Google can beat its competitors by providing users with the relevant information while sorting out the irrelevant. This is already a nice feature in search. But this capability develops its full commercial potential in the display of ads, supposedly targeted to the interests or needs of people using the service. By reliably predicting which ads to display to whom, Google has a powerful selling proposition for ad customers, while at the same time not losing its search users through overly annoying or irrelevant ads. Of course, the example is overly simple, but it illustrates the kinds of problems for which AI-based predictions offer powerful solutions. Similar patterns hold for the display of ads by Google, Facebook, or Amazon, or the recommendation of products on Amazon or Netflix.

Unfortunately—or fortunately—many outcomes of interest in politics or democracy remain scarce, even in a big data world. Take voting behavior, for example. In most democracies, voting for a specific office takes place at regular, multi-year intervals; heads of state or government are typically elected every four or five years. This makes voting a sparse activity and therefore difficult to predict. While each and every one of us uses a search engine multiple times per day, we only vote every couple of years. Accordingly, vote choice is an outcome variable much scarcer than those for which machine prediction has proven stunningly useful.
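A simulated sketch of why this scarcity matters: the same learning procedure, fed election-scale versus search-scale numbers of observed outcomes, produces predictions of very different quality (all data invented):

```python
# Same learner, different amounts of observed outcomes: prediction quality
# on fresh cases improves with abundance (purely simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def accuracy_with_n_outcomes(n):
    # Five observed features; the outcome depends noisily on two of them.
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
    X_test = rng.normal(size=(10_000, 5))
    y_test = (X_test[:, 0] + 0.5 * X_test[:, 1] + rng.normal(size=10_000) > 0).astype(int)
    return LogisticRegression().fit(X, y).score(X_test, y_test)

for n in [20, 200, 20_000]:  # a handful of elections vs. a day of searches
    print(n, round(accuracy_with_n_outcomes(n), 3))
```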

While automatically predicting people's vote choice might be elusive, other electioneering tasks might turn out to be more promising. For example, Barack Obama's presidential campaigns in 2008 and 2012 modeled people's likelihood of donating a specific amount to the campaign after receiving an email asking for donations. The size of the campaigns' email lists, reportedly running into the millions, the frequency of donation asks, the frequency of small donations, and the campaigns' ability to run frequent experiments make this a task in electioneering well suited to the use of artificial intelligence. In other words, while many areas in politics might not come with abundant data, creative actors can reformulate specific tasks or elements in ways that lend themselves better to the analysis of large data sets. These new approaches to well-known tasks, such as collecting donations in campaigns, can open up politics to creative uses of artificial intelligence and sometimes even shift the balance of power toward actors willing and able to engage in this reformulation.

4.2.4. Stability over time

For the automated inference of missing information from available information, it is also important that the relationship between missing and available information remains stable. For one, this is a temporal problem: Does a phenomenon's future reliably resemble its past, or is the future a foreign country where they do things differently? More generally, it is also a problem of theory-free inference: information is inferred without an underlying causal mechanism being provided.

One famous example of the failure of data-driven prediction is Google Flu Trends. Only a few years ago, it was a popular example of the power of data-driven prediction. Google Flu Trends was an online service run by Google that used the occurrence of topical search terms in specific locations to predict local flu outbreaks. For a while, the service was surprisingly precise and quicker than official statistics. But only a few years in, its quality was found to deteriorate quickly. In a forensic account of the episode, Lazer, Kennedy, King, and Vespignani [2014] identified a shift in the function of Google's search field as a likely culprit, breaking the previously identified link between search terms and the flu. By suggesting search terms to users based on their initial input, Google changed the behavior of users, which in turn degraded the inference of missing information from this input. Google had changed the relationship between the information its models tried to predict and the information available to them.
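The failure mode generalizes beyond this episode. A simulated sketch (not Google's actual data or model): a model fitted while the link between searches and flu cases holds collapses once that link changes:

```python
# Simulated concept drift: a model trained while the input-outcome link
# holds degrades once that link shifts (invented numbers throughout).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Before: search volume tracks flu cases closely.
searches_old = rng.uniform(0, 100, 500)
flu_old = 2.0 * searches_old + rng.normal(0, 5, 500)
model = LinearRegression().fit(searches_old.reshape(-1, 1), flu_old)

# After: an interface change (say, autocomplete inflating searches) alters
# the relationship the model learned.
searches_new = rng.uniform(0, 100, 500)
flu_new = 0.8 * searches_new + rng.normal(0, 5, 500)

print("R^2 before shift:", model.score(searches_old.reshape(-1, 1), flu_old))
print("R^2 after shift: ", model.score(searches_new.reshape(-1, 1), flu_new))
```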

Another problem lies in the exclusively data-driven inference of information. Especially in data-rich contexts, correlations between variables abound. A correlation could indicate an unobserved causal link that might remain stable over time. In this case, predicting missing information based on available information found to be correlated in the past is feasible. But a correlation might also be the outcome of a random fluctuation—present one moment, gone the next. In this case, prediction would produce meaningless results. To know which correlations are meaningful and which are not, social science uses theories to provide testable hypotheses about why various indicators should be linked. This allows the careful modeling and testing of links between variables and their predictive power. Causal reasoning and causal inference attempt to determine which correlations can be treated as meaningful predictors and which are better ignored.
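A quick simulation makes the point: among enough unrelated candidate predictors, some will correlate strongly with any outcome purely by chance, and the "relationship" vanishes in a fresh sample:

```python
# With many candidate predictors and few observations, strong correlations
# appear by chance alone (all data here is pure random noise).
import numpy as np

rng = np.random.default_rng(42)
outcome = rng.normal(size=50)
predictors = rng.normal(size=(50, 1000))  # 1,000 variables, none related

corrs = [np.corrcoef(outcome, predictors[:, j])[0, 1] for j in range(1000)]
best = int(np.argmax(np.abs(corrs)))
print("strongest in-sample correlation:", round(corrs[best], 2))

# The same 'best' predictor against a fresh outcome sample: nothing remains.
outcome_new = rng.normal(size=50)
print("fresh-sample correlation:", round(np.corrcoef(outcome_new, predictors[:, best])[0, 1], 2))
```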

Of course, this does not mean that artificial intelligence should only look for connections its programmers thought of. As former Google CEO and executive chairman Eric Schmidt points out:

"You can think of AI as a large math problem where it sees patterns that humans can’t see. [...] With a lot of science and biology, there are patterns that exist that humans can’t see, and when pointed out, they will allow us to develop better drugs, better solutions."

Schmidt quoted in [Metz, 2021] pp. xiii, 182

Exclusively looking for connections people thought of would mean missing out on the very real opportunities of AI. Still, simply relying on connections identified by machines is just as limiting. Instead, people and AI need to interact successfully. This means that people have to critically interrogate both the output and the process through which the AI inferred information. Here, causal reasoning provides an important reality check on patterns identified automatically through data-driven procedures.

The successful use of artificial intelligence in inferring missing information thus depends on the stability of the relationship between the target information and the available information. As with the previous restrictions, this works well for specific tasks but fails for others. This being said, there also remains the question of whether the future should resemble the past.

4.2.5. Reinforcing structural inequalities

An additional challenge to the use of artificial intelligence in broader societal contexts is the question of unwanted bias in the outcomes of data-driven models. One crucial element in Western societies is change, especially the extension of rights and the inclusion of different groups with regard to their participation in society and the workplace. Over time, many societies strive to decrease discrimination and increase equality. In this regard, future behavior toward people, and the options afforded to them, should not resemble the past. In fact, many policies are consciously designed to break with past patterns of discrimination. Artificial intelligence and purely data-driven prediction—at least in their current form—have proven not to be allies in this.

Using data documenting people's characteristics, behavior, and trajectories in the past to infer the future behavior and trajectories of their counterparts risks replicating systemic inequalities and even structural discrimination. For example, Bolukbasi, Chang, Zou, Saligrama, and Kalai [2016] found that a prominent model underlying many services relying on automated natural language processing showed consistent evidence of gender bias. They showed that the model featured many biased associations. For example, when presented with the word pair "father" and "doctor", the model completed the input "mother" with "nurse". Why did it do this? By examining the statistical relationships between the vectors representing words' co-occurrences in a large corpus of news articles, the authors found that the combination "mother" and "nurse" appeared in an environment of words similar to that of "father" and "doctor". By relying on statistical relationships between words that document the outcomes of a society's past gender inequality, the outputs of artificial intelligence applications risk reinforcing said inequalities in the future. This is true even if a society consciously tries to intervene through policies designed to counter said biases and to establish more equal and less discriminatory behaviors and structures.
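The mechanics can be re-enacted in a few lines. The vectors below are constructed by hand to mimic the biased geometry Bolukbasi and colleagues found in actual word embeddings; only the analogy computation itself is standard vector arithmetic:

```python
# Toy re-enactment of the analogy test (hand-made vectors mimicking the bias
# found in real embeddings, not the actual word2vec model).
import numpy as np

vec = {
    # dimensions: [gender, medical, care-work]
    "father": np.array([ 1.0, 0.0, 0.0]),
    "mother": np.array([-1.0, 0.0, 0.0]),
    "doctor": np.array([ 0.6, 1.0, 0.2]),  # skews 'male' in biased data
    "nurse":  np.array([-0.6, 1.0, 0.8]),  # skews 'female' in biased data
    "pilot":  np.array([ 0.5, 0.0, 0.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Analogy: father is to doctor as mother is to ?
target = vec["doctor"] - vec["father"] + vec["mother"]
candidates = [w for w in vec if w not in ("father", "mother", "doctor")]
print(max(candidates, key=lambda w: cosine(target, vec[w])))  # -> nurse
```

Because the gender skew is baked into the vectors, the arithmetic faithfully reproduces it; the model predicts the past, bias included.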

Other cases of accidental AI bias following a similar logic include the over-policing of areas traditionally strongly associated with recorded crime [Ferguson, 2017] or sensors and classification programs failing to recognize women or members of racial minorities typically underrepresented in training data [Buolamwini and Gebru, 2018]. The growing use of artificial intelligence in many areas—such as healthcare, policing, judicial sentencing, or the roll-out of social services—has raised awareness of this inherent limitation and the associated potential dangers in the application of artificial intelligence.

These four conditions limit the application of artificial intelligence to politics and, accordingly, its impact on democracy. Let's recap: in order for data-driven predictions of missing information based on available information to matter in a given field, the following four conditions must hold:

  • Information needs to be machine readable;

  • Predicted outcomes need to happen very often;

  • Their tomorrows need to resemble their yesterdays; and

  • We agree that their tomorrows should resemble their yesterdays.

Accordingly, many fields in politics might not lend themselves directly to the use of artificial intelligence. This being said, many tasks within politics or the practice of democracy might be broken down into components that can be translated into prediction problems. This would make them promising targets for the application of artificial intelligence. We will discuss these tasks and the associated impact of artificial intelligence on democracy in the remaining sections.