3.2. Algorithms in politics and society¶
We have come to associate the term algorithm with computers. But in fact, the term simply refers to the process of pursuing a task by following a set of specific pre-defined steps. The term goes back to the latinized version of the name of the ninth-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī. In mathematics, the term has been used to denote a standardized sequence of steps for solving specific problems. An example of an algorithm in mathematics is Euclid's algorithm, which identifies the greatest common divisor of two numbers. Another, even more universal, example is the basic procedure of performing long division by hand.
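Euclid's procedure illustrates the idea well: it can be written down as a short, fully specified sequence of steps. A minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    by (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

The same steps could be carried out by hand with pen and paper; the computer merely executes them faster.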
But the term algorithm covers, more broadly, all situations that require the standardized solving of repeatedly encountered problems. In this sense, any recipe in a cookbook is an algorithm, providing a standardized step-by-step approach to solving a specific problem: preparing a given dish. Algorithms, in this sense, lie at the heart of many organizations, guaranteeing their proper and standardized workings. Examples include internal administrative guidelines within government bureaucracies, police departments, or large companies for addressing specific, repeatedly encountered tasks.
Currently, we tend to associate the term algorithm most prominently with computers and the automation of processes or decision making. Fundamentally, when used in the context of computing, the term means the same as in other areas, but the series of steps must be executable by a computer. Thomas H. Cormen, co-author of one of the leading textbooks on computer algorithms, accordingly defines a computer algorithm as:
"(...) a set of steps to accomplish a task that is described precisely enough that a computer can run it."
[Cormen, 2013], p. 1.
For this, algorithms need to be formulated within a programming language, a notation that a machine is able to follow and execute. In the words of mathematician Hannah Fry, computer algorithms:
"(...) are almost always mathematical objects. They take a sequence of mathematical operations - using equations, arithmetic, algebra, calculus, logic and probability - and translate them into computer code. They are fed with data from the real world, given an objective and set to work crunching through the calculations to achieve their aim."
[Fry, 2018], p. 8.
These sequences come in two types. They can be pre-defined by a programmer - so-called rule-based algorithms. Or they can be the result of machines figuring out the best series of steps to solve a given task on their own, for example the classification of images - machine-learning algorithms. The second category is the basis for the procedures we have previously discussed in the section on artificial intelligence.
Computer algorithms have proven immensely useful in the computer-assisted automation of mundane tasks, in computer-assisted decision making based on formally optimal parameters, and in the standardized and resource-efficient roll-out of services and decisions at scale. Algorithms come in many shapes and sizes. But in his book on the role of algorithms in journalism, communication scholar Nicholas Diakopoulos provides a helpful list of four categories of general problems that computer algorithms are strong at solving and accordingly are broadly used for: prioritization, classification, association, and filtering [Diakopoulos, 2019], p. 19-22.
In prioritization tasks, algorithms examine a set of options and rank them according to their likelihood of achieving the task pursued. Examples include the algorithms underlying search engines, which match your input terms to all potential results and rank them according to their likelihood of being relevant to you. This also includes the algorithms by which your navigation system of choice determines the fastest route between your current location and your destination. Chances are that any computer-assisted system putting a small sample of options before you while hiding others relies on prioritization algorithms. By sorting through available options and prioritizing which to put before us in decision situations, algorithms consistently shape the option space directly available to us and thereby provide a frame for subsequent actions.
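A prioritization task of this kind boils down to scoring options and surfacing only the top few. A minimal sketch (the options and relevance scores here are purely hypothetical):

```python
def prioritize(options, score, top_n=3):
    """Rank options by descending score and keep only the top_n,
    hiding the rest from the user."""
    return sorted(options, key=score, reverse=True)[:top_n]

# Hypothetical search results with made-up relevance scores
results = ["page_a", "page_b", "page_c", "page_d"]
relevance = {"page_a": 0.2, "page_b": 0.9, "page_c": 0.5, "page_d": 0.7}

print(prioritize(results, relevance.get))  # ['page_b', 'page_d', 'page_c']
```

Whatever falls below the cut-off simply never appears in the user's option space.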
Algorithms are also used in a wide variety of classification tasks. In classification tasks, algorithms decide on the likelihood of an object, person, or event falling into a pre-defined category. The software you use to sort your photos uses algorithms to determine the likelihood of a picture containing the face of a friend, your dog, or food. Algorithms on digital platforms try to determine whether you are a promising target for ad display. More gravely, credit card companies use algorithms to determine whether a given purchase is likely to be genuine or fraudulent. Insurers use algorithms to determine the risk category a potential customer falls into in order to offer them a fitting rate. In some jurisdictions, courts even use algorithms to determine whether a parolee falls into the category of likely reoffender. Classification tasks underlie a number of important processes determining the option space that people are afforded, be it by shaping what people see in digital communication environments or the options and rates they are given by the organizations and institutions they ask for services or come into contact with.
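At its simplest, such a classification task assigns a risk score and compares it to a threshold. A toy sketch of fraud screening (all features, weights, and thresholds here are invented for illustration):

```python
def fraud_score(purchase):
    """Combine a few hand-picked warning signs into a score between 0 and 1."""
    score = 0.0
    if purchase["amount"] > 1000:
        score += 0.4  # unusually large purchase
    if purchase["country"] != purchase["home_country"]:
        score += 0.3  # purchase abroad
    if purchase["hour"] < 6:
        score += 0.2  # unusual time of day
    return score

def classify(purchase, threshold=0.5):
    return "fraud" if fraud_score(purchase) >= threshold else "genuine"

print(classify({"amount": 2500, "country": "XY", "home_country": "DE", "hour": 3}))
# 'fraud'
```

Real systems learn such weights from data rather than hard-coding them, but the basic score-and-threshold logic is the same.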
Algorithms can also identify hidden associations between entities, such as objects, persons, or behaviors. By identifying shared features determining co-occurrence, shared patterns in co-evolution, or hidden signs of commonality, algorithms can identify similarities or associations between entities. Links identified this way can be used by companies to target advertisements or to suggest connections. Algorithmically identified associations can in turn be used in support of prioritization and classification tasks.
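A simple way to quantify such an association is to measure how often two entities co-occur, for example with the Jaccard index. A toy sketch with made-up shopping baskets:

```python
baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"bread", "milk"},
]

def jaccard(a, b):
    """Baskets containing both items, divided by baskets containing either."""
    has_a = {i for i, basket in enumerate(baskets) if a in basket}
    has_b = {i for i, basket in enumerate(baskets) if b in basket}
    return len(has_a & has_b) / len(has_a | has_b)

print(jaccard("bread", "butter"))  # both appear in 2 of 3 baskets
```

A recommender could then suggest butter to bread buyers on the basis of this association.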
The final category of tasks proposed by Diakopoulos is filtering tasks. Here, algorithms are used to filter meaningful from meaningless information, to separate the signal from the noise. Examples include voice recognition, where algorithms need to filter the voice carrying the relevant command from background noise, or computer vision for driverless cars, identifying relevant street signs by which to adjust vehicle speed or behavior. Clearly, this is a crucial task in allowing interaction between computer-assisted sensors and their environment.
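A minimal version of a filtering task keeps only readings that clear a threshold estimated from the data. A toy sketch with invented sensor readings:

```python
import statistics

# Hypothetical sensor readings: mostly background noise, a few real events
readings = [0.1, 0.2, 3.1, 0.15, 2.9, 0.05, 3.3]

noise_level = statistics.median(readings)  # robust estimate of the background
threshold = 3 * noise_level

signal = [r for r in readings if r > threshold]
print(signal)  # [3.1, 2.9, 3.3]
```

Real voice-recognition or computer-vision pipelines use far more sophisticated filters, but the underlying task, separating signal from noise, is the same.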
This short list of tasks already shows the plurality of the uses of algorithms and their hidden ubiquity in our lives. It comes as no surprise, then, to see computational algorithms being used in ever more areas. This includes fields closely connected to and deeply reliant on digital technology - such as computational modeling, as in weather or climate forecasts, or the display of information or ads on digital platforms. But it also includes fields that at first seem not directly connected to computation or digital media - such as cultural production, finance, insurance, or policing.
Finance is an example of an industry heavily shaped by algorithms. In fact, hopes for the replacement of humans and the automation of financial markets go back at least to the early 1970s [Black, 1971]. Today, algorithms support trading in various areas of financial markets. Most prominent here is one type of algorithmic trading, high-frequency trading (HFT). In his review of the economic literature on algorithmic trading, Albert J. Menkveld defines algorithmic traders as:
"(...) all traders who use computers to automatically make trade decisions. An example (...) is one who aims to minimize the price impact of a large order that has to be executed. Such an order is typically executed through a series of smaller child orders that are sent to the market sequentially."
[Menkveld, 2016], p. 8.
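The execution strategy Menkveld describes, splitting a large parent order into smaller child orders sent to the market sequentially, can be sketched schematically (the sizes here are hypothetical, and real execution algorithms also time and price each child order):

```python
def split_order(total_shares, max_child_size):
    """Split a large parent order into child orders no larger than
    max_child_size, to be sent to the market one after another."""
    children = []
    remaining = total_shares
    while remaining > 0:
        child = min(max_child_size, remaining)
        children.append(child)
        remaining -= child
    return children

print(split_order(10_000, 3_000))  # [3000, 3000, 3000, 1000]
```

Executing many small orders over time, instead of one large order at once, is what reduces the price impact the quote refers to.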
High-frequency traders (HFTs) are a particular subgroup of algorithmic traders. HFTs run on "extremely fast computers running algorithms coded by traders who trade for their own account" [Menkveld, 2016], p. 2. This category of algorithmic trader has received much attention and press. Back to Menkveld:
"(...) the key distinguishing feature of HFTs is their relative speed advantage for trading a particular security. One reason for such an advantage is information technology that enables them to quickly generate signal from the massive amount of (public) information that reaches the market every (milli)second. Examples are press releases by companies, governments, and central banks; analyst reports; and trade information, including order-book updates not only for the security of interest but also for correlated securities."
[Menkveld, 2016], p. 4.
Algorithmic trading is often associated with public fears about the loss of control over algorithms and markets, or unintended consequences of uncontrolled algorithms running wild. Yet, as reviews of the literature point out, algorithmic trading can also serve investors by removing friction from trading [Menkveld, 2016], p. 19-20. The impact of algorithms on finance is therefore less clear-cut than the public debate or popular culture might suggest.
Another example of a field increasingly shaped by algorithms is policing. In the USA, various police departments have started to rely on algorithms. This includes decisions about which areas to patrol more heavily, given an algorithmically determined heightened likelihood of criminal activity. But it also extends to decisions about individuals, for example by deciding algorithmically whether specific individuals have a heightened risk of criminal behavior and therefore merit greater police attention. While potentially offering police departments a better way to allocate scarce resources, these developments have come under strong critique regarding the workings of the underlying algorithms and associated biases against historically marginalized communities.
Overall, the increasing use and presence of algorithms in these and other fields has given rise to broad concerns.
3.2.2. Concerns: Opaqueness, fairness, and unintended consequences¶
As we have seen, computer algorithms are used in ever more societal areas. This raises broad concerns. While in principle algorithms provide a set of clearly defined steps to solve a given problem, their current uses have raised the question of whether this is still the case. Accordingly, concerns have emerged that warn against the dangers of the increasing powerlessness of people in the face of largely opaque algorithmic cages, or of algorithmically induced crashes or breakdowns. We can group these concerns into three categories:
Opaqueness of rules and procedures programmed into algorithms and by which they are developed and deployed;
Fairness of the results of algorithmic decisions in face of known and unknown biases;
Unintended consequences of algorithms rolled out in scale.
It might come as a surprise that a concept standing for the sequential working through of a list of predefined steps to solve a given problem has become a byword for opaqueness. But this is exactly what happened to algorithms. Their uses in politics and society are opaque - be it by design or accident - and so are the specific rules they follow in their approach to solving tasks.
The legal scholar Frank Pasquale has found an evocative image for these concerns: the black box. In his book The Black Box Society, he discusses the opaqueness surrounding the uses of data and algorithms in assessing people's reputation through their data shadows, shaping their option space by providing answers to their searches, and providing or denying them financial opportunities through financial service providers. Pasquale finds that various actors work actively on keeping their uses of algorithms, and their inner workings, secret. Most troubling is the denial of transparency, supposedly to protect trade secrets, by companies providing algorithmic decision-support systems employed to provide or deny people opportunities, be it in the provision of credit or in granting or denying them parole.
Beyond this active work on keeping the uses of algorithms in society opaque for security, commercial, or legal reasons, opaqueness can also emerge through the use of machine-learning algorithms. Here, algorithms are set to solve a problem but, as we have seen in the section on artificial intelligence, they do so by identifying the most promising sequence of steps to the solution themselves. While clearly successful in many cases, the uncertainty about just how the algorithm goes about solving the task creates opaqueness.
Once algorithms shape people's option spaces across different societal fields - such as information spaces, finance, or the legal system - meaningful transparency about their uses, inner workings, and effects becomes crucial. Meaningful transparency encompasses information about which actors use algorithms in which fields and to achieve which goals. This also means that these actors are clear about the rules, the sequence of steps, they programmed the algorithm to pursue. This goes double if they outsource these uses to vendors providing them with algorithmically supported systems or solutions. It is also important to know what the outcomes of the algorithms are. Do they conform with expectations, or are they creating biased or unintended results? Finally, people subject to algorithmic shaping need places to appeal algorithmic decisions and sources where they can find out why the decisions went the way they did. Only through regular and transparent audits and ongoing critical debate can the sense of opaqueness created by the use of algorithms be countered and the potential of algorithms to improve society be realized.
If this fails, algorithms threaten to create an algorithmic cage whose circumference shifts for unknown reasons and by invisible forces. Without meaningful transparency, this threatens to become a digital version of the dark comedies and nightmares about unresponsive, fickle, and opaque but merciless structures charted by Franz Kafka in his novels Der Process (1925) and Das Schloss (1926). Like the structures charted by Kafka, an opaque algorithmic cage would suddenly snap shut on its inhabitants without offering reasons or meaningful options of appeal. This clearly cannot be the goal for the application of algorithms in politics and society.
Once algorithms start shaping people's option spaces, the question of the fairness of their decisions emerges. Algorithms make, or at least support, decisions about people in areas as diverse as credit, job applications, parole options, or welfare benefits. This has given rise to a vibrant academic subfield trying to establish whether algorithms discriminate against specific groups. This concern has brought about a continuously growing literature looking for evidence of algorithms being biased against specific groups: in other words, examining whether algorithms treat members of specific groups systematically differently and unjustly worse than people who otherwise resemble them but belong to another group.
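One simple diagnostic used in this literature is to compare the rate of favorable decisions across groups, often called demographic or statistical parity. A minimal sketch with made-up decision data:

```python
# Hypothetical (group, decision) pairs; decision 1 = favorable outcome
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def favorable_rate(group):
    """Share of favorable decisions among members of the given group."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = favorable_rate("A") - favorable_rate("B")
print(f"parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap of 0.50
```

A large gap does not by itself prove discrimination; whether it does depends on exactly the kinds of contextual questions discussed in this section.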
In their recent review of the field, Shira Mitchell and colleagues [Mitchell, Potash, Barocas, D'Amour, and Lum, 2021] identify three dimensions on which one can focus the forensic audit for algorithmic biases:
the policy question;
the statistical learning problem; and
axes of fairness and protected groups.
The first level of analysis identified by Mitchell, Potash, Barocas, D'Amour, and Lum is the policy level:
"Much of the technical discussion in algorithmic fairness takes as given the social objective of deploying a model, the set of individuals subject to the decision, and the decision space available to decision-makers who will interact with the model’s predictions. Each of these is a choice that - although sometimes prescribed by policies or people external to the immediate model building process - is fundamental to whether the model will ultimately advance fairness in society, however defined."
The overarching policy goal associated with the deployment of an algorithm can be seen as contributing to social equality or to discrimination. Accordingly, here might lie the first cause of algorithmic unfairness. Similarly, fairness is also always a question of reference groups. Are we examining the fairness of outcomes only within a specific subpopulation - such as prison inmates - or within the general population? Results will vary, as will the choice of "correct" reference groups. Finally, Mitchell, Potash, Barocas, D'Amour, and Lum point to the set of actions available to decision makers, their decision space, as a source of potential unfairness of outcomes. While the abstract modeling of fairness might consider a wide range of options available to policy and decision makers, the actual decision space within whose limits an algorithm is deployed might be much narrower. Accordingly, a mathematically correct fairness distribution might be correct in theory but not applicable to a specific situation.
A more technical perspective on algorithmic fairness treats it as a statistical learning problem:
"Mathematical definitions of fairness generally treat the statistical learning problem that is used to formulate a predictive model as external to the fairness evaluation. Here, too, there are a number of choices that can have larger social implications but go unmeasured by the fairness evaluation."
Here, Mitchell, Potash, Barocas, D'Amour, and Lum point to biases emerging from biased data sets used to train or run an algorithm, and from the model itself. This is a very rich area of work in computer science and statistics, developing processes for identifying and correcting biases in data sets while also trying to de-bias existing models.
The final dimension of importance identified by Mitchell, Potash, Barocas, D'Amour, and Lum is the definition of which groups to consider as protected, or the axes of unfairness. In some cases, these might run along gender or racial lines. And in some contexts specific outcomes might count as unfair, while in other contexts the same outcomes might seem fair.
The discussion about the fairness of the outcomes of algorithms deployed in society is a prominent topic in computer science research as well as in the social sciences focused on the role and impact of digital media in society. It is a topic that is bound to grow in prominence with the growing use of algorithms in society and more pervasive awareness of them and their consequences.
3.2.2.3. Unintended consequences¶
The last category of concerns we will focus on comprises concerns about potential but unintended consequences of algorithms. These typically come in three forms:
Concerns about the alignment between the goal of the programmer and the goals pursued by the algorithm;
The effects of rolling out algorithms at scale; and
Fears about reinforcing spirals or feedback loops once algorithms are deployed in complex systems.
In his book The Alignment Problem, the science writer Brian Christian charts a specific challenge underlying any design of a rule-based system: how can designers ensure that the rules they develop and implement in a system, when followed to the letter, lead to the results they wanted to achieve:
"(...) we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete - lest we get, in some clever, horrible way, precisely what we asked for."
[Christian, 2020], p. 12f.
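A toy illustration of getting "precisely what we asked for": suppose we state the objective as "maximize accuracy" on imbalanced data. A degenerate rule then satisfies the stated objective while missing the goal we actually meant (the data here is invented):

```python
# 95 healthy cases, 5 sick cases; the rare sick cases are what we care about
true_labels = ["healthy"] * 95 + ["sick"] * 5

# Degenerate rule: always predict the majority class
predictions = ["healthy"] * 100

accuracy = sum(p == t for p, t in zip(predictions, true_labels)) / len(true_labels)
print(accuracy)  # 0.95 - the stated objective is met, yet no sick case is detected
```

The instructions were precise and fully complied with; they were simply incomplete with respect to what we wanted.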
This is what Christian calls the alignment problem:
"How to prevent such a catastrophic divergence - how to ensure that these models capture our norms and values, understand what we mean or intend, and, above all, do what we want - has emerged as one of the most central and most urgent scientific questions in the field of computer science."
[Christian, 2020], p. 13.
The alignment problem reinforces some issues we have already encountered, such as the need for transparency regarding the uses, mechanisms, and effects of algorithms. But it also points us to a further - potentially more troubling - issue. What if we have a rule-based system that is transparent and on the face of it seems successful in achieving its goals, but unobserved by the programmers and the auditors is misaligned with some unspoken but crucial values or goals of the people running it? Oversights like these can mean that algorithms seem successful in the pursuit of their goals but in that pursuit violate values or principles unknown to them. In artificial intelligence research, this problem is called value alignment.
The alignment question matters for any rule-based system, but it matters especially once algorithms are deployed at scale. One strength of algorithms is their ability to provide automated and standardized decisions at scale. Once implemented, it does not matter to an algorithmic decision system how many cases it handles. Algorithms diagnosing the likelihood of cancer based on medical imagery do not care whether they work on 100 or 10,000 cases. Similarly, algorithms recommending music tracks, news items, or movies for your early evening break do not care for how many people they provide recommendations. But the scale at which algorithms are deployed matters for auditors trying to identify outcomes that point toward an underlying alignment problem. Scale also matters for support staff fielding calls by people subject to these decisions trying to complain or to understand how their option spaces were shaped - and potentially restricted - by algorithms.
The question of scale also matters with regard to potential but unintended consequences of the application of algorithms in complex socio-technical systems. Algorithms are a technological intervention in social systems. While sometimes this happens in clearly delineated areas with little chance of spill-over effects, running algorithms at scale in only weakly delineated fields brings serious risks of unintended effects, for example through spillover or through reinforcing feedback loops. Examples include algorithms designed to increase engagement in digital communication environments. Should the same principles that shape the algorithmic amplification of content designed to entertain people also shape the amplification of political content or news? Or should we expect unintended side effects? Introducing technological interventions at scale in complex social systems can carry large unintended consequences and needs to be approached and monitored with caution.
These worries need to be taken seriously and addressed, not least because computationally automated decisions based on algorithms are now prevalent in various areas of society. Accordingly, covering the uses, effects, and regulation of algorithms in various areas of society is an important field of scientific activity going forward - not least because, as we have seen with regard to algorithmic trading in finance, popular accounts might have little to do with the actual uses or effects of algorithms.