4  Algorithms

Making sense of data, acting on them, and using them to predict the future requires algorithms. Algorithms are sets of steps by which to solve pre-defined tasks. Originally, the term algorithm referred to a predefined set of steps by which people would solve a given mathematical problem. But today the term appears predominantly in connection with computing. Algorithms allow computers to perform tasks. They are crucial to the advances in computer-enabled analysis and automation.

The uses of algorithms vary widely. Algorithms can identify patterns in data, surfacing hidden connections between phenomena or actions. For example, expressing interest in a given brand on social media might be linked to support for a specific party. Or being informed about a supposed political crisis might make people donate money to a political party. Potential patterns abound, and algorithms help identify them.

Similarly, algorithms can also be used to automate actions. Based on rules learned from past data, algorithms can automatically present people with specific options. For example, if in the past people who expressed a liking for a given brand also went on to support a given party, an algorithm could automatically present similar people with content from that party without them having to look for it first. Or, if in the past some people reacted to a crisis prompt in an email with a donation, while others reacted to a personal request by the candidate’s spouse, an algorithm can automatically decide which recipient to present with which information in an email blast in order to drive donations.

Algorithms thus help us understand the world better and interact with it more efficiently. But algorithms are also a source of worry. How can we be sure that patterns identified from data documenting the past should be replicated in the present or future? How can we know what the algorithm learns from data? And how can we be sure that the actions taken by algorithms conform with the goals for which we have designed and deployed them?

Studying the use and impact of algorithms in society means being aware of their technical workings but also of their uses in different societal fields. How are algorithms designed? What tasks are they deployed to solve? And how are their uses and outcomes monitored and interrogated, not only by those using them but also by those affected by them? The study of algorithms therefore has mathematical, computational, and social components. This chapter will provide the reader with an overview of the technical as well as the social components in the study of algorithms.

4.1 What are algorithms?

We live in a world shaped by algorithms. In digital communication environments, algorithms decide what information we see, which music or films are suggested to us, or which people we are invited to connect and interact with. Algorithms decide which contributions to digital communication environments are banned and which are allowed. Algorithms also shape our world beyond its merely digital part. They assign people individual risk scores, be it for credit default, fraudulent behavior, abuse, or recidivism. And algorithms act. Algorithms buy and sell stock, and they decide whether a self-driving car should stop or accelerate. Algorithms shape the information we see and the options we have, and they confront us with actions we have to contend with. No wonder, then, that algorithms, their uses, workings, and effects face increasing scrutiny and contention.

The term algorithm has become a general catch-all for the risks and fears associated with the power computers hold over individual lives and in societal fields. The term has become associated with opaque and incontestable mechanisms that allow governments and companies to control people or – worse yet – that, through runaway technological change, have evolved beyond the control of even those who develop or deploy them. But these far-reaching fears and generalizations tend to obscure the actual nature, workings, and effects of the use of algorithms. Instead of inspiring and enabling critical reflection, evaluation, and improvement of algorithms, these accounts risk blanket rejection and a sense of helplessness. Turning widespread interest or concern into something more productive means looking at algorithms more closely and analytically.

First, we need to be clear about the term and its origins. We have come to associate the term algorithm with computers. But in fact, the term goes back far beyond the age of computation. Algorithm is the Latinized version of the name of an important mathematician from the Middle Ages, Muḥammad ibn Mūsā al-Khwārizmī. Al-Khwārizmī was a ninth-century mathematician from a region of Central Asia, south of the Aral Sea in what today is Uzbekistan and Turkmenistan. He was the author of the earliest known book on algebra. Once the book was translated into Latin in the 12th century, the name of its author became synonymous with the practice of mathematical calculation following routine arithmetic procedures. Accordingly, in mathematics the term algorithm refers broadly to “any process of systematic calculation, that is a process that could be carried out automatically” (Chabert, 1994/1999, p. 2). This of course includes processes that were defined well before there was the term algorithm, such as Euclid’s algorithm, which describes a routine to identify the greatest common divisor of two numbers, and those that came after, such as Euler’s method for solving differential equations. More fundamentally even, we can think of basic mathematical routines, such as performing long division by hand, as algorithms.1

But we can think even more broadly about the term. The term algorithm can cover all procedures that provide a standardized set of steps for solving a specific, repeatedly encountered problem. In this sense, any recipe in a cookbook is an algorithm, allowing for a standardized step-by-step approach to solving a specific problem, such as preparing a given dish. Algorithms, in this sense, lie at the heart of many standardized practices of modern life, be it cooking, sports, or the pursuit of hobbies. They are an essential part of teaching trades, skills, and professions. And they are a crucial feature of organizations, guaranteeing their proper and standardized workings. Examples include internal administrative guidelines within government bureaucracies, police departments, or large companies for addressing specific, repeatedly encountered tasks. Algorithms, in this sense, are a core feature of modernity, standardization, and professionalization.2

Currently, we tend to associate the term algorithm most prominently with computers and the automation of processes or decision making. Fundamentally, when used in the context of computing, the term means the same as in other areas, except that the series of steps to solve a task must be computable by a machine.

Definition: Algorithm

“The modern meaning for algorithm is quite similar to that of recipe, process, method, technique, procedure, routine, rigmarole, except that the word ‘algorithm’ connotes something just a little different. Besides merely being a finite set of rules that gives a sequence of operations for solving a specific type of problem, an algorithm has five important features” (Knuth, 1968/1997, p. 4).

For Knuth (1968/1997) these features are:

  1. Finiteness;
  2. Definiteness;
  3. Input;
  4. Output;
  5. Effectiveness.3

According to this definition, an algorithm must be finite. In other words, it must complete after a finite series of steps. These steps need to be clearly and unambiguously specified; they need to be definite. An algorithm works on inputs and returns outputs, which stand in a specified relation to the inputs. Finally, following Knuth (1968/1997), algorithms should be effective in that each of the steps specified to solve a task should in principle be doable by a person using pen and paper. These features can be used in the analysis of algorithms, for example to compare how well different algorithms translate inputs into outputs. An important element here is the number of steps necessary to perform the task, with a smaller number of steps being more efficient and demanding less computing time.
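To make the comparison by number of steps concrete, consider the following minimal sketch in Python (an illustration with invented values, not an example from the literature). It counts the comparisons two common search procedures need to find a value in a sorted list: a front-to-back scan and a procedure that repeatedly halves the search interval.

```python
# Illustrative sketch: counting the comparison steps two search procedures
# need to find a value in a sorted list of 1,000 numbers.

def linear_search_steps(values, target):
    """Check every element in order; return the number of comparisons made."""
    steps = 0
    for value in values:
        steps += 1
        if value == target:
            break
    return steps

def binary_search_steps(values, target):
    """Repeatedly halve the search interval; return the number of comparisons made."""
    low, high, steps = 0, len(values) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if values[mid] == target:
            break
        if values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return steps

sorted_values = list(range(1000))
print(linear_search_steps(sorted_values, 873))  # 874 comparisons
print(binary_search_steps(sorted_values, 873))  # 10 comparisons
```

Both procedures solve the same task and return the same answer, but the second needs far fewer steps; this is the kind of difference the analysis of algorithms makes visible.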

Algorithms lie at the heart of computation. They are the basis for any computational operation, and their discussion can rightly focus on the math, expression in code, and efficiency.4 But algorithms are increasingly used to solve tasks that are not restricted to technical contexts but also have social consequences.5 Algorithms shape digital systems that directly impact or interact with people. Accordingly, in their analysis we have to move beyond merely technical questions to social, psychological, and structural questions. For this, we have to turn to the fields algorithms are used in, the tasks they are set to solve, their direct effects on people, and their indirect effects on the societal fields and social contexts they are deployed in.

Computational algorithms are key to translating data into insight and action. In Chapter 3 we saw how data make the world readable, offer insights about regular and potentially causal connections between entities, and allow actors to plan and adapt for the future. Algorithms can build on these potentials and realize them. In fact, given the massive scale of newly available data, the automation of analysis and action through algorithms becomes a precondition for doing so. It is no surprise, then, to find computational algorithms used in a wide variety of contexts in which they shape and impact social and individual life. This variety is too broad to cover comprehensively here. However, we can group their uses into three categories based on different goals, workings, and concerns: the use of algorithms for insight, for decision support, and for action.

4.2 Algorithmic insight

Algorithms are widely used to provide insight. Data do not speak for themselves; data-enabled insight depends on analysis. This is done with algorithms designed to process and analyze data to uncover patterns, trends, and hidden relationships. Examples include data visualization tools, data mining techniques, and certain machine learning models used in exploratory data analysis. Algorithms like these are essential in sectors like research, business intelligence, and academic studies, where raw data must be transformed into meaningful insight.

While there are many different kinds of algorithms for data analysis, it is helpful to differentiate between two approaches:

Many algorithms transfer mathematical calculation procedures into computer-readable routines. Programmers provide computers with a pre-defined series of steps for solving tasks in data analysis. This allows computers to automatically perform data analysis on vast datasets and to provide insights. This includes data preprocessing, sorting, searching, clustering, and pattern recognition. Since all steps and analytical procedures are pre-defined, these approaches are reliable and comparatively easy to explain but depend on the quality of the pre-defined steps and rules.

Example: Identifying campaign donors, K-Nearest Neighbors

Let’s assume a political campaign has a dataset with information about people. It is interested in gaining insight into which of the people the campaign comes into contact with are likely donors. One easy procedure that allows this is the K-Nearest Neighbors algorithm (KNN).6 Roughly speaking, KNN does the following:

The campaign has a dataset with information about people, such as their age, income, the number of times they have voted in the past, and whether they have recently donated. Now, for each new person entered into the campaign database, KNN estimates whether they are likely to donate by looking at a small set of people in the dataset (let’s say three) who resemble them with regard to characteristics the campaign has information about; in our case these would be age, income, and the number of times they have voted in the past. If most of these similar people have donated, the person of interest is likely to be open to donating in the future. If not, not.
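A minimal sketch of this logic in Python might look as follows. The records, numbers, and field names are invented for illustration; a real application would, among other things, scale the features first so that income does not dominate the distance calculation.

```python
# Illustrative sketch of the KNN logic described above (invented records).
# Features: age, income, times voted in the past. Label: has donated.
import math

past_records = [
    # (age, income, votes_cast, donated)
    (34,  42_000, 2, False),
    (51,  88_000, 6, True),
    (47,  75_000, 5, True),
    (29,  38_000, 1, False),
    (62, 105_000, 8, True),
    (45,  60_000, 4, False),
]

def distance(person_a, person_b):
    """Euclidean distance between two feature vectors (age, income, votes)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(person_a, person_b)))

def likely_donor(new_person, records, k=3):
    """Look at the k most similar past contacts and take a majority vote."""
    neighbors = sorted(records, key=lambda r: distance(new_person, r[:3]))[:k]
    donors_among_neighbors = sum(1 for r in neighbors if r[3])
    return donors_among_neighbors > k / 2

# New contact: 50 years old, income of 80,000, voted five times.
print(likely_donor((50, 80_000, 5), past_records))  # True: most neighbors donated
```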

Alternatively, one could use machine learning algorithms.7 Machine learning algorithms also operate based on a set of pre-defined steps, but many introduce stochastic and probabilistic elements during their execution. This incorporation of randomness means they may not always yield identical outputs when executed multiple times with the same input, making them less deterministic in nature. Machine learning algorithms tend to be more data intensive than other approaches, but they are also more responsive to unforeseen patterns in data and to temporal shifts. On the other hand, due to their stochastic and probabilistic elements, they can be hard to interpret and their outputs difficult to explain or assess, leaving some uncertainty with regard to their use in real-world scenarios.

Example: Identifying campaign donors, machine learning

Let’s stay with our previous example but now look at how it could be solved through a machine learning algorithm. Let’s say a Random Forest algorithm.8 The campaign begins by collecting a comprehensive dataset on past donor behavior. After ensuring that the data is cleaned, relevant features are selected and prepared appropriately. The campaign randomly divides the dataset into a training set (e.g., 80% of all records) and a test set (e.g., 20% of all records). Using the training set, a Random Forest model is trained to recognize patterns and characteristics that are indicative of donation behavior. The model’s feature importance ranking is analyzed to understand which donor characteristics are most predictive of donation behavior. The trained model’s performance is evaluated on the test dataset using appropriate metrics to ensure it predicts accurately and generalizes well to unseen data. Once satisfied with the model’s performance, the campaign can then use it to predict the likelihood of new entries to the dataset being potential donors based on their characteristics.
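In code, this workflow could look roughly like the following sketch, here using the scikit-learn library. The file name and column names are hypothetical placeholders; the 80/20 split follows the example above.

```python
# Illustrative sketch of the workflow described above, using scikit-learn.
# File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

data = pd.read_csv("supporters.csv")              # hypothetical campaign records
feature_columns = ["age", "income", "votes_cast", "emails_opened"]
X, y = data[feature_columns], data["donated"]     # 1 = has donated, 0 = has not

# 80% of records for training, 20% held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Which characteristics does the model rely on most?
for name, importance in zip(feature_columns, model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Does the model generalize to unseen records?
print(classification_report(y_test, model.predict(X_test)))

# Score new contacts: estimated probability of being a likely donor.
new_contacts = pd.read_csv("new_contacts.csv")    # hypothetical new entries
print(model.predict_proba(new_contacts[feature_columns])[:, 1])
```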

Both traditional and machine learning algorithms offer valuable insights, yet they possess distinct strengths and limitations. Traditional algorithms are particularly helpful for datasets with a limited number of features per observation, or low dimensionality. In such scenarios, the design and application of classical statistical models are straightforward, and the simplicity of these algorithms often suffices. With only a limited number of variables, there is minimal benefit from the complexity of machine learning algorithms.

On the other hand, for high-dimensional datasets with numerous features per observation, traditional algorithms quickly reach their capacity. It’s in these complex scenarios that machine learning algorithms shine, leveraging their ability to handle vast amounts of data.9 However, these algorithms are resource-intensive to train. Their often stochastic and probabilistic nature can make them challenging to interpret, leading to potential skepticism and criticism from the public. For tasks requiring high reliability and transparency, the complexities and opacities of some machine learning models may be less suitable.

Actors in various fields rely on algorithms to provide them with insight. Examples include campaigning, in which parties use data and algorithms to identify likely voters and contributors and to model likely voting behavior.10 Another example is the use of data by newsrooms to identify trending topics and features of successful stories.11 In these and other cases, the use of algorithms to generate insight not only changes the behavior and opportunities of those using them; it can also lead to a reconfiguration of structures, organizations, and institutions in the field around the new opportunities and demands associated with their realization.

Example: Algorithms, Obama, and the aftershocks

Famously, the US presidential campaigns of Barack Obama in 2008 and 2012 relied heavily on algorithms to generate data-enabled insight about voters, donors, and volunteers. These insights were so important that they changed the traditional organizational structure of political campaigns and put digital and analysis specialists at the core of the decision-making group.12

Key to these adjustments was the power of data- and algorithm-enabled practices to generate insight about potential donors and to provide ways of maximizing their contributions to the campaign. These successes generated a whole industry of donor analytics and became crucial to campaign practices within the Democratic Party across the ticket.13 But the heavy reliance on donation requests in campaign communications with supporters might over time have contributed to a shift in the relationship between party and supporters. Sifry (2023) argues that this practice has led to a largely transactional and commodified relationship between the party and its supporters, severely reducing the resonance of its progressive claims and over time contributing to donation fatigue.

This is an example of how, over the course of a few campaign cycles, innovative early uses of algorithms for insight provided a competitive advantage for early adopters. The public perception of the contribution of algorithmically gained insights to electoral success created a large set of imitators who adopted the same techniques and approaches. This in turn led to adaptation and changes in the behavior of people, rendering formerly successful practices moot. If we want to understand the use and impact of algorithms, we therefore must move beyond the narrow analysis of the workings of algorithms and their uses and take a broader look at societal fields and at action and adaptation over time.

Of course, just because an algorithm provides a result, that does not make the result true or useful. In interpreting results provided by algorithms, people need to remain aware of potential limitations and sources of error. This includes translating the general limitations of quantification introduced in Chapter 3 to the specific contexts algorithms are used in. More specifically, it also includes critically interrogating whether the steps in the analysis reflect both the characteristics of the underlying data and the problem they are supposed to solve.

4.3 Algorithmic support

Algorithms are also used to provide suggestions and advice. This includes algorithms that suggest courses of action or scenarios about future developments based on regularities identified in past data. Examples include algorithms that recommend content of potential interest to users of digital media platforms, algorithms that advise police forces on where to expect a concentration of criminal activity, or algorithms that advise doctors on whether specific symptoms indicate a specific disease. Algorithms play a crucial role in decision-making processes across various sectors, helping individuals and organizations make informed choices by offering data-driven suggestions.

In discussing these algorithms, their uses, effects, and evaluation, it is helpful to differentiate between algorithms providing advice and suggestions to experts and those providing advice and suggestions for lay people and users of digital communication environments and services.

4.3.1 Algorithmic support for experts

In a growing set of professional fields, algorithms support experts by providing assessments, prognoses, and scenarios. Algorithms promise to offset some of the vagaries and inefficiencies of human decision making and analysis of evidence.

Ideally, decision making in professional and institutional contexts aims for fairness to those concerned, consistency across cases, comprehensive accounting for the available evidence, and explainability. But even trained experts can fall short of these expectations. For example, psychological biases can unduly influence decisions of trained decision makers. Cognitive shortcuts, such as heuristics, can fail decision makers in specific contexts and lead them astray. Also, people can struggle to keep track of all the evidence relevant to a decision.14 These are some of the limitations of human decision making that can lead to inefficient, unfair, or wrong decisions. Here, algorithmic and data-enabled decision support can help.

Algorithms can support decision making by predicting expected outcomes based on available data. Using rules provided by modelers or rules based on autonomously identified regularities in past data, algorithms model patterns, processes, behaviors, and outcomes of interest. They then can predict the likelihood of events happening or entities falling in a given category based on these models. This makes algorithms useful in various areas.

In medicine, computational algorithms such as IBM’s Watson for Oncology assist physicians by predicting the likelihood of a patient suffering from a disease based on given symptoms or test results and by suggesting treatment plans with a greater likelihood of success than alternatives.15 In finance, lenders and insurers use credit scoring models to assess the creditworthiness of individuals or businesses.16 In engineering, computer-aided design (CAD) systems offer suggestions on design optimization or error correction.17 In biological research, algorithms propose molecular structures for new potential drugs or materials or offer insights into genetic variants and their potential implications.18 In climate research, algorithms help in the development of climate models and the downscaling of general expectations to specific geographical areas.19

In these cases, experts are supported by algorithmic systems that suggest courses of action, diagnoses, or scenarios for given cases or moments. Algorithms provide experts with scores indicating whether people or other entities fall into a relevant category, for example whether they are likely voters or likely to default on a loan. Or they provide experts with scenarios of likely future developments or outcomes. Sometimes these algorithmic assessments rely on one model. Sometimes they rely on multiple models and provide experts with different outcomes to compare and choose from.

In general, the goal of algorithmic decision support for experts is to increase the efficiency and speed of decision making, especially in time-sensitive or resource-starved environments, such as medical diagnosis or the criminal justice system. In other cases, algorithmic support is more about providing access to a greater evidence base, to different models and different outcomes, and about handling large amounts of data. This is not so much about efficiency as about giving practitioners better access to insight based on greater troves of information made accessible through models.

The positive impact of algorithmic support systems has a counterweight in a set of concerns. If algorithms are trained on biased data, they will produce biased outcomes. For instance, if a hiring algorithm is trained on historical company data that favors one gender over another, it might reproduce that bias in its recommendations.

Many advanced algorithms, especially deep learning models, are often treated as black boxes because their decision-making processes are not easily interpretable. This lack of transparency can make it challenging to trust or validate the advice provided. Experts might become overly dependent on algorithmic advice and neglect human intuition or expertise. This is particularly concerning in fields where human judgment is crucial, like medicine or law.

Many algorithmic systems rely on vast amounts of data, which can raise concerns about user privacy and data security, especially when algorithmic support is provided on a software-as-a-service basis by external companies. Without proper safeguards, sensitive information could be at risk of being accessed by unauthorized parties.

Also, over time the suggestions of algorithms might turn into de facto algorithmic cages for experts and professionals. After all, if one deviates from an algorithmic recommendation and this turns out badly, the potential penalty is higher than if one had simply followed the algorithm, even if the algorithm’s recommendation turns out to be wrong.

Example: Predictive policing

An area where the use of algorithms directly touches on democracy and the power of the state over its subjects is the criminal justice system. For example, algorithmic support systems are increasingly used by various police departments in the USA.20 Based on data on previous criminal activity, algorithms provide assessments of which areas are expected to feature heightened criminal activity so police can preemptively patrol them. Algorithmic assessments can also extend to individuals, such as assessments of whether selected people have a heightened risk of criminal behavior and therefore merit higher police attention.21

Algorithmic support systems promise police departments a way to allocate scarce resources more effectively to where they are most needed. This is clearly an important contribution. At the same time, there are important limits to data-driven and probabilistic approaches to policing that have given rise to strong critique.

This includes a shift, within systems relying on algorithmic support, toward the underlying probabilistic approaches and away from less quantifiable approaches that might nevertheless be better suited to the task at hand.

Just as importantly, by relying on data that, for reasons of historical discriminatory practices, over-represent specific demographic groups in crime statistics, algorithms guiding contemporary police activity risk reproducing these discriminatory practices. Over time these patterns might be reinforced through a vicious feedback loop of historical over-representation in data, heightened police attention akin to racial and social profiling, and subsequent over-representation in arrest reports and new data.

Overreliance on data-driven algorithmic support might thus contain a hidden trade-off between efficiency and fairness within the criminal justice system.22 Of course, this general critique can also be applied to other areas relying on algorithmic support systems.

4.3.2 Algorithmic support for non-experts

Algorithms can also provide support for people in their daily lives. These algorithmic support systems have great reach and are already present for many different types of uses and practices.23 Very prominently, this includes algorithmic recommendation systems on digital services, such as news feeds on social media sites, video and music streaming, or online shopping sites. In navigation and travel, algorithms provide suggestions for routes and travel itineraries. In more specialized contexts, algorithmic support systems provide advice on personal life decisions in dedicated services or apps, such as food and diet suggestions, workout plans, or dating.

These algorithms help people navigate choice-rich and fragmented information environments and option spaces. Algorithmic recommendation systems can make overwhelming and confusing information environments manageable by suggesting information. They can also help people make better choices regarding health, nutrition, or personal finance by shaping option spaces on the basis of data-driven analysis and prediction. At the same time, the growing reliance on algorithmic support systems in ever more areas of public and personal life can turn into a problem for politics and society if these systems are implemented and handled without reflection, evaluation, and critique.

For algorithmic support for non-experts, similar concerns exist as for algorithmic support directed at experts. But beyond bias, transparency, de-skilling and overreliance, and privacy concerns, there are also other risks to consider. This includes a homogenization of choices. If everyone relies on the same set of algorithms for recommendations, diversity in consumption patterns will necessarily be reduced. This in turn will lead to a homogenization of tastes and culture. For instance, if everyone watches the same suggested movies or reads the same recommended books, it could limit cultural diversity and novelty.24

This algorithmically driven cultural homogenization could lead to different outcomes. For one, it is possible that international consumption and production patterns would shift toward generalized US cultural products, increasing the already felt cultural hegemony of the USA. This would also contribute to a spread of US cultural and political concerns to other countries, irrespective of the specifics of local contexts and cultures. Alternatively, algorithmic recommendations might shift patterns toward a more generalized international taste, owing to the large audiences for cultural products in Asia. Market forces would thus shift cultural production and consumption not toward US-based patterns but toward those of the largest group of international consumers. Similar dynamics have already emerged with American media and sports companies reaching out to Asian markets and in turn adjusting products and public statements to the sensibilities, concerns, and interests of Asian audiences and governments.25

Additionally, greater reliance on algorithms for content discovery and distribution puts companies providing these algorithms into a powerful position opposite creators who come to depend on producing content recognized and distributed by algorithms. When algorithms determining the visibility and discoverability of content shift, so can the relative prominence of those who create content for algorithmically shaped information systems.26

A recent case illustrating this dependency is Instagram’s shift from an algorithm predominantly relying on signals from the social graph in content recommendation decisions to one predominantly relying on signals within content items, which led to strong pushback by Instagram influencers with large followings.27

Example: Algorithmic recommendation

Digital information environments are often shaped by algorithmic recommendation systems. Based on predefined or automatically identified rules, users are shown information that is likely relevant to them or that they are likely to engage with. On a social networking site like Facebook, this could be a post by a close friend or family member; on an entertainment app like TikTok, this could be a funny video of a dancing hamster; on a microblogging service like X, this could be a news item on current events.

Rules determining recommendations vary and can shift over time. For example, generally speaking, there are recommendation algorithms that prominently show people content that others they are connected with have posted or interacted with. This would be a logic based on users’ social graph or, more broadly, their network.

Alternatively, machine learning algorithms can identify signals within the content itself that, at any given time or for any given subgroup of users, indicate likely engagement. Based on these automatically identified patterns within content, algorithmic recommender systems decide which information to display to whom. Other logics for rules, or mixtures of logics, can also apply.28
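The following sketch contrasts these two logics in deliberately simplified form. All posts, connections, weights, and scores are invented; real recommender systems combine far more signals, but the basic idea of ranking candidate content by a score remains the same.

```python
# Illustrative sketch contrasting the two ranking logics described above.
# Posts, connections, and scores are invented.
candidate_posts = [
    {"id": 1, "author": "close_friend",  "predicted_engagement": 0.20},
    {"id": 2, "author": "stranger",      "predicted_engagement": 0.90},
    {"id": 3, "author": "family_member", "predicted_engagement": 0.40},
]
user_connections = {"close_friend", "family_member"}  # the user's social graph

def graph_score(post):
    """Social-graph logic: boost posts from accounts the user is connected to."""
    return 1.0 if post["author"] in user_connections else 0.0

def content_score(post):
    """Content logic: rank by a model's predicted likelihood of engagement."""
    return post["predicted_engagement"]

def blended_score(post, graph_weight=0.5):
    """Mixture of logics: the weight decides which signal dominates the feed."""
    return graph_weight * graph_score(post) + (1 - graph_weight) * content_score(post)

for scorer in (graph_score, content_score, blended_score):
    ranking = sorted(candidate_posts, key=scorer, reverse=True)
    print(scorer.__name__, [post["id"] for post in ranking])
# graph_score [1, 3, 2] -- content_score [2, 3, 1] -- blended_score [3, 1, 2]
```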

In most digital information environments, some sort of algorithmic recommendation is necessary for people to navigate these information- and choice-rich environments. At the same time, the companies running these services use algorithmic recommendation systems in support of their commercial interests. By keeping people engaged with the service, companies are in a position to show them more ads, making them a more valuable partner for advertisers trying to reach people. These uses stand in obvious tension, especially when algorithmic recommendations shape people’s political and news exposure. For example, content of high informational quality might be beneficial to people but not necessarily lead to interactions. Accordingly, it might not be recommended by algorithms. In contrast, content creating controversy or arousing strong emotions might create interactions and therefore be recommended algorithmically, but over time it might lead to a deterioration of the information environment. In short, purely commercial display and distribution logics are likely to be in conflict with logics focused on the quality of information or democracy.

Algorithmic support systems clearly hold great potential for experts in increasing their capabilities, information processing ability, and their efficiency. This has obvious consequences for the (rhetorical) power of experts and expert advice in democracies. Algorithmic advice systems aimed at the broad public can also be very helpful. They provide crucial assistance in choice-rich information environments and option spaces that without algorithmic support might be prohibitively difficult to navigate due to fragmentation or information overload.

While helpful, there clearly are also risks associated with these systems. These include risks for the people subject to algorithmic suggestions, risks concerning the correctness of the suggestions provided, risks for fields and systems coming to rely predominantly on algorithmic advice systems, and risks arising from an increase in the power of the companies providing algorithmic support systems in ever more political and social fields. Accordingly, the analysis of algorithmic support systems must always include their mechanics, uses, results, effects, and structural embeddedness.

4.4 Algorithmic action

Algorithms need not stop at providing insight or advice; some can also take automated action. These algorithms are designed to process data, make decisions, and execute actions without requiring human intervention at every step. Examples come from different areas.

Algorithms can allow machines to move and act. This provides the basis for self-driving cars, where algorithms evaluate data from multiple sensors, such as cameras, LIDAR, and radar, to navigate, decide when to accelerate, brake, or turn, and react to unforeseen situations.29 The same goes for their use in agricultural machinery, such as automated tractors, where algorithms decide when and where to plant, water, or harvest crops.30 Another, more controversial example is their use in military and civilian drones, where algorithms allow drones to fly specific routes, adjust to environmental conditions, and even decide on targets.31

Algorithms are also used to autonomously shape and act in digital information environments or markets. We already encountered algorithmic recommendation systems. These recommendations can provide choices for users. Alternatively, recommended content can be displayed automatically, for example through autoplay on Spotify, YouTube, or TikTok. In these cases algorithms autonomously decide what content to display next and leave the user only the option of canceling the playback instead of actively choosing between suggestions, thereby reducing user agency. Additionally, algorithms are used to monitor and moderate digital communication environments by autonomously detecting and sometimes removing or hiding content that violates content policies.32

Algorithms can also analyze and act in markets. A prominent example is online advertising, where algorithms decide which ads to show to which users, and when. These display decisions are based on user profiles and user behavior on the one side and ad auctions on the other, determining a demand-driven price for advertisers to reach users with specific profiles.33 Another prominent example comes from the use of algorithms for high-frequency trading (HFT), where algorithms can decide to buy or sell stocks in fractions of a second based on real-time market data.34
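To illustrate the auction element, here is a deliberately simple sketch of how an advertising slot might be allocated. The second-price rule used here is one common auction design, chosen purely for illustration, and the bid values are invented.

```python
# Illustrative sketch of an ad auction. The second-price rule is an assumption
# made for illustration; the advertisers and bids are invented.
bids = {
    "shoe_brand":   2.40,   # what each advertiser is willing to pay to reach
    "car_dealer":   1.10,   # this particular user at this particular moment
    "party_appeal": 1.80,
}

def run_auction(bids):
    """The highest bidder wins the ad slot but pays the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = run_auction(bids)
print(f"{winner} wins the slot and pays {price:.2f}")  # shoe_brand wins, pays 1.80
```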

Automated algorithms are also used to manage large interconnected systems. This includes smart grids, where algorithms are used to monitor and balance energy consumption, dynamically adjusting energy distribution based on demand. At the level of the individual household, this translates into smart thermostats, where algorithms help devices learn user preferences over time and adjust heating or cooling settings autonomously to optimize for comfort and energy efficiency, and into smart homes more generally, where algorithmically enabled systems can decide when to turn on lights, lock doors, or activate security systems based on user behavior and external factors.35

For these and other cases like them, algorithms promise to increase the efficiency of systems, manage complex interdependencies between subsystems and devices, allow the management and monitoring of large-scale systems that are beyond the control of individuals, and automate tedious or dangerous work. These are powerful promises that can increase the productivity of people using algorithmically enabled devices. The automated, data-driven management of large-scale systems promises greater coordination as well as less waste.

Clearly, there are many opportunities for algorithmic action. At the same time, the fully autonomous action of algorithms raises concerns. Some are the same concerns that we have encountered with algorithmic insight or support. But we also have additional concerns that are connected to the question of scale, responsibility, and interrogability.

Algorithms can act at vast scale and speed, far beyond human capabilities. This brings opportunities, as with the automated management of smart grids, connected internet-of-things devices, or smart cities. It promises efficiency gains in resource expenditure, for example energy, and in the timeliness of decisions. On the other hand, scale and speed mean that wrong decisions, or decisions with unintended consequences, can be executed automatically at a scale where errors are difficult to detect, monitor, or control in time. Algorithmic damages might thus accrue without the opportunity for timely control or rollback.

There is also the question of responsibility for automated algorithmic action. If an algorithm makes a mistake, who is responsible? The company providing the algorithm? The company providing the data to train the algorithm? The company implementing the algorithm in its product? Or is it the user? The most obvious example of this responsibility conundrum is self-driving cars, where the question of liability in the case of algorithmically caused accidents is unclear.

Finally, there remains the question of interrogability. Scale and speed of action, as well as the integration of algorithms into machine-human assemblages, make it difficult in cases of error to assess how and why decisions were made and to what degree the error lies with the underlying algorithm or with other elements of the assemblage.

Example: Algorithmic trading

Finance is an example of an industry heavily shaped by algorithms.36 In fact, hopes for the replacement of humans and the automation of financial markets go back at least to the early 1970s.37 Today algorithms support trading in various areas of financial markets. Most prominent here is one type of algorithmic trading, high-frequency trading (HFT). In his review of the economic literature on algorithmic trading, Albert J. Menkveld defines algorithmic traders as:

“(…) all traders who use computers to automatically make trade decisions. An example (…) is one who aims to minimize the price impact of a large order that has to be executed. Such an order is typically executed through a series of smaller child orders that are sent to the market sequentially.”

Menkveld (2016), p. 8.

High-frequency traders are a particular subgroup of algorithmic traders. HFTs run on “extremely fast computers running algorithms coded by traders who trade for their own account.”38 This category of algorithmic trader has received much attention and press. Back to Menkveld:

“(…) the key distinguishing feature of HFTs is their relative speed advantage for trading a particular security. One reason for such an advantage is information technology that enables them to quickly generate signal from the massive amount of (public) information that reaches the market every (milli)second. Examples are press releases by companies, governments, and central banks; analyst reports; and trade information, including order-book updates not only for the security of interest but also for correlated securities.”

Menkveld (2016), p. 4.

The case of algorithmic trading is interesting because here different concerns meet. For one, algorithmic trading is often associated with public fears about the loss of control over algorithms and markets or about unintended consequences of uncontrolled algorithms running wild. If trading algorithms act at a scale and speed beyond human control or intervention, markets can crash, firms can be damaged, and private fortunes big and small can be lost. Algorithmic crashes are not merely academic thought experiments. There are many examples of algorithmic crashes large and small, some of them very dramatic. Often, forensics after such events show that algorithms acted too fast, had unknown or overlooked errors, and contributed to effects that were difficult to identify or disentangle even after the event. At the same time, algorithmic trading can also serve investors considerably by removing friction from trading.39 Algorithmic trading thus shows both the opportunities and the risks of automated algorithmic action.

Algorithmic action thus clearly holds vast potential for a more efficient management of systems and markets. At the same time, the more systems rely on algorithms, the greater the dangers of runaway mistakes that might be consequential but difficult to identify and roll back.

4.5 Risks and fears

As we have seen, computer algorithms are used in ever more societal areas. This raises broad concerns. While in principle algorithms provide a set of clearly defined steps to solve a given problem, their current uses have raised the question of whether this is still the case or whether these uses hide the steps contributing to a decision, making it difficult to understand or contest. We have already briefly encountered some of these concerns, but some merit deeper discussion. In the following sections, we will focus on concerns about fairness, about trapping people in algorithmically constructed bubbles and loops, about the alignment problem, and about the opaqueness of algorithmic decision making and its consequences.

4.5.1 Fairness

Once algorithms start shaping people’s option spaces, the question of fairness emerges. Algorithms make, or at least support, decisions about people spanning various areas of their lives: they assign people’s credit ratings, evaluate their job applications, assess the likelihood of them engaging in criminal activity, or administer welfare benefits. These algorithmic assessments and decisions matter for the choices people have and the way they are treated by institutions of authority. This makes it important that algorithms treat people fairly.

Definition: Fairness

A decision can be called fair if people who resemble each other with regard to the decision task at hand are consistently treated the same. If people with specific characteristics not relevant to the task are consistently treated differently from those they otherwise resemble, the decision can be called unfair or biased.

Take credit scores, for example. If people with a steady job and high income consistently get good credit scores, the underlying process can be called fair. A steady job and a high income are clearly directly relevant to the task of assessing whether people will be able to repay a loan. But if people with a steady job and high income who happen to be women get a lower credit rating, the decision process would be unfair. Clearly, gender should not be a variable directly relevant to the ability of a debtor to repay a loan.

Generally, different outcomes are not necessarily a sign of an unfair process. But decision making becomes unfair once differences emerge along characteristics not directly relevant to the decision task at hand.40
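What such a check could look like in practice is sketched below for the credit example. The records are invented, and comparing approval rates across a characteristic that should be irrelevant (here gender) among otherwise similar applicants is only one of several possible fairness metrics.

```python
# Illustrative sketch of a simple fairness check for the credit example.
# Records are invented; the metric compares approval rates across gender
# among applicants who are otherwise similar on the relevant characteristics.
decisions = [
    # (gender, steady_job, high_income, approved)
    ("woman", True, True, False),
    ("woman", True, True, True),
    ("woman", True, True, True),
    ("man",   True, True, True),
    ("man",   True, True, True),
    ("man",   True, True, True),
]

def approval_rate(records, group):
    """Share of approvals among applicants of a group with a steady job and high income."""
    relevant = [r for r in records if r[0] == group and r[1] and r[2]]
    return sum(1 for r in relevant if r[3]) / len(relevant)

for group in ("woman", "man"):
    print(group, round(approval_rate(decisions, group), 2))
# woman 0.67, man 1.0: a gap along a characteristic not relevant to the task,
# which indicates an unfair decision process.
```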

As we have seen before, algorithmic decision support for experts promises to improve on some of the limitations of human decision making. With regard to fairness, psychological biases and cognitive heuristics stand out as potentially skewing human decision making. Both might render decisions unfair by relying on factors not directly relevant to the task. Recruiters might look at the school applicants graduated from as a heuristic for how to interpret their transcripts and infer the likelihood of them succeeding in a firm. Algorithms can take more information into account and produce replicable predictions of the likelihood of candidates succeeding in a firm. These can be rules-based algorithms that consider many different factors identified beforehand as relevant, or machine learning algorithms that, in data-rich contexts, can identify signals predicting success that modelers were not aware of beforehand.

Algorithms can support decision makers in making decisions fairer across various domains, either by applying complicated models consistently across contexts or by identifying new rules and models based on data. They also allow decision makers access to large amounts of evidence by incorporating it in models and their output. Additionally, they allow for the systematic auditing of the decision-making processes they are modeling, which potentially makes it easier to identify hidden biases and address them. But algorithmic decision support also carries specific risks to fairness that need to be accounted for.

The prominence of algorithmic decision making and decision support has led many academics, commentators, and practitioners to pay special attention to the question of algorithmic fairness.41 Are algorithms contributing to fairer and more explicable decisions, or do they continue, or even worsen, discriminatory practices? Are algorithms treating all people alike, or are they treating people with specific characteristics, behaviors, or group affiliations worse than others? In the examination of algorithmic fairness, two important dimensions emerge. One potential driver of algorithmic unfairness stems from the configuration of institutional or organizational uses of algorithmic decision systems and the associated policy goals. The other stems from the data algorithms use to build models and base their assessments on.42

On a foundational level, the fairness of algorithmic decision support systems depends on the structural configuration of their use and the goals of the organizations and institutions using them. Importantly, organizations can use algorithmic decision support to enable them to form better decisions. What better decisions means in this context is of course open to interpretation. Some will try to pursue the best possible decision for a given task and very actively consider the consequences for and welfare of people subject to algorithmic decisions. Others will interpret better simply as more efficient, cheaper, or better for them, without necessarily considering broader implications. For those in the first category, algorithms that produce unfair results are an issue to address, monitor, and solve. For those in the second category, unfair outcomes do not matter much as long as the algorithm achieves its primary goals for the organization. Unfairness resulting from these uses of algorithms would thus not primarily be a technological problem with a technological fix but a result of the organizational goals of algorithm use.43

This is of course not to say that unfairness cannot result from the use of algorithms by organizations genuinely trying to achieve fair outcomes. But in these cases, organizations can set up dedicated auditing units and processes and provide transparency about uses and outcomes for outsiders. In the best case, this could render unfair results temporary and subject to improvement.

Unfairness can also result from data and modeling choices. Importantly, patterns of past discrimination can manifest in the data algorithms are trained on.44 For example, if police officers routinely stop Black motorists and pedestrians with greater likelihood than White ones, they are more likely to pick up on otherwise undetected offenses by Blacks than by Whites, for example carrying illegal drugs. This does not necessarily mean that Whites are less likely to carry illegal drugs; they are simply less likely to be caught during routine stops. But over time, police records will contain a greater number of cases of Blacks who carried illegal drugs than Whites. An algorithm predicting the likelihood of a person carrying drugs could thus easily use race as a predictor. Policing decisions based on the output of such an algorithm would thus continue and, over time, reinforce a police department’s history of discriminatory practices.

This is just one intuitive example, but data can contain other, more hidden forms of bias or discrimination as well. This includes differences between gender roles and job prospects in societies expressed in text corpora,45 racial discrimination in medical data,46 or discriminatory patterns in grading.47 The identification of bias within datasets and the avoidance of biases in algorithmic assessments are very rich and promising areas of computer science and interdisciplinary research.48

The question of fairness within algorithmic decision making is a core question for both computer science and the applied fields developing and evaluating algorithmic support systems in various contexts. As shown, these questions include the specific setup and constellation of algorithmic decision making and support in organizational and institutional contexts as well as technical questions associated with coverage and biases within datasets and specific modeling choices. This is an important area that is bound to grow in prominence with the growing use of algorithms in society and more pervasive awareness of them and their consequences.

4.5.2 Bubbles and loops

There are widely perceived risks associated with algorithmic recommendation systems for the public. Some fears focus on the risks of society fracturing into silos of shared tastes, interests, and partisanship. These fears react to suspected mechanisms behind algorithmic recommendation systems. Fears of fragmentation attach themselves to algorithms providing people with information and cultural products in accordance with their prior beliefs and interests.

One of the most pronounced public fears associated with algorithmic recommendation systems is the fear of filter bubbles. The reasoning behind the filter bubble is very intuitive. Digital platforms like Facebook, Instagram, TikTok, X, or YouTube all depend on algorithms to structure information environments for people. The companies running these platforms aim to increase the time users spend on them to achieve greater opportunities for ad display. They do so by shaping information people see according to their likelihood of interacting with it. This is where algorithms come in. Algorithms select content that has a greater calculated likelihood of people interacting with it than random content.

One potential mechanism to select content like this is to check which content people interacted with in the past and suggest similar content to them in the future. This is of course only one potential mechanism. Alternatively, the algorithm could suggest content that other people in a user’s social graph – say, their friend and follower network – interacted with. Or an algorithm could suggest content that a user’s digital twins – other users on a platform who share demographic and behavioral patterns without necessarily being connected or known to each other – interacted with.49

The exact mechanisms might vary, but the expected effects stay the same. By selecting content similar to that which people interacted with in the past, or content that others they resemble have interacted with, algorithms supposedly continue to show people information likely to correspond with their prior interests or beliefs. Over time, this would reduce people’s exposure on the platform to serendipitous information outside their revealed interests or to information contradicting their expressed beliefs. Algorithmic information filters would thus trap people in bubbles of their own interests and beliefs, without providing them with views from the outside.
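The suspected mechanism can be made explicit in a toy model. The following deliberately crude sketch assumes a user with a stronger prior interest in belief-congruent content and a recommender that re-weights the feed toward whatever generated more engagement; all numbers are invented, and the model describes the argument, not any real platform.

```python
# Deliberately crude toy model of the suspected narrowing mechanism.
# All numbers are invented; this illustrates the argument, not a real platform.
interest = {"belief_congruent": 0.7, "cross_cutting": 0.3}   # prior interests
exposure = {"belief_congruent": 0.5, "cross_cutting": 0.5}   # initial feed mix

for step in range(5):
    # Expected engagement per content type: share shown times interest in it.
    engagement = {k: exposure[k] * interest[k] for k in exposure}
    # The recommender re-weights the feed in proportion to observed engagement.
    total = sum(engagement.values())
    exposure = {k: v / total for k, v in engagement.items()}
    print(step, {k: round(v, 2) for k, v in exposure.items()})

# The share of belief-congruent content grows step by step (0.70, 0.84, 0.93, ...),
# while cross-cutting content fades from the feed.
```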

Especially with regard to politics, this mechanism was seen as a great social threat. In 2011, the political activist Eli Pariser expressed these fears in his book The filter bubble.50 By showing people only information with a political stance they already agree with, Facebook would trap people in bubbles of politically homogeneous information, leading people to lose sight of opposing political opinions and the reasons for them. Over time, Pariser argued, this would lead to political polarization and a breakdown of political empathy across partisan lines.

The filter bubble is probably one of the most widely known ideas about the supposed effect of digital media and algorithms on politics. Luckily, its empirical foundation is very thin. Almost since its publication in 2011, various empirical studies have shown that people tend to encounter cross-cutting political information, even in heavily algorithmically shaped information environments.51 Sometimes they even seem to encounter more cross-cutting information in digital communication environments than in personal exchanges.52 Still, comparative studies show that there is variation across digital services in the degree to which algorithms shape homogeneous or heterogeneous communication environments.53 Specific implementations of algorithmic recommendation systems, their change over time, and specific usage patterns of different platforms clearly matter for the kind of political information people encounter there. Accordingly, it might be a little early to declare an all-clear for algorithmically shaped information environments.

The available evidence indicates that algorithmically shaped information environments do not necessarily lead to people predominantly encountering politically homogeneous opinions in accordance with their prior held beliefs. Accordingly, Pariser’s suspected mechanism of how algorithms increase political polarization through filter bubbles does not hold. But this does not mean that algorithmic recommendation systems do not pose risks.

Importantly, most available evidence tends to look at general tendencies in the overall population. While it is true that on average people do not predominantly encounter political information in line with their beliefs, it might very well be that algorithms provide small, special-interest, or radicalized groups with enough congruent information to help them form and reinforce fringe, deviant, or radical beliefs and encourage them to action. In this scenario, the algorithm does not primarily isolate people in belief-bubbles; instead it functions more as a discovery and reinforcement device for fringe interests and beliefs.

Take the example of political radicalism. A user starts out by searching on a video sharing site for a fringe music group that happens to be openly or clandestinely aligned with right-wing extremism. The algorithm recognizes this interest and, after the first video ends, suggests other content by the same band or connected to the political movement associated with it. Step by step, the algorithm pulls the user further into a communication environment with content expressing increasingly radical ideas. The algorithm starts a reinforcing loop. It picks up on a specific fringe interest and then provides suggestions that take the user further down the rabbit hole, potentially leading to political conversion and radicalization.54 Different radical groups have been shown to exploit this mechanism for information dissemination and mobilization, including the far- and extreme-right as well as Islamist groups.55

Under specific conditions, this pattern can emerge even for the general public. In situations where very little information is available, algorithms can struggle to find content to recommend. This includes breaking news, specialized and coded terminology, or topics of fringe interest. Strategic actors can exploit these data voids and publish misleading or radicalizing content.56 Recommendation algorithms on search engines or digital platforms will then point people using related search terms to this content, simply because information from other, more balanced sources is not, or not yet, available. People starting out on their journey with content from these dubious sources can then be pulled algorithmically to other content from these sources or others like them.

To be sure, once alerted, digital platforms can react, either by deleting or shadow banning illegal content or by stopping its algorithmic recommendation. But this does not necessarily address the underlying mechanism. Importantly, it depends on the willingness of the platform to intervene and stop radicalizing loops. This might be the case for Western platforms interrupting feedback loops of illegal content, known foreign influence operations, or known militant or terrorist recruitment attempts. Radicalization at the frontier between fringe but accepted opinions and radical domestic beliefs is much harder for platform companies to identify and police. Also, in some cases companies might have no interest in interfering, or might even tip the scales and accelerate loops.

Here, TikTok’s role in shaping the information environment during the Israeli intervention in Gaza in reaction to the October 7, 2023 Hamas terror attack on Israel is instructive. While a full scientific study is still not available, journalists have commented on how TikTok gave a biased view of the conflict. While videos showing Palestinian suffering under the Israeli military intervention circulated widely on TikTok, videos documenting Israeli suffering during the preceding terrorist attack were all but invisible.57 Here, questions emerge as to the influence the Chinese state has over TikTok, an internationally influential information structure provided by a Chinese company, and whether the Chinese state directly or indirectly influences algorithmic recommendation and distribution to further its geopolitical interests by shaping public opinion abroad. Especially during unfolding geopolitical crises, these questions loom large.

The presence and impact of algorithmic feedback loops are harder to show empirically than filter bubbles. Such loops are difficult to identify in surveys, and they concern not the general population but small groups of people at risk of radicalization. Additionally, any empirical identification of specific loops depends on specific constellations of content providers and platforms and might be temporary.58 Research activity on this specific risk is therefore lower than on the search for elusive population-wide filter bubbles. But its impact should not be neglected in the discussion of potential harms of algorithmic recommendation systems shaping digital communication spaces.

4.5.3 Alignment

There are also considerable concerns about the alignment between the goals of the programmers of algorithmic systems and the goals pursued by the algorithms themselves. In his book The alignment problem, the science writer Brian Christian charts a specific challenge underlying any design of a rule-based system (Christian, 2020). How can designers ensure that the rules they develop and implement in a system, when followed to the letter, lead to the results they wanted to achieve:

“(…) we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete – lest we get, in some clever, horrible way, precisely what we asked for.”

Christian (2020), p. 12f.

This is what Christian calls the alignment problem:

“How to prevent such a catastrophic divergence – how to ensure that these models capture our norms and values, understand what we mean or intend, and, above all, do what we want – has emerged as one of the most central and most urgent scientific questions in the field of computer science.”

Christian (2020), p. 13.

The alignment problem reinforces some issues we have already encountered, such as the need for transparency regarding the uses, mechanisms, and effects of algorithms. But it also points us to a further, potentially more troubling, issue. What if we have a rule-based system that is transparent and, on the face of it, successful in achieving its goals, but that, unobserved by the programmers and the auditors, is misaligned with some unspoken but crucial values or goals of the people running it? Oversights like these can mean that algorithms seem successful in the pursuit of their goals but in that pursuit violate values or principles unknown to them. In artificial intelligence research, this problem is called value alignment.

The alignment question matters for any rule-based system, but it matters especially once algorithms are deployed at scale. One strength of algorithms is their ability to provide automated and standardized decisions at scale. Once implemented, it does not matter to an algorithmic decision system how many cases it handles. Algorithms diagnosing the likelihood of cancer based on medical imagery do not care whether they work on 100 or 10,000 cases. Similarly, algorithms recommending music tracks, news items, or movies for your early evening break do not care for how many people they provide recommendations. But the scale at which algorithms are deployed matters for auditors trying to identify outcomes that point toward an underlying alignment problem. Scale also matters for support staff fielding calls by people subject to these decisions trying to complain or to understand how their option spaces were shaped, and potentially restricted, by algorithms.

The question of scale also matters with regard to potential unintended consequences of the application of algorithms in complex socio-technical systems. Algorithms are a technological intervention in social systems. While sometimes this happens in clearly delineated areas with little chance of spill-over effects, running algorithms at scale in only weakly delineated fields brings serious risks of unintended effects, for example through spillover or through reinforcing feedback loops. Examples include algorithms designed to increase engagement in digital communication environments. Should the algorithmic recommendation of entertainment and political content follow the same principles? Or do we need different rules for different content types? What are the potential unexpected and unintended side effects of recommending political content following rules that work for content designed primarily for entertainment? And how do we identify them? Introducing technological interventions at scale in complex social systems can carry large unintended consequences and needs to be approached and monitored with caution.

4.5.4 Opaqueness

It might come as a surprise that a concept standing for the sequential working through of a list of predefined steps to solve a given problem has become a byword for opaqueness. But this is exactly what has happened to algorithms. Their uses in politics and society are opaque, be it by design or accident, and so are the specific rules they follow in their approach to solving tasks.

The legal scholar Frank Pasquale has found an evocative image for these concerns: the black box. In his book The Black Box Society, he discusses the opaqueness surrounding the uses of data and algorithms in assessing people’s reputation through their data shadows, shaping their option spaces by providing answers to their searches, and providing or denying them financial opportunities through financial service providers (Pasquale, 2015). Pasquale finds that various actors actively work on keeping their uses of algorithms and the algorithms’ inner workings secret. The most troubling of these is the denial of transparency, supposedly to protect trade secrets, by companies providing algorithmic decision-support systems employed to provide or deny people opportunities, be it in the provision of credit or in granting or denying them parole.

Beyond active attempts at keeping the uses of algorithms in society opaque for security, commercial, or legal reasons, opaqueness can also emerge as a byproduct of the use of machine learning algorithms. Here, algorithms are set to solve a problem, but they do so by identifying the most promising sequence of steps to the solution themselves. While clearly successful in many cases, uncertainty about just how the algorithm goes about solving the task creates opaqueness.59

Once algorithms shape people’s option spaces across different societal fields, such as information spaces, finance, or the legal system, meaningful transparency about their uses, inner workings, and effects becomes crucial. Meaningful transparency encompasses information about which actors use algorithms in which fields and to achieve which goals. This also means that these actors are explicit about the rules, the sequence of steps, they have programmed the algorithm to pursue. This goes double if they outsource these uses to vendors providing them with algorithmically supported systems or solutions. It is also important to know what the outcomes of the algorithms are. Do they conform with expectations or are they creating biased or unintended results? Finally, people subject to algorithmic shaping need places to appeal algorithmic decisions and sources where they can find out why the decisions went the way they did. Only through regular and transparent audits and ongoing critical debate can the sense of opaqueness created by the use of algorithms be countered and the potential of algorithms to improve society be realized.

This demands transparency about algorithms, the data they use, their workings, and the ways they are deployed and evaluated. Without such transparency, algorithms threaten to create an algorithmic cage whose circumference shifts for unknown reasons and by invisible forces. Without meaningful transparency, this threatens to become a digital version of the dark comedies and nightmares about unresponsive, fickle, and opaque but merciless structures charted by Franz Kafka in his novels The trial60 and The castle.61 Like the structures charted by Kafka, an opaque algorithmic cage would suddenly shut on its inhabitants without offering reasons or meaningful options for appeal. Clearly, this cannot be the template for the application of algorithms in politics and society.

4.6 What can be done?

These worries need to be taken seriously and addressed, not least because computationally automated decisions based on algorithms are now prevalent in various areas of society. Accordingly, covering the uses, effects, and regulation of algorithms is an important field of scientific activity going forward, especially since popular accounts might have little to do with the actual uses or effects of algorithms.

At the same time, worries should not distract us or blind us with fear. The use of algorithms in society is not optional. So these issues should point us toward improving algorithms and their use, not mislead us into entertaining the illusion of somehow avoiding their use. This includes conscious and informed approaches to the mechanism design underlying algorithms as well as algorithm auditing and forensics.

Mechanism design is an area in economics interested in the design of rules to achieve desired outcomes in strategic environments.62 This is crucial in algorithm development, where rules must be carefully crafted to align with societal objectives. For instance, when algorithms are used to distribute resources, such as in traffic flow optimization, mechanism design ensures that equity and efficiency are considered. The success of these algorithms is predicated on their ability to adapt to human needs and the nuances of social behavior. Mechanism design does not, of course, ensure this success, but at least it provides a systematic and contestable approach to developing the rules underlying algorithms.
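
To give an impression of what designing rules for strategic environments means in practice, consider the second-price auction, a canonical example in this literature (see, e.g., Roughgarden, 2016): because the winner pays the second-highest bid rather than their own, bidding one’s true valuation becomes the best strategy, so the rule itself produces the desired behavior. The sketch below, with hypothetical bidders and bids, is an illustration of this rule, not a recipe for any specific application.

# Minimal sketch of a classic mechanism-design rule: the second-price
# (Vickrey) auction. The highest bidder wins but pays the second-highest
# bid, which makes truthful bidding the best strategy for every bidder.
# Bidder names and bids are hypothetical.

def second_price_auction(bids):
    """Return the winning bidder and the price they pay."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

winner, price = second_price_auction({"ann": 120, "bob": 95, "cem": 80})
print(winner, price)  # ann 95: ann wins but pays bob's bid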

While mechanism design is one way to work toward better algorithms by designing and interrogating rules and processes, algorithmic auditing and forensics focus on the outcomes of algorithms. Auditing focuses on the continuous analysis of algorithm outputs, checking whether outcomes are fair, unbiased, and transparent.63 Algorithm forensics focuses on the analysis of an algorithm’s actual decision-making process in cases where audits find results that deviate from the goals of the algorithm’s deployment. Of course, algorithm auditing is not a one-time process but a continuous one, ensuring ongoing accountability and alignment with evolving societal values.
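
What checking outputs can look like in its simplest form is sketched below: comparing approval rates across groups in logged decisions, one of several fairness criteria discussed by Barocas et al. (2023). The records, groups, and the size of the gap that should trigger concern are hypothetical; a real audit would draw on richer data and multiple criteria.

# Minimal sketch of one audit step: comparing approval rates across groups
# in logged decisions (a demographic-parity check). Records and group
# labels are hypothetical.

from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {group: round(approvals[group] / totals[group], 2) for group in totals}
print(rates)  # {'A': 0.67, 'B': 0.33}: a gap worth examining in a forensic follow-up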

Evaluating algorithms is a difficult task. It requires not only technical assessments of performance and accuracy but also evaluation of societal impacts and adherence to ethical standards. Transparency is a key component of this evaluation, providing insight into the algorithm’s function and fostering trust. For instance, an algorithm used in loan approvals should be transparent enough that applicants can understand why they were or were not granted a loan. This clarity is essential for building trust between users and algorithmic systems.
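
The sketch below illustrates, under hypothetical rules and thresholds, what such applicant-facing transparency could look like in a simple rule-based system: the decision is returned together with the reasons that produced it.

# Minimal sketch of decision transparency in a rule-based loan check:
# the decision comes back together with the rules that triggered it.
# Fields, thresholds, and wording are hypothetical.

def decide_loan(applicant):
    reasons = []
    if applicant["credit_score"] < 600:
        reasons.append("credit score below 600")
    if applicant["debt_to_income"] > 0.4:
        reasons.append("debt-to-income ratio above 40%")
    approved = not reasons
    return approved, reasons if reasons else ["all criteria met"]

print(decide_loan({"credit_score": 580, "debt_to_income": 0.5}))
# (False, ['credit score below 600', 'debt-to-income ratio above 40%'])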

Example: Interrogating algorithms

Interrogating algorithms can take different approaches. But the following set of questions provides an impression of the potential scope and variety of these audits.

Let’s start with the basics:

  • “What problem is the algorithm intended to solve?” This foundational inquiry sets the stage for our audit by clarifying the algorithm’s purpose. It prompts us to consider if the problem was clearly defined and if an algorithmic solution is indeed appropriate.

  • “How was this problem solved previously?” This historical context is helpful, offering insights into the evolution from past methods to the current algorithmic approach, highlighting improvements or differences, and the limitations of previous solutions.

Next, we navigate the rule development process:

  • “Through what process were the rules developed?” Here, we scrutinize the governance of the algorithm. Were experts consulted? Were people subject to the algorithmic decisions and actions consulted? How were ethical considerations and potential biases addressed during this phase? The integrity of the algorithm is often rooted in the inclusivity and rigor of its development process.

Following this, the evaluation process, transparency, security, and regulatory compliance of an algorithmically enabled system need to be made explicit:

  • “How is the algorithm evaluated?” An algorithm must not only be built on solid ground but also must continually prove its worth. What metrics or criteria are established for its evaluation? Is there room for independent review, and does the algorithm demonstrate transparency in its decision-making process?

  • “Is the algorithm’s operation transparent?” We demand clear explanations of how the algorithm processes inputs to produce outputs. Transparency isn’t a luxury; it’s a necessity for accountability and trust.

  • “How does the algorithm protect personal data?” and “What safeguards are in place to maintain data integrity?” It’s not enough for an algorithm to be effective; it must also be a guardian of user privacy and data security.

  • “Does the algorithm adhere to pertinent regulations, and how does it remain current with legal standards?” An algorithm must follow rules and regulations within the digital realm and continue to evolve with the regulatory landscape.

An audit could now turn to the outcomes of an algorithmically enabled system. First, we focus on outcomes at the individual level, then at the societal level:

  • “How does the algorithm account for fairness, and what measures are in place to combat biases?” This leads us to consider whether diverse datasets were employed to ensure equitable performance across different demographics.

  • “What are the societal implications of deploying this algorithm?” From ethical dilemmas to potential job displacement, we must be prepared to confront and manage the ramifications of our technological advancements.

These questions are of course not exhaustive. But they can provide a first framework to illustrate the reach and the different perspectives algorithm audits can take. They also illustrate the demands that systematic and continuous audits of algorithmic systems put before us.

Current practices in various sectors illustrate the application of mechanism design, auditing, and transparency. For example, in healthcare, algorithms assist in diagnosis and treatment recommendations. Here, mechanism design is crucial to ensure that the algorithms prioritize patient outcomes and ethical considerations.64 But just as important as design is the audit of algorithms used in medicine.65 Auditing these systems is an interdisciplinary effort, involving not only medical professionals and computer scientists but also data scientists evaluating whether the recommendations are accurate and free of bias.

These examples illustrate different approaches to how the rules underlying algorithms can be consciously designed and their outcomes and workings examined. Of course, the ease of this endeavor varies. In dealing with algorithms deployed in clearly specified and limited contexts, where processes are well understood and outcome distributions lend themselves to easy comparison with ideal results, this process is comparatively straightforward. In more complex environments, with many interacting features and no clear benchmark for outcome distributions to be compared to, ensuring the correct and fair working of algorithms is much more difficult. Nevertheless, this section shows that algorithms and their results need not be a black box. Instead, algorithms and algorithmically enabled systems can be designed and evaluated. But society needs to choose to do so and demand that developers and deployers of algorithmically enabled systems make this possible. As algorithms and algorithmically enabled systems shape the fabric of society more extensively, our focus must remain on developing and refining mechanisms that ensure their benefits are maximized while their risks are minimized.

4.7 The promises and the risks of automation

Clearly, the combination of data and algorithms is very powerful. It promises not only new insights about the world, but also the ability to act on these insights and thus shape the world and the future. Algorithms learn from data by identifying regularities that they then use to predict the future. But algorithms also use data as inputs to initiate action. Data and algorithms are mutually dependent in realizing their potential. The quality of algorithms and their output depends on data and their quality. At the same time, the realization of the potential within data, which we discussed in Chapter 3, depends on algorithms.

Algorithms automate insight and action. This holds great potential. Societies encounter a continuously growing set of complex challenges with no clear solution, be it climate change, migration, international conflicts, or aging societies. These challenges are difficult to navigate. Algorithms, with their capability of synthesizing data and creating insight from them, can be of great use in tackling these challenges.

Individuals face ever-increasing choices in life, consumption, and information environments, while at the same time often facing tightening economic and temporal resources. Algorithmically shaped choice environments and advice can help people navigate otherwise potentially overwhelming options and make better choices in the face of growing constraints.

By automating tasks at work, algorithms can help workers be more productive and businesses automate processes. This is often discussed as a threat to workers and prosperity. But in times of severe labor shortages in Western democracies, this can also be an opportunity to ensure prosperity in the context of a rapidly aging population, either by substituting labor or by making people at work more productive.

Realizing these potentials is not certain, though. For one, it is unclear how far the combination of data and algorithms can extend. Does data-enabled insight hold for all walks of life or are there limits? Can automation extend beyond the merely digital, or the digitally mediated? These are questions we will reencounter when we discuss artificial intelligence in Chapter 7. In discussing algorithms, it is easy to get seduced by the potential when in fact there are very clear technical limits to their applications. In recent years, these limits have continuously been pushed back through technological innovation. Nevertheless, limits persist and need to be accounted for in any serious discussion.

More specifically, algorithmically enabled systems come with additional challenges, like ensuring fairness, avoiding bubbles and loops, providing alignment, or making what was opaque transparent. These are non-trivial challenges, as they often point to underlying problems within societies that have shaped either the data or the ways algorithmically enabled systems are developed and deployed. They do not void the potentials of algorithms for society, but these challenges have to be addressed for this potential to be realized broadly and not to benefit only the few at the cost of the many.

Making algorithms work for society is an interdisciplinary effort. Computer scientists, data scientists, social scientists, and practitioners from various fields must work hand in hand to develop algorithmically enabled systems and deploy them for different tasks in different contexts. Only through an open dialogue about opportunities, risks, and failures can systems be improved and, over time, trust in algorithmically enabled systems be built. The coming challenges are too big for us to voluntarily give up on the potentials algorithms hold. At the same time, inherent risks and limitations need to be accounted for and constructively addressed.

4.8 Further Reading

For a non-technical introduction to algorithms in computing see Louridas (2017).

For fairness in algorithmic decision making see Barocas et al. (2023).

For an account of how algorithmic decision making can increase inequality see Eubanks (2018).

For a popular account of the problems of aligning algorithms to the goals of their deployers and larger social norms see Christian (2020).

For an account of the dangers of opaque algorithms and automation in society see Pasquale (2015).

For an introduction to an economic perspective on how to design better algorithms see Roughgarden (2016).

4.9 Review questions

  1. Please provide a definition for the following terms:
  • algorithm following Knuth (1968/1997);
  • fairness in the context of algorithms;
  • bias;
  • alignment problem.
  2. Please discuss the ways in which bias can negatively impact how algorithms gain insight from data. What can be done to mitigate the associated risks?

  3. How can we assess the fairness of algorithmic support systems? How can we improve on this?

  4. Discuss the threat of the alignment problem for algorithms that automate action. How can we account for this problem?

  5. Assess the threat of filter bubbles for society on the basis of the available empirical evidence. Where could the threat lie? Where not?

  6. Please discuss how you would approach an algorithm audit for an algorithm recommending content to people on a video sharing site like YouTube. Sketch the relevant questions that need to be asked and discuss what data you would access or analyses you would run under ideal access conditions.


  1. For a historical account of different types of algorithms from Mesopotamia to computation see Chabert (1994/1999).↩︎

  2. For a broad discussion of algorithms in social life see Daston (2022).↩︎

  3. See Knuth (1968/1997), p. 4–6.↩︎

  4. For technical introductions see Cormen et al. (1990/2022), Kleinberg & Tardos (2005).↩︎

  5. For an introduction to algorithms focusing on their uses to solve tasks in the larger world see Louridas (2017).↩︎

  6. For K-Nearest Neighbors (KNN) see Hastie et al. (2001/2009), p. 459–484.↩︎

  7. For a general overview of machine learning see Alpaydin (2016/2021). For a technical introduction see Hastie et al. (2001/2009).↩︎

  8. For Random Forests see Hastie et al. (2001/2009), p. 587–604.↩︎

  9. For more on the modeling of high dimensional data see Wright & Ma (2022).↩︎

  10. For the use of data-enabled insights by campaigns see Nickerson & Rogers (2014).↩︎

  11. For the use of algorithm-enabled insights in the news see Christin (2020).↩︎

  12. See Kreiss (2012).↩︎

  13. See Kreiss (2016).↩︎

  14. For an overview of the principles and psychology of judgment and decision making see Baron (1988/2023). For a foundational text on biases in human decision making see Tversky & Kahneman (1974). For a critique of putting too much emphasis on psychological biases in explaining people’s decision making see Gigerenzer (2018). For cognitive heuristics see Gigerenzer & Gaissmaier (2011). For limits to expert forecasting see Tetlock (2005/2017).↩︎

  15. See Somashekhar et al. (2018).↩︎

  16. See Citron & Pasquale (2014).↩︎

  17. See Bi & Wang (2020).↩︎

  18. See Jumper et al. (2021).↩︎

  19. See Kotamarthi (2021).↩︎

  20. For a sociological account of predictive policing in practice see Brayne (2021).↩︎

  21. For an overview of predictive policing see Ferguson (2017).↩︎

  22. For a foundational critique of probabilistic methods in the criminal justice system see Harcourt (2006).↩︎

  23. See Narayanan (2023).↩︎

  24. See Frey (2021).↩︎

  25. See Schwartzel (2022), Zhu (2022).↩︎

  26. See Cotter (2019), Duff & Meisner (2023).↩︎

  27. See Mignano (2022).↩︎

  28. For different mechanisms behind algorithmic recommendation see Narayanan (2023).↩︎

  29. For algorithms in self-driving cars see Badue et al. (2021).↩︎

  30. For autonomous algorithms in agriculture see Bechar & Vigneault (2016).↩︎

  31. For the use of algorithms in autonomous drones see Floreano & Wood (2015).↩︎

  32. See Gorwa et al. (2020), Douek (2021).↩︎

  33. See Singh (2020), MacKenzie (2022), MacKenzie et al. (2023).↩︎

  34. See Menkveld (2016).↩︎

  35. For smart grids from a policy perspective see Brown & Zhou (2019). For smart homes see Alam et al. (2012). On energy savings in smart cities see Kim et al. (2021).↩︎

  36. For a review of the scientific literature on algorithmic trading in economics see Cardella et al. (2014). For high frequency trading see Menkveld (2016). For a sociological discussion of high frequency trading as an economic sub field see MacKenzie (2021). For an agenda-setting journalistic account see Lewis (2014).↩︎

  37. See Black (1971a); Black (1971b).↩︎

  38. See Menkveld (2016), p. 2.↩︎

  39. See Menkveld (2016), p. 19–20.↩︎

  40. See Chapter 4 in Barocas et al. (2023).↩︎

  41. For a textbook account see Barocas et al. (2023). For two early influential accounts raising the question of algorithmic fairness in society see O’Neil (2016); Eubanks (2018). For an early account of biases in computing see Friedman & Nissenbaum (1996).↩︎

  42. See Mitchell et al. (2021).↩︎

  43. See Mitchell et al. (2021).↩︎

  44. See Barocas & Selbst (2016).↩︎

  45. See Caliskan et al. (2017).↩︎

  46. See Obermeyer et al. (2019).↩︎

  47. See Hanna & Linden (2012) and Sprietsma (2013).↩︎

  48. For an overview of underlying issues and challenges see Barocas et al. (2023).↩︎

  49. For a discussion of various mechanisms behind social media recommendation algorithms see Narayanan (2023).↩︎

  50. See Pariser (2011).↩︎

  51. See Flaxman et al. (2016); Scharkow et al. (2020); Yang et al. (2020).↩︎

  52. See Gentzkow & Shapiro (2011).↩︎

  53. See Kitchens et al. (2020).↩︎

  54. See Kaiser & Rauchfleisch (2020).↩︎

  55. See Perry & DeDeo (2021).↩︎

  56. See Golebiewski & boyd (2019).↩︎

  57. See Harwell & Lorenz (2023).↩︎

  58. On the challenges of doing research on this question see Kaiser & Rauchfleisch (2019).↩︎

  59. For an account of how people adjust to opaque algorithms affecting their option space see Rahman (2021).↩︎

  60. Kafka (1925/1999).↩︎

  61. Kafka (1926/1998).↩︎

  62. See Roughgarden (2016).↩︎

  63. See Metaxa et al. (2021).↩︎

  64. For an instructive example of the challenges of identifying flaws in an algorithm in the field of organ donation see Murgia (2023).↩︎

  65. See Liu et al. (2022).↩︎