4.1. What kind of artificial intelligence are we talking about?

4.1.1. What is artificial intelligence?

These days, everybody seems to be talking about artificial intelligence—or AI for short. At the same time, expectations diverge widely:

Some say AI is the basis for future economic growth and a boon for human wellbeing. Already in 2002, the futurist, singularity booster, and later Google Director of Engineering Ray Kurzweil publicly bet on "AI […] to surpass native human intelligence" by 2029 [Kurzweil, 2002]. The business and management professors Erik Brynjolfsson and Andrew McAfee expect that

"In the sphere of business, AI is poised have a transformational impact, on the scale of earlier general-purpose technologies. […] The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning."

[Brynjolfsson and McAfee, 2017]

Others see AI as a potentially existential threat to human life as we know it. For example, Tesla founder and serial entrepreneur Elon Musk speculates about AI being "our biggest existential threat" and likens it to "summoning the demon" [McFarland, 2014]. The physicist and public intellectual Stephen Hawking apparently shared these fears when he said: "we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it" [Kharpal, 2017].

Bridging the gap between these two poles, Microsoft co-founder Bill Gates publicly speaks of AI as "both promising and dangerous," likening it to nuclear weapons and nuclear energy [Scott and Yin, 2019]. Of course, exactly how much of a relief this comparison provides probably varies among listeners.

What is it about AI that leads to these strong and widely diverging opinions? To answer this question, let's start by figuring out what the term artificial intelligence means. In his history of the research field, Nils J. Nilsson defines artificial intelligence as

"[…] that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment."

[Nilsson, 2010] p. xiii

This definition provides a broad tent for scientific, engineering, and commercial endeavors affiliated with making intelligent machines or learning about intelligence through machines. The sustained scientific quest for artificial intelligence began roughly in the nineteen-fifties, bringing together psychologists, engineers, and computer scientists. These efforts started as attempts to better understand human intelligence by recreating it in machines, while at the same time getting machines to perform tasks traditionally assigned to humans, such as playing board games. In the early days of the field, from the nineteen-fifties to the nineteen-nineties, scientific and commercial efforts in artificial intelligence were dominated by knowledge-based approaches that tried to teach machines about the world, be it the meaning of words, grammar, or expert knowledge in specific subfields. Approaches like this started out highly popular among scientists and funders but lost steam when the promised results failed to materialize. The associated frustrations and the decline in scientific interest and funding opportunities have become known as AI winter. But each winter ends with the onset of spring.

The first signs of the oncoming AI spring appeared in the nineteen-nineties. Increased computing power and the ever-growing availability of large data sets documenting ever more aspects of human life and society in the context of the digital transformation led to growing interest in the uses of neural network models. Neural networks are a family of computational models inspired by the workings of the human brain. In a very simplified account, the brain consists of networks of interconnected neurons. Each neuron receives stimuli. Once these incoming stimuli pass a certain threshold, a neuron sends out an electrical signal that itself serves as an input to other connected neurons.

This model of the brain concentrates on its information-processing characteristics: interconnected neurons accept and process information and, through their interconnections, achieve stunning feats of translating information into perception, knowledge, or action. Artificial neural networks follow the same logic in their architecture and functioning. Artificial neurons accept numerical inputs and translate them into a single output variable. These artificial neurons are arranged in networks with multiple layers. A first input layer accepts unprocessed signals and passes them on to a series of so-called hidden layers. Each hidden layer accepts the outputs of the layer above it, processes them, and transmits the results to the next layer, until a final output layer is reached that provides the result of the model. This structure allows neural networks to "identify [...] and extract [...] patterns from large datasets that accurately map from sets of complex inputs to good decision outcomes" [Kelleher, 2019], p. 5. Learning with networks that stack many such hidden layers is called deep learning and enables machines to make data-driven decisions.
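The layered architecture described here can be sketched in a few lines of plain Python. This is a toy illustration only, not any particular system: the network shape, the placeholder weights, and the sigmoid activation are arbitrary choices for demonstration, and it shows only the forward pass (in practice, the weights would be learned from data during training).

```python
import math

def sigmoid(x):
    # Smooth threshold: small inputs yield values near 0, large inputs
    # values near 1, mimicking a neuron "firing" once stimuli pass a threshold.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each artificial neuron sums its weighted inputs, adds a bias,
    # and passes the result through the activation function.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny network: 2 inputs -> 3 hidden neurons -> 1 output neuron.
# These weights are arbitrary placeholders for illustration.
hidden_weights = [[0.5, -0.6], [0.1, 0.9], [-0.4, 0.3]]
hidden_biases = [0.0, -0.1, 0.2]
output_weights = [[0.7, -0.2, 0.5]]
output_biases = [0.1]

signal = [1.0, 0.5]                                    # input layer: raw signals
hidden = layer(signal, hidden_weights, hidden_biases)  # hidden layer
result = layer(hidden, output_weights, output_biases)  # output layer
print(result)
```

Each value flows from the input layer through the hidden layer to the single output, exactly the pass-it-forward structure described above; deep learning simply stacks many such hidden layers.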

Deep learning has made machines highly efficient at automated pattern recognition and decision making without relying on knowledge or theory. This purely data-driven approach to artificial intelligence has produced spectacular results, allowing machines to beat humans at board games—such as Chess or Go—, to automatically translate text from one language into another—as with Google Translate—, to recognize spoken language—as with automated voice assistants like Alexa or Siri—, and to recognize objects in images—a crucial task for self-driving cars. In fact, deep learning and its commercial applications have been so successful that deep learning has nearly grown synonymous with artificial intelligence, thereby reducing the field's rich and varied heritage to a limited set of data-driven models and approaches. It is no surprise, then, to find voices growing in volume that ask: Are purely data-driven approaches in artificial intelligence really intelligent in any meaningful sense of the word?

4.1.2. Narrow artificial intelligence versus artificial general intelligence

Let's go back to Nilsson's definition of AI:

"Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment."

[Nilsson, 2010] p. xiii

As discussed, this definition is very useful since it covers different aspects and goals of artificial intelligence research and development. At the same time, the definition also points to an inherent tension within the field: What counts as "function[ing] appropriately", what counts as "foresight", and—perhaps most controversially—what is the minimum requirement to meaningfully speak of "function […] in its environment"?

Nilsson consciously takes a liberal approach to these questions:

"According to that definition, lots of things—humans, animals, and some machines—are intelligent. Machines, such as "smart cameras," and many animals are at the primitive end of the extended continuum along which entities with various degrees of intelligence are arrayed. At the other end are humans, who are able to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories."

[Nilsson, 2010] p. xiii

This approach works well for charting the development of the field of artificial intelligence and covering its varied points of origin and developmental paths. But for understanding the larger societal effects of artificial intelligence, it is limiting if not misleading. Such a broad account of what it means to be "intelligent" allows commentators to conflate highly specific feats of data-driven pattern recognition and decision making with more general and substantively different expressions of intelligence—such as creativity, embodied knowledge, or abduction. Impressive but highly specific feats of data-driven prediction and decision making—such as winning against humans in board games like Chess or Go—are used to infer the subsequent replacement of human decision making in other contexts. This is a category error: it conflates undeniable and impressive progress of machines in one narrow category of intelligence—pattern recognition and decision making based on the automated analysis of large data sets—with a broader, general intelligence. This surprisingly widespread fallacy gives rise to what Erik J. Larson calls the "Myth of Artificial Intelligence":

"The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time—that we have already embarked on the path that will lead to human-level AI, and then superintelligence."

[Larson, 2021] p. 1

This myth is dangerous since it ignores the crucial differences between the type of intelligence found in current AI-empowered systems and a human level general intelligence:

"The myth of AI insists that the differences are only temporary, and that more powerful systems will eventually erase them. […] the myth assumes that we need only keep 'chipping away' at the challenge of general intelligence by making progress on narrow feats of intelligence, like playing games or recognizing images. This is a profound mistake: success on narrow applications gets us not one step closer to general intelligence. […] As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking low-hanging fruit. The jump to general 'common sense' is completely different, and there's no known path from the one to the other."

[Larson, 2021] p. 1-2

This error of using evidence of successful applications of artificial intelligence to specific, narrowly defined tasks to project the imminent emergence of a general artificial intelligence that will replace human decision making in all walks of life matters especially in the discussion of the expected societal effects of artificial intelligence. Jumping from

"Wow, Alexa automatically ordered fresh yoghurt because the fridge told her my supplies were running low!"


"A Skynet-level artificial intelligence is about to emerge and threatens to end life on earth!"

is a category error and logical fallacy. This does not mean that artificial intelligence has not made great strides and in turn made philosophers and cognitive scientists rethink crucial tenets of what exactly constitutes intelligence. But by conflating the two different categories of intelligence, we run the risk of misdiagnosing the nature, impact, and trajectory of artificial intelligence.

One way to account for these differences is to differentiate between narrow artificial intelligence and artificial general intelligence (AGI). Narrow AI refers to systems developed for a specific, singular, or limited task. An example is AlphaGo, an artificial intelligence highly adept at playing the board game Go, and only the board game Go. While AlphaGo is staggeringly successful at beating even the most adept human players at Go, it would be useless at playing Checkers. Of course, one could develop a program dedicated to dominating the world of competitive Checkers using the architecture and logic behind AlphaGo, but this would be a new artificial intelligence dedicated only to this one task. In contrast, an artificial general intelligence would be a machine with the same cognitive and intellectual capacities as a typical human. For example, it would be able to hold a conversation in natural language, solve problems in different areas, perceive the world and its position in it, and reason about it.

While the public imagination focuses on AGI, scientific research and commercial applications nearly exclusively focus on narrow AI. Computer scientists might chafe at the term narrow AI, since it describes what most people and programs in the field of artificial intelligence actually do. In the words of Michael Wooldridge:

"We don't refer to what we do as narrow AI—because narrow AI is AI."

[Wooldridge, 2020] p. 42.

Still, both terms help us keep in mind the difference between what we imagine AI can do and what it actually does. This difference features increasingly strongly in the public discussion of artificial intelligence and its applications in society. For example, with Rebooting AI the psychologist Gary Marcus and the computer scientist Ernest Davis have offered a highly influential critique for the general reader of the uses of narrow AI for tasks that demand understanding and reasoning. While critical of the current, purely data-driven state of artificial intelligence practice, they remain optimistic about the possibility of developing AI systems that incorporate understanding and that can be critically interrogated and evaluated regarding the foundations of their decisions and their outcomes.

The computer scientist and philosopher Brian Cantwell Smith is less optimistic about the eventual emergence of an AI with true understanding. In The Promise of Artificial Intelligence he emphasizes the difference between two intelligence tasks—reckoning and judgment. With reckoning, he refers to the calculation tasks that current instances of AI are highly successful at, like those discussed above:

"[...] the representation manipulation and other forms of intentionally and semantically interpretable behavior carried out by systems that are not themselves capable [...] of understanding what it is that those representations are about—that are not themselves capable of holding the content of their representations to account, that do not authentically engage with the world's being the way in which their representations represent it as being. Reckoning [...] is a term for the calculative rationality of which present-day computers [...] are capable".

[Smith, 2019] p. 110.

In contrast, Smith uses the term judgment for the sort of understanding

"[...] that is capable of taking objects to be objects, that knows the difference between appearance and reality, that is existentially committed to its own existence and to the integrity of the world as world, that is beholden to objects and bound by them, that defers, and all the rest."

[Smith, 2019] p. 110.

The difference between these forms of intelligence is not just a matter of terminology or purely academic interest. By employing machines adept at reckoning for tasks that require judgment, societies risk having machines calculate and initiate decisions based on representations of the world rather than the world itself, and without any commitment to the consequences of these decisions. Conversely, focusing on the staggering successes of machines at reckoning tasks might over time devalue judgment, the tasks that machines are not successful at, thereby settling on a critically reduced account of what human intelligence is and should be about. Keeping the difference between reckoning and judgment in mind is therefore crucial in discussing what artificial intelligence can do and for what tasks it is employed.

Smith's book might be less accessible to the general reader and does not feature broadly in the public discussion about AI. Still, it offers a very nuanced discussion of the nature of intelligence, being in the world, and the uses of artificial intelligence. In this, it reinforces earlier efforts in which artificial intelligence served as an approach to better understand the workings of the brain and human intelligence.

For the discussion of the impact of artificial intelligence on democracy, it is therefore important to be precise about what kind of artificial intelligence we are talking about. While it is easy to extrapolate far-reaching impacts of an imagined artificial intelligence on aspects of democracy, this is probably not the best use of our time. Instead, we will focus on the impact of aspects of artificial intelligence already in evidence: the power of data-driven predictions of outcomes of interest and the impact of this on specific aspects of democracy.