5.3. Digital structures of the public arena

Digital media have not only challenged the position of traditional structures of the public arena. They have also led to the emergence of new structures hosting the contemporary public arena in digital communication environments.

This includes sites like Facebook, Instagram, TikTok, Twitter, and YouTube that allow people and competitors within the public arena to publish information and reach large audiences. But it also includes companies that provide the means for people to run their own sites contributing information and commentary to the public arena. Among them are services that offer comparatively cheap hosting of sites or apps, like Amazon Web Services, and services that provide opportunities to monetize information or services, like PayPal or Patreon. Services facilitating the hosting of digital ads also contribute to the digital extension of the public arena. Digital ads help monetize sites by allowing owners to host ads and get paid for impressions and clickthroughs. They also support the new structures by allowing owners to run ads themselves, creating easy access points to their sites and information on services like Facebook, Google, or Twitter, where their information might otherwise not have reached an interested public.

These new structures are important for the digitally extended public arena. But their characteristics deviate from those of the structures that formerly hosted the public arena, the news media, and they follow different rules. This raises challenges for developing normative goals and binding governance rules that ensure their contribution strengthens rather than weakens the public arena. In this section, we discuss some of the most pressing challenges raised by the digital extension of the public arena.

5.3.1. Responsibilities of digital structures for the public arena

Digital media have become important structures hosting the public arena. People use Google to search for news and information, they get news on their Facebook feeds and publicly comment on it, they post links to news items on Twitter and interact with others, or they post links to news items in messenger groups on WhatsApp or Telegram and discuss them with friends and family. In fact, news sites increasingly rely on social media and messenger services for people to find their content and visit their sites. The affordances, usage practices, and algorithms of digital media thereby increasingly shape the way people find, interact with, and share news. They are crucial channels for the flow of information through the public arena.

These sites might not have started out with the goal of hosting the public arena, but by now they certainly do. Accordingly, they have to accept the associated responsibility and allow regulators and the public to hold them accountable. But this is easier said than done. While we have settled on what to expect from news media as structures hosting the public arena, digital media deviate from news media in decisive features and therefore raise specific challenges that need to be addressed if we want to understand or regulate their roles in hosting the public arena.

Crucially, if we look at former structures of the public arena, they usually combined information production and distribution functions. News organizations combined editorial desks producing information with distribution units that transported information products to points of sale or sent them over the airwaves. Digital media are nearly always only information distributors. Google, Facebook, Twitter, YouTube, and WhatsApp do not produce information but merely point people to it, give them the option to point others to it, and allow them to comment on it. This matters in our discussion about the role of these structures in the public arena.

The institutional norms we discussed above for news media as structures of the public arena nearly always focused on information production, editing, and curating. The distribution function was taken as a given. But what are the rules for structures that do not produce information themselves but host it? Here, a set of thorny questions arises regarding information quality, the policing of behavior, rules for representation and balance, and the transparency with which information gets displayed and distributed. Let's have a quick look at each of these issues.

One foundational principle in the American regulatory framework for digital media is that companies are not directly liable for the content their users post on their services. They only become liable once they have been informed about illegal content, or content infringing the rights of others, and refuse to take it down. This was the foundation for platforms being able to grow quickly and host staggering amounts of content without having to exercise prior editorial or curational control. The flip side is that digital media host and provide access to large amounts of uncontrolled and unchecked content.

This of course negatively impacts the public arena. While in the past information was checked by journalists or editors before widespread circulation in the public arena, today unchecked or downright false information can travel widely through digital media before it is checked. Even debunking false information will not stop its circulation through digital media. This has given rise to widespread fears of disinformation running rampant on digital media. It is obvious that platforms hosting information vital to the pursuit of the public good need to address the challenge of information quality.

This being said, it is unclear what exactly this should look like. It is far from clear that platforms themselves are the best arbiters of truth, deciding which piece of information is correct and which misleading or false. It is also problematic to rely exclusively on content produced by established news organizations and to exclude information from previously unknown or unverified sources. Imagine living in an authoritarian country. How comfortable would you feel if digital platforms exclusively hosted content from official media organizations aligned with the regime, while excluding voices and sources critical of it?

Related to that question is the challenge of moderating and policing speech. It is clear that platforms have a duty to protect their users from harassment and discriminatory or hateful attacks by others. But in practice this has turned out to be difficult to implement, especially in the context of political speech. Not all political speech is civil or polite, especially when directed at elites. We can regret this, but there is a reason that even impolite or uncivil political speech is protected. Often, especially for marginalized groups, impolite speech is part of their challenge to elites or majority groups in society. Deciding which impolite or uncivil speech to block, or which users to deplatform, is difficult and demands clear criteria and processes. Having companies make these decisions on the fly, without transparent and clear criteria, risks damaging their legitimacy as hosts of the public arena, or even the legitimacy of the public arena as a space for political competition as a whole. This problem is only exacerbated since most companies running digital media crucial to the public arena are based in the US or China and are strangers to the political culture, legal systems, and contexts of the countries they have to make decisions about. This should give pause to anyone trying to outsource these decisions to digital platforms.

There is also the question of meaningful representation in digital spaces. In the past it was easy to look at the output of news organizations, or of a select set of news organizations, to assess the degree to which societal groups were represented. This is more difficult to do for digital media. For one, it is unclear what to focus on. Should representation be established at the aggregate level, looking at which groups find representation in all content available on digital platforms? Or should representation be established at the level of content visible to each specific user? In other words, are we satisfied with different societal groups being represented in the aggregate, or must this be true for the content each and every individual actually sees?

Beyond the conceptual question of what kind of representation of society we expect from platforms, we also have to address the challenge of limited transparency. Platforms are inherently opaque to outsiders. So how can the public, or even governments, assess the contribution of digital platforms to the quality of the public arena? Clearly, much remains to be done to establish transparent principles and procedures that make digital platforms assessable with regard to their contribution to the public arena. This is an important ongoing debate in which academics, regulators, journalists, and the public need to establish normative goals and practical procedures that create integrity and trust for digital structures of the public arena.

5.3.2. Algorithmic shaping of user behavior in the public arena

The importance of digital media as structures for the contemporary public arena also raises the question of the role of algorithms. Digital structures of the public arena - such as Facebook, Instagram, TikTok, or Twitter - use algorithms to determine which content to show their users and in which order. Algorithms thereby potentially shape the visibility of information crucial or detrimental to the functioning of the public arena. At the same time, the precise workings of these algorithms are unclear and their impact on information distribution uncertain. But in this discussion, a series of questions has been raised that provide interesting anchor points for future research.

Probably the most well-known expectation about how digital media might algorithmically shape people's behavior is the filter bubble. Around 2011, the political activist Eli Pariser [2011] looked at his Facebook feed and found suspiciously many posts that supported his political viewpoints, while seeing almost none that contradicted them. This led him to formulate the filter bubble thesis.

His reasoning was simple and compelling: Digital media, like Facebook, were interested in having their users spend more time on the service looking at content, providing the company with the opportunity to display and sell ads. To do so, they need to show people content they are likely to be interested in or to interact with. Since many people like to see content they agree with, the companies developed algorithms identifying content supporting people's previously held beliefs or attitudes. For politics, the consequence was that people would only see content supporting their political beliefs. By using algorithmically shaped services to access the public arena, people would thus move into an algorithmic cage free of surprises, showing them only information they were likely to agree with or support. This would be bad news for the public arena. Instead of making people visible to each other, digital structures of the public arena would hide them from each other. This reasoning proved to be as intuitively compelling as it is difficult to support empirically.
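The mechanism behind this reasoning can be illustrated with a toy model. The code below is purely a sketch of the logic described above, not any platform's actual algorithm; the `Item` type, the stance scale, and the `rank_feed` scoring rule are illustrative assumptions. It shows how a ranker that rewards predicted agreement fills a feed with ideologically congenial content.

```python
# Toy model of engagement-based ranking (illustrative only, NOT any
# platform's actual algorithm). Items carry a political "stance" on a
# single axis from -1.0 (opposing) to +1.0 (supporting); the ranker
# surfaces the items closest to the user's own stance.

from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    stance: float  # -1.0 .. +1.0 on a hypothetical attitudinal axis

def rank_feed(items, user_stance, k=3):
    """Return the k items whose stance is closest to the user's stance,
    mimicking a ranker that optimizes for predicted agreement."""
    return sorted(items, key=lambda i: abs(i.stance - user_stance))[:k]

items = [Item("politics", s) for s in (-0.9, -0.4, 0.1, 0.5, 0.8, 1.0)]
feed = rank_feed(items, user_stance=0.9)
print([i.stance for i in feed])  # only stances close to the user's survive
```

For a strongly opinionated user (stance 0.9), the opposing items (-0.9, -0.4) never make it into the top of the feed. In this toy world, the "algorithmic cage" is simply the top-k cutoff applied to an agreement score.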

By now, various studies have shown that people who use algorithmically shaped digital media to access news do not necessarily have less diverse news exposure than those who use other services. In fact, in a by now classic study, the economists Matthew Gentzkow and Jesse M. Shapiro found for the US that people who got their news online moved in ideologically less segregated information environments than those who talked about politics in person with others [Gentzkow and Shapiro, 2011]. Digital communication environments might thus actually contribute to broader exposure to political others than previous communication environments. Looking at the available empirical evidence more broadly gives little indication that algorithmic shaping captures people in algorithmic cages exposing them only to politically uniform content. However, there are other potential ways that algorithms might negatively impact the public arena.

Other contributions argue that digital media try to increase the time users spend on them by algorithmically selecting for highly controversial or emotionally charged information. This could be sensationalist content or content likely to provoke negative reactions, be it in the form of critical comments or sharing. Algorithmic shaping of information environments might thus harm the public arena not by showing us too little of the political other but too much. Again, looking at information making the rounds on popular digital media, one might be inclined to agree. However, it pays to keep in mind that this diagnosis is still mostly based on speculation about the workings of algorithms and the reach of information. Unlike the filter bubble, this expectation is only beginning to be systematically examined empirically. The jury is therefore still out on this.

Finally, another approach looks at the potential impact of algorithms on the radicalization of users. While arguments like the filter bubble focus on population-wide effects, this argument focuses on the experiences of, and effects on, select users who encounter extremist content. Here, the argument goes that people in algorithmically shaped environments like YouTube might encounter mildly deviant or controversial content, such as content in support of the far right or terrorism, and through the recommendation algorithm might get sucked into rabbit holes of increasingly extreme content. For some people, this might then constitute a content journey into radicalism. Empirical studies have shown that the YouTube algorithm suggesting what to watch next could produce comparable patterns, pushing people toward increasingly radical content. While algorithms might therefore not have the population-wide effect of tearing apart the shared public arena, they certainly can have detrimental effects on select and vulnerable people, thereby strengthening extremism at the margins of society. Clearly, more research is needed here. For example, the recent rise of TikTok as a channel for information and news, and its heavy reliance on algorithmic content selection, points to interesting challenges going forward.
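The rabbit-hole dynamic described above can also be captured in a minimal sketch. This is again a toy model under stated assumptions, not YouTube's actual recommender: the `pull` parameter and the one-dimensional "extremity" scale are hypothetical, chosen only to make the ratcheting logic visible.

```python
# Toy sketch of a rabbit-hole dynamic (illustrative assumption, NOT
# YouTube's actual recommender). Content is placed on a hypothetical
# "extremity" scale from 0.0 (mainstream) to 1.0 (extreme); each
# recommendation nudges the viewer slightly further along that scale.

def watch_next(current_extremity, pull=0.15, ceiling=1.0):
    """Recommend an item slightly more extreme than the current one,
    capped at the most extreme content available."""
    return min(current_extremity + pull, ceiling)

def session(start=0.1, steps=6):
    """Simulate a viewing session that begins with mildly deviant
    content and follows the 'watch next' recommendation each time."""
    path = [start]
    for _ in range(steps):
        path.append(round(watch_next(path[-1]), 2))
    return path

print(session())  # extremity ratchets upward with every recommendation
```

The point of the sketch is that no single recommendation step looks dramatic; it is the repeated application of a small, consistent pull that carries a viewer from mildly controversial to extreme content over the course of one session.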

An important challenge for all research in this area is that both the workings of algorithmic shaping within digital media and its effects are highly opaque to academics and the public. This has given rise to far-reaching speculation about the hidden workings of digital media and their supposed effects. This opaqueness has also contributed to intuitive but empirically elusive speculations, like the filter bubble, achieving a prominence that does not correspond with the empirical evidence. Opaqueness therefore legitimizes speculation while weakening the discursive strength of empirical evidence. Bad ideas thus exit the field much more slowly than one could wish for. Some academics have reacted to this opaqueness by demanding broader access to data from digital media. While more data is of course always a popular demand among academics, it remains doubtful that data alone will solve the challenges presented by algorithmic shaping. Also, companies running digital media have to weigh the interests of their users against the interests of academics. Overall, research addressing the patterns and effects of algorithmic shaping on digital media might profit from better and broader data access. But actual progress in the field might depend even more on creative research designs addressing the heavy logical as well as conceptual challenges that characterize this research area.

5.3.3. Geopolitics of digital structures

Digital media hosting the public arena have also introduced questions not usually asked in discussions about traditional structures of the public arena. Digital structures make it necessary to talk about the geopolitics of the public arena. Here, one question looms large: increasingly, societies have to figure out how to adjust to the fact that the digital structures hosting their public arena are run from countries other than their own.

In the past, structures hosting a country's public arena were run by organizations based within that country. This gave governments direct control over, access to, and knowledge about structures hosting the public arena. This is still true for most media organizations. But it is decidedly different for digital structures. People in Germany use Google to search for political news. People in the UK use WhatsApp to coordinate protests. People in the US use TikTok to learn about politics. The public arenas of most Western democracies rely crucially on digital structures run from other countries. Most of these are hosted in the US. Services like Facebook, Google, Instagram, Twitter, or YouTube are crucial features of public arenas all over the world. But Chinese services are starting to figure prominently as well: TikTok and WeChat are increasingly used outside China, while China remains closed to most Western structures.

By relying on digital structures hosted in other countries, countries open themselves up to potential interference or influence. Not surprisingly, once the geopolitical mood shifts from cross-border cooperation to competition, these dependencies of crucial societal structures become risks. Already in 2013, the NSA scandal revealed that the US government worked closely with digital technology companies to spy on allies. More recently, the growing worry about relying on foreign structures could be seen in the blocking of 5G network technology provided by the Chinese telecommunications company Huawei in many Western countries. The shutdown of the Russian propaganda television station Russia Today in Germany following Russia's unprovoked attack on Ukraine, and the Russian blocking of access to Western digital media, provide other examples of these fears.

But probably the most striking attempt at controlling foreign digital structures can be found in China. Here, the government has implemented a firewall that shuts China off from the Western internet and from digital media run by Western companies. In the beginning, the Great Firewall was not taken seriously by the West or by technology companies. But over time, it created a space that protected Chinese companies from Western competition, so that local structures could emerge that were open to regime control. This has allowed China to develop a set of digital media that can compete with those coming from the US and that at the same time offer governments greater degrees of control. These structures promise greater independence from the US while, of course, creating greater dependence on China. Given the potential re-emergence of geopolitical blocs in the aftermath of the Ukraine war of 2022, this is an important development, especially since already before the war, China was exporting its digital media to other countries in the context of its international infrastructure and trade initiatives. Some have spoken of this as the Digital Silk Road.

Looking closely, we can thus identify three types of interdependency emerging from important structures of the public arena being hosted abroad. The first is the direct dependence on structures that are potentially open to interference and access by a foreign government. This is the sort of risk made painfully clear to Western democracies in the context of the 2013 NSA scandal, when the world learned that technology companies based in the US provided the US government with broad access to their user data, allowing it to spy on users. It is the same sort of risk that has led various countries to block local infrastructure projects relying on technology provided by Chinese firms.

The second interdependence is the influence of laws and regulations. This influence can go one of two ways. The first route is that laws and regulations from the country providing a digital structure travel to the country using said structure. This is what Erie and Streinz [2021] describe as the Beijing Effect: countries relying on digital media from China have to accept Chinese approaches to digital governance and data regulation. The other route goes the opposite way. Here, the country using digital structures from another country sets rules and regulations as a condition for market access. If its market is attractive enough, this can indeed lead to local laws and regulations changing laws and regulations in the digital structure's country of origin. This is what Bradford [2020] has called the Brussels Effect. As the name suggests, the primary example of this sort of influence is the EU. Examples of how EU regulation has started to rein in US companies, and even to influence regulation in the US itself, include the Data Governance Act (DGA), the Digital Markets Act (DMA), the Digital Services Act (DSA), and the General Data Protection Regulation (GDPR).

The third type of influence concerns information flows and can probably best be characterized as a form of soft power. This is the kind of influence China and Russia are primarily afraid of, and it is the reason why they block access to the Western internet. Here, the fear is that people might get information from outside autocratic regimes and in turn become critical of those regimes. But the West is also afraid of this type of influence. Here, the fear is that autocratic regimes use media structures to spread their propaganda beyond their borders. To mitigate these dangers, Western governments have, for example, closed down local stations of Russia Today, an international television station financed by the Russian state. Recently, similar fears have been raised with regard to the potential influence of the Chinese government in Western democracies through the digital video service TikTok, which is becoming increasingly popular among young people in the US and other Western democracies.

Overall, these observations show that geopolitics starts to matter in the discussion of the public arena once digital media become important structures hosting it. Questions of mutual dependencies and influence through digital media will become even more important than they already are. Here, the geopolitical fault lines of the international system will come to shape the discussion of the public arena. Mounting or relaxing tensions will increase or decrease the importance of these fault lines.