The world’s leading digital media and regulatory policy journal

The algorithm’s grip: how digital gatekeepers are reshaping media pluralism

In his IIC Future Leaders Competition-winning essay, SARVJEET PAL argues that algorithmic influence is here to stay and explains how regulatory approaches should adapt

In March 2020, Twitter was alight with the news that Facebook was hiding legitimate information about COVID-19. The news media reported that an error in spam detection had blocked posts shared by diverse publishers, including BuzzFeed, The Independent, the New York Post, The Atlantic, Business Insider and The Times of Israel.1 Another concerning instance of algorithms playing a crucial role in global information narratives occurred in April 2023 during the Russia-Ukraine conflict, when Musk-owned Twitter (now X) attempted to downrank Twitter Spaces related to the conflict.2

The conventional notion of media pluralism is associated with deliberative democracy and implies that citizens have access to a wide array of information as a precondition for participating in the democratic process. The concept has been given different inflections: as a remit for public media to provide plural information in the public interest, as the representation of geographic and cultural diversity, and so on. This essay builds on two layers of media pluralism: content and ownership.

Defining media pluralism in the age of new media services, the internet, the World Wide Web, social media and AI has truly challenged scholars and policymakers, as the end user is at the mercy of the recommender system. While the abundance of information initially seemed promising, as the new technologies allowed cheap and universal systems to disseminate any kind of information, the consolidation of (big) companies as intermediaries of the information itself has sparked criticism about how the digital ecosystem can be effectively open and plural and whether the democratic discourse really benefits from this.3

The challenge at hand is not just the quantity of information but the quality and diversity of perspectives reaching audiences, as algorithms increasingly shape public exposure to news and opinion. The fog of information abundance, coupled with the monopolistic tendencies of online platforms, risks marginalising minority viewpoints and exacerbating social divides. As exemplified in ongoing regulatory debates – such as the EU’s Digital Services Act (DSA) and the global call for algorithmic transparency – ensuring robust media pluralism remains an urgent democratic imperative.

Media plurality in the age of the algorithm

In the digital age, the traditional notion of media pluralism, which focuses on diverse ownership and available content, is challenged by information overload. Where once it was media outlets that were the scarce resource, now it is users’ attention. The latest literature suggests that the notion of media plurality has shifted from concern about the diversity of available sources and content to the diversity of choices people make and the range of content consumed by individuals, the so-called ‘exposure diversity’.
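
The idea of exposure diversity can be made concrete with a simple metric. A minimal sketch, assuming we merely count which outlets a user actually consumed (the outlet names below are invented for illustration), is the normalised Shannon entropy of a user’s news diet:

```python
import math
from collections import Counter

def exposure_diversity(consumed_sources):
    """Normalised Shannon entropy of a user's news diet.

    Returns a value in [0, 1]: 0 means all attention went to a
    single source, 1 means attention was spread evenly across
    every source the user encountered.
    """
    counts = Counter(consumed_sources)
    total = sum(counts.values())
    if len(counts) < 2:
        return 0.0
    entropy = -sum((n / total) * math.log2(n / total)
                   for n in counts.values())
    return entropy / math.log2(len(counts))

# A user who reads widely scores higher than one locked in a bubble.
broad = ["bbc", "guardian", "times", "ft", "local"] * 2
bubble = ["bbc"] * 9 + ["guardian"]
print(exposure_diversity(broad))   # 1.0 (perfectly even spread)
print(exposure_diversity(bubble))  # well below 1
```

On this view, two users with identical access to sources can have very different diversity scores, which is precisely why availability-based measures miss the point.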

Traditional media concentration metrics, such as audience share or circulation figures, are proving inadequate for measuring algorithmic influence.

Users now rely heavily on new online selection intermediaries, such as search engines and social media, to navigate this abundance. These platforms serve as formidable gatekeepers, significantly shaping the content users encounter. Their algorithms and personalisation features can inadvertently narrow viewpoints, creating ‘filter bubbles’ or ‘echo chambers’.4 Internet functions, such as filter mechanisms and personalisation, reinforce people’s tendency to select only the topics and activities that are relevant to them.5 This behaviour can lead to audience fragmentation and polarisation. Consequently, groups of like-minded individuals tend to entrench their existing beliefs and reinforce more extreme versions of their views after discussing ideas among themselves. The EU’s High Level Group (HLG) on Media Freedom and Pluralism commented that ‘such developments undoubtedly have a potentially negative impact on democracy. Thus, we may come to read and hear what we want, and nothing but what we want. The concern is people forgetting that alternatives do exist and hence becoming encapsulated in rigid positions that may hinder consensus-building in society.’6

In the age of algorithms, regulators face significant challenges with content production and distribution. Concerns about media concentration intensify as opaque social media platform algorithms increasingly shape news selection. Traditional, medium-specific regulation is largely obsolete, failing to address the immense ‘attention share’ commanded by tech giants such as Facebook, Instagram and YouTube, now among the world’s leading news sources.7 This concentration poses a potent risk of media capture and manipulation, particularly for information-poor users who are more susceptible to influence. The dynamic digital environment demands platform-neutral policies to protect media plurality, affirming that diverse information sources remain critical for a healthy democracy.

The concentration of ownership

The algorithmic revolution has coincided with an unprecedented concentration of media ownership in the digital realm. Five major platforms – Google, Facebook, Amazon, Apple and Microsoft – now control the vast majority of digital information distribution. Google processes over 8.5 billion searches daily,8 while Facebook’s family of apps reaches nearly 4 billion users worldwide. Over the past 40 years, media concentration in the United States has shifted from traditional media to digital platforms, with over-the-top services displacing legacy distribution. While the wireline industry has declined, the wireless and internet service provider markets have grown, with major companies such as AT&T, Verizon and Google dominating. The search engine market is highly concentrated, with Google and Microsoft controlling 97 per cent of it by 2022. Despite concerns about excessive concentration, the data reveal that most sectors, with the exception of search, are moderately concentrated or competitive. By 2022, the media industry had grown to $1.34 trillion, driven by distribution companies and big tech.9

In EU countries, the Media Pluralism Monitor (MPM) results for the indicator assessing the risk of media ownership concentration show that the average risk level has increased from 80 to 86 per cent in recent years. It is the highest level of risk among the 20 indicators composing the MPM.10 When Google adjusts its search algorithm, it can drastically impact the visibility of news organisations, effectively determining which outlets succeed or fail in the digital marketplace. Changes to Facebook’s news feed algorithm have demonstrated a significant impact on news consumption patterns and even electoral outcomes.

The regulatory dilemma

The regulatory landscape surrounding algorithmic media concentration presents a complex web of overlapping jurisdictions, conflicting objectives and unprecedented challenges that traditional media regulation was never designed to address. The jurisdictional complexity is perhaps most evident in the enforcement challenges faced by national regulators attempting to govern global platforms. While traditional media regulation operates within defined national boundaries, algorithmic systems transcend these limitations, enabling ‘regulatory arbitrage’, in which platforms can exploit differences in national laws.11 The European Union’s Digital Services Act represents one of the most ambitious attempts to address this challenge, requiring platforms to provide algorithmic transparency and risk assessments.12 However, implementation remains fraught with technical and political obstacles, as platforms resist revealing details of the proprietary algorithms that constitute their competitive advantage.

Overlapping regulatory remits compound these difficulties. Content moderation intersects with competition policy, data protection converges with media plurality concerns and telecommunications regulation collides with broadcasting standards. This convergence has created ‘institutional friction’, where multiple agencies within the same jurisdiction may have competing mandates regarding the same platforms.

Traditional media concentration metrics, such as audience share or circulation figures, are proving inadequate for measuring algorithmic influence. The ‘attention economy’ defies conventional measurement, as a platform’s impact on media plurality cannot be captured merely through user numbers or time spent.13 Instead, regulators must grapple with the amplification effect of algorithms, where subtle changes in ranking can dramatically shift public discourse.14
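
The conventional metric the essay calls inadequate is easy to state. The Herfindahl-Hirschman Index (HHI) sums squared market shares; the sketch below, with invented share figures used purely for illustration, shows why a handful of attention gatekeepers registers as far more concentrated than a classic multi-outlet audience market:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares
    (shares given in per cent). Above roughly 2,500, a market is
    conventionally treated as highly concentrated."""
    return sum(s ** 2 for s in shares)

# Hypothetical figures for illustration only.
audience_shares = [20, 18, 15, 12, 10, 25]   # legacy outlets' audience shares
attention_shares = [55, 25, 10, 5, 5]        # attention captured by platforms

print(hhi(audience_shares))   # 1818 - moderately concentrated
print(hhi(attention_shares))  # 3800 - highly concentrated
```

Even so, as the paragraph above notes, an HHI over attention shares still cannot capture the amplification effect: a platform with a modest share of time spent can still swing discourse through small ranking changes.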

The pace of technological change further exacerbates regulatory challenges. By the time comprehensive legislation is drafted, debated and implemented, the technological landscape has often evolved beyond recognition. This ‘pacing problem’ leaves regulators perpetually reactive rather than proactive, struggling to address yesterday’s concerns while new algorithmic innovations reshape media consumption patterns.15

Charting a new course of regulation for algorithmic pluralism

Much current regulation revolves around protecting viewers’ security and privacy. Addressing the challenges posed by algorithms to media pluralism and ownership requires a comprehensive, multifaceted and proactive approach. The goal must be to foster an ‘algorithmic pluralism’ where diverse voices and perspectives can thrive, rather than being suppressed or distorted by opaque, engagement-driven systems.16

The imperative is to move beyond traditional, often reactive, regulatory frameworks that have proven inadequate for the dynamic digital environment.17 A comprehensive approach to governing generative AI and digital media must be adaptive, participatory and anticipatory.

The European Union has taken important steps in this direction with the DSA and the European Media Freedom Act (EMFA). The DSA, while not solely focused on media, substantially improves mechanisms for the removal of illegal content and strengthens the effective protection of users’ fundamental rights online, including freedom of speech.18 The EMFA, which entered into force in May 2024, represents a pivotal, harmonised framework for controlling media market concentrations, explicitly linking media plurality and editorial independence. It mandates pluralism impact assessments for media mergers, ensures transparency of media ownership by requiring disclosure of legal names and contact details, and provides crucial safeguards against unwarranted content removal by very large online platforms (VLOPs). A notable innovation of the EMFA is the introduction of a ‘right of customisation’ for users, enabling them to change default settings on devices and interfaces to reflect their own media preferences, thereby promoting user agency over algorithmic defaults.19

In the United States, proposed legislation, such as the Algorithmic Accountability Act, aims to require companies to be transparent about the algorithms they use, conduct impact assessments to identify and mitigate potential biases, and provide clear explanations of how their algorithms function and make decisions.20

Digital Services Act (EU)21
Primary focus: removing illegal content, upholding fundamental rights and ensuring platform accountability.
Key mechanisms: risk assessments for VLOPs, safeguards on content removal and transparency in content moderation.
Impact on media pluralism (intended/observed): safeguards freedom of speech, prevents unwarranted content removal and indirectly supports diverse expression.
Critiques/challenges: the primary focus is on content legality rather than systemic pluralism; regulatory lag.

European Media Freedom Act (EU)
Primary focus: media market concentration, editorial independence and media plurality.
Key mechanisms: pluralism impact assessments for mergers, transparency of media ownership, protection against unwarranted VLOP content removal and the user’s right of customisation.
Impact on media pluralism (intended/observed): directly aims to increase the diversity of sources and content, enhance user exposure to varied viewpoints and strengthen editorial independence.
Critiques/challenges: implementation challenges across diverse national contexts; ongoing debate on balancing market and pluralism goals.

Algorithmic Accountability Act (proposed) (US)22
Primary focus: algorithmic transparency, fairness and the mitigation of bias.
Key mechanisms: mandated disclosure of algorithms, impact assessments to pinpoint biases and justifications for algorithmic choices.
Impact on media pluralism (intended/observed): aims to prevent bias and discrimination, potentially promoting fairer content distribution and greater visibility for diverse voices.
Critiques/challenges: enforcement remains untested, and defining ‘fair’ and ‘transparent’ in practice is difficult, which could invite regulatory capture.

Notice and action model (e.g. NetzDG and EU)23
Primary focus: restricting ‘problematic’ online content, such as hate speech and misinformation.
Key mechanisms: platforms remove content upon notification.
Impact on media pluralism (intended/observed): limited effect on structural pluralism, as the model addresses symptoms rather than the underlying causes of a dysfunctional public sphere.
Critiques/challenges: a ‘near-singular focus on restricting “problematic” online content’ risks ‘privatised government censorship’ and regulatory capture.

Market self-regulation model (US)24
Primary focus: industry-led standards and content moderation.
Key mechanisms: platforms set their own terms of use and moderation policies.
Impact on media pluralism (intended/observed): allows rapid innovation but often lacks robust public accountability, which can perpetuate power imbalances.
Critiques/challenges: tends to focus on content restriction; leaves ‘mass surveillance and privatised government censorship unaddressed’; insufficient for entrenched power dynamics.

Figure 1: Comparative regulatory approaches to algorithmic governance in media

This comparative view highlights a crucial observation: the policy landscape is gradually shifting from reactive content moderation to proactive algorithmic design as a primary policy lever. Historically, regulatory efforts have predominantly focused on content moderation, exemplified by ‘notice and action’ models. However, the increasing recognition of the inadequacy of this approach stems from its focus on symptoms, such as hate speech and fake news, rather than addressing the underlying structural threats. A more profound intervention lies in influencing the fundamental design of algorithms themselves to inherently promote diversity, transparency and fairness.25 This approach aims to embed pluralism into the system from its inception, rather than attempting to filter out harmful content after it has been amplified. It requires regulators to develop deeper technical expertise and foster collaboration with AI developers and researchers to embed ethical and pluralistic principles directly into the architectural blueprints of digital platforms. It signifies a strategic move from censorship to a more comprehensive architectural governance.

Designing for diversity and transparency

The future of media pluralism hinges on a deliberate shift towards embedded algorithmic design. Instead of merely reacting to the proliferation of harmful content, algorithms should be engineered to actively promote diverse content and dismantle the isolating effects of filter bubbles.26 This involves a conscious effort to incorporate diverse data sources during algorithm training to mitigate inherent biases and to design systems that actively promote content challenging existing viewpoints, thereby fostering broader exposure.27

While the analysis has extensively detailed how AI algorithms contribute to filter bubbles, bias and the spread of misinformation, the evidence also presents AI as a powerful tool for positive change

To mitigate bias, a range of tools and methods for responsible algorithmic use should be widely adopted. These include rigorous code audits, data scraping, statistical analysis and comprehensive impact assessments designed to detect and address discrimination at various stages of the algorithm’s lifecycle.28 Furthermore, the teams developing these algorithms must be diverse and multidisciplinary, combining deep technical knowledge with expertise from the social sciences, ethics and human rights to ensure a holistic understanding of societal impacts.29
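
As one illustration of what such a statistical audit might look like in practice, a review team could compare how often different groups of sources are actually recommended. The sketch below is an assumption-laden caricature, not a prescribed method: the four-fifths-style threshold and the group labels are invented for the example.

```python
def audit_exposure(impressions, threshold=0.8):
    """Disparity check over recommendation counts.

    `impressions` maps a source group to the number of
    recommendations it received. Any group whose share falls below
    `threshold` times the best-served group's share is flagged,
    with its ratio to the best-served group.
    """
    total = sum(impressions.values())
    rates = {group: n / total for group, n in impressions.items()}
    best = max(rates.values())
    return {group: round(rate / best, 3)
            for group, rate in rates.items()
            if rate / best < threshold}

# Hypothetical audit: minority-language outlets are under-recommended.
flagged = audit_exposure({"mainstream": 9000,
                          "local": 2500,
                          "minority_lang": 500})
print(flagged)  # both non-mainstream groups fall below the threshold
```

An audit of this kind only detects disparity; deciding whether a given disparity is harmful is exactly where the multidisciplinary expertise mentioned above comes in.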

Promising interventions have already demonstrated the potential for algorithmic nudging to enhance diversity. For instance, studies have shown that subtly nudging YouTube’s algorithm to increase recommendations for videos from verified and ideologically balanced news channels can significantly increase news consumption and the ideological diversity within users’ news diets.30 Such examples illustrate that algorithms are not inherently detrimental to pluralism; their impact is a direct function of their design and the values embedded within them. This brings to light a crucial observation: the dual nature of AI as both a threat and a potential solution for media pluralism. While the analysis has extensively detailed how AI algorithms contribute to filter bubbles, bias and the spread of misinformation,31 the evidence also presents AI as a powerful tool for positive change.
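
A caricature of such a nudge can be sketched in a few lines. In this illustrative example, where the scores, leaning labels and boost weight are all invented, candidate recommendations whose ideological leaning is underrepresented in a user’s recent history receive a small score boost before ranking:

```python
from collections import Counter

def diversity_nudge(candidates, history, boost=0.15):
    """Re-rank (score, leaning, item_id) candidates, boosting items
    whose leaning is underrepresented in the user's recent history."""
    seen = Counter(history)
    total = sum(seen.values()) or 1
    def nudged(item):
        score, leaning, _ = item
        underrepresentation = 1.0 - seen[leaning] / total  # 1.0 if never seen
        return score + boost * underrepresentation
    return sorted(candidates, key=nudged, reverse=True)

history = ["left"] * 8 + ["centre"] * 2      # a heavily left-leaning diet
candidates = [
    (0.80, "left", "vid_a"),
    (0.78, "right", "vid_b"),
    (0.77, "centre", "vid_c"),
]
for score, leaning, vid in diversity_nudge(candidates, history):
    print(vid, leaning)
# vid_b (right) now outranks vid_a despite a lower raw score
```

The point of the sketch is that the intervention lives entirely in the ranking function: no content is removed, yet the user’s exposure becomes measurably more diverse.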

Rebalancing media economics and ownership

To safeguard media pluralism, it is imperative to rebalance the economic power dynamics that currently favour dominant digital platforms. European and national competition authorities should move beyond purely economic metrics and explicitly consider the specific value of media pluralism when enforcing competition rules.32 This includes conducting proactive market assessments to identify potential threats to pluralism arising from mergers or market dominance.33

Ensuring fair compensation for journalistic content is also critical for the sustainability of diverse media. New legislation, such as Article 15 of the EU Copyright Directive, aims to foster plural, independent journalism by improving the bargaining position of press publishers vis-à-vis online market players and ensuring fairer remuneration for content used on online sharing platforms. News publishers are increasingly exploring licensing agreements with AI companies to formalise the use of their content for training large language models, establish the financial value of their journalism and secure compensation for content that has often been scraped without permission.34 Financial support mechanisms, such as the EU’s Creative Europe programme, can play a vital role by dedicating significant budgets to media freedom, pluralism and literacy projects.

Cultivating digital and media literacy

In an information environment increasingly shaped by algorithms, empowering citizens with robust digital and media literacy skills is paramount. Such literacy enables individuals to critically evaluate information, identify biases and navigate the complex, often opaque, media landscape with greater discernment.35

AI-powered educational tools can be developed to provide personalised media literacy training, tailor learning experiences to individual needs and improve students’ ability to critically assess information sources.36 This cultivation of media literacy empowers individuals to become more critical consumers of online information and actively seek diverse sources, thereby helping them to break free from the confines of filter bubbles and echo chambers.

The enduring quest for an informed democracy

The digital age, with its ubiquitous algorithms and dominant platforms, has irrevocably reshaped our public sphere. Far from being an archaic ideal, media pluralism emerges as an urgent and indispensable democratic imperative. This analysis has demonstrated how algorithms, often driven by commercial imperatives and embedded with subtle biases, profoundly reshape content distribution and consolidate ownership. These dynamics have led to the proliferation of filter bubbles and echo chambers, the amplification of misinformation and a corrosive erosion of public trust in information sources.

The core problems remain stubborn: the opacity of ‘black box’ algorithms, which obscures how decisions are made; the complex and shifting rules of ownership raised by generative AI; and the persistent struggle of national laws to keep pace with the global flow of digital information. These issues collectively threaten the foundational principles of an informed citizenry and a vibrant marketplace of ideas.

The future of our public sphere, and the resilience of our democracies, hinges on our collective ability to reclaim agency from the algorithmic atlas. It is a quest to ensure that the digital commons truly serves the diverse interests of humanity, fostering an environment where a plurality of voices can be heard, understood and engaged with, rather than merely serving the algorithms designed solely to grab our attention.


Sarvjeet Pal

Sarvjeet Pal is studying for a Masters in public policy at the Indian Institute of Technology Delhi. This essay was co-authored with Nishat Bhatotia.

1 Heilweil R (2020). Facebook is flagging some coronavirus news posts as spam. Vox, 18 March. bit.ly/43FVoG5

2 Barr K (2023). Musk’s Twitter Downranks Twitter Spaces Regarding the ‘Ukraine Crisis’. Gizmodo, 3 April. bit.ly/49tyNjE

3 Moore M and Tambini D (eds) (2018). Digital Dominance: The Power of Google, Amazon, Facebook, and Apple. Oxford University Press.

4 Prat A (2020). Measuring and Protecting Media Plurality in the Digital Age: A Political Economy Approach. Knight First Amendment Institute at Columbia University, 10 August. bit.ly/4i92RTP

5 Bimber B (2003). Information and American Democracy: Technology in the Evolution of Political Power. Cambridge University Press; and Sunstein CR (2009). Republic.com 2.0, chapter 4, part II. Princeton University Press.

6 European Union (2013). A free and pluralistic media to sustain European democracy. The Report of the High Level Group on Media Freedom and Pluralism. bit.ly/3JW8oke

7 Pew Research Center (2025). Social Media and News Fact Sheet, 25 September. bit.ly/49oh7WE

8 Nonofo J (2025). Google Search Statistics 2025: Understanding the Numbers Behind 8.5 Billion Daily Searches. Global Tech Stack, 26 March. bit.ly/4pjJg5Z

9 Buckweitz J and Noam E (2025). Media ownership and concentration in the United States of America, 1984-2023. Global Media and Internet Concentration Project, 3 February. bit.ly/43CyM9k

10 Centre for Media Pluralism and Media Freedom (2024). Why accurate measuring of media ownership concentration matters. Blog, 4 March. bit.ly/4phllns

11 Ishkhanyan A (2025). The sovereignty–internationalism paradox in AI governance: digital federalism and global algorithmic control. Discover Artificial Intelligence, Vol 5, Article 123, 23 June. bit.ly/4oa5rur

12 Council of the European Union (2022). DSA: Council gives final approval to the protection of users’ rights online. Press release, 4 October. bit.ly/483lK67

13 Spence W (2020). Facebook, the Attention Economy and EU Competition Law: Established Standards Reconsidered? European Business Law Review, 31(4), 693–724. bit.ly/44l2Oi6

14 Marsden C and Meyer T (2019). Regulating disinformation with artificial intelligence. European Parliamentary Research Service. bit.ly/3XCkAK1

15 Napoli PM (2015). Social media and the public interest: Governance of news platforms in the realm of individual and algorithmic gatekeepers. Telecommunications Policy, 39(9), 751-760. bit.ly/4ieEuV8

16 Verhulst SG (2023). Steering Responsible AI: A Case for Algorithmic Pluralism. Working paper. arXiv, 20 November. bit.ly/4icHxgq

17 Taeihagh A (2025). Governance of Generative AI. Policy and Society, 44(1), 1–22, 3 February. bit.ly/4oo5Ycp

18 European Commission, Media freedom and pluralism (webpage). bit.ly/4phMHtR

19 European Commission. A new push for European democracy (webpage). bit.ly/4pnf3D9

20 Wyden R (2023). Algorithmic Accountability Act of 2023: Summary. bit.ly/4ra8q8H

21 European Commission. A Europe fit for the digital age (webpage). bit.ly/3X227GH

22 U.S. Congress (2023). Algorithmic Accountability Act of 2023. bit.ly/4pda73s

23 See, for example, the Greens/EFA Notice and Action proposal. bit.ly/47P28np

24 Keaveny J (2005). In Defense of Market Self‑Regulation: An Analysis of the History of Futures Regulation and the Trend Toward Demutualization. Brooklyn Law Review, 70(4). bit.ly/49ZrY9E

25 Keaveny J (2005). In Defense of Market Self‑Regulation: An Analysis of the History of Futures Regulation and the Trend Toward Demutualization. Brooklyn Law Review, 70(4). bit.ly/49ZrY9E

26 See note 25.

27 Liu X, He D and Wu D (2020). Breaking Social Media Bubbles for Information Globalization: A Cross-Cultural and Cross-Language User-Centred Sense-Making Approach. Data and Information Management, 4(4), 1 December. bit.ly/4i9ojs4

28 Digital Future Society (2024). Towards accountable algorithms: tools and methods for responsible use. bit.ly/4o0KwJZ

29 See note 28.

30 Yu X, Haroon M, Menchen-Trevino E and Wojcieszak M (2024). Nudging recommendation algorithms increases news consumption and diversity on YouTube. PNAS Nexus, 3(12), 518. bit.ly/4pnleaj

31 See the Forum on Information and Democracy’s recommendations on the pluralism of curation and indexing algorithms at Centre for Media Pluralism and Media Freedom (2023). Breaking the filter bubbles: Recommendations to promote pluralism online, 21 February. bit.ly/4o2P1E0

32 See note 6.

33 See note 6.

34 Brown PD and Jaźwińska K (2025). Journalism Zero: How Platforms and Publishers are Navigating AI. Tow Center for Digital Journalism, Columbia University, 4 June. bit.ly/3JXQPjO

35 See note 25.

36 See note 25.
