
Artificial intelligence and the issue of antitrust

AI poses serious questions for competition regulators, who are grappling with the expected impacts of the technology on markets and consumers. FRANCESCO LIBERATORE and MARTY MACKOWSKI consider the concerns and offer a competition law perspective

Recognising the extraordinarily rapid evolution of artificial intelligence (AI) and its broad impact across many sectors, regulators in the UK, the European Union and globally are moving quickly to ensure that the market remains competitive and that the technology is not used by companies to gain an unfair advantage or harm consumers.  In this article we first examine the key competitive concerns relating to AI and the guidance provided by the cases decided thus far.  We next explore the regulatory approaches being taken by the UK as well as the EU and US and the common themes that underlie them.  Finally, we cover some practical considerations for companies as they navigate this evolving landscape.

On 23 May 2024, the IIC and Squire Patton Boggs co-hosted a roundtable discussion on how to balance innovation and regulation in a space that is developing at great speed.  That discussion, under the Chatham House Rule, featured contributions from regulators at the UK Competition and Markets Authority (CMA) and Ofcom, as well as representatives from IIC strategic partner members and other industry participants who are developing and utilising AI products and services.  Key learnings from that discussion are incorporated in the analysis below.

Lessons from previous case law: the key competition risks of AI

There is a wide spectrum of potential EU and UK competition law issues arising from the use of AI, and regulatory decisions thus far have only begun to address them. The first is that AI can facilitate collusion. This happens where AI allows businesses to exchange information that is competitively sensitive, forward-looking, disaggregated and company-specific.

At one end of the spectrum, there is little doubt that the use of pricing algorithms to implement resale price maintenance (RPM) or a price fixing cartel is illegal. Examples include:

  • Consumer electronics manufacturer Asus implemented RPM through the use of sophisticated monitoring tools, allowing the supplier to intervene swiftly in case of price decreases.1
  • Sellers on Amazon’s UK website used automatic repricing software to monitor and adjust prices to give effect to an offline price fixing cartel whereby they had agreed not to undercut each other’s prices.2
  • Casio set minimum prices over a five-year period for online resellers of its digital pianos and keyboards. Casio used price monitoring software to monitor RPM implementation in real time.3
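The mechanics at issue in these decisions are simple to illustrate. Below is a minimal, hypothetical sketch, not drawn from any of the cases above, of how price monitoring software can police an RPM floor: it polls resellers’ advertised prices and flags any listing below the agreed minimum so that the supplier can ‘intervene swiftly’. The reseller names, prices and data-fetching function are invented for illustration.

```python
# Hypothetical illustration of price monitoring used to police RPM.
# Reseller names, prices and fetch_listed_prices() are invented for this sketch.

MINIMUM_RESALE_PRICE = 199.00  # the price floor "agreed" with resellers

def fetch_listed_prices() -> dict:
    """Stand-in for a scraper or marketplace API returning each reseller's live price."""
    return {
        "reseller_a": 204.50,
        "reseller_b": 189.99,  # below the floor
        "reseller_c": 199.00,
    }

def find_non_compliant(prices: dict, floor: float) -> list:
    """Return the resellers currently listing below the agreed minimum price."""
    return [name for name, price in prices.items() if price < floor]

if __name__ == "__main__":
    for name in find_non_compliant(fetch_listed_prices(), MINIMUM_RESALE_PRICE):
        # In the cases cited above, alerts of this kind enabled the supplier
        # to contact the reseller and demand that the price be raised.
        print(f"ALERT: {name} is pricing below the agreed minimum")
```

The competition law problem lies not in the monitoring code itself but in the restriction it enforces; the software simply makes detection and intervention fast enough to keep the RPM or cartel arrangement effective.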

At the other end of the spectrum, the application of EU/UK antitrust rules on self-learning pricing algorithms is more complex. In 2021, the CMA published a paper titled ‘Algorithms: How they can reduce competition and harm consumers’, in which it outlined hypothetical theories of harm, including ‘autonomous tacit collusion’.4 The CMA noted that ‘simulation studies show that there are clear theoretical concerns that algorithms could autonomously collude without any explicit communication between firms. For example, Calvano et al (2019) showed that Q-learning (a relatively simple form of reinforcement learning) pricing algorithms competing in simulations can learn collusive strategies with punishment for deviation, albeit after a number of iterations of experimentation in a stable market.’ However, there is little or nothing in the way of directly applicable precedents to date.
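To make the concern concrete, the following is a much-simplified sketch in the spirit of the simulations the CMA cites, not a reproduction of the Calvano et al model: two Q-learning agents repeatedly pick a price from a small grid, each observing only the rival’s last price and its own profit, with no communication between them. The price grid, demand function and learning parameters are illustrative assumptions.

```python
import random

PRICES = [1.0, 1.5, 2.0]          # discrete price grid; marginal cost = 1.0
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def demand(own_price: float, rival_price: float) -> float:
    """Toy demand: the cheaper firm captures most of the market."""
    if own_price < rival_price:
        return 1.0
    if own_price == rival_price:
        return 0.5
    return 0.2

def profit(own_price: float, rival_price: float) -> float:
    return (own_price - 1.0) * demand(own_price, rival_price)

class QAgent:
    """Each agent sees only the rival's last price; it never communicates."""
    def __init__(self):
        self.q = {s: {a: 0.0 for a in PRICES} for s in PRICES}

    def act(self, rival_last_price: float) -> float:
        if random.random() < EPSILON:          # explore occasionally
            return random.choice(PRICES)
        row = self.q[rival_last_price]
        return max(row, key=row.get)           # otherwise exploit learned values

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[next_state].values())
        self.q[state][action] += ALPHA * (reward + GAMMA * best_next - self.q[state][action])

a, b = QAgent(), QAgent()
price_a, price_b = random.choice(PRICES), random.choice(PRICES)
for _ in range(200_000):
    new_a, new_b = a.act(price_b), b.act(price_a)
    a.learn(price_b, new_a, profit(new_a, new_b), new_b)
    b.learn(price_a, new_b, profit(new_b, new_a), new_a)
    price_a, price_b = new_a, new_b

# In runs of this toy model, prices typically settle above cost without any
# information exchange between the two agents.
print('final prices:', price_a, price_b)
```

Even in a toy setting like this, prices can settle above cost although the agents never exchange information, which is precisely why liability is hard to frame: there is no agreement to point to, only an outcome.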

It is settled case law that competitors can intelligently adapt to the market without infringing EU/UK antitrust law, as long as there is no ‘concurrence of wills’ between them, replacing independent decision making with collusion (Wood Pulp II). However, it is an open question whether self-learning algorithms that signal prices to each other and learn to follow the price leader would fall within this safe harbour. There is no precedent to date on this latter scenario, but the case law on price signalling may provide a useful analytical framework. In Case 39850, EU Commission Decision, Container Shipping, 14 container liner shipping companies regularly announced their intended future increases of freight prices on their websites, via the press, or in other public ways. The announcements expressed the intended increases in percentage terms and did not provide full information on the new prices to customers, but they allowed the carriers to be aware of each other’s pricing intentions and made it possible for them to coordinate their behaviour. The parties ultimately agreed to commitments to address the European Commission’s concerns.

Exploitation of market power

Another concern is that AI can facilitate exploitation of market power or foreclosure of competitors. This can happen through a merger or an exclusive cooperation agreement resulting in the combination of a large and unique set of big data or control of other essential inputs required for AI models; or it can happen where a dominant company’s use of such a large and unique set of data or other key inputs does not constitute ‘competition on the merits’. Recent examples include:

  • Mergers or partnerships in which the EU Commission or CMA has considered the question of the accumulation of big data or other inputs and its impact on competition.5
  •  Cases of abusive leverage of a dominant position facilitated by AI to discriminate against competitors or customers through ‘self-preferencing’, such as in the ‘Google shopping’ case.6

All these cases demonstrate that the application of traditional antitrust concepts to the use of AI is far from straightforward.

Issues when establishing antitrust liability

EU Commissioner Margrethe Vestager: ‘… barriers to entry everywhere’

Even assuming that an anti-competitive object or effect is established, the question arises whether, and under what circumstances, EU/UK antitrust liability can be established if business decisions are made by self-learning machines rather than by the companies. Liability can only arise from anti-competitive conduct that is committed ‘intentionally’ or ‘negligently’. Defining a benchmark for illegality requires assessing whether any illegal action was anticipated or predetermined (for example through programming instructions) or whether it could have reasonably been foreseen and avoided. In trying to define a benchmark for illegality, the CMA referred to ‘ineffective platform oversight’ in its paper.7 A previous EU Commission note to the OECD on ‘Pricing Algorithms and Collusion’ makes an interesting statement in this regard: ‘An algorithm remains under a firm’s direction and control and therefore the firm is liable for the actions taken by the algorithm.’8 Similarly, the Commission’s 2023 horizontal cooperation guidelines indicate that ‘firms involved in illegal pricing practices cannot avoid liability on the ground that their prices were determined by algorithms’, because just like an employee or consultant, ‘an algorithm remains under the firm’s control, and therefore the firm is liable even if its actions were informed by algorithms’. This sounds like a presumption of direct liability, but it remains to be seen whether such a presumption would find support in statute or the existing case law on liability.

The use of AI can also be considered an aggravating circumstance. For example, concerning the previously mentioned investigations into RPM involving Asus9 and others, the European Commission stated that the effect of these price restrictions may be aggravated due to the use by many online retailers of pricing software that automatically adapts retail prices to those of leading competitors. So, as AI continues to progress and the lessons from previous case law go only so far, how can we determine who is liable for the decisions and actions of AI: the developers, the users and/or the beneficiaries?

The ex ante regulatory approach

The EU and UK legislatures have taken the view that antitrust law enforcement may be too slow to tackle competition issues arising from the use of AI in digital markets and have adopted or proposed ex ante regulation to prohibit certain types of conduct without the need to establish antitrust liability. The EU Digital Services Act (DSA) requires very large online platforms to assess and mitigate systemic risks arising from their services, including those stemming from the use of AI and other algorithmic systems; among other things, independent auditors must assess such compliance at least once a year. The EU Digital Markets Act (DMA) requires designated ‘gatekeepers’ of core platform services to comply with certain dos and don’ts, including an obligation not to engage in self-preferencing and to carry out ranking and related indexing and crawling on their platforms based on transparent, fair and non-discriminatory conditions. The DMA also includes restrictions on the use of end users’ personal data, which may affect the training of gatekeepers’ AI models and make it more difficult for them to develop potentially biased models.

An algorithm remains under the firm’s control, and therefore the firm is liable even if its actions were informed by algorithms

The UK Digital Markets, Competition and Consumers (DMCC) Act was enacted in May 2024. Like the DMA, it purports to address self-preferencing concerns by vesting the CMA with new authority to intervene proactively. Specifically, the CMA has the power to designate tech companies as having ‘strategic market status’, which in turn subjects those companies to additional rules designed to ensure fair dealing, choice and transparency. The CMA has the power to issue fines for non-compliance with those obligations of up to 10 per cent of a company’s worldwide turnover. Notably, this represents a shift in the CMA’s role from ex post to ex ante regulator, giving it a role in digital markets similar to that of other sector-specific UK regulators, such as Ofcom in the electronic communications and online safety sectors.

The US, by contrast, has not followed the ex ante regulation approach. In the absence of legislation proscribing specific conduct, President Biden has issued an executive order emphasising the Federal Trade Commission’s (FTC) ability to use existing authority broadly ‘to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI’. Consistent with that directive, both FTC and Department of Justice (DOJ) officials have explained that they are using the broad antitrust enforcement tools at their disposal to regulate AI, including the prohibition of ‘unfair methods of competition’ under Section 5 of the FTC Act.

International cooperation

Given the cross-border nature of digital markets, the OECD noted in its 2018 report, ‘Going Digital in a Multilateral World’: ‘Governments may need to enhance co-operation across national competent agencies to address competition issues that are increasingly transnational in scope or involve global firms.’10 Against this backdrop, the US, EU and UK competition agencies have issued joint statements to reaffirm their commitment to cooperating in this area, including through participation in high-level meetings as well as regular staff discussions. The fact that US, EU and UK antitrust officials will have an official forum in which to meet regularly, discuss policy and exchange views can be expected to influence how they approach AI enforcement in the future, including in relation to AI foundation models.

Despite the differences in regulatory approach noted above, regulators are focused on addressing similar competitive concerns. Indeed, in both their public statements and investigations to date, regulators globally – including in the US, EU and UK – have sought to address similar worries relating to AI markets and the competitive dynamics that characterise them.

Shared regulatory concerns

The UK’s CMA, for its part, has rolled out a series of publications documenting its comprehensive study on the antitrust risk arising from the ‘increased presence of the largest and most established technology firms across multiple levels of the foundation models value chain’ which ‘is happening both through direct vertical integration but also through partnerships and investments’.11 According to the CMA, ‘some of these firms have strong positions in one or more critical inputs for upstream model development, while also controlling key access points or routes to market for downstream deployment. That creates a particular risk that they could leverage power up and down the value chain.’ The CMA identified three key competition risks arising from the expanding use of foundation models (FMs) and the growing interconnection of the FM value chain: (1) firms that control critical FM inputs restricting access to them; (2) powerful incumbents exploiting their consumer-facing positions in FM markets to curtail competition; and (3) cooperation between existing players further reinforcing their market power through the FM value chain.

In this same vein, the EU Commission has identified similar concerns in both public statements and the inquiries it is pursuing. In a February 2024 speech, EU Competition Commissioner Margrethe Vestager said: ‘Large Language Models depend on huge amounts of data, they depend on cloud space, and they depend on chips. There are barriers to entry everywhere. Add to this the fact that the tech giants have the resources to acquire the best and brightest talent.’12 The European Commission is currently undertaking a consultation in which it has issued an open call for stakeholder contributions on competition in generative AI. In its call for contributions, the Commission explained that it ‘has become clear in the past that digital markets can be fast moving and innovative, but they may also present certain characteristics (network effects, lack of multi-homing, “tipping”), which can result in entrenched market positions and potential harmful competition behaviour that is difficult to address afterwards.’

Meanwhile, in the US, the FTC and DOJ have similarly expressed concern regarding control of ‘essential’ AI inputs and the need to guard against ‘bottlenecks’ across the AI stack. 

FTC Chair Lina Khan: ‘…guard against bottlenecks achieved through illegal tactics’

At a February 2024 tech summit on AI hosted by the FTC, Chair Lina Khan explained:  ‘History shows that firms that capture control over key inputs or distribution channels can use their power to exploit those bottlenecks, extort customers, and maintain their monopolies. The role of antitrust is to guard against bottlenecks achieved through illegal tactics and ensure dominant firms aren’t unlawfully abusing their monopoly power to block innovation and competition.’ FTC and DOJ officials have explained that, as AI offerings depend on a set of necessary inputs, such control can be used to protect existing market power and leveraged for competitive control of related markets. The FTC and DOJ have identified key FM inputs as including the underlying data needed to train FMs, labour expertise and access to computational resources such as cloud infrastructure and GPUs (graphics processing units).  Beyond access to inputs and markets, the FTC and DOJ have also expressed concern about incumbents’ bundling or tying generative AI offerings with existing core products (such as cloud services or software) and about using certain sensitive personal data to train FMs.

To help ensure that interventions are effective, regulators are actively building their technical expertise relating to AI, including through conducting market inquiries, creating teams specialising in AI-related sectors and hiring technical experts such as data scientists.  While information asymmetry between companies and regulators still exists, the gap appears to be narrowing.

Regulators thus remain focused on improved market outcomes and on ensuring that markets remain fair and open, with a diversity of competitors and consumer offerings, in the AI space. And they are scrutinising control of both inputs and outputs (or market access points) across all levels of the AI stack. They have explained that observations of market outcomes indicate where greater intervention may be necessary, and that such intervention will be more proactive in AI-related segments in order to avoid the less desirable alternative of waiting to see what happens and attempting to restore competition after markets have tipped.

The CMA approach to AI risks

Providing further insight into its enforcement priorities, and anticipating the new powers it has since received with the passage of the DMCC, the CMA has more recently published a document updating its approach to AI regulation, both broadly and as it relates to FM development specifically. In its AI strategic update, the CMA said of AI competition risks: ‘Taking a broader view of AI systems, firms’ misuse of AI and other algorithmic systems, whether intentionally or not, can create risks to competition often by exacerbating or taking greater advantage of existing problems and weaknesses in markets.’13 The CMA pointed to three specific examples:

  • ‘AI systems that underpin recommendations or affect what choices customers are shown and how they are presented’ and thus have the potential to ‘distort competition by giving undue prominence to choices that benefit the platform at the expense of options that may be objectively better for customers.’
  • Firms may use AI systems to ‘assist in setting prices in a way which could facilitate collusion and sustain higher prices’.
  • Firms may use AI systems ‘to personalise offers to customers’, which could potentially allow incumbent firms to ‘analyse which customers are likely to switch, and use personalised offers, selectively targeting those customers most at risk of switching, or who are otherwise crucial to a new competitor, which could make it easier for such firms to exclude entrants.’
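The third of these examples describes, in effect, a churn model coupled to a selective discounting rule. The sketch below is a hypothetical illustration of that mechanism; the customer data, churn scores, threshold and discount are all invented for the example.

```python
# Hypothetical sketch of selectively targeting customers most at risk of
# switching. The data, model output and threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    churn_score: float     # assumed output of a predictive model, between 0 and 1
    standard_price: float

SWITCH_RISK_THRESHOLD = 0.7
RETENTION_DISCOUNT = 0.30  # 30% off, offered only to likely switchers

def personalised_offer(c: Customer) -> float:
    """Discount only the customers the model flags as likely to switch."""
    if c.churn_score >= SWITCH_RISK_THRESHOLD:
        return c.standard_price * (1 - RETENTION_DISCOUNT)
    return c.standard_price

customers = [
    Customer("loyal_user", 0.10, 20.0),
    Customer("likely_switcher", 0.85, 20.0),
]
for c in customers:
    print(c.name, "->", personalised_offer(c))
```

The concern the CMA describes is precisely this selectivity: the incumbent never needs to lower prices across the board, only for the customers an entrant would most need to win.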

As to competition risks around FMs specifically, the CMA stated that its ‘strongest concerns arise from the fact that a small number of the largest incumbent technology firms, with existing power in the most important digital markets, could profoundly shape the development of AI-related markets to the detriment of fair, open and effective competition.’  The CMA went on to describe specific concerns mirroring those in its April 2024 AI foundation model update paper14 relating to incumbents’ control over critical inputs for FM development and key market access points for FM services. According to this new document, the CMA’s approach to AI risks (including the AI FM-related risks identified above) will be guided by the following six principles:

  • Access: ongoing ready access to inputs.
  • Diversity: sustained diversity of business models and model types.
  • Choice: sufficient choice for businesses and consumers so they can decide how to use FMs.
  • Fair dealing: no anti-competitive conduct.
  • Transparency: consumers and businesses have the right information about the risks and limitations of FMs.
  • Accountability: FM developers and deployers are accountable for FM outputs.

The document acknowledged the additional powers that the CMA has since received with the passage of the DMCC and embraced the CMA’s new role as an ex ante regulator, stating: ‘We are ready to use these new powers to raise standards in the market and, if necessary, to tackle firms that do not play by the rules in AI-related markets through enforcement action.’ 

The CMA noted that the new authority granted by the DMCC would give it ‘the ability to respond quickly and flexibly to the often rapid developments in these markets, including through setting targeted conduct requirements’ for firms designated as having strategic market status (SMS).  The CMA likewise acknowledged its greater power under the DMCC to observe and test SMS firms’ algorithms, which is essential to ensuring it can effectively address risks posed by their AI systems. The CMA will continue to focus on market outcomes in determining where and how to intervene using its new authority under the DMCC.  And the intrusiveness of such interventions will depend on how far the competitive harms it is seeking to address have advanced.

The risk of regulatory overreach

While some industry participants have noted the potential for regulatory overreach this presents, the CMA has explained that it is hard to test the counterfactual and that, in many instances, waiting to see what happens and then attempting to unwind competitive harms is a worse outcome for everyone involved. Markets have not yet tipped, and there is currently a healthy diversity of models for offering AI products and services. But the CMA believes that intervention will be required to ensure that this remains the case.

To identify potential bad outcomes of the kind that the CMA will look to address, companies should use well-functioning markets as a guidepost. In such markets, innovative products and services are offered to consumers, and those consumers are not broadly unhappy with how their data is collected or with how difficult it is to switch between competing services. The CMA will also rely on other regulators with sector-specific expertise to understand how those markets are operating. For example, it will turn for guidance to Ofcom’s market reports on the state of the telecoms sector, which many in the industry view as authoritative.

As it has done historically, the CMA will also continue to coordinate with sector-specific UK regulators where AI issues fall within their overlapping remits. For example, it is coordinating with Ofcom, which is considering AI-related issues in its work on child online safety and harmful content online and which has developed ‘codes of practice’ providing guidance to firms utilising AI.

In terms of next steps, the CMA laid out the following work programme for the remainder of 2024:

  • A forward-looking assessment of the potential impact of FMs on how competition works in the provision of cloud services as part of the ongoing cloud market investigation.
  • Monitoring current and emerging partnerships closely, especially where they relate to important inputs and involve firms with strong positions in their respective markets and FMs with leading capabilities.
  • Stepping up the use of merger control to examine whether such arrangements fall within the current rules and, if so, whether they give rise to competition concerns.
  • Continuing the dedicated programme of work to consider the impact of FMs on markets throughout 2024, including: (1) a forthcoming paper on AI accelerator chips, which will consider their role in the FM value chain; (2) publishing joint research with the Digital Regulation Cooperation Forum on consumers’ understanding and use of FM services; and (3) publishing a joint statement with the Information Commissioner’s Office on the interaction between competition, consumer protection and data protection in FMs.

In the meantime, companies and their antitrust counsel must consider whether the current development and use of AI may create potential exposure to antitrust scrutiny later.

Practical considerations for antitrust counsel

AI gives rise to complexities which will inevitably have an impact on how antitrust counsel should advise businesses developing and/or applying AI. Counsel should understand why and how businesses intend to either use AI or participate in one or more segments contributing to AI development, particularly:

  • How AI will aid business processes
  • What information will be processed and exchanged with other parties
  • Which other parties will participate in the AI ‘network’ and which will be excluded
  • Whether the AI network will be public or private and, if private, who are the ‘nodes’ of the AI network
  • What is the ‘relevant market’, what is the position of the business in such a market, and what, if any, control does the company have over access to key inputs or market access points. Some of these elements may also require economic input.

Counsel should then assess the potential antitrust theories of harm (for example, RPM, hub and spoke or unilateral foreclosure) and try to disentangle the pro-competitive effects from the anti-competitive effects. Compliance safeguards could include changes to the AI structure, use or policies. This will depend on the circumstances of each case, including whether the potential competitive concerns relate to the use of AI tools or to the functioning of the AI sector itself. For example, in a big data pooling arrangement among car manufacturers, companies deploying AI tools could send their data to a platform and get back aggregate data with no indication of which company it comes from. That would still give them information that would help build better cars or make existing ones run better – without undermining competition. Or companies might limit the type of information they share. So, car companies might decide not to share information that would tell rivals too much about their technology. Online shops might share data without saying when products were bought, or for how much. And companies also need to be sure that pooling data doesn’t become a way to shut rivals out of the market.
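A minimal sketch of the aggregation safeguard described above follows, assuming a neutral platform that returns only industry-wide statistics and refuses to aggregate when too few firms contribute, to limit re-identification. The field names, figures and minimum-contributor rule are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of an aggregation safeguard for a data pooling arrangement:
# participants submit data to a platform that returns only aggregate statistics,
# with no indication of which company any figure came from. The field names and
# minimum-contributor rule are illustrative assumptions.

from statistics import mean
from typing import Optional

MIN_CONTRIBUTORS = 3  # refuse to aggregate if too few firms submit

def pooled_average(submissions: dict) -> Optional[float]:
    """Return an industry-wide average, discarding the company labels."""
    if len(submissions) < MIN_CONTRIBUTORS:
        return None  # with too few contributors, the aggregate could be attributed to one firm
    all_values = [v for values in submissions.values() for v in values]
    return mean(all_values)

# Each carmaker submits, say, observed component failure rates per 1,000 vehicles.
submissions = {
    "carmaker_a": [4.2, 3.9],
    "carmaker_b": [5.1],
    "carmaker_c": [4.7, 4.4],
}
print("industry average:", pooled_average(submissions))
# Participants learn the market-wide benchmark but not any rival's own figures.
```

More cautious variants would also strip fields such as purchase dates or prices before submission, consistent with the limits on shared information described above.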

Companies operating in sectors contributing to the development of AI, in deciding whether to restrict access to market entry points or to key inputs – including data, compute resources, and engineering talent – should closely examine the potential competitive impact and the justifications underlying their decisions, particularly given the stated intention of regulators globally to scrutinise how control of such access may give rise to competitive concern.

Finally, as both AI technology and the competition regulations governing it are rapidly evolving, counsel should monitor the use and development of AI and applicable regulation and reassess the initial risk analysis whenever there are significant changes or advances in technology.


Francesco Liberatore

Francesco Liberatore is a partner at Squire Patton Boggs based in London and Brussels. He is a specialist in competition law and its application to AI and other digital market issues.

Marty Mackowski

Marty Mackowski is a partner in Squire Patton Boggs’ competition-antitrust practice based in Washington, DC. He specialises in the application of antitrust and competition law to AI and other technology sector issues.

1 Case 40465, EU Commission Decision, Asus. Similar cases were brought against other manufacturers of consumer electronics including Philips (Case AT.40181), Pioneer (Case AT.40182) and Denon and Marantz (Case AT.40469).

2 Case 50223, CMA Decision, Trod Ltd and GB Eye Ltd.

3 Case 50565-2, CMA Decision, Casio.

4 CMA (2021). Algorithms: How they can reduce competition and harm consumers, 19 January. bit.ly/4c4sIYX

5 For example, Microsoft/Mistral AI partnership CMA merger inquiry; Microsoft/Activision CMA merger inquiry; Case COMP/M.7217, Facebook/WhatsApp; Case COMP/M.6314, Telefónica UK/Vodafone UK/Everything Everywhere/JV; Case COMP/M.4731, DoubleClick; Case COMP/M.8124, Microsoft/LinkedIn; Case COMP/M.4726, Thomson/Reuters.

6 For example, Case AT.39740, EU Commission Decision, Google Search (Shopping).

7 See note 4.

8 OECD (2023). Algorithmic competition – Note by the European Union, 14 June. bit.ly/3X7OSFC

9 See note 1.

10 OECD (2018). Going Digital in a Multilateral World, 11 May. bit.ly/3RAkzE3

11 CMA (2024). AI Foundation Models: Update paper, 11 April. bit.ly/3KqB3ue

12 Vestager M (2024). Making artificial intelligence available to all - how to avoid big tech's monopoly on AI? Speech at the 22nd International Conference on Competition, Berlin, 29 February. bit.ly/4e938E8

13 CMA (2024). CMA AI strategic update, 29 April. bit.ly/3yFpI6U

14 See note 11.