
Promoting innovation, mitigating risk: AI policy around the world

As the EU announces the world’s first AI Act and generative AI dominates the headlines, RUSSELL SEEKINS reviews how policy is taking shape among leading economies

Many jurisdictions, most notably the EU, have been progressively considering the policy implications of AI development over recent years. Most had favoured a ‘wait and see’ approach alongside a statement of principles designed to guide developers and reassure the public. It was the release of ChatGPT in November 2022 that galvanised the debate. Its power and accessibility caught the public imagination and adoption spiralled, reaching 100 million users within two months of launch.1 Discussions about AI regulation broadly, and AI safety in particular, came into sharper focus in light of growing public concern (see figure 1).

Figure 1.  Feelings towards AI in the US. Source: Pew Research Center

Categorising risk

Dragoş Tudorache is the European Parliament’s co-rapporteur for the EU’s AI Act. In his opening remarks during a session at the IIC annual conference he framed the EU’s approach to AI governance in terms of the ‘protection of society and citizens’. The aim, he said, was to categorise risk and allocate mitigation without hindering innovation. This contrasts with the US, where the approach is driven by industry and its ecosystem; in his view it relies on the actors knowing what is needed, with the state intervening only as a last resort. In China, meanwhile, the main concern is the interest of the state and control of citizens. Tudorache went on to observe that an approach which brought together the thriving ecosystem of the US with the protections of the EU would, eventually, be a desirable outcome.

A tendency towards cooperation

A number of issues come together to create an environment that favours international cooperation. The first is the asymmetry of the actors. The US is the AI superpower, with a 38 per cent share of AI start-ups globally in 2020, followed by China with 25 per cent.2 China has a stated aim of becoming the leading AI power by 2030 and has filed over half of the world’s AI-related patents. Meanwhile, investment in US AI technology firms was $93 billion in 2021 and is expected to rise to over $300 billion by 2026 – far ahead of China (where access to US technology, especially AI chip technology, has been progressively curtailed).3 Together, these two economies dwarf the contributions of any other country. The EU, meanwhile, has the ambition of leading the world in AI policy from a relative position of consumption strength and industrial weakness. (The UK is Europe’s leading country for AI research and technology, a distant third behind the US and China, and is not in the European Union.4) This means that, to be effective, the EU’s new law will apply to all services supplied into its territory. With China taking a similar approach and the US likely to follow, extra-territorial applicability is expected to become the norm. This creates the potential for what APEC (the Asia-Pacific Economic Cooperation, a forum of 21 economies in the Asia Pacific region) has described as a ‘noodle bowl’ of conflicting and duplicate rules that could hinder the innovation, trade and economic growth that everyone claims to want. As was seen in the Bletchley Declaration (see the section on the United Kingdom below), the move towards agreement on principles and outcomes, if not the rules themselves, is gaining momentum.

AI definition

A frequent observation made by sceptics of AI regulation is that there is no agreed definition of it. The Organisation for Economic Co-operation and Development (OECD) has been working on a definition for many years and has acknowledged the difficulty of reaching consensus among experts. OECD member countries recently approved the latest definition:

‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’5

It is this definition that is expected to be adopted by the European Union.

Policies around the world

Almost every significant economy now has an AI statement of principles or guidelines, with varying degrees of detail. Only the EU has, so far, introduced an overarching AI-specific law. In this article, we will consider the policy positions of the countries and blocs broadly considered to be the most relevant in terms of policy development – the EU, US, UK, China and APEC.

The European Union

European policymaking first took shape in 2018 with the setting up of the High-Level Expert Group on AI (HLEG) and the subsequent issue of guidelines that ‘trustworthy AI’ should be lawful, ethical and robust. This was followed by a multi-stakeholder forum, steered by the HLEG and tasked with developing policy recommendations. The current AI Act was proposed in 2021, followed by the AI Liability Directive in 2022.6 In the words of co-rapporteur Dragoş Tudorache, the time for guidelines and principles ‘is over’. In keeping with the EU’s aim to be a global leader in AI policies, ‘firm laws are required’.

The AI Act

The EU proposals take a risk-based approach designed to distinguish between different levels of risk posed to fundamental rights and the safety of users. Different obligations arise in each risk category. The four categories proposed are:

  • Unacceptable risk. The system is prohibited. E.g. social scoring
  • High risk. The system is permitted subject to compliance with AI requirements and ex ante conformity assessment. E.g. recruitment
  • Transparency risk. The system is permitted subject to information and transparency obligations. E.g. chatbots
  • Minimal or no risk. The system is permitted with no restrictions.

Discussions to finalise the text of the Act were declared complete on 8 December 2023. Penalties have been agreed at up to 7 per cent of global turnover or €35 million, whichever is higher. The definition of a high-risk system is relatively broad in scope, covering systems with the potential to harm health, safety, fundamental rights, the environment, democracy and the rule of law. Agreements have been reached on obligations for providers of such systems and on conformity assessments, foundation models and generative AI, and testing. As well as social scoring, the legislation bans AI systems that ‘manipulate human behaviour to circumvent their free will’ or which exploit vulnerable groups. There are strict constraints on the use of facial recognition technology, with exceptions for defined law enforcement purposes. Also banned are biometric classification systems that use sensitive characteristics such as political and religious beliefs, race or sexual orientation.

Dragoş Tudorache at the IIC annual conference, Cologne, 2023

The rules are expected to be in place by April next year. It is accepted that there will need to be an implementation period of between one and two years, both to enable the establishment and staffing of regulatory bodies at national and EU level and to give industry time to prepare. During this period the European Commission will consult with the companies most affected and invite them to become ‘early adopters’ of the legislation, with the Act coming into force in 2025.7 In his comments at the IIC’s annual conference, co-rapporteur Dragoş Tudorache said that the vast majority of AI systems and tools would be unaffected by the new rules and would continue as before.

The United States

The Biden administration has a stated aim of leading the development of AI policy, framed in terms of the United States’ strategic competition with China. The government has already limited China’s access to AI chips and the policy objective is to maintain technological leadership.

President Biden is keen to find a balance that mitigates risks without stifling innovation. The administration is known to take the view that safety and innovation are related issues and that companies need clear guardrails in order to be confident in the development and adoption of AI systems and services. Other observers have pointed out that companies don’t want to be left to wholly self-regulate, as they were with social media.

The National AI Initiative Act of 2020 established a federal advisory committee to provide coordination across agencies for the purpose of AI research. The aim was primarily to ensure US leadership in artificial intelligence through investment and expanded access to ‘resources and tools that fuel AI research and development’.8 With contributions from major companies, the White House produced a Blueprint for an AI Bill of Rights that identified five principles to guide the ‘design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence’.9

The administration is known to take the view that safety and innovation are related issues and that companies need clear guardrails

On 30 October 2023 President Biden issued an executive order on ‘safe, secure, and trustworthy artificial intelligence’.10 Among other actions, the order requires developers of the most powerful systems to share safety test results with the US government and directs the National Institute of Standards and Technology to set standards for testing before public release. In keeping with the goal of the US government to be a global leader in AI development, the order also sets out principles to promote innovation and competition and commits to engage with other governments to establish ‘robust international frameworks’.

The hope is that the executive order may form the basis for a future AI Act, but this is thought unlikely to happen in the foreseeable future, principally because of the diverse views in Congress. The spectrum of opinions ranges from, at one end, those demanding urgent bipartisan legislation to, at the other, the view that existing regulation is sufficient and nothing needs to be done until the dangers are clearer. It is worth noting that many states have implemented specific regulations of their own, including Massachusetts (requiring disclosure of algorithms, in response to ChatGPT) and New York (prohibiting the use of automated tools that could result in bias).

The United Kingdom

The UK government has a stated ‘pro-innovation’ approach to AI regulation. The framework is based on the identification of risks and ethical challenges, comparable to the EU’s. However, the focus is on the impacts of AI rather than on the technology itself. For example, a chatbot offering advice on fashion choices presents a very different level of risk from one involved in dispensing medicines. The five principles underpinning the framework have a familiar ring:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress.

The principles have been issued to sector regulators who are expected to take a proportionate approach to implementation, including devising appropriate requirements and any necessary assurance and technical standards. Meanwhile the government will have responsibility for monitoring, assessment and feedback, cross-sectoral risk assessment and interoperability with international frameworks. There is no plan for a separate regulatory body nor, for the time being, any intention of introducing AI-specific regulation – the government considers existing rules to be sufficient.

International summit

The ‘world’s first AI safety summit’ was held at Bletchley Park in England in early November 2023. It was attended by the three key players – the US, China and the EU – and in total there were representatives from 29 countries as well as a number of high profile industry figures. The Bletchley Declaration committed signatories to broad principles of focusing on development that is human-centric and safe and that recognises the specific risks of ‘frontier AI’, especially in cybersecurity, biotechnology and disinformation.11 Countries agreed to support a report on frontier AI led by Professor Yoshua Bengio, to be published ahead of the next summit in France in a year’s time. (There will also be an interim ‘virtual mini-summit’ hosted by South Korea.) A further agreement on AI safety testing was signed by 10 countries, including the UK, the US and major European states, as well as by leading technology companies.

China

China is broadly recognised as having two main goals for its AI policy. First, it is concerned with stability, especially political and social stability. This means that concerns for national security, public opinion and ‘social mobilisation’ take priority over individual rights. Second, it wants to be the major AI power in competition with the United States, so enabling trade and innovation is critical.

Facial recognition technology in operation in Shenzhen, China

The Chinese government has taken an iterative and incremental approach to regulation building, rather than seeking to establish an all-encompassing AI Act. It has implemented a set of AI-related laws, regulations and guidance – 30 in total since 2016, five in 2023 alone. The aim is to regulate specifically, picking off significant issues as they arise. This is a model which suits a country in which laws can be passed, and added to, at speed. The code of ethics for the ‘new generation of artificial intelligence’ was published in 2021, regulations on algorithmic recommendations followed in 2022, and regulations on the ‘administration of deep synthesis of internet information services’, a response to deep fakes, took effect in January 2023. In October 2023 a provision designed to regulate the datasets used for training AI models came into force. It includes a requirement to pass a security assessment and a provision that the dataset contain no more than five per cent illegal and harmful content, the definition of which is extensive and includes data that might damage China’s image or undermine national unity and social cohesion.

The most significant regulation is the ‘administrative measures for generative artificial intelligence services’ that took effect on 15 August 2023, just over one month after the law was passed. This covers only generative AI provided to the Chinese public, including anything generated outside China. The law gives the Chinese authorities the power to classify AI systems according to risk. The categories have not been released, but it’s thought they will be similar to those proposed by the EU. Service providers will have to discharge a series of responsibilities, ensuring that AI-generated content adheres to all existing Chinese laws, including censorship laws, and that it is watermarked to ensure that its origin is clear. Any system or service that has ‘public opinion attributes’ or ‘social mobilisation capabilities’ will need to be security assessed and registered with the authorities. The definition is very broad and can include blogs, chatrooms, livestreamed video or virtually any channel that enables public expression. In a case in 2019, the courts in China recognised that content produced by generative AI is copyrightable.12

Asia

APEC has said that it is seeking to collaborate as much as possible. In November 2023 its business advisory council stated that international cooperation was necessary to avoid the adoption of conflicting approaches to governance and that interoperability and consensus were preferable to creating a ‘noodle bowl of regulation and data policies’. The priority should be on open trade and the free flow of data. It suggested that it was in the interest of APEC countries to participate in and contribute to efforts to create international frameworks. Observers point to the challenge of reconciling the diverse sets of rules and regulations that exist currently across Asia. A consensus is most likely to emerge around what best contributes to the promotion of trade and economic growth.

With special thanks to Squire Patton Boggs for their contributions to this article.


1 Hu K (2023). ChatGPT sets record for fastest-growing user base. Reuters, 2 February. reut.rs/47MKM89

2 Xiang N (2023). Chinese AI is not the threat the U.S. thinks it is. Nikkei Asia, 10 November. s.nikkei.com/3GzN66v

3 International Data Corporation (2022). China’s Artificial Intelligence Market Will Exceed US$26.7 Billion by 2026. 4 October. bit.ly/3tecICX

4 Keary T (2023). Top 10 Countries Leading in AI Research & Technology in 2023. Techopedia, 16 November. bit.ly/3Tkz1BM

5 Russell S, Perset K and Grobelnik M (2023). Updates to the OECD’s definition of an AI system explained. OECD Policy Observatory, 29 November. bit.ly/41cmMZP

6 The AI Liability Directive establishes a fault-based liability regime and compensation rules for damage caused by AI systems. The legislative procedure is currently on hold pending adoption of the AI Act, to which it is closely linked.

7 This is similar to the process followed for the Digital Services and Digital Markets Acts.

8 ‘The Biden Administration Launches the National Artificial Intelligence Research Resource Task Force.’ White House press release, 10 June 2021. bit.ly/3RjqfkL

9 The principles are: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation and ‘human alternatives, consideration and fallback’. Blueprint for an AI Bill of Rights. bit.ly/489kaOB

10 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. White House briefing, 30 October 2023. bit.ly/47SlaXM

11 ‘Highly capable general-purpose AI models, including foundation models, that could possess dangerous capabilities sufficient to pose severe risks to public safety’.

12 Guadamuz A (2020). Chinese court rules that AI article has copyright. TechnoLlama, 19 January. bit.ly/41fvhTS