The rise and rise of online age assurance

As the debate over the protection of children online intensifies, ROBERT MACDOUGALL, NICK EVANS and GIULIA DE BERNARDI assess how age assurance measures are evolving in the UK and EU


Age assurance has rapidly ascended the European regulatory agenda, emerging as a fundamental measure to protect children on the internet. Many regulators now require companies, such as social media platforms, online marketplaces and online gaming providers, to implement age assurance methods that are effective and appropriate. Platforms must also balance age assurance with users’ rights, including their rights to privacy and freedom of speech.

Building on the findings of our recent report and webinar,1 this article assesses the current European age assurance regulatory landscape, comparing and contrasting approaches in a number of different countries, before concluding with the key themes currently shaping age assurance regulation across Europe.

The regulatory background

Age assurance brings into play several different regulatory requirements, with regulations relevant to online safety, audiovisual media and data protection of particular importance.

Online safety

In the UK, the Online Safety Act (OSA) mandates highly effective age assurance for services that host harmful content, ranging from pornography to self-harm and eating disorder material. Ofcom’s guidance provides a non-exhaustive list of seven methods considered capable of being highly effective.2 These include credit card checks, photo-ID matching, facial age estimation and digital identity services. Ofcom also lists three approaches it considers incapable of meeting age assurance duties, such as relying on users’ self-declaration (e.g. checking a box to confirm being over 18).

In the EU, the Digital Services Act (DSA) requires platforms of all sizes to take steps to protect minors online. The European Commission’s Guidelines on the Protection of Minors Online3 detail expectations for when and how age assurance should be used to comply with DSA duties. According to the guidelines, age verification is considered suitable for high-risk services (e.g. platforms hosting adult content or designed for an 18+ audience), while age estimation is appropriate where medium risks are identified (e.g. ensuring users are over 13 for a social media platform that, based on its risk assessment, has restricted its service to 13+ users). Platforms operating in the EU are expected to offer at least two age assurance methods and a redress mechanism for users to challenge outcomes. Like Ofcom, the Commission specifies that self-declaration is not considered compliant.

Additionally, the Commission is working towards a harmonised age assurance solution. A white-label age verification app, currently being piloted in six member states, and forthcoming EU digital identity (EUDI) wallets will enable users to share proof-of-age tokens without disclosing any other personal information. Once fully rolled out by 2026, these tools are intended to set the ‘gold standard’ for privacy-preserving age verification across the EU.

A number of member states have also called for the establishment of a digital age of majority at EU level, as evidenced by the Jutland Declaration signed in October 2025.4 To this end, an expert panel has been commissioned to assess whether and, if so, how the EU should pursue this path and require age verification on all social media platforms.

Audiovisual media services

The transposition of the Audiovisual Media Services Directive (AVMSD) at EU member state level requires regulated services – including video sharing platforms (VSPs) – to take appropriate and proportionate measures to protect minors from content that may impair their physical, mental or moral development. Such measures may include age assurance. In practice, however, the specific obligations around age assurance and their enforcement vary significantly, reflecting differences in how individual member states have transposed AVMSD requirements into national law.

Data protection

Many age assurance methods involve processing some form of personal data, which may also include special category biometric data. Given the sensitivity of the data in question, a number of data protection authorities have published specific age assurance guidance. Moreover, under the General Data Protection Regulation, member states have set minimum ages to consent to personal data processing, ranging from 13 to 16 depending on the jurisdiction. Consequently, services are required to make ‘reasonable efforts’ to verify that a parent or guardian has provided consent on behalf of a child whose age falls below the minimum age of consent.

In summary, drawing a neat line between online safety, audiovisual media services and data protection regulations is not straightforward, with regimes often overlapping. VSPs, for instance, may be subject to age assurance requirements arising from the DSA, member state transposition of the AVMSD and other country-specific laws. In addition, different national regulators prioritise different criteria when assessing age assurance approaches.

Comparing and contrasting approaches

In our report, we reviewed the regulatory environment at UK and EU level, along with national studies of Belgium, France, Germany, Ireland, Italy, the Netherlands and Spain.5 We found that, to be considered effective under the DSA guidelines and most official guidance at national level, age assurance methods should be accurate, robust, reliable and non-discriminatory. In some jurisdictions, regulators expect methods to be non-intrusive and accessible, with clear complaints mechanisms provided for users. More stringent requirements may apply depending on a service’s content and risk level.6

Common criteria across jurisdictions

Accuracy: Included at the EU level and in national age assurance guidance in the UK, France, Germany, Italy, Ireland, the Netherlands and Spain. This reflects expectations that age assurance must be technically sound and assessed against appropriate metrics.

Robustness: According to guidance at the EU level and national guidance in the UK, France, Germany, Italy, Ireland, the Netherlands and Spain, methods should be duly tested and platforms should take appropriate steps to mitigate potential circumvention.

Reliability: Emphasised at the EU level and in national guidance in the UK, France, Germany, Italy and Ireland. Reliability commonly refers to the degree to which the age output from an age assurance system is reproducible and derived from trustworthy evidence.

Non-discrimination: According to the EU guidelines and national guidance in the UK, France, Germany, Italy, the Netherlands and Spain, regulated services should ensure that chosen methods do not discriminate against users on the basis of characteristics such as disability, language, ethnicity, gender, religion or minority background.

Non-intrusiveness: Emphasised at the EU level and in national guidance in the UK, France, Ireland, Italy and Spain. To meet this criterion, platforms are expected to only process data that is strictly necessary for the age assurance purpose and adopt the least intrusive method possible. The data used to assure age should not be stored or used for other purposes.

Examples of differences across jurisdictions

The French and Italian regimes go further than other countries reviewed, having implemented binding technical standards for age assurance to access services disseminating pornographic content. The standards require the entity verifying age to be legally and technically independent of the content provider (‘separation of duties’). Additionally, methods must maintain double anonymity, meaning ‘double-blind’ methods in which the platform does not learn the identity of the user and the age verifier does not learn which service the user is trying to access. The French and Italian standards also require session-based verification, meaning that users’ age must be verified at each session of service use.
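To make the double-anonymity model concrete, the sketch below shows one hypothetical way a session-bound proof-of-age token could flow between the parties. It is a minimal illustration in Python, not the French or Italian reference design: the party names, token format and use of Ed25519 signatures are all our own assumptions.

```python
# Hypothetical sketch of a double-blind, session-based age check.
# The party names, token format and Ed25519 scheme are illustrative
# assumptions, not any regulator's reference design.
import json
import secrets
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class IndependentAgeVerifier:
    """Checks the user's age (e.g. against an ID document) but is never
    told which service the user wants to access."""

    def __init__(self):
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()

    def issue_token(self, session_nonce, user_is_over_18):
        if not user_is_over_18:
            return None
        # The signed claim carries only an over-18 attestation bound to an
        # opaque one-time nonce: no identity, no destination service.
        payload = json.dumps(
            {"over_18": True, "nonce": session_nonce, "iat": int(time.time())},
            sort_keys=True,
        ).encode()
        return payload + b"." + self._key.sign(payload)


class Platform:
    """Learns only that the holder of a session nonce is over 18,
    never who the user is."""

    def __init__(self, verifier_public_key):
        self._verifier_key = verifier_public_key
        self._pending = set()

    def start_session(self):
        # A fresh nonce per visit means age is re-verified each session.
        nonce = secrets.token_urlsafe(16)
        self._pending.add(nonce)
        return nonce

    def admit(self, token):
        payload, _, signature = token.partition(b".")
        try:
            self._verifier_key.verify(signature, payload)
        except InvalidSignature:
            return False
        claim = json.loads(payload)
        if not claim.get("over_18") or claim.get("nonce") not in self._pending:
            return False
        self._pending.remove(claim["nonce"])  # single use: no replay
        return True


# The user carries the nonce to the verifier and the token back again.
verifier = IndependentAgeVerifier()
platform = Platform(verifier.public_key)
nonce = platform.start_session()
token = verifier.issue_token(nonce, user_is_over_18=True)
assert token is not None and platform.admit(token)
```

Because the verifier never sees a service identifier and the platform never sees identity attributes, neither party alone can link a person to their browsing, which is the core of the double-anonymity requirement.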

Another notable difference is that a system that uses biometric data to identify or authenticate persons is deemed overly privacy-intrusive in Italy. This means that age assurance through photo-ID matching is considered to be a non-compliant method in Italy for services disseminating pornographic content, whilst it is considered capable of being highly effective in the UK. 

Case study: OpenBoard

While several methods may be capable of delivering effective age assurance, their appropriateness will vary depending on service-specific risks, applicable national requirements and how the method is implemented. The following case study illustrates factors that a fictional online company should consider in designing an age assurance process that meets different regulatory requirements.

OpenBoard is an online forum hosting discussions across a range of interest-based communities. It is based in the UK but is also popular in Italy and the Netherlands. While hosting adult material is not its primary purpose, OpenBoard’s terms and conditions do not prohibit adult-only content from being uploaded, and it therefore restricts access to users aged 18 and over. As a result, it has determined that it should implement effective age assurance to prevent minors from accessing the platform.

Figure 1: Age verification process for OpenBoard

In the UK, OpenBoard deploys facial age estimation and photo-ID matching, two methods that Ofcom considers capable of being highly effective. In implementing them, OpenBoard needs to ensure that its methods are technically accurate, robust, reliable and fair.

Given that OpenBoard operates in EU member states, it also needs to consider whether its approach aligns with the DSA guidelines. For example:

  • Age estimation approaches may not be considered appropriate and proportionate safeguards where the terms and conditions for a service specify a minimum age of 18.
  • A redress mechanism should be provided for users to challenge age assurance outcomes.
  • Platforms should offer more than one age assurance method. In OpenBoard’s case, this helps avoid the unintended exclusion of users who lack government IDs.
  • ID-based age verification should be based on anonymised age tokens issued by independent third parties, rather than the platform itself, particularly as it may offer access to adult content.
  • To future-proof its approach, OpenBoard could consider aligning with the technical specifications of the EU age verification app.

The company also operates in Italy, where there are specific requirements. If OpenBoard is disseminating pornographic content, it needs to consider AGCOM’s technical standards for verifying age. The standards note that the processing of biometric data to identify the user during photo-ID matching and the direct collection of IDs by the platform are overly privacy-intrusive and non-compliant. In order to be compliant:

  • The entity verifying age must be a legally and technically independent third party.
  • Methods must maintain double anonymity.
  • Users’ age must be verified at each session of service use.

Key themes of the European regulatory approach

Based on our analysis of the regulatory regimes in the UK and the seven EU member states included in the study, we identify five key themes relevant to age assurance regulation across Europe.

Age assurance should be grounded in risk assessments

In both the UK and EU, safety measures should be proportionate to the level of risk that users may encounter on a service. Platforms must evaluate the likelihood and severity of harm to children arising from their content, features and algorithms – including legal but harmful content. Findings from risk assessments are expected to inform age assurance measures. For instance, high-certainty age verification may be suitable for high-risk services, while age estimation may be sufficient on a service where medium risks are identified. For low-risk services, other safety measures may suffice. 
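As a schematic illustration of this proportionality logic, the short sketch below maps an assessed risk level to candidate measures. The tiers and method labels are our own shorthand for illustration, not terms taken from the DSA guidelines or Ofcom’s codes.

```python
# Illustrative simplification of the risk-based logic described above;
# the tiers and method labels are our own shorthand, not regulatory text.
from enum import Enum


class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


def assurance_approach(risk: Risk) -> list[str]:
    """Map a service's assessed risk level to candidate measures."""
    if risk is Risk.HIGH:
        # e.g. adult content or an 18+ service: high-certainty verification.
        return ["photo_id_matching", "digital_identity_service"]
    if risk is Risk.MEDIUM:
        # e.g. enforcing a 13+ minimum age: estimation may be sufficient.
        return ["facial_age_estimation"]
    # Low risk: other safety measures (defaults, moderation) may suffice.
    return []


print(assurance_approach(Risk.MEDIUM))  # ['facial_age_estimation']
```

In practice, each jurisdiction attaches its own criteria to these tiers, so a mapping like this is a starting point derived from the risk assessment rather than a compliance rule in itself.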

Regulations are generally technology neutral

While self-declaration approaches are broadly rejected as insufficient, regulators are not mandating the use of specific age assurance methods. Rather, they leave services free to choose their preferred solutions, so long as they can demonstrate how these meet regulatory assessment criteria. For instance, in Germany, the Commission for the Protection of Minors in the Media (KJM) runs a certification scheme for age verification systems, with over 100 tools certified as effective under its criteria. Ultimately, platforms remain fully accountable for ensuring that their chosen methods are effective, proportionate and compliant, including when relying on third parties.

Privacy-preserving approaches are a baseline expectation

Beyond compliance with data protection laws, platforms are increasingly expected to prioritise ‘least intrusive’ methods and perform a data protection impact assessment before implementing any age assurance system. Additionally, regulators across Europe have started to explicitly encourage privacy-preserving approaches, such as double-blind methods. To align with privacy expectations, the European Data Protection Board recommends that platforms assure age without fully identifying users or permanently storing outcomes. Looking ahead, the upcoming age verification app and EUDI wallets are expected to establish privacy-preserving approaches as the gold standard for compliance in the EU. In the UK, Ofcom and the Information Commissioner’s Office are expected to publish a joint statement in 2026 to clarify the interplay between the online safety and data protection regulatory regimes, following on from prior joint work.

Age assurance as the foundation of age-appropriate design

At both UK and EU level, platforms are expected to mitigate risks across the entire user journey. Therefore, age assurance may be leveraged beyond account creation, to tailor access to risky content and features based on users’ age, as required under online safety regimes. Relevant changes may include turning off risky features (e.g. geolocation, video autoplay) and configuring private-by-default settings on minors’ accounts. For example, Ofcom is considering requiring platforms to disable certain actions (e.g. sending gifts or recording) on children’s livestreams. Additional changes may be needed for younger age groups (e.g. 13-16). In practice, verifying users’ age is seen as the first step in creating a safer digital experience.
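The sketch below illustrates how assured age might feed age-appropriate defaults of this kind. The feature names follow the examples in the text (geolocation, autoplay, private-by-default, livestream gifts), but the exact bands and settings are our own assumptions rather than any regulator’s prescription.

```python
# Hypothetical sketch of age-appropriate defaults applied once a user's
# age band has been assured; bands and settings are illustrative only.
from dataclasses import dataclass


@dataclass
class AccountDefaults:
    geolocation_on: bool
    video_autoplay_on: bool
    profile_private: bool
    livestream_gifts_on: bool


def defaults_for_age(age: int) -> AccountDefaults:
    """Tailor risky features to the assured age band."""
    if age < 16:
        # Younger teens (e.g. 13-15): risky features off, private by default.
        return AccountDefaults(False, False, True, False)
    if age < 18:
        # Older teens: still private by default, livestream gifts off.
        return AccountDefaults(False, True, True, False)
    # Adults keep full control over their own settings.
    return AccountDefaults(True, True, False, True)


print(defaults_for_age(14))
```

Centralising the defaults in one place also makes it easier to evidence, per age band, how risk assessment findings translate into concrete product settings.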

There remain differences in prevailing regulatory approaches

While many regimes reference age assurance, regulatory requirements exhibit differences in scope and implementation details. As discussed above, the DSA guidelines recommend the use of age verification, not estimation, for high-risk services. By contrast, in the UK both approaches are capable of being highly effective and compliant with the relevant duties. For platforms, differences like these mean that a single model rarely fits all.

Conclusion

Age assurance is a fast-moving area, with regulatory, political, civil society and commercial developments all having a material impact on how platforms respond. In 2026, age assurance is expected to continue to evolve as it increasingly becomes a cornerstone of the online safety regulatory landscape.

In the EU, for example, the protection of minors has been marked as a priority for the upcoming 2026 review of the AVMSD, which already imposes age assurance requirements. The review may also consider how the AVMSD regime overlaps with the DSA. In the UK, Ofcom may introduce additional safety measures under the OSA’s codes, including expanded deployment of age assurance.

While the rules remain technology neutral and continue to evolve, the regulatory trajectory in both the UK and EU points towards risk-based, privacy-preserving and robust systems. Ultimately, for in-scope online services, age assurance should not be viewed merely as a means to keep children from using adult services, but as a key building block of a safer online experience.


Robert Macdougall

Robert Macdougall is a director in Deloitte’s EMEA Centre for Regulatory Strategy, where he leads the Centre’s work on digital regulation. He previously worked at Vodafone Group, Ofcom and the Office of Fair Trading (now the Competition and Markets Authority). [email protected]

Nick Evans

Nick Evans is a senior manager in Deloitte’s EMEA Centre for Regulatory Strategy, focusing on digital regulation. He previously worked in the strategy and policy team at Ofcom. [email protected]

Giulia De Bernardi

Giulia De Bernardi is a senior analyst in Deloitte’s EMEA Centre for Regulatory Strategy, focusing on digital regulation. She previously worked in the European Union delegation to Singapore. [email protected]

1 The report, Online Age Assurance - Responding to the evolving regulatory landscape, is available at bit.ly/4iisf9K and the webinar recording can be viewed at bit.ly/3KhmDjH

2 See bit.ly/3XfqEYN

3 See bit.ly/481iEkt

4 Council of the European Union (2025). The Jutland Declaration: Shaping a Safe Online World for Minors. 10 October. bit.ly/4oMKFSC

5 Based on an analysis of regulatory requirements effective as of August 2025. This analysis focused on local criteria going above and beyond cross-European regulation such as the GDPR.

6 It is important to note that there is no generally accepted definition for these criteria and their interpretation and application may differ across jurisdictions.
