The twin transition
The keynote speech was delivered by Klaus Müller, President of BNetzA, who discussed how the digital transition enables the green transition. The energy transition, he said, will only work if it can be digitalised.
In the long term, digitalisation is a prerequisite for the integration of decentralised renewable energy sources into the grid. This will require close cooperation between transmission system operators, distribution system operators and the energy sector itself. The twin transition is central to the EU’s Green Deal, which aims to decouple economic growth from the use of limited resources.
For regulators, it is important to ensure, firstly, access to spectrum bands in sufficient time for companies to plan their investments. Secondly, there should be consultations in the run-up to changes to ensure that all interests are taken into account in the final decision. Thirdly, regulators need to encourage cooperation among market participants, especially in sharing passive infrastructure.
Closing the digital divide
Jonny Bunt from BT described the voluntary accord in the UK under which operators offer social tariffs, with no legal framework. While often held up as a success, no one, he said, is happy: take-up is low, at around five per cent of eligible households, and the accord distorts the market for low-cost players. The incentives are available to 25 per cent of households but are not big enough to encourage take-up. In addition, some ten per cent of these households couldn’t afford broadband even if it were discounted to five pounds. The view at BT is that the subsidy should apply only to those with no income other than benefits, rather than to anyone with some kind of income support.
The Chair of BEREC, Professor Konstantinos Masselos, said that achieving the EU objective of 100 per cent coverage to a standard of at least 5G by 2030 will require huge investment – between 174 and 227 billion euros. He saw three high-level objectives: incentivising investment, optimising network deployment costs and ensuring demand so that services are viable in the long term. He also pointed to the challenge of putting cabling into existing buildings, which, he said, needed innovative solutions and at least the possibility of reusing existing cabling infrastructure.
The ‘middle mile’
Mariam Sulaberidze from OpenNet Georgia described how, in a very mountainous country in which all of the profitable geography has already been covered, the project is providing the ‘middle mile’ to help improve connectivity for 1,000 villages, with support from government and donors. ISPs are then encouraged to provide the last mile, but this requires take-up in order to be sustainable. So the second part of OpenNet is a digital adoption programme in which training courses are delivered to residents, based on a needs assessment to ensure they are relevant to users.

The Association for Progressive Communications is also involved in providing internet access for populations that are not commercially viable. Mike Jensen noted that in some areas the revenue per user is only one or two dollars per month. Solutions have included using universal service funds and charitable donations to create Wi-Fi hotspots, and costs have come down to the point where access for a small village can be achieved for $3,000 to $4,000. Spectrum is often not available for this, so his association advocates a ‘use it or lose it’ policy where spectrum has already been allocated to large commercial providers. Low earth orbit satellites also offer good possibilities: while not affordable for individual use, a connection shared across a village can be economically viable.
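To put these figures together (an illustrative calculation, not one given in the session, and the village size is an assumption): if a $3,500 hotspot serves 100 users paying $1.50 per user per month, it generates $150 a month and recovers its capital cost in roughly two years, before operating costs – which is why take-up, and not just deployment cost, determines sustainability.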
Data governance
The session was first addressed by Dr Paul Twomey from the Global Initiative for Digital Empowerment (GIDE). He outlined the results of research indicating an increasing lack of trust in the internet among consumers around the world. It is a problem that users want governments to solve. However, with 242 pieces of privacy legislation in 137 countries, sovereign risk for companies is already very high. The solution proposed by GIDE is that consumers should be included in the ‘market for data’, with intermediaries empowered to negotiate the rights under which their personal data is used.1
Nicole Darabian gave a presentation based on the essay that won her the IIC’s future leaders competition.2 She noted that cyber attacks are inevitable, and the task is to put in place measures that minimise their impact. Industry has more incentive than ever to build cyber security into its products, but there need to be organisational processes to protect stored data, especially for NGOs, schools and other organisations with fewer resources.
A representative from the Communications Alliance in Australia said that while harmonisation was frequently discussed for cross-border data flows, other elements of data usage, such as fraud, online safety and disinformation, could also benefit from being harmonised.

Dr Prapanpong Khumon, an adviser to the Personal Data Protection Committee of Thailand, noted the fragmented nature of international data transfer at present. For example, transferring data from the EU to Japan is straightforward, since Japan has adequacy recognition. But if a corporation needs to send the same data to Singapore, a country with a well-established data regime, the process is complicated, since that regime is not recognised as adequate. A range of contractual clauses would need to be imposed to allow redress in the case of a breach, and these are different when data is returned from Singapore to the EU. He cited the APEC cross-border data rules (cross-border privacy rules, or CBPR) as a means of cutting across these barriers. These are currently confined to member economies, but there are attempts to make them global, starting with the membership of the UK and UAE. Another panellist pointed out that, when mapped, there is significant overlap between the EU’s General Data Protection Regulation (GDPR) and the CBPR, and that perhaps interoperability is a more useful concept than harmonisation.
AI governance
The session began with the European Parliament’s co-rapporteur of the EU’s AI Act, Dragoș Tudorache, explaining the EU approach to AI regulation. Many countries are taking the approach of assessing and mitigating risk; the division arises over whether this should be achieved through principles, as in the US, or whether hard regulation is now needed. The EU recognises the value and enormous potential good of AI, and doesn’t want the benefits undermined by mistrust. The aim is to mitigate risks without placing unnecessary restrictions that would hinder innovation. At the heart of the Act is risk categorisation, with current debates focused on finding precise definitions.
A representative from the Federal Communications Commission (FCC) pointed out that in the US there is a multiplicity of views about AI regulation in Congress alone, but there is deep engagement with the issue across all of government. Up to a dozen states have already passed some kind of legislation on AI or algorithms, and different federal agencies are also considering what actions to take. Because the debate is still unresolved, the executive has stepped in with an ‘AI bill of rights’, which has evolved into voluntary commitments.
Embracing AI in media
The editor in chief of Studio 47, a regional broadcaster in Germany, explained that he saw AI as the saviour of local journalism. The company has developed its own AI tool which is used to automatically produce newscasts, including text generation, speech synthesis, video editing and distribution. Editors, he said, remain solely in charge. Amidst concerns over job losses, the media industry, he said, must embrace AI as the tool of the future.
A panellist from a civil rights organisation was critical of the risk-based approach to AI. Some uses, he said, should be banned altogether, such as in policing and migration. Even risks not classed as serious, such as racial and gender bias, remain problematic. He noted that the first iteration of the EU’s AI regulation had not taken account of technologies such as generative AI and as a result had needed to be reviewed and updated.
Sustainability and the green agenda
In the opening remarks, panellists pointed out that energy efficiencies were being introduced into networks all the time, through new technology and refurbished hardware. Fibre networks are the most energy efficient of all, and by stripping designs down to a minimum of active components and burying cable with minimal excavation, new deployments are up to 60 per cent more efficient than before. One concern is that energy consumption footprints are measured in ways that are not consistent; solving this is seen as a key ask of regulators. BEREC (the body of European regulators for electronic communications) is involved in exploring more harmonised measures, which is broadly welcomed. It was also noted that there are tensions between sustainability and consumer outcomes. For example, rolling out a single network is more sustainable than rolling out two, but the loss of competition may not benefit consumers.
Digitalisation can do much to help with broader industry efficiencies. Digitising buildings, with more sensors and systems management, can dramatically reduce energy use. Digital grids and forecasting will do much to integrate renewable energy sources into the grid effectively. But sustainability should also include resilience: regulators should consider how networks can respond to colder winters, hotter summers, increased rain and other natural events. BNetzA, the German regulator, has already published a strategy paper on network resilience following a case involving serious flooding.3
The circular economy, in which waste is minimised and products are re-used and recycled as much as possible, was also raised as an important component of sustainability. In some cases the principle of refurbishment and re-use is built into business models, but it was acknowledged that the industry had further to go on this. Another issue raised was the bureaucracy inherent in different levels of government, from national to regional and local, which results in huge delays in building new infrastructure. The solution is greater cooperation across government, as well as among regulators, to remove barriers to building.
Digital platforms policy: regulation and competition
The first speaker, Professor Peter Alexiadis, noted that the EU was the first jurisdiction in the world to regulate digital markets ex ante. The UK was, by contrast, introducing its own, more nuanced approach, which avoids extensive litigation but may take longer to resolve issues. In the US the issue is more geopolitical. He noted that, following the dismantling of telecoms monopolies in Europe, competition law has acted as a ‘backstop’ to catch issues that are beyond the reach of regulation. However, digital markets have tipped into quasi monopolies and the market is failing. At the core of the Digital Markets Act’s approach are fairness and contestability. Fairness, he said, is a nebulous concept, while contestability is based on the traditional principles of barriers to entry or barriers to expansion. In the DMA the European Commission has given itself significant responsibilities, including decisions on gatekeepers, market reviews and fines. All of these are open to legal challenge up to the European Court of Justice, and it is likely that there will be substantial litigation for years to come.
A representative from Vodafone welcomed the possibility of the Act opening up spaces for competition in areas such as payment services and interoperable messaging services, but felt that, given the case law, a flexible interpretation of the rules would be important, rather than a principles-based approach. He anticipated that the market share of core platform services would ultimately shift in favour of European businesses and a more diverse and heterogeneous market overall.
Protection of online copyright
A commissioner from AGCOM began by describing the new anti-piracy law passed by the Italian government, which allows websites to be blocked within 30 minutes. This is being implemented through a new platform on which rights holders can report the DNS or IP address of an illegal website. The system also involves the ‘whitelisting’ of legitimate sites to make checking faster.

A panellist from a South African-based broadcaster noted that new legislation is finally coming into force which updates the 1978 act but still doesn’t take account of generative AI and cyber piracy. He noted that traditional wholesale channel piracy was easy to deal with under existing copyright law; more difficult is the use of content via apps on the internet, especially when they are masked to look legitimate. Over 50 per cent of piracy apps have been shown to contain malware, making this a broader cybercrime issue.
A representative from a television and VoD trade association noted that, in spite of extensive legislation in the EU, there were still insufficient tools to enforce content protection online. AGCOM, she said, was doing a fantastic job but it was one territory out of 27. Illegal distribution is estimated to cost 1 billion euros in Germany. It also costs jobs, estimated at 10,000 in Italy. In Australia and the UK there are moves to allow for pre-emptive court orders against known sites prior to sporting events.
The European Commission’s copyright recommendation on tackling piracy of live events then came under debate. It was suggested that rights holders were hoping for legislative initiatives. The current position is that there will be a follow-up to the recommendation in November 2025, while the Commission assesses its effects and then, if necessary, introduces legislation.
The broadcasting evolution
The opening remarks were from Emily Davidson of Channel 4 in the UK. She argued that if policymakers continue to think that public service broadcasting is important, then public service content needs to be easy to find, wherever people want to find it, and it is important that regulation keeps up with this. The draft media bill in the UK includes a new ‘prominence’ regime which gives Ofcom powers to extend the framework to cover smart TVs and streaming sticks. She noted that the previous UK legislation, from 2003, made no mention of the internet.

Another panellist, from MultiChoice in South Africa, agreed that broadcasters were now evolving into content providers covering a range of platforms, increasingly through on-demand services. Broadcasters will need to continue to innovate and cannot afford to be risk averse. Frustrations arise when regulation presents barriers. In Africa, he said, broadcasters are heavily regulated while online is mostly unregulated, in many cases not even requiring a licence. This means that consumer complaints about a streamed show cannot be dealt with until it is broadcast. He also pointed out that with Netflix, for example, running live streaming of golf competitions, even definitions such as linear and non-linear are becoming irrelevant for regulatory purposes.
A representative from the Australian regulator ACCC raised concerns over the impact of the growth in online services on local output. Regulation is in place to protect local services (radio, she noted, is especially important for emergency services), but they continue to decline; Sky News Australia, for example, is now running its service on free regional broadcast networks. The ACCC is in the process of putting in place a prominence framework covering the equipment through which content is accessed. This raises issues of control of data, types of contract and how government should intervene.
Different ways for streaming services to contribute
In Canada, too, there is an updated Broadcasting Act giving the CRTC authority over streaming services as well as broadcasters, but with expanded objectives. Traditionally, broadcasters were required to spend 30 per cent of their content budget in Canada, subdivided into areas such as local news and indigenous content. This is hard to apply to a Disney Plus or an Amazon Prime. Hearings are underway to establish what contribution could be expected of streaming services. Ideas include creating a fund into which they would contribute; credits applied to services which invest in Canadian content; types of ‘discoverability’, such as promoting Canadian content; and training Canadian filmmakers, especially indigenous filmmakers.
The session concluded with agreement that television was alive and well and ‘living in a golden era’. There are many reasons to be optimistic.
Countdown to WRC-23
This breakout session considered the issues on the agenda for the World Radiocommunication Conference (WRC-23), taking place in November and December, with a panel representing different industry interests. Dr Guillaume Lebrun from Meta began the session by drawing attention to the imbalance in spectrum allocation between mobile networks and Wi-Fi. Although each has roughly 1 GHz of bandwidth, most internet traffic, he argued (85 per cent in Germany), runs over Wi-Fi, with only 5 per cent on mobile networks. In addition, while MNO spectrum is exclusive, Wi-Fi spectrum is shared with other services such as fixed links and satellite. For this reason Meta’s focus is on the 6 GHz band. He noted that international mobile telecommunications (IMT) operators were not using their exclusive allocations to trigger ecosystems, as had been suggested, and were deterring investment from other service providers.
Gonzalo de Dios from Amazon’s Project Kuiper reflected on the challenges of deploying a global satellite system. The rules are outdated, and organisations like the International Telecommunication Union (ITU) have a role not just in harmonising spectrum but in shaping how people view satellite technology. He singled out mobility and earth stations in motion as a particular issue, given their role in the growth of maritime and aeronautical broadband connectivity. The goal of this part of the agenda is to achieve parity between geostationary and non-geostationary satellites. He also suggested that developing a framework for a nimbler and more responsive ITU should be a key part of the agenda.
Darko Ratkaj from the European Broadcasting Union emphasised the complementary nature of the different technologies for broadcasters, who are using them in combinations such as hybrid broadcast-broadband services. The UHF band is one part of the spectrum that is under pressure, but it is essential for terrestrial broadcast, where it is used for distribution and production. The band is already shared and this should not change.
A representative from Nokia discussed how networks could be improved to take account of the future growth in data volume. Site densification is expensive, and spectrum efficiency for most technologies is now close to the Shannon limit.4 He argued that the most economic and environmentally suitable solution was adding spectrum to existing sites. Low-band spectrum could improve rural grids, and mid-band would improve capacity and performance in urban areas. A cooperative model would also work for the UHF band.
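For context (a standard statement of the bound, not part of the session itself), the Shannon limit caps the capacity $C$ of a radio channel of bandwidth $B$ at

$$C = B \log_2\!\left(1 + \frac{S}{N}\right),$$

where $S/N$ is the signal-to-noise ratio. Capacity grows linearly with bandwidth but only logarithmically with signal power, so once a technology’s spectral efficiency is near this bound, adding spectrum to existing sites yields far more extra capacity than further refinements to the radio link – the economic argument being made here.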
Universal service funding
A representative from the FCC in the US outlined the principles behind interventions in areas where there is no business case to build networks. As well as being robust, a network has to be affordable, adopted and equitably available to all citizens. These ‘high cost areas’ take up the majority of universal service funding, and alongside them sit subsidy projects designed to improve affordability, including connecting schools and hospitals.

A speaker from Botswana explained that its universal fund only began in 2014 and the main priority has been building infrastructure. This has got mobile broadband into 221 villages so far. All government schools are connected and are supported by IT officers who train both staff and pupils. A range of strategic partners contributed to the fund and the government is now also fully supportive. In total, about 1.6 million of a population of 2.5 million are now connected.
Another speaker, from a community service organisation, recognised that there was a great deal of scepticism in the industry about universal service funds, with some justification. Many have not been well managed. He noted that licences are expensive and spectrum costs also favour large operators. One problem has been that infrastructure, such as MNO towers, is often not financially sustainable and falls into disuse.
USO fund oversight
One speaker noted the problem in Uganda, where the regulator oversees the fund and is therefore involved in operational issues. It was noted that a possible alternative was the US model, in which the FCC sets policy but the fund is run by a separate non-profit body, the Universal Service Administrative Company, which is responsible for collecting and redistributing funds. In Botswana, this role is performed by a board of independent trustees. In Mauritius, the fund is managed by the ICT authority under strict principles stating that it can only be used for social purposes and with no governmental interference, although the government does offer direction, such as requesting free internet for post offices and remote villages.
One aspect of the debate concerned the source of funding. One contributor pointed out that there is little logic in taxing the network providers when the technology they are providing is enabling all kinds of public benefits. For this reason, there is a move in many places towards funding from general rather than industry taxation, although many governments find this difficult to accept.
Virtual worlds
The metaverse and AI continue to be driven primarily by the gaming industry, especially by graphics chips from companies such as Nvidia, with headsets powered by Qualcomm’s Snapdragon chips. Virtual experiences, such as pop concerts, have had audiences of 16 million people. Esports is increasingly popular, and tournaments can attract large numbers of visitors. A representative from Meta said that uses of the metaverse in areas like healthcare and engineering training are already here, and the company is partnering with a number of universities to develop immersive learning. Headsets are now of much better quality and more affordable, if still expensive. The company still sees huge economic opportunities in the metaverse.

A panellist from the German ministry BMDV said that, following a dialogue with industry, it was concluded at the EU level that there was no need for regulation, while there could be huge benefits from investment in the sector. There are some concerns about neural technology and the use of data, but more broadly, existing regulation, such as the GDPR and the Data Act, will apply to the metaverse.
Protecting media freedom
The session was first addressed by Teresa Ribeiro from the Organization for Security and Co-operation in Europe. She noted the decline in media freedom and freedom of expression around the world, describing it as a ‘cold war’. An information ecosystem once celebrated as a beacon of free speech is now being used to spread false and misleading information, and distrust. What is needed now, she argued, is a combination of reducing noise and amplifying public interest content and newsworthy information, and she suggested that there should be a public interest framework for online media. The ‘surveillance capitalism’ business model of the large platforms must be confronted, and the role of the media in democracy and security must not be allowed to become a thing of the past.

In the second address, Markham Cho Erickson from Google argued that good content regulation involves legal clarity for the entire ecosystem over what is and isn’t lawful, with governments in charge and companies complying with clear ‘rules of the road’. The UK’s online safety bill, he said, contains a broad duty of care provision which creates considerable uncertainty. He also pointed out that in many parts of the world there is a worrying trend towards curtailing democratic freedoms under the guise of national security, or through political censorship.
A representative from Ofcom highlighted the risks that may come from AI-generated news content. Models are more likely to scrape data and stories from free, and therefore less reliable, sources, and this won’t be apparent to readers. It was noted that while journalists may be biased, readers know this and can judge for themselves in a way that isn’t possible with AI-generated content.