Francis Fukuyama famously wrote a book entitled ‘The End of History and the Last Man’.1 Its main thesis was that, as countries around the world moved to liberal democracy, there was no reason for further debate on the best way to organise society. Although Fukuyama’s prediction did not come true, there is reason to believe that we have reached ‘the end of history’ in telecommunications. Fibre networks provide all the bandwidth to buildings that we could ever need, and 4G/5G mobile networks deliver near-perfect connectivity to many people, much of the time. All that is left is to fill the gaps – those places without high-speed broadband or good mobile coverage. This article sets out why it is time to declare the end of telecoms history and looks at the implications. As Fukuyama found, forecasting is perilous, but in this case the forecast looks highly likely to be accurate.
History
There are many excellent resources covering the history of telecommunications and little need for more than a brief summary here. That history could be considered to start in 1844 when Samuel Morse sent his first message and saw a major advance in 1901 when Guglielmo Marconi made the first trans-Atlantic wireless transmission.
For fixed-line communications the major steps were firstly the delivery of copper telephone wires to almost all homes in the developed world, and then the progressive increase in the data rates of those lines, from simple dial-up modems delivering 20-30 kbps to advanced fibre-to-the-cabinet solutions providing around 100 Mbps. Key advances in fibre have led to its increasing deployment directly to many homes or, failing that, to a point very close to the home.
On the mobile side key breakthroughs occurred with the cellular concept and then the progressive improvement in mobile systems from 1G through now to 5G. But advances have slowed and for many their experience of 5G is no different from 4G.
There are two key reasons why we have now reached the end of the development of fixed and mobile systems. The first is that we do not need higher speed and indeed will see no benefit from it. The second is that the growth in demand for fixed and mobile data is coming to an end and so we do not need increased network capacity either.
User requirements
Let us turn first to the speed needed. Demand for the highest speeds and the largest data volumes is almost invariably driven by video consumption. A person can only watch one video stream at a time, so the data rate associated with the highest-quality video feed required gives a good current upper limit. By way of example, Netflix recommends 3 Mbps for high definition and around 15 Mbps for 4k video.2 When I wrote ‘The 5G Myth’ in 2016, Netflix was recommending 5 Mbps and 25 Mbps respectively, so over the last eight years peak speed requirements have actually fallen to around 60 per cent of prior levels. Data rates for 8k video are much higher, but demand for and supply of 8k are weak, and for most viewers the benefits over 4k are minimal. So, for a household where one person is watching 4k and two people are watching in high definition (HD), the total speed required would be 21 Mbps. That is barely 2 per cent of a gigabit connection. And it is more likely that this figure will decline rather than increase in future. Mobile screens are far too small to make watching video at high resolution worthwhile: mobile network operators (MNOs) have found that ‘throttling’ video to 1 Mbps or even less has no noticeable impact on the viewing experience of those using a handset.
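The household arithmetic above can be reproduced in a few lines. The per-stream rates are Netflix’s published recommendations; the household mix of one 4k and two HD streams is illustrative:

```python
# Per-stream bandwidth recommendations in Mbps (Netflix's published figures).
HD_MBPS = 3
UHD_4K_MBPS = 15

# Illustrative household mix: one 4k stream plus two HD streams.
household_mbps = UHD_4K_MBPS + 2 * HD_MBPS
print(f"Household peak demand: {household_mbps} Mbps")   # 21 Mbps

# Share of a 1 Gbps (1000 Mbps) connection this represents.
share_of_gigabit = household_mbps / 1000
print(f"Share of a gigabit line: {share_of_gigabit:.1%}")  # 2.1%
```

Even doubling the household mix would leave demand well under a twentieth of a gigabit line.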
A somewhat different question is the speed needed for instantaneous web browsing. The issue here is less one of absolute speed and more one of ‘latency’ – the time taken for a request (eg for a new page) to be sent to a server and a response received – as shown below. Beyond a certain speed, other factors, such as the turnaround time at the server and the delays inherent in the internet’s TCP/IP protocols, become the constraint. This data rate is currently around 5-10 Mbps, hence most users will not notice an improved browsing experience once data rates rise above this point. Resolving this requires changes to internet protocols and architectures – something that has to occur internationally, within internet standards bodies and among key industrial players.
Figure 1: Internet page load time. (Source: Cloudflare3)
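A back-of-envelope model makes the same point. All the figures here – page size, round-trip count, round-trip time – are illustrative assumptions, not measurements:

```python
def page_load_seconds(bandwidth_mbps, rtt_s=0.05, round_trips=15, page_mb=2.0):
    """Toy page-load model: a fixed latency cost plus transfer time.

    The assumed values are illustrative: TCP/TLS handshakes and
    sequential requests cost several round trips to the server
    regardless of how fast the access link is.
    """
    latency_cost = round_trips * rtt_s             # seconds, bandwidth-independent
    transfer_cost = page_mb * 8 / bandwidth_mbps   # seconds to move the bytes
    return latency_cost + transfer_cost

for mbps in (5, 10, 20, 100, 1000):
    print(f"{mbps:>5} Mbps -> {page_load_seconds(mbps):.2f} s")
```

Under these assumptions, load times improve sharply up to roughly 20 Mbps, after which the fixed latency term dominates and extra bandwidth barely moves the total.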
A 2022 paper, entitled ‘Understanding the Metrics of Internet Broadband Access: How Much Is Enough?’ assessed the rate needed for fixed access and came to similar conclusions, albeit at a higher data rate of around 20 Mbps.4 It states that: ‘Above about 20 Mbps, adding more speed does not improve the load time. The limit on the load time is the latency to the servers providing the elements of the web page.’
Overall, there is a clear conclusion. Beyond a certain speed there is very little benefit in going faster: the user experience will not change. This speed appears to be around 10 Mbps for mobile connections and around 20 Mbps for fixed connections. There may be multiple users in a house, so 50 Mbps per household may be a safer upper limit. This aligns with the experience of most of those who have paid for packages offering speeds beyond this and noticed no improvement as a result.
Many countries have gigabit targets – the desire to deliver data rates of 1 Gbps or more via broadband. As the conclusion above makes clear, this is pointless. There is no benefit above around 50 Mbps and requiring 1 Gbps could result in significantly more expensive solutions, especially where fibre is difficult to deliver and alternatives, such as fixed wireless access technology, would suffice.
Beyond a certain speed there is very little benefit in going faster. The user experience will not change.
Now let’s consider data demand. We have nearly reached a plateau in user requirements. This may not be immediately evident to those used to increases in fixed and mobile data volumes of 50 per cent a year, leading to huge increases over decades. These were driven predominantly by a move towards consuming video content (which includes social media feeds, YouTube and more) via broadband connections rather than broadcast systems. But this trend is now nearly complete and we are close to consuming as much video on mobile and home devices as we have time to watch. As a result, data growth rates are falling and seem likely to reach near-zero around 2027. This was something I predicted back in 2016 in ‘The 5G Myth’ where I developed the following S-curve chart:
Figure 2: The S-shape adoption curve. (Source: The 5G Myth, 2016)
This was based on the fact that adoption of many new services and products often follows an S-curve, and then an extrapolation of the data from 2007 to 2016. Since 2016 the data does broadly seem to have followed that trend. The chart below shows the combination of the data available from 2013 to 2016, my forecast from 2016 and more recent data from Barclays Research covering the period 2016 to 2023. There are multiple other sources that align reasonably with this.
Figure 3: Mobile growth rates 2013 to 2023. (Various sources5)
What is striking about this chart is the nearly straight-line decline in annual growth rates over the last decade. Growth has fallen from around 75 per cent in 2013 to around 20 per cent in 2023 – a drop of 5.5 percentage points a year. This straight-line decline in growth is exactly what would be expected for an S-shape adoption curve, and the 2023 growth aligns almost perfectly with my 2016 S-curve prediction. Both logic and the S-curve suggest that this decline will continue, with data growth falling to zero around 2027, though with significant variations by country.
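The extrapolation behind that claim is trivial to reproduce. The 2013 and 2023 growth figures are those quoted above; everything else follows from fitting a straight line through them:

```python
# Annual mobile data growth rates quoted above (per cent).
growth_2013, growth_2023 = 75, 20

# Percentage points of growth lost per year under a straight-line fit.
decline_per_year = (growth_2013 - growth_2023) / (2023 - 2013)
print(f"Decline: {decline_per_year} points/year")      # 5.5 points/year

# Extrapolate forward: the year at which growth reaches zero.
zero_year = 2023 + growth_2023 / decline_per_year
print(f"Growth hits zero around {zero_year:.0f}")       # around 2027
```

A linear fall in the growth *rate* is precisely the behaviour of the middle-to-late portion of an S-curve, which is why the two forecasts coincide.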
Fixed data requirements are also following the same curve, albeit with a significant ‘blip’ around the Covid era – as can be seen in the chart from Communications Chambers at figure 4.6 Indeed, fixed growth is further down the curve at around 10 per cent per year.
Figure 4: Global fixed line annual growth rates. (Source: Communications Chambers)
There is huge variation from country to country and data usage can depart from trend when, for example, new tariffs are introduced. But the evidence seems clear that growth rates are on an S-curve and will reach zero well before the end of the decade.
There are some who reject the conclusion that we have sufficient connectivity. They note that there have been many historical forecasts of sufficiency that have proven ludicrous, such as the remark apocryphally attributed to Bill Gates that ‘640k [of computer memory] ought to be enough for anyone’. They suggest that new applications will inevitably emerge, and that we will return to data growth and requirements for higher speeds. I believe that they will be proven wrong for two reasons:
1. We can predict the future – indeed, I predicted the iPhone and other developments of the last 20 years well.
2. Even if we cannot, investors will not fund ‘build it and they will come’ network deployments anymore.
I have written multiple books with clear predictions for the future.7 In my first book, written in 2000 (and published in 2001) – seven years before the ‘unpredictable’ iPhone – I said that in 2020 (20 years into the future) we would:
- Book flights with our personal communicator
- Have a communicator which will work out when to wake us up based on our diary
- Have a home security system that can automatically lock doors
- Have a personalised news feed to our communicators
- Have robots that cut the grass
- Have excellent speech recognition
- Check in at airports using the phone
- Link a communicator to a seat-back display on an aircraft
- Pre-order coffee ready for collection on arrival at the nearest Starbucks and have directions to get there
- Measure biometrics with sensors integrated into clothing
- Receive recommendations to nearby restaurants and automatically book
- Have average data rates to the home of 60 Mbps.
It’s not perfect, but I’d argue that it is pretty good. I would contend that we can predict the future, that I have done so pretty well over the last 25 years and that looking ahead to the next 20 years is entirely possible. Others, such as Bill Gates, have made very public errors, but that does not mean all forecasts are flawed.
The second reason is that investors will no longer put up the money for futuristic networks without a clear business case. MNOs have been saying very publicly that they will not invest in new hardware for 6G. The investment community has seen MNO share prices languish in recent years and is disinclined to invest more. And the entire telecoms community has rather lost credibility over its claims that 5G would be ‘a generation like no other’. To expect that better networks will be deployed in the hope that dramatic revenue growth is just around the corner is wishful thinking.
Implications for key actors
Equipment supply
The implications are most obvious in the equipment supply industry. If networks are not expanding to deliver more capacity and new generations of technology are not needed, then network operators will only buy equipment to replace obsolete systems. While this is still a large market, it is smaller than the major suppliers are accustomed to and will require some adjustment – as has already started, with Ericsson and Nokia both announcing redundancies in the last few years8,9 and smaller suppliers, such as Airspan, going through bankruptcy.10 Quite how much the market will contract is hard to predict. Operators have already been cutting back, with less investment in 5G equipment, and are probably already close to ‘maintenance only’ spend. We might, then, expect perhaps a few more years of small contractions in the radio access network (RAN) market before it stabilises.
Equipment suppliers will not need to spend as much on research and development, which will be focused more on improving their current products than inventing new generations of technology. Nor will they need to attend so many standards meetings or worry so much about filing for intellectual property rights. Reductions in these areas can help better match their costs to future market revenues.
Overall, the future is assured for the larger suppliers, such as Ericsson, Nokia, Samsung and Huawei, albeit at a lower level of sales than currently.
Operators
Most mobile and fixed operators have not seen revenue growth above inflation for many years but hold out hope that somehow this will turn around. It won’t. Operators need to accept that they are important utilities, delivering data connectivity reliably. They should, ideally, restructure and cut costs to adjust to this new reality.
Indeed, there is scope for new low-cost operators to emerge that outsource much of their operations and adopt a similarly disruptive model to the low-cost airlines. For example, imagine at the most extreme a mobile operator that:
- Had no masts, renting space from a towerco
- Had no RAN equipment, renting it from the suppliers along with a maintenance contract from them
- Had no core network, buying it as a service from a hyperscaler
- Had no shops or physical presence, performing all activities online
- Had no central office, using rented premises
- Potentially, had no direct customers, selling wholesale capacity to MVNOs who handle the customer relationship.
Such an operator’s assets would be its spectrum licence, its other rights to operate and its brand. It would essentially be a project management entity, with perhaps only a few hundred staff. This would give it a dramatically lower cost base than its competitors and, as a result, the ability to offer cheaper tariffs. Since consumers mostly select their operator on the basis of cost, it might win a larger share of the customer base.
Academics
Academics should continue their research – they may find cheaper or better ways to deliver existing services. But their focus should be on lower cost and more environmentally friendly solutions, and not on ever-faster systems.
Regulators
Regulators will no longer need to find new spectrum bands for cellular every few years and then conduct auctions. Indeed, the demand for spectrum may abate across most areas. A different way to manage spectrum, discussed further below, will be appropriate – one aimed at delivering quality national networks rather than using markets to optimise spectrum use.
Regulators may also have to consider whether fewer operators may be better for a country, with perhaps only a single underlying fixed and mobile network in many places – just as we only have a single network for electricity, water, gas, sewerage, rail, road and other utilities.
The remaining task – ubiquity
While some have all the connectivity they need, many do not. Outside urban areas broadband is often slow and mobile connectivity non-existent. Even in urban areas there are not-spots and indoor coverage can be particularly problematic, especially where the building is clad with modern materials that are near-impenetrable to radio waves. There have been many initiatives to improve this, but much work still remains.
The problem is that such connectivity is generally uneconomic for an operator to provide (otherwise they would have done it already). In outline there are three elements needed:
1. Finding more money to pay for uneconomic deployment
2. Reducing the cost of deployment through new approaches, such as airborne platforms
3. Putting in place a regulatory framework to get the operators to use any money diverted their way to deliver nationally desired coverage.
None of these is particularly difficult. Money can be found as direct payments from government, or indirectly by allowing operators to keep money they would otherwise pay in licence and auction fees. Plenty of ways exist to lower costs: new high-altitude platform solutions are being trialled that countries could promote, indoor coverage can be delivered through existing Wi-Fi, and network roaming can provide large improvements in capacity and quality. An appropriate regulatory framework is also simple to envisage – the government sets the regulator a clear objective to deliver specified coverage. The difficulty is not that solutions do not exist – they do – but that current relationships between government and regulator may need to change, and both may need a shift in mindset: from the view that markets can effectively manage network deployment and efficient use of spectrum to one that accepts limited intervention.
The end of (telecoms) history
We should celebrate the end of telecoms history. It has been an amazing journey, from a time where contacting another person took days or months to a world where most can contact most others on the planet from a device kept in their pocket and can access almost any information, book, film or music instantly from almost anywhere. It is an incredible achievement.
We should also acknowledge that the journey is largely complete and focus efforts where they are now needed – on ubiquity, ease of use and suitable technical and economic structures. This is a major change for an industry used to a new mobile generation every decade and will be akin to turning a super-tanker. The industry will in all likelihood need a period of painful adjustment, with the greatest opportunities going to those who best understand how to benefit from the end of telecoms history.
‘The End of Telecoms History’ is available as a book published on Amazon.