The Streaming Tsunami: Part 3 - Requirements For A Video CDN Blueprint
We return to our Streaming Tsunami series with a discussion of the predicted 10x increase in overall streaming ecosystem bandwidth that will be required if/when streaming becomes the first choice for content consumption, and the challenges this implies.
As home broadband and mobile speeds increase, and as content consumption habits shift inexorably to streaming-first, the Streaming Tsunami builds and builds.
As described in the opening article in this series, the Streaming Tsunami is the wave of demand for streaming video, both Live and VOD, that consumers are creating. The wave is enabled by excellent internet connectivity and the availability of more content on streaming services. Part two of this series looked at where the Tsunami would “hit land” and why all the streams cannot simply be delivered to every consumer, even though our individual broadband speeds imply that this should not be a problem. The problem is caused by a mixture of bottlenecks in the end-to-end internet-based video delivery chain, which carries content from Origin servers via CDNs, then ISPs, then access networks, and often through home networks to a wide range of consumer devices. Industry observers see regular reports of the biggest streamers in the world grappling with the challenge of live video delivery at scale.
As national broadcasters expand their streaming delivery, as a collective group they are going to see their prime-time daily audiences consume more of their content over the internet. In a country like the UK, with a single timezone, where about 30% of the population watches TV during most evenings, this would mean about 20-25 million concurrent streams across multiple broadcast channels and VOD libraries. In a country like the US, with c. 330 million people, or Australia, with c. 26 million people, multiple timezones mean the prime-time audience is regionalised (e.g., 9pm in New York City is 6pm in Los Angeles), so viewership is staggered by timezone populations (c. 155 million people live in the US’s Eastern timezone, c. 95 million in the Central timezone, c. 22 million in the Mountain timezone, and c. 55 million in the Pacific timezone). Regional audiences in the same timezone will therefore place demand on the broadband networks in their regions. However, a national live event still draws in the whole national audience, so the networks must be ready for the big concurrent viewership driven by these national events.
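As a back-of-envelope check on these concurrency figures, here is a minimal Python sketch, assuming the ~30% evening viewing share quoted above applies uniformly in each timezone (the populations are the approximations from the text):

```python
# Rough prime-time concurrency estimates; the 30% viewing share is a
# planning assumption, and populations are approximate (in millions).
PRIME_TIME_SHARE = 0.30

uk_population_m = 66  # single timezone: one national peak
print(f"UK: ~{uk_population_m * PRIME_TIME_SHARE:.0f}M concurrent streams at prime time")

us_timezones_m = {"Eastern": 155, "Central": 95, "Mountain": 22, "Pacific": 55}
for tz, pop_m in us_timezones_m.items():
    # Each timezone reaches its own prime-time peak at a different clock hour,
    # so these regional peaks are staggered rather than simultaneous.
    print(f"US {tz}: ~{pop_m * PRIME_TIME_SHARE:.1f}M concurrent at local prime time")
```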
This anticipated future concurrent viewership in the UK compares with today’s biggest peak streaming audiences for special events like the World Cup, which reach about 2-3 million people. In the US, a major televised event like the Super Bowl attracts about 100-120 million people at peak across all viewing platforms – roughly the 30% of total population figure we are used to seeing in TV viewing circles – although the streaming-only figures reported for the Super Bowl in 2023 were about 6% of that total.
The Streaming Tsunami builds as ever-larger numbers of people stream concurrently over today’s network set-up, and it is already creating a visibly growing bottleneck in the ISP network domain. The predicted, and required, 10x-20x increase in streaming demand needs to be managed somehow. It is time to consider a Blueprint for what a full-scale video streaming infrastructure could be.
The Blueprint’s Objectives:
The design for streaming at full-scale must achieve certain key objectives.
Reach
Public broadcasters and commercial broadcasters both want to reach the maximum number of people possible. The former because of their public service mandate. The latter because of their advertising-based business models. In addition, subscription-based streaming services, whether Live or VOD, need to be able to reach the maximum number of people in the territory in which they offer their services. Let’s assume that national reach above 95% of the population is acceptable to most major media companies.
Reliability & Performance
Media businesses need certainty that their content can be consumed whenever and however the consumer wants. Buffering, jitter, latency problems, freeze-framing, and a whole list of general audio and video quality issues must be a thing of the past when streaming is at full-scale. Whether the VOD library is always available or there is an EPG with scheduled content, whenever a consumer presses play things should just work. And as sports broadcasters and fans know very well, latency – the time between when content leaves its source and when it is played out on the viewing device – must be low and consistent enough to deliver a synchronised viewing experience between devices, so that we avoid frustratingly disjointed viewing in public places, between neighbouring properties, and even on different devices on the same network.
Efficiency & Sustainability
The energy consumed in delivering video needs to be as low as physically possible. Ubiquitous devices like SmartTVs, tablets, PCs and Smartphones should all evolve to consume less energy, and the centralised content production, encoding and origination processes should also be managed to be highly environmentally sustainable. But the part of the video delivery ecosystem that is set to scale dramatically to support the 10x increase in demand is the part in the middle – the content delivery network. At full-scale, the content delivery network must deliver the >95% population reach. This network of course includes the multi-purpose broadband and mobile networks, so the specific video delivery element of the infrastructure is where we must focus our efficiency efforts.
Security & Resilience
With public service broadcasting moving to streaming-first, streaming infrastructure and systems must fulfil the “critical national infrastructure” requirement that is already met by national broadcast networks and national telecommunications networks. Public service programming, whether video-based or audio-based, must reach the population when required, and so the networks must withstand external attacks and meet stringent reliability requirements.
Cost-effectiveness
Today we use multiple network types to deliver media services: satellite, terrestrial broadcasting in various frequency bands, IPTV, CableTV, and OTT Streaming. At some point in the not-so-distant future we may find that OTT Streaming is the only method of video and audio distribution we need, and that high-bandwidth telco networks with IP-connected devices are sufficient. On the one hand, cost-effective distribution will be achieved by right-sizing these various networks for media delivery. On the other hand, given the critically important environmental objectives we must all achieve, we should avoid unnecessarily oversizing and overengineering a streaming platform.
A Blueprint:
The demand on a Video Streaming Delivery Network of the future is defined by the size and location of the audience at any given moment and the bitrate of the video that is being consumed. Let’s run some numbers.
In a country, or single-timezone region of a country, of about 60 million people, if 20 million people stream concurrently at 10 Mbps then the total bandwidth used is 200,000 Gbps, or 200 Tbps. This compares with special peak streaming events today that can require 20-30 Tbps in a country of 60 million people. The 10 Mbps average is a conservative planning parameter. A SmartTV requires a higher bitrate to present quality video on a large screen – it is obvious when the quality is not good enough – while a Smartphone, with its much smaller screen, can use a lower bitrate. The 10 Mbps average assumes the vast majority of prime-time video consumption is on the big screen in the home, and that some streaming will take place at higher bitrates, such as 15-20 Mbps for UHD and HDR content. This article does not focus on the capacity that could be required when immersive viewing (e.g., virtual reality, multi-camera video delivery) becomes more prevalent and individual users may require hundreds of Mbps of connectivity – the expected impact of immersive viewing will be explored later in this series.
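This demand arithmetic is simple enough to capture in a few lines. A minimal Python sketch, using only the planning figures quoted above (concurrent streams multiplied by average bitrate, converted to Tbps):

```python
# Aggregate streaming demand: concurrent streams x average bitrate.
# All inputs are the planning assumptions from the text, not measurements.
def aggregate_demand_tbps(concurrent_streams: float, avg_bitrate_mbps: float) -> float:
    """Total delivery bandwidth in Tbps (1 Tbps = 1e6 Mbps)."""
    return concurrent_streams * avg_bitrate_mbps / 1e6

print(aggregate_demand_tbps(20e6, 10))   # 200.0 Tbps -- full prime-time audience
print(aggregate_demand_tbps(2.5e6, 10))  # 25.0 Tbps -- today's biggest special events
```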
Average broadband speeds today in many developed countries are close to 100 Mbps, an uplift driven by the growing percentage of fibre-connected high-speed broadband connections. On this basis alone, consumers should expect multiple high-quality video streams to reach their homes without any problems whatsoever. And if home networking can handle an incoming 40-50 Mbps, then the viewing experience should be flawless. In the UK, for example, with an average broadband speed of 81 Mbps across 27 million broadband-enabled homes, the population should theoretically be able to stream up to about 2,200 Tbps concurrently.
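That theoretical ceiling is the same multiplication in the other direction – average speed times connected homes. A sketch using the text’s UK figures (a theoretical upper bound, since no access network sustains its average speed on every line at once):

```python
# Theoretical UK access-network ceiling: homes x average broadband speed.
homes = 27e6          # broadband-enabled homes (figure from the text)
avg_speed_mbps = 81   # current UK average broadband speed

ceiling_tbps = homes * avg_speed_mbps / 1e6  # Mbps -> Tbps
print(f"Theoretical access capacity: ~{ceiling_tbps:,.0f} Tbps")               # ~2,187 Tbps
print(f"Headroom over 200 Tbps prime-time demand: ~{ceiling_tbps / 200:.0f}x") # ~11x
```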
However, on February 15th 2023 the UK’s major ISPs self-reported record peak network usage totalling just under 100 Tbps, while LINX (the London Internet Exchange) reported a new throughput record of 7.87 Tbps. These records were driven by the simultaneous delivery of live football matches on Amazon Prime and BT Sport and a new release of the Call of Duty video game. Anecdotally, other streamers noted an adverse impact on the quality of their own video services during these events. So, while total home broadband capacity is 2,200 Tbps and therefore over 10x the capacity required for a prime-time TV audience to stream at 200 Tbps, this recent experience raised red flags about streaming capacity.
A further consideration for the Blueprint design is the absolute maximum streaming capacity we should provide for a given average bitrate. The streaming service should ideally reach any member of the population wherever they are, which can include very concentrated pockets of viewership. If, for example, a very large proportion of a country’s regional population streams an event – local football derbies and regionally significant cultural moments are good examples – then the video delivery network near that population should be sufficient to handle the load and maintain delivery performance. The broadband connectivity would probably be fine in the UK, for example, as shown by national average broadband speeds. But what if the rest of the video delivery network is too far from the viewing population? This could cause performance and reliability issues by overloading centralised network links. Ideally, the video streaming delivery capacity should be sized for 100% of the population so that very large regional audiences receive good service. In the UK, for example, instead of 20 million people and 200 Tbps, that would mean approximately 66 million people and about 660 Tbps. This is still well within the 2,200 Tbps of consumer broadband capacity in place today, and that is before the jump from about 100 Mbps average broadband speed to the Gbps broadband speeds that some fibre-broadband operators are already selling.
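The 100%-population sizing case is the same demand calculation with the full population as the audience. A sketch using the text’s planning assumptions:

```python
# Sizing for 100% of the population at the 10 Mbps planning average.
population = 66e6       # approximate UK population (figure from the text)
avg_bitrate_mbps = 10   # conservative planning average

full_demand_tbps = population * avg_bitrate_mbps / 1e6
print(f"100%-population demand: ~{full_demand_tbps:.0f} Tbps")                # ~660 Tbps
print(f"Share of ~2,200 Tbps access capacity: {full_demand_tbps / 2200:.0%}") # 30%
```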
The red flag of February 15th in the UK points towards a bottleneck somewhere inside the video delivery infrastructure and ISP core network infrastructure. Other countries are seeing similar problems because telco infrastructures and video consumption habits are broadly similar across developed countries. A mix of issues involving CDN Edge network capacity, transit capacity, peering capacity, network routing rules, and origin-to-edge connectivity results in consumer QoE problems. These same issues are seen again and again by big streamers when they reach new consumption levels. So, what can be done about it, and what should national broadcasters consider? The Streaming Tsunami series will continue by looking at where we can transform existing streaming video delivery networks in order to deliver the full-scale streaming blueprint’s objectives.