The Streaming Tsunami: Part 6 - A National Blueprint For Video Streaming Delivery (Part 1)

The shift from DTT to OTT-centric delivery and full-scale streaming is set to drive a potential 10x increase in required capacity over today’s peak streaming demand. In this two-part article we use the UK as a model to present a theoretical new streaming infrastructure based on a unified edge network.


Full-scale streaming has a clear premise – we already know what scale the video streaming delivery network must support. So far, most broadcasters have worked on consumption-based CDN models because streaming usage, and the peaks and troughs of capacity usage, have been too low to justify committed capacity, too spiky (infrequent high peaks of viewers), or simply unpredictable. But at full-scale streaming we know what capacity usage to expect every day at a national level, and it will be fairly consistent and predictable.

An Edge server can be sized quite accurately from a capacity perspective. While the mix of traffic being served affects the maximum streaming egress that can be managed, we can at least establish a minimum level of throughput for one Edge server on which to base a system’s committed delivery capacity. Recently published speed tests by Varnish (early 2023) and Broadpeak (late 2022) pointed to 2RU servers streaming at 1.1 Tbps and 750 Gbps respectively, each using different CPUs. These tests used traffic profiles chosen to push the servers to their limits. In the real world, cache hit rates, live vs. VOD, VOD variety, origin connectivity, requested video bitrates, streaming protocols, etc. all make a difference. A conservative base level of throughput we can use for capacity planning is 50 Gbps per 1RU of Edge server.
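
To make that planning figure concrete, here is a minimal capacity-sizing sketch in Python. The 50 Gbps per 1RU value comes from the paragraph above; the audience size and bitrate in the example are illustrative assumptions, not measured data.

```python
import math

GBPS_PER_RU = 50  # conservative planning figure quoted above

def edge_rus_required(concurrent_viewers: int, avg_bitrate_mbps: float) -> int:
    """Number of 1RU edge servers needed to serve a given streaming peak."""
    total_gbps = concurrent_viewers * avg_bitrate_mbps / 1_000
    return math.ceil(total_gbps / GBPS_PER_RU)

# Illustrative example (assumed figures): a regional peak of 100,000 concurrent
# viewers at 10 Mbps needs 1 Tbps of egress, i.e. 20 x 1RU edge servers.
print(edge_rus_required(100_000, 10))  # -> 20
```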

What about redundancy? Many broadcast systems have redundancy built in. At full-scale streaming, we need to be clear about what happens when delivery capacity fails. Building out the overall delivery capacity for concentrated regional-level viewing probably gives sufficient capacity redundancy at a national level. Even if the platform has enough streaming capacity for the entire national population, that level will rarely be reached everywhere at the same time, so the system naturally contains server-level redundancy. If PoPs fail, redirecting traffic to other PoPs that can serve the extra load needs to be designed in. If an entire content delivery network goes down, clear disaster-recovery options must be available. In streaming, the worst-case scenario should be that content is delivered from farther away from the viewer, which risks higher latency and buffering, but most viewers would still receive their content. If significant video delivery capacity is unavailable, or the risk of delivery issues is high, then reducing the bitrate per viewer so the video can still get through is a useful strategy. And if network failures occur outside the purview of the video streaming delivery network, there can even be options to switch from fixed-line broadband to mobile, or vice versa.
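
The failure-handling strategies above can be expressed as simple decision logic. The sketch below is illustrative only, assuming a hypothetical PoP record and arbitrary thresholds; it does not describe any specific CDN’s failover mechanism.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PoP:
    name: str
    healthy: bool
    capacity_gbps: float
    load_gbps: float

    @property
    def headroom_gbps(self) -> float:
        return self.capacity_gbps - self.load_gbps

def select_pop(local: PoP, alternatives: list[PoP], stream_gbps: float) -> Optional[PoP]:
    """Serve from the local PoP if possible; otherwise redirect to the healthy
    alternative with the most spare capacity (i.e. farther from the viewer)."""
    if local.healthy and local.headroom_gbps >= stream_gbps:
        return local
    candidates = [p for p in alternatives if p.healthy and p.headroom_gbps >= stream_gbps]
    return max(candidates, key=lambda p: p.headroom_gbps, default=None)

def degraded_bitrate(requested_mbps: float, system_utilisation: float) -> float:
    """When delivery capacity is under pressure, step the per-viewer bitrate
    down so the video still gets through (threshold and step are assumptions)."""
    return requested_mbps * 0.6 if system_utilisation > 0.9 else requested_mbps
```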

Location of Edges is also critical to full-scale streaming. Choosing the right number of Edge locations is a discussion point between content providers and individual ISPs. ISPs know their network topology, their congestion points, their customer bases, and their customers’ demand for data, including video. When ISPs work with wholesale or national-level access network providers (e.g., in the UK, most ISPs work with either BT Wholesale or BT Openreach), the ISPs co-locate their broadband infrastructure inside buildings owned or operated by the access network provider. For full-scale streaming, that real-estate map is the focus of targeted Edge deployments. Picking the locations that provide the best mix of performance, cost, and overall system efficiency is the right starting point.

A National-level Architecture

The UK is the example in this article. Other telephony/broadband environments around the world look very similar. In mobile-first economies, like India, the infrastructure is different. But in any country where telegraph and then telephone services evolved over a century ago and networks are now moving from copper to fiber, the UK example below is a good reference point.

As noted previously, the Edge scales with the audience. So where is the audience in the UK? Wherever people live, with major population centers as shown in Fig 1.

Figure 1 - UK population density.

The UK’s broadband network is distributed around the country primarily via the same real-estate locations as the original telephone system. British Telecom’s Openreach is the primary access network operator in the country and operates the telephone exchange buildings and the infrastructure that connects exchanges with homes and businesses. BT Openreach has 5,600 exchanges in the UK. About 2,000 of them serve 65% of the population, or about 43 million people. It’s clear from the map of the UK that those 2,000 are in the purple, red, orange, yellow, and some of the green areas. The remaining third of the population is served by the other 3,600 exchanges. The Greater London metropolitan area alone has 176 telephone exchanges, given the density of its population (c. 9.5 million people). The Greater Manchester and Liverpool areas have 44 and 40 exchange buildings respectively, serving a combined 4 million people.

Within this list of 2,000 exchanges, some serve much smaller areas: Daventry in the Midlands, for example, has about 30,000 inhabitants and 10 exchanges. The average number of people served per exchange across the list of 2,000 is 22,500. Across the urban areas covered, the highest average number of people served per exchange is 62,955 and the lowest is 1,054.
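
As a rough feel for what these averages imply at PoP level, the sketch below estimates the peak streaming demand of one average exchange. The 22,500 people-per-exchange figure is taken from above; the peak concurrency and bitrate values are assumptions chosen purely for illustration.

```python
PEOPLE_PER_EXCHANGE = 22_500  # average across the list of 2,000 exchanges above
PEAK_CONCURRENCY = 0.5        # assumption: half the served population streaming at peak
AVG_BITRATE_MBPS = 10         # assumption, matching the average bitrate used later

peak_gbps = PEOPLE_PER_EXCHANGE * PEAK_CONCURRENCY * AVG_BITRATE_MBPS / 1_000
print(f"Peak demand: {peak_gbps:.0f} Gbps")             # ~113 Gbps
print(f"1RU servers at 50 Gbps: {peak_gbps / 50:.1f}")  # ~2.3 servers
```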

But part of the transformation of streaming video delivery relates to the fact that BT Openreach is currently working to reduce the number of exchanges from 5,600 to about 1,000. Why? Because FTTP (fiber-to-the-premises) is, by design, a more centralized system. Fiber transports light while copper carries electrical signals, so fiber connections can span much longer distances and still retain excellent performance. Not only do fiber networks perform better, but the longer network links allow a reduced real-estate footprint, which in turn reduces network intervention and maintenance overheads. This separate article, with input from BT Openreach at the beginning of their decade-long FTTP transformation program, covers the subject in much more detail.

This more centralized access network is built for data, agnostic to what that data is for. Video is just one of the uses of this network, although Cisco’s internet traffic reports indicate that about 80% of all internet traffic is video. A CDN is built to move content more quickly from its source to the users; it is the fast lane of the internet highway. But CDNs often stop before the ISP network begins, or they are embedded in anywhere from 2 to 50-60 ISP locations in a country like the UK. Netflix is reported to have 100 locations in the UK, but it is the giant of the streamer CDN group.

A Potential Video Edge Blueprint In The UK

Video delivery is at a key inflection point as streaming for broadcasters moves from 10-20% of total content delivery towards 50% in the next 5-6 years. 100% streaming delivery is the end game, but it’s not yet clear when that time will come.

Sustainability of streaming delivery is a key requirement for any future deployment. Dedicating a network to video delivery, then using it for other video processing tasks or putting it into “power-down” mode at off-peak times, could strike the right balance between efficient video performance for each viewer and responsible energy use as we move from c. 20-30 Tbps of exceptional peak bandwidth consumption towards 200-300 Tbps of regular peak bandwidth consumption. This 10x increase assumes a 10 Mbps average bitrate, which is fine for our current “2D flat-screen” viewing method; if immersive viewing formats become popular, the video delivery network capacity will need to increase further. Dedicating the platform to video also creates an opportunity for a unified and aggregated content delivery environment, reducing both the number of servers required and the amount of content movement required.
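
Expressed as concurrent streams, the scale of that shift looks like this. The sketch uses the 10 Mbps average bitrate and the peak bandwidth figures quoted above; the rest is simple arithmetic.

```python
AVG_BITRATE_MBPS = 10  # the "2D flat-screen" average bitrate from the paragraph above

def concurrent_streams(peak_tbps: float, bitrate_mbps: float = AVG_BITRATE_MBPS) -> float:
    """Concurrent streams implied by a national peak at a given average bitrate."""
    return peak_tbps * 1_000_000 / bitrate_mbps  # 1 Tbps = 1,000,000 Mbps

print(concurrent_streams(30))   # today's ~30 Tbps special peak    -> 3,000,000 streams
print(concurrent_streams(300))  # full-scale ~300 Tbps regular peak -> 30,000,000 streams
```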

Let’s imagine that we deploy Edge capacity much more deeply in the UK’s broadband networks. The Edge would need to be a specialized consumer video delivery service, and therefore probably sits outside the realm of products offered by a regulated access network operator (e.g., BT Openreach) or wholesale broadband network operators (e.g., BT Wholesale). But the deployment itself should be in PoPs that are highly distributed. The c. 1,000 BT Openreach exchanges that will form the basis of the UK’s FTTP network could be the final deployment locations for the Edge servers. They could support content delivery for all the major broadcasters and streamers, and they could be set up to serve the same content to consumers using different broadband service providers.
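
Spreading the full-scale load across roughly 1,000 exchange locations gives a feel for the per-PoP footprint. The sketch below divides the 300 Tbps regular peak evenly across 1,000 PoPs and sizes each with the 50 Gbps-per-1RU planning figure from earlier; the even split is a simplifying assumption, since real PoPs would be sized to local population and demand.

```python
import math

NATIONAL_PEAK_TBPS = 300  # upper end of the full-scale regular peak discussed above
POP_COUNT = 1_000         # target number of BT Openreach FTTP exchange locations
GBPS_PER_RU = 50          # conservative per-server planning figure from earlier

per_pop_gbps = NATIONAL_PEAK_TBPS * 1_000 / POP_COUNT  # 300 Gbps per PoP on average
rus_per_pop = math.ceil(per_pop_gbps / GBPS_PER_RU)    # 6 x 1RU servers per PoP
print(per_pop_gbps, rus_per_pop)
```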

This ability to serve content from a single Edge to consumers on multiple ISP networks is a “reversed origination” concept, and it could be the most efficient model for full-scale streaming video delivery. To explain: CDNs today often deliver content for multiple content providers from Origins that interface with many internet service providers, often through an intermediate cache layer or Origin Shield layer. If we apply the same method to distributed Edge servers in the 1,000 locations, we could have an aggregated video delivery platform that interfaces upstream with multiple ISPs and downstream with a single access network operator. In the UK, imagine an Edge server with BT, Sky, TalkTalk, and many other ISPs upstream, and BT Openreach, the access network operator, downstream.

If we connect a single unified Edge to multiple ISP networks at this entry point into the access network, a single piece of content, such as a live stream or a VOD asset, can be brought to the Edge by a request from a consumer on one ISP and then reused by consumers on other ISP networks. In principle, rather than each ISP building out its own deep Edge network to reach the necessary scale for its customer base (and then perhaps shrinking it if that customer base falls), the capacity can be scaled and managed for both the ISPs and the Content Providers by a Video Edge Network provider.
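
A minimal sketch of this shared-Edge behaviour is shown below: the cache is keyed by the content alone, not by the requesting ISP, so a segment fetched for a viewer on one ISP is reused for viewers on any other ISP served by the same exchange-level Edge. The class and function names are hypothetical, purely for illustration.

```python
class SharedEdgeCache:
    """One Edge PoP serving multiple upstream ISPs over a single access network."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # callable: content_id -> bytes
        self._cache: dict[str, bytes] = {}
        self.origin_fetches = 0

    def serve(self, content_id: str, isp: str) -> bytes:
        # The cache key is the content itself; the requesting ISP is irrelevant,
        # so one origin fetch can serve viewers on BT, Sky, TalkTalk, etc.
        if content_id not in self._cache:
            self._cache[content_id] = self._fetch(content_id)
            self.origin_fetches += 1
        return self._cache[content_id]

# Illustrative usage: three viewers on three different ISPs, one origin fetch.
edge = SharedEdgeCache(lambda cid: f"payload-for-{cid}".encode())
for isp in ("BT", "Sky", "TalkTalk"):
    edge.serve("live/channel1/segment42.m4s", isp)
print(edge.origin_fetches)  # -> 1
```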

This design is technically feasible. Implementing it will require focused collaboration between ISPs and some type of Video Edge Network service provider that would aggregate this part of the internet’s traffic on behalf of the main video streamers in the market.
