The Streaming Tsunami: Part 5 - Scaling With The Audience At The Edge

The Edge network scales with the audience. The more people that stream concurrently, or the higher the average bitrate requested by a consistently sized audience, the more capacity the Edge network needs. Achieving the best possible efficiency at the Edge requires intelligently managing content and performance monitoring, tuning the infrastructure specifically for video, and correctly locating video delivery capacity.


Intelligently managing content at the Edge is a transformative step in video delivery efficiency. The first principle is to flatten the layers of hardware between Origin and Edge. The traditional architecture has multiple caching layers, each designed to hand content down hierarchically to the layer below. An approach that reduces hardware layers to improve energy efficiency involves Edges interconnecting with each other. The result looks like a very flat structure in which each Edge can communicate with the Origin, but in practice each Edge refers to other Edges first to obtain the content needed to fulfil the viewer request.

This requires appropriate network connectivity between Edges, and software that is designed to work in this way. This “mesh-network” approach can then be used intelligently either to stream files from one Edge through another Edge and on to the viewer, or to copy files from one Edge to another ready for subsequent viewers. It minimizes core network requirements and opens up the opportunity for highly distributed, highly efficient network architectures that can scale for large audiences very quickly.
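To make the request flow concrete, here is a minimal Python sketch of the peer-first lookup described above. The class and method names (EdgeNode, get_segment, fetch_from_origin) are illustrative assumptions, not the API of any particular CDN software.

```python
# Illustrative sketch of the peer-first ("mesh-network") lookup order:
# local cache -> peer Edges -> Origin. Names are hypothetical.

class EdgeNode:
    def __init__(self, name, peers=None):
        self.name = name
        self.peers = peers or []   # other Edge nodes reachable over the mesh
        self.cache = {}            # locally cached video segments

    def get_segment(self, segment_id):
        # 1. Serve from the local cache if the segment is already here.
        if segment_id in self.cache:
            return self.cache[segment_id]

        # 2. Ask peer Edges before touching the Origin.
        for peer in self.peers:
            data = peer.cache.get(segment_id)
            if data is not None:
                self.cache[segment_id] = data   # keep a copy for later viewers
                return data

        # 3. Fall back to the Origin only when no Edge holds the segment.
        data = self.fetch_from_origin(segment_id)
        self.cache[segment_id] = data
        return data

    def fetch_from_origin(self, segment_id):
        # Placeholder for the real Origin request.
        return f"origin-data-for-{segment_id}"


# Example: the second request is served from a peer, not the Origin.
edge_a = EdgeNode("edge-a")
edge_b = EdgeNode("edge-b", peers=[edge_a])
edge_a.get_segment("movie-123/seg-001")         # fetched from Origin
print(edge_b.get_segment("movie-123/seg-001"))  # fetched from edge-a's cache
```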

Tuning infrastructure specifically for video is useful because it can energy-optimize the overwhelming majority of internet-based content delivery. About 80% of total internet traffic is video, and about 80-90% of that is VOD content. Video-specific servers can have simpler hardware and different deployment locations than servers handling website delivery and other high-volume, low-bandwidth use cases like e-commerce and interactive gaming. The simpler hardware reduces energy consumption, and given the proportion of total internet bandwidth consumed by video, there is a good argument for giving video its own “high occupancy vehicle lane”.

Correctly locating this video delivery capacity means placing Edge servers as close to consumers as is appropriate to achieve the required energy efficiency, delivery performance, and cost. In a full-scale streaming environment, the bulk of consumer viewing takes place inside homes during the evenings. The map of the population therefore tells us where bandwidth is needed. Just as with the original telephone exchange locations, serving anyone in the population at any time means building out the right infrastructure where people live.

While these three design principles help deliver video efficiently at full scale, an important side-benefit of this approach is that bandwidth for the rest of the internet’s traffic improves, because the centralized servers that reside at peering points or further back in ISP core networks are freed up.

Edge Utilization

The Edge already performs multiple functions for VOD and Live video delivery – this article highlights 7 specific functions. But because video viewership has very consistent peaks and troughs during a day, if Edge capacity is built out to deliver for peak, then at off-peak times we have a choice. We either switch off unused capacity to avoid unnecessary energy use, or we use the infrastructure for tasks that might otherwise be performed in another computing environment, such as transcoding video files. Whichever choice is made, the goal is to continuously streamline the overall video infrastructure so it is used more intelligently and more efficiently.
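As a simple illustration of that off-peak choice, the sketch below picks an action per hour for one Edge site based on a forecast utilization figure. The thresholds, the transcoding hand-off, and the function name plan_hour are hypothetical assumptions for illustration only.

```python
# Hypothetical hourly policy for one Edge site: keep serving at or near peak,
# lend spare capacity to transcoding when partly idle, power down spare
# nodes when nearly idle. Thresholds are illustrative, not from the article.

def plan_hour(forecast_utilization, busy_threshold=0.5, idle_threshold=0.05):
    """Choose an action for one hour, given the expected fraction
    of Edge capacity in use (0.0 to 1.0)."""
    if forecast_utilization >= busy_threshold:
        return "serve_video_only"
    elif forecast_utilization >= idle_threshold:
        return "serve_video_and_transcode"   # reuse spare cycles for VOD transcoding
    else:
        return "power_down_spare_nodes"      # avoid unnecessary energy use

# Example 24-hour forecast: overnight trough, evening peak.
forecast = [0.03, 0.02, 0.02, 0.02, 0.03, 0.05, 0.10, 0.20, 0.30, 0.30, 0.30, 0.30,
            0.35, 0.35, 0.40, 0.45, 0.50, 0.60, 0.80, 0.95, 0.90, 0.70, 0.40, 0.10]
for hour, utilization in enumerate(forecast):
    print(hour, plan_hour(utilization))
```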

Any student of the history of industrialization will remember that the desire for greater efficiency has always led to a push for economies of scale, which basically require achieving high levels of resource utilization. In manufacturing industries, higher resource utilization means producing higher volumes of output from the same capacity. This spreads a fixed manufacturing process cost over more units of output, which reduces the production cost per unit.

Process Cost / Number of Outputs = Unit Cost

$1,000 / 100 outputs = unit cost of $10

Vs

$1,000 / 1,000 outputs = unit cost of $1 >>> a 10x efficiency improvement and a 10x unit cost reduction
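The same arithmetic as a tiny Python helper, using only the numbers from the example above:

```python
# Unit cost falls as the same process cost is spread over more output.

def unit_cost(process_cost, outputs):
    return process_cost / outputs

print(unit_cost(1000, 100))    # 10.0 -> $10 per unit
print(unit_cost(1000, 1000))   # 1.0  -> $1 per unit: 10x more output, 10x lower unit cost
```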

Achieving economies of scale through capacity utilization improvement leads to a lower price per unit, which generally means lower costs to buyers, increased sales, and higher profits to shareholders. The virtuous circle of economies of scale begins.

Streaming video delivery networks – our “streaming infrastructure” – can take a similar path to yield positive economic results for the Media industry. Infrastructure that is utilized heavily on a regular basis (e.g., every day) can provide the lowest possible cost per unit of delivery (in video streaming terms, per gigabyte (GB) of video delivered). If 100 Gbps of streaming video capacity is used continuously, 24 hours per day, every day for a month, it will deliver about 32 PB of content. This is much more cost-effective than 100 Gbps of capacity used for just 1 hour per day, which would deliver only about 1.35 PB in a month. Yet the cost of the infrastructure in both scenarios is the same, because the 100 Gbps of capacity needs to be set up and maintained with the necessary servers, routers, switches, and network connections.

The obvious argument is that the unit cost in the latter case would be lower if other streamers used the spare capacity. This is true, but daily viewing follows a stubbornly consistent pattern, and “filling up the capacity with some content delivery” is not as easy as it sounds. Delivering video into different time zones where viewing time is different is one idea (e.g., peak viewing on the US East Coast at 9pm is 2am in the UK, when UK-based capacity would be idle for the local audience), but it has the obvious drawback that the consumers would be a long way from the Edge, and delivery performance would suffer. Capacity planning scenarios for full-scale streaming must consider local content viewing patterns.
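A quick back-of-envelope check of those utilization figures, assuming decimal units and a 30-day month (the Gbps-to-GB/s conversion and the month length are my assumptions, not stated in the text):

```python
# 100 Gbps = 12.5 GB/s. Delivered volume scales linearly with hours of use.

CAPACITY_GBPS = 100
GB_PER_SECOND = CAPACITY_GBPS / 8     # 12.5 GB per second
DAYS_PER_MONTH = 30

def petabytes_per_month(hours_per_day):
    seconds_used = hours_per_day * 3600 * DAYS_PER_MONTH
    gigabytes = GB_PER_SECOND * seconds_used
    return gigabytes / 1_000_000      # GB -> PB (decimal)

print(petabytes_per_month(24))   # ~32.4 PB at continuous full utilization
print(petabytes_per_month(1))    # ~1.35 PB at one hour per day
```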

To illustrate two different but typical video delivery scenarios, the first chart below shows a Streamer delivering a fairly consistent amount of content every day. This could be a national broadcaster, for example, with daily news, talk-shows, soap operas, etc. While there are periods of idle capacity every day, the capacity is used very frequently. This particular graph shows a peak of about 1.4 Tbps every day, with about 125 PB of content delivered over the period shown.

The second graph shows the very different profile of a live sports streamer. On the assumption that the same amount of content is being delivered, this example shows that about 13 Tbps of capacity is required to deliver what the consistent streamer delivered with a peak of about 1.4 Tbps.
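A rough comparison of how hard each Tbps of provisioned peak capacity works in the two profiles, using the figures quoted above (125 PB delivered, peaks of roughly 1.4 Tbps and 13 Tbps):

```python
# Content delivered per Tbps of provisioned peak capacity, for the two
# profiles described in the text. Figures are taken from the charts above.

def pb_per_tbps_of_peak(petabytes_delivered, peak_tbps):
    return petabytes_delivered / peak_tbps

consistent_streamer = pb_per_tbps_of_peak(125, 1.4)   # ~89 PB per Tbps of peak
live_sports_streamer = pb_per_tbps_of_peak(125, 13)   # ~9.6 PB per Tbps of peak

print(round(consistent_streamer, 1), round(live_sports_streamer, 1))
print(f"The spiky profile works its peak capacity ~"
      f"{consistent_streamer / live_sports_streamer:.0f}x less hard.")
```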

Flexible capacity creates greater utilization potential. If one level of capacity is available all the time for regular use that heavily utilizes it, and another level of capacity is available on-demand for irregular and less frequent use, then this should provide the most cost-effective and efficient use of capacity. But the concepts of cost-effective and efficient might actually be misaligned in video streaming delivery. Efficiency might mean the best use of the capacity when it is available, while cost-effective could mean the lowest-cost capacity.

Using a lot of capacity for short periods of peak viewership might not be cost-effective – it depends on how big the peaks are, how frequent or infrequent they are, and how video streaming delivery performs during them. Serving peaks could be very expensive if there is simultaneously a need for guaranteed delivery performance. Guaranteeing a lot of capacity at peak could be expensive to procure simply because of higher market demand for a finite supply of capacity, and if delivery capacity is not guaranteed and performance relies on best efforts and luck, then there is an increased chance of performance issues. This can easily lead to profit-impacting outcomes like subscriber churn, customer complaints, and reduced viewing times. The concept of cost-effective therefore needs to be seen in the context of performance and the business results that performance generates. It could be more cost-effective to pay a premium for guaranteed capacity for your entire audience to achieve the best possible QoE and financial results.

Guaranteeing delivery capacity to achieve a streamer’s profitability objectives does not need to be a burden carried only by a single streamer in a market. It is possible to build out capacity specifically for multiple broadcasters to share, which makes sense as their audience shifts between them from hour to hour and day to day. Through industry-level collaboration there is a way to achieve the ideal balance of latency, QoE, network usage, and total cost of delivery.
