CDN For Live And VOD

CDNs are much more than just high-speed links between Origins and ISP (Internet Service Provider) networks. Instead, they form a complete ecosystem of storage and processing, and they create new possibilities for highly efficient streaming at scale that will likely replace broadcasting technologies. In this article we look at the different workloads for Live and VOD to better understand how each operates.


This article is part of 'The Big Guide To OTT - The Book'

A typical mix of content delivery for an OTT service from a large public or commercial Broadcaster that delivers both Live and VOD is about 80-90% VOD and 10-20% Live. The CDN workloads in each scenario are different, with important ramifications for the technology.

Figure 1 – Live and VOD workloads place different pressure on unified Cache servers.

For Live content, large audiences watch the same content at the same time. Inside the CDN, this workload places pressure on server memory, CPU, and network ports to sustain the stream and the bitrate. A CDN Edge Cache operates as a proliferation mechanism for the same live stream – e.g., one 5 Mbps HLS stream in, 1,000 x 5 Mbps HLS streams out – so the job is to sustain egress with low latency to every single viewer.
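
As a rough illustration of why egress, not ingress, is the constraint, the fan-out described above can be expressed as a simple calculation. The viewer counts are the example figures from the text and the function name is purely illustrative.

```python
# Hypothetical fan-out calculation for a live Edge Cache: one ingested
# rendition is served to many concurrent viewers, so egress capacity,
# not ingress, is the limiting factor on the server's network ports.

def live_egress_gbps(bitrate_mbps: float, concurrent_viewers: int) -> float:
    """Total egress in Gbps for one live rendition served to N viewers."""
    return bitrate_mbps * concurrent_viewers / 1000

# Example from the text: one 5 Mbps HLS rendition, 1,000 viewers.
print(live_egress_gbps(5, 1_000))    # 5.0 Gbps of egress from 5 Mbps of ingress
# Scaling the same cache to 20,000 viewers would need 100 Gbps of ports.
print(live_egress_gbps(5, 20_000))   # 100.0 Gbps
```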

On the other hand, the VOD workload is driven by individual consumers watching varied content at different times. Pressure is placed on Storage, CPU, and streaming algorithms at both Origin and CDN levels. Not only must the stream of a single VOD file be sustained, this must be achieved for potentially hundreds or thousands of discrete files delivered to unique viewers.
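
To illustrate the contrast, a back-of-the-envelope sizing sketch shows how VOD pressure accumulates across many distinct assets rather than one fan-out. All figures and the function name are illustrative assumptions, not recommendations.

```python
# Illustrative (not prescriptive) VOD cache sizing: unlike Live, the
# pressure comes from the number of distinct assets held in storage and
# served concurrently, not from fanning out a single stream.

def vod_cache_size_tb(assets: int, avg_duration_min: float,
                      avg_bitrate_mbps: float, renditions: int) -> float:
    """Approximate storage (TB) to cache a set of VOD assets in all renditions."""
    seconds = avg_duration_min * 60
    bytes_per_asset = seconds * avg_bitrate_mbps * 1e6 / 8 * renditions
    return assets * bytes_per_asset / 1e12

# e.g. caching the 2,000 most popular titles, 45 minutes each, at an
# average of 4 Mbps across an ABR ladder of 6 renditions:
print(round(vod_cache_size_tb(2_000, 45, 4, 6), 1), "TB")  # ~16.2 TB
```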

Edge Computing

Edge computing has been a hot topic over the last few years as OTT video and general use of public cloud services have expanded. For OTT, edge computing can be defined as processing video into its final delivery format at the CDN Edge. Specifically, this involves delivering from the Origin to the Edge in a mezzanine format, like CMAF, and then packaging at the Edge into the format the client device requires, like HLS, DASH or MSS. But where is the Edge?
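
Below is a minimal sketch of the just-in-time packaging idea, under the assumption that the Edge holds a single CMAF/fMP4 mezzanine copy of each segment and wraps it into whatever format the client requests. The function names and protocol labels are placeholders; a real deployment would use a packager such as Shaka Packager or Bento4 and would also handle manifests and encryption.

```python
# Sketch only: just-in-time packaging at the Edge from a CMAF mezzanine.
from typing import Dict

CMAF_CACHE: Dict[str, bytes] = {}   # segment_id -> mezzanine fMP4 bytes

def fetch_from_origin(segment_id: str) -> bytes:
    """Placeholder: pull the mezzanine segment from the Origin (or a shield cache)."""
    raise NotImplementedError

def transmux_to_ts(cmaf_segment: bytes) -> bytes:
    """Placeholder: repackage fMP4 into MPEG-TS for legacy HLS clients."""
    raise NotImplementedError

def serve_segment(segment_id: str, protocol: str) -> bytes:
    """Serve one segment in the delivery format the requesting client needs."""
    cmaf = CMAF_CACHE.get(segment_id)
    if cmaf is None:                        # cache miss: a single pull from the Origin
        cmaf = fetch_from_origin(segment_id)
        CMAF_CACHE[segment_id] = cmaf

    if protocol in ("HLS-CMAF", "DASH"):    # both can carry the fMP4 segment directly
        return cmaf
    if protocol == "HLS-TS":                # older HLS players expect MPEG-TS
        return transmux_to_ts(cmaf)
    raise ValueError(f"unsupported protocol: {protocol}")
```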

In theory, the Edge should be as close to the consumer as possible. In a perfect world, every consumer device would function as an Edge device. While peer-to-peer networking offers some potential here, at least for Live use cases, there is no viable consumer-device-level solution for VOD content.

One step further away from the consumer is the telco’s access network, where there are street cabinets, mobile masts, and the original telephone exchanges. While these may become Edge locations in future, today the volume of locations and the volume of video do not justify the expense. In addition, access networks generally aim to be data-agnostic, with video as just one form of data. Over time this may change as we move to more advanced forms of video delivery, like immersive viewing through virtual reality and holographics.

The next step moving away from the consumer towards the Origin is the ISP’s core network. This is the first opportunity for a CDN to become ISP-specific, and because the centralized core network serves all data delivery needs, offloading video traffic where possible helps the ISP save core network bandwidth. This is the current focus for Edge Cache placement for the largest OTT service providers. However, too many caches in the core network can create unwanted technical and operational complexity for the ISP. Even so, given the disproportionate share of total traffic attributable to video, the trend towards ISP-based edge caching is strong.

This is why the Edge, for the most part, is currently deployed in Internet Exchange locations – the “meet me room” for the ISPs and the OTT service providers (via the CDN service providers). Even so, processing at the Edge is not yet the norm. But it is coming, because it is based on the solid principle of pull-system efficiency. Edge computing will help reduce bandwidth requirements between the Origin and the Edge, but it will put more pressure on the Edge as it adds an 8th function (see the prior article for the first 7 functions of the Edge): managing just-in-time packaging and encryption. The business trade-off for edge computing is between network cost, CDN cost and server cost. In the end, each OTT network topology will be evaluated on a case-by-case basis to find the optimal approach.
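
The trade-off can be framed as a simple comparison: mid-mile bandwidth saved by shipping one mezzanine copy instead of every packaged variant, versus the extra Edge compute spent repackaging on request. The prices and volumes below are hypothetical placeholders, purely to show the shape of the calculation.

```python
# Illustrative only: comparing Origin->Edge transfer avoided by sending a
# single CMAF copy against the extra Edge CPU cost of just-in-time packaging.

def midmile_saving_per_month(gb_delivered: float, variants_avoided: int,
                             cost_per_gb: float) -> float:
    """Cost of mid-mile transfer avoided by shipping one format instead of many."""
    return gb_delivered * variants_avoided * cost_per_gb

def jit_packaging_cost_per_month(requests: float, cpu_cost_per_million: float) -> float:
    """Extra Edge compute spent repackaging segments on the fly."""
    return requests / 1e6 * cpu_cost_per_million

saving = midmile_saving_per_month(gb_delivered=500_000, variants_avoided=2,
                                  cost_per_gb=0.01)
extra = jit_packaging_cost_per_month(requests=2e9, cpu_cost_per_million=0.10)
print(f"saving ${saving:,.0f}/month vs extra ${extra:,.0f}/month of Edge compute")
```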

The Appearance Of The iCDN

CDNs are transitioning from being a series of interconnected computers to a series of interconnections. The difference may sound subtle, but it is fundamental. It means that instead of using centralized brains with “dumb” pipes, we will use distributed brains with “actively engaged” pipes. Greater interconnectedness will create greater intelligence, and see the rise of the iCDN.

Today CDNs can be described as the HOV (High Occupancy Vehicle) lanes of the multi-lane internet – they give a faster, less congested, more managed route for data delivery than the unmanaged public internet. As OTT traffic grows, we generally think about building more and bigger HOV lanes, with “exit ramps” closer and closer to the consumer. This is the natural progression of the pull-system – to place the final content delivery location closer to the point of consumption.

These HOV lanes are being supported by multiple access network expansions, including telco fibre-to-the-home roll-outs, CableLabs’ 10G programme in the cable industry, and 5G in the mobile industry. These major infrastructure changes will improve the customer experience of OTT video, but also pave the way for enhanced video experiences, like immersive viewing, that place new pressures on the network, in a continuous cycle of consumers making use of whatever capacity is made available to them.

In this context, OTT service providers need to think differently, as their traffic demands outstrip network supply. Deploying Edge Caches closer to the consumer is important, but as noted there are challenges. Leading CDN businesses have recognized this issue and are focused on making more intelligent use of network resources to reduce dependency on network supply. So, what is being done by these leading CDN businesses to create the iCDNs?

First, leading CDNs are interconnecting all their Edge servers, and then distributing content caching and processing across them. This reduces the traditional dependence on a single Edge Cluster or POP, while utilizing performance intelligence gathered from all parts of the CDN infrastructure. This more sophisticated approach is superseding the traditional CDN “acquirer server” method that is oriented around vertical and hierarchical content caching and distribution.
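
One common technique for spreading cached content across a mesh of interconnected Edge servers (an assumption here, not something the article specifies) is consistent hashing: each asset maps to a position on a hash ring, so adding or removing a node only remaps a small fraction of the content rather than reshuffling the whole cache hierarchy.

```python
# Sketch of consistent hashing across interconnected Edge servers.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes: int = 100):
        # Each node gets several virtual points on the ring to even out load.
        self._ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, asset_id: str) -> str:
        """Return the Edge server responsible for caching this asset."""
        idx = bisect.bisect(self._keys, self._hash(asset_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["edge-ams-1", "edge-lon-1", "edge-par-1"])
print(ring.node_for("vod/title-1234/seg-000042.m4s"))
```

The design point is that content placement is derived from the whole pool of interconnected servers, rather than being pinned to one vertical acquirer/Edge hierarchy.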

Second, leading CDNs are using performance data from all 5 service domains – the 4 QoS domains of hardware, software, network and stream, and the QoE domain of the client – in order to create a complete customer-centric view. This view becomes even more important as CDNs expand their workload for bigger audiences. All CDNs monitor software, hardware and streams, focusing on the metrics that are directly under their control. Some CDNs go beyond this to add 3rd party network data into a unified QoS view. But only the most advanced CDNs combine QoE and QoS for a complete view of performance.
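
A sketch of what such a unified, customer-centric record might look like, joining the four QoS domains with client-side QoE for a single session. The field names are illustrative assumptions, not a standard schema.

```python
# Sketch of a combined QoS + QoE record for one streaming session.
from dataclasses import dataclass

@dataclass
class QoSSample:
    cpu_util: float          # hardware: Edge server CPU utilisation (0-1)
    cache_hit_ratio: float   # software: cache efficiency (0-1)
    rtt_ms: float            # network: round-trip time towards the client
    egress_mbps: float       # stream: delivered throughput for the session

@dataclass
class QoESample:
    startup_time_s: float    # client: time to first frame
    rebuffer_ratio: float    # client: share of watch time spent buffering
    avg_bitrate_mbps: float  # client: average rendition actually played

@dataclass
class SessionView:
    """One customer-centric record joining server-side QoS and player QoE."""
    session_id: str
    qos: QoSSample
    qoe: QoESample
```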

Third, the iCDN takes data from the 5 domains and applies intelligent algorithms in real-time in order to predict quality issues and take proactive actions to avert them. This proactive approach based on all available information – which must be filtered to avoid data overload and slow decision-making – is the hallmark of the intelligent CDN and will be how OTT service providers assure QoE as their audiences grow. In summary, the iCDN is the next generation of CDN platform.
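
A deliberately simple sketch of that proactive behaviour: score each Edge node from a filtered set of QoS and QoE signals and steer new sessions away from nodes predicted to degrade. The weights and thresholds are illustrative assumptions; a real iCDN would derive them from its own data.

```python
# Sketch of proactive, data-driven session steering across Edge nodes.

def health_score(cpu_util: float, cache_hit_ratio: float,
                 rebuffer_ratio: float) -> float:
    """Higher is healthier; blends server-side QoS with observed client QoE."""
    return (1 - cpu_util) * 0.4 + cache_hit_ratio * 0.3 + (1 - rebuffer_ratio) * 0.3

def pick_edge(candidates: dict) -> str:
    """Route the next session to the healthiest candidate Edge node."""
    return max(candidates, key=lambda n: health_score(**candidates[n]))

edges = {
    "edge-lon-1": {"cpu_util": 0.92, "cache_hit_ratio": 0.95, "rebuffer_ratio": 0.04},
    "edge-lon-2": {"cpu_util": 0.55, "cache_hit_ratio": 0.90, "rebuffer_ratio": 0.01},
}
print(pick_edge(edges))   # edge-lon-2: steered away from the loaded node
```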

Figure 2 – Distributed Intelligence within CDNs is the future…and already available from leading CDNs.

The roadmap for CDNs is not just about supporting the pull-system principles with more powerful Edge Caches placed closer to the consumer, although this remains fundamentally important because it addresses the traffic volume challenge. The roadmap is also about making the Edge more interconnected and more intelligent in order to be more proactive, thereby also addressing the need to achieve optimal streaming efficiency.

OTT service providers should therefore look at their CDN strategy as a combination of placing Edge Caches in the optimal locations (e.g. peering points or inside ISP networks), choosing the optimal business model (i.e. public, private or hybrid CDN), and ensuring that the CDNs make maximum use of relevant data for intelligent, real-time stream routing. This strategic approach to scaling out content delivery capacity as streaming continues to grow will make a significant difference to both customer satisfaction and business profitability.
