Evolving CDNs To Improve OTT

To attract and retain an audience, OTT services must provide an excellent customer experience by delivering content at the highest possible quality. Service must be smooth, uninterrupted and in the resolution required by the customer. This is one of the key ways OTT will continue to attract more viewers and fulfil its potential of becoming the mainstream method of media consumption.


CDNs – Content Delivery Networks – are at the heart of this drive for excellent quality. CDNs were born as the internet grew up, originally designed to manage the movement of files between connected servers. A CDN created a managed pathway for the most efficient delivery of content over the internet. In essence, the CDN was the “high occupancy vehicle lane” of the internet – allowing traffic to travel a faster route because that route was actively managed.

As video streaming emerged, the HTTP technology that was built for fast file exchange was adapted to accommodate streaming video. For VOD, this was fairly straightforward as it built on similar “file transfer” concepts. For Live video, the challenges have been more fundamental as HTTP was not originally conceived for frequently updated, low latency video delivery.

That said, latency and performance have reached very acceptable standards, largely matching the benchmarks set by satellite, cable and IPTV delivery methods. But that is only a small part of what the CDN is doing to assure an excellent OTT customer experience.

This article looks at the CDN’s complete set of responsibilities, and how CDNs are evolving.

CDN Functions

The functions performed by the CDN are varied; the seven core functions are described below.

The first function is to manage ingress and egress of the video content, moving it from the Origin to the final ISP Network that transports it onwards to the IP-connected device. The CDN will contain at least one layer of Cache server – the Edge Cache – to perform this function. Often the CDN will contain multiple layers of Intermediate Cache servers that “buffer” content between the Origin and the Edge Caches.
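To make the pull-through behaviour of this hierarchy concrete, here is a minimal sketch in Python (the class and method names are illustrative only, not any vendor’s software): an Edge Cache serves from its local store when it can, and otherwise pulls from an Intermediate Cache, which in turn pulls from the Origin.

```python
# Minimal sketch of a pull-through cache hierarchy (hypothetical names).
class Origin:
    def fetch(self, key):
        return f"content-bytes-for-{key}"          # authoritative copy

class Cache:
    def __init__(self, upstream):
        self.upstream = upstream                    # Intermediate Cache or Origin
        self.store = {}                             # locally cached objects

    def fetch(self, key):
        if key not in self.store:                   # cache miss: pull from upstream
            self.store[key] = self.upstream.fetch(key)
        return self.store[key]                      # cache hit (or newly filled)

origin = Origin()
intermediate = Cache(upstream=origin)
edge = Cache(upstream=intermediate)

# First request travels Edge -> Intermediate -> Origin; repeats are served at the Edge.
print(edge.fetch("movie-123/segment-42.ts"))
```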

Related to this, the second function is to initiate and manage streaming sessions. Whether live or VOD, each unicast stream must be monitored and managed. Every stream can be characterized by dozens of stream parameters such as IP address, device type and bitrate.
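As an illustration only (the field names below are hypothetical, not a standard schema), a per-stream session record might look something like this:

```python
# Hypothetical per-stream session record; real CDNs track dozens of such parameters.
from dataclasses import dataclass

@dataclass
class StreamSession:
    session_id: str
    client_ip: str
    device_type: str      # e.g. smart TV, mobile, set-top box
    content_id: str
    is_live: bool
    bitrate_kbps: int     # currently delivered ABR rendition

session = StreamSession("abc-123", "203.0.113.7", "smart-tv",
                        "movie-123", False, 5000)
print(session)
```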

The third function is to store content. Storage varies from long-term storage of a VOD asset, like a film, to shorter-term storage of a catch-up TV asset, like yesterday’s soap opera episode. Storage also includes the short-term storage of live content that can be rewound, which is often held in memory rather than in storage. This multi-faceted storage function is why Cache storage is high-performance and (relatively) low capacity.
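A rough sketch of how those storage tiers might be described (the tier names and retention windows are hypothetical, chosen only to mirror the examples above):

```python
# Hypothetical mapping of content types to the storage behaviour described above.
STORAGE_POLICY = {
    "vod":         {"tier": "disk",   "retention": "months-years"},   # e.g. a film
    "catch-up":    {"tier": "disk",   "retention": "days-weeks"},     # e.g. yesterday's episode
    "live-rewind": {"tier": "memory", "retention": "minutes-hours"},  # rewindable live buffer
}

for content_type, policy in STORAGE_POLICY.items():
    print(f"{content_type}: store on {policy['tier']}, keep for {policy['retention']}")
```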

Figure 1 – A CDN contains layers of Caching servers, performing 7 core functions.

Related to storage, the fourth function is content management. Managing what stays in storage and what is deleted is a continuous process. Generally, Caches use a simple First In First Out model, with the oldest content being deleted first. However, because the CDN is a pull system, any item that is re-requested is delivered and then immediately moved back to the top of the list – in effect, a least-recently-used policy. Managing content to be storage-efficient but also bandwidth-efficient (i.e., minimizing requests to the Origin) is an ongoing balancing act.
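That re-request behaviour can be captured in a minimal sketch (simplified to an object-count limit; real Caches evict on bytes of storage):

```python
# Minimal LRU-style eviction sketch; real caches evict on bytes, not object count.
from collections import OrderedDict

class EvictingCache:
    def __init__(self, max_items):
        self.max_items = max_items
        self.items = OrderedDict()          # oldest entries sit at the front

    def get(self, key, fetch_from_origin):
        if key in self.items:
            self.items.move_to_end(key)     # re-request: move to the "top of the list"
            return self.items[key]
        content = fetch_from_origin(key)    # pull system: fetch on miss
        self.items[key] = content
        if len(self.items) > self.max_items:
            self.items.popitem(last=False)  # evict the least recently requested item
        return content

cache = EvictingCache(max_items=2)
cache.get("a", lambda k: f"content-{k}")
cache.get("b", lambda k: f"content-{k}")
cache.get("a", lambda k: f"content-{k}")    # "a" is refreshed, so "b" is evicted next
cache.get("c", lambda k: f"content-{k}")
print(list(cache.items))                    # ['a', 'c']
```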

Load balancing is the fifth function. Normally this is performed across all the servers responsible for delivering the specific content. If, for example, ten Edge Caches are collectively delivering a live event to 100,000 people at an average bitrate of 5 Mbps, the 500 Gbps of Cache egress should be balanced across those servers. If each Edge Cache can stream 100 Gbps, then ideally each would be 50% loaded. Naturally this rarely happens perfectly, due to stream request location and network topology, but the Caches are continuously working to balance the load as evenly as possible. At a more granular level, a Cache server can also load-balance internally, typically across multiple CPUs. The Cache software should routinely balance across CPUs and servers for optimal performance.
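The arithmetic in that example can be checked in a few lines (the numbers are the illustrative ones above, not a real deployment):

```python
# Worked example from the text: 100,000 viewers at 5 Mbps over 10 Edge Caches.
viewers = 100_000
avg_bitrate_mbps = 5
edge_caches = 10
cache_capacity_gbps = 100

total_egress_gbps = viewers * avg_bitrate_mbps / 1000     # 500 Gbps in total
per_cache_gbps = total_egress_gbps / edge_caches          # 50 Gbps each, if perfectly balanced
utilisation = per_cache_gbps / cache_capacity_gbps        # 0.5 -> 50% loaded

print(f"Total egress: {total_egress_gbps:.0f} Gbps")
print(f"Per-cache load: {per_cache_gbps:.0f} Gbps ({utilisation:.0%} of capacity)")
```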

The sixth function is platform monitoring. All hardware, software and network paths should be monitored to detect outages and degradation, so that stream requests can be redirected appropriately. CDNs can be configured so that servers either fail over their streams seamlessly (by sharing matching IP addresses) or force a stream restart (by using unique IP addresses per server).
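A much simplified sketch of the monitor-and-redirect idea (the cache names and health states are hypothetical; real CDNs act on far richer telemetry):

```python
# Hypothetical health-based request steering; real monitoring uses richer telemetry.
caches = {
    "edge-01": {"healthy": True},
    "edge-02": {"healthy": False},   # degraded or failed: requests are steered away
    "edge-03": {"healthy": True},
}

def select_cache(preferred):
    if caches[preferred]["healthy"]:
        return preferred
    # Redirect to any healthy cache; with shared (matching) IPs this failover
    # can be seamless, while unique per-server IPs force a stream restart.
    return next(name for name, state in caches.items() if state["healthy"])

print(select_cache("edge-02"))   # -> edge-01
```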

And finally, the seventh function is scale-out management. Expanding streaming and storage capacity involves integrating new capacity into the CDN, typically while the CDN is in service. Bringing new Caches online – whether adding them to an existing cluster or to the CDN as a whole – needs to be seamless for the OTT service being delivered.
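One way to picture seamless scale-out (a hypothetical sketch, not any specific CDN’s mechanism): new capacity joins the pool and only newly arriving sessions are placed on it, so existing streams are untouched.

```python
# Hypothetical sketch: adding a Cache to an in-service pool without disturbing
# existing sessions; only newly arriving sessions land on the new capacity.
import itertools

class CachePool:
    def __init__(self, caches):
        self.caches = list(caches)
        self._placement = itertools.cycle(self.caches)   # naive round-robin placement

    def add_cache(self, name):
        self.caches.append(name)
        self._placement = itertools.cycle(self.caches)   # future sessions see the new cache

    def place_new_session(self):
        return next(self._placement)

pool = CachePool(["edge-01", "edge-02"])
print(pool.place_new_session())   # existing capacity keeps serving...
pool.add_cache("edge-03")         # ...while new capacity is brought online in-service
print([pool.place_new_session() for _ in range(3)])
```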

CDN Architectures

There are two basic CDN models – public and private. A public CDN is generally multi-tenanted, where server and network capacity are shared. A private CDN is generally dedicated to one OTT service provider, where server capacity and network port capacity are not shared.

Over the last ten years OTT service providers have largely used public CDNs. Many of the largest service providers have adopted a multi-CDN model, in which total traffic is balanced across multiple CDN service providers. This provides network resilience, pricing competition and, often, delivery reach into specific markets where one CDN may have a better presence than another.

The largest OTT service providers, most of which are VOD-centric, have largely adopted the private CDN model. Today, the next wave of large OTT service providers, offering a combination of Live and VOD content, is embracing the private CDN model to improve service latency and control.

But private CDNs are selected not only for improved performance and service control, but also to manage costs more effectively. Public CDNs are generally priced on a “Per GB of Output” basis, which is variable and increases in line with audience size, video bitrate and consumption time. Private CDNs, on the other hand, are generally priced on a “Per Gbps of Throughput” basis, which provides cost certainty. To illustrate, if an OTT service provider delivers to 10,000 people at a 5 Mbps average bitrate, they will egress at 50 Gbps from the CDN. If they do this for 1 hour, they will stream about 22,500 GB. If they do this for 2 hours, they will stream about 45,000 GB. The output doubles, while the throughput is constant. When an OTT service provider has regular or increasing consumption, the private CDN can offer big cost benefits versus the public CDN.
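The arithmetic behind this comparison can be reproduced in a few lines (a rough sketch that ignores protocol and packaging overheads):

```python
# Per-GB (output) vs per-Gbps (throughput) view of the same delivery.
viewers = 10_000
avg_bitrate_mbps = 5

throughput_gbps = viewers * avg_bitrate_mbps / 1000        # 50 Gbps, constant

def delivered_gb(hours):
    gigabits = throughput_gbps * hours * 3600              # Gbps x seconds
    return gigabits / 8                                    # bits -> bytes

print(f"Throughput: {throughput_gbps:.0f} Gbps")
print(f"Delivered in 1 hour: {delivered_gb(1):,.0f} GB")   # ~22,500 GB
print(f"Delivered in 2 hours: {delivered_gb(2):,.0f} GB")  # ~45,000 GB
```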

CDN Performance

One of the challenges for OTT service providers is that CDNs typically only offer service availability commitments, when OTT service providers want QoS (Quality of Service) and QoE (Quality of Experience) commitments. Given that CDNs are normally multi-tenant environments, are only one part of the OTT ecosystem, and are under enormous pressure to perform in the video use case, it is only natural that CDN service providers take this simple SLA position. So how do CDNs look at performance?

First, capacity. Storage capacity is relatively simple and is typically measured in Terabytes (TB) of usable storage deployed across the relevant Cache servers. Streaming capacity is measured in Gbps (Gigabits per second) – the largest OTT service providers regularly stream in Tbps to their peak audiences. We should refer to this capacity as “throughput”. It is normal today for OTT service providers to talk about Petabytes of content delivered in a day, week, month or year. Of course, this indicates consumption level, which is clearly what the OTT service provider cares about. But the CDN looks at its performance in terms of throughput, because this is the measure of streaming capacity: 1 million concurrent viewers streaming for 1 hour is a very different technical and operational situation from 200,000 people streaming for 5 hours, even though the consumption is identical.
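The throughput-versus-consumption distinction in that example can be made explicit (assuming the same 5 Mbps average bitrate used elsewhere in this article):

```python
# Same consumption, very different peak throughput (assumes 5 Mbps average bitrate).
avg_bitrate_mbps = 5

def peak_throughput_tbps(concurrent_viewers):
    return concurrent_viewers * avg_bitrate_mbps / 1_000_000

def consumption_pb(concurrent_viewers, hours):
    gigabits = concurrent_viewers * avg_bitrate_mbps / 1000 * hours * 3600
    return gigabits / 8 / 1_000_000           # Gb -> GB -> PB

for viewers, hours in [(1_000_000, 1), (200_000, 5)]:
    print(f"{viewers:>9,} viewers x {hours} h: "
          f"peak {peak_throughput_tbps(viewers):.1f} Tbps, "
          f"total {consumption_pb(viewers, hours):.2f} PB")
```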

Second, quality. QoS typically refers to metrics like average bitrate and cache failover, while QoE includes metrics like rebuffering ratio, start-up time and viewing time at peak bitrate. As a rule, OTT service providers focus on QoE metrics because they reflect the viewer’s actual experience. The OTT service provider deploys software in almost every client to capture QoE metrics and uses this information to switch traffic between CDNs (if a multi-CDN setup is in place). Typically, the CDN is focused on QoS metrics, but leading CDNs are moving to QoE to meet their customers’ expectations (see the next part of this series). This is shifting the balance from reactive client-side performance management to proactive CDN-side performance management.
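To illustrate the kind of client-side data QoE metrics are derived from (the event fields below are hypothetical, not any particular analytics SDK):

```python
# Hypothetical client-side playback summary used to derive QoE metrics.
events = {
    "requested_at_s": 0.0,
    "first_frame_at_s": 1.8,        # start-up time = 1.8 s
    "watch_time_s": 1800.0,
    "rebuffer_time_s": 9.0,
    "time_at_top_bitrate_s": 1500.0,
}

startup_time = events["first_frame_at_s"] - events["requested_at_s"]
rebuffering_ratio = events["rebuffer_time_s"] / events["watch_time_s"]
top_bitrate_share = events["time_at_top_bitrate_s"] / events["watch_time_s"]

print(f"Start-up time: {startup_time:.1f} s")
print(f"Rebuffering ratio: {rebuffering_ratio:.1%}")
print(f"Time at peak bitrate: {top_bitrate_share:.0%}")
```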
