Monitoring An IP World - OTT - Part 2
In this second instalment of our extended article on monitoring in OTT and VOD, we take a look at the core infrastructure and discuss how to analyze systems to guarantee that video, audio and metadata are reliably delivered through network and CDN infrastructures to the home viewer.
Distributed Servers
One method that will significantly improve efficiency is to move the origin servers closer to the viewers, and the edge servers, in effect, provide this solution. Situated within the ISPs (Internet Service Providers), the edge servers provide the multi-bit-rate streams, manifests and other housekeeping files needed for systems such as DASH and HLS to operate. Now, a single encrypted video and audio stream is distributed to each edge server, reducing the load on the origin servers and the internet backbone.
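To make the manifest concrete, the sketch below parses a minimal HLS master playlist to list the bit-rate ladder an edge server might advertise. The playlist text, stream names and bit rates are all hypothetical, and real playlists carry many more attributes; this is only a sketch of the structure DASH and HLS players rely on.

```python
# Minimal sketch: parse the variant streams from an HLS master playlist.
# The playlist text below is hypothetical; real edge servers serve these
# from the packager output.
import re

MASTER_PLAYLIST = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def parse_variants(playlist: str):
    """Yield (bandwidth_bps, uri) pairs for each variant stream."""
    lines = playlist.strip().splitlines()
    for info, uri in zip(lines, lines[1:]):
        match = re.search(r"BANDWIDTH=(\d+)", info)
        if match and not uri.startswith("#"):
            yield int(match.group(1)), uri

for bandwidth, uri in parse_variants(MASTER_PLAYLIST):
    print(f"{bandwidth / 1e6:.1f} Mbit/s -> {uri}")
```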
We must remember that the video and audio being streamed is not a continuous data stream in the traditional broadcasting sense. Instead, the streams are sent as small packets of video and audio using the TCP protocol to reliably deliver the data to the end receiver. Without this there would be no retransmission of lost packets, and data would simply be lost.
One of the consequences of packetizing the data is that it must be buffered throughout its transmission. This adds some latency but, more importantly, is a source of potential buffer overflow and underflow, resulting in lost packets. These packets can usually be recovered through the operation of TCP. It’s not much of an issue if this happens occasionally, but it is a real concern if buffer anomalies occur regularly.
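As a toy illustration of that buffering behavior, the following sketch models a fixed-size playout buffer fed by bursty arrivals and drained at a steady playback rate; all the numbers are invented. Sustained mismatch between the two rates drives the buffer into underflow or overflow:

```python
# Toy model of a playout buffer: packets arrive at a variable rate and are
# drained at the playback rate. Sustained mismatch causes underflow (stalls)
# or overflow (drops). All numbers are illustrative, not from a real system.
import random

BUFFER_CAPACITY = 100     # packets
DRAIN_PER_TICK = 10       # playback consumes a steady 10 packets per tick

buffer_level = 50
for tick in range(20):
    arrivals = random.randint(0, 20)             # bursty network delivery
    buffer_level = buffer_level + arrivals - DRAIN_PER_TICK
    if buffer_level < 0:
        print(f"tick {tick}: UNDERFLOW - playback would stall")
        buffer_level = 0
    elif buffer_level > BUFFER_CAPACITY:
        dropped = buffer_level - BUFFER_CAPACITY
        print(f"tick {tick}: OVERFLOW - {dropped} packets lost, TCP must resend")
        buffer_level = BUFFER_CAPACITY
```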
Figure 2 – each broadcast service is transcoded to provide multiple bit-rate streams, resulting in six more video and audio streams for DASH and HLS type services. If these are all streamed over the internet, additional data is unnecessarily transmitted, resulting in inefficient use of the internet and potential congestion. To avoid this, the transcoding function is placed at the edge servers.
OTT distribution is further challenged when we consider VOD and +1hr services. To avoid network congestion and overloading of the origin servers, the assets associated with these services are also placed on the edge servers. The edge servers still request information from the origin servers, but because they cache the video and audio, their requests are significantly reduced.
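The caching behavior described above can be sketched in a few lines. The fetch_from_origin() helper below is a hypothetical stand-in for a real HTTP request to the origin server; the point is simply that a thousand viewer requests for a popular segment produce a single origin request:

```python
# Minimal sketch of an edge server cache: segments are fetched from the
# origin only on a cache miss, so repeated viewer requests for popular VOD
# assets never touch the origin again. fetch_from_origin() is a stand-in
# for a real HTTP request to the origin server.
cache: dict[str, bytes] = {}
origin_requests = 0

def fetch_from_origin(segment_uri: str) -> bytes:
    global origin_requests
    origin_requests += 1
    return b"<segment payload>"          # placeholder payload

def get_segment(segment_uri: str) -> bytes:
    if segment_uri not in cache:         # miss: go back to the origin once
        cache[segment_uri] = fetch_from_origin(segment_uri)
    return cache[segment_uri]            # hit: served entirely from the edge

for _ in range(1000):                    # a thousand viewers, one origin fetch
    get_segment("vod/episode1/seg42.ts")
print(f"origin requests: {origin_requests}")   # -> 1
```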
Again, it’s worth remembering that the CDN doesn’t just define a high-capacity network link but also includes the storage servers, transcoders and packetizers. Even from this simple example it can be seen that, although the introduction of the CDN has greatly improved the efficiency of OTT distribution and the quality of experience for the viewer, the price we pay is increased system complexity.
Monitoring Necessities
Monitoring brings order to complex systems. Through monitoring we can better understand what is going on deep within a system. This is even more important in OTT as CDNs, ISPs and networks are often provided by different vendors. CDNs share their network bandwidth and infrastructure among several clients. Although data-rate shaping potentially protects clients from the effects of bursty data from other contributors, there is still the possibility that one client may take more than their fair share of capacity, resulting in lost packets and a break-up of service for other clients.
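Data-rate shaping of this kind is commonly implemented with a token-bucket policer, although the exact mechanism varies by CDN. A simplified sketch, with all rates and bucket sizes invented, shows how one client’s bursts are capped so they cannot starve the others:

```python
# Simplified token-bucket shaper: each client is granted capacity at a fixed
# rate up to a burst limit, so one client's bursts cannot starve the others.
# Rates and bucket sizes here are illustrative only.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # sustained rate this client is sold
        self.capacity = burst_bits    # how large a burst is tolerated
        self.tokens = burst_bits

    def refill(self, elapsed_s: float) -> None:
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_s)

    def try_send(self, bits: float) -> bool:
        """Admit the traffic if tokens remain, otherwise queue or drop it."""
        if bits <= self.tokens:
            self.tokens -= bits
            return True
        return False

client = TokenBucket(rate_bps=50e6, burst_bits=10e6)
client.refill(0.1)                    # 100 ms elapsed
print(client.try_send(8e6))           # within the burst allowance -> True
print(client.try_send(8e6))           # bucket exhausted -> False
```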
As we move to OTT it soon becomes evident that monitoring has moved on significantly from just confirming that the video and audio meet the relevant specifications. We must now consider the transport layer too, including the IP protocols. We have done this in the past, as RF can be considered a transport layer; the difference now is the complexity involved at both the system level and the data-link level, stemming from a plethora of OTT protocol types and audio/video codecs.
If a broadcaster starts receiving reports of poor quality of service in a particular region, they could justifiably assume that a problem has occurred in a specific feed from a CDN. Placing monitoring before and after the CDN would confirm where the problem is occurring. It might also be the edge servers causing problems, but the broadcaster will be able to see quickly whether the feed from the CDN to the edge servers is correct or not.
More Than Video And Audio
Analyzing the validity and frequency of the manifest and housekeeping files is critical to making sure a viewer can watch their program. Without the manifest files, the viewer’s device will not know where the variable bit-rate streams are located, and consequently will not know which stream to select, leaving the viewer unable to watch their program.
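A monitoring probe for this could be as simple as polling the manifest and checking that it both parses and keeps advancing. The sketch below assumes a hypothetical edge URL and HLS-style playlists; a stalled media sequence number is treated as a stale manifest:

```python
# Minimal manifest probe: fetch a live HLS media playlist on an interval and
# alarm if it fails to parse or stops advancing (a stale manifest strands the
# player on old segments). The URL, interval and thresholds are hypothetical.
import time
import urllib.request

MANIFEST_URL = "https://edge.example.com/live/mid/index.m3u8"  # hypothetical
POLL_INTERVAL_S = 6        # roughly one segment duration
MAX_STALE_POLLS = 3        # alarm after ~3 polls with no new segments

def fetch_manifest(url: str) -> str:
    with urllib.request.urlopen(url, timeout=5) as response:
        return response.read().decode("utf-8")

last_sequence, stale_polls = None, 0
for _ in range(100):       # bounded run for the sketch; a real probe loops forever
    body = fetch_manifest(MANIFEST_URL)
    if not body.startswith("#EXTM3U"):
        print("ALARM: manifest is not valid HLS")
    # The media sequence number advances as new segments are published.
    sequence = next((line for line in body.splitlines()
                     if line.startswith("#EXT-X-MEDIA-SEQUENCE")), None)
    stale_polls = stale_polls + 1 if sequence == last_sequence else 0
    if stale_polls >= MAX_STALE_POLLS:
        print("ALARM: manifest has stopped updating")
    last_sequence = sequence
    time.sleep(POLL_INTERVAL_S)
```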
Installing monitoring probes deep inside the CDNs would provide reliable feedback on the inner workings of the CDN, helping the broadcaster quickly find any issues with their feeds. This provides distinct advantages for both the CDN provider and the broadcaster. It’s entirely possible that something has gone wrong at the broadcaster’s end and the CDN provider is being presented with data that cannot be displayed on the viewer’s device. Knowing this would be extremely useful.
Adding centralization to the monitoring further improves the efficiency of the system. Probes strategically placed deep inside the OTT network, as well as within the broadcast facility, can all be connected together. Not only does this provide a centralized monitoring facility, but it also gives the management software the opportunity to compare the measurements of all the probes in the system.
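One straightforward use of that cross-probe comparison is outlier detection: if one probe’s reading deviates sharply from the rest of the fleet, the fault is likely local to that point in the chain. In the sketch below, the probe names and segment download times are invented:

```python
# Sketch of centralized probe comparison: a probe whose measurement deviates
# sharply from the fleet points at a fault local to that part of the chain.
# Probe names and readings are invented for illustration.
from statistics import median

segment_download_ms = {
    "playout":      42.0,
    "origin":       44.5,
    "cdn-ingest":   43.1,
    "edge-london":  41.8,
    "edge-madrid": 310.0,   # this probe is struggling
}

fleet_median = median(segment_download_ms.values())
for probe, value in segment_download_ms.items():
    if value > 3 * fleet_median:    # simple threshold: 3x the fleet median
        print(f"ALERT: {probe} reads {value} ms vs fleet median {fleet_median} ms")
```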
Collaborative Monitoring
Centralized aggregation, analysis and visualization of monitoring data in a distributed system helps broadcasters understand problems that are occurring, as well as issues that have yet to materialize but are in the process of emerging. For example, the data rate of a link between an origin server and an edge server may increase even though the amount of streaming content has not. This could indicate significant packet loss, revealed by the number of resends.
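That heuristic can be expressed directly: compare the bits crossing the link with the bits of content actually being streamed, and treat a growing excess as suspected retransmission overhead. The counter values below are invented for illustration:

```python
# Sketch of the resend heuristic above: the link carries the content plus any
# TCP retransmissions, so link throughput climbing while the content bit rate
# stays flat points at packet loss. All counter values are invented.
def resend_overhead(link_bps: float, content_bps: float) -> float:
    """Fraction of link traffic beyond the content itself."""
    return (link_bps - content_bps) / content_bps

samples = [   # (link rate, content rate) in Mbit/s, per measurement window
    (102.0, 100.0),
    (105.0, 100.0),
    (131.0, 100.0),   # content unchanged, link traffic up ~30%
]

for link, content in samples:
    overhead = resend_overhead(link * 1e6, content * 1e6)
    if overhead > 0.10:   # >10% overhead: investigate for packet loss
        print(f"WARNING: {overhead:.0%} retransmission overhead suspected")
```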
OTT systems have delivered unparalleled levels of service for viewers. Achieving the high quality of service viewers not only expect, but demand, has resulted in OTT systems becoming incredibly complex. This is further exacerbated by the number of vendors and service providers involved in an OTT broadcast chain.
To help make sense of this complexity, broadcasters must not only understand the intricacies of OTT playout, such as CDNs, but must also invest heavily in connected monitoring systems to help them understand where issues affecting quality of service are either materializing or about to materialize.