Monitoring An IP World - OTT - Part 2
In this second instalment of our extended article on monitoring in OTT and VOD, we take a look at the core infrastructure and discuss how to analyze systems to guarantee that video, audio and metadata are reliably delivered through network and CDN infrastructures to the home viewer.
Distributed Servers
One method that will significantly improve efficiency is to move the origin servers closer to the viewers, and the edge servers, in effect, provide this solution. Situated within the ISPs (Internet Service Providers), the edge servers provide the multi-bit-rate streams, manifests and other housekeeping files needed for systems such as DASH and HLS to operate. Now, a single encrypted video and audio stream is distributed to the edge servers, reducing the load on the origin servers and the internet backbone.
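To make this concrete, the short sketch below shows how a player (or a monitoring probe) might discover the multi-bit-rate ladder an edge server advertises in an HLS master playlist. The URL is hypothetical and the parser is deliberately minimal; quoted attribute values containing commas (such as CODECS) are not handled here.

```python
# Minimal sketch: list the variant streams advertised in an HLS master
# playlist. The URL below is a hypothetical example, not a real endpoint.
import urllib.request

MASTER_URL = "https://edge.example-cdn.com/channel1/master.m3u8"  # hypothetical

def list_hls_variants(url: str) -> list[tuple[int, str]]:
    """Return (bandwidth_bps, playlist_uri) pairs from a master playlist."""
    text = urllib.request.urlopen(url, timeout=5).read().decode("utf-8")
    lines = text.splitlines()
    variants = []
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF"):
            # Attributes are comma separated, e.g. BANDWIDTH=2400000,...
            # (quoted values with embedded commas are ignored in this sketch)
            attrs = dict(kv.split("=", 1)
                         for kv in line.split(":", 1)[1].split(",")
                         if "=" in kv)
            bandwidth = int(attrs.get("BANDWIDTH", 0))
            uri = lines[i + 1].strip()  # the variant URI follows the tag line
            variants.append((bandwidth, uri))
    return sorted(variants)

if __name__ == "__main__":
    for bw, uri in list_hls_variants(MASTER_URL):
        print(f"{bw / 1e6:5.2f} Mb/s -> {uri}")
```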
We must remember that the video and audio being streamed is not a continuous data stream in the traditional broadcasting sense. Instead, the streams are small packets of video and audio that are sent using the TCP protocol to reliably deliver the data to the end receiver. Without this there would be no error recovery and lost data would stay lost.
One consequence of packetizing the data is that it must be buffered throughout its transmission. This adds some latency but, more importantly, is a source of potential buffer overflow and underflow resulting in lost packets. These packets can usually be recovered through the operation of TCP, which is not much of an issue if it happens occasionally, but becomes one if buffer anomalies occur regularly.
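As a toy illustration of the buffering problem, the sketch below models a playout buffer fed by bursty arrivals and drained at a constant rate, flagging the overflow and underflow events described above. All figures are illustrative assumptions.

```python
# Toy model: a playout buffer filled by bursty packet arrivals and drained
# at a constant play-out rate. Logging occupancy makes underflow (stalls)
# and overflow (lost data) visible. All numbers are illustrative only.
def simulate_buffer(arrivals_kbits, drain_kbits_per_tick, capacity_kbits):
    """Track buffer occupancy per tick and report under/overflow events."""
    level = 0
    events = []
    for tick, arrived in enumerate(arrivals_kbits):
        level += arrived
        if level > capacity_kbits:
            events.append((tick, "overflow"))   # excess data is lost
            level = capacity_kbits
        level -= drain_kbits_per_tick
        if level < 0:
            events.append((tick, "underflow"))  # the player would stall here
            level = 0
    return events

# Bursty delivery: nothing arrives for three ticks, then a large burst.
print(simulate_buffer([0, 0, 0, 9000] * 4,
                      drain_kbits_per_tick=1500, capacity_kbits=8000))
```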
Figure 2 – each broadcast service is transcoded to provide multiple bit-rate streams, resulting in six more video and audio streams for DASH and HLS type services. If these are all streamed over the internet, additional data is unnecessarily transmitted, resulting in inefficient use of the internet and potential congestion. To avoid this, the transcoding function is placed at the edge servers.
OTT distribution is further challenged when we consider VOD and +1hr services. To avoid network congestion and overloading of the origin servers, the assets associated with these services are also placed on the edge servers. The edge servers still request information from the origin servers but, as they cache the video and audio, their requests are significantly reduced.
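The sketch below illustrates this caching behaviour under assumed names: an edge cache answers repeat requests locally and only touches the origin on a miss, so a thousand viewer requests for a popular VOD segment collapse into a single origin fetch.

```python
# Minimal sketch of edge caching: serve repeat requests locally, fall back
# to the origin only on a miss. Names (fetch_from_origin, the asset id)
# are illustrative assumptions, not any real CDN's API.
class EdgeCache:
    def __init__(self, fetch_from_origin):
        self._store = {}                    # asset_id -> cached bytes
        self._fetch = fetch_from_origin     # callable that hits the origin
        self.hits = self.misses = 0

    def get(self, asset_id: str) -> bytes:
        if asset_id in self._store:
            self.hits += 1                  # served locally, origin untouched
            return self._store[asset_id]
        self.misses += 1                    # one origin request per asset
        data = self._fetch(asset_id)
        self._store[asset_id] = data
        return data

origin_requests = []
cache = EdgeCache(lambda a: origin_requests.append(a) or b"segment-bytes")
for _ in range(1000):                       # a thousand viewers, one asset
    cache.get("vod/episode1/seg42.ts")
print(len(origin_requests), "origin fetch(es) for",
      cache.hits + cache.misses, "viewer requests")
```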
Again, it’s worth remembering that a CDN isn’t just a high-capacity network link; it also includes the storage servers, transcoders and packetizers. Even from this simple example, it can be seen that although the introduction of the CDN has greatly improved the efficiency of OTT distribution and the quality of experience for the viewer, the price we pay for this is increased system complexity.
Monitoring Necessities
Monitoring brings order to complex systems. Through monitoring we can better understand what is going on deep within a system. This is even more important in OTT as CDNs, ISPs and networks are often provided by different vendors. CDNs share their network bandwidth and infrastructure among several clients. Although data-rate shaping potentially protects clients from the effects of bursty data from other contributors, there is still the possibility that one client may use more than their fair share of capacity, resulting in lost packets and a break-up of service.
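One common way of implementing the data-rate shaping mentioned above is a per-client token bucket, sketched below. Each client may burst up to its bucket depth, but its sustained rate is capped, protecting other tenants on the shared link. The figures are illustrative assumptions, not taken from any real CDN.

```python
# Hedged sketch of per-client rate shaping with a token bucket. A client
# can burst up to `burst_bytes`, but its long-term rate is capped at
# `rate_bytes_per_s`, so one tenant cannot starve the shared link.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate, self.capacity = rate_bytes_per_s, burst_bytes
        self.tokens, self.last = burst_bytes, time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False    # over-rate traffic would be queued or dropped

shaper = TokenBucket(rate_bytes_per_s=5_000_000, burst_bytes=1_500_000)
sent = sum(shaper.allow(1500) for _ in range(10_000))
print(f"{sent} of 10000 packets admitted in this instant")
```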
As we move to OTT it soon becomes evident that monitoring has moved on significantly from just confirming that the video and audio meet the relevant specifications. We must now consider the transport layer too, including the IP protocols. We have done this in the past, as RF can be considered a transport layer; the difference now is the complexity involved at both the system level and the data-link level, stemming from a plethora of options for OTT protocol types and audio/video codecs.
If a broadcaster starts receiving reports of poor quality of service in a particular region, then they could justifiably assume that a problem has occurred in a specific feed from a CDN. Placing monitoring before and after the CDN would confirm where the problem is occurring. It might also be the edge servers causing problems, but the broadcaster will be able to see quickly whether the feed from the CDN to the edge servers is correct or not.
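A before-and-after probe can be as simple as the sketch below: request the same segment from the broadcaster's origin and from the CDN edge, then compare status, latency and a payload hash. Both URLs are hypothetical examples.

```python
# Sketch of a before/after CDN probe: fetch the same segment from the
# origin and from the edge, then compare the results. URLs are hypothetical.
import hashlib
import time
import urllib.request

def probe(url: str) -> dict:
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read()
    return {
        "status": resp.status,
        "latency_ms": round((time.monotonic() - start) * 1000, 1),
        "sha256": hashlib.sha256(body).hexdigest(),
    }

origin = probe("https://origin.example-broadcaster.com/ch1/seg100.ts")
edge = probe("https://edge.example-cdn.com/ch1/seg100.ts")

if origin["sha256"] != edge["sha256"]:
    print("payload differs: fault lies between origin and edge")
elif edge["latency_ms"] > 3 * origin["latency_ms"]:
    print("edge delivery is slow: investigate the CDN path")
else:
    print("CDN feed to the edge looks correct")
```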
More Than Video And Audio
Analyzing the validity and frequency of the manifest and housekeeping files is critical to making sure a viewer can watch their program. Without the manifest files, the viewer's device will not know where the variable bit-rate streams are located, and consequently will not know which stream to select, leaving the viewer unable to watch their program.
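The sketch below shows one way a probe might check manifest frequency for a live HLS service: poll the media playlist and raise an alarm if its media sequence number stops advancing within roughly two target durations. The URL and the thresholds are assumptions for illustration.

```python
# Sketch of manifest-frequency monitoring: a live HLS media playlist must
# keep advancing, or players will stall. URL and timings are assumptions.
import time
import urllib.request

PLAYLIST_URL = "https://edge.example-cdn.com/ch1/1080p/index.m3u8"  # hypothetical

def media_sequence(url: str) -> int:
    """Return the playlist's current #EXT-X-MEDIA-SEQUENCE value."""
    text = urllib.request.urlopen(url, timeout=5).read().decode("utf-8")
    for line in text.splitlines():
        if line.startswith("#EXT-X-MEDIA-SEQUENCE:"):
            return int(line.split(":", 1)[1])
    raise ValueError("not a valid media playlist: no #EXT-X-MEDIA-SEQUENCE")

last = media_sequence(PLAYLIST_URL)
time.sleep(12)  # a little over two assumed 6-second target durations
if media_sequence(PLAYLIST_URL) == last:
    print("ALARM: manifest is stale - viewers' players will stall")
```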
Installing monitoring probes deep inside the CDNs would provide reliable feedback on the inner workings of the CDN, helping the broadcaster quickly find any issues with their feeds. This provides distinct advantages for both the CDN provider and the broadcaster. It's entirely possible that something has gone wrong at the broadcaster's end and the CDN provider is being presented with data that cannot be displayed on the viewer's device. Knowing this would be extremely useful.
Adding centralization to the monitoring further improves the efficiency of the system. Probes strategically placed deep inside the OTT network, as well as within the broadcast facility, can all be connected together. Not only does this provide a centralized monitoring facility, but it also gives the management software the opportunity to compare the measurements of all the probes in the system.
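A centralized manager might compare probe measurements along the lines of the sketch below, flagging any probe whose reading deviates markedly from the fleet median. Probe names and the deviation rule are illustrative assumptions.

```python
# Sketch of centralized probe comparison: each probe reports its latest
# throughput, and the manager flags outliers against the fleet median.
from statistics import median

class MonitoringManager:
    def __init__(self, tolerance: float = 0.25):
        self.readings: dict[str, float] = {}  # probe name -> Mb/s measured
        self.tolerance = tolerance            # assumed 25% deviation rule

    def report(self, probe: str, throughput_mbps: float) -> None:
        self.readings[probe] = throughput_mbps

    def outliers(self) -> list[str]:
        mid = median(self.readings.values())
        return [p for p, v in self.readings.items()
                if abs(v - mid) > self.tolerance * mid]

mgr = MonitoringManager()
mgr.report("playout", 24.8)
mgr.report("cdn-ingress", 24.9)
mgr.report("edge-london", 24.7)
mgr.report("edge-manchester", 11.2)  # this probe disagrees with the others
print("probes needing attention:", mgr.outliers())
```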
Collaborative Monitoring
Centralized aggregation, analysis and visualization of monitoring data in a distributed system helps broadcasters understand problems that may be occurring, as well as issues that have yet to materialize but are in the process of emerging. For example, the data rate of a link between an origin server and an edge server may increase even though the amount of streaming content has not. This could indicate significant packet errors, revealed by the number of resends.
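One simple early-warning signal along these lines is the ratio of measured link throughput to the known content bit rate, as sketched below; a climbing ratio with unchanged content suggests retransmissions. The threshold is an assumption, not a standard figure.

```python
# Sketch of an early-warning check: compare measured link throughput with
# the known content bit rate. The 1.15 threshold is an assumed figure.
def resend_health(link_mbps: float, content_mbps: float,
                  threshold: float = 1.15) -> str:
    ratio = link_mbps / content_mbps
    if ratio > threshold:
        return (f"WARN: link carries {ratio:.2f}x the content rate - "
                "likely retransmissions, check for emerging packet loss")
    return f"OK: overhead ratio {ratio:.2f}"

# Content ladder totals 20 Mb/s; the origin-to-edge link is sampled hourly.
for hour, measured in enumerate([20.9, 21.1, 21.0, 24.6]):
    print(f"hour {hour}:", resend_health(measured, content_mbps=20.0))
```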
OTT systems have delivered unparalleled levels of service for viewers. Achieving the high quality of service viewers not only expect, but demand, has resulted in OTT systems becoming incredibly complex. This is further exacerbated by the number of vendors and service providers involved in an OTT broadcast chain.
To help make sense of this complexity, broadcasters must not only understand the intricacies of OTT playout, such as CDNs, but must also invest heavily in connected monitoring systems to help them understand where issues affecting quality of service are either materializing or about to materialize.