Designing IP Broadcast Systems: System Monitoring

Monitoring is at the core of any broadcast facility, but as IP continues to play a more important role, the need to progress beyond video and audio signal monitoring becomes ever more pressing.

SDI and AES networks are synchronous by design, and a desirable side effect of this is predictable latency. This is a fundamental trait of synchronous networks: the bit clocks of the sender and receiver are both phase and frequency aligned, so delays are largely a function of cable propagation. Although broadcasters do measure parameters such as clock jitter, in general terms this is only an issue if something has gone very wrong. Normally, we can assume that the data link layer within the SDI and AES signal is robust and resilient. We cannot make this assumption about IP.
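
To put a number on that predictability, the latency of an SDI link can be estimated from the cable length alone. Here is a minimal sketch in Python, assuming a velocity factor of roughly 0.7 of the speed of light, which is typical for coaxial cable:

```python
# Hedged sketch: propagation delay of a synchronous SDI cable run.
# The 0.7 velocity factor is an assumption; real cables vary.

SPEED_OF_LIGHT_M_PER_S = 299_792_458
VELOCITY_FACTOR = 0.7  # fraction of c at which the signal travels

def propagation_delay_ns(cable_length_m: float) -> float:
    """One-way propagation delay of a cable run, in nanoseconds."""
    velocity = SPEED_OF_LIGHT_M_PER_S * VELOCITY_FACTOR
    return cable_length_m / velocity * 1e9

# A 100m run gives roughly 476ns, fixed and predictable for every bit.
print(f"{propagation_delay_ns(100):.0f} ns")
```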

One of the key advantages of IP is that it is data link independent. We can transfer IP packets over many different hardware networks, such as Ethernet, Wi-Fi or fiber, as the IP packets reside within the payload of the layer-2 frame. Specifications such as FDDI are built from the ANSI X3 standards, such as X3.182-1993 (single-mode fiber physical medium dependent), and provide all the necessary protocols to deliver the data payload to the IP packet's destination device. Key to this method is that specifications exist to transfer frames between complementary layer-2 networks, and also between networks that do not share the same medium. Through this interconnectivity, data payloads are transferred across multiple media, leading to the transfer of IP packets across multiple platforms.
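
The key idea, that an IP packet is simply the payload of a layer-2 frame, can be illustrated in a few lines of code. The following is a hedged sketch, assuming an untagged Ethernet II frame; VLAN-tagged frames and non-IPv4 EtherTypes are deliberately not handled:

```python
import struct

ETHERTYPE_IPV4 = 0x0800  # EtherType value identifying an IPv4 payload

def extract_ipv4_packet(frame: bytes) -> bytes | None:
    """Return the IPv4 packet carried inside an Ethernet II frame, if any.

    The 14-byte Ethernet header is destination MAC (6 bytes), source
    MAC (6 bytes) and EtherType (2 bytes); the IP packet is simply
    whatever follows. A different layer-2 medium would use a different
    header, but the IP payload inside would be unchanged.
    """
    if len(frame) < 14:
        return None
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != ETHERTYPE_IPV4:
        return None  # VLAN tag, IPv6 or some other payload type
    return frame[14:]
```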

The layer-2 data link method provides untold opportunities, as broadcasters are no longer limited to a single transport medium such as SDI or AES. However, with this flexibility comes great complexity, and the anomalies of each medium need to be understood and accounted for. Fundamental to this is that many data link infrastructures employ asynchronous networks: Ethernet, for example, is asynchronous, whereas FDDI can be configured to be synchronous. Although asynchronous networks are generally more flexible, they suffer from variable and indeterminate latency, whereas synchronous networks are highly deterministic and maintain latency within tight variance constraints.

It is worth remembering that television is still a synchronous system. Even if we were to remove SDI and AES, the video and audio remain synchronous. There are no moving pictures in television, just a series of still images played back very quickly to give the illusion of motion. These images must be played back at highly specified frame rates with little variance, otherwise the motion will appear jerky and cause viewers a certain amount of discomfort, to the point where they may well switch channels. Audio is similar, but any sample discontinuities in the time domain will result in audible squeaks and pops, again causing the viewer discomfort and even some distress.

So, in more generalized IT terms, we have a highly time-sensitive service being distributed over a time-variant, asynchronous transmission platform. And this is why we need monitoring.

Figure 1 - With no jitter the packets are evenly gapped, as expected from an ST 2110 video output; after being transferred through the network, however, the packets may well be temporally shifted, causing packet jitter. In the top diagram the spacing between packets, X, is constant, but in the lower diagram, with jitter, the spacings X, Y and Z all differ.

The quality of the video and audio is of primary importance for a broadcaster, and part of achieving this is making sure all the data packets arrive in good time at the receiving device, so that the frames of video and audio can be assembled and then displayed with high levels of precision. One way of achieving this is to monitor packet loss, as this has a direct effect on the video and audio. However, a more subtle issue is packet jitter and the effect it has on the quality of the video and audio.
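
ST 2110 essence flows are carried over RTP, whose 16-bit sequence number increments by one for every packet sent, so a receiver can count losses by watching for gaps. The sketch below illustrates the principle; it deliberately ignores re-ordered packets, which a production monitor would have to account for:

```python
class PacketLossCounter:
    """Counts missing RTP packets by watching for sequence number gaps."""

    SEQ_MODULUS = 65536  # RTP sequence numbers are 16 bits and wrap around

    def __init__(self) -> None:
        self.expected: int | None = None  # next sequence number we expect
        self.lost = 0

    def on_packet(self, seq: int) -> None:
        if self.expected is not None and seq != self.expected:
            # Any gap between expected and actual is counted as loss;
            # a re-ordered packet would be miscounted by this simple logic.
            self.lost += (seq - self.expected) % self.SEQ_MODULUS
        self.expected = (seq + 1) % self.SEQ_MODULUS
```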

In SDI and AES networks we are used to measuring clock jitter, but not packet jitter. This is mainly because these networks are synchronous, but we know that if the clock has excessive jitter then we will lose bits of data in the transport stream. In uncompressed video and audio networks, the loss of one or two bits every few hours is of little consequence as it is unlikely that anybody would notice. However, in a compressed network the manifestation of a lost bit can be severe due to the forward and backward predictors used in video compression. And this becomes even more of a problem when we look at distribution at the packet level.

Packet jitter is an inevitable consequence of asynchronous networks, as packets must be temporally shifted, often in memory buffers, to make space for other packets to be inserted. How much a packet is shifted is driven by the algorithms in network switches and routers, and by congestion on the network. But to maintain the synchronous time constraints of video and audio sampling, the packets must be reassembled within a specific time period. Jitter may well move some of these packets outside this period, meaning they cannot be decoded in time, resulting in video and audio distortion.
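
Receivers commonly quantify this with the interarrival jitter estimator defined in RFC 3550, the RTP specification on which ST 2110 is built. It smooths the variation in packet transit times with a gain of 1/16. A minimal sketch of one update step:

```python
def update_jitter(jitter: float, transit: float, prev_transit: float) -> float:
    """One step of the RFC 3550 running jitter estimate.

    transit is arrival_time minus the packet's RTP timestamp, in the
    same units for every packet; the absolute change in transit time
    between consecutive packets is smoothed with a gain of 1/16.
    """
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0
```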

Measuring packet jitter is essential to maintaining reliability within the broadcast infrastructure. Understanding whether the packet jitter is within specification requires an understanding of how the decode buffers operate in the receiving device. Generally speaking, large buffers provide the most reliable outcome, as any large variance in time will be soaked up by the buffer. Packet re-ordering may also need to be employed, but this is common in IT terminal equipment as IP packets may take different routes to their destination. However, large buffers also increase latency, which is often undesirable.

Another consequence of making buffers too large is that the receiver cannot tell whether a packet is suffering from excessive jitter or has been dropped or lost in the network. To determine this, receivers often employ timers that flag an error if a packet falls outside of the receive window. But if the timer is too short, a packet declared lost may not be lost at all, merely delayed by jitter; if it is too long, the receiver waits unnecessarily before flagging genuine losses, resulting in variable and undesirable latency.
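
Both trade-offs, buffer depth against latency and timeout length against false loss declarations, can be seen in a toy receive buffer model. This is an illustrative sketch only; the playout-delay policy is an assumption rather than how any particular product behaves:

```python
import heapq

class JitterBuffer:
    """Toy receive buffer: re-orders packets by sequence number and
    presumes a packet lost once its timeout expires. A larger
    playout_delay absorbs more jitter but delays every packet; a
    shorter one risks declaring merely jittered packets lost."""

    def __init__(self, playout_delay: float) -> None:
        self.playout_delay = playout_delay
        self.next_seq = 0                        # next packet to play out
        self.deadline: float | None = None       # when next_seq is presumed lost
        self.heap: list[tuple[int, bytes]] = []  # min-heap re-orders arrivals

    def push(self, seq: int, payload: bytes, now: float) -> None:
        if seq >= self.next_seq:                 # discard duplicates and stragglers
            heapq.heappush(self.heap, (seq, payload))
        if self.deadline is None:
            self.deadline = now + self.playout_delay

    def pop(self, now: float) -> bytes | None:
        """Return the next in-order payload, or None if it is not yet due."""
        while self.heap:
            seq, payload = self.heap[0]
            if seq == self.next_seq:             # in order: play it out
                heapq.heappop(self.heap)
                self._advance(now)
                return payload
            if self.deadline is not None and now >= self.deadline:
                self._advance(now)               # timed out: presume it lost
                continue
            break                                # wait: it may just be jittered
        return None

    def _advance(self, now: float) -> None:
        self.next_seq += 1
        self.deadline = now + self.playout_delay
```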

Although broadcasters occasionally measure clock jitter, this is the exception rather than the rule. In synchronous SDI and AES networks we generally assume the data link layer, or transport layer as it is also known, is stable, and if it is not then the problem is localized to a single device or link. In IP networks we cannot make this assumption. The very nature of asynchronous systems means that the network is dynamic and highly changeable, especially in the time domain. Broadcasters therefore need to dig deep into the network and understand what is going on. Monitoring provides us with a view into the depths of the network and turns the apparent chaos of the asynchronous system into predictable order.
