Analysis and Monitoring of IP Video Networks to Ensure High QoS
IP networks must avoid excessive traffic peaks that can cause buffer overflow and degrade performance. Proactive prevention is key.
Migration towards ST 2110 and ST 2022-6 video networks for production and content delivery is picking up pace as the advantages of IP over traditional SDI carriage via coaxial cable become more evident. The key drivers of IP include the introduction of more flexible and scalable business models based on virtualization and cloud technologies, along with the economies of scale and speed of technology development that stem from the use of commercial off-the-shelf (COTS) IT equipment.
These benefits are compelling, but the migration to IP video networks nevertheless poses significant technical challenges for broadcast engineers. Whereas SDI over coaxial cable was designed as a dedicated link for synchronous, point-to-point delivery of constant high-bitrate video, IP infrastructures are typically asynchronous in nature, a characteristic that presents major issues for real-time video delivery due to the potential for network congestion, latency, and jitter.
Sources of video network congestion
To achieve a high Quality of Service (QoS) with IP video, the network traffic flow should avoid excessive peaks that can cause overflow of switch buffers. In reality, the inherent burstiness of IP networks, combined with bandwidth constraints, can result in unmanaged traffic levels that create packet congestion and latency as router ports become blocked due to buffer exhaustion. This type of packet congestion can be exacerbated in multi-hop infrastructures, with the different paths taken by signals potentially causing further variations in network latency.
These sources of network congestion and latency will delay the arrival of video packets and, in turn, potentially lead to significant jitter problems. In general terms, jitter is a deviation in signal periodicity. In the case of an IP video signal, jitter is a deviation from the expected packet arrival periodicity. Excessive deviations in Packet Interval Time (PIT) — also known as Inter Packet Arrival Time (IPAT) — can lead to packets being stalled, and to loss of packets at the receiver.
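As a simple illustration, the Python sketch below derives PIT values from consecutive packet arrival timestamps and reports each packet's deviation from the expected interval. The timestamps and the 12 µs nominal period are hypothetical values chosen for the example; a real measurement would use hardware or capture timestamps from a packet analyser.

```python
# Minimal sketch: deriving Packet Interval Time (PIT) from packet arrival
# timestamps. All values here are hypothetical, in seconds.

arrival_times = [0.000000, 0.000013, 0.000025, 0.000041, 0.000052]

# PIT (also called Inter Packet Arrival Time, IPAT) is the difference
# between consecutive arrival timestamps.
pits = [t1 - t0 for t0, t1 in zip(arrival_times, arrival_times[1:])]

expected_pit = 0.000012  # assumed nominal packet period for this flow
for i, pit in enumerate(pits, start=1):
    deviation = pit - expected_pit
    print(f"packet {i}: PIT = {pit * 1e6:.1f} us, "
          f"deviation = {deviation * 1e6:+.1f} us")
```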
Ultimately, if left unaddressed, jitter can seriously impact QoS for broadcasters. This is particularly true for a low-latency system that requires a small receiver buffer. In broadcast video networks it is therefore vital to ensure that packets do not deviate excessively beyond the expected interval, as this risks stalling the signal (through receiver de-jitter buffer underflow). Broadcasters must also prevent too many packets from arriving at smaller-than-expected intervals, as this can overflow the receiver de-jitter buffer and lead to packet loss.
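The toy Python model below illustrates these two failure modes. The buffer capacity, playout rate, and arrival pattern are all illustrative assumptions rather than figures from a real receiver.

```python
# Toy de-jitter buffer model: the receiver drains one packet per tick while
# packets arrive in a jittered pattern. A gap in arrivals starves the buffer
# (underflow); a burst exceeds its capacity (overflow).

from collections import deque

BUFFER_CAPACITY = 4  # packets the receiver buffer can hold (assumed)

def simulate(arrivals):
    """arrivals[t] = number of packets arriving during tick t."""
    buffer = deque()
    for tick, n_arriving in enumerate(arrivals):
        for _ in range(n_arriving):
            if len(buffer) >= BUFFER_CAPACITY:
                print(f"tick {tick}: OVERFLOW - packet dropped")
            else:
                buffer.append(tick)
        if buffer:
            buffer.popleft()  # play out one packet per tick
        else:
            print(f"tick {tick}: UNDERFLOW - playout stalls")

# Late packets (a gap) followed by early packets (a burst).
simulate([1, 1, 0, 0, 4, 4, 1, 1])
```

Running the simulation shows underflow during the gap in arrivals and overflow during the subsequent burst.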
Both underflow and overflow lead to video impairment and, in extreme cases, loss of the video signal. However, with the ability to monitor and diagnose network congestion and its associated jitter problems, broadcasters can maintain a healthy video network that supports reliable video delivery.
Network congestion monitoring and diagnosis
Jitter can be measured through observation of variations in the Packet Interval Time (PIT). Analysis of the PIT distribution of a video signal will provide an indication of its health, and warn the engineer of any broadcast-critical network congestion.
By plotting a PIT histogram, the broadcast engineer can gain a real-time view of how network congestion is affecting a video signal. Measurement of the PIT mean, as well as its minimum and maximum values, offers instant network analysis at a glance.
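A minimal sketch of this kind of analysis is shown below in Python, using hypothetical PIT samples in microseconds; a real tool would capture these values live from the network.

```python
# Bucket measured PIT values into a text histogram and report min/mean/max.
# The samples are illustrative microsecond values around a 12 us period.

pit_samples_us = [11.8, 12.1, 12.0, 12.3, 11.9, 12.0, 14.7, 12.2,
                  12.1, 11.7, 12.0, 9.4, 12.2, 12.0, 11.9, 12.1]

pit_min = min(pit_samples_us)
pit_max = max(pit_samples_us)
pit_mean = sum(pit_samples_us) / len(pit_samples_us)
print(f"PIT min={pit_min:.1f} us  mean={pit_mean:.2f} us  max={pit_max:.1f} us")

# Text histogram: one bin per 1 us interval.
bins = {}
for pit in pit_samples_us:
    bins[int(pit)] = bins.get(int(pit), 0) + 1
for edge in sorted(bins):
    print(f"{edge:>3}-{edge + 1:<3} us | {'#' * bins[edge]}")
```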
In a “perfect” network, a video signal would have constant periodicity, without jitter, and all PIT values would be identical. In a network with very low jitter, the engineer would expect to see a normal distribution, with the vast majority of PIT values clustered around the signal period (the expected arrival interval). In reality, however, network congestion typically yields a broader distribution of PIT values around the expected nominal value.
Hence, a healthy video signal will have a distribution peak centred around the expected PIT. Due to the individual characteristics of a network, some significant jitter might be tolerable, but a high occurrence of jitter at the extremes would potentially lead to video signal impairment or loss. An impaired video signal will have a packet distribution characterised by a high occurrence of extremely long or short PIT values and/or by a distribution mean different from the expected signal period.
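A hedged sketch of such a health check is shown below. The mean tolerance, outlier factor, and outlier-ratio thresholds are illustrative assumptions, since real limits depend on the receiver's de-jitter buffer and the characteristics of the network.

```python
# Classify a flow as healthy or impaired from its PIT distribution, per the
# criteria above: a distribution mean away from the expected period, or too
# many extremely long/short PIT values. Thresholds are assumptions.

def classify_flow(pit_samples_us, expected_pit_us,
                  mean_tolerance_us=0.5, outlier_factor=2.0,
                  max_outlier_ratio=0.01):
    mean_pit = sum(pit_samples_us) / len(pit_samples_us)
    outliers = [p for p in pit_samples_us
                if p > expected_pit_us * outlier_factor
                or p < expected_pit_us / outlier_factor]
    mean_off = abs(mean_pit - expected_pit_us) > mean_tolerance_us
    too_many_extremes = len(outliers) / len(pit_samples_us) > max_outlier_ratio
    return "impaired" if (mean_off or too_many_extremes) else "healthy"

print(classify_flow([11.9, 12.1, 12.0, 25.3, 12.2], expected_pit_us=12.0))
```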
In addition to performing real-time jitter measurements, the engineer can track PIT variance over time to gain a longer-term monitoring perspective. Logging this data can provide vital information on the health of a network. For instance, a deterioration could be indicated by increased maximum PIT and a steadily rising mean. A PIT logging tool can also provide historical information on network congestion health at the time of an on-air incident.
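The sketch below shows one simple way to implement such logging in Python, summarising each window of PIT samples into a min/mean/max record; the window size and the output destination are assumptions for the example.

```python
# Summarise each measurement window of PIT samples into one record so that
# long-term trends (e.g. a steadily rising mean) can be spotted and
# correlated with on-air incidents.

import statistics
import time

def log_pit_windows(pit_stream, window_size=1000):
    """pit_stream yields PIT samples in microseconds; emits one summary
    record per window of samples."""
    window = []
    for pit in pit_stream:
        window.append(pit)
        if len(window) >= window_size:
            record = {
                "timestamp": time.time(),
                "min_us": min(window),
                "mean_us": statistics.mean(window),
                "max_us": max(window),
                "stdev_us": statistics.stdev(window),
            }
            print(record)  # in practice, write to a time-series store
            window.clear()
```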
However, it’s not enough to analyse a video network when there’s a problem. Broadcast engineers need to stress test their facility as their IP network evolves, and as new devices are added. A packet profile generator tool allows an engineer to analyse the video network for vulnerability to congestion and jitter by stress-testing the response of the facility to IP video signals transmitted under a variety of network conditions. The packet profile generator can flag network congestion issues before they become a real problem.
A packet profile generator displays a histogram showing the generated signal’s PIT. With this information, it is possible to adjust the timing to simulate network-introduced packet interval jitter. The engineer can use this capability to create custom profiles for testing, and save network distribution profiles for rapid re-use later. In conjunction with IP video packet analysis tools, the packet profile generator provides a powerful capability for network stress testing and fault diagnosis.
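As a sketch of the idea, the Python fragment below generates per-packet send times from a nominal PIT plus a configurable jitter profile, and saves the profile to a JSON file for re-use. The function name, parameters, and profile values are all hypothetical, not the interface of any particular product.

```python
# Compute per-packet send times from a nominal PIT with Gaussian jitter and
# an optional periodic burst that simulates congestion-induced bunching.

import json
import random

def generate_send_times(n_packets, nominal_pit_us, jitter_stdev_us,
                        burst_every=0, burst_offset_us=0.0):
    """Return cumulative send times (us) for a jittered packet profile."""
    t, times = 0.0, []
    for i in range(n_packets):
        interval = random.gauss(nominal_pit_us, jitter_stdev_us)
        if burst_every and i % burst_every == 0:
            interval -= burst_offset_us  # send this packet early
        t += max(0.0, interval)
        times.append(t)
    return times

# Save a named distribution profile for rapid re-use later.
profile = {"nominal_pit_us": 12.0, "jitter_stdev_us": 0.4,
           "burst_every": 50, "burst_offset_us": 6.0}
with open("studio_core_stress.json", "w") as f:
    json.dump(profile, f)

send_times = generate_send_times(10000, **profile)
```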
Conclusion
IP video networks have created a new set of test and measurement challenges for broadcast engineers, especially with respect to avoiding network congestion. However, new IP signal generation, analysis and monitoring tools simplify traffic analysis and network testing, thereby empowering broadcasters to avoid serious jitter issues that can jeopardise broadcast Quality of Service.
Neil Sharpe is Head of Marketing for PHABRIX.