Delivering Intelligent Multicast Networks - Part 1

How bandwidth-aware infrastructure can improve data throughput, reduce latency, and reduce the risk of congestion in IP networks.


This article was first published as part of Essential Guide: Delivering Intelligent Multicast Networks.

Signals distributed using packet switched networks are far more efficient than their broadcast circuit switched counterparts, but the protocols needed to make packet switched networks operate at their highest efficiency are better suited to the sporadic, intermittent data bursts found in transactional environments such as office and webserver applications. Streaming high value media over these networks presents new and interesting challenges, requiring solutions built on top of existing IT methods.

Multicast replicates the action of a broadcast distribution amplifier but at the switch layer so that packets are duplicated where they are needed instead of at the transmitting device, such as a camera or microphone.

In theory, this reduces network congestion, but the PIM (Protocol-Independent Multicast) protocol that governs multicasting does not consider the available network bandwidth. Therefore, if adequate bandwidth planning is not enforced, congestion could still occur, which leads to packet loss resulting in picture breakup and audio distortion.
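Because PIM itself is bandwidth-unaware, the bandwidth planning described above has to be enforced outside the protocol, typically by a controller that tracks how much capacity each link has already committed. The sketch below is a minimal, hypothetical admission check; the flow rates, link speed, and utilisation ceiling are all illustrative assumptions.

```python
# Hypothetical admission check a controller might apply before
# admitting a new multicast flow onto a link. PIM does not do this
# itself; all figures below are illustrative assumptions.

def can_admit(flow_bps, committed_bps, link_capacity_bps, headroom=0.9):
    """Admit a new flow only if the link stays below a utilisation
    ceiling (90% of capacity by default)."""
    return committed_bps + flow_bps <= link_capacity_bps * headroom

link = 10_000_000_000            # 10 Gbps link
committed = 7 * 1_200_000_000    # seven ~1.2 Gbps HD video flows
new_flow = 1_200_000_000         # an eighth flow requests admission

print(can_admit(new_flow, committed, link))  # False: 9.6 Gbps > 9.0 Gbps ceiling
```

Rejecting the flow at admission time is what prevents the packet loss, picture breakup, and audio distortion that congestion would otherwise cause.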

Circuit vs Packet Switching

IT datacenters traditionally rely on automated routing of packets through a network. This is one of the areas where the fundamental thinking between broadcast and IP networks diverges. Broadcasters use circuit switched networks for distributing SDI video and AES audio. These provide fixed latency and guaranteed bandwidth at the expense of being static and inflexible. For example, an SDI network can only carry compliant SDI signals that conform to the SMPTE standard, or group of standards, that the network has been designed to work with. IP networks provide incredible flexibility, but at the expense of variable network latency and congestion.

Network congestion is a relatively new concept for broadcasters but is not unique to IP networks. An MCR matrix that runs out of output ports or tie-lines to other routers is, technically speaking, congested. That is, we cannot pass any more signals through that part of the network without the loss of data. It soon becomes apparent to the broadcast engineer when an SDI or AES network is suffering from congestion as there is no destination for the signal. The same is not true for IP networks.

The ability to automatically route IP packets between sources and destinations is one of the network’s greatest strengths in that a centralized controller is not needed. The source and destination addressing in the IP packets allows the routers to build a table of where next to send the packets so they can reach their final destinations. This method, which regularly updates the tables, is called dynamic routing; it is the opposite of the static routing used by broadcasters with circuit switched SDI and AES networks.
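The forwarding decision a router makes from such a table can be sketched as a longest-prefix match: the most specific entry that contains the destination address wins. The table entries and next-hop names below are illustrative assumptions, not real network configuration.

```python
import ipaddress

# Minimal sketch of a router's per-packet forwarding decision:
# longest-prefix match against a routing table. Entries are
# illustrative assumptions.
table = {
    ipaddress.ip_network("10.0.0.0/8"): "hop-core",
    ipaddress.ip_network("10.1.0.0/16"): "hop-studio",
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    if not matches:
        return None  # no route: packet would be dropped
    # The most specific (longest) prefix wins.
    return table[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.1.2.3"))   # hop-studio (the /16 beats the /8)
print(next_hop("10.9.0.1"))   # hop-core
```

Dynamic routing protocols keep a table like this updated automatically as the topology changes, which is exactly what static SDI/AES routing never had to do.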

Figure 1 – The API interface allows an SDN controller to monitor and switch flows throughout the entire network, thus providing a high-level view of the network without having to resort to looking up IP addresses.


Path Routing Optimization

There are many advantages to using dynamic routing, such as the ability of the routers to automatically choose a better path if the network topology changes, perhaps due to a failed component, or even a part of the network infrastructure being replaced. Network congestion can be detected so that the routers find alternative paths, and load balancing can be applied so that the links between routers are used efficiently. Again, this is a divergence from how broadcasters typically operate, as SDI and AES circuit switched networks have the benefit of guaranteed bandwidth for the video and audio signals. As IP networks multiplex many services within each link, the bandwidth cannot be guaranteed.
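The rerouting behaviour described above amounts to recomputing a least-cost path when the topology changes; link-state protocols such as OSPF use variants of this same shortest-path idea. The sketch below uses Dijkstra's algorithm over an assumed topology with illustrative link costs.

```python
import heapq

# Sketch of metric-based path selection: removing a failed link from
# the topology changes the least-cost route. Topology and costs are
# illustrative assumptions.
def shortest_path(graph, src, dst):
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

graph = {"A": {"B": 1, "C": 5}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
print(shortest_path(graph, "A", "D"))  # ['A', 'B', 'D']
del graph["A"]["B"]                    # simulate a failed link
print(shortest_path(graph, "A", "D"))  # ['A', 'C', 'D']
```

The second call shows the automatic failover: once the A–B link disappears, the previously more expensive path via C becomes the best available route.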

Load balancing is new to broadcasting: it has never been needed because dedicated capacity is a function of a circuit switched network, not a packet switched one. Although the capacity of a circuit switched network is guaranteed, this leads to considerable waste, as unused circuits still reserve their full bandwidth whether or not they are carrying a signal. Packet switched networks avoid this waste because the links between switches and routers are shared and optimized so that their capacity is better utilized.

The art of designing packet switched networks is a balance between resilience, congestion, and capacity. Reducing capacity reduces cost but may worsen redundancy and congestion; increasing capacity improves resilience and reduces congestion, but at greater cost. This is one of the reasons that, when trying to understand the intricacies and anomalies of packet switched networks, we soon enter the very interesting world of probability theory, where nothing is certain and everything is a compromise. A bit like engineering.
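To make the probability-theory point concrete, here is one illustrative model, not taken from the article: if a number of independent flows each burst with some probability, and the link can only carry so many simultaneous bursts, the chance of congestion is a binomial tail. All figures are assumptions chosen for illustration.

```python
from math import comb

# Illustrative capacity-planning model: n independent flows, each
# bursting with probability p; the link can absorb at most k
# simultaneous bursts. Congestion probability is the binomial tail
# P(X > k). All parameters are assumptions.
def congestion_probability(n_flows, p_burst, k_capacity):
    return sum(
        comb(n_flows, i) * p_burst**i * (1 - p_burst)**(n_flows - i)
        for i in range(k_capacity + 1, n_flows + 1)
    )

# 20 flows, each bursting 10% of the time, link absorbs 5 at once.
print(round(congestion_probability(20, 0.1, 5), 4))  # ≈ 0.0113
```

Even with average utilisation well below capacity, the congestion probability is non-zero, which is exactly the compromise between capacity, cost, and risk described above.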

Determining Connectivity Data Rates

This leads to all kinds of interesting questions for IP networks that broadcasters have traditionally taken for granted, such as: how does the sender know at what rate to transmit the IP packets? The sender does have some knowledge of its direct link connectivity due to the configuration of the NIC. In the case of an Ethernet network this could be 10Gbps; that is the maximum rate at which the sender can fire packets into the network (minus some overhead). But the sender has no knowledge of the bandwidth of the rest of the network, especially once other flows are multiplexed in across multiple links.
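The "minus some overhead" caveat can be made concrete. The sketch below works out the maximum packet rate a 10Gbps NIC can sustain for full-size frames, using the standard Ethernet per-frame constants (header, FCS, preamble, and inter-frame gap); the NIC speed is the only figure carried over from the text.

```python
# What the sender actually knows: its own NIC line rate. Compute the
# maximum packet rate for full-size Ethernet frames, including the
# per-frame overhead that never carries payload.
NIC_BPS = 10_000_000_000   # 10 Gbps NIC, per the text
FRAME_BYTES = 1500 + 18    # max payload + Ethernet header (14) + FCS (4)
OVERHEAD_BYTES = 20        # preamble/SFD (8) + inter-frame gap (12)

wire_bits_per_packet = (FRAME_BYTES + OVERHEAD_BYTES) * 8
max_pps = NIC_BPS // wire_bits_per_packet
print(max_pps)  # 812743 full-size packets per second
```

That figure is a property of the sender's own link only; nothing in it says whether the rest of the network can absorb such a rate.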

TCP has a flow control function built into the protocol and is ubiquitous within enterprise networks and the internet. Broadcasters tend not to use TCP/IP to distribute audio and video due to the unpredictable latency it creates; consequently, ST2110 and AES67 both stipulate the use of the UDP protocol. UDP/IP does not have flow control built into it and relies on the sender not flooding the network with “greedy” strategies. ST2110 does employ a form of rate shaping where the UDP/IP packets are evenly distributed in time, reducing the risk of flooding the network.
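The effect of that rate shaping can be sketched numerically: instead of bursting a whole video frame's packets at line rate, the sender spreads them evenly across the frame period. The flow rate, frame rate, and payload size below are illustrative assumptions, not values taken from the ST2110 documents.

```python
# Sketch of evenly paced packet transmission: spread one frame's
# packets across the whole frame period. Figures are illustrative
# assumptions for a ~1 Gbps flow at 50 frames per second.
FLOW_BPS = 1_000_000_000   # flow bandwidth (assumed)
FPS = 50                   # frame rate (assumed)
PAYLOAD_BYTES = 1400       # payload per packet (assumed)

bits_per_frame = FLOW_BPS // FPS
packets_per_frame = bits_per_frame // (PAYLOAD_BYTES * 8)
spacing_us = (1 / FPS) / packets_per_frame * 1e6  # inter-packet gap

print(packets_per_frame)        # 1785 packets per frame
print(round(spacing_us, 2))     # ~11.2 microseconds between packets
```

A sender pacing one packet every ~11µs presents the network with a smooth, predictable load, rather than a line-rate burst followed by silence.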

Ethernet routers and switches have the advantage of being able to measure the bandwidth of each flow entering them, and so can provide per-flow bandwidth metrics. They also know each link’s capacity, as this is established during the port’s configuration. Combined, these two pieces of information are vital for improving the efficiency of the network.
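In practice a switch derives a flow's bandwidth by sampling a byte counter at two points in time and dividing by the interval. The counter values below are made up for illustration.

```python
# Sketch of per-flow bandwidth measurement: sample a flow's byte
# counter twice and divide the delta by the interval. Counter values
# are illustrative assumptions.
def bandwidth_bps(bytes_t0, bytes_t1, interval_s):
    """Average bandwidth in bits per second over the interval."""
    return (bytes_t1 - bytes_t0) * 8 / interval_s

# Counter grew by 125 MB over one second -> 1 Gbps.
print(bandwidth_bps(1_000_000_000, 1_125_000_000, 1.0))  # 1000000000.0
```

Comparing that measured figure against the known link capacity is what gives the network (or an SDN controller) the headroom information needed to place flows safely.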

Multi-Path Resilience

To maintain resilience, networks provide multiple paths to destinations so that if a link fails then another path is available to deliver the packets. But the question is, how does the router know which path to use? If we assume a non-centralized routing system, then who or what decides whether the packet is routed along Hop-A or Hop-B?

In an SDI/AES broadcast network we often employ main and backup topologies so that if circuit-A fails then circuit-B will take over. This could be an automated change-over or the intervention of an engineer who manually makes the switch. In IP networks, the router will have a routing table with two entries for the same destination; however, their routing metrics may differ, so that the hop associated with the best metric is the primary route. But there is another method that takes advantage of the metrics being equal, called ECMP (Equal-Cost Multipath). This has the further advantage of providing load balancing.
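A common way ECMP chooses between equal-cost hops is to hash the flow's 5-tuple, so every packet of a given flow takes the same path (preserving packet order) while different flows spread across the available hops. The hash function (CRC32) and the addresses below are illustrative assumptions; real switches use their own hardware hash.

```python
import zlib

# Sketch of hash-based ECMP next-hop selection: hash the flow's
# 5-tuple so one flow always maps to one hop, while different flows
# spread across the equal-cost hops. Hash choice and addresses are
# illustrative assumptions.
def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, hops):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return hops[zlib.crc32(key) % len(hops)]

hops = ["hop-A", "hop-B"]
flow = ("10.0.0.1", "239.1.1.1", 5004, 5004, "udp")
# The same flow always maps to the same hop:
print(ecmp_next_hop(*flow, hops) == ecmp_next_hop(*flow, hops))  # True
```

Keeping a flow pinned to one path matters for ST2110-style media, where reordering packets across paths of different latency would be as damaging as losing them.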
