Solving IP Routing For Broadcasters: Part 1 - Enabling Software Control

SDI and IP differ fundamentally in their approach to data transport: SDI is circuit switched whereas IP is packet switched. This presents interesting challenges as we start to consider what it means to route IP signals.


This article was first published as part of Core Insights - Solving IP Routing For Broadcasters

In traditional broadcast SDI, AES and analogue systems, we relied on crosspoint matrices to provide one-to-many signal routing and connect inputs to outputs. Crosspoint routing was reliable, but it was restricted in its operation.

Expanding a crosspoint is a challenge. In the case of a 512 x 512 SDI frame, once either the number of inputs or the number of outputs is exhausted, the router is at full capacity. Expanding it requires a great deal of compromise and introduces the risk of routing blocking. The two options are either to completely replace the router with a bigger one, or to install a second router and connect the two together with tie-lines.

SDI Router Limitations

Replacing the router is clearly expensive and very disruptive for the broadcast facility. The whole router would have to be disconnected, the new one installed, then all the signals reconnected. Anybody who has tried to remove hundreds of SDI coaxial cables from a highly dense backplane knows this is a complex and time-consuming task.

This lack of expansion capability usually leads to routers being heavily over-specified during the initial design phase, increasing capital expenditure. It's almost impossible to plan capacity years ahead, making the concept of future-proofing SDI infrastructures incredibly difficult. Although broadcast facilities have been able to cope with this philosophy in the past, the need for flexibility and scalability has encouraged them to look at IP.

IP Solution

One of the benefits of transitioning to IP is that we can ride the crest of the wave of innovation from the IT industry. Switch vendors have been working on flexible and scalable designs since the first devices were conceived back in the 1980s. By definition, packet switched networks are dynamic in nature as each datagram has its own source and destination address; consequently, flexible and scalable infrastructures are a given.

One of the parameters that defines an ethernet switch is its backplane bandwidth. The backplane bandwidth is the capacity of the fabric that connects the ports on the line cards for routing. In a non-blocking environment, there is sufficient bandwidth on the backplane to route all the inputs and outputs of every port. For example, a switch with thirty-two 400Gbps ports has a backplane bandwidth of 12.8Tbps (12,800Gbps), that is 32 x 400Gbps.
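
The arithmetic is simply port count multiplied by port speed. The short Python sketch below reproduces the 12.8Tbps figure; the port count and speed are the illustrative values from the example, not the specification of any particular switch.

```python
# A minimal sketch of the non-blocking backplane calculation described above.
# Port count and speed are the illustrative figures from the text.

def backplane_bandwidth_gbps(port_count: int, port_speed_gbps: int) -> int:
    """Total bandwidth needed to route every port at full rate simultaneously."""
    return port_count * port_speed_gbps

total = backplane_bandwidth_gbps(32, 400)
print(f"{total} Gbps = {total / 1000} Tbps")  # 12800 Gbps = 12.8 Tbps
```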

Figure 1 – Expanding SDI routers is deceptively difficult. Not only do we have to embark on massive infrastructure costs when expansion is required, but we inadvertently create signal blocking. This diagram shows how the tie-lines limit input and output connectivity. If all the tie-lines are being used, it would be very difficult to route camera-1 from studio-1 to the production switcher in studio-2.

Multicasting Distribution

IP does not use physical distribution amplifiers to duplicate signals but instead uses multicasting. The IPv4 Class D address range (224.0.0.0 to 239.255.255.255) is reserved for multicasting. Every source device providing a streaming service is assigned a multicast address so that its streams can be made available throughout the network. For example, if the primary video output from studio-1, camera-1 is assigned the multicast address 230.0.1.100, then any receiver in the network can opt to receive it.

Receiver devices include production switchers, multiviewers, monitors, video disk recorders, etc. In the broadcast application of IP, signal routing is the method of making sure these devices receive the multicast streams they need. To achieve this, two methods are available to us: IGMP (Internet Group Management Protocol) and SDN (Software Defined Networking).
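
To make the opt-in mechanism concrete, the sketch below shows how a receiver host might join the multicast group used in the camera-1 example. The UDP port number is an assumption made purely for illustration; real stream parameters would come from the system's stream descriptions.

```python
# A minimal sketch of a receiver opting in to a multicast stream.
# 230.0.1.100 is the illustrative address from the text; port 5004 is an assumption.
# Joining the group causes the host's network stack to issue an IGMP membership
# report, which is what prompts the switch to forward the stream to this port.
import socket
import struct

MCAST_GROUP = "230.0.1.100"   # studio-1, camera-1 primary video (from the example)
MCAST_PORT = 5004             # hypothetical UDP port for the media stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Ask the kernel to join the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)  # receive one datagram from the stream
print(f"Received {len(data)} bytes from {addr}")
```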

The whole point of IGMP is that the router will only forward IP packets to the receiver devices that need them. For example, if the cameras for studio-1 were on ports 1 and 2 of the router, and the production switcher was on port 5, then the video multicast streams on ports 1 and 2 would be forwarded to port 5. And if the multiviewer was on port 6, then none of the multicast streams from ports 1 and 2 would be forwarded to port 6, as the multiviewer would not need to receive the camera video streams.

IGMP Control Efficiency

IGMP is the traditional control method for making multicast streams available to receiver devices and operates by the receivers opting in to specific multicast streams. Each receiver sends an IGMP join (membership report) onto the network, and the switch, which listens in on these messages, forwards the requested multicast packets to the receiver's port. When the multiviewer requests studio-1's production switcher program output, the switch forwards that multicast stream to the multiviewer's port. If multiple devices request the same video stream, the switch duplicates the IP datagrams and forwards them to each of the relevant ports.
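
As a rough model of this forwarding behaviour, the sketch below keeps a table of which ports have requested which multicast group and returns the ports a datagram must be duplicated to. The group address for the program output and the port numbers are hypothetical, chosen only to mirror the example.

```python
# A minimal sketch of a switch's multicast forwarding table: group -> subscribed
# ports, with datagrams duplicated to every subscribed port. Addresses and port
# numbers are illustrative.
from collections import defaultdict

class MulticastForwardingTable:
    def __init__(self):
        self.subscribers = defaultdict(set)  # multicast group -> set of egress ports

    def join(self, group: str, port: int) -> None:
        self.subscribers[group].add(port)

    def leave(self, group: str, port: int) -> None:
        self.subscribers[group].discard(port)

    def egress_ports(self, group: str) -> set:
        """Ports this group's datagrams must be duplicated and forwarded to."""
        return set(self.subscribers[group])

table = MulticastForwardingTable()
PGM_GROUP = "230.0.1.200"             # hypothetical address for studio-1's PGM output
table.join(PGM_GROUP, 6)              # multiviewer on port 6 requests the PGM output
table.join(PGM_GROUP, 7)              # a monitor on a hypothetical port 7 requests it too
print(table.egress_ports(PGM_GROUP))  # datagrams duplicated to ports 6 and 7
```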

Although IGMP creates an efficient system, as multicast streams are only forwarded to the ports of receivers that require them, its major drawback is the noticeable delay from initiating the multicast join to the stream being forwarded and becoming available to the downstream device.

IGMP Control Latency

A software management tool is needed to keep a record of the multicast stream allocation as there could easily be thousands, or even tens of thousands, of video and audio streams in a broadcast network. Maintaining a spreadsheet to assign and track the mapping of video streams to IP multicast addresses is just not viable. The management software can also simply send an IGMP request to join a required stream on a device's behalf.

Before a receiver can join the multicast, the software management tool must establish whether there is enough bandwidth on the link the receiver is connected to. For example, if the sound control room loudspeakers are connected to the router through a 1Gbps ethernet connection on port 9 and the sound console is on port 10, the management software needs to establish whether enough bandwidth is available from port 10 to port 9 on the router. This can be achieved by interrogating the router's API, but the software then risks becoming vendor specific, resulting in scalability limitations.
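
The bandwidth check itself amounts to simple admission control: track what is already allocated on each link and refuse a join that would exceed the link's capacity. The sketch below illustrates the idea; the capacities and stream bandwidths are assumptions for illustration only.

```python
# A minimal sketch of per-link bandwidth admission control, as a management tool
# might perform before issuing an IGMP join. All figures are illustrative.

class LinkBandwidthManager:
    def __init__(self, capacity_mbps: float):
        self.capacity_mbps = capacity_mbps
        self.allocated_mbps = 0.0

    def admit(self, stream_mbps: float) -> bool:
        """Reserve bandwidth for a stream if the link has enough headroom."""
        if self.allocated_mbps + stream_mbps > self.capacity_mbps:
            return False
        self.allocated_mbps += stream_mbps
        return True

    def release(self, stream_mbps: float) -> None:
        self.allocated_mbps = max(0.0, self.allocated_mbps - stream_mbps)

# Port 9: the 1Gbps link feeding the sound control room loudspeakers.
port9 = LinkBandwidthManager(capacity_mbps=1000)
print(port9.admit(5))     # True  - a 5Mbps audio stream fits comfortably
print(port9.admit(999))   # False - this would exceed the 1Gbps link
```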

Figure 2 – Leaf-spine switching topology provides both resilience and scalability. Each device, such as a camera, microphone or sound console, is attached to one of the leaf switches, and the connection to the spine facilitates routing to the other leaves. To maintain the optimum network design, each studio should have its own leaf. For example, if studio-1 used LEAF-1, then all the cameras, the production switcher and the multiviewer for studio-1 would share the same non-blocking switch; this would reduce the overall network traffic but still provide the option of routing the devices to other studios.

SDN Control

The second method of routing control is to use SDN (Software Defined Networking). As the broadcast industry continues to embrace IT technologies and working practices, the adoption of SDN and software defined infrastructures is becoming more commonplace.

SDN is a development of the software manager and IGMP architecture as the controller interfaces directly to the router, or multiple routers, facilitating some subtle but very important additions.

With SDI routers, we are familiar with switching between sources to give near instantaneous video changes on a monitor. Due to the inherent delay in IGMP, switching latency of several seconds can be experienced when joining and leaving multicast streams. For example, if a monitor is switching between the video outputs of camera-1 and camera-2, the monitor must leave the multicast feed of camera-1 before joining the multicast feed of camera-2. This is a characteristic of IGMP.

Switch-Cut

To avoid these latencies, the SDN controller can establish the multicast feed for camera-2 and forward its IP packets to the monitor before removing camera-1's feed, completing the switch to camera-2's multicast stream. Although this speeds up the switch at the monitor, double the data bandwidth is required on the link during the transition as the two camera multicast streams are simultaneously active. SDN can manage this bandwidth allocation.
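
A make-before-break switch of this kind might be sketched as below: join the new stream, then leave the old one, but only if the link can briefly carry both. The bandwidth figures and callbacks are illustrative assumptions rather than any particular controller's API.

```python
# A minimal sketch of a make-before-break source switch under SDN-style control.
# All figures are illustrative; a real controller would drive the switch via its API.

def make_before_break(link_capacity_mbps: float, in_use_mbps: float,
                      stream_mbps: float, join_new, leave_old) -> bool:
    """Join the new multicast before leaving the old one, provided the link can
    briefly carry both streams at once. Returns False if there is no headroom."""
    if in_use_mbps + stream_mbps > link_capacity_mbps:
        return False                      # both streams cannot coexist during the overlap
    join_new()                            # forward camera-2's multicast to the monitor
    leave_old()                           # then stop forwarding camera-1's multicast
    return True

# Example: a hypothetical 1.5Gbps video stream switched on a 10Gbps monitor link.
ok = make_before_break(
    link_capacity_mbps=10_000,
    in_use_mbps=1_500,                    # camera-1 already on the link
    stream_mbps=1_500,                    # camera-2 joins during the overlap
    join_new=lambda: print("join camera-2 multicast"),
    leave_old=lambda: print("leave camera-1 multicast"),
)
print("switched" if ok else "blocked: insufficient link bandwidth")
```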

As we transition to IP, there's a lot of legacy SDI equipment still in use. Even in a greenfield site, not all equipment is IP enabled and some will need a form of SDI interface, so there may well be SDI routers in the infrastructure. SDN provides a method of controlling all the routers, whether IP or SDI, to give users a consistent interface.

SDN enables a higher-level view of the network where much of the low-level functionality is abstracted away from the user, allowing them to focus on their work. This includes maintaining an inventory of all the connected devices and their attributes, including video and audio codec types, compression bit rates, etc.
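
Such an inventory can be as simple as a structured record per device. The sketch below shows one hypothetical shape for it; the field names and example values are assumptions, not a standard schema.

```python
# A minimal sketch of the kind of device inventory an SDN controller might keep.
# Field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    leaf: str                                     # leaf switch the device attaches to
    port: int
    streams: dict = field(default_factory=dict)   # stream name -> attributes

inventory = [
    Device("studio-1 camera-1", leaf="LEAF-1", port=1,
           streams={"video": {"multicast": "230.0.1.100",
                              "codec": "uncompressed", "bitrate_gbps": 1.5}}),
    Device("studio-1 multiviewer", leaf="LEAF-1", port=6),
]

for dev in inventory:
    print(dev.name, dev.leaf, dev.port, list(dev.streams))
```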

Routing signals in IP infrastructures is not as straightforward as it is in the more familiar SDI environments, but the potential gains from software management systems based on an SDN approach are enormous. Flexibility and scalability are built in from the beginning, and we only have to deliver the system we need now, not the one we think we might need in ten years' time, which will invariably change anyway.
