Comms In Hybrid SDI - IP - Cloud Systems - Part 2
We continue our examination of the demands placed on hybrid, distributed comms systems and the practical requirements for connectivity, transport and functionality.
Shifting Infrastructures
Broadcast workflows are nothing like they once were, and robust connectivity has changed the landscape more than anything. As access to connectivity continues to evolve, broadcasters have more options for moving data around, and that connectivity means every production can be designed to meet its requirements in the most efficient way possible.
Technologies like private 5G, mesh networks, LEO satellites, dark fiber and even the public internet can all provide effective transport, and they are making a real difference, especially to distributed production.
Remote and distributed production has long been a discussion point for broadcasters, but world events in 2020 forced the issue and the industry latched on quickly. Now it’s on the table for every outside broadcast. With no requirement to ship expensive equipment to a venue and staff it with multiple engineers and professionals, remote production has opened the door to more content, while hybrid models with distributed audio processing across multiple sites are much more common.
It’s all about using the right tool for the job.
Monitoring
For audio, where distance equals latency, remote production tends to mean remote control of a processing engine at the venue. The mixer might be in a different location (a studio, another OB truck, or even their home) but the audio processing stays at the venue. A big part of why this matters is comms, which is the biggest challenge for remote broadcasting as production teams become more geographically distributed.
Every broadcast uses in-ear monitoring (IEM), where on-site production personnel need to hear themselves and production comms in real time. Talent also use interruptible foldback (IFB): mix-minus feeds from the studio consisting of the full programme mix, minus their own voice.
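The mix-minus idea is simple enough to sketch in a few lines of code. This is an illustrative model only, with made-up source names and sample values, not a real console implementation:

```python
# Minimal mix-minus (IFB) sketch: each talent hears the full programme
# mix minus their own source. Source names and sample values are
# purely illustrative.

def mix_minus(sources, exclude):
    """Sum all source feeds sample by sample, excluding one source."""
    keys = [k for k in sources if k != exclude]
    num_samples = len(next(iter(sources.values())))
    return [sum(sources[k][i] for k in keys) for i in range(num_samples)]

# Example: three sources, four samples each
sources = {
    "presenter": [0.1, 0.2, 0.1, 0.0],
    "guest":     [0.0, 0.1, 0.3, 0.2],
    "vt":        [0.05, 0.05, 0.05, 0.05],
}

# The presenter hears guest + vt, but never their own voice
ifb_for_presenter = mix_minus(sources, "presenter")
```

In a real system this runs per channel at sample rate inside the console or processing engine, which is exactly why keeping that engine close to the talent matters.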
Distance makes this difficult due to the time it takes to move those signals around, so remote control of on-site processing (often referred to as “edge” processing because of where it is located) is highly effective.
But distance is not the only latency that comms needs to plan for.
Latency
IP decouples audio from video so that each is treated independently, which means every audio signal needs to be resynchronized relative to the others. With sources coming in from many different locations, time-alignment can be complex, and IP systems have to be adept at managing sync and latency.
How signals are delivered plays a role: are they on dedicated fiber links, the public internet, Wi-Fi, satellite, or 5G cellular data? Some of these are not deterministic, so timings may drift depending on the route taken, and each additional switch hop introduces more latency.
The conversion of analog signals to data packets also takes time, and the analog to digital (A/D) conversion to transport the signal needs to be reversed at the other end with a similar digital to analog (D/A) conversion. Network design will play a role in this too, as total delay will depend on the combination of equipment the signal passes through.
Even the structure of ST2110 data packets has an influence: packet times affect both bandwidth and latency because of how long each payload takes to packetize. Forward-thinking network designers can tune these too.
Sync & Timing
In all these cases, tracking timestamps within data flows is critical to realigning them later. ST2110 uses the Real-Time Transport Protocol (RTP) to help align signals with variable latencies, using buffers to compensate for and align signals which arrive from multiple places.
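The buffering idea can be sketched as follows. This is a toy model, not a real RTP implementation: the field names, buffer depth and play-out logic are all illustrative:

```python
# Sketch of timestamp-based realignment: packets from several sources
# arrive with different network delays; a receive buffer holds them
# until a common play-out deadline so the flows line up again.
# All names and the buffer depth are illustrative.

from collections import defaultdict

PLAYOUT_DELAY = 3  # buffer depth in timestamp units (illustrative)

class AlignmentBuffer:
    def __init__(self):
        self.buffers = defaultdict(dict)  # source -> {timestamp: payload}

    def receive(self, source, timestamp, payload):
        """Store an arriving packet under its media timestamp."""
        self.buffers[source][timestamp] = payload

    def play_out(self, now):
        """Emit, per source, the packet stamped PLAYOUT_DELAY ago."""
        due = now - PLAYOUT_DELAY
        return {src: pkts.pop(due, None) for src, pkts in self.buffers.items()}

buf = AlignmentBuffer()
buf.receive("camera_mic", timestamp=10, payload="a")
buf.receive("commentary", timestamp=10, payload="b")  # arrived later in real time
aligned = buf.play_out(now=13)  # both samples for t=10 emerge together
```

However early or late each packet arrives within the buffer window, everything stamped with the same time leaves together.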
RTP is time-aligned with PTPv2, the Precision Time Protocol used by ST2110, which provides timing for every device on an IP network by syncing to a grandmaster clock (GMC). The GMC can be either a dedicated device or any device on the network, and every networked device is assigned a leader/follower relationship. This covers everything, from vision switchers to cameras to intercoms.
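At the heart of PTP is a simple calculation: the follower exchanges timestamped messages with the leader and derives both its clock offset and the network path delay, assuming the path is symmetric. The numbers below are made up for illustration:

```python
# PTPv2 offset/delay sketch from the standard two-way message exchange.
# Assumes a symmetric network path; timestamps are in arbitrary units.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """
    t1: leader sends Sync          (leader clock)
    t2: follower receives Sync     (follower clock)
    t3: follower sends Delay_Req   (follower clock)
    t4: leader receives Delay_Req  (leader clock)
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # follower clock error vs leader
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Example: follower clock runs 5 units ahead; one-way path delay is 2 units
offset, delay = ptp_offset_and_delay(t1=100, t2=107, t3=110, t4=107)
```

The follower then steers its clock by the computed offset, which is how every device on the network converges on the grandmaster’s time.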
All this means that rather than embedding the audio with the video signals, ST2110 carries audio and video as independent streams, synchronizing everything relative to these shared timestamps instead.
It’s a different way of working, but it also gives broadcasters the ability to do things in new ways, helping to usher in the possibility of cloud production.
Hybrid Cloud
In practical terms, cloud production isn’t the same as traditional production – there are no big audio processing engines in the cloud which operate the same as an on-prem or edge processor. Not yet.
In most cases, cloud production is a hybrid of remote production methods, where audio signals might be sent into the cloud rather than to a remote operations center (ROC), and latencies can be managed by carefully choosing the cloud processing location. Just as edge processing keeps IEM latency down at a venue, network designers can reduce control latency by using cloud facilities closer to the point of control.
But the point is that it is all hybrid. The cloud extends the number of options available to broadcasters to tailor the production for the most efficient use of facilities: on-prem, outside broadcast, remote production, or a combination of all three. Hybrid models are a big part of today’s broadcast landscape and the cloud will increasingly be seen as an additional DSP resource which can be accessed when it is appropriate to do so.
Endpoints
Intercoms have to work within this rapidly evolving environment, dealing with hybrid systems which change depending on the production, and with inherent latencies in increasingly remote and distributed workflows.
The requirement to have reliable comms is unchanged, and as we have established, intercoms are the silent partner in broadcast environments; they should be invisible to the customer. The customer just needs them to work.
All this means that broadcast intercoms have to tick a number of boxes.
- They must be flexible to work with different network designs.
- They must be upgradeable to adapt to changes in technology, and to integrate into evolving infrastructures, such as changes in hardware or orchestration systems.
- They must be scalable to account for changes and flex with requirements.
- They must be versatile enough to deal with different signal and codec types, and handle them all in a transparent manner.
The same design considerations still exist: identifying endpoints and configuring permissions on each one, deciding who needs to talk and listen and who just needs to listen. Modern comms networks need to be able to pivot; they are no longer just about choosing the right hardware for today’s job, and they need to adapt to evolving requirements without the customer having to invest in additional equipment or services.
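An endpoint permission model of this kind can be sketched as a simple talk/listen matrix. The endpoint names and feed labels below are entirely made up for the example; real systems expose this through their configuration tools:

```python
# Illustrative intercom endpoint permission model: each endpoint is
# configured with who it may talk to and which feeds it may monitor.
# All endpoint and feed names are hypothetical.

endpoints = {
    "director": {"talk": {"camera_1", "sound"}, "listen": {"pgm", "comms"}},
    "camera_1": {"talk": {"director"},          "listen": {"comms"}},
    "observer": {"talk": set(),                 "listen": {"pgm"}},  # listen-only
}

def can_talk(src, dst):
    """True if the source endpoint is permitted to talk to the destination."""
    return dst in endpoints.get(src, {}).get("talk", set())

def listens_to(src):
    """The set of feeds an endpoint is permitted to monitor."""
    return endpoints.get(src, {}).get("listen", set())
```

Because the matrix is just data, re-provisioning an endpoint for a different production is a configuration change rather than a hardware one, which is the pivot the text describes.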
Software-defined Hardware
Software-defined hardware is one way vendors are meeting these challenges, promoting flexibility to adapt to whatever ecosystem is in use today, as well as helping to future-proof systems for tomorrow. Modular systems which can be adapted to meet changing needs are also more commonplace, with hardware panels which can combine multiple functions, such as intercom, router control and audio monitoring, on a single device.
As well as being able to quickly adapt to different environments, modular software and hardware takes up less rack space, consumes less power, and can reduce the number of switch ports required on a network.
It’s all about working harder as well as smarter.
IP Does A Lot Of The Work, But So Does Everyone Else
Thanks to IP standards like ST2110 and NMOS, most IP equipment delivers interoperability, and most vendors supply bridges to maintain the relevance of their incumbent equipment.
This gives broadcasters a choice of orchestration systems to manage the delivery of media data over IP networks. Orchestration systems provide a single point of control for managing and configuring IP broadcast systems, and the enormous benefit of the industry’s adherence to ST2110 and NMOS means that there are a number to choose between.
Intercom systems have to play the same game, keeping across all the recommendations, to make it easy for broadcasters to integrate, automate and centralize their operations along the same lines.
But it also demands more open collaboration with other technology companies, across all the broadcast disciplines. Organizations like the Alliance for IP Media Solutions (AIMS), whose sole aim is to foster the adoption of industry standards to enable the shift to IP, are integral to this. By encouraging regular interops between a range of manufacturers, they build relationships and encourage conversations that explore how systems work with each other in real environments.
Scaling Up
IP is already delivering the goods, but it’s not an easy path to tread and every single implementation item has to be planned and tested individually.
It enables cloud integration and empowers remote working; it lets broadcasters quickly scale up a network infrastructure without physical rewiring; it creates efficient systems designed around production requirements; and it leverages existing network infrastructures and COTS equipment, reducing costs and simplifying installation.
We’re all in it together, and we can all reap the benefits together.
But to make it all work, we all need to work together too.