Routing Live Uncompressed Production Video on IP Networks

Since the dawn of television, studios around the globe have relied on coaxial cables to connect the broadcast world: cameras to recorders to mixers to routers to encoders. But studio coax may be on the brink of extinction. Like their colleagues in headends and uplinks, video engineers are swapping 75-ohm coax for IP connections.

IP already dominates the world’s traffic; it manages, routes, and connects virtually every audiovisual and data service. So why has baseband video escaped for so long?

It hasn’t.

In the past few years, contribution networks linking stadia, studios, and other venues have been increasingly modernized, with coaxial links replaced by direct IP connections. At the edge, we still have our tried and tested coax, but almost immediately we convert to IP for routing from point to point and multipoint. Now, even that last bulwark may be about to crumble. The industry is poised to upgrade those last few feet at the network edge to all IP, and we will see the end of coax as we know it.

History Repeats
A decade or so ago, cable headends were rolling out a new service called “Video on Demand” (VoD). This new class of narrowcast video required massive bandwidth from video servers to quadrature amplitude modulation (QAM) equipment. The first deployments used asynchronous serial interface (ASI) coax links. When scaling those connections to meet the demands of even the most modest headend, however, it very quickly became obvious an alternative was needed. Waiting in the wings was Gigabit Ethernet: commoditized in the datacenter world, easily cabled, switched and routed using enterprise iron. In a matter of months, ASI was abandoned in VoD systems.

Over the next few years, the rest of the headend infrastructure was converted from a point-to-point linear chain of ASI gear to all-IP switch and routing infrastructure. It wasn’t long before headend engineers and equipment makers exploited this new architecture to develop novel services. New architectures like switched linear broadcast simply would not have been viable in legacy environments.

Today, the same trend is unfolding in the broadcast studio. As more video services migrate to IP, the lure of new efficiencies and operating models that are possible in an end-to-end IP broadcast environment becomes too tempting to ignore.

Why now?
The industry has been discussing the migration to all IP for a long time. Standards have been written, demos have been done, and equipment makers have produced a seemingly endless range of widgets and gadgets to help us make it a reality. So why is this change finally happening now?

A nexus of three events is propelling the IP shift to a tipping point:

  1. New standards such as Society of Motion Picture and Television Engineers (SMPTE) 2022-1 and 2022-6, among others, are breaking down the industry’s reliance on proprietary techniques and systems. As we have seen time and again, robust common standards are a strong catalyst to industry change.
  2. Ultra-high definition (UHD) video (4K and 8K) is here. To support these enormously demanding services with today’s infrastructure, we are bundling dual and quad coax together. Clearly, this is only a stopgap as we look for a better solution. Do we go with another “to-be-finalized” standard, or do we leverage a ubiquitous and rich set of existing standards?
  3. 10-Gigabit Ethernet is now where 1-Gigabit Ethernet was just a few years ago; the technology is available at commodity pricing. A universe of compatible equipment exists at a range of price points to accommodate ever-tightening budgets. Additionally, the industry has already defined 40-Gigabit-per-second (Gbps) and 100-Gbps Ethernet standards, ensuring a long future.

When deployed as part of an end-to-end IP environment, such connectivity provides multiple advantages, including efficient routing, link aggregation, multipoint distribution, and time synchronization.

Unlocking the Potential of End-to-End IP
SMPTE-2022 is playing a central role in the industry transition to the all-IP studio by providing a suite of standards concerned with the transmission of compressed and uncompressed video over IP. SMPTE-2022-6, in particular, provides a wrapper that encapsulates uncompressed baseband video signals such as SMPTE 292M (and other signals), so that they can be carried in IP packets.

Once these signals can be carried as high-bit-rate IP traffic, studios can take advantage of a wide range of existing, standardized IP mechanisms to optimize media transport. (Table 1 offers some examples.)

Table 1. IP Protocols

Standard | Description | What it does
SMPTE-2022-6 | High Bit Rate Media Transport | Provides a container format to encapsulate existing uncompressed video signals
RTP | Real-time Transport Protocol | Adds timestamps and sequence numbers for error detection and clock recovery
UDP | User Datagram Protocol | Adds port numbers so multiple streams can be carried to the same destination and easily separated
IP | Internet Protocol | Adds addressing and routing abilities to span global networks
MAC | Media Access Control | Fundamental to “Ethernet”; provides basic addressing and endpoint differentiation
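To make the layering in Table 1 concrete, the short sketch below wraps a chunk of video payload in an RTP header and hands it to UDP/IP multicast through a standard socket. It is a minimal illustration under stated assumptions, not a compliant implementation: the SMPTE-2022-6 payload header that sits between RTP and the media bytes is omitted, and the multicast address, port, and payload type are placeholders.

```python
# Minimal sketch: wrap dummy video payload chunks in a 12-byte RTP header
# (RFC 3550) and send them over UDP/IP multicast. The SMPTE-2022-6 payload
# header that would normally follow the RTP header is omitted for brevity;
# the group address, port, and payload type below are placeholders.

import socket
import struct

MCAST_GROUP = "239.1.1.1"   # placeholder multicast group
MCAST_PORT = 50000          # placeholder UDP port
PAYLOAD_TYPE = 98           # assumed dynamic RTP payload type

def build_rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int) -> bytes:
    """Prefix payload with the RTP fixed header: V=2, no padding/extension/CSRC."""
    byte0 = 2 << 6                        # version 2
    byte1 = PAYLOAD_TYPE & 0x7F           # marker bit clear
    header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)

ssrc = 0x2022BEEF
for seq in range(10):
    chunk = bytes(1376)                   # dummy media bytes (size chosen for illustration)
    packet = build_rtp_packet(chunk, seq, timestamp=seq * 3000, ssrc=ssrc)
    sock.sendto(packet, (MCAST_GROUP, MCAST_PORT))
```

In a real SMPTE-2022-6 sender, the RTP timestamp is locked to the video clock and a payload header carrying format and timing information precedes the media bytes; the point here is simply that each layer in Table 1 adds its own small header around the same video data.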

When deployed as part of an end-to-end IP environment, these mechanisms can provide a number of advantages, including:

  • More efficient routing: As indicated in Table 1, IP has many routing functions. In a classic serial digital interface (SDI) router, you explicitly define the complete route. Like building a railroad, you must use hard-coded endpoints and set all the switches along the route. IP can work like that, but it can also behave more like a post office. In this model, you drop the message off at the local point with an address, and it gets routed hop-by-hop to the destination in the most expedient manner possible. The benefit of this latter approach is that the network can change size and shape without having to reengineer the complete chain. And, by employing IP bandwidth reservation techniques, you can be confident the service will work while gaining the benefits of a self-healing network and seamless protection.
  • Link Aggregation: With HD-SDI coax, you need to convert into another platform (Synchronous Optical Networking [SONET], Asynchronous Transfer Mode [ATM], or Ethernet, for example) to combine circuits together. Ethernet can carry multiple signals on a single physical connection, and “off-the-shelf” iron today can natively aggregate links at 100 Megabits per second, 1 Gbps, 10 Gbps, 40 Gbps, 100 Gbps, and beyond, with the same tools and protocol set end to end. With this native IP ability, you can build and extend very asymmetric network topologies as necessary. By contrast, HD-SDI networks tend to look like large rectangular blocks with bundles of tie lines.
  • Efficient Multipoint: IP networks are extremely well optimized for point-to-multipoint distribution. This feature is perfect for video distribution, as endpoints can be added and removed with simple, standard join and leave commands (see the receiver sketch after this list). Performing the same operation with coax requires cross-point router ports and distribution amplifiers. The elegance of the IP solution is the self-optimizing topology and the ability to natively branch flows without impacting other users.
  • Time Sync: SMPTE, together with the Audio Engineering Society (AES) through its AES67 standard, is working with the Institute of Electrical and Electronics Engineers (IEEE) 1588-2008 standard to provide time synchronization across networks. This allows an in-band replacement of traditional black burst/tri-level sync for tightly controlling timing at all endpoints.
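To illustrate the join and leave operation mentioned in the multipoint item above, the sketch below shows a minimal multicast receiver built from ordinary socket options: joining the group triggers an IGMP membership report, and the network replicates the flow only onto links with interested listeners. The group address and port are placeholders matching the sender sketch earlier; this is an illustration, not a production receiver.

```python
# Minimal sketch of a multicast receiver: joining the group (via IGMP)
# asks the network to branch the flow toward this host; leaving, or simply
# closing the socket, prunes that branch again. Addresses are placeholders.

import socket
import struct

MCAST_GROUP = "239.1.1.1"
MCAST_PORT = 50000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join: the ip_mreq structure is the group address plus the local interface (any).
mreq = socket.inet_aton(MCAST_GROUP) + socket.inet_aton("0.0.0.0")
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

try:
    while True:
        packet, sender = sock.recvfrom(2048)
        seq, = struct.unpack("!H", packet[2:4])    # RTP sequence number
        print(f"packet {seq}: {len(packet) - 12} media bytes from {sender[0]}")
except KeyboardInterrupt:
    # Leave the group explicitly (closing the socket would also do it).
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
```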


No Need to Fear IP
Part of the hesitation to use IP is based on the misconception that just because you are using IP, you must converge your networks – or even worse, that your mission-critical traffic is now going to behave like the “best-effort” traffic delivered over the Internet. Fortunately, neither of these concerns is accurate.

IP is inherently flexible; there is no reason you can’t build a completely isolated IP domain for your live video, just like your HD-SDI cross-point matrix is today. The benefit of IP is that you can choose when to expand and converge services. One common model in IP studio environments today is the mixing of large asset transfers with contribution links, so when feeds are not live, the bandwidth is reused to move assets around, and vice versa. This use case is especially useful in scenarios where the file transfers are for backup/archive, and so can operate on a best-effort basis.
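One simple way to realize this sharing without the live feeds ever competing with bulk transfers is to mark the two traffic classes differently and let the switches' queuing policy enforce priority. The sketch below is illustrative only: the DSCP values shown are the conventional choices for expedited and best-effort traffic, not anything mandated by the studio standards discussed here, and the actual behaviour depends entirely on how the network equipment is configured.

```python
# Minimal sketch: mark live video with the Expedited Forwarding DSCP and
# leave asset-transfer traffic at best effort, so both share the same IP
# fabric while queuing policy protects the live feed. The per-hop behaviour
# is set by network configuration; these values are conventional, not required.

import socket

DSCP_EF = 46           # Expedited Forwarding, typically used for live media
DSCP_BEST_EFFORT = 0   # default class, fine for backup/archive transfers

def udp_socket_with_dscp(dscp: int) -> socket.socket:
    """Return a UDP socket whose outgoing packets carry the given DSCP mark."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the top six bits of the IP TOS byte, hence the shift.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

live_feed = udp_socket_with_dscp(DSCP_EF)
asset_mover = udp_socket_with_dscp(DSCP_BEST_EFFORT)
```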

What about AVB?
Audio Video Bridging (AVB) often gets brought up in discussions of the studio environment of the future. AVB is a suite of standards developed by the IEEE specifically for carrying time-sensitive data across networks. Its origins are in consumer, automotive, and some professional audiovisual applications. The standard includes some capabilities similar to IP, such as time synchronization and bandwidth reservation.

The drawbacks with using AVB today, however, are its limitations with regard to layer-2 switching, which effectively constrain deployments to islands. Extending to layer 3 is a necessity for most medium-to-large real-world deployments. Work is ongoing to address the layer-3 limitations of AVB; however, the solutions chiefly involve enabling tunneling between islands. The key difference between this approach and that of SMPTE and AES67 is that the latter begin natively at layer 3. It remains to be seen if AVB – with some extensions – will be deemed “good enough” for professional audio and video.

Looking Forward

In addition to those described previously, expect other IP standards to be adopted into the suite of recommended features for broadcast equipment. Part of the attractiveness is the sheer number of complementary standards and protocols in the IP stack that can be leveraged “off the shelf.”

What new and exciting things will a move to pure IP end-to-end allow us to do? Here are some possibilities:

  • Self-propelled production assets could route themselves through the production environment.
  • Rich, two-way metadata could flow from the camera to producers to consumers, and all the way back.
  • The edges of the contribution and production networks could blur into a single mesh.
  • A mixing of uncompressed and low-latency “light” compression in the live production environment could enable remote editing / production in a seamless, borderless way.
  • End-to-end IP environments could blur the line between post-production and distribution networks, moving from the linear playout model to more cloud-based asset delivery systems with linear channel assembly at the edge.

All of these opportunities, and others we have not yet imagined, become possible in an end-to-end IP environment. It is not a question of if we will see them in the studio, but when. As with similar digital transformation trends, it will happen faster than we expect. 

The rapid growth of IP connectivity, and its ever-increasing speed, makes moving live video over IP a natural application.
