Orchestrating Resources For Large-Scale Events: Part 2 - Connecting Remote Locations

A discussion of how to create reliable, secure, high-bandwidth connectivity between multiple remote locations, your remote production hub, and distributed production teams.


This article was originally published as part of a Themed Content Collection: Orchestrating Resources For Large-Scale Events.

For a major international tournament where dozens of broadcasters around the world require access to content, the most viable solution is usually distribution of shared feeds from a single set of cameras and audio sources to each broadcaster. A host broadcaster assumes local responsibility for camera provision and control, dictates the transmission format, and controls local telco provision. The broadcaster's remote production hub may receive as many as 40 camera feeds from each stadium but will probably have no control over them.

When it comes to coverage of multiple venues for a national league on game day, the decisions revert to the broadcaster and their production providers, and that’s what we discuss here.

Bandwidth On A Grand Scale

The amount of bandwidth required will dictate connectivity requirements, and that in turn depends on the quality required and other components such as codecs. Transmitting fully uncompressed video requires a bit rate ranging from around 2 Gbps for 8-bit color depth with 4:2:2 chroma subsampling, up to as much as 8 Gbps for RGBA color at 16 bits per channel. Forty cameras at a high profile soccer game would therefore require between 80 Gbps and 320 Gbps of bandwidth, and the production hub systems might need to handle four stadia simultaneously. That sort of bandwidth is only available over direct fiber connections, so compression will usually be required. This raises other issues, notably quality and latency, since compression imposes a computational delay; when deployed in lossless mode to maintain quality, as may be required in remote production, the latency is greater still.
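To put those numbers in context, here is a rough back-of-the-envelope sketch in Python. It assumes a 1080p60 raster purely for illustration and ignores blanking, audio and transport overhead; the resolution, frame rate and sampling structure are assumptions, not figures from the article.

```python
# Rough uncompressed bandwidth estimate. Assumes a 1080p60 raster purely
# for illustration; swap in your own resolution, frame rate, bit depth
# and sampling structure as needed.

def uncompressed_bps(width, height, fps, bits_per_channel, samples_per_pixel):
    """Raw video bit rate in bits per second (no blanking, audio or overhead)."""
    return width * height * fps * bits_per_channel * samples_per_pixel

# 8-bit 4:2:2 -> effectively 2 samples per pixel (Y every pixel, Cb/Cr shared)
low = uncompressed_bps(1920, 1080, 60, 8, 2)      # ~2.0 Gbps
# 16-bit RGBA -> 4 samples per pixel
high = uncompressed_bps(1920, 1080, 60, 16, 4)    # ~8.0 Gbps

cameras, stadia = 40, 4
print(f"per camera: {low/1e9:.1f} - {high/1e9:.1f} Gbps")
print(f"per stadium ({cameras} cams): {cameras*low/1e9:.0f} - {cameras*high/1e9:.0f} Gbps")
print(f"hub total ({stadia} stadia): {stadia*cameras*low/1e9:.0f} - {stadia*cameras*high/1e9:.0f} Gbps")
```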

Intra codecs have evolved to reduce latency by compressing only within each frame, exploiting spatial redundancy without taking advantage of the temporal redundancy in the similarity between successive frames. This approach produced codecs such as JPEG 2000 and AVC-Intra, with recent focus on trimming latency to the bone for contribution of live events from the field, especially fast-moving sports. That focus led to development of the JPEG-XS codec, which exploits parallel processing to accelerate execution and offers precise control over bit rate so as to make best use of available remote connectivity. JPEG-XS has enjoyed rapid adoption for remote contribution over the last few years, as broadcasters have been attracted by the very small delays it imposes, in the order of a single frame. It offers compression ratios up to 10:1 for typical images, sometimes higher, enabling use over professional video links in production scenarios that might previously have used uncompressed data.
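As a rough illustration of what such a ratio means for a venue's contribution link, the sketch below applies an assumed 10:1 ratio to the roughly 2 Gbps uncompressed figure from above; both numbers are placeholders rather than vendor specifications.

```python
# Illustrative only: what an assumed 10:1 intra-frame compression ratio does
# to the aggregate contribution bandwidth for one venue.

def compressed_bps(raw_bps, ratio):
    return raw_bps / ratio

per_camera_raw = 2.0e9          # ~2 Gbps uncompressed (8-bit 4:2:2 HD, see above)
ratio = 10                      # assumed compression ratio
cameras = 40

per_camera = compressed_bps(per_camera_raw, ratio)
print(f"per camera: {per_camera/1e6:.0f} Mbps")                # ~200 Mbps
print(f"venue total: {cameras*per_camera/1e9:.1f} Gbps")       # ~8 Gbps for 40 feeds
```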

There are other considerations beyond bit rate and compression that may have a bearing on choice of connectivity method, primarily redundancy and error correction. Contribution flows compressed by codecs such as JPEG-XS can be carried inside MPEG Transport Streams, or increasingly SMPTE ST 2110, which specifies carriage, synchronization, and description of elementary essence streams over IP for real-time production as well as playout. A key benefit of SMPTE ST 2110 is support for diverse paths to provide redundancy as insulation against packet loss and individual path failure within the IP fabric.
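The sketch below is a much-simplified illustration of the receive-side idea behind diverse-path protection, namely de-duplicating two identical packet streams by sequence number (in practice specified by SMPTE ST 2022-7, commonly used alongside ST 2110). Real receivers also manage timing windows and jitter buffers; the function here is purely illustrative.

```python
# Much-simplified illustration of merging two identical RTP-style streams
# sent over diverse paths. We simply de-duplicate by sequence number.

def merge_paths(path_a, path_b):
    """path_a / path_b: iterables of (seq, payload); either may have gaps."""
    seen = {}
    for seq, payload in list(path_a) + list(path_b):
        seen.setdefault(seq, payload)            # first copy of each packet wins
    return [seen[seq] for seq in sorted(seen)]   # reconstructed stream, in order

# Path A lost packet 3, path B lost packet 5 - the merge recovers both.
a = [(1, "p1"), (2, "p2"), (4, "p4"), (5, "p5")]
b = [(1, "p1"), (2, "p2"), (3, "p3"), (4, "p4")]
print(merge_paths(a, b))   # ['p1', 'p2', 'p3', 'p4', 'p5']
```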

That is all very well, but such a fabric offering path redundancy may not be available in the field for remote contribution. In that case the only way of providing at least some protection against transient errors and varying transmission conditions, without imposing too much latency, is to employ FEC (Forward Error Correction). FEC inserts additional bits to provide a level of inbuilt redundancy within the data, such that IP packets can be recovered fully in the event of some corruption, provided the damage is not too severe.
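The following minimal sketch shows the principle with a single XOR parity packet per block of data packets, which lets a receiver rebuild any one lost packet in that block without a retransmission. Real contribution FEC schemes are considerably more elaborate; this only shows the idea.

```python
# Minimal sketch of the FEC principle: one XOR parity packet per block of
# data packets allows recovery of a single lost packet in that block.

from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block):
    """block: list of equal-length byte strings."""
    return reduce(xor_bytes, block)

def recover(block_with_gap, parity):
    """block_with_gap: list where exactly one entry is None (the lost packet)."""
    missing = block_with_gap.index(None)
    present = [p for p in block_with_gap if p is not None]
    block_with_gap[missing] = reduce(xor_bytes, present + [parity])
    return block_with_gap

packets = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = make_parity(packets)
received = [b"AAAA", None, b"CCCC", b"DDDD"]   # packet 2 lost in transit
print(recover(received, parity))               # packet 2 rebuilt as b"BBBB"
```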

All this has an impact on the decision over what form of connectivity to employ, if there is a choice. FEC may require a slightly higher bit rate than the alternative of packet retransmission to achieve redundancy at the stream level, trading that extra capacity for lower latency.
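A simple back-of-the-envelope comparison illustrates the trade-off; the stream rate, block size and round-trip time below are placeholder assumptions, not measurements.

```python
# Placeholder numbers: a 10 Mbps contribution stream, one parity packet
# per block of 10, and a 40 ms round trip between venue and hub.

stream_mbps = 10.0
fec_block   = 10          # data packets protected by each parity packet
rtt_ms      = 40.0        # venue <-> hub round-trip time

fec_extra_mbps   = stream_mbps / fec_block   # ~10% more uplink bit rate
retransmit_delay = rtt_ms                    # at least one RTT to recover a loss

print(f"FEC: +{fec_extra_mbps:.1f} Mbps uplink, near-zero added delay")
print(f"Retransmission: no extra steady-state bit rate, "
      f"but ~{retransmit_delay:.0f} ms to recover a lost packet")
```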

Planning, Control & Comms

Other considerations include the choice of software tools for planning, configuring and managing the connectivity, beyond the codecs and the handling of functions such as bonding. Various mixing and production functions are enabled by improved connectivity and lower latency, the most obvious being camera mixing. There is no standard approach to remote camera control in remote production; some prefer to keep control at the venue within an OB truck, others require control from the hub gallery, and many adopt a hybrid. All approaches require comms between venue and gallery crew. In remote production, then, it is increasingly hard to separate connectivity from the associated functions that depend upon it and in turn dictate its requirements.

Available Solutions

Broadcasters and content producers face an increasingly bewildering range of options for connecting their remote sites for distributed production, but the choices can be narrowed by considering the specific locations and requirements. It is increasingly a case of horses for courses, with the sites requiring connectivity ranging from modern, well-equipped stadia to small remote settings beyond the reach of cellular connectivity, never mind fiber or other fixed line options. Available options now extend well beyond the satellite connectivity that used to serve SNG vans, which are now often regarded as the expensive nuclear option reserved for when no viable alternative exists, rather than the default.

For a long time, leased lines and managed services over the internet or even private networks have been available in some locations, but the big recent change has been the emergence of cellular communications in the 5G era as an option not just for uploading video at relatively low quality but for serious contribution from the field. This brings huge benefits in many scenarios, enabling high bit rate, low latency connectivity to be set up on demand at short notice in remote areas where direct fiber is not available and where remote broadcasting is required only temporarily.

The optimum choice then depends on the facilities available, the quality required and of course budgetary factors. When broadcasting from a major modern stadium, connectivity will usually be provided on tap, with direct fiber connections providing all the bandwidth needed. Smaller and older sports facilities might have leased lines available; once the mainstay of corporate communications, these are now a relatively low cost and low bandwidth option, although they still benefit from high levels of availability and guaranteed bit rate. Those properties are important for remote broadcasting, where erratic connectivity cannot be tolerated, or at the very least requires a reliable backup.

Internet connectivity over broadband may well furnish that backup at many smaller, less well-endowed facilities that, apart perhaps from rare occasions, are quite new to remote broadcasting in the streaming era. In sporting leagues those occasions may arise when a small club is drawn against a big one in a knock-out competition after a good run.

Increasingly, mobile networks are providing backup and primary remote connectivity at the same time. This can happen through use of networks from multiple service providers, in the hope that if one fails another will still be available. In normal times, when all networks are running, the available capacity may be aggregated through bonding or other means, so that in the event of a failure the uplink bit rate might be reduced but contribution can at least continue from the field.
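The sketch below models that behavior in the simplest possible terms: the encoder sees the sum of whatever uplink each network currently offers, and a single failure degrades rather than kills the feed. The operator names and capacities are purely illustrative assumptions.

```python
# Simplified model of cellular bonding across several operators.
# Capacities in Mbps are illustrative, not measured values.

links_mbps = {"operator_a": 35, "operator_b": 25, "operator_c": 20}

def bonded_capacity(links, failed=()):
    """Aggregate uplink available across all links not currently failed."""
    return sum(rate for name, rate in links.items() if name not in failed)

target_mbps = 60   # bit rate the encoder would like to sustain

print(bonded_capacity(links_mbps))                          # 80 Mbps with all links up
print(bonded_capacity(links_mbps, failed={"operator_a"}))   # 45 Mbps - encoder adapts down
```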

Bonding was first deployed for remote contribution from the field as an alternative to satellite uplinks in the 4G era, starting from around 2010. Then, as now, one caveat was that bonding a significant number of circuits requires taking out data plans with more than one mobile operator. However, there are providers of managed services that handle the bonding, and those relationships with cellular operators, on behalf of the end customer, effectively acting as connectivity brokers.

Future Possibilities

The role of cellular in remote broadcast connectivity is poised to increase further as 5G-NR (New Radio) rolls out, bringing higher capacities and bit rates (up to 10 Gbps uplink), greater availability measured in percentage uptime or sustained levels of performance, and lower latencies. NR refers to the completely revamped radio air interface developed for 5G, bringing greater efficiency and higher capacity over a given amount of RF spectrum. This ushered in 5G, but further advances are coming during the 5G era, notably Standalone (SA) operation, where the core network behind the radio cells is in turn reshaped for higher bit rate and, importantly for some broadcast operations, lower latency. Only when 5G SA is rolled out is the full promise of ultra-low latency delivered on an end-to-end network basis, as opposed to just within the radio access layer. This is relevant for some remote broadcasting functions of growing importance, such as switching between alternate cameras around a sports field.

Under 5G SA another option, network slicing, is emerging or will do so over the next few years. The idea is that the overall capacity of a 5G network is split into multiple components, each of which can have different levels of service measured in average bit rate, consistency of that bit rate, and latency, which again can be held within specified bounds. In this way a network can be shared more efficiently between multiple types of user varying in their need for QoS (Quality of Service), without over-provisioning by delivering higher performance than is needed. This may avoid the need for bonding in some cases and allow broadcasters to specify slices flexibly, varying the precise parameters as requirements dictate; there may be times or situations where they need particularly low latencies. In one demonstration by Italian broadcaster RAI, a 5G slice was provisioned offering a guaranteed 60 Mbps uplink, leaving another 50 Mbps slice for lower priority services such as casual internet browsing. There was also the option to bond these two slices together to yield 110 Mbps, but with less of a guarantee over the last 50 Mbps.
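A toy model of that slicing arrangement might look like the following, with figures mirroring the RAI demonstration; the class and function names are illustrative and not part of any slicing API.

```python
# Toy model of a guaranteed contribution slice plus a best-effort slice
# that can be bonded on top. Figures mirror the RAI demonstration.

from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    mbps: float
    guaranteed: bool

contribution = Slice("broadcast contribution", 60, guaranteed=True)
best_effort  = Slice("general internet",       50, guaranteed=False)

def uplink_budget(slices):
    """Return (guaranteed uplink, peak uplink if slices are bonded)."""
    guaranteed = sum(s.mbps for s in slices if s.guaranteed)
    peak       = sum(s.mbps for s in slices)
    return guaranteed, peak

g, p = uplink_budget([contribution, best_effort])
print(f"guaranteed: {g:.0f} Mbps, bonded peak: {p:.0f} Mbps")   # 60 guaranteed, 110 peak
```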
