Making Remote Mainstream:  Part 3 - Practical Challenges And Costs

In parts 1 and 2 of this three-part series we discussed the benefits Remote Production has over traditional outside broadcasts, and the core infrastructures needed to make it work. In this third and final part, we look at the challenges and costs of making live sports work effectively on Remote Production models that put less equipment and fewer crew on site, and that use an IP infrastructure to cover more events.

Following numerous successes, At Home productions have increased significantly over the past two years, enabling producers to cover multiple venues for the same “Tier-1” event, as well as many “Tier-2” sporting events that would otherwise not be covered due to cost (and lack of advertiser support). With each new remote project, new lessons are being learned and system infrastructures tweaked to make the most of available resources.

“It comes down to what your inventory of equipment is and navigating the environment that you have to produce in,” said Chris Merrill, Director of Product Marketing at Grass Valley. “These opportunities continue to expand as time goes on. The number of remote productions being done today versus three years ago is significantly higher. I expect that, with the increasing demand for content, that trend will only continue.”

Indeed, when it comes to remote production methods, there’s no “one size fits all.” But the end game is the same for everyone: maximizing resources and minimizing costs.

On its website, Grass Valley has identified three typical remote production workflows that are supported by the company’s wide portfolio of live production products (cameras, switchers, servers, replay systems, modular processing products, etc.) and have been successful for different reasons, including geography, budgets, and bandwidth availability. They break it down into an uncompressed model, a compressed model and a distributed workflow. [Of course, there are more than only these three options and a myriad of ways that people split up their resources.]

Uncompressed Production

The uncompressed method is considered ideal due to its higher signal quality, but it increases the cost of sending signals (12Gbps per camera for 4K) back and forth between a hub facility and the remote site. Camera feeds are sent straight from the camera head over IP to a production hub. In this scenario only the cameras go to the venue, and the signals are sent back to base stations at the hub facility via fiber.

Figure 1 – Uncompressed production – all processing is done at the studio, only the camera heads are at the venue. The bandwidth requirement is high as uncompressed video is streamed directly to the hub giving minimal latency.

This requires consistent bandwidth of 3Gbps and higher per camera, which can be tough to get in the last mile (the final leg of the connection between the remote site and the “home” production facility). So, if you’ve got 10 cameras, that’s a lot of required bandwidth, which is not typically realistic in today’s budget-conscious world. Most stadiums don’t have those types of connections anyway, so that last mile is the most challenging part. Even with their big budgets, producers of the Olympics are challenged each time with procuring available bandwidth.
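As a rough back-of-the-envelope illustration, aggregate contribution bandwidth scales linearly with camera count; the sketch below uses nominal per-feed interface rates (3Gbps for HD, 12Gbps for 4K) and a hypothetical camera count:

```python
# Back-of-envelope aggregate bandwidth for uncompressed camera feeds.
# Nominal per-feed rates: ~3 Gbps for HD (3G-SDI), ~12 Gbps for 4K/UHD.
RATE_GBPS = {"HD": 3.0, "UHD": 12.0}

def aggregate_gbps(cameras: int, fmt: str) -> float:
    """Total uncompressed contribution bandwidth, ignoring IP overhead."""
    return cameras * RATE_GBPS[fmt]

# A hypothetical 10-camera HD show already needs a 30 Gbps pipe;
# the same show in 4K needs 120 Gbps, rarely available in the last mile.
print(aggregate_gbps(10, "HD"))   # 30.0
print(aggregate_gbps(10, "UHD"))  # 120.0
```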

Certainly there are places where that type of high-data-rate bandwidth is available, but it’s not common, requiring you to secure a satellite or dark fiber connection. However, this uncompressed method has been used in Europe, where there’s a lot more public support for higher bandwidth. In the U.S., users tend to hire that bandwidth for the specific time period required. Therefore, due to bandwidth availability, uncompressed remote production is often easier and less expensive in Europe and Asia than it is in the U.S. or South America.

“In general, the infrastructure is good in most of Europe, OK in the U.S., and more difficult in other locations,” said Christer Bohm, Business Development and a co-founder of Net Insight. His company makes networking equipment with built-in encoding/decoding used to transport video, audio and data (file and transaction types for control). “Reliability is more of an issue outside Europe and the U.S., meaning that backup and redundancy need to be addressed.”

A replay server can be located on site to serve as backup in case of lost contact between the remote site and the studio. In general, there always needs to be redundancy and backup to handle problem situations. Normally, double uplinks are employed, but when that is not possible, redundant connections can be provided over other lower-bandwidth technologies, such as 5G and the public internet.

Compressed Production

Compressing the signals before they are distributed to the hub facility means lower data rate (and lower cost) requirements. Signals are sent into an encoder at the remote site and then decompressed at the hub facility. This method introduces additional delay due to the compression/decompression process, which can be up to a second depending on the codec used.
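The bandwidth/latency trade-off can be sketched with rough numbers (the compression ratios and codec delays below are indicative illustrations only, not vendor specifications):

```python
# Indicative effect of compression on a hypothetical 10-camera HD show.
# Compression ratios and one-way codec delays are rough illustrations.
UNCOMPRESSED_GBPS = 3.0  # per HD feed

codecs = {
    # name: (compression ratio, approximate one-way codec delay in ms)
    "JPEG 2000 (mezzanine)": (10, 5),
    "Long-GOP H.264/HEVC":   (60, 500),
}

for name, (ratio, delay_ms) in codecs.items():
    total_gbps = 10 * UNCOMPRESSED_GBPS / ratio
    print(f"{name}: 10 cameras ~ {total_gbps:.1f} Gbps, "
          f"~{delay_ms} ms delay each way")
```

A 10:1 mezzanine ratio corresponds to the “up to 90%” bandwidth reduction mentioned in the figure caption below.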

To compensate for the delay, Precision Time Protocol (PTP) technology is used to synchronize the signals. Most viewers won’t care about the delay. The challenge is for the production people, who are looking at monitors that are not synchronized. The monitors at the venue are often ahead of the hub, but if there’s a round trip from the hub facility, then the monitors are behind. It’s a challenge that production people simply have to get used to.
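For reference, PTP aligns clocks through a two-way timestamp exchange; a minimal sketch of the standard IEEE 1588 arithmetic, with hypothetical timestamps:

```python
# IEEE 1588 (PTP) two-way exchange, timestamps in microseconds (hypothetical):
#   t1: master sends Sync        t2: slave receives it
#   t3: slave sends Delay_Req    t4: master receives it
t1, t2, t3, t4 = 1000.0, 1250.0, 1300.0, 1450.0

mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2       # 200 us
offset_from_master = (t2 - t1) - mean_path_delay    # 50 us (slave runs fast)

# Once every device shares the master clock, receivers can realign
# streams that arrive with different transport delays.
print(mean_path_delay, offset_from_master)
```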

Figure 2 – Compressed production – all processing is done at the studio, only the camera heads are at the venue. The bandwidth requirement is reduced by up to 90% depending on the compression system used; this results in slightly higher latency but is usually acceptable for the operation.

“For most programming no one really cares,” said Merrill. “But it does become a complexity issue for major live events. This compressed method is often used only for Tier-2 events because of the delay problem.”

This Remote Production method was used at the 2019 FIS Alpine World Ski Championships in Sweden. The action was captured with 80 Grass Valley HD cameras and a production switcher on the mountain in Åre, and the signals were sent back (and forth) to Stockholm, about 600 km (372 miles) away, over redundant 100Gbps connections for final processing.

Distributed Production

In the Distributed Production model, producers take some of the physical equipment to the venue and perform the processing at the remote site, but leave the control elements at home. For example, a production switcher’s frame lives on site, while its control panel remains at home. Replay systems can be set up this way as well.

The advantage is that you still leave people at home, but the processing happens faster on site. The control signals require far less bandwidth than full video signals, which makes them much easier to send back and forth, and also reduces both latency in the stream and cost.
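To put that in perspective, here is a rough comparison under hypothetical assumptions (the message size and rate for a panel-to-frame control link are illustrative guesses):

```python
# Rough comparison: remote-control traffic vs. one video feed.
# The control message size and rate are hypothetical illustrations.
msgs_per_sec = 200        # fader moves, button presses, tally, status
bytes_per_msg = 128
control_mbps = msgs_per_sec * bytes_per_msg * 8 / 1e6   # ~0.2 Mbps

video_mbps = 3000         # one uncompressed HD feed (3G-SDI)
print(f"control ~{control_mbps:.2f} Mbps vs video {video_mbps} Mbps, "
      f"a factor of ~{video_mbps / control_mbps:,.0f}")
```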

Net Insight’s Bohm agrees that getting the right infrastructure to connect to the stadium is among the biggest challenges of remote production. He said the company sees it as a multi-dimensional problem that includes bandwidth (1, 10 or 100G), latency (which equipment can be centralized and what needs to stay at the stadium), type of production (number of cameras, slo-mo or not, archive, etc.) and frequency of the event, whether a related series of events or one major event like the Olympics or a FIFA World Cup.

Figure 3 – Distributed Production – venues have the least equipment and staff possible with the crews and main processing equipment residing in the central and distributed production hubs. The production hub can switch between venues to make the best use of resources.

“Facing these challenges, I can choose the parameters that are most important to the project at hand,” said Bohm. “For example, I have 10G access, 25 cameras, and need to operate centrally. I need low-latency data and J2K compression (uncompressed cannot handle all the cameras, and with MPEG-4/HEVC the delay is too big to centralize some functions). So there is always a trade-off in how a setup is done. In our experience there needs to be flexible technology to adapt to various scenarios.”
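Bohm’s example is easy to verify with the same rough per-feed figures used earlier (3Gbps per uncompressed HD camera, roughly 10:1 for JPEG 2000; both indicative):

```python
# Sanity check of the scenario Bohm describes: 25 cameras into 10G access.
cameras = 25
uncompressed_gbps = cameras * 3.0     # 75 Gbps: far beyond a 10G pipe
j2k_gbps = uncompressed_gbps / 10     # ~7.5 Gbps: fits with headroom

print(f"uncompressed {uncompressed_gbps} Gbps vs J2K {j2k_gbps} Gbps on 10G")
# Long-GOP MPEG-4/HEVC would shrink the rate further, but its encode/decode
# delay is too long to centralize latency-critical functions, hence J2K.
```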

OB Vans Have A Remote Future

On the question of series versus one-off events, Bohm said there are projects that are ideal for Remote Production models and others that are best produced traditionally. Factors affecting that decision include the infrastructure to the location, the frequency of events, and investment in a central location (equipment, manpower, etc.) versus OB vans.

“We have customers that have changed their production setup so they can only do Remote Production and have thereby saved a lot of money, but there are others with brand new OBs and for them it might not make sense,” said Bohm.

It should be noted that remote production does not translate into a future with no OB Vans on site. In fact, they have a place in remote production, and many veteran production companies are working towards this “hybrid” model. The Distributed model described above requires an OB Van: the truck is sent on site, but the equipment on it is controlled remotely.

“There are many mobile production companies that are looking at hybrid models that meet their customers’ production demands and yet allow the benefits of less personnel on site, more standardization, etc.,” said Grass Valley’s Merrill. “There’s certainly a place for OB Vans in remote production.”

Cost Is THE Issue

Regarding connection costs, standards-based IP connectivity is significantly more accessible and cost-effective than satellite links, and much more versatile too, allowing many devices to share the available connectivity regardless of their specific payload type. The best way to reduce that cost is to reduce the overall bandwidth, either through data compression or by assessing how many feeds really need to be returned to HQ; you don’t need to return every camera feed in full HD if you can switch remotely using lower-resolution previews.
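The saving from proxy-based switching can be sizeable; an illustration with entirely hypothetical figures (10 cameras, 3Gbps full-quality feeds, 20Mbps proxies):

```python
# Illustrative saving from returning low-res proxies instead of full feeds.
full_mbps, proxy_mbps, cameras = 3000, 20, 10

all_full = cameras * full_mbps              # 30,000 Mbps
# Return, say, two program candidates in full quality and preview the
# other eight as proxies for remote switching decisions:
switched = 2 * full_mbps + 8 * proxy_mbps   # 6,160 Mbps

print(f"{all_full} -> {switched} Mbps "
      f"({100 * (1 - switched / all_full):.0f}% less)")
```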

When looking at the cost of connectivity, it should be offset against the overall savings of Remote Production. With a traditional outside broadcast you have millions of dollars’ worth of broadcast truck and high-value production and engineering staff spending most of their time travelling and setting up rather than producing content. With Remote Production, the expensive equipment and staff are at HQ, where they can deliver far more efficiently.

Remote Processing In The Cloud

The cloud is another off-site processing technology that is being experimented with, but there are issues of cost relating to getting content into and out of the cloud.

“The challenge with cloud processing is there’s not—today—the ability to process everything for a live event,” said Merrill. “I can get stuff into and out of the cloud, which can be expensive, but the whole processing piece is not available. I can’t actually switch a production in the cloud. I have to bring it down again to switch, which is expensive. So, it’s more of a transport mechanism right now, but we will most likely get there in the future.”

Remote audio production is also on the rise, as various radio shows have begun to add cameras to their studios, and television productions need stereo, and increasingly multi-channel, sound to accompany their images. Dave Letson, VP of Sales at Calrec Audio, said that the challenge of round-trip latency for audio monitoring has held back full Remote Production for some time, but products like its RP1 audio mixing core, or engine, which can be controlled as though it were part of a host mixer at the hub facility, make it easier. The RP1 core enables staff to mix the audio from the home studio just as they would if on location.

The two main audio challenges for At Home production models are latency for monitoring and IFBs, and control. In fact, audio poses a very specific challenge; announcers need to be able to hear themselves, their co-announcers and guests, and sometimes other ambient sounds in real time. Too much latency in the monitoring signal path makes it very difficult for talent to do their job, as the time it takes a signal to travel to a broadcast facility and back over a long-haul connection is too long.

“The main differences are in relation to latency with specific regard to in-ear monitoring,” said Letson. “Control lag of the mixer which is dynamically controlling the remote mix is negligible, although the latency of the network connection will directly affect how responsive the remote mixer is. In general, audio uses much less bandwidth than video and in that respect is less cumbersome, although the challenge of monitoring latency (not control latency) is unique to audio.”

Letson said this same challenge does not apply to video, because talent rarely need to see themselves in real time, but they often want to hear how they sound on air, with EQ and dynamics applied. Nor is it a challenge for other sounds; anything other than one’s own voice is more forgiving of latency, since that sound already incurs some natural delay travelling from the source to the ear, which is offset if the same sound in the monitoring is picked up by a mic much closer to the source.

Mix Before Transport

The solution to real-time audio monitoring for At Home production is to handle the monitor mix locally, at the venue, so that the local sources don’t make the long-haul journey at all. Program mix-minus feeds can be returned from the broadcast facility, mixed with the talent’s own voices at the venue, and sent to their IFBs in real time, avoiding the round-trip delay for the local sources.
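The physics alone justify this: even before codec and packetization delays, light in fiber travels at roughly 5 microseconds per kilometer, so a long-haul round trip quickly eats into the small monitoring delay that on-air talent can tolerate. A quick sketch, using a hypothetical 600 km venue-to-hub link:

```python
# Round-trip propagation delay in fiber, before any codec/buffering delay.
US_PER_KM = 5.0   # ~5 microseconds per km for light in fiber

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km * US_PER_KM / 1000.0

print(round_trip_ms(600))   # 6.0 ms for a hypothetical 600 km link;
# add encode/decode and network buffering and the total can easily exceed
# what talent can comfortably tolerate in their in-ear monitoring.
```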

The other challenge is more universal: control. The need for a local monitor mix has led many At Home productions to post an audio operator at the event, but many modern audio mixing products offer remote control via IP, which means the monitor mix can be set up and controlled remotely from the broadcast center, as well as on site via a browser-based interface.

Having the monitor mix as part of the program chain, rather than splitting mics off into a standalone monitor mixer, means one person can control all of the mixes directly from the surface of the main mixer at the facility, in exactly the same way they control their local sources and destinations. They can freely adjust the remote mic gains, fader levels, routing, send and bus output levels from the comfort of their own familiar surface.

The ability to control audio using a web-based GUI over standard IP means the interface can be connected to the mixer whenever it is required. A virtual surface at the venue allows an on-site technician to check local mics and monitors; they can set up the routing for the mics and monitors, and to and from the IP interface.

The Riot Games’ 2019 League of Legends World Championship final in Paris used Calrec’s RP1 technology for the English-language feed for viewers in North America, Europe and Oceania. The company’s European facility in Berlin used At Home technology as part of its design. To avoid the delay that would have occurred had the on-air talent’s IFB been routed to Berlin and back, it was processed on site by Calrec’s RP1 and networked around the Paris site on a Dante IP network.

Audio Without Video

Another consideration is the additional bandwidth required when transporting audio separately from the video (such as when using SMPTE ST 2110), just as there is for remote-control data connections, although both are negligible compared to the video feeds they accompany. The audio requirement can be reduced further by remotely controlling sub-mixes to return, rather than returning all the individual audio sources.
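“Negligible” is easy to quantify; payload-only arithmetic for a typical uncompressed multichannel audio stream (the channel count and comparison feed are illustrative):

```python
# Uncompressed audio payload (as carried by e.g. SMPTE ST 2110-30 / AES67)
# next to one uncompressed HD video feed. Transport overhead ignored.
channels, sample_rate_hz, bit_depth = 16, 48_000, 24
audio_mbps = channels * sample_rate_hz * bit_depth / 1e6   # ~18.4 Mbps

video_mbps = 3000   # one uncompressed HD feed (3G-SDI)
print(f"16-channel audio ~{audio_mbps:.1f} Mbps vs {video_mbps} Mbps video")
```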

“A broadcast audio mixer facilitates broadcasters’ choice for output signal by providing I/O options for analogue, AES3, MADI, SDI, AES67 and more, so all signals (mics, monitors, as well as return feeds), can be transported without needing extra boxes and interconnects,” said Letson. “Broadcasters can choose between a variety of backhaul transports between the venue and the facility.”

In addition, SDI is still popular for many reasons, and it can be processed with IP codecs, so passing SDI feeds through the audio mixer and having it embed its audio output at the remote site reduces the connections to the codec (saving cost). It also provides a convenient way to keep audio in sync with video.

“The industry is in a major transition to IP now, so production companies are still becoming familiar with At Home infrastructures and how to use them most efficiently,” said Grass Valley’s Merrill. “There’s this huge demand for content and there’s no way you can send everybody out to every site. It just does not work. That’s why remote production makes so much sense.”
