Future Technologies: Private 5G Vs Managed RF
We continue our series considering technologies of the near future and how they might transform how we think about broadcast, this time asking whether building your own private 5G network could be an excellent replacement for managed RF.
In live production, keeping audio latency low for earpiece foldback is essential so that performers do not hear a delayed version of the foldback mix, especially where RF microphones and earpieces are used. By reducing latency and keeping it deterministic, 5G can improve how audio is delivered to performers during live events, which in turn improves the immersive experience.
RF is a fundamental component of broadcasting; however, over the years the pressure on spectrum allocation has increased as governments throughout the world have sold off large chunks of RF spectrum to telcos for their mobile services. This has not only reduced the bandwidth available for program transmission to the home, but has also impacted live production, especially audio, both in the studio and on location.
RF Foldback Latency
During a live performance, the musicians will need to hear the foldback mix of the rest of the band, as well as their own instruments and voices. Although this can be easily accomplished using cabled earpieces, the trend for performers to move around the stage uninhibited by cables has meant that RF solutions have been employed. This is in addition to radio mics and instrument transmitters such as those used on guitars.
As the radio frequency spectrum continues to be squeezed, broadcasters are finding it increasingly difficult to accommodate the tens, and often hundreds, of RF audio sources and destinations that need to be carried over RF signal paths. Audio compression is often used to reduce the number of RF carriers required, but this adds latency, which can cause serious problems for musicians performing live. For example, a performer singing with a handheld radio mic and using a radio earpiece needs near instantaneous audio in their foldback mix, otherwise the performance will be greatly compromised by the delay in what they hear. Typically, 4ms is considered the maximum a performer can tolerate. If 1ms is used for compression, 1ms for decompression, and 1ms for processing, then only 1ms remains for RF transmission.
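As a rough sketch of that latency budget, the following arithmetic uses the figures quoted above (a 4ms tolerance and 1ms each for compression, decompression and processing); the structure of the snippet itself is purely illustrative.

```python
# A minimal sketch of the foldback latency budget described above. The 4ms
# tolerance and the 1ms allowances come from the text; everything else is
# purely illustrative.

TOLERANCE_MS = 4.0                # maximum delay a performer can tolerate

budget_ms = {
    "compression": 1.0,           # encode audio for the RF link
    "decompression": 1.0,         # decode at the earpiece receiver
    "processing": 1.0,            # mixing / DSP in the foldback chain
}

rf_transmission_ms = TOLERANCE_MS - sum(budget_ms.values())
print(f"Budget remaining for RF transmission: {rf_transmission_ms:.1f} ms")
# -> Budget remaining for RF transmission: 1.0 ms
```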
OFDM Processing
If we consider the propagation of electromagnetic waves travelling through the air, then 1ms should be ample; however, this assumes a single carrier with little processing. In a stage environment where performers are actively moving around amongst many reflective surfaces, a single carrier approach will be suboptimal as dropouts will occur due to multipath and reflections. A solution to this is OFDM, where the data is spread across multiple carriers with the expectation that individual carriers will rise and fall relative to the others, resulting in a resilient system that delivers loss-free data, or in this case, audio samples.
As with all things engineering, there is always a compromise, and the price we pay for highly resilient RF data delivery in volatile environments is latency. To improve the resilience of the data delivery, the TTI (Transmission Time Interval) is increased within the OFDM network, which compromises the latency. The TTI is how long each data symbol exists on the RF carriers before the next data symbol is sent.
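To make the trade-off concrete, the sketch below relates subcarrier spacing, OFDM symbol duration and TTI. It is an illustration of the principle only: the cyclic prefix is ignored, and the spacings and the 14-symbol TTI are assumptions rather than the parameters of any particular system.

```python
# An indicative sketch of the resilience/latency trade-off: narrower subcarrier
# spacing gives longer OFDM symbols (better multipath tolerance) but stretches
# the TTI. Cyclic prefix overhead is ignored and the figures are illustrative.

def symbol_duration_us(subcarrier_spacing_khz: float) -> float:
    """Useful OFDM symbol duration is the reciprocal of the subcarrier spacing."""
    return 1000.0 / subcarrier_spacing_khz

def tti_us(subcarrier_spacing_khz: float, symbols_per_tti: int) -> float:
    """A TTI spans a fixed number of consecutive OFDM symbols."""
    return symbols_per_tti * symbol_duration_us(subcarrier_spacing_khz)

for spacing_khz in (15, 30, 60):
    print(f"{spacing_khz} kHz spacing -> "
          f"{tti_us(spacing_khz, 14):.0f} us per 14-symbol TTI")
```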
Vendors do provide their own OFDM solutions for radio mic and earpiece use-cases, but there are several disadvantages to using these. Firstly, the number of microphones and earpieces used for a large performance can easily run into the hundreds of devices. Not only does this put a massive strain on the number of frequencies required, but it also makes RF spectrum planning incredibly challenging, as multiple pieces of equipment must be configured independently of each other because the time division multiplexing that further divides up the RF spectrum is limited to each vendor. Secondly, RF spectrum allocation in different countries is not a trivial job, and any performance on tour will need meticulous RF spectrum planning for every piece of equipment.
Centralizing RF Through Private 5G
One solution to this problem is to use a centralized RF approach, and here broadcasters can learn from telcos by using cellular technology. 4G LTE is a well-established technology and makes provision for private networks. This allows live performance organizers to build their own infrastructure to provide the necessary audio RF links to the performers' microphones and earpieces. However, the latencies are non-deterministic and can easily stretch to 10ms, which is unacceptable for live performers using RF foldback. But 5G now provides an alternative through the operation of private networks.
The 3GPP (3rd Generation Partnership Project) 5G specification provides three types of service: enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communication (URLLC). All three have specific use-cases, but URLLC is ideally suited to live performance production due to its high reliability and very low latency, typically 1ms. Furthermore, the latency is deterministic. By keeping the latency to 1ms, the application of a performer with a radio mic and radio earpiece can be served by a private 5G network.
5G is designed to be flexible and dynamic, meeting the demands of a wide variety of applications, from autonomous vehicles to video and audio streaming, and adapting to each type of application as required. To help achieve this, the OFDM carrier spacing and TTI can be varied so that a balance of bandwidth and low latency is achieved.
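As one concrete example of this flexibility, the 3GPP NR numerology doubles the subcarrier spacing and halves the slot duration with each step of the numerology index, starting from 15kHz and 1ms. The short sketch below simply evaluates that scaling; the helper function name is our own, used for illustration.

```python
# 3GPP NR numerology: subcarrier spacing is 15 kHz * 2**mu and the slot
# duration is 1 ms / 2**mu, with 14 OFDM symbols per slot (normal cyclic
# prefix). The helper below simply evaluates that scaling.

def nr_numerology(mu: int) -> tuple[float, float]:
    """Return (subcarrier spacing in kHz, slot duration in ms) for numerology mu."""
    return 15.0 * 2 ** mu, 1.0 / 2 ** mu

for mu in range(4):
    spacing_khz, slot_ms = nr_numerology(mu)
    print(f"mu={mu}: {spacing_khz:.0f} kHz spacing, {slot_ms:.3f} ms slot")
# mu=0: 15 kHz, 1.000 ms ... mu=3: 120 kHz, 0.125 ms
```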
Optimizing RF Resource
Built on LTE technology, 5G essentially operates a combined TDM (Time Division Multiplex) and FDM (Frequency Division Multiplex) network. Using the 4G LTE public networks as an example, each mobile phone listens to the cellular network around it and determines where the radio-frames start and end. Each of these frames is subdivided into slots; the mobile phone determines where the TDM slots are located, negotiates with the network controller on which slot to receive from and transmit to, and then sends and receives data. As the RF modulation scheme is also OFDM, the TDM slots provide a method of frequency and time slicing the RF resource so that multiple phones can use the same frequencies. 5G builds on this by varying the OFDM carrier spacing and adding further granularity to the time slicing of the slots. These two measures combined provide the ultra-low latency and high resilience of URLLC in 5G.
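As a simplified illustration of this combined time and frequency division, the toy allocator below hands each device a slot and carrier pair from a shared grid. The grid dimensions and device names are assumptions for the example; a real network controller performs this allocation dynamically and negotiates with each device.

```python
# A toy illustration of combined time and frequency division: a central
# controller assigns each device a (slot, carrier) pair from a shared grid so
# that many mics and earpieces can reuse the same spectrum. Grid sizes and
# device names are purely illustrative.

from itertools import product

SLOTS_PER_FRAME = 10        # time dimension (TDM)
CARRIERS = 12               # frequency dimension (FDM / OFDM sub-bands)

devices = [f"radio_mic_{i:02d}" for i in range(8)] + \
          [f"earpiece_{i:02d}" for i in range(8)]

resource_grid = product(range(SLOTS_PER_FRAME), range(CARRIERS))
allocation = dict(zip(devices, resource_grid))

for device, (slot, carrier) in allocation.items():
    print(f"{device:>14}: slot {slot}, carrier {carrier}")
```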
Although broadcast vendor devices do provide OFDM and TDM in their RF solutions, they do so in isolation: different vendors cannot integrate with each other, and they do not make optimal use of the available RF resource. Private 5G delivers a centralized method of resource allocation and control for every device on the RF network, which in the case of live performance means many radio mics and radio earpieces. The 5G private network can be configured to apply prescribed bandwidths and deterministic latencies to the performers' equipment so that their monitoring is consistent and their performance is optimized.
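In practice this central control amounts to assigning each device a service profile. The sketch below is hypothetical; the field names and values are illustrative rather than any vendor's actual API, but it conveys the kind of prescribed bandwidth and deterministic latency targets described above.

```python
# A hypothetical service profile for performer devices on a private 5G network.
# Field names and values are illustrative only and do not represent any
# vendor's actual configuration API.

FOLDBACK_PROFILE = {
    "service_type": "URLLC",          # ultra-reliable low-latency communication
    "target_latency_ms": 1.0,         # deterministic latency target
    "guaranteed_bit_rate_kbps": 512,  # per-device audio bandwidth (illustrative)
}

devices = {
    "vocal_mic_01":    {**FOLDBACK_PROFILE, "direction": "uplink"},
    "iem_vocalist_01": {**FOLDBACK_PROFILE, "direction": "downlink"},
}

for name, profile in devices.items():
    print(name, profile)
```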
Simplified Spectrum Planning
Another major advantage of using private 5G is that frequency spectrum planning becomes much easier, as a far smaller range of frequencies is required. Although this still requires international spectrum planning when on tour, the number of frequencies needed is greatly reduced and the system configuration is simplified because the available resource is optimized by one central controller.
Private networks can be built using 5G technology as the applications are not limited to public networks and national communications infrastructure providers. Building your own private 5G network to facilitate radio microphones and earpieces certainly solves a lot of problems when working on location or in the studio. And there’s no reason why a private 5G network could not be built within a broadcast facility to allow all the studios to share the scarce RF spectrum resource and provision roaming microphones and earpieces.