Cloud-Native Audio Mixers - Current Developments In Virtualized Broadcast Audio Mixing
As the wider broadcast industry picks up the pace with virtualized, cloud-native production systems, we take a look at what audio vendors currently have available and what may be on the horizon.
When COVID-19 turned everything topsy-turvy two years ago and broadcasters scrambled to adopt technologies that would allow them to keep shows on the air during lockdown, one thing soon became apparent. Cameras, graphics, video editing and other production workflows could be moved to the cloud, allowing producers and creatives to hook up a couple of displays on a desk or kitchen table and work from home. Meanwhile, A1s had to rely on existing REMI (remote integration) setups, bringing a desk, equipment racks and other hardware into their homes, with few, if any, cloud-based options available.
Two years on, there are now several virtualized (cloud-native) audio mixers available, with more on the horizon. That is good timing, because a recent survey of broadcasters worldwide found that 89% intend to adopt cloud technologies in the coming year. That said, only slightly more than one-quarter of respondents deemed cloud tech a priority, although about two-thirds reported having already virtualized post production and file-based production. The survey, conducted by U.K. researcher OnePoll and commissioned by Sony Group’s Nevion brand, questioned 250 senior technology decision-makers across Australia, Asia, Europe and North America.
Working in the cloud, whether public, on-premises or hybrid, offers a variety of benefits, of course. The associated hardware is non-proprietary, which gives users more options when designing systems around commercial off-the-shelf (COTS) computing equipment. Cloud computing and virtualization software are well established (Amazon Web Services, or AWS, launched in 2006, for example), which means that broadcasters need not worry about reliability. Further, on-demand cloud hosts such as AWS, Azure and Google handle maintenance at their end and constantly improve and upgrade their services.
Cloud deployments are scalable (and multiple instances of a virtual audio mixer can be run simultaneously), providing infrastructure on demand and enabling users to scale up or down in response to production requirements, while paying only for what they need on a given project. Deployments can also be temporary, which allows broadcasters to experiment or spin up a service for training purposes, for instance. Plus, functionality in the cloud is not strictly defined as it is in a hardware product, enabling further experimentation and customization opportunities.
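To give a sense of how lightweight that can be in practice, the sketch below uses the AWS boto3 SDK for Python to launch, and later terminate, a single compute instance that could host a mixer for a training session or trial. The AMI, instance type and tags are placeholders, and a real deployment would also need networking, security groups and the mixer software image itself.

```python
# Minimal sketch: launch (and later terminate) a temporary EC2 instance that
# could host a virtualized mixer for a training session. The AMI ID and
# instance type are placeholders, not a vendor recommendation.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Start one instance, tagged so it is easy to find and tear down later.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="c5.2xlarge",         # placeholder CPU-oriented instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "mixer-training-session"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# When the session is over, pay-as-you-go means this is the important call:
ec2.terminate_instances(InstanceIds=[instance_id])
```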
Speaking of cost, capital expenditure may be reduced because there is less hardware to buy and maintain, and real estate and its associated costs (HVAC and so on) can be minimized. As for operating expenses, transport, travel, accommodation, overtime and other costs are significantly reduced by not sending hordes of staff around the country or the world. Those savings must be offset against the service contracts likely to accompany cloud workflows and cloud hosts, but such contracts can generally be tailored to meet business requirements.
As with any virtualized (cloud-native) production system for distributed operational teams, the network between the individual operator and ‘the cloud’ needs sufficient bandwidth, stability and reliability. Whether that is viable over the public internet or requires a managed network of some sort is a subject we cover a lot at The Broadcast Bridge.
One common thread for several vendors seeking to unlock new cloud broadcast workflows has been NDI, a technology developed by NewTek (now part of the Vizrt Group) that debuted at IBC in 2015. NDI is royalty-free and enables video and audio sources to be shared bi-directionally across IP networks in real time. Indeed, as noted by one cloud mixer vendor, ST 2110, being multicast, simply can’t run through AWS. Amazon has been promoting its alternative, AWS CDI (Cloud Digital Interface), since late 2020.
If a developer claims to support ST 2110 with its cloud product, it is more likely using a unicast RTP stream that resembles ST 2110/AES67. That said, a variety of ST 2110 or AES67 audio sources will likely need to be introduced to any cloud system via an appropriate device or service for conversion or transcoding.
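To illustrate why that conversion step exists: an AES67-style stream is just linear PCM (typically L24 at 48 kHz) carried in RTP packets, normally on a multicast address that public cloud networks will not forward. The sketch below packetizes a buffer of 24-bit samples into RTP and sends it unicast; the payload type, packet time and destination are assumptions for illustration only, and there is no PTP clocking, pacing, SDP or SAP as a real AES67/ST 2110-30 sender would require.

```python
# Minimal sketch: wrap 24-bit/48 kHz PCM in RTP packets, AES67-style, and send
# them unicast (multicast generally isn't available in public cloud networks).
# Payload type, SSRC and destination are illustrative assumptions only.
import socket
import struct

SAMPLE_RATE = 48000
CHANNELS = 2
SAMPLES_PER_PACKET = 48          # 1 ms packet time at 48 kHz
PAYLOAD_TYPE = 96                # dynamic payload type assumed for L24
SSRC = 0x12345678
DEST = ("10.0.0.10", 5004)       # placeholder unicast receiver in the cloud

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def rtp_packet(seq, timestamp, payload):
    # 12-byte RTP header: version 2, no padding/extension/CSRC, marker bit 0.
    header = struct.pack("!BBHII", 0x80, PAYLOAD_TYPE, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, SSRC)
    return header + payload

def send_buffer(pcm24_be, seq=0, timestamp=0):
    # pcm24_be: interleaved 24-bit big-endian PCM, as L24 expects.
    # No packet pacing is done here; a real sender paces to the media clock.
    frame_bytes = 3 * CHANNELS * SAMPLES_PER_PACKET
    for offset in range(0, len(pcm24_be), frame_bytes):
        payload = pcm24_be[offset:offset + frame_bytes]
        sock.sendto(rtp_packet(seq, timestamp, payload), DEST)
        seq += 1
        timestamp += SAMPLES_PER_PACKET
    return seq, timestamp

# Example: one second of silence.
send_buffer(bytes(3 * CHANNELS * SAMPLE_RATE))
```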
One of the first cloud mixers to market was Grass Valley’s AMPP Audio Mixer. Grass Valley began development of the Agile Media Processing Platform (AMPP), a Software-as-a-Service solution for remote and distributed productions, around 2017, initially for video. One early adopter was game developer Blizzard Entertainment, which began using the AMPP Audio Mixer for eSports events in February 2020. The AMPP audio mixer handles up to 16 output mixes for mix-minuses and auxes, eight subgroup mixes and eight VCAs, plus a main stereo output. Audio may be introduced via NDI, SDI, ST 2110 and AES67 and is encoded to Opus (the codec also used by Zoom) for transport, then handled uncompressed in the cloud.
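Opus encoding itself is commodity technology; any libopus-based tool can compress a contribution feed in much the same way. The snippet below simply shells out to ffmpeg as an illustration of that codec step, not of how AMPP handles it internally.

```python
# Illustrative only: encode a WAV contribution feed to Opus with ffmpeg/libopus.
# This shows the codec step in general, not Grass Valley's internal pipeline.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "contribution.wav",     # placeholder input file
    "-c:a", "libopus",
    "-b:a", "128k",               # typical contribution bitrate for speech/music
    "contribution.opus",
], check=True)
```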
The mixer presents an HTML5 interface, so it can be operated from a browser or touch screen. But audio engineers typically prefer having real knobs and faders under their hands, so early adopters have reportedly paired the AMPP mixer with compact USB/MIDI hardware control surfaces of the kind commonplace in music production studios.
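Because those controllers speak plain MIDI, the glue code can be trivial. The sketch below uses the mido Python library to read fader moves from a generic USB controller and convert the 7-bit controller values into gain figures a software mixer could consume; the port name and CC-to-channel mapping are assumptions about one particular controller, not anything AMPP-specific.

```python
# Minimal sketch: read fader moves from a generic USB/MIDI controller with mido
# and convert 7-bit CC values to mixer gain. The port name and CC numbers are
# assumptions about one particular controller, not an AMPP-specific mapping.
import mido

PORT_NAME = "X-TOUCH MINI"        # placeholder: whatever the controller enumerates as
FADER_CCS = {1: 1, 2: 2, 3: 3}    # CC number -> mixer channel (assumed layout)

def cc_to_gain_db(value):
    # Map 0..127 to a -60 dB..+10 dB fader range (simple linear law).
    return -60.0 + (value / 127.0) * 70.0

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        if msg.type == "control_change" and msg.control in FADER_CCS:
            channel = FADER_CCS[msg.control]
            print(f"channel {channel}: {cc_to_gain_db(msg.value):+.1f} dB")
```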
In the early days of the pandemic, people like Dave Van Hoy, president of Advanced Systems Group in Emeryville, California, began putting together cloud audio setups for clients using Reaper, a DAW developed by Cockos that runs on Linux (as well as Windows and macOS), which is what allows it to run in the cloud. Reaper handles a multitude of file and metadata formats, and there is a wide range of plug-ins available for processing, panning, streaming, metering and so on, allowing users to effectively build their own systems and configurations. In the U.K., Sky TV has also built out workflows using Reaper over the past couple of years.
The software supports OSC and a variety of hardware control surfaces, and may be used in conjunction with Sienna’s NDI processing engine, which can convert AES67 to NDI. Reaper’s own ReaStream plug-in enables streams between a Sienna infrastructure and Reaper, or other VST-compatible DAWs, via unicast UDP.
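Reaper’s OSC support is a large part of what makes this kind of glue practical. As a hedged example, the snippet below uses the python-osc package to push a fader level and a mute to Reaper; the addresses follow Reaper’s default OSC pattern configuration, and the host and port are whatever is set under Preferences > Control/OSC/web.

```python
# Minimal sketch: remote-control a Reaper fader and mute over OSC with
# python-osc. Addresses follow Reaper's default OSC pattern config; the host
# and port depend on the listener configured in Reaper's preferences.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 8000)   # placeholder Reaper host/port

# Set track 1 fader (values are normalized 0.0-1.0 in Reaper's default config;
# 0.716 sits at roughly unity on the default fader scaling).
client.send_message("/track/1/volume", 0.716)

# Mute track 2.
client.send_message("/track/2/mute", 1)
```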
More recently, Van Hoy has been working with U.S. mixing console manufacturer Harrison on a cloud mixer aimed squarely at broadcast applications. Since the mid-‘80s, Harrison has transitioned from making analog desks under digital control, to massive, multi-operator, fully digital film consoles found on some of Hollywood’s biggest mix stages, to now making its technology available as software, initially through its Mixbus product. The collaboration with Van Hoy has taken that software a step further.
Mixbus VBM (Virtual Broadcast Mixer) is the company's cloud solution for enterprise-scale live broadcast and corporate communications and is optimized for audio input and output using NDI. Harrison also claims that its flexible plug-in architecture allows the connection of other streaming formats “present or future.”
Mixbus VBM provides a broadcast-friendly architecture with features and functions, such as processing and routing, comparable to large-format hardware broadcast desks. For example, there are four dedicated Program buses, four virtual Lobbies that allow producers and talent to communicate with each other outside of the Program, an automatically generated mix-minus for each contributor, and a master monitoring system for the console operator.
Other broadcast-friendly features include an IFB compressor on every mix-minus, which ducks the talent’s program feed when a producer talks to them, as well as comprehensive monitoring and metering. The screen layout is fully customizable, and the software offers dedicated support for the iCON QCon Pro X 9-fader expandable control surface.
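That ducking behaviour is essentially a sidechain compressor: when the producer’s signal exceeds a threshold, the talent’s program feed in the IFB is pulled down and then released once the producer stops talking. The block-based sketch below illustrates the idea only; the threshold, dip depth and release time are arbitrary values and bear no relation to Harrison’s actual processing.

```python
# Very simplified IFB ducking sketch: when the producer (sidechain) signal is
# active, dip the program feed in the talent's IFB and release it afterwards.
# Threshold, dip depth and release time are arbitrary illustration values.
import numpy as np

SAMPLE_RATE = 48000
THRESHOLD = 0.05          # sidechain level above which we duck
DUCK_GAIN = 0.2           # program pulled down to roughly -14 dB while ducked
RELEASE_SECONDS = 0.3

def duck_program(program, producer, block=256):
    out = np.empty_like(program)
    gain = 1.0
    release_step = (1.0 - DUCK_GAIN) / (RELEASE_SECONDS * SAMPLE_RATE / block)
    for start in range(0, len(program), block):
        sc_level = np.abs(producer[start:start + block]).mean()
        if sc_level > THRESHOLD:
            gain = DUCK_GAIN                      # duck immediately
        else:
            gain = min(1.0, gain + release_step)  # recover gradually
        out[start:start + block] = program[start:start + block] * gain
    return out
```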
Back in 2012, Telos Alliance introduced the Axia iQ AoIP console, later relaunched as the first AES67 AoIP desk. The company’s new iQs is the software version, which may run on the AE1000 application engine (Axia’s universal server) but is also available as a Docker container, enabling it to be deployed virtually in the cloud or a server farm. The system is subscription-based for maximum flexibility in terms of features, configuration and scale, and its HTML5 browser-based interface is accessible from any device. The feature set is perhaps more radio-friendly, but it may suit other applications, offering four Program buses, automatic mix-minus on each fader, talkback functions (the system lets operators talk to phone/codec sources) and automix of on-air mics.
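Container packaging is also what makes that kind of deployment quick to stand up alongside other services. As a sketch only, the snippet below uses the Docker SDK for Python to run a containerized mixer application and expose its browser UI; the image name and ports are hypothetical, since the real iQs container and licensing come from Telos Alliance.

```python
# Sketch only: run a containerized mixer application with the Docker SDK for
# Python and expose its browser UI. The image name and ports are hypothetical;
# the real iQs container and licensing are supplied by Telos Alliance.
import docker

client = docker.from_env()

container = client.containers.run(
    "example/virtual-mixer:latest",    # hypothetical image name
    name="iqs-demo",
    detach=True,
    ports={"80/tcp": 8080},            # browser UI reachable on host port 8080
    restart_policy={"Name": "unless-stopped"},
)
print(container.short_id)
```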
In June 2022, Waves Audio launched the Cloud MX audio mixer, a software mixer that is deployed on AWS and is NDI-compatible. The basic configuration offers 64 channels of I/O along with a generous complement of aux, group and matrix buses, plus mute groups.
Waves is known for its plug-ins, of course, so there is an abundance of processing available. Chains of up to eight cloud-licensed plug-ins may be instantiated on any channel for customized configurations. The Cloud MX system is supplied with a basic plug-in package, upgradeable to two other packages; the Premium pack adds over 150 cloud-licensed plug-ins, including the Dugan Speech plug-in. The system supports up to four touchscreens as well as tactile mixing with the Waves FIT controller and/or Mackie/MIDI controllers.
Exciting Times Ahead
It seems inevitable that, as the rest of the industry goes cloud-native, so will audio, and it is exciting to see the beginning of that journey taking shape. These systems probably still lack the capacity and comprehensive feature sets required for mid to large scale production, but they are likely to grow, and there are signs that the established broadcast audio console vendors will also bring to market cloud-based systems with the capacity and capability to handle full-scale production.
In July 2022, Audiotonix announced that it is at the proof-of-concept stage with technology that will reportedly provide the backbone for Calrec and Solid State Logic, two of its brands, to further develop cloud-based mixing solutions built around their respective production workflows and feature sets. Its patented x86 CPU optimal core processing (OCP) technology for audio processing algorithms is being rewritten to run in virtualized Linux on public cloud services. The idea is that multiple surfaces and software instances connect to each other remotely over the internet for distributed control positions. Details of initial trials are not public and no timeline for availability has been given.