Auto-Mix As One Element On The Way To Object-Based Audio

The world of broadcast audio is about to reach new levels as the industry embraces the future with Next Generation Audio (NGA). While precisely which features will be offered remains unknown, several 3D immersive formats are already under development and will soon find their way into broadcast production and distribution.

Unlike the constrained world of channel-based coding, these new NGA codecs will support more channels and/or object-based audio coding. For the consumer, there will be two major benefits: a greater sense of involvement or immersion, and a degree of personalisation.

Immersive 3D audio is undoubtedly one aspect of Next Generation Audio. In contrast, personalised audio can be incorporated into traditional, standard channel-based formats, just as stereo or mono are delivered today. The key to enabling such features is Object-Based Audio (OBA).

Inside Object-Based Audio

Object-Based Audio (OBA) will give users the option of personalising their experience by selecting from a number of audio sources and controlling their level, and maybe even their position, in the mix. With OBA, an “object” is essentially an audio stream with accompanying descriptive metadata. The metadata carries information about where and how to render the object in the reproduced mix.

That might sound complicated, but in fact it is very straightforward. Two versions of a commentary plus one mono FX stream already constitute an OBA format, and having the ability to choose between one commentary track and the other is already personalised audio. This system works fine as long as both commentary tracks reach the recipient's home as separate audio channels and are not mixed into the audio bed.
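
To make the idea concrete, an object can be sketched as nothing more than an audio stream bundled with descriptive metadata, and personalisation as the decoder choosing which objects to render. The field names below are purely illustrative and are not taken from any particular NGA standard.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """One OBA element: an audio stream plus its descriptive metadata."""
    name: str          # e.g. "Commentary EN"
    samples: list      # the audio stream itself (mono here, for simplicity)
    gain_db: float     # default level the renderer should apply
    position: tuple    # (azimuth, elevation) rendering hint
    selectable: bool   # may the viewer switch this object on or off?

silence = [0.0] * 48000    # one second of placeholder audio at 48 kHz

# The example from the text: one mono FX stream plus two commentary versions.
fx_bed        = AudioObject("FX bed",        silence, gain_db=0.0,  position=(0, 0), selectable=False)
commentary_en = AudioObject("Commentary EN", silence, gain_db=-3.0, position=(0, 0), selectable=True)
commentary_de = AudioObject("Commentary DE", silence, gain_db=-3.0, position=(0, 0), selectable=True)

# Personalisation is simply the home decoder choosing which objects to render.
programme = [fx_bed, commentary_en]    # this viewer picked the English commentary
```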

In common parlance, Auto-Mix means balancing dynamic input levels so they have equal power output at the summation point. This can also be described as conference auto-mixing, where unused microphone channels receive less gain and therefore noise and crosstalk are automatically reduced.
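
The gain-sharing idea behind conference auto-mixing can be sketched in a few lines. This is only a simplified block-based approximation; a real processor works on smoothed, continuously updated level measurements.

```python
import numpy as np

def gain_sharing_automix(channels: np.ndarray, floor_db: float = -40.0) -> np.ndarray:
    """Block-based sketch of gain-sharing auto-mixing.

    channels has shape (num_mics, num_samples) for one short audio block.
    Each mic's gain is its share of the total input power, so quiet
    (unused) channels contribute far less noise and crosstalk while the
    summed output stays at a roughly constant level.
    """
    power = np.mean(channels ** 2, axis=1) + 1e-12    # per-channel power in this block
    share = power / power.sum()                       # each mic's fraction of the total power
    gains = np.maximum(share, 10 ** (floor_db / 20))  # never mute a channel completely
    return (channels * gains[:, None]).sum(axis=0)    # apply the gains and sum to one output

# Example block: one active talker and two idle microphones.
block = np.vstack([
    0.5  * np.random.randn(4800),   # active mic
    0.01 * np.random.randn(4800),   # idle mic (room noise only)
    0.01 * np.random.randn(4800),   # idle mic (room noise only)
])
out = gain_sharing_automix(block)
```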

Figure 1. Jünger’s audio technology enables separate feeds to be automatically mixed into a programme feed, as shown in this diagram.

Another Auto-Mix method is A/B crossfade, in which a crossfade from source A to source B is automatically performed in response to a pre-defined trigger. If a sequence of audio elements is being used to create the audio programme, then a typical procedure would involve sequentially switching the sources – for example, presentation, clip, promo, presentation, clip and so on.
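
A minimal sketch of such a triggered A/B crossfade, here using an equal-power fade, might look like this. The fade length and the way the trigger is raised are illustrative only.

```python
import numpy as np

def ab_crossfade(a: np.ndarray, b: np.ndarray, fade_samples: int) -> np.ndarray:
    """Equal-power crossfade from source A to source B, starting when the trigger fires."""
    t = np.linspace(0.0, 1.0, fade_samples)
    fade_out = np.cos(t * np.pi / 2)                  # A fades out
    fade_in  = np.sin(t * np.pi / 2)                  # B fades in
    faded = a[:fade_samples] * fade_out + b[:fade_samples] * fade_in
    return np.concatenate([faded, b[fade_samples:]])  # then continue with B alone

# Trigger example: an automation event at a presentation/clip boundary.
presentation = np.random.randn(48000)                      # source A
clip         = np.random.randn(48000)                      # source B
out = ab_crossfade(presentation, clip, fade_samples=4800)  # 100 ms fade at 48 kHz
```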

A genuine mix is the result of Auto-Voice-Over mixing, in which one audio element is laid over the audio bed. This kind of Auto-Mix can be triggered by the producer or by an automation system that takes a level-controlled ‘voice’ input and lays it over the audio bed in a process known as ‘ducking’.
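
A bare-bones version of this ducking behaviour is sketched below. The threshold, duck depth and block size are illustrative, and a real processor would also apply attack and release smoothing to the gain.

```python
import numpy as np

def auto_voice_over(bed: np.ndarray, voice: np.ndarray,
                    threshold: float = 0.02, duck_gain: float = 0.3,
                    block: int = 480) -> np.ndarray:
    """Lay a level-controlled voice over the bed, ducking the bed while the voice is active."""
    out = bed.copy()
    for start in range(0, len(bed), block):
        v = voice[start:start + block]
        if np.sqrt(np.mean(v ** 2)) > threshold:   # voice detected in this block
            out[start:start + block] *= duck_gain  # pull the bed down under the voice
    return out + voice                             # the voice laid over the ducked bed

bed   = 0.3 * np.random.randn(48000)               # one second of programme bed at 48 kHz
voice = np.concatenate([np.zeros(24000),           # silence, then half a second of voice
                        0.2 * np.random.randn(24000)])
mixed = auto_voice_over(bed, voice)
```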

The question one needs to ask is which of these Auto-Mix methods is most relevant to an object-based workflow.

Jünger Audio D*AP8 digital audio processor.

Choosing your tools

One of the major challenges for the production industry will be to create OBA production strategies. This means completely rethinking how a final mix is created, because with OBA, it will be performed at home by the viewer rather than by a mixer in a post facility.

Keep in mind that as soon as a post house mixes objects (different language commentaries, for example) into the audio bed, they are gone and are no longer available for personalisation by the viewer. To give viewers the chance to personalise their audio, the production workflow must change and deliver separate ‘unmixed’ channels so that the home receiver and decoder can complete the final mix. This is very different from how we currently mix for surround or for standard stereo audio.
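
A hypothetical receiver-side render step might look like the sketch below, assuming the bed and the objects arrive as separate, unmixed streams together with their metadata.

```python
import numpy as np

def render_at_home(bed: np.ndarray, objects: dict, choices: dict) -> np.ndarray:
    """The home decoder, not the post house, performs the final mix.

    objects maps object names to their unmixed audio streams, delivered
    alongside the bed; choices maps the objects the viewer switched on
    to the gain (in dB) the viewer chose for each of them.
    """
    mix = bed.copy()
    for name, gain_db in choices.items():
        mix = mix + objects[name] * 10 ** (gain_db / 20)  # apply the viewer's level choice
    return mix

bed = 0.3 * np.random.randn(48000)
objects = {
    "commentary_en": 0.2 * np.random.randn(48000),
    "commentary_de": 0.2 * np.random.randn(48000),
    "stadium_fx":    0.1 * np.random.randn(48000),
}
# This viewer picks the German commentary slightly louder and keeps the stadium FX low.
mix = render_at_home(bed, objects, {"commentary_de": 2.0, "stadium_fx": -6.0})
```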

Take the next step

A first step is education. Help the audio production staff understand this new way of working whereby they no longer create a final mix. Metadata is key to successful implementation.

Review current workflows. As content is created or added to a mix, be sure the accompanying metadata survives. That metadata is the key to any object-based audio tracks surviving postproduction for delivery to the broadcast transmission facility.
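
One simple safeguard is a sanity check at each handover point to confirm that an object's metadata is still complete. The field names below are hypothetical and not drawn from any specific delivery specification.

```python
REQUIRED_FIELDS = {"name", "language", "default_gain_db", "selectable"}   # hypothetical field set

def metadata_survived(object_metadata: dict) -> bool:
    """Return True if an object's descriptive metadata made it through a workflow step intact."""
    return REQUIRED_FIELDS.issubset(object_metadata)

# Check the handover from the post facility to the playout/transmission chain.
handoff = {"name": "Commentary EN", "language": "en",
           "default_gain_db": -3.0, "selectable": True}
assert metadata_survived(handoff), "object metadata was lost somewhere in the chain"
```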

As consumers seek out new customisable and immersive audio environments, broadcasters who can supply them will benefit. Content and programme production facilities can plan now for the tools necessary to enable these new features. A related benefit of implementing these changes will be a faster and more cost-effective production workflow.

Peter Poers, Managing Director, Jünger Audio.
