Audio For Broadcast: Traditional Signal Flow
We explore the typical signal flow from source to playout within common broadcast studio workflows. How does the audio get from here to there and what needs to happen to it along the way?
Widespread connectivity has enabled modern broadcast infrastructures to be created anywhere and everywhere. Remote, distributed, or at-home broadcasting; whatever you call it, broadcast workflows are constantly adaptable and endlessly flexible.
But while the origination, processing and management of signals may have all shifted, the fundamentals of the signal flow haven’t changed at all.
We’ve previously discussed how broadcast networks transport audio signals from on-prem or remote I/O boxes to the audio mixer, and how monitoring on the studio floor is managed through combinations of foldback monitors, in-ear monitoring and a variety of comms channels.
Although the geography may be displaced, every source still ends up at the mixing console to be managed, processed, mixed, and distributed to wherever it needs to go. The television production ecosystem still needs to keep everyone in touch with each other and audio is still the mechanism which enables this to happen.
Audio signal flow is the roadmap which gets all these things done. But while signal flow always starts and ends in the same place, the route it takes is never the same twice.
Get With The Programme
The reason is simple: the physical flow of each individual source can never be standardized because it should always be sympathetic to the broadcast. Will the production need any foldback? Does the talent need to hear replay inserts or interview questions? Are there any singers who need to hear a full mix of the band? Do people on set need help hearing each other on stage? Where are the I/O boxes located? How many programme outputs are being generated? Are individual stems being recorded for repurposing in the future?
Every single element has an effect on how the signal is managed, and different television genres will have different requirements.
News
For example: local news provides live reporting and regional newsroom coverage every single day, with incoming sources from external correspondents who might all need a mix-minus feed (the full programme mix with their own contribution removed, so they never hear themselves back). With multiple correspondents in the field, these feeds must integrate with the talkback frame so that each source is associated with the correct mix-minus.
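To make the idea concrete, here is a minimal Python sketch of a mix-minus calculation; the source names are illustrative, and each feed is reduced to a handful of sample values rather than a real-time signal:

```python
import numpy as np

# Illustrative sources only: each contributor is represented here by a
# short block of audio samples rather than a live feed.
sources = {
    "studio_anchor": np.array([0.1, 0.2, 0.1]),
    "field_reporter_1": np.array([0.05, 0.0, 0.05]),
    "field_reporter_2": np.array([0.0, 0.1, 0.2]),
}

full_mix = sum(sources.values())  # the complete programme mix

def mix_minus(recipient: str) -> np.ndarray:
    """Return the full mix with the recipient's own contribution removed."""
    return full_mix - sources[recipient]

for name in sources:
    print(name, "hears:", mix_minus(name))
```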
Management of those signals needs to be clear and straightforward, especially when breaking news can change everything in an instant; 24-hour news channels need to maintain these workflows across shift changes, so the signal path must be extremely well-defined.
Sports
Sports broadcast workflows share many of the same requirements, but the signal flow will be different again. Although likely to share a reliance on external correspondents, how the Audio Control Room (ACR) manages the signals will depend on how an event is mixed.
A remote production environment will require individual signals coming into the ACR to be mixed in the studio, but bigger events might have an outside broadcast truck sending a full presentation mix to the studio to add commentary or analysis. There may be a combination of both, and audio capture may differ too; fast-moving sports like MotoGP or F1 use an automated technique called audio-follows-video, which associates a camera with a mic capture. These shots are often too fast for a human operator to deal with, so the mic is triggered automatically when the vision mixer cuts to the shot. Again, the overall signal flow is unchanged in that the signal still arrives at the console, but the element is not switched in by the audio operator.
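The underlying logic is simple enough to sketch. The camera and mic names below are hypothetical, and a real system would receive tally events from the vision mixer and apply smooth crossfades rather than hard switching:

```python
# Illustrative camera-to-mic associations for an audio-follows-video setup.
camera_to_mic = {
    "cam_1_pit_lane": "mic_1",
    "cam_2_start_line": "mic_2",
    "cam_3_chicane": "mic_3",
}

mic_levels = {mic: 0.0 for mic in camera_to_mic.values()}

def on_camera_cut(camera: str) -> None:
    """Open the mic paired with the on-air camera and close the rest."""
    live_mic = camera_to_mic[camera]
    for mic in mic_levels:
        mic_levels[mic] = 1.0 if mic == live_mic else 0.0

on_camera_cut("cam_2_start_line")
print(mic_levels)  # mic_2 is open; the others are closed
```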
Light Entertainment
Live audience shows introduce even more complexity to the signal flow.
For example, an audience requires a public address (PA) system, which is often the responsibility of the broadcast mix engineer in addition to the programme output. An audience show might even have an independent PA desk inserted into the signal flow between the audience and the main mixer; the same might apply to bigger musical acts, where the artists might have more complex foldback requirements.
In these environments signal workflows can include stagebox splits, where an I/O box on the studio floor splits each studio mic to separate locations, enabling the same signals to go to a PA desk as well as the broadcast desk for the on-air mix. Shows like Eurovision, where musical acts are cued by the television production, might have splits running the other way too, with cue feeds coming out of the broadcast desk alongside the mic splits coming in.
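Conceptually, a split is just a fan-out routing table, as this small sketch with illustrative source and destination names suggests:

```python
# Each studio mic fans out to both desks, while a cue feed from the
# broadcast desk travels the other way to the stage.
splits = {
    "vocal_mic_1": ["pa_desk", "broadcast_desk"],
    "vocal_mic_2": ["pa_desk", "broadcast_desk"],
    "cue_feed_from_broadcast_desk": ["stage_monitors"],
}

for source, destinations in splits.items():
    print(source, "->", ", ".join(destinations))
```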
These examples are not exhaustive; there are hundreds of ways the broadcast and the genre influence the signal flow, but the point is to be sympathetic to the production's needs and to adapt the signal flow to meet them.
Signal Flow Through The Console
Irrespective of a signal’s roadmap, every single one ends up at the audio console (whether that’s a physical console or processing software in the cloud, but that’s a whole other story). The console processes, cleans up, organizes, manages and feeds signals wherever they need to go, in whatever configuration they need to be in, from start to finish, from input to output. Or, more realistically, outputs.
From the channel input, the signal passes through a series of processing functions such as EQ, filters and pan controls. These enable the audio operator to tweak the signal, such as removing noise from particular frequencies or panning the signal to an appropriate area in the soundfield.
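As one concrete example of those channel functions, a pan control is often implemented as a constant-power pan law. The sketch below is a simplified illustration rather than any particular console's implementation:

```python
import numpy as np

# position runs from 0.0 (hard left) to 1.0 (hard right); at the centre
# each side receives ~0.707 of the signal, keeping total power constant.
def pan(sample: float, position: float) -> tuple:
    angle = position * np.pi / 2
    return sample * np.cos(angle), sample * np.sin(angle)

print(pan(1.0, 0.0))  # hard left
print(pan(1.0, 0.5))  # centred
print(pan(1.0, 1.0))  # hard right
```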
But what if the signal needs some more processing which the console can’t handle? Some noise reduction, for example, or echo cancellation, or you just want to use a specific EQ you love the sound of?
Outboard Processing
Well, you can. An “Insert Send/Return” enables the operator to patch external processing into various points in the signal path; the signal is extracted to do that specific job, then returned to the chain for any additional processing, such as dynamics.
The “Insert Send” sends the signal out, and the “Insert Return” brings it back into the processing chain. The faders (sometimes called “sliders” or “those cool things that look like they are from Star Wars”) control the level of a single channel or a group of channels (we’ll get to groups in a bit).
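In signal terms, an insert temporarily hands the whole channel signal to the external unit and carries on with whatever comes back. Here is a minimal sketch, where a crude gate stands in for the outboard processor:

```python
import numpy as np

# The "external unit" is a crude gate that zeroes very quiet samples,
# standing in for any outboard processor.
def external_noise_reduction(x: np.ndarray) -> np.ndarray:
    return np.where(np.abs(x) < 0.05, 0.0, x)

channel = np.array([0.02, 0.4, -0.03, 0.6])
insert_send = channel                                  # signal leaves the chain
insert_return = external_noise_reduction(insert_send)  # and comes back processed
# insert_return now continues to dynamics, the fader and the mix bus.
print(insert_return)
```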
An “Auxiliary Send” or ‘aux’ works slightly differently. Where an insert sends the signal from a single channel to a processor, an aux system allows the signals from multiple channels to be sent to the same destination (either for processing or to build a foldback mix). An aux send also splits the signal in two; the original continues through the channel while a copy is routed out of the console for external processing, and could even be brought back in again as a new signal. A pre-fader aux send might be used to feed foldback outputs, ensuring that talent can still hear what they need to irrespective of whether the fader is up or down, while a post-fader aux send might be used for additional processing.
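The pre-fader versus post-fader distinction is easiest to see if we treat the fader as a simple gain multiplier, as in this illustrative sketch:

```python
import numpy as np

signal = np.array([0.5, 0.5, 0.5])
fader = 0.0  # fader pulled all the way down

pre_fader_aux = signal.copy()    # foldback still hears the source
post_fader_aux = signal * fader  # follows the fader, so silent here
channel_output = signal * fader  # the programme path is also silent

print("pre :", pre_fader_aux)
print("post:", post_fader_aux)
```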
In fact, aux sends are often useful for parallel processing, where an effect is applied to one signal and blended with the original to a greater or lesser degree to create the right balance between processed and unprocessed sound.
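A minimal sketch of that blend, with a hard clipper standing in for whatever effect the aux might feed:

```python
import numpy as np

dry = np.array([0.2, 0.9, -0.8, 0.4])

def effect(x: np.ndarray) -> np.ndarray:
    # A hard clipper standing in for compression, reverb or any other effect.
    return np.clip(x, -0.5, 0.5)

blend = 0.3  # 30% processed, 70% original
output = (1.0 - blend) * dry + blend * effect(dry)
print(output)
```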
Order Please
Just because we know where all the adjustments can be made, it doesn’t mean we are free to apply them wherever we like. In the same way that the signal route must be sympathetic to the broadcast, process ordering must also benefit the workflow and the demands of the signal. Most consoles provide the ability to apply processing at various points in the channel, such as pre-fader, post-fader, pre-EQ, post-EQ, and so on. Each of these will have a bearing on how the signal is processed, and an understanding that the order affects the signal output is paramount.
We previously discussed how the output of a signal which has had EQ applied before compression will differ from a signal which has had EQ applied after a compressor. The same principle applies here, but with the added complication that any additional processing plays a part, and anything applied pre-fader or post-fader may have a similarly dramatic effect downstream when levels are adjusted.
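A toy example makes the point. With a crude gain boost standing in for EQ and a hard ceiling standing in for a compressor, swapping the order produces different results:

```python
import numpy as np

signal = np.array([0.2, 0.6, 0.9])

def eq_boost(x):
    return x * 2.0                # crude EQ: a broadband +6 dB gain

def compress(x):
    return np.clip(x, -0.7, 0.7)  # crude compressor: a hard ceiling

print(compress(eq_boost(signal)))  # EQ first: boosted peaks hit the ceiling
print(eq_boost(compress(signal)))  # compressor first: the ceiling is then boosted
```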
Management
Speaking of levels, with the exception of isolated stems, which are output directly into Digital Audio Workstations (DAWs) or Media Asset Management (MAM) systems for repurposing, signals are seldom left alone.
In a hectic live environment, sound supervisors don’t want to be searching the whole desk for individual channels like an overworked concert pianist. Come showtime, they want to be riding as few faders as possible, and this is where subgroups come into their own.
Group and main busses provide more efficient control of multiple sources, allowing signals to be processed and grouped together so an operator can control them with fewer faders. An often-cited example is a drum kit, which will be covered by multiple microphones that might all be grouped together to provide level control of the whole kit on a single fader.
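In signal terms the subgroup sums its members and applies a single gain to the result, as in this sketch with illustrative mic names:

```python
import numpy as np

# On the bus the signals are summed, then one group fader scales the result.
drum_mics = {
    "kick": np.array([0.4, 0.1, 0.0]),
    "snare": np.array([0.0, 0.5, 0.1]),
    "overheads": np.array([0.2, 0.2, 0.2]),
}

drum_group = sum(drum_mics.values())  # summed onto the group bus
group_fader = 0.8                     # one fader rides the whole kit
print(drum_group * group_fader)
```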
A Voltage-Controlled Amplifier (VCA) group is another way to achieve a similar result and is also widely used in broadcast signal flows. Unlike subgroups, VCAs do not sum signals together and so don’t use any valuable processing resources. They operate more like remote controls for groups of faders, and although an operator can’t apply any processing to them, they provide a quick and easy way to manage multiple signal levels.
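A matching sketch shows the difference: nothing is summed, and the VCA simply scales each member's own fader:

```python
# No audio is summed here: the VCA scales each member's own fader, so
# every channel keeps its place in the signal path and its relative level.
channel_faders = {"kick": 0.9, "snare": 0.8, "overheads": 0.6}
vca_level = 0.5  # one control applied to every member fader

effective = {ch: f * vca_level for ch, f in channel_faders.items()}
print(effective)
```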
Signal Flow
Signal workflows are as unique as the production needs them to be, and an overview can only scratch the surface of what might be required. The key to a successful signal flow is to understand the requirements of the output in relation to the limitations of the inputs, and to apply the right techniques to bridge that gap.
Luckily, that is exactly what the audio desk is designed to do.