Audio For Broadcast: Noise & Signal Repair

Understanding where noise creeps in and how to minimise it is a key audio skill, but sometimes, inevitably, modern noise reduction tools are a lifesaver.

Shhhhh!

As we have already learned, noise reduction is a lot of work for just one pair of ears.

Understanding the dynamic range of a signal and recognizing the frequencies within it is a critical part of shaping its output. We’ve already talked about how to refine the effects of noise interference by applying dynamics and EQ, techniques which can be developed and honed over time and which are useful tools to control noise and ensure compliance.

Here’s some more good news: there are other real-time tools you can employ which will do a lot of the heavy lifting for you, straight away, with no messing around.

It’s A Kind Of Magic

You don’t have to be an engineer to understand the benefits of automatic noise reduction. Anyone who has tried noise-cancelling headphones knows what a huge difference it makes. Turning on that switch on the tube or a plane and hearing all the extraneous noise automatically sucked out of the air is like listening to a magic trick.

Active Noise Cancellation (ANC) on headphones is pretty simple – tiny microphones sample the ambient noise and the headphones generate a phase-inverted copy to cancel it out. Voilà. Instant immersion.
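
For anyone curious about the principle, here is a minimal sketch in Python (using NumPy, with invented signal values) of why a phase-inverted copy cancels the original. Real ANC does this with adaptive filters in dedicated DSP, not offline arrays, so treat this purely as an illustration.

```python
# Minimal illustration of phase-inversion cancellation; values are invented for the example.
import numpy as np

sample_rate = 48000
t = np.arange(0, 0.01, 1 / sample_rate)

ambient = 0.3 * np.sin(2 * np.pi * 100 * t)   # low rumble picked up by the headphone mics
anti_noise = -ambient                         # the same waveform, phase inverted

residual = ambient + anti_noise               # summed at the ear, the two cancel
print(f"Residual peak level: {np.max(np.abs(residual)):.6f}")   # effectively zero
```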

Automatic noise reduction for live broadcast isn’t that simple, but there are plenty of hardware units and software plug-ins which take the edge off. Even so, it is probably best practice to give yourself a head start.

Adding Noise

As we know, lots of things add noise. Environmental sources like aircon, traffic, wind and lighting; electrical interference from power lines; even the equipment itself adds noise, from mic self-noise to cable interference.

Some of these things are unavoidable, and efficient planning and filtering can help mitigate some of the repeat offenders, but they all add to what is known as the noise floor. The noise floor is the accumulation of unwanted noise which is inherent in the signal; the higher the noise floor, the more difficult it will be to distinguish the quieter elements of the audio we actually want to hear.

Analogue equipment introduces far more noise into the signal chain, from its own electrical components or from artifacts added as the signal travels down the input cables. Digital processing is far more forgiving, but the truth is that all electronic equipment produces some noise, and every piece of equipment and additional process in the signal chain will add more of it.
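
As a rough illustration of how that accumulation works, uncorrelated noise sources add on a power basis, so the combined floor is dominated by the loudest contributor. The figures below are invented for the example:

```python
# Hypothetical noise contributions from three stages in a chain, in dBFS.
import math

def db_to_power(db):
    return 10 ** (db / 10)

def power_to_db(power):
    return 10 * math.log10(power)

stage_noise_db = [-120.0, -110.0, -105.0]   # e.g. mic self-noise, preamp, converter (made up)
combined_db = power_to_db(sum(db_to_power(d) for d in stage_noise_db))
print(f"Combined noise floor: {combined_db:.1f} dBFS")   # ~-103.7 dBFS, set mostly by the worst stage
```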

The noise floor is always going to be part of the signal and sensible audio engineers aim to keep the noise floor as low as possible from the start. So now is probably a good time to talk about gain staging.

Gain Staging

Gain staging is about achieving the highest possible signal-to-noise ratio on any given input. In other words, minimising unwanted noise while maximising dynamic range and headroom.

It involves setting the input and output levels of each device in the signal chain, making sure there is enough headroom at each stage so the signal doesn’t distort further down the line. Summing multiple signals can also push levels towards the limit, so leaving enough headroom on each individual signal is an important consideration.
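
As a back-of-the-envelope illustration of why that matters: if several channels all peak at the same level and those peaks happen to coincide, the bus level rises by 20·log10(N). The -18 dBFS figure below is just a common working level, not a rule:

```python
# Worst-case bus peak when N channel peaks coincide; all figures are illustrative.
import math

def db_to_amp(db):
    return 10 ** (db / 20)

def amp_to_db(amp):
    return 20 * math.log10(amp)

channel_peak_db = -18.0   # assumed per-channel peak level
for n in (2, 4, 8, 16):
    bus_peak_db = amp_to_db(n * db_to_amp(channel_peak_db))
    print(f"{n:>2} coincident peaks -> {bus_peak_db:+.1f} dBFS")
# 8 coincident peaks already hit roughly 0 dBFS, i.e. the headroom is gone.
```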

Because of its higher inherent noise, gain staging for analogue equipment is more about hiding the noise floor by boosting the input to maximise the signal; the tendency has always been to run analogue input levels hot to minimise how much of that inherent noise can be heard.

In a digital system, engineers tend to use a different approach. Although a digital signal will likely have a lower noise floor, digital audio also has an absolute level of 0dBFS (decibels relative to Full Scale – we covered this in more detail when we looked at metering in part one of the series). While pushing the limits of an analogue signal can be a stylistic choice, at 0dBFS digital audio runs out of quantizing levels, distorts quite horribly and should be avoided at all costs.
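
To see what running out of quantizing levels means in practice, here is a tiny sketch of hard clipping at full scale, where a normalised digital signal simply cannot represent anything beyond ±1.0 (0dBFS); the signal values are invented:

```python
# A sine wave pushed about 3.5dB over full scale gets its peaks flattened (hard clipping).
import numpy as np

sample_rate = 48000
t = np.arange(0, 0.001, 1 / sample_rate)
too_hot = 1.5 * np.sin(2 * np.pi * 1000 * t)     # ~+3.5dBFS, if such a level could exist

clipped = np.clip(too_hot, -1.0, 1.0)            # the converter has nothing beyond ±1.0 (0dBFS)
print(f"{np.mean(np.abs(too_hot) > 1.0):.0%} of samples were flattened")
```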

Noise Reduction & Signal Repair

Even with the best start, signals will still degrade: mic clipping, wind noise, lossy compression, poor microphone technique, analogue-to-digital conversion, crowd noise, electrical interference, the relentless pace of modern life…even in the most controlled environments we are never actually in control.

Real-time noise reduction and signal repair products give engineers an opportunity to actively reduce or eliminate such noises during live broadcasts at the touch of a button. Which tool to reach for depends on the characteristics of the noise and the desired output, so we’re back to asking what the broadcaster is trying to achieve and using the right tools for the job.

These tools may also affect the overall tonal balance. There is always a trade-off between how aggressive the noise reduction is and its effect on the wider signal, but this can often be mitigated by using the right equipment in the right part of the signal chain.

Available as both hardware units and software plug-ins, these products can solve specific issues. Dedicated hardware processors tend to be more common for real-time noise reduction and signal enhancement in live environments as they offer very low-latency processing.

Software plug-ins for DAWs, meanwhile, are used mainly in post-production to clean up audio for TV or film.

Let’s start with some simple fixes.

All About The De

Many signal repair tools have helpfully descriptive names, which is nice. De-essers, de-poppers, de-noise, de-rustle and de-wind functions all do exactly what you think they might do.

Software like this works by learning where in the frequency spectrum the problem exists and isolating those frequencies with the goal of providing greater intelligibility to voices.

De-essing simply reduces excessive sibilant sounds in vocals by attenuating frequencies in the sibilant range. It does a similar job to a notch filter targeting sibilance around 5kHz to 8kHz, except that it only cuts when sibilance is detected – and it does so at the push of a button.
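
As a rough idea of what goes on under the hood, the sketch below (in Python, using SciPy) detects energy in an assumed 5kHz–8kHz band and dips it only when it gets excessive. The threshold and amount values are hypothetical, and a real de-esser uses far more refined, smoothed per-sample gain:

```python
# Simplified, frame-based de-esser sketch; parameters are illustrative, tune by ear.
import numpy as np
from scipy.signal import butter, sosfilt

def simple_deesser(audio, sample_rate=48000, low_hz=5000, high_hz=8000,
                   threshold=0.05, amount=0.5, frame=256):
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sample_rate, output="sos")
    sibilance = sosfilt(sos, audio)              # isolate the sibilant band for detection
    out = audio.copy()
    for start in range(0, len(audio), frame):
        sl = slice(start, start + frame)
        if np.sqrt(np.mean(sibilance[sl] ** 2)) > threshold:   # too much sibilant energy?
            out[sl] = audio[sl] - amount * sibilance[sl]       # dip just that band, ~6dB here
    return out
```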

Similarly, de-popping can reduce the intensity of “p” and “b” sounds caused by bursts of air hitting the microphone, while de-click and de-crackle algorithms take out noises caused by electrical or digital errors in the audio signal.

In audio post environments, many of these tools are bundled together in software packages which can be accessed in a DAW and remove noise, clicks, hums, sibilance and other artifacts automatically by analysing an audio file and applying a combination of fixes.

Keeping It Live

Noise reduction for live production has the same motivation to preserve dialogue, but it usually occupies a different place in the signal chain. It works by using a combination of algorithms and hardware components to detect noise and apply appropriate filters.

Rack-mounted units are physically located in the audio control room (ACR) or in an outside broadcast truck, although portable 12V units can reduce ambient noise at source, on location with a reporter.

Either way, real-time noise reduction hardware is usually placed before the dynamics/EQ processing to minimise the amount of manual remedial work. If any automix functionality is being applied to the channels – such as might be used to automatically duck microphones on a panel/discussion show – the noise reduction would also sit before it, so that each channel is as clean as possible prior to joining the automix bus; it is far easier to clean the channels first than to make the automix react to frequently changing output levels.
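
In pseudo-code terms, that ordering looks something like the sketch below. The function names are hypothetical placeholders rather than any console’s real API:

```python
# Illustrative processing order for a live chain: clean first, then shape, then automix.
def noise_reduction(channel): ...
def eq(channel): ...
def dynamics(channel): ...
def automix(channels): ...

def live_chain(channels):
    # noise reduction sits first so the EQ, dynamics and automix all react
    # to programme material rather than to the noise floor
    cleaned = [dynamics(eq(noise_reduction(ch))) for ch in channels]
    return automix(cleaned)
```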

It also needs to be lightning fast; not only can it be used to clean up signals for the mix, but it might also be used for in-ear monitoring (IEM) – for example, to remove mic spill on an IEM feed for backing vocalists. For these reasons, near-zero latency is key.

Sometimes you might want some background noise; a cheering crowd may add colour and atmosphere to the broadcast. To enable this, most noise reduction units have separate attenuation and bias controls. An attenuation control sets how much the detected noise is reduced, allowing the engineer to keep some of it in if it adds to the broadcast.

A bias control, meanwhile, affects how much influence the tool has on the signal, and many manufacturers also provide the opportunity to apply these controls across specific frequency bands to allow fine tuning around each voice.
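
A minimal sketch of the attenuation idea, assuming the unit has already produced an estimate of the noise component (the function and parameter names are invented for illustration):

```python
# Remove only part of the estimated noise so some atmosphere survives in the mix.
def apply_attenuation(signal, estimated_noise, attenuation_db=12.0):
    keep = 10 ** (-attenuation_db / 20.0)          # 12dB of attenuation leaves ~25% of the noise
    return [s - (1.0 - keep) * n for s, n in zip(signal, estimated_noise)]
```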

Adapting To The Environment

This technology is improving over time, with algorithms able to track changing levels of background noise and apply different amounts of attenuation at different frequencies to optimise suppression. The ability to adapt to changing environments and apply changes in real time is beneficial wherever it is not possible to control the wider environment (so, you know, everywhere).
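
A rough sketch of that idea, in the style of spectral subtraction: track a slowly changing per-band noise estimate and subtract it frame by frame. This is a textbook simplification with illustrative parameter values, not how any particular product works:

```python
# Adaptive, per-band suppression sketch; update_rate and floor are illustrative values.
import numpy as np

def adaptive_suppress(frames, update_rate=0.05, floor=0.1):
    """frames: an iterable of equal-length, windowed audio frames (NumPy arrays)."""
    noise_mag = None
    for frame in frames:
        spectrum = np.fft.rfft(frame)
        mag = np.abs(spectrum)
        if noise_mag is None:
            noise_mag = mag.copy()
        # slowly track the changing background level in each frequency bin
        noise_mag = (1 - update_rate) * noise_mag + update_rate * mag
        # subtract the tracked noise, keeping a small spectral floor to limit artefacts
        cleaned_mag = np.maximum(mag - noise_mag, floor * mag)
        yield np.fft.irfft(cleaned_mag * np.exp(1j * np.angle(spectrum)), n=len(frame))
```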

Live dialogue noise suppression is not more important than it used to be, but it is more common. With more OTT and OTA channels, podcasts, digital channels and other on-demand services, we’ve never had so much content to choose from. More content in more locations, with more people and more distractions.

We can’t control our environments, but with some knowledge about how audio works, and some help from clever tech, we can at least exert some control over our content.
