Audio For Broadcast: Analog Vs Digital
The basic principles of how sound works, tackled from a contextual perspective through a discussion of the evolution from analog to digital audio technology.
All 16 articles in this series are now available in our free 78 page eBook ‘Audio For Broadcast’ – download it HERE.
This is the story of how broadcasters made the shift from analog to digital audio. It is arguably the most emotive shift in broadcast production in recent years, and it has transformed the audio equipment that is used in modern broadcast workflows.
Some people will tell you that once upon a time everything made sense and that everything was analog. Processes were linear; signals would go in, they would get tweaked and grouped, and signals would go out. Everything was routed with physical patch bays and signals could be traced by following an actual cable.
All sound is analog. Sound is just the vibration of air particles; it is a continuous process which allows the ear to pick up every slight change in pressure with no bandwidth limitations. Bandwidth is the range of frequencies covered in a continuous band, and an average human can perceive frequencies from 20Hz to 20kHz. But analog audio isn’t bound just by what we can hear – dogs, for example, can hear up to 60kHz – and analog manipulation of those sounds is just as unconstrained.
Recorded and broadcast sound converts the vibration of air particles into electronic signals for transport and manipulation, and for many decades the broadcast production chain used analog circuitry to carry a continuous representation of the sound that was captured. Microphones convert the sound into an electronic signal, the audio console manipulates that signal across the same frequency range, and then outputs it again.
In the early 1980s the introduction of hybrid broadcast consoles provided the ability to digitally control analog signal paths – memories, snapshots and automation enabled audio engineers to be more flexible, and while the signal path was still analog, the seeds were sown. Digital audio for live broadcast wasn’t far away, and as digital television became a focus for broadcasters in the 1990s, digital tools became more common.
There Are Only 10 Types Of People In The World
There’s a huge amount of technical theory on digital audio, but let’s keep things very simple.
Unlike that unconstrained analog audio, digital audio is an approximation of the sound rather than the full, flowing, continuous range. Sound information is sampled at fixed points on the sound wave, and to ensure a faithful representation the sampled audio bandwidth has to be restricted.
At any moment in time the value of an analog signal can be measured precisely, and digital audio is created by periodically sampling the incoming analog signal. How often these samples are taken over time is referred to as the sample rate. A sample rate of one hertz means one sample is taken per second, and audio sample rates are measured in kHz; CDs use a sample rate of 44.1 kHz, broadcast digital audio tends to operate at 48 kHz, and HD audio is commonly assumed to be 96 kHz (but at any rate must be higher than 44.1 kHz).
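To make that concrete, here is a minimal Python sketch of what periodic sampling means; the 48 kHz rate and 1 kHz test tone are illustrative values rather than anything prescribed by a broadcast standard.

```python
# A minimal sketch of periodic sampling: a 1 kHz sine wave captured
# at the 48 kHz rate common in broadcast. Values are illustrative.
import math

SAMPLE_RATE = 48_000   # samples per second (48 kHz)
FREQUENCY = 1_000      # a 1 kHz test tone
DURATION = 0.001       # capture one millisecond of audio

num_samples = int(SAMPLE_RATE * DURATION)  # 48 samples in 1 ms

# Each sample is the instantaneous value of the continuous wave
# at a fixed point in time (n / SAMPLE_RATE seconds).
samples = [
    math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    for n in range(num_samples)
]

print(f"{num_samples} samples represent 1 ms of a {FREQUENCY} Hz tone")
```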
Digital systems are binary. They store information in strings of zeros and ones, and it is the job of an analog to digital convertor (an A/D or ADC) to convert that signal. The maximum length of the string for each sample determines the total amount of information that can be stored for the sample – which is referred to as the bit depth or word length. A 16-bit sample can represent 65,536 discrete values, whereas a 24-bit sample can represent 16,777,216. A 24-bit sample can therefore contain a far higher resolution representation of the audio signal. The main effect of this in digital audio is that a 16-bit sample has a maximum dynamic range of 96dB whereas a 24-bit sample theoretically has a maximum dynamic range of 144dB. In reality, audio converters commonly found in today’s technology cannot achieve 144dB of dynamic range; 120dB is more realistic with a good quality converter.
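Those dynamic range figures fall out of simple arithmetic: each extra bit doubles the number of representable values, adding roughly 6dB. This short, illustrative Python sketch shows the calculation.

```python
# Each extra bit doubles the number of representable values, adding
# roughly 6 dB of dynamic range (20 * log10(2) is about 6.02 dB per bit).
import math

for bits in (16, 24):
    levels = 2 ** bits                       # discrete values per sample
    dynamic_range = 20 * math.log10(levels)  # theoretical maximum, in dB
    print(f"{bits}-bit: {levels:,} values, ~{dynamic_range:.0f} dB")

# Output:
# 16-bit: 65,536 values, ~96 dB
# 24-bit: 16,777,216 values, ~144 dB
```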
Bit depth should not be confused with bit rate, which refers to the number of bits transmitted per second; for uncompressed audio this is simply sample rate × bit depth × channel count, so a 48 kHz, 24-bit stereo stream runs at around 2.3 Mbps.
Whatever is left out at this stage can’t be added in later, which means any digital signal is only as good as the A/D conversion from the original analog sound.
In order to recreate the signal it must be passed through a digital to analog convertor (a D/A or DAC) at the other end. The Nyquist theorem, named after Swedish-born engineer Harry Nyquist, states that if you sample at more than twice the maximum frequency of the signal being sampled, the DAC can render an output waveform identical to the input waveform. So, if a sampled audio system is required to carry signals up to what a human can hear - 20kHz - the sampling rate must be at least 40kHz.
This explains why digital audio has to have a restricted bandwidth, and also why those poor dogs aren’t enjoying broadcast content as much as they could be.
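A short Python sketch shows what goes wrong when that limit is ignored: a tone above half the sample rate produces samples indistinguishable (give or take a sign flip) from a lower alias frequency, which is why the audio must be band-limited before conversion. The frequencies here are illustrative.

```python
# An illustrative sketch of aliasing: a tone above half the sample rate
# produces samples that match a lower "alias" frequency, so it must be
# filtered out before A/D conversion.
import math

SAMPLE_RATE = 48_000
NYQUIST = SAMPLE_RATE / 2     # 24 kHz: the highest representable frequency

tone = 30_000                 # a 30 kHz tone, above the Nyquist limit
alias = SAMPLE_RATE - tone    # it will masquerade as 18 kHz

# The sampled values of the 30 kHz tone and an 18 kHz tone are identical
# apart from sign - the converter cannot tell them apart.
for n in range(4):
    t = n / SAMPLE_RATE
    print(f"n={n}: 30 kHz -> {math.sin(2*math.pi*tone*t):+.4f}   "
          f"18 kHz -> {math.sin(2*math.pi*alias*t):+.4f}")
```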
Digital Broadcast Consoles
Digital audio provided an opportunity for sound designers to achieve much more in broadcast, and early adopters began to install digital broadcast consoles into audio control rooms in the 1990s.
They were flexible, they benefited from cumulative software updates, they were more powerful, and they had huge I/O matrices. They also reduced installation costs by using less cabling, and once signals had been digitised, they remained in the digital domain throughout the production chain, which made for easier integration with digital video systems.
Once in that environment, Digital Signal Processing (DSP) is used to manipulate those signals in real time. By this point the signals are all numbers – ones and zeros – and the DSP manipulates them all mathematically.
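At its simplest, that mathematics is just multiplication and addition on sample values. The Python sketch below is a toy, not console code, but it shows a gain change and a mix bus reduced to their arithmetic essentials.

```python
# A minimal sketch of DSP as arithmetic on sample values: a gain change
# is a multiplication, and mixing two sources is an addition. Real
# console DSP is far more sophisticated; this shows only the principle.

def apply_gain(samples: list[float], gain_db: float) -> list[float]:
    """Scale every sample by a gain expressed in decibels."""
    factor = 10 ** (gain_db / 20)          # convert dB to a linear multiplier
    return [s * factor for s in samples]

def mix(a: list[float], b: list[float]) -> list[float]:
    """Sum two signals sample by sample, as a mix bus does."""
    return [x + y for x, y in zip(a, b)]

vocal = [0.10, 0.20, -0.15, 0.05]          # toy sample values
bed = [0.02, -0.03, 0.04, -0.01]

print(mix(apply_gain(vocal, +3.0), bed))   # boost the vocal 3 dB, then mix
```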
The good thing about DSP is that it is endlessly adaptable; its firmware can be programmed to do very specific jobs, which enables designers to develop new features and build more value into a product in a relatively short space of time.
For many years most commercially available consoles used the same off-the-shelf floating-point chips for DSP, and as capacity increased so did the number of chips required to process channels. As broadcast mixes became more complex, more chips were required, with more PCBs, more backplane activity, and greater potential for on-air failure.
As broadcasters prepared for HD, the onset of televised 5.1 surround sound multiplied the number of required audio channels further still. Now, for every two-channel stereo input, broadcasters needed to provide a six-channel 5.1 input, with a stereo downmix for legacy formats.
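That stereo downmix is itself a small piece of DSP arithmetic. The sketch below uses the widely cited coefficients from the ITU-R BS.775 recommendation (centre and surround channels attenuated by 3dB, LFE discarded); it is a hedged illustration, and broadcasters may apply different coefficients in practice.

```python
# A sketch of a 5.1-to-stereo downmix using ITU-R BS.775-style
# coefficients: centre and surrounds attenuated by 3 dB, LFE dropped.
# Actual broadcast downmix coefficients can differ.
import math

ATT = 1 / math.sqrt(2)   # -3 dB, approximately 0.707

def downmix_51(l, r, c, lfe, ls, rs):
    """Fold one 5.1 sample frame (L, R, C, LFE, Ls, Rs) down to stereo."""
    left = l + ATT * c + ATT * ls
    right = r + ATT * c + ATT * rs
    return left, right    # the LFE channel is discarded in this scheme

# One frame of toy sample values for the six channels:
print(downmix_51(0.2, 0.1, 0.3, 0.05, 0.1, -0.1))
```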
Open The Gates
Conventional DSP systems were limited by the number of signals passing between DSP cards, which fought for space on the backplane along with I/O; they were constrained by backplane speed and took up ever more space.
Broadcast was a grateful adopter of Field Programmable Gate Arrays (FPGAs), and broadcasters still use them for DSP processing today. FPGAs are blank chips which provide a canvas on which to create processing structures that perform very specific tasks. FPGAs meant that processing power could be tailored to exceed the channel counts which were possible with traditional DSP chips. It was a step change.
It also changed the way people thought about DSP for audio; FPGAs don’t impose any limitation on bit depth, which gave some manufacturers the opportunity to select the number format (the bit depth) to match the level of performance required by each function.
We’re Going To Need A Bigger Desk
This increase in capacity in turn drove the design of broadcast audio consoles, as the worksurface became the bottleneck and the ability to control and manage such a huge number of channel inputs became the limiting factor. Digital architecture gave broadcasters the ability to deliver more immersive, more involving content – and more of it – and hardware UI design adapted to fit these bigger workloads.
For large-scale broadcast audio processing, FPGAs are still the most efficient way to do things, using either on-premises or edge hardware. But as cloud workflows become more acceptable and the benefits become more tangible, expect things to change again.
Analog audio still has its fans, and as any visit to an online audio forum will show you, they are vociferous. The irony is that most modern output is a combination of the two – even if something has been recorded and mastered in a fully analog workflow, it’s most likely being streamed and listened to in a digital format, at whatever bit rate it has been converted to.
As consumers we’ve traded quality for convenience, and digital audio has allowed all that to happen.