Understanding the basics of IP Networking, Part 2
Today's entertainment technology would not be possible without IT-centric networking. Image courtesy DK Technology group
In this look at the potential use of IT solutions in broadcast applications, John Watkinson turns to key issues of bandwidth, latency and compression.
IT packet switches are superficially like broadcast routers in that stuff comes in and stuff goes out, but that’s as far as it goes. Broadcast routers know something about broadcast signals; IT switches wouldn’t know a broadcast signal from a hole in the ground. IT-based networks have to deliver data of all kinds, and they do that by being totally agnostic about what the data represent. That is a key reason IT equipment is less expensive than broadcast equipment: it comes down to market size and economies of scale. The broadcast market is not the driver for the way IT equipment is built. It never has been and never will be. IT equipment is what it is, and if we intend to use it in broadcast applications we have to take it as we find it and discover ways to work around its peculiarities.
That’s not necessarily a bad thing. The IT market decided that the majority of hard drives would be of a certain physical size; then someone had the bright idea of assembling them into arrays, which offered distinct advantages.
Broadcasters follow, not lead, the IT industry.
IT networks require the data to be fitted into packets of a standardized format, so from a content standpoint they all look the same to the network. In that way anybody’s packet can follow anybody else’s packet down a cable, and provided the packets are labelled or numbered, people only get the packets they were expecting and nothing else.
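A toy illustration may help. Below is a minimal Python sketch of the idea, assuming an invented 10-byte header layout (destination, stream label, sequence number) rather than any real protocol; the point is that the network only ever inspects the header, never the payload.

```python
# Toy packet format. The header layout is invented for this example,
# not a real protocol; the network routes on the header alone and is
# totally agnostic about what the payload bytes represent.

import struct

PAYLOAD_SIZE = 188                  # fixed size: every packet looks alike
HEADER = struct.Struct("!IIH")      # dest id, stream label, sequence number

def make_packet(dest: int, stream: int, seq: int, data: bytes) -> bytes:
    assert len(data) <= PAYLOAD_SIZE
    return HEADER.pack(dest, stream, seq) + data.ljust(PAYLOAD_SIZE, b"\0")

# Receivers filter on the label and reorder on the sequence number, so
# anybody's packet can follow anybody else's down the same cable.
pkt = make_packet(dest=42, stream=7, seq=1001, data=b"PCM audio bytes...")
assert len(pkt) == HEADER.size + PAYLOAD_SIZE
```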
Any given link can only send one packet at a time, so packets belonging to everyone else have to wait. Packet multiplexing, which is by definition subject to interruptions, is inconsistent with the constant bit rate required by digital audio and can only be made to work using buffer memory. The buffer at the sending end fills up, and the buffer at the receiving end outputs data, whilst somebody else’s packets are being sent. This works best if both buffers are kept roughly half full, so they can tolerate the greatest swing in packet arrival time in either direction. The bigger the buffers, the more irregularity can be absorbed before death by egg timer occurs, which is a buffer running dry or overflowing. But the presence of the buffers causes delay, or latency. It’s a real swings-and-roundabouts issue. Where the lowest latency is required, small packets and small buffers are needed.
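The trade-off is easy to put numbers on. Here is a minimal sketch, assuming a 48 kHz, 24-bit stereo PCM stream and a receive buffer kept half full; the buffer sizes are illustrative, not drawn from any particular product.

```python
# Minimal sketch: the latency cost of a receive buffer kept half full.
# All figures here are illustrative, not from any real system.

def buffer_latency_ms(buffer_bytes: int, bit_rate_bps: int) -> float:
    """Time to drain a half-full buffer at the stream's constant bit rate."""
    half_full_bits = (buffer_bytes * 8) / 2
    return 1000.0 * half_full_bits / bit_rate_bps

# 48 kHz, 24-bit stereo PCM = 2.304 Mbit/s
pcm_rate = 48_000 * 24 * 2

for size in (4_096, 65_536, 1_048_576):
    print(f"{size:>9} B buffer -> "
          f"{buffer_latency_ms(size, pcm_rate):8.2f} ms added latency")
# ~7 ms for the small buffer, ~114 ms mid, ~1.8 s for the big one
```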
Audio data differ from generic data in that audio data need to be reproduced with a precisely defined time axis. If the sampling rate is wrong, the audio pitch changes. If the sampling clock jitters, the quality deteriorates. IT knows nothing about this. If IT equipment is used to deliver audio data, the next question has to be how the correct sampling clock is to be recreated at the destination. MPEG Transport Streams have that technology: they can recreate a remote clock using Program Clock Reference signals. Unless the audio data are transferred in non-real time to a storage device, something like that is required in an audio network.
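The principle can be sketched in a few lines. The code below is loosely modelled on the Transport Stream mechanism, where a 27 MHz Program Clock Reference counter is sampled into the stream; a real receiver steers a VCXO with a phase-locked loop, whereas this toy just filters an estimate of the remote-to-local clock ratio.

```python
# Hedged sketch of PCR-style clock recovery. Real receivers use a
# hardware oscillator and PLL; this only shows the principle.

PCR_HZ = 27_000_000  # nominal MPEG PCR counter rate

class ClockRecovery:
    def __init__(self, gain: float = 0.05):
        self.ratio = 1.0   # estimated remote/local clock ratio
        self.gain = gain   # loop gain: low = stable, high = fast lock
        self.prev = None   # last (pcr_ticks, local_seconds) pair

    def update(self, pcr_ticks: int, local_seconds: float) -> float:
        if self.prev is not None:
            dp = (pcr_ticks - self.prev[0]) / PCR_HZ   # remote elapsed time
            dl = local_seconds - self.prev[1]          # local elapsed time
            if dl > 0:
                # Nudge the estimate toward the instantaneous ratio.
                self.ratio += self.gain * (dp / dl - self.ratio)
        self.prev = (pcr_ticks, local_seconds)
        return self.ratio

# A remote clock running 50 ppm fast is gradually tracked:
cr = ClockRecovery()
for n in range(200):
    cr.update(int(n * 0.1 * PCR_HZ * 1.00005), n * 0.1)
print(f"estimated ratio: {cr.ratio:.6f}")  # converges toward 1.000050
```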
Multiplexing allows several signals to share a single data stream and then be properly separated at the destination.
Another vital point to grasp is that a multiplexed data stream has finite bandwidth. Even if the multiplexing is ideal, the bandwidth of the data stream is reduced by the need to send addresses, labels and error-checking codes. The bandwidth that is left has to be shared between the different people hoping to send data. In that sense it resembles a freeway. During the Super Bowl you will see no traffic at all except the odd patrol car; on a sunny weekend it will be jammed with people going to the beach. So IT networks are statistical, which means that under some circumstances they may choke.
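The overhead is easy to estimate. The back-of-envelope sketch below uses standard Ethernet, IP, UDP and RTP header sizes; the stream format itself (1 ms packets of 48 kHz, 24-bit stereo, in the style of AES67) is an illustrative assumption.

```python
# Rough sketch: after the "addresses and labels" (packet headers) are
# counted, how much of the wire is left for audio? Header sizes are
# standard; the stream format is an illustrative assumption.

ETH_FRAMING = 38   # preamble 8 + header 14 + FCS 4 + interframe gap 12
IP_HDR, UDP_HDR, RTP_HDR = 20, 8, 12

samples_per_packet = 48                 # 1 ms at 48 kHz
payload = samples_per_packet * 2 * 3    # stereo, 24-bit = 288 bytes

overhead = ETH_FRAMING + IP_HDR + UDP_HDR + RTP_HDR
efficiency = payload / (payload + overhead)
print(f"payload {payload} B, overhead {overhead} B "
      f"-> {efficiency:.1%} of the wire carries audio")
# payload 288 B, overhead 78 B -> 78.7% of the wire carries audio
```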
Clearly that is not acceptable for a broadcast installation. If you go off air, that’s a disaster. If production is stopped, you have people who are being paid to sit around. Steps have to be taken to make sure there is always capacity available so that those packets are never held up. If it’s important enough, your network has to be completely under your control, so you can decide what information is sent through it and when, so that it never chokes. That also allows the best level of security, which is another word IT doesn’t understand. Another approach is Quality of Service (QoS). With QoS, not all packets are equal. Packets about rusty old pick-up trucks are held up while packets in a black limo with motorcycle outriders sweep by.
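At the sending end, all an application can do is ask for the limo. The hedged sketch below marks a UDP socket’s traffic with DSCP Expedited Forwarding, the conventional class for real-time audio; whether the marking is honoured is entirely up to how the switches and routers are configured. The address and port are placeholders.

```python
# Hedged sketch: asking the network for a seat in the "black limo".
# QoS is enforced by the switches and routers, not the sender; all
# the sender can do is mark its packets. Works on Linux/macOS;
# Windows generally ignores IP_TOS set this way.

import socket

DSCP_EF = 46   # Expedited Forwarding, the usual class for live audio
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Every datagram on this socket now carries the EF marking; whether it
# sweeps past the pick-up trucks is the network operator's decision.
sock.sendto(b"audio packet", ("192.0.2.10", 5004))  # placeholder endpoint
```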
Today's compression technology allows smaller transport streams while keeping the output virtually indistinguishable from the original.
The amount of bandwidth required can be reduced by using compression, but that too raises important issues. Firstly, compression works by prediction. The decoder tries to predict what some attribute of the audio waveform will look like. If something novel comes along, that prediction will fail. However, the encoder also contains a decoder, so the encoder knows exactly how the decoder will fail and can send correction data. The decoder adds the correction to its failed prediction and out pops the audio. If the prediction error is sent in its entirety, the decoded signal will be identical to the original and the result is lossless.
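The structure is easy to show in miniature. The sketch below assumes the simplest possible predictor, “the next sample equals the last one”; real codecs predict far more cleverly, but the predict/correct/decode shape is the same.

```python
# Minimal sketch of prediction-based coding with the crudest predictor:
# "the next sample equals the last one".

def encode(samples):
    """Send only the prediction error (residual) for each sample."""
    prediction = 0
    residuals = []
    for s in samples:
        residuals.append(s - prediction)  # how wrong the prediction was
        prediction = s                    # the decoder will know this too
    return residuals

def decode(residuals):
    """Add each correction to the failed prediction: out pops the audio."""
    prediction = 0
    samples = []
    for r in residuals:
        s = prediction + r
        samples.append(s)
        prediction = s
    return samples

pcm = [0, 3, 7, 8, 8, 6, 2]
assert decode(encode(pcm)) == pcm   # full residual sent -> lossless
print(encode(pcm))  # [0, 3, 4, 1, 0, -2, -4]: small numbers, fewer bits
```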
In practice, lossless compression does not achieve a very high compression factor, so instead not all of the prediction error is sent, and the decoded signal is no longer an exact replica of the original. One of the early tenets of digital audio was that generation loss could be eliminated by cloning the data. Lossy compression brings us right back to the analogue days of generation loss: every time a signal passes through a lossy codec, it gets a little worse. This is progress?
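Reusing the sketch above, discarding part of the prediction error is as simple as coarsening the residuals. Note that real lossy coders quantize inside the prediction loop so that errors cannot drift, a refinement this toy omits.

```python
# Lossy variant of the sketch above: coarsen the residuals before
# "sending" them and the replica is no longer exact. (Real codecs
# quantize inside the prediction loop to keep errors from drifting.)

def lossy_encode(samples, step=4):
    return [step * round(r / step) for r in encode(samples)]

print(pcm)                        # [0, 3, 7, 8, 8, 6, 2]  the original
print(decode(lossy_encode(pcm)))  # [0, 4, 8, 8, 8, 8, 4]  close, not identical
```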
Another problem that rears its head with compression is that it requires extraordinarily good loudspeakers to be used for monitoring. The reason is that cheap loudspeakers act like lossy compressors, in that they remove some of the information in the signal. Traditionally, the production process was performed with high-quality speakers and the result was then losslessly stored and delivered. It was not necessary to have quality speakers to check the router, because it was enough to know the signal was there. With compression in the network, the signal might be impaired, and with cheap speakers no one will hear it until it is too late.
It should be obvious that the better the ability of encoder and decoder alike to predict, the better the compression factor that can be achieved. However, prediction requires the system to be able to look ahead. As we lack clairvoyance software, the look-ahead has to be done by delaying the signal. It follows that a high compression factor goes hand in hand with high latency: heavy compression and real-time operation are mutually exclusive. Another point that needs to be made is that the saving on IT router cost from the use of compression may well be eclipsed by the cost of all the codecs. It’s important to look at the big picture.
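The arithmetic is direct: every sample of look-ahead is a sample of delay. The frame and look-ahead sizes below are invented, but typical in magnitude for perceptual audio codecs.

```python
# Illustrative arithmetic: look-ahead shows up directly as latency.
# The frame and look-ahead sizes are made-up but typical in magnitude.

def codec_delay_ms(frame_samples, lookahead_samples, fs=48_000):
    return 1000.0 * (frame_samples + lookahead_samples) / fs

print(f"low-latency codec: {codec_delay_ms(120, 0):6.1f} ms")      #  2.5 ms
print(f"high-ratio codec:  {codec_delay_ms(1024, 2048):6.1f} ms")  # 64.0 ms
```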
IT equipment may be less expensive than a single-purpose broadcast solution, but it’s a mixed blessing, and there is no single solution to everyone’s problems. Factors affecting the choice include the physical size of the network, what signal-damage risks it runs, and what probability of failure is acceptable. Audio network designers also need to consider the level of security. Is real-time operation required and, if not, how much latency is acceptable? Finally, what sound quality is required, and does a sampling clock need to be remotely reconstructed? These are the key questions that need to be answered well before purchase decisions are made.