Understanding IP Broadcast Production Networks: Part 7 - Timing
How the introduction of PTP addresses the critical challenges of timing in IP networks and brings additional flexibility to broadcast infrastructure.
Broadcast has timing intrinsically built into its signal paths. For example, analog PAL and NTSC have field and line sync pulses to synchronize the scanning process in cathode ray tubes, and color sub-carrier bursts synchronize the flywheel oscillator to lock the color demodulation frequency. SDI and AES embed the clock within the data stream itself so that the receiver can lock to the sample clock.
Clock synchronization is extremely important in both synchronous and asynchronous digital television systems. The problem we are trying to solve is to keep the encoder and decoder sample clocks at the same frequency and in phase. If we do not do this, then one clock will run faster than the other resulting in either too many or too few samples reaching the decoder.
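To put numbers on this, the short sketch below (an illustrative calculation only, with assumed figures) estimates how quickly a receive buffer runs dry or overflows when the decoder clock is just a few parts per million adrift of the encoder clock.

```python
# Illustrative calculation: how quickly unsynchronized clocks cause sample loss.
# The figures below are assumed examples, not taken from any standard.

sample_rate_hz = 48_000        # nominal audio sample rate at encoder and decoder
clock_error_ppm = 10           # decoder oscillator error of 10 parts per million
buffer_depth_samples = 2_400   # receive buffer holding 50 ms of audio

# The decoder consumes samples slightly faster (or slower) than they arrive.
drift_samples_per_second = sample_rate_hz * clock_error_ppm / 1_000_000

# Time until the buffer has completely emptied (or filled) and samples are lost.
seconds_to_failure = buffer_depth_samples / drift_samples_per_second
print(f"Drift: {drift_samples_per_second:.2f} samples/s, "
      f"buffer under/overrun after roughly {seconds_to_failure / 60:.0f} minutes")
```

Even a modest 10 ppm error empties a 50 ms buffer in under an hour and a half, which is why the clocks must be locked rather than merely close.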
Lost samples of data in uncompressed signals will cause an instantaneous audio splat or the loss of a video pixel. In compressed systems the effect could be much worse, as forward and backward prediction between frames can turn a single error into a prolonged disturbance.
Broadcasters have gone to great lengths to provide master clock referencing for both audio and video in the form of master sync pulse generators.
Although Ethernet line coding lets each receiver recover a clock from the incoming data, the clocks are not synchronized between network interface cards (NICs), so we cannot use this as a form of global synchronization.
GPS has been used in the past to lock encoders and decoders; however, it has proved impractical where the signal path has no line of sight to a satellite.
Precision Time Protocol (PTP), standardized as IEEE 1588, was developed to address the issue of network timing. PTP was designed as a standard for many different industries, and as it can provide sub-microsecond accuracy, it lends itself well to broadcast television.
PTP works in a master-slave topology. One server or customized device is nominated as the master clock, and all other devices within the subnet synchronize to it, forming a network of synchronized servers.
Although the protocol can run over any router without modification, some configuration work must be done to give the timing packets the fastest, lowest-delay path through the network. Network engineers achieve this by setting quality of service (QoS) policies in the routers for specific types of packet, using a form of rate shaping that gives forwarding priority to the timing messages.
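As a rough illustration of what that prioritization means at the packet level, the sketch below marks outgoing PTP event traffic with a high-priority DSCP value so that QoS-aware switches and routers can place it in an expedited queue. The DSCP value, multicast address and port number are the commonly used conventions for PTP over UDP; an actual deployment would follow the network's own QoS policy.

```python
import socket

# Sketch: mark PTP event traffic with a high-priority DSCP value so that
# QoS-aware switches and routers forward it ahead of bulk media traffic.
# DSCP EF (46) is a common choice; follow your own network's QoS design.

PTP_PRIMARY_MULTICAST = "224.0.1.129"   # default PTP multicast address
PTP_EVENT_PORT = 319                    # UDP port for PTP event messages
DSCP_EF = 46                            # Expedited Forwarding

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The IP TOS byte carries the DSCP value in its upper six bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# A real PTP stack would now build and send Sync and Delay_Req messages;
# here the socket is simply prepared so anything it sends is prioritized.
```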
The time difference between the master and slave clocks consists of two components: the clock offset and the message transmission delay. To align the slave with the master, synchronization is achieved in two parts: offset correction and delay correction.
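The arithmetic behind these two corrections is straightforward. The master stamps the moment a Sync message leaves it (t1), the slave stamps its arrival (t2), the slave stamps the Delay_Req message it sends back (t3), and the master reports when that request arrived (t4). The sketch below shows the standard calculation, on the usual assumption that the path delay is the same in both directions:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP offset and delay calculation (all times in nanoseconds).

    t1: master timestamp when the Sync message left the master
    t2: slave timestamp when the Sync message arrived at the slave
    t3: slave timestamp when the Delay_Req message left the slave
    t4: master timestamp when the Delay_Req message arrived at the master

    Assumes the network delay is symmetrical in both directions.
    """
    master_to_slave = t2 - t1
    slave_to_master = t4 - t3
    offset = (master_to_slave - slave_to_master) / 2   # slave clock minus master clock
    mean_path_delay = (master_to_slave + slave_to_master) / 2
    return offset, mean_path_delay


# Illustrative timestamps: the slave runs 500 ns ahead of the master and the
# one-way network delay is 2,000 ns.
offset, delay = ptp_offset_and_delay(t1=1_000_000, t2=1_002_500,
                                     t3=1_010_000, t4=1_011_500)
print(offset, delay)   # 500.0 2000.0
```

The slave then subtracts the offset from its own clock and uses the mean path delay to compensate for the time the Sync message spent crossing the network.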
The master clock should be a very accurate generator providing a 1GHz clock, either locked to GPS or deriving its timing from an oven-controlled oscillator, in a similar way to a broadcast sync pulse generator. Established manufacturers of SPGs are now including PTP clock outputs on their products.
In a similar way to Unix time systems, PTP uses the concept of an epoch clock. This is the absolute moment at which the clock was set to zero; the number of 1GHz clock pulses that have occurred since then provides the current time, which software converts into human readable year, month, day, hours, minutes, and seconds. The epoch (or zero time) for PTP was set at midnight on 1st January 1970.
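As an illustration of how such a count becomes a human readable date, the sketch below converts a nanosecond count since the PTP epoch into a calendar date and time of day. (Strictly, PTP runs on the TAI timescale and so sits a fixed number of seconds ahead of UTC; that detail is ignored here for clarity.)

```python
from datetime import datetime, timedelta, timezone

# Sketch: turn a count of nanoseconds since the PTP epoch (midnight,
# 1 January 1970) into a human readable date and time. Real PTP time is
# TAI-based and runs a fixed number of seconds ahead of UTC; that offset
# is ignored in this simplified example.

PTP_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def ptp_to_datetime(nanoseconds_since_epoch: int) -> datetime:
    seconds, nanoseconds = divmod(nanoseconds_since_epoch, 1_000_000_000)
    # timedelta resolves to microseconds, so sub-microsecond detail is
    # rounded away in this simple conversion.
    return PTP_EPOCH + timedelta(seconds=seconds, microseconds=nanoseconds / 1_000)

# Example: 1,700,000,000 seconds after the epoch.
print(ptp_to_datetime(1_700_000_000 * 1_000_000_000))
# 2023-11-14 22:13:20+00:00
```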
As PTP uses a 1GHz master clock, the granularity of the slave clock can be as fine as 1 ns. The clock should be thought of as an event clock or presentation time clock rather than an absolute pixel count.
Software timing is notoriously unpredictable, which is why manufacturers have kept to hardware solutions for time-critical processing such as video playout. When PTP masters create timing packets and slaves receive them, the timestamp should be inserted by specially designed network interface cards at the Ethernet layer. If it were inserted by the software stack, jitter would occur due to the unpredictable interactions of the operating system and software stacks.
When video frames are sent over the network, some will arrive ahead of their display time and some behind. Buffers smooth this out, and the internal presentation software makes sure each frame is constructed before the next field pulse comes along. In effect, the frame pulses are synchronized by PTP, so the frame rate of the receiver is locked to that of the encoder.
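One way to picture this is that every receiver can derive exactly the same frame boundaries from PTP time, because frame periods can simply be counted from the epoch. The sketch below is a simplified illustration of that idea, not a standards-compliant alignment calculation:

```python
from fractions import Fraction
import math

def next_frame_boundary_ns(ptp_time_ns: int, frame_rate: Fraction) -> int:
    """Return the PTP time, in nanoseconds, of the next frame boundary.

    Frame boundaries are assumed to be counted from the PTP epoch, so every
    device that knows the frame rate derives exactly the same boundaries.
    Simplified illustration only.
    """
    frame_period_ns = Fraction(1_000_000_000) / frame_rate
    frames_elapsed = math.floor(Fraction(ptp_time_ns) / frame_period_ns)
    return math.ceil((frames_elapsed + 1) * frame_period_ns)

# Example at 50 frames per second: any two receivers calling this with the
# same PTP time agree on the next boundary, so their frame pulses are locked.
boundary = next_frame_boundary_ns(ptp_time_ns=123_456_789_000,
                                  frame_rate=Fraction(50))
print(boundary)   # 123460000000, i.e. the next 20 ms boundary
```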
The benefits of this method of synchronization go beyond video and audio playout. PTP now provides us with a predictable event clock, so we can trigger events in the future instead of relying on centralized cues. If a regional opt-out for ads were to occur in a schedule at 19:26:00hrs, the remote playout servers would be able to switch the program at 19:26:00hrs to play the regional ads within a timeframe of 1 ns. If the schedule database is correctly replicated to all the regional playout servers, we no longer have to rely on cue tones and in-vision prompts to provide opt-outs.
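As a rough sketch of that idea, the code below compares a PTP-derived clock against a scheduled switch time and fires the opt-out when it is reached. The clock source and the switch_to_regional_ads() callback are hypothetical placeholders rather than a real playout API, and a real server would arm a precise hardware timer rather than poll.

```python
import time
from datetime import datetime, timezone

def ptp_now_ns() -> int:
    # Placeholder for reading a PTP-disciplined clock, approximated here by
    # the system clock in nanoseconds since the epoch.
    return time.time_ns()

def switch_to_regional_ads() -> None:
    # Hypothetical hook into the local playout engine.
    print("Switching to regional ad break")

def run_opt_out(switch_at: datetime) -> None:
    """Wait until the scheduled time arrives, then trigger the regional opt-out."""
    target_ns = int(switch_at.timestamp() * 1_000_000_000)
    while ptp_now_ns() < target_ns:
        time.sleep(0.001)   # a real implementation would arm a hardware timer
    switch_to_regional_ads()

# Every regional server holding the same schedule entry fires at the same
# instant, for example at 19:26:00 on the day in question (illustrative date).
run_opt_out(datetime(2024, 6, 1, 19, 26, 0, tzinfo=timezone.utc))
```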
The PTP protocol allows master and slave devices to be daisy-chained together, so a slave device can become the master for another subnet. In this way, entire LANs and WANs can be synchronized together, allowing broadcast devices to accurately switch and mix between sources.
In traditional analog and SDI studios there tended to be just one timing plane for the video: the production switcher. If multiple production switchers were used, then video synchronizers had to be employed to provide another timing reference. PTP removes this need, as the timing plane is essentially the same throughout the entire network because all slaves and masters become synchronous.
A new timing dimension has been brought to broadcast television.