Practical High Dynamic Range (HDR) Broadcast Workflows - Part 2

There is a school of thought that suggests increasing the brightness through the contrast control on a television will deliver a higher dynamic range. However, this does not necessarily increase the contrast ratio. Quantization noise is the enemy of dynamic range, and increasing brightness in a system with a low bit depth makes quantization banding obvious. When banding occurs, either the brightness must be turned down to remove it or the bit depth must be increased.
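As a rough illustration of the bit depth argument (a minimal Python sketch, with a hypothetical gain factor standing in for the brightness control), the snippet below quantizes a smooth luminance ramp at 8 and 10 bits and reports the largest step after a brightness boost. That step is what the eye perceives as banding.

```python
import numpy as np

# A smooth 0..1 luminance ramp, e.g. a graded sky.
ramp = np.linspace(0.0, 1.0, 4096)

def quantize(signal, bits):
    """Round the signal to the nearest code value at the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

gain = 2.0  # crude stand-in for turning the brightness up
for bits in (8, 10):
    boosted = quantize(ramp, bits) * gain
    step = np.max(np.diff(boosted))  # largest jump between adjacent samples
    print(f"{bits}-bit: largest step after boost = {step:.5f}")
```

The 8-bit step is four times the 10-bit step, so a boost that leaves 10-bit quantization invisible can still make 8-bit banding obvious.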




Enhancing Immersion With Wide Color Gamut (WCG)

HDR also embraces wider color gamuts. Rec.709 is based on the sensor and monitor capabilities of the 1960s, when color television was first mooted, and was later adapted for HD colorimetry around 1993. Technology has now advanced and much wider color gamuts are available, leading to the Rec.2020 color space. The greens are particularly extended, resulting in an enhanced viewing experience. This is especially useful for sports events played on grass.

Making HDR work in real-world broadcast workflows always requires compromises, and to provide greater choice there are essentially two versions of HDR, both defined in BT.2100: Hybrid Log Gamma (HLG) and Perceptual Quantizer (PQ). Both have their merits and provide optimized solutions for two different workflows.

HLG was designed to provide the fastest possible migration from SDR to HDR for broadcasters, and it satisfies two fundamental requirements for them: it is scene-referred and it is backwards compatible with SDR.

Non-linear HVS Characteristics

The De Vries-Rose law demonstrates that the threshold of visibility of banding becomes higher as the picture gets darker. In other words, in the low lights it becomes increasingly difficult to see banding. Weber's law tells us that the threshold for perceiving quantization in the highlights rises approximately linearly with luminance, that is, a constant Weber fraction.
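Expressed as formulas (a textbook statement of the two regimes, added here for clarity rather than quoted from any broadcast standard), the just-noticeable luminance difference behaves as:

```latex
% De Vries-Rose regime (shadows): threshold rises with the square root of luminance
\Delta L_{th} \propto \sqrt{L}
\quad\Rightarrow\quad
\frac{\Delta L_{th}}{L} \propto \frac{1}{\sqrt{L}}

% Weber regime (highlights): constant Weber fraction
\frac{\Delta L_{th}}{L} \approx k
```

In the De Vries-Rose regime the relative threshold grows as luminance falls, which is why banding is harder to see in the shadows; in the Weber regime equal luminance ratios are equally visible, which is exactly what a logarithmic curve quantizes uniformly.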

The original CRTs were relatively dim, with an output of around 100 cd/m2, and the gamma curves chosen for them closely match the De Vries-Rose law. Although banding didn't exist in the early days of broadcasting, as the signal processing was in the analog domain, noise certainly did, and it is this noise that gamma correction helps minimize.

This gives us two distinct curves: gamma for low lights and shadows, and logarithmic for highlights. The term "hybrid" in HLG refers to the joining of these two functions to provide the best of all worlds for both SDR and the extended highlights of HDR.
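The hybrid is visible in the BT.2100 HLG OETF itself: below 1/12 of normalized scene light it is a square-root (gamma 0.5) curve, and above that a logarithmic one. A minimal Python sketch of the published formula:

```python
import numpy as np

# BT.2100 HLG OETF constants.
A = 0.17883277
B = 1.0 - 4.0 * A              # 0.28466892
C = 0.5 - A * np.log(4.0 * A)  # 0.55991073

def hlg_oetf(e):
    """Map normalized scene light E (0..1) to a normalized HLG signal E'."""
    e = np.asarray(e, dtype=float)
    log_part = A * np.log(np.maximum(12.0 * e - B, 1e-12)) + C  # guard keeps log defined
    return np.where(e <= 1.0 / 12.0, np.sqrt(3.0 * e), log_part)

# Both segments meet at E = 1/12, where E' = 0.5: the SDR/HDR "knee".
print(hlg_oetf([0.0, 1.0 / 12.0, 0.5, 1.0]))  # -> [0.0, 0.5, ~0.87, ~1.0]
```

The square-root segment matches the shadow behavior described by De Vries-Rose, while the logarithmic segment quantizes the extended highlights at a roughly constant Weber fraction.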

Scene-referred is the working practice broadcasters have been using since the inception of television. It makes the fundamental assumption that we have no control over the viewer's television and have to provide an image that can be displayed on many different types of screen, all of which are assumed to be broadly similar.

Figure 1 – SDR and HDR HLG are similar up to about 0.6 of the input luminance level, after which the HLG curve continues to sympathetically compress the highlights. This helps maintain backwards compatibility with SDR.


Images are shaded (racked) for a reference monitor in the studio, and it is assumed this will give the best image for home viewers. SDR monitor luminance outputs have increased over the years, and levels of 100 cd/m2 to 200 cd/m2 are readily available. Although the average picture levels will have changed and there will be some differences in relative areas of gray, higher-output televisions still provide a picture similar to that of the studio reference monitor.

HLG HDR has a relatively linear curve in the 100 to 200 cd/m2 range that is similar to the SDR Rec.709 gamma specification; HLG then extends the Rec.709 curve to capture specular highlights. Consequently, it provides some backwards compatibility between HDR and SDR images. Again, this is a compromise and there are some differences, but in general the system works well. This is particularly important for live events, as production equipment is mostly backwards compatible with existing workflows.

HLG Advantages for Live

Backwards compatibility has even more advantages in the studio or outside broadcast truck, as existing SDR monitors can be used to display the HLG images. There will still need to be an HDR reference monitor with appropriate waveform and vectorscope monitoring, but standard SDR monitors can be used by the rest of the crew.

PQ takes the view that the viewer's display device has its own unique properties and display characteristics. The particular application for this type of system is high-end production and movies, as the whole system has its roots in cinema. Companies such as Dolby have developed PQ to preserve the artistic intent of the creatives who made the production while at the same time maintaining compatibility with other types of viewing device.

For example, if a movie was graded on a 1,000 cd/m2 reference monitor in the edit suite but was to be viewed on both a 500 cd/m2 and a 2,000 cd/m2 television at home, why not just apply the HLG techniques discussed earlier and maintain a good compromise between the two? The key is understanding the definition of "artistic intent" in this context: if the DoP wants the face of the actor to be at 70 cd/m2, then it should be at 70 cd/m2 on both screens.

PQ Movie Quality

This beautifully illustrates the true power of HDR. Although both screens show the actor's face at 70 cd/m2, the 1,000 cd/m2 television has significantly more "luminance headroom" than the 500 cd/m2 television in which to display specular and point highlights. HDR is not about cranking up the metaphorical "luminance volume"; it is about providing the best immersive experience. Indeed, simply increasing the overall brightness of a 1,000 cd/m2 display would have a seriously detrimental effect on the viewer.
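PQ can behave this way because its EOTF, defined in SMPTE ST 2084 and carried into BT.2100, maps signal values to absolute luminance in cd/m2 rather than to a fraction of whatever the display happens to produce. A minimal Python sketch of the published formula:

```python
import numpy as np

# SMPTE ST 2084 (PQ) EOTF constants.
M1 = 2610 / 16384       # 0.1593017578125
M2 = 2523 / 4096 * 128  # 78.84375
C1 = 3424 / 4096        # 0.8359375
C2 = 2413 / 4096 * 32   # 18.8515625
C3 = 2392 / 4096 * 32   # 18.6875

def pq_eotf(signal):
    """Map a normalized PQ signal (0..1) to absolute luminance in cd/m2."""
    v = np.power(np.asarray(signal, dtype=float), 1.0 / M2)
    return 10000.0 * np.power(np.maximum(v - C1, 0.0) / (C2 - C3 * v), 1.0 / M1)

# The same code value decodes to the same luminance on any compliant display,
# so a 70 cd/m2 face stays at 70 cd/m2 whether the panel peaks at 500 or 2,000.
print(pq_eotf([0.0, 0.5, 1.0]))  # -> [0.0, ~92, 10000.0]
```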

Figure 2 – The HDR workflow follows this model to help describe the interaction of the various stages. OETF is the opto-electrical transfer function (camera to video signal), EOTF is the electro-optical transfer function (video signal to display), and OOTF is the optical-to-optical transfer function, that is, the behavior of the whole system.

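In code terms the model is simply function composition; a sketch with hypothetical oetf and eotf callables (note that for HLG, BT.2100 also applies a system gamma of about 1.2 at a nominal 1,000 cd/m2 peak as part of the OOTF, which this sketch omits):

```python
def ootf(scene_light, oetf, eotf):
    """Whole-system behavior: scene light -> signal (OETF) -> displayed light (EOTF)."""
    return eotf(oetf(scene_light))
```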

If, in the example above, the sequence was shot at night after it had been raining, there would be very narrow, highly intense specular highlights as streetlamps reflect off the still rainwater. On the 500 cd/m2 television these would lose definition and become more of a blob than a specular highlight, but on the 1,000 cd/m2 television they would be shown in all their glory, further enhancing the immersive experience of the movie.

PQ Metadata Improvements

The principal reason for metadata is to give the end display device the information it needs to convert the PQ image to the level of brightness it can achieve. It can use an industry preset, but this is a significant compromise and negates the whole reason for having PQ in the first place.
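HDR10, for example, carries static metadata such as MaxCLL (maximum content light level) and MaxFALL (maximum frame-average light level), both in cd/m2, which the display can use to plan its conversion. A minimal sketch of how such figures are derived, assuming frames arrive as arrays of per-pixel luminance in cd/m2 (real encoders follow CTA-861.3, which works on the maximum of the R, G and B components per pixel):

```python
import numpy as np

def maxcll_maxfall(frames):
    """Derive HDR10-style static metadata from per-pixel luminance frames."""
    max_cll = 0.0   # brightest single pixel anywhere in the content
    max_fall = 0.0  # brightest frame-average luminance
    for frame in frames:
        max_cll = max(max_cll, float(frame.max()))
        max_fall = max(max_fall, float(frame.mean()))
    return max_cll, max_fall

# Two toy "frames": a dim scene, then the same scene with a small specular highlight.
dim = np.full((4, 4), 50.0)
speculars = dim.copy()
speculars[0, 0] = 1500.0
print(maxcll_maxfall([dim, speculars]))  # -> (1500.0, 140.625)
```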

Display devices use complex algorithms to convert from one brightness level to another, and they work either globally across the entire screen or locally on specific areas of the image. In a display-referred system, as used by PQ, the act of converting from the master HDR range to the display HDR range is called tone mapping.

Tone mapping is an incredibly complex task and a great deal of academic research has been conducted into it, as it is not unique to broadcasting and has many applications in industry and medical imaging. Transferring from one HDR range to another is not a simple matter of applying a linear transform.

The specular highlights, for example, demonstrate localized conversion: the display needs to adjust the area within the locality of the highlight, without affecting the rest of the image, to provide the optimum picture. On other occasions, the whole image will need to be adjusted.
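As a flavor of the global case (a deliberately simple illustration, not what any production display actually implements), the extended Reinhard curve below compresses a 4,000 cd/m2 master into a 500 cd/m2 panel while leaving the mid-tones largely alone:

```python
import numpy as np

def reinhard_global(luminance, src_peak=4000.0, dst_peak=500.0):
    """Globally compress luminance (cd/m2) so src_peak lands exactly on dst_peak.

    The curve is near-linear in the mid-tones and rolls off the highlights.
    """
    x = np.asarray(luminance, dtype=float) / dst_peak
    white = src_peak / dst_peak  # input level that should map to display peak
    return dst_peak * x * (1.0 + x / white**2) / (1.0 + x)

# A 70 cd/m2 face is only mildly compressed; a 4,000 cd/m2 highlight hits the panel peak.
print(reinhard_global([70.0, 1000.0, 4000.0]))  # -> approx [61.5, 343.8, 500.0]
```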

Converting between video formats, such as the raw camera output or S-Log, takes place within the production rather than at the display (as with PQ). Look Up Tables (LUTs) provide the method of converting between luminance dynamic ranges as well as color spaces, and two fundamental types are in common use: 1D and 3D.
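A minimal sketch of the difference between the two types (illustrative helpers with hypothetical data: a 1,024-entry 1D gamma LUT and a 17-point identity 3D cube):

```python
import numpy as np

def apply_1d_lut(channel, lut):
    """Apply a 1D LUT to one channel (values normalized 0..1).

    Each output channel depends only on the same input channel, so a 1D LUT
    can reshape tone but cannot remap a color space.
    """
    positions = np.linspace(0.0, 1.0, lut.size)
    return np.interp(channel, positions, lut)

def apply_3d_lut(rgb, lut):
    """Apply a 3D LUT of shape (N, N, N, 3) by nearest-node lookup.

    Each output color depends on all three inputs at once, which is what lets
    a 3D LUT respect a target color space. (Production tools interpolate
    trilinearly; nearest-node keeps the sketch short.)
    """
    n = lut.shape[0]
    idx = np.clip(np.round(np.asarray(rgb) * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

gamma_lut = np.linspace(0.0, 1.0, 1024) ** (1.0 / 2.4)
grid = np.linspace(0.0, 1.0, 17)
identity_3d = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_1d_lut(np.array([0.18]), gamma_lut))             # -> ~0.489
print(apply_3d_lut(np.array([0.2, 0.5, 0.8]), identity_3d))  # nearest grid node
```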

Figure 3 – Although color is traditionally represented using the CIE color gamut, this is a diagonal slice of the cuboid representation. The cube contains the luminance information and demonstrates how changing the luminance without reference to the chrominance can cause out of gamut errors. Image supplied by Telestream.


Respect Color Gamut

1D LUTs provide a basic luminance and per-channel chrominance conversion, but 3D LUTs improve on this to make sure the correct color space is respected.

Great care must be taken in any conversion process, as the true color space, whether Rec.709 or Rec.2020, is a cube. The flat triangular CIE representation we often see is a diagonal slice across the color space cube; in effect, the luminance component has been removed. Converting between HDR and SDR is not just about scaling the luminance; it also embraces the color transform. If this is not performed correctly, out-of-gamut errors can easily arise.
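The point can be made concrete with the ITU-R BT.2087 matrix for converting linear-light Rec.709 RGB into Rec.2020: inverting it to go the other way pushes saturated Rec.2020 colors outside the 0 to 1 range in Rec.709, which is exactly an out-of-gamut error. A short Python sketch:

```python
import numpy as np

# ITU-R BT.2087: linear-light Rec.709 RGB -> Rec.2020 RGB.
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def rec709_to_rec2020(rgb):
    """Convert linear Rec.709 RGB to linear Rec.2020 RGB."""
    return M_709_TO_2020 @ np.asarray(rgb, dtype=float)

def rec2020_to_rec709(rgb):
    """Inverse conversion; saturated Rec.2020 colors can land outside 0..1."""
    return np.linalg.inv(M_709_TO_2020) @ np.asarray(rgb, dtype=float)

out = rec2020_to_rec709(np.array([0.0, 1.0, 0.0]))  # pure Rec.2020 green
print(out)                                # red and blue go negative, green exceeds 1
print(np.any((out < 0.0) | (out > 1.0)))  # True: out of gamut in Rec.709
```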

If a live production is simultaneously providing an HDR and an SDR program, the color spaces of both Rec.709 and Rec.2020 must be monitored independently; in practice, Rec.2020 monitoring is currently limited to DCI-P3 coverage, as true Rec.2020 monitors are not yet available. It is not enough to look at just the HDR output, and several monitoring methods are emerging to help, especially when making simultaneous HDR and SDR programs with dual color spaces.

Optimized Workflows

There is no suggestion that either HLG or PQ is better than the other, but they do have different use cases where each appears to excel. That does not mean a broadcaster cannot use a PQ system for live sports, or HLG for a blockbuster movie (the designers of HLG also argue that it maintains artistic intent), but each seems to have its own particular strengths and merits.

The immersive experience provided by the extended dynamic range and color space of HDR is taking broadcast television to a new level. However, as HDR is a relatively new technology and many viewers still require an SDR transmission, broadcasters must provide both SDR and HDR feeds simultaneously when making HDR programs, not only for live events but also in post-production. The key to making high-quality programs in both domains is to understand how HDR works in combination with the human visual system, and to make optimal use of high-quality monitoring in both workflows.
