Practical High Dynamic Range (HDR) Broadcast Workflows - Part 1

HDR is taking the broadcasting world by storm. The combination of a greater dynamic range and wider color gamut is delivering images that truly bring the immersive experience to home viewers. Vibrant colors and detailed specular highlights build a kind of realism into broadcast productions that our predecessors could only ever have dreamed of.




To completely understand how we can leverage the benefits of HDR, we must look deep into the HVS (Human Visual System) to gain insight into exactly what we’re trying to achieve. At first this may seem obvious, as we want to improve the immersive experience, but television, like all things engineering, is a compromise. Consequently, understanding the trade-offs between what we can achieve and what is required is critical to delivering the immersive experience.

The HVS is a complex interaction of the physical sensors in the eye, the visual cortex, and the psychology of how we perceive moving images. The HVS responds differently to still and moving pictures, and to static and dynamic range.

Then there is color to consider. Although a greater dynamic range can be achieved in the luma domain, to deliver the optimal viewing experience we have also expanded the color gamut, greatly extending the greens as well as improving the reds and blues. If done correctly, the pictures will look outstanding.

Making HDR work in production workflows, whether live or pre-recorded, requires us to reconsider our established and proven working practices. Although program makers will want to move as quickly as possible to delivering HDR productions, there is a vast number of televisions already in people’s homes that are not HDR compatible. This leads to the need to maintain backwards compatibility, and the challenges associated with it.

Two dominant HDR systems are evolving: HLG (Hybrid Log Gamma) and PQ (Perceptual Quantizer). Both have their advantages, and both work well in broadcast workflows. However, each has its own idiosyncrasies and tends to lend itself to a particular method of working. Again, understanding the HVS helps us decide which system is better for our particular use-case.

“Scene referred” and “display referred” are terms and concepts that have existed in television and film since their inception, but it is only recently that we, as broadcasters, have had to consider the differences between them and their true impact on broadcast workflows. This has led to the concept of metadata, and a whole new vocabulary has crept into the broadcast community to help optimize images for different types of television.

HDR has not only forced us to rethink our approach to workflows but also how we monitor signals. Peak white isn’t as obvious as it was in the days of standard dynamic range, and gamut detection is more important now than ever, especially as we’ve moved to a much wider color space.

These articles take us on a journey of understanding to discover what exactly we are providing with HDR and why. The practical aspects of the HVS are considered along with the requirements of the broadcast HDR workflows.


HDR is gaining incredible momentum in broadcasting, but the revolution isn’t just about higher dynamic range in the luma; it also embraces a much wider color gamut to deliver outstandingly vibrant colors, more presence, and a deeper immersive viewing experience. Although the pictures may look outstanding, creating them requires a deeper understanding of the underlying technology and the systems used to monitor it.

To create a more immersive viewing experience, broadcast innovators have been attempting to replicate nature as much as possible and bring the outside scene into our homes. This means a greater difference between the highlights and the details in the shadows, with smoother blacks, which together produce the dynamic range in the image. Although we are far from truly replicating nature, increasing both the luminance range of HDR and the associated color gamut of DCI-P3 and BT.2020 delivers the optimal viewing experience.

Nits And Candelas

Traditional standard dynamic range (SDR) uses fixed signal voltage levels to define peak white and black, but as we move to HDR we tend to think more in terms of light levels. The nit is a non-SI unit that has been adopted by some in the television and broadcast community to refer to the SI measurement of luminance: one nit equals one candela per square meter (1 nit = 1 cd/m²).

Cathode Ray Tube (CRT) televisions typically had a brightness of 100 nits (100 cd/m²), OLED has a maximum of 600 to 700 cd/m², modern LCD and QLED screens easily reach 1,000 cd/m² to 1,500 cd/m², and the new Sony 8K monitor has been reported to reach 10,000 cd/m² (although it is not yet commercially available). However, great care must be taken in interpreting these specifications, as vendors do not always specify whether the maximum brightness value refers to the whole screen or just parts of it.
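
As a simple illustration of these figures (a sketch of the arithmetic only, not from the article), and because 1 nit = 1 cd/m², the quoted peak-brightness values can be compared directly as ratios against the nominal 100 cd/m² of a CRT/SDR display:

```python
# Illustrative sketch: compare the peak-brightness figures quoted above
# against the nominal 100 cd/m² CRT/SDR reference. Values are cd/m² (nits).

CRT_REFERENCE_CDM2 = 100.0  # typical CRT brightness quoted above

display_peaks = {
    "CRT": 100.0,
    "OLED (upper figure)": 700.0,
    "LCD/QLED (upper figure)": 1_500.0,
    "Sony 8K prototype": 10_000.0,
}

for name, peak in display_peaks.items():
    ratio = peak / CRT_REFERENCE_CDM2
    print(f"{name}: {peak:,.0f} cd/m² ({ratio:g}x the CRT reference)")
```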

This is particularly interesting when we look at what HDR is supposed to do as opposed to what we can make it do. Technically, it would be possible to display alternating black and white stripes of 0 cd/m² and 1,000 cd/m² on a monitor. However, the intense brightness of the 1,000 cd/m² bar, and the contrast it provides compared to the 0 cd/m² bar, would likely cause discomfort for the viewer.

Instead, it is the specular and transient highlights that display the 1,000 cd/m² (and beyond) levels. It is perfectly possible to build a display that can light large parts of the panel at these high levels, but doing so may create discomfort for the viewer and would require potentially huge power supplies, making significant demands on the homeowner’s electricity supply.

Human Visual System (HVS) Requirements

The concept of dynamic range has two components: the ability of a system to replicate a wide difference between the highlights and the lowlights, and the effects on the human visual system. A home SDR television set can display approximately 6 f-stops and a professional SDR display about 10 f-stops. Each increase of one f-stop is a doubling of brightness, so a display with 6 f-stops gives a contrast range of 64:1 and a display with 10 f-stops gives a range of 1,024:1.
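
To make the arithmetic concrete, the following minimal sketch (illustrative only, not part of any broadcast specification) converts a number of f-stops into the equivalent contrast ratio, since each additional stop doubles the brightness range:

```python
# Minimal sketch of the f-stop arithmetic described above: n stops
# correspond to a contrast ratio of 2**n : 1.

def stops_to_contrast_ratio(stops: float) -> float:
    """Return the contrast ratio implied by a given number of f-stops."""
    return 2.0 ** stops

for stops in (6, 10):
    print(f"{stops} f-stops -> {stops_to_contrast_ratio(stops):,.0f}:1")

# Expected output:
# 6 f-stops -> 64:1      (home SDR television)
# 10 f-stops -> 1,024:1  (professional SDR display)
```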

Figure 1 – Human luminance detection is formed by the scotopic (rods), mesopic, and photopic (cones) receptors of the eye. The mesopic region uses the combined rod and cone response to provide a crossover between them.


Research has demonstrated the human visual system can adapt to a range of 10⁻⁶ to 10² cd/m² for scotopic light levels, where the rods dominate, and 10⁻² to 10⁵ cd/m² for photopic light levels, where the cones dominate. The mesopic area covers the overlap between the scotopic and photopic light levels, from 10⁻² to 10 cd/m². This gives a complete range of 10⁻⁶ to 10⁵ cd/m², or 10,000,000,000:1, approximately 33 f-stops.
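
Going the other way, a contrast ratio can be converted to f-stops with a base-two logarithm. This small sketch (illustrative only) checks the overall figure quoted above:

```python
import math

def contrast_ratio_to_stops(ratio: float) -> float:
    """Approximate number of f-stops spanned by a given contrast ratio."""
    return math.log2(ratio)

# The overall 10,000,000,000:1 figure quoted above for the HVS:
print(f"{contrast_ratio_to_stops(1e10):.1f} f-stops")  # ~33.2 f-stops
```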

The HVS is only able to operate over a fraction of this range at any one time, due to the various mechanical, photochemical and neuronal adaptive processes that shift the range to provide the appropriate light sensitivity, thus allowing the HVS to work effectively under any light conditions. This reduced range is referred to as the steady-state dynamic range.

One reason for this reduction in dynamic range is that, even with 33 stops of range, the brightest objects have a much higher luminance than the top of our range. For example, the sun has a luminance of approximately 10⁹ cd/m². An example of how our HVS automatically adapts is seen when we look out of a bright window and then into a dark room; our HVS quickly and automatically adjusts between the two scenes to give the perception of a much higher dynamic range.

Our static dynamic range may only be 11 stops, but this automatic adjustment gives the perception of a range of 14 stops, or even 20 stops in the right lighting conditions.
