Expanding Display Capabilities And The Quest For HDR & WCG

Broadcast image production is intrinsically linked to consumer displays and their capacity to reproduce High Dynamic Range (HDR) and a Wide Color Gamut (WCG).

For decades, it was normal for cameras to be configured according to the same standards as consumer displays. Those consumer displays could vary quite widely from the standard, for reasons of shop-front competitiveness, viewability in realistic circumstances, and cost-related engineering compromise - but in principle, a single standard was used throughout.

Now, there are at least as many ways for cameras to encode brightness and color as there are manufacturers. In distribution, though, a few approaches to HDR have emerged as clear leaders, alongside the one or two legacy approaches which handle SDR. So, while HDR production facilities will often involve proprietary solutions, there are far fewer options for the pictures output by a broadcast facility.

The hardware required to make HDR available to home viewers has advanced hugely. Even so, it is the variation between displays, especially those based on different technologies, which demands distribution standards able to adapt to that variation. For the first time, pictures are distributed in the full expectation that not every display has precisely the same capabilities - not because of miscalibration, but as a fully understood and expected part of their construction.

Display Basics: LCD

The majority of current consumer displays are based on liquid crystal panels. The technology is related to the black-on-silver segmented numerals of wristwatches and calculators from the 1970s. Wristwatch-style displays are transflective devices, using light reflected from the environment, while TVs are transmissive devices, effectively a filter on top of a continuously-illuminated backlight. The big difference is that each tiny segment in a video LCD is built on a thin-film transistor deposited on a sheet of glass - hence TFT. That transistor hugely increases speed and contrast.

The brightness limits of a TFT-LCD display are set by its ability to absorb - and survive - the energy of the backlight. The operating principle also relies on polarized light, in much the same way that two pairs of polarized sunglasses become opaque and black when rotated against one another. Polarizing the backlight costs at least 50% of the light output at all times. Finally, black levels suffer because the LCD effect itself is not ideal; even at maximum density, the panel cannot block the backlight completely.

Display Basics: OLED

In principle, many Organic Light Emitting Diode (OLED) displays are also TFT devices, with each pixel activated by a thin-film transistor. The operating principle, however, is entirely different: instead of blocking the output of a backlight, each pixel generates its own light when activated. This means, in principle, that OLEDs can achieve practically zero black level, which is very attractive for a display intended to reproduce the high contrast of HDR.

The development of OLED was somewhat protracted and the technology is not as mature as LCD, to the point where Sony shuttered its high-power OLED manufacturing facilities. The organic material used to create the LED emitter of each pixel was shown in early designs to be somewhat prone to degradation, particularly when run at the high output needed to compete with LCD. OLED remains generally less bright, given that each pixel must generate its own light, while an LCD can be made brighter, up to thermal limits, simply by using a more powerful backlight.

With these limits in mind, some designs, particularly by LG and manufacturers using LG’s panels, use a white-emitting subpixel alongside the red, green and blue-emitting subpixels. The white emitter is used when absolute maximum brightness is required, which affects the accuracy of any image content which is both bright and saturated. Even so, products using this approach were, at launch, probably the best consumer displays ever made. More recent designs still, particularly by Samsung, use new, lower-degradation, blue-emitting OLED elements, capped with fluorescent materials which convert their output to red and green as required.

Quantum Dots

These fluorescent materials are often based on quantum dots, so-called because their emission color depends on the size of sub-microscopic particles, which can be tuned at manufacture to produce any color required by the designer. Quantum dot OLED displays are available in both the professional and consumer markets, and generally offer improved lifespan, brightness or color purity.

Dual-layer LCD

In the face of improving OLED displays, manufacturers have begun to offer designs which layer two LCD panels together. Placing a single monochrome black-to-white pixel behind each group of red, green and blue subpixels allows a large contrast range, with black levels rivaling OLED. To date, this technology has remained too expensive for consumer displays, but is sometimes seen in reference displays for color grading and vision engineering.

Better Signals For Better Displays

Distributing pictures to satisfy these advanced consumer displays has become a new challenge, particularly when increasing broadcast bandwidth - which increases costs for both OTT and OTA distributors - is not ideal. The priority, then, has been packing more picture into similar amounts of data. Better codecs help, but optimizing the way we encode brightness and contrast is key to handling HDR images without requiring bandwidth that distributors would prefer not to fund.

Brightness is represented as a number, but doubling that number has never doubled the absolute amount of light, in terms of photons per second, which comes out of the display. The mathematics used - traditionally a gamma curve - have generally been intended to match the signal level to how bright the display looks, as opposed to how bright it is, though only approximately.
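
As a rough illustration, the sketch below assumes a simple 2.4 power-law display gamma (as described in BT.1886, ignoring black-level terms) and shows that doubling the signal value far more than doubles the light leaving the screen:

```python
# Illustrative only: a simple 2.4 power-law display gamma, showing that
# code value and light output are deliberately not proportional.

GAMMA = 2.4

def signal_to_light(signal: float, peak_nits: float = 100.0) -> float:
    """Convert a normalized signal value (0.0-1.0) to light output in nits."""
    return peak_nits * (signal ** GAMMA)

if __name__ == "__main__":
    for s in (0.25, 0.5, 1.0):
        print(f"signal {s:.2f} -> {signal_to_light(s):6.2f} nits")
    # Doubling the signal from 0.25 to 0.5 increases the light output
    # by a factor of roughly five, not two.
```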

Better Brightness

HDR standards are more accurate. For broadcast distribution, SMPTE has standardized the Perceptual Quantizer (PQ). PQ is based on a detailed analysis of human vision, such that each digital code value represents a near-constant apparent step in brightness. PQ is designed to encode light levels between zero and 10,000 nits using 12 bits, or 4096 levels. 10,000 nits is far beyond the capability of common displays and would be dazzling in most practical situations. 10-bit PQ is widely viewed as adequate for display technologies up to a few thousand nits. PQ is standardized as SMPTE ST 2084 and deployed in consumer and broadcast technology as part of HDR10.
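
As a minimal sketch, the PQ encoding curve (the inverse EOTF) can be written directly from the constants defined in ST 2084; it maps absolute luminance in nits to a normalized signal value:

```python
# Sketch of the SMPTE ST 2084 (PQ) encoding curve: absolute luminance in
# nits -> normalized signal value (0.0-1.0), quantized here to 10 bits.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits: float) -> float:
    """Normalized PQ signal for an absolute luminance between 0 and 10,000 nits."""
    y = max(0.0, min(nits / 10000.0, 1.0))
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

if __name__ == "__main__":
    for nits in (0.01, 1, 100, 1000, 10000):
        code = round(pq_encode(nits) * 1023)   # 10-bit code value
        print(f"{nits:8.2f} nits -> 10-bit code {code}")
```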

Legacy displays are largely designed for SDR pictures, and will not produce a watchable image from HDR signals using PQ brightness encoding. The desire for backward compatibility led the BBC and NHK to collaborate on the development of Hybrid Log-Gamma (HLG). As the name suggests, HLG combines characteristics of the gamma encoding used in conventional SDR (for darker picture details) with the logarithmic response of some modern camera formats (in brighter areas).
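
The hybrid shape is easy to see in the curve itself. The sketch below follows the piecewise reference OETF given in Recommendation BT.2100: a square-root (gamma-like) segment for the darker portion of the scene, and a logarithmic segment above it.

```python
import math

# Sketch of the HLG OETF from ITU-R BT.2100: normalized scene-linear light
# (0.0-1.0) -> normalized signal. Below 1/12 the curve is a square root
# (gamma-like); above that it is logarithmic.

A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073

def hlg_oetf(e: float) -> float:
    """HLG signal value for normalized scene-linear light e in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

if __name__ == "__main__":
    for e in (0.0, 1 / 12, 0.25, 1.0):
        print(f"scene light {e:.4f} -> signal {hlg_oetf(e):.4f}")
```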

HLG signals look reasonable on SDR displays, but contain enough information for HLG-aware HDR displays to produce true HDR images. Because HLG images may also use a wide color gamut, displays expecting conventional HD may still suffer compatibility problems, rendering images with reduced saturation and shifted hues. HLG is standardized (alongside PQ) in the International Telecommunication Union’s Recommendation BT.2100.

HLG has most often been regarded as better suited to live broadcast, while PQ has mostly seen service in video-on-demand delivery of single-camera drama - though both systems have found a variety of applications.

Further improvements are offered by Dolby Vision and HDR10+, both a reaction to the reality that different displays have different abilities to reproduce contrast. Both are based on PQ, but add information - metadata - describing the expected contrast and brightness range of the material. That lets a display process the image for the best possible results, given its knowledge of its own capabilities.
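
The kind of information involved can be sketched as data. The example below is purely illustrative - the field and function names are hypothetical, not the real bitstream syntax of any of these systems - but it reflects the spirit of HDR10’s static metadata (SMPTE ST 2086 mastering display values plus MaxCLL and MaxFALL); HDR10+ and Dolby Vision extend the idea with dynamic, per-scene or per-frame metadata.

```python
from dataclasses import dataclass

# Illustrative only: a simplified stand-in for static HDR mastering metadata
# in the spirit of SMPTE ST 2086 plus MaxCLL/MaxFALL. Names are hypothetical.

@dataclass
class MasteringMetadata:
    max_mastering_luminance_nits: float    # peak of the mastering display
    min_mastering_luminance_nits: float    # black level of the mastering display
    max_content_light_level_nits: float    # brightest pixel in the programme (MaxCLL)
    max_frame_avg_light_level_nits: float  # brightest frame average (MaxFALL)

def needs_tone_mapping(meta: MasteringMetadata, display_peak_nits: float) -> bool:
    """A display compares the content's expected range against its own peak."""
    return meta.max_content_light_level_nits > display_peak_nits

if __name__ == "__main__":
    meta = MasteringMetadata(1000.0, 0.0001, 950.0, 180.0)
    # A 600 nit consumer panel would need to tone-map this material down.
    print(needs_tone_mapping(meta, display_peak_nits=600.0))  # True
```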

Even More Color

Historic color standards were largely based on what could be built, rather than what might be ideal. It had long been recognized that this left broadcast television with a noticeable lack of certain real-world colors.

Figure 1 – The widely used CIE 1931 human perception diagram with REC.709 and REC.2020.

Cameras, other production equipment and consumer displays implementing HLG, HDR10, HDR10+ or Dolby Vision are all likely to be able to handle images prepared according to the ITU’s Recommendation BT.2020. The standard defines red, green and blue primaries which are monochromatic - each a single wavelength of light, sitting on the outer, curved edge of the CIE 1931 diagram. It is formally impossible to generate entirely monochromatic light, although technologies such as solid-state lasers, which are sometimes used in cinema projection, can get close.
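
To give a sense of scale, the sketch below takes the published xy chromaticities of the Rec. 709 and Rec. 2020 primaries and compares the areas of the two gamut triangles on the CIE 1931 xy plane (which is not perceptually uniform, so the ratio is only a rough indication):

```python
# Sketch comparing the Rec. 709 and Rec. 2020 gamut triangles using their
# published CIE 1931 xy primary chromaticities.

REC709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # R, G, B
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # R, G, B

def triangle_area(p):
    """Shoelace formula for the area of a triangle given three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = p
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

if __name__ == "__main__":
    a709, a2020 = triangle_area(REC709), triangle_area(REC2020)
    print(f"Rec. 709 area:  {a709:.4f}")
    print(f"Rec. 2020 area: {a2020:.4f}  ({a2020 / a709:.2f}x larger)")
```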

Future Ideas

As we have seen, because of the complex biological behavior of the human visual system, the shaded area of the CIE 1931 chart representing all visible colors is not a triangle. The range of colors displayable by a system using three primary colors, however, is a triangle. The three primaries of any electronic imaging system that can physically be constructed must be real colors, and must therefore lie inside the visible area. It is, therefore, impossible to build a display with three primaries which is capable of reproducing all visible colors.
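
A small test makes the point concrete. The sketch below applies a standard sign-based point-in-triangle check to the Rec. 2020 primaries; a saturated spectral cyan (around 500 nm, here taken as roughly x = 0.008, y = 0.538 - an approximate value used only for illustration) falls outside even that very wide triangle:

```python
# Sketch: even the very wide Rec. 2020 triangle cannot contain every point on
# the CIE 1931 spectral locus. The cyan coordinate (~500 nm) is approximate.

REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # R, G, B in xy
SPECTRAL_CYAN = (0.008, 0.538)   # roughly 500 nm
D65_WHITE = (0.3127, 0.3290)

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside_triangle(pt, tri):
    """True if pt lies on the same side of all three triangle edges."""
    signs = [_cross(tri[i], tri[(i + 1) % 3], pt) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

if __name__ == "__main__":
    print(inside_triangle(SPECTRAL_CYAN, REC2020))  # False: outside the gamut
    print(inside_triangle(D65_WHITE, REC2020))      # True: white is inside
```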

Displays capable of more color might therefore require more than three primaries. A common proposal is to add a cyan or turquoise primary, which allows engineers to choose a green primary leaning toward the orange-yellow part of the chart. Normally, that choice would improve the rendering of warm colors (crucially, human skin tones) while compromising the ability to show hues in the teal range; adding a cyan emitter would greatly alleviate that. Systems with up to six primaries have been proposed. Even these would not reproduce every visible color, but they might get very close.

Satisfying The Distribution Channel

All of these improvements require more information from camera systems, and that in turn demands more from lenses, cameras and every other item in the chain of equipment that creates a broadcast. Making the right choices in production will always rely on an understanding of what the viewer will see, and how, so the white heat of development for the mass market creates a viewer base that will forever be a moving target.
