HDR Picture Fundamentals: Brightness

This article describes one of the fundamental principles of broadcast: how humans perceive light, how that perception relates to the technology we use to capture and display images, and how both relate to HDR and Wide Color Gamut.

From the very earliest days of television, it’s been normal for the image on a display to not exactly match the real world in front of the camera. A video image of a sunrise does not cast a shadow; we can stare straight into that sun without danger. High Dynamic Range and Wide Color Gamut have produced even more capable pictures, but the way television systems change the brightness of a scene to accommodate technological limits still has its roots in Philo Farnsworth’s early experiments, and in the very dawn of photography.

How We See Brightness

Television images are created to satisfy human eyes, so the way they work necessarily depends on how the eye works. Higher light levels look brighter, but the relationship is not simple. The amount of light falling on an object might be measured in lux (lx), and there are a lot of photons in a lux: 1 lx represents eleven or twelve thousand photons per second striking an area of one square micron, depending slightly on the color of the light. Double that to around twenty-four thousand photons per second and the light looks brighter. Double it again, to around forty-eight thousand, and it looks brighter again - but not four times brighter than the original.
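
As a rough sanity check on that photon count, it can be estimated from the definition of the lux. The sketch below assumes a broad white spectrum with a luminous efficacy of around 230 lumens per watt and a mean photon wavelength of about 550 nanometres; both figures are illustrative assumptions, not values from the text or any standard.

```python
# Rough estimate of photons per second per square micron at 1 lux.
# Assumptions (illustrative only): white light with a luminous efficacy
# of ~230 lm/W and a mean photon wavelength of ~550 nm.
h = 6.626e-34                     # Planck constant, J*s
c = 2.998e8                       # speed of light, m/s
mean_wavelength = 550e-9          # metres (assumed)
luminous_efficacy = 230.0         # lm/W (assumed)

irradiance = 1.0 / luminous_efficacy          # W/m^2 at 1 lux (1 lm/m^2)
photon_energy = h * c / mean_wavelength       # joules per photon
photons_per_m2_s = irradiance / photon_energy
photons_per_um2_s = photons_per_m2_s * 1e-12  # one square micron = 1e-12 m^2

print(round(photons_per_um2_s))               # roughly 12,000
```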

The difference between, say, 1 lx and 2 lx looks, to us, very much like the difference between 2 lx and 4 lx. The former represents an increase of around 12,000 photons per second; the latter an increase of around 24,000. It’s a bigger jump - but both look like very similar increases in light level. This is why photographic f-stops look like a series of evenly-spaced increases in light but actually represent a doubling of the light for every stop. The familiar series of f-numbers - 4, 5.6, 8, and so on - advances by a factor of the square root of two, halving the area of the aperture, and so the light admitted, at every step.

So, in mathematical terms, human eyes have something like a logarithmic response to light.
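
One simple way to see that logarithmic behavior is to express each light level as a number of doublings (stops) above a reference; equal-looking brightness steps then come out as equal increments. A minimal sketch:

```python
import math

# Express light levels as stops (doublings) above a 1 lx reference.
# Equal-looking brightness steps correspond to equal increments in stops,
# even though the underlying photon counts grow geometrically.
levels_lux = [1, 2, 4, 8, 16, 32]
stops = [math.log2(lx) for lx in levels_lux]
print(stops)   # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0] - evenly spaced steps
```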

Making things look brighter and brighter, then, soon requires enormous amounts of energy. High Dynamic Range is, in part, about high peak light levels, and pushing further quickly becomes expensive: a five f-stop increase requires thirty-two times the energy. Technology has to generate a large amount of light to persuade the human eye, which is why early incarnations of OLED technology, for instance, sometimes had to work very hard to display good HDR images.

Crucially, High Dynamic Range is about contrast more than it is about brightness. The very word range refers to the difference between the maximum and minimum light levels. OLED displays, in particular, can achieve minimum levels very near to zero, making their dynamic range huge. The human visual system (HVS) has a greater capacity to distinguish differences between darker ‘shadow’ tones than between brighter tones. Some compression algorithms make good use of this by compressing more heavily the brighter areas within which the HVS cannot distinguish differences. Human eyes have very high sensitivity compared to cameras, but can be challenged by high contrast within a single frame - a display showing even a small area of high brightness can make dark shadows within the scene invisible. This all makes it difficult to put accurate numbers on the audience’s subjective perception of HDR pictures, and a great deal of work has been done on the analysis and adjustment of brightness and contrast for HDR.
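
Because range means the ratio between peak white and black, it is often quoted in stops, i.e. doublings. A small sketch, using purely illustrative peak and black levels rather than measurements of any particular display:

```python
import math

def dynamic_range_stops(peak_nits, black_nits):
    """Contrast between peak white and black, expressed in stops (doublings)."""
    return math.log2(peak_nits / black_nits)

# Illustrative numbers only, not measurements of real products.
print(round(dynamic_range_stops(100, 0.1), 1))     # SDR-ish display: ~10 stops
print(round(dynamic_range_stops(1000, 0.005), 1))  # HDR-ish display: ~17.6 stops
```

Because the black level sits in the denominator, a black that approaches zero makes the range in stops grow very quickly, which is why near-zero blacks matter so much.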

For much of the history of electronic television, though, absolute brightness was not well standardized.

The Growth Of Television

Early experiments in electronic television were plagued by insensitive cameras and dim cathode ray tube (CRT) displays. Surviving photographs often show excited test subjects squinting desperately against the barrage of light which was necessary for developmental television. The monitors of the time clearly could not match their output to the real scene. We might presume that the very earliest images had brightness characteristics which were defined by Farnsworth simply adjusting the controls until the image looked subjectively reasonable on that early equipment.

Things became more complex when television pictures were transmitted. The noise and interference of radio communications would have added speckle and grain to shadow areas of the image. The solution was to increase the brightness of dark areas during encoding, much like a curves adjustment in a modern photo editing application, then decrease their brightness again when decoding at the display.

The result was a reduction in visible noise (compare preemphasis or companding in audio). The mathematics used to do this was a simple power law, in which the input value - the brightness of the image - is raised to a power lower than one to brighten the shadows. The lowercase Greek letter gamma (γ) was used to represent that exponent, leading to the term gamma encoding, which persists to this day.
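
In its simplest form, that power law is just an exponent applied to the normalized signal: a value of gamma below one at the camera lifts the shadows, and the matching value above one at the display puts them back. A minimal sketch, using the familiar approximate values rather than any specific standard's exact curve:

```python
# Simple power-law gamma, with the signal normalized to the 0..1 range.
# The exponents are the familiar approximations, not any standard's exact curve.
CAMERA_GAMMA = 0.45            # applied when encoding
DISPLAY_GAMMA = 1 / 0.45       # ~2.2, applied when decoding

def encode(linear_light):
    """Brighten shadows before transmission."""
    return linear_light ** CAMERA_GAMMA

def decode(signal):
    """Darken shadows again at the display."""
    return signal ** DISPLAY_GAMMA

dark_tone = 0.05
encoded = encode(dark_tone)
print(round(encoded, 3))           # ~0.26 - the shadow tone is carried well above the noise
print(round(decode(encoded), 3))   # ~0.05 - round-trips back to the original
```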

Early systems simply relied on the behavior of a cathode ray tube display to perform the corresponding darkening of the shadows. CRTs are less efficient at low power levels, producing less light than expected given the current in the electron beam which causes the phosphor behind the glass to glow. That made the electronics of an early television much simpler and aligned well with the behavior of the human eye, a convenient coincidence which defined how television would work for decades to come.

Figure 1 – The red line shows the gamma correction added to the signal to correct for the non-linear response of the monitor (green line). This also helps boost the dark regions during transmission to reduce the apparent visible noise.

Although modern OLED and LCD displays don’t have the same behavior as CRTs by default, they are designed to simulate the performance of CRT displays. This was part of how simple backwards compatibility was achieved during the transition away from consumer CRT televisions. Television standards, including the International Telecommunication Union’s Recommendations BT.601 and BT.709, define the same value for gamma, with some small adjustments for technical convenience. Standards for computer imaging, including the HP-Microsoft collaboration sRGB, use very similar numbers: often close to 0.45 at the camera and around 2.2 (approximately the reciprocal of 0.45) at the display. Since the standards are usually expressed in terms of the display device, the value quoted is most often near 2.2.

Very few practical imaging systems actually use gamma encoding as simple as a single exponent. Some, for instance, define mathematics which quickly reduces the brightness of dark shadows to absolute black, to reduce the visibility of random noise. The word “brightness” describes the human experience of light level, and so simply “brightness encoding” might be a better way to describe these systems.
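
The camera-side curve defined in Recommendation BT.709 is a good example of a curve that is not a pure exponent: it follows a power law with an exponent of 0.45 over most of its range, but replaces the very bottom with a straight-line segment so that near-black values, and the noise riding on them, are not boosted without limit. A sketch of that curve:

```python
# Camera-side transfer function (OETF) defined in ITU-R BT.709,
# with light and signal both normalized to the 0..1 range.
def bt709_oetf(linear_light):
    if linear_light < 0.018:
        # Straight-line segment near black: keeps the gain applied to
        # near-black noise finite instead of boosting it without limit.
        return 4.5 * linear_light
    # Power-law section covering the rest of the range.
    return 1.099 * linear_light ** 0.45 - 0.099

print(round(bt709_oetf(0.01), 3))   # 0.045 - on the linear segment
print(round(bt709_oetf(0.18), 3))   # ~0.41 - mid-grey, on the power-law section
```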

Standardizing Brightness

This type of brightness encoding was widely discussed throughout the twentieth century. However, absolute brightness in terms of maximum and minimum light levels was not formally standardized until 2011, when the ITU released Recommendation BT.1886. In practice, manufacturers and calibration specialists had already developed de facto standards, and BT.1886 sought to codify that practice in a formal document. Again, that practice had been defined largely by the behavior of cathode ray tubes in precision reference displays, which led to the peak brightness associated with BT.1886 of about 100 candela per square metre, or 100 nits.
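
BT.1886 expresses this as a display-side curve (an EOTF) with an exponent of 2.4, plus two constants derived from the measured white and black levels of the actual screen. A sketch of that formula, using the nominal 100-nit white and an illustrative, assumed black level of 0.1 nits:

```python
# Display transfer function (EOTF) from ITU-R BT.1886.
# Lw and Lb are the measured peak white and black of the screen, in nits.
# The 0.1-nit black level below is an illustrative assumption, not a spec value.
def bt1886_eotf(V, Lw=100.0, Lb=0.1):
    gamma = 2.4
    a = (Lw ** (1 / gamma) - Lb ** (1 / gamma)) ** gamma
    b = Lb ** (1 / gamma) / (Lw ** (1 / gamma) - Lb ** (1 / gamma))
    return a * max(V + b, 0.0) ** gamma

print(round(bt1886_eotf(0.0), 2))   # ~0.1 nits  - video black
print(round(bt1886_eotf(1.0), 1))   # ~100 nits  - peak white
```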

Many home TVs greatly exceed that 100-nit level. The standard expects the display to be in dim surroundings, as in a grading suite at a post production company, and a 100-nit display will often look hopelessly dim in the domestic circumstances of a lounge with normal home illumination, let alone near a window in daylight. Home TVs typically achieve about 300 nits; computer displays, designed for use in what may be brightly-lit offices, can be twice that.

Cinema, conversely, is comparatively dim. Relying on a white projection screen demands the darkest possible surroundings, as any stray light will illuminate the screen and compromise contrast. Illuminated fire exit signs to each side of the screen (often in bright colors) are the bane of auditorium designers. Still, those dark surroundings mean that only a very modest light level of 48 nits was standardized by Digital Cinema Initiatives for the digital projection that replaced 35mm film.

High Dynamic Range in cinema is perhaps only a little over 100 nits, meaning that cinema HDR is similar in peak white level to conventional TV (though not in black level, and therefore not in contrast). Perhaps the most important realization is that the systems in this long history of Standard Dynamic Range imaging often look acceptably similar to the eye, especially when not compared side by side, even when showing the same material. The human visual system, again, has a huge ability to accommodate different circumstances, which has always complicated display design.

For comparison, domestic HDR displays, which we’ll discuss later, often peak at around 1000 nits; specialist designs may approach 4,000. HDR systems use very different approaches to brightness encoding, even including intelligent systems which may modify the image to create the best possible display.
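
Expressed in stops, the gaps between those peak levels are smaller than the raw numbers suggest. A quick comparison using the figures quoted above:

```python
import math

# Peak white levels quoted in the text, in nits (cd/m^2).
peaks = {
    "DCI cinema": 48,
    "BT.1886 reference": 100,
    "typical home SDR TV": 300,
    "domestic HDR display": 1000,
    "specialist HDR monitor": 4000,
}

reference = peaks["BT.1886 reference"]
for name, nits in peaks.items():
    stops = math.log2(nits / reference)
    print(f"{name}: {nits} nits, {stops:+.1f} stops vs 100-nit reference")
```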

Crucially, the same thing still drives standardization efforts: the sheer capability of display technologies. CRT displays are a legacy item now. LCD displays are effectively active filters, selectively blocking a backlight created using (invariably) LEDs; their peak brightness is limited only by backlight power consumption, which is largely a thermal limit in engineering terms but also encounters regulatory issues around the power consumption of consumer electronics. OLED displays have struggled to achieve very high brightness, but enjoy vanishingly low black levels.

No matter which display technology is in use, though, High Dynamic Range is a term which refers to contrast and, in practice, it is often found alongside improved color standards which facilitate a wider range of brighter, deeper hues. While they might seem separate, color and brightness inevitably interact in complex ways, and an understanding of both is essential for anyone interested in either.
