HDR Picture Fundamentals: Color
How humans perceive color, and the compromises involved in representing it, traced through historical iterations of display technology.
Historically, monitors and cameras were built to display colors defined by what was technologically practical, not what was ideal. More recently, improved technology has given engineers more leeway to pay attention to what the human visual system can actually do. That’s led to some big improvements which are often associated with HDR, but the fundamentals remain much as they always have.
How We See Color
The concept of color refers mostly to a human experience, rather than to a fundamental principle of physics. Humans can see electromagnetic radiation with wavelengths from around 380 to 700 nanometers. A light source emitting radiation around the lower end of that range looks blue; one at the higher end looks red. A light source might emit light across that whole range, in part of it, or in several different parts of it at once. A physicist would call that a spectrum. Different spectrums might look different to humans, and we call that experience color.
At the same time, the human eye can’t tell all spectrums apart. The retina only uses three different types of color-sensitive cell (plus a brightness-only type). As a result, it can only detect broad categories of light. That’s why TVs can show yellow. They don’t have any yellow emitters; instead, they emit red and green light simultaneously, which looks like yellow to humans. The spectrums are different, but the color is the same, a principle called metamerism. There are creatures on earth whose eyes have more than three types of color-sensitive cell; to them, a ripe yellow banana on TV might not look the same color as one in reality.
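To make metamerism concrete, here is a minimal Python sketch. The Gaussian curves standing in for the eye’s long-, medium- and short-wavelength cone sensitivities, and the emitter wavelengths, are illustrative assumptions rather than measured data; the point is only that two physically different spectrums can produce the same three cone responses.

```python
import numpy as np

# Illustrative (not measured) Gaussian stand-ins for the eye's long-,
# medium- and short-wavelength cone sensitivities.
wl = np.arange(380.0, 701.0)  # visible wavelengths, 1 nm steps

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

cones = np.stack([gauss(565, 55), gauss(540, 45), gauss(445, 25)])

def response(spectrum):
    """Sum a spectrum against each cone sensitivity (1 nm spacing)."""
    return cones @ spectrum

# A narrowband yellow source near 580 nm...
yellow = gauss(580, 8)

# ...and separate red and green emitters, as found in a TV.
red, green = gauss(620, 12), gauss(535, 12)

# Cone responses are linear in the spectrum, so the red and green
# drive levels that best match the yellow follow from least squares.
basis = np.stack([response(red), response(green)], axis=1)
weights, *_ = np.linalg.lstsq(basis, response(yellow), rcond=None)

print(response(yellow))  # what the eye reports for the yellow source
print(basis @ weights)   # very close, from a quite different spectrum
```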
Because of this, it’s sometimes said that humans have eyes with red, green and blue sensitive cells, just like a TV camera has red, green and blue sensitive electronics. That’s not quite true; the eye has cells sensitive to long, medium and short wavelengths, but the sensitivities are very broad. If we were to look at light representing those sensitivities, we would see a reddish-orange, a yellowish-green and a slightly bluer green, all dull and powdery colors. This tells us that color perception in humans is very dependent on processing in the brain (something camera engineers might recognize as matrixing, which is a closely analogous process).
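Matrixing, in this sense, is just a 3x3 mix of the channels: each output is a weighted blend of all three inputs, sharpening broad, overlapping raw sensitivities into better-separated colors. The coefficients below are illustrative assumptions, not taken from any standard.

```python
import numpy as np

# Illustrative matrixing: mix broad, overlapping raw channels into
# better-separated RGB. Each row sums to 1.0, so neutral grays pass
# through unchanged.
matrix = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.3,  1.4],
])

raw = np.array([0.50, 0.45, 0.20])   # dull, desaturated raw response
print(matrix @ raw)                  # more saturated: [0.58, 0.485, 0.095]
```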
Generating Color
Even before the invention of electronic television, much of the above was known. The earliest experiments in color video were by John Logie Baird and used two primary colors, filtering alternate frames in red and a teal color. This technique was also used in early sequential film projection systems. The results could be convincing on human skin, which requires mainly red and yellow hues, but struggled to differentiate grass and sky. It was always understood that full color images would require at least three primary colors.
Soon, Philo Farnsworth’s electronic systems, based on monochrome cathode ray tubes, were expanded to reproduce color. Sequential color systems used a monochrome display behind a rotating filter wheel. That allowed designers to pick any shades of red, green and blue that could be made into a transparent filter. However, it created rainbow-colored fringing around moving objects and was big, heavy and noisy, especially for large displays. Generating color using a cathode ray tube alone meant finding phosphors which would glow in useful shades of red, green and blue, a requirement which affects us to this day.
The primary colors eventually chosen for cathode ray tube displays were not ideal; they were just the best available. The red leaned toward orange, especially when driven at high power. That’s why some three-tube CRT projectors had magenta-tinted lens elements for the red tube. The green was a rather unsaturated grassy color, sometimes corrected with a cyan-turquoise lens to remove yellow. The blue was a reasonable color, but not very efficient, requiring high power and suffering a short life.
Cathode ray tube monitors used tiny stripes or patches of phosphor, masked so as to be individually activated by the electronics inside the tube. They could not reasonably use filtration to improve the primary colors. As such, those less-than-ideal choices became embedded in television standards such as the ITU’s Recommendations BT.601 and, later, BT.709.
Gamuts
The range of hues which can be displayed by a video system is referred to as its gamut. Most people understand that a color display uses red, green and blue primaries; the gamut is defined by which shades of red, green and blue are chosen. For example, no display can create a deeper green than the green light produced by its green-emitting channel, with the other two channels turned off, or a deeper cyan than the combination of its green and blue channels, with no red.
This has real-world consequences. Particularly, the under-saturated green makes it difficult for displays built to those standards to render blue-green hues. The prototypical real-world example of this is tropical ocean over white sand, which should appear deep teal or turquoise, but on conventional video displays often just looks blue.
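That limitation can be checked numerically. The sketch below treats the gamut as the triangle formed by the three primaries in the CIE xy coordinates introduced in the next section, and tests whether a chromaticity falls inside it. The primaries are the standard Rec. 709 set; the deep teal coordinate is an illustrative out-of-gamut example.

```python
# Gamut as a triangle: a chromaticity is displayable (ignoring
# brightness) only if it lies inside the triangle formed by the
# three primaries on the CIE xy plane.

REC709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B in CIE xy

def cross(o, a, b):
    """Z component of the cross product (a - o) x (b - o)."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def in_gamut(xy, primaries=REC709):
    """True if chromaticity xy lies inside the primaries' triangle."""
    r, g, b = primaries
    signs = [cross(r, g, xy), cross(g, b, xy), cross(b, r, xy)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

print(in_gamut((0.3127, 0.3290)))  # D65 white point: True
print(in_gamut((0.09, 0.60)))      # deep teal: False for Rec. 709
```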
A gamut is most often expressed using the CIE 1931 diagram. It can be thought of as a sort of distorted color wheel, much like a color picker in a computer graphics program. Desaturated colors, including white, are in the middle, while the edge represents the most saturated colors the eye can see. The colored part of the diagram represents all colors which can be seen by humans. Any point on it could represent any of a number (formally, an infinite number) of different spectrums of light, all of which would look the same.
The CIE diagram has useful properties. Lights of any two colors can be represented as two points on the chart, and any color on the straight line between them can be created by mixing those lights in different proportions. Likewise, three lights of different colors can be mixed to create any color inside the triangle they form on the chart. The chart itself is roughly triangular because humans, similarly, have those three types of color-sensitive cells, though the outline is smoothed and distorted by the color processing performed by the brain.
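That straight-line property is easy to compute. Because tristimulus values add while chromaticities are ratios, the correct interpolation weight for each light on the xy chart is its luminance Y divided by its y coordinate. A minimal sketch:

```python
def mix_xy(xy1, Y1, xy2, Y2):
    """Chromaticity of the additive mix of two lights, each given as
    CIE xy chromaticity plus luminance Y. The result always lies on
    the straight line between the two input chromaticities."""
    w1, w2 = Y1 / xy1[1], Y2 / xy2[1]
    return ((w1 * xy1[0] + w2 * xy2[0]) / (w1 + w2),
            (w1 * xy1[1] + w2 * xy2[1]) / (w1 + w2))

red, green = (0.64, 0.33), (0.30, 0.60)   # Rec. 709 red and green
print(mix_xy(red, 1.0, green, 1.0))       # a yellow on the red-green line
```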
That’s how color gamuts in film and television are typically represented, often specifying three primary colors using their coordinates on the CIE chart (lighting increasingly uses this approach to specify individual colors, as well). A gamut is not fully described, though, without a white point, which specifies the relative power levels of the red, green and blue channels. Switching on all those channels at the highest possible levels should create a substantially white light. Different technologies use different white points; the one used in most cinema, for instance, is greener than the one used in most television.
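The white point’s role can be made concrete: given the primaries and white as xy coordinates, there is exactly one set of relative channel powers that makes R=G=B land on white. A minimal sketch using the standard Rec. 709 primaries and D65 white:

```python
import numpy as np

def xy_to_XYZ(x, y, Y=1.0):
    """Tristimulus values of a chromaticity at luminance Y."""
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

primaries = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # Rec. 709 R, G, B
white = (0.3127, 0.3290)                                # D65

# Solve for the per-channel powers whose sum is the white point.
P = np.column_stack([xy_to_XYZ(x, y) for x, y in primaries])
scales = np.linalg.solve(P, xy_to_XYZ(*white))

rgb_to_xyz = P * scales          # the display's RGB-to-XYZ matrix
print(scales)                    # relative drive of R, G and B for white
print(rgb_to_xyz @ [1, 1, 1])    # equals the white point's XYZ
```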
The CIE 1931 diagram is old - the number is the year it was standardized - and has some less-than-ideal properties, too. Particularly, equal distances across the chart do not represent equal visual changes in color in different regions. Later standards include the CIE 1976 chart, which alleviates, though does not completely solve, that concern, although it compresses the vertical axis significantly and makes certain things hard to see. Because of its familiarity and general usability, the CIE 1931 chart remains widely used, for instance in test-and-measurement devices and in the definition of new color standards.
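The 1976 chart is a simple re-projection of the 1931 one: the standard transform maps xy to u'v' coordinates in which distances more nearly match perceived color difference.

```python
def xy_to_uv(x, y):
    """CIE 1931 xy to CIE 1976 u'v' (the standard transform)."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

print(xy_to_uv(0.3127, 0.3290))  # D65: approximately (0.1978, 0.4683)
```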
New Technologies
Video displays based on cathode ray tubes and phosphors were limited by the available phosphor colors. In principle, a TFT-LCD display can use any primary color which can reasonably be printed onto the display as a filter. OLED displays are made of light-emitting diodes which, by their nature, emit very deep and saturated colors. In the most recent (as of 2024) quantum dot OLEDs, the primary colors can be tuned to almost any desired wavelength by varying the size of the quantum dots used.
Improving the range of displayable colors, or expanding the gamut, means deeper, more saturated primaries. Here, however, we encounter an inconvenience: because the colored area of a CIE 1931 chart is not triangular, no three primary colors can possibly cover all of it. This is a very real-world problem, particularly regarding the selection of a green primary. Choosing a bluish-green creates a gamut with improved rendering of teal and turquoise colors, but sacrifices yellows. Choosing a yellowish-green, conversely, improves yellows and oranges but sacrifices teals and cyans.
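The gain from deeper primaries can be quantified by comparing the triangle each standard spans, here measured on the more perceptually even u'v' plane (the xy_to_uv() transform from above is repeated so the sketch stays self-contained). The primaries are the standard Rec. 709 and Rec. 2020 sets.

```python
def xy_to_uv(x, y):
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def gamut_area(primaries):
    """Area of the triangle spanned by three primaries in u'v'."""
    (x1, y1), (x2, y2), (x3, y3) = [xy_to_uv(*p) for p in primaries]
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

rec709  = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

print(gamut_area(rec2020) / gamut_area(rec709))  # roughly 1.7x larger
```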
Moving beyond the limits of previous standards is therefore an enticing prospect. That’s especially true in the context of HDR, where a wide color gamut lets us render subjects which are both bright and very colorful without distracting clipping or color inaccuracy.
In some ways, wide color gamut should not require much more of us than the traditional approach. Pixels in a digital image are still described using three numbers. The need to produce both wide color gamut and conventional images simultaneously, though, involves broadly the same complexities as the introduction of HDR: different manufacturers like to do things in different ways, and different distribution standards may demand different workflows. Working around those concerns lets manufacturers and broadcasters offer viewers equipment and services with visibly richer and more engaging images.
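Producing a conventional image from wide-gamut material comes down to a matrix conversion through a common space, followed by some handling of colors the smaller gamut cannot reach. The sketch below converts linear-light Rec. 2020 pixels to Rec. 709, reusing the white-point derivation from earlier, and simply clips what falls outside; real pipelines generally use gentler gamut mapping than a hard clip.

```python
import numpy as np

def rgb_to_xyz(primaries, white):
    """RGB-to-XYZ matrix for a gamut given primaries and white in xy."""
    def xyz(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([xyz(*p) for p in primaries])
    return P * np.linalg.solve(P, xyz(*white))

D65 = (0.3127, 0.3290)
M2020 = rgb_to_xyz([(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)], D65)
M709  = rgb_to_xyz([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)], D65)

convert = np.linalg.inv(M709) @ M2020   # Rec. 2020 -> Rec. 709, linear light

pixel = np.array([0.10, 0.80, 0.30])    # a saturated Rec. 2020 green
out = convert @ pixel                   # negative values are out of gamut
print(np.clip(out, 0.0, 1.0))           # hard clip: crude but illustrative
```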