Expanding Acquisition Capabilities With HDR & WCG
HDR & WCG do present new requirements for vision engineers, but the fundamental principles described here remain familiar and easily manageable.
It’s easy to get the impression that High Dynamic Range and Wide Color Gamut make TV production more complicated. Previously, crews could work without paying much attention to which shades of red, green and blue were used to create color, or how brightness was encoded. Now, productions often need to output both HDR and Standard Dynamic Range, and concepts like color gamut and brightness encoding can seem like new ideas.
The good news for time-pressured engineers is that the underlying concepts remain the same: HDR and WCG are not fundamentally any more complicated than traditional approaches. Vision engineering HDR may even be easier, given the reduced need to pack a huge range of contrast into the limited image of conventional television. The same red, green and blue primaries and the same ideas of brightness encoding are still there, simply deployed to create more capable pictures. Some proprietary systems add extra complexity, but in principle, HDR and WCG pictures work in much the same way as they always have - just more so.
More Contrast Means More Data
HDR means more contrast, and that means finding ways to describe a larger range. Even in conventional imaging, the relationship between signal level and light level was never simple. Doubling the signal value did not double the amount of light, so television has always relied on non-linear brightness encoding. Historically, that was done both to suit legacy cathode ray tube displays and to improve the noise performance of analog broadcast transmission, but traditional approaches are not ideal for HDR.
Eight-bit numbers are often used in distribution, describing brightness as one of 256 levels. Ideally, each increase in signal level should represent an increase in apparent brightness which is barely perceptible to humans. Dividing brightness any more finely than that wastes digital information. Dividing it more coarsely might create a system incapable of smoothly graduated color and brightness. Adding more contrast, as with HDR, requires that each increase in signal level represent a larger increase in brightness. In the worst case, this might mean visible banding (properly called quantization noise) in images such as a subtly graduated sky.
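A minimal sketch makes the trade-off concrete. It assumes a perceptually ideal encoding in which every code value step represents the same ratio of brightness; the step_ratio helper and the contrast figures are illustrative assumptions, not any broadcast standard.

```python
# Sketch: the relative brightness change represented by each code value
# when a given contrast range is divided into 2**bits equal-ratio steps.
# Illustrative only - real broadcast transfer curves differ in detail.

def step_ratio(contrast_ratio, bits):
    """Brightness ratio between adjacent code values, assuming the whole
    contrast range is divided into equal perceptual (ratio) steps."""
    levels = 2 ** bits
    return contrast_ratio ** (1.0 / (levels - 1))

for contrast in (100, 10_000):          # SDR-ish versus HDR-ish contrast
    for bits in (8, 10, 12):
        step = step_ratio(contrast, bits) - 1
        print(f"{contrast:>6}:1 range at {bits:>2} bits -> "
              f"{step * 100:.2f}% per step")
```

On those assumptions, a 10,000:1 range at 8 bits gives steps of well over 3% - comfortably visible in a graduated sky - while 10 bits brings the step size back under the roughly one percent that is often quoted as the threshold of visibility.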
Handling Higher Contrast
HDR standards generally solve this using two techniques. First, they simply use more bits, although broadcast camera systems have long been 10-bit (1,024 values) and this is often enough. Field recording might use 12-bit or, exceptionally, 16-bit encoding, providing 4,096 or 16,384 values respectively.
Second, the increase in light level per value can be redefined so that every value represents roughly the same visible increase in brightness. Camera manufacturers serving both multi-camera and single-camera acquisition were among the first to develop systems to do this; some have used the same mathematics for both. The term log refers to logarithmic encoding, as in Sony’s S-Log series, Arri’s Log C and Canon’s C-Log. Often, HDR production sees standards much like these enter the live broadcast world.
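The shape of such a curve is easy to sketch. The function below is not any manufacturer’s actual formula - S-Log, Log C and C-Log all add their own offsets, scaling and linear toe segments - but it shows the core idea that equal multiples of scene light map to equal steps of signal. The stops_range and mid_grey parameters are illustrative assumptions.

```python
import math

def generic_log_encode(x, stops_range=14.0, mid_grey=0.18):
    """Map linear scene light to a 0-1 signal so that each stop
    (doubling) of exposure gets an equal share of the signal range.
    Illustrative only - not S-Log, Log C or C-Log."""
    stops_above_grey = math.log2(max(x, 1e-6) / mid_grey)
    # Place mid grey at the middle of the curve and clip to legal range.
    return min(max(0.5 + stops_above_grey / stops_range, 0.0), 1.0)

for scene in (0.18, 0.36, 0.72, 1.44):  # successive one-stop increases
    print(f"linear {scene:.2f} -> code {generic_log_encode(scene):.3f}")
```

Each doubling of light adds the same 0.071 to the code value, which is exactly the property that keeps quantization steps perceptually even across a wide contrast range.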
All of them involve proprietary adjustments, so none is quite a literal implementation of a logarithm. Even so, each digital value represents something much closer to the same increase in apparent brightness. Other systems - particularly the perceptual quantizer developed for HDR transmission and used as part of HDR10 and Dolby Vision - are based on even more detailed analysis of how human vision works, although these are used mainly in distribution.
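The perceptual quantizer, by contrast, is openly specified in SMPTE ST 2084, so its curve can be written down exactly. This sketch implements the PQ encoding (the inverse EOTF), which maps absolute display luminance up to 10,000 nits onto a 0-1 signal:

```python
# SMPTE ST 2084 (PQ) encoding: absolute luminance in nits -> 0-1 signal.
M1 = 2610 / 16384            # ~0.1593
M2 = 2523 / 4096 * 128       # ~78.8438
C1 = 3424 / 4096             # ~0.8359
C2 = 2413 / 4096 * 32        # ~18.8516
C3 = 2392 / 4096 * 32        # ~18.6875

def pq_encode(nits):
    y = min(max(nits / 10000.0, 0.0), 1.0)  # normalize to 10,000 nit peak
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

for nits in (0.1, 1, 100, 1000, 10000):
    print(f"{nits:>7} nits -> PQ code {pq_encode(nits):.3f}")
```

Notice that 100 nits - roughly the whole range of an SDR display - lands at a code value of only about 0.51, leaving the rest of the signal range for the highlights HDR adds.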
More Color
Software designed to improve the range of colors - the gamut - predates high dynamic range. Adobe began working on improved color handling for Photoshop in the late 1990s. Early moves included an implementation of the SMPTE 240M standard which was technically incorrect, but the improved color range was so well-received that it was simply renamed Adobe RGB, and it remains popular to this day.
Until recently, though, improvements in color gamut were incremental. An image mastered for Adobe RGB would not be displayed accurately on a conventional computer monitor using the sRGB standard, but it would be viewable as an approximation and might not attract comment as clearly incorrect. More recent changes, intended to be very noticeable, have been based on red, green and blue primaries which are much deeper and more saturated than ever before.
The ITU’s Recommendation BT.2020 defines red, green and blue primaries which are fully saturated - each a single wavelength of light, right on the edge of the CIE 1931 diagram. Building a display which fully achieves those primaries is extremely difficult, since each demands monochromatic light, and pictures will not be backward-compatible with legacy displays - but getting close means television can finally show some truly saturated colors.
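The published chromaticities make this concrete. A minimal sketch, using the BT.2020 and BT.709 primary coordinates from the respective recommendations and a simple point-in-triangle test, confirms that the whole of the older gamut sits comfortably inside the new one:

```python
# CIE 1931 xy chromaticities of the primaries, per BT.2020 and BT.709.
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # R, G, B
BT709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # R, G, B

def inside(p, tri):
    """True if point p lies inside triangle tri (consistent cross signs)."""
    def cross(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    signs = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

for name, p in zip("RGB", BT709):
    print(f"BT.709 {name} at {p}: inside BT.2020 gamut = {inside(p, BT2020)}")
```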
Virtual Color
Some colorspaces - perhaps most notably the ACES system sponsored by the Academy of Motion Picture Arts and Sciences - have used primary colors which exist only as mathematical constructs. On a CIE chromaticity chart, they sit outside the shaded area which represents all real, visible colors. The advantage is that the triangle these primaries form can enclose the entire shaded area, and so can encode any visible color. The disadvantage is that no real display can show such a color space directly; displaying ACES requires mathematically processing it to suit the real-world primaries of a monitor.
In practice, this is not onerous. Many displays now handle a selection of color spaces, with processing electronics which could accommodate a signal built on effectively any primary colors, including virtual ones. The monitor will never be capable of displaying colors beyond the gamut limits of its own hardware, but it can do the best possible job of presenting the image within those limits.
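A sketch of what that processing involves follows, using numpy. The AP0 chromaticities are the published ACES values and the matrix derivation is the standard one, but the example deliberately ignores the chromatic adaptation between the ACES white point and D65 that a real pipeline would include, so it is illustrative rather than production-accurate.

```python
# Sketch: mapping ACES AP0 (virtual primaries) to linear BT.709 for display.
import numpy as np

def rgb_to_xyz_matrix(prims, white):
    """Standard derivation of an RGB->XYZ matrix from xy primaries and
    white point: scale the primary columns so that white maps to white."""
    def xyz(x, y):                       # xy chromaticity -> XYZ with Y = 1
        return np.array([x / y, 1.0, (1 - x - y) / y])
    cols = np.column_stack([xyz(*p) for p in prims])
    scale = np.linalg.solve(cols, xyz(*white))
    return cols * scale

# Published ACES AP0 primaries and white point; note the green primary
# and the negative-y blue primary lie outside the range of real colors.
AP0_TO_XYZ = rgb_to_xyz_matrix(
    [(0.7347, 0.2653), (0.0, 1.0), (0.0001, -0.0770)], (0.32168, 0.33767))

XYZ_TO_709 = np.array([[ 3.2406, -1.5372, -0.4986],   # linear BT.709/sRGB
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

aces_to_709 = XYZ_TO_709 @ AP0_TO_XYZ
deep_green = np.array([0.0, 0.5, 0.0])   # a saturated green in AP0
print(aces_to_709 @ deep_green)          # negative components: outside 709
```

The negative red and blue components in the result are the signal’s way of saying the color is outside the display gamut; the monitor’s processing has to clip or compress such values into what the panel can actually show.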
Color gamuts using virtual primaries inevitably allocate a lot of digital values to non-colors that cannot exist in reality. As such, they have found use mostly in post production, where the advantages of a single unified color space are most valuable and data efficiency is a lower priority. In broadcast and in the field, cameras are likely to work using a manufacturer-recommended, often proprietary color gamut, frequently one associated with a particular brightness encoding (though some systems allow color and brightness to be configured separately).
Component Complications
Wide color gamut images present some of the same concerns as high dynamic range ones. Use too small a range of digital values, and gradually-varying hues might suffer banding - quantization noise again. Things are complicated by the fact that digital video signals are very often not handled as red, green and blue values. The human eye has more brightness-sensitive cells than color-sensitive cells, so it is wasteful to encode color at the same sharpness as brightness.
Because of that, component signals (sometimes slightly inaccurately called YUV) are often used, particularly in broadcast production. Component signals encode a monochrome, black and white image alongside two color-difference signals, each representing the amount by which every monochrome pixel should be shifted along the red-cyan or blue-yellow axis of a color wheel. This helps because the separated color information can be sent at lower resolution - typically half.
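A minimal sketch of the idea, using the published BT.2020 non-constant-luminance luma weights (the pixel values themselves are arbitrary illustrations):

```python
# Sketch: forming luma and color-difference signals from gamma-encoded
# R'G'B', then halving the chroma resolution as 4:2:2 sampling does.
KR, KB = 0.2627, 0.0593              # BT.2020 luma weights (KG = 1 - KR - KB)

def to_ycbcr(r, g, b):
    y  = KR * r + (1 - KR - KB) * g + KB * b
    cb = (b - y) / (2 * (1 - KB))    # blue-difference, range -0.5..+0.5
    cr = (r - y) / (2 * (1 - KR))    # red-difference,  range -0.5..+0.5
    return y, cb, cr

row = [(1.0, 0.2, 0.1), (0.9, 0.3, 0.1), (0.2, 0.8, 0.3), (0.1, 0.9, 0.4)]
ys, cbs, crs = zip(*(to_ycbcr(*px) for px in row))

# 4:2:2 keeps every luma sample but only one chroma pair per two pixels.
cb422 = [(cbs[i] + cbs[i + 1]) / 2 for i in range(0, len(cbs), 2)]
cr422 = [(crs[i] + crs[i + 1]) / 2 for i in range(0, len(crs), 2)]
print([round(v, 3) for v in ys])
print([round(v, 3) for v in cb422], [round(v, 3) for v in cr422])
```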
In an ideal world, none of this has any bearing on wide color gamut. The same considerations apply to both component and RGB images, though the achievable range of color and brightness may be slightly constrained depending on the mathematics used.
A Word About Film
Photochemical imaging is subject to very different considerations. In principle, a projected film print can show any color or contrast its chemistry can be persuaded to create; in practice, considerable effort is made to standardize behavior. Crucially, film is a subtractive medium, using cyan, magenta and yellow dyes which combine to create notionally red, green and blue primaries. Choices such as reversal film, cross-processing, pushing and pulling, or bleach bypass applied to either the negative or the print can significantly alter the color and contrast behavior of a film.
This means that film-originated material can often make good use of HDR and WCG, in much the same way that productions shot on film but finished only in standard definition can often be remastered in HD later. That’s particularly true where processes such as bleach bypass on a print made for a very contrasty cinema experience, which HDR might recreate more faithfully than other approaches.
In Live Production
Effectively all modern broadcast cameras are, in principle, capable of high dynamic range and wide color gamut. In the past, cameras were required to process the sensor’s inherently high dynamic range image according to the ITU’s Recommendation BT.709 and associated standards, for reasonable results on the monitors of the time. As we’ve seen, HDR and WCG pictures use the same principles as conventional pictures; the difference is that the processing in the camera is gentler, allowing wider extremes of color and brightness to remain in the output image.
Most manufacturers of broadcast cameras and associated vision hardware are likely to have proprietary approaches to preparing their output for HDR broadcast. That workflow must carry the picture all the way through camera operation, vision engineering, mixing and distribution, often requiring both SDR and HDR outputs.
Progress in this area often means that the work of vision engineering SDR and HDR/WCG outputs simultaneously can be automated. Ensuring those pictures reach the viewer in the best possible way, meanwhile, depends on distribution systems and even home TVs which have much more complex responsibilities than ever before - and an increasingly demanding viewer base.