HDR: Part 3 - Grading

For almost as long as photography has existed, people have pursued ways of modifying the picture after it's been shot. The "dodge" and "burn" tools in Photoshop are widely understood as ways to make things brighter or darker, but it's probably less widely understood that they refer to techniques for exposure control that date all the way back to the earliest days of darkroom image processing. Bring moving images into the mix and consistency becomes a big concern too. Individual still photographs might be part of a single exhibition, but they don't have any concept of being cut together in a sequence.


This article was first published in 2019. It and the entire 'HDR Series' have been immensely popular, so we are re-publishing it for those who missed it first time around.


Color brought new concerns. Not only did brightness have to match, but also hue and saturation. Given the variability of daylight and the difficulty of achieving consistent exteriors, it's not surprising that a way to make adjustments on a per-shot basis was quickly developed. The process was originally called color timing on the basis that changes in settings had to happen at specific points throughout a reel of film. Color adjustment, then, was originally about solving problems; infamously, the first Star Trek television series had problems with makeup tests for its green-painted alien women when the processing lab carefully corrected the emerald skin back to normality.

The available tools were primitive: a color timer could increase or decrease red, green or blue, or affect overall exposure by moving all three. The contrast response of film meant that things weren't quite as straightforward as that description suggests, but the creative possibilities were certainly limited. Inventions such as bleach bypass processing, which creates denser blacks (when performed on a positive element), were developed to give people more options.

The control available to electronic cameras has always been more comprehensive. Even now, a TV studio or outside broadcast will have someone solely dedicated to adjusting black and white levels, color balance and other parameters to match cameras. That sort of control didn't become available to cinematographers until perhaps the 80s – some might say slightly earlier – when telecine color correction became more sophisticated.

Before long, color correction hardware was separated from telecines so that pre-recorded video material could also be processed. At first, that would be limited by the processing applied in video cameras to ensure that their pictures looked good on standard displays. The clipped highlights of standard video signals meant that a colorist was effectively layering adjustment on top of a harsh correction that had already been applied by the camera, leading to the development, in the late 90s, of less aggressive in-camera correction. Sony called this Hypergamma, and it was arguably the forerunner of the things we generally call log in 2019.

Log pictures are a departure because they are no longer intended to be viewable on standard displays. Right back to the dawn of film color timing, the intent had generally been to shoot images as close as possible to the final result, with adjustments made later intended to correct errors. Shoot log, and we are working on the basis that the picture will be – must be – processed later. Color negative film needed printing, but that was a single process: film from any manufacturer, shot in any camera, processed in any laboratory, would use exactly the same equipment and techniques, and could end up on the same projector or, via telecine, the same VHS player.
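To make that departure concrete, the sketch below shows a hypothetical, generic log curve in Python. The constants are invented for illustration; every real camera encoding (Log C, S-Log3, V-Log and so on) publishes its own formula, but they all share the idea of packing several stops of scene brightness into a flat signal that only looks right once it is decoded.

```python
import math

# A hypothetical, generic log curve for illustration only; real cameras
# each publish their own constants and formulas.
MID_GREY = 0.18   # scene-linear value of an 18% grey card
GAIN = 0.25       # how tightly the stops are packed into the encoded range
OFFSET = 0.55     # where mid grey lands in the encoded signal

def linear_to_log(x: float) -> float:
    """Encode scene-linear light into a flat, low-contrast 'log' signal."""
    x = max(x, 1e-6)                       # avoid log(0) for pure black
    y = GAIN * math.log10(x / MID_GREY) + OFFSET
    return min(max(y, 0.0), 1.0)           # clamp into the 0-1 signal range

def log_to_linear(y: float) -> float:
    """Invert the encoding so grading tools can work on linear light again."""
    return MID_GREY * 10 ** ((y - OFFSET) / GAIN)

# Mid grey encodes to 0.55, and a highlight four stops brighter still fits
# below 1.0, which is why undecoded log footage looks washed out on an
# ordinary monitor until it is processed for the display.
print(linear_to_log(MID_GREY), linear_to_log(MID_GREY * 16))
```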

Now, there’s a different set of standards for every camera and for each of several types of display. Including standard and high dynamic range plus broadcast and cinema releases, a feature film will commonly be prepared for four or five different types of display, each of which may use different primary colors and handle brightness differently. To cover the most frequently asked question, no, it is not usually possible to automatically create all of these versions without at least a few manual adjustments to each one.

Diagram 1 – Color look-up tables (LUTs) are used to map between camera and monitor brightness response curves.

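The mapping in Diagram 1 can be illustrated in a few lines of code. A one-dimensional LUT is simply a table of output levels sampled at evenly spaced input levels, with anything in between found by interpolation; real grading LUTs are usually three-dimensional (a full RGB lattice in a .cube file), but the principle is the same. The names below are illustrative, not any grading application's actual API.

```python
def apply_1d_lut(value: float, lut: list) -> float:
    """Map a 0-1 signal value through a 1D LUT using linear interpolation."""
    value = min(max(value, 0.0), 1.0)
    position = value * (len(lut) - 1)        # where the value falls in the table
    lower = int(position)
    upper = min(lower + 1, len(lut) - 1)
    fraction = position - lower
    return lut[lower] * (1.0 - fraction) + lut[upper] * fraction

# A toy five-entry LUT that lifts shadows and rolls off highlights.
display_lut = [0.0, 0.30, 0.55, 0.80, 0.95]

print(apply_1d_lut(0.18, display_lut))       # a flat mid tone mapped for display
```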

Despite all these developments, though, the core work has changed little, and there are as many approaches to grading as there are colorists. Most will use a waveform monitor to establish reasonable black levels and color balance; that’s analogous to the more technical task of evening out differences in exposure caused by something as simple as a changing time of day. Common grading applications will allow a normalization grade to be set up and then kept separately from others which may follow, meaning that making the material consistent, which is a more or less procedural task, can be kept separate from the more complicated and subjective world of creative grading.

Basic normalization is most often done using tools analogous to the video processing amplifiers of early telecines, which offered lift, gamma and gain controls that (broadly speaking) most obviously affect shadows, midtones and highlights respectively. These are controls with a known, mathematically defined behavior that exist in most grading applications, and are useful for matching black and white levels and the apparent contrast of an image. Creative grading may use these controls but is also likely to use more application-specific tools offering sophisticated ways to affect color and brightness.
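One common textbook formulation of those three controls is sketched below. Individual grading applications implement the mathematics differently, so treat this as an illustration of how the controls behave rather than any product's exact processing.

```python
def lift_gamma_gain(value: float, lift: float = 0.0,
                    gamma: float = 1.0, gain: float = 1.0) -> float:
    """Apply lift (shadows), gamma (midtones) and gain (highlights) to a 0-1 value."""
    value = value + lift * (1.0 - value)   # lift raises black without clipping white
    value = value * gain                   # gain scales the range, felt most in highlights
    value = min(max(value, 0.0), 1.0)
    return value ** (1.0 / gamma)          # gamma bends midtones, leaving 0 and 1 fixed

# Matching one shot to another: a small lift to open up crushed blacks and a
# touch of gamma to brighten the midtones.
print(lift_gamma_gain(0.05, lift=0.02, gamma=1.1))
print(lift_gamma_gain(0.50, lift=0.02, gamma=1.1))
```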

The complication of log encoding is that modern grading involves a signal processing chain that is far more configurable than it has ever been. That means flexibility, but it also means a lot of options that must be correctly set. As we've seen, in order to appear correct, the original camera image must be processed in the knowledge of the type of display in use. Generally, the grading system will need to be told explicitly which camera encoding and which display standard are in use, and how each is configured, before this can be done correctly. There are a lot of potential equipment configurations, but usually, telling the grading system what to expect will do two things.

First, it will ensure that the necessary processing is applied so the image looks normal, where "normal" is more or less defined by the camera manufacturer's opinion and the settings on the camera during the shoot. Sometimes, a production will have established a creative look to be applied to all the footage, which might have been used in the monitors during principal photography, and that might be used in place of the manufacturer's default settings.

Diagram 2 – Colorists will need to provide different grading options for the target audience display.


The result is an image that may be closer to the desired result than the manufacturer's idea of normal. That can save time, though some people prefer to start from scratch, since preset looks, usually in the form of a lookup table (LUT) file, can do unpredictable things that make some colorists nervous. Either way, the key realization is that the low-contrast camera original, with all of its dynamic range intact, is always available and used as the basis for all the color processing that happens.

The second thing that happens based on the camera and monitoring settings is, usually, that the behavior of the grading controls will change. Most systems will ensure that grading choices are applied to the image before it's processed to look right on the selected display. That means the grading controls work on an image that is, in effect, odd-looking, and which might be greatly altered before it's seen by the colorist. This can make the feel of the controls counter-intuitive, so many systems modify the behavior of the controls in an attempt to keep them feeling normal in the context of the current camera and display settings.
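That ordering can be sketched as a simple pipeline. Every stage below is an invented stand-in (a real system uses the camera maker's decode, the colorist's grade and the selected display transform), but the point is the order: the grade is applied to the scene-referred image, and only then is the result mapped for whichever display is in use.

```python
def grade_for_display(camera_log_value, decode, grade, display_transform):
    """Camera log -> scene-referred working space -> grade -> display mapping."""
    working = decode(camera_log_value)      # undo the camera's log encoding
    graded = grade(working)                 # creative / matching adjustments
    return display_transform(graded)        # map for the monitor actually in use

# Toy stand-ins: a log decode, a slight exposure trim, and two display options.
decode = lambda y: 0.18 * 10 ** ((y - 0.55) / 0.25)   # mirrors the earlier log sketch
trim = lambda x: x * 0.9                               # the 'grade': pull exposure down a touch
sdr = lambda x: min(x, 1.0) ** (1 / 2.4)               # crude standard-display curve
hdr = lambda x: min(x / 10.0, 1.0)                     # crude placeholder for an HDR mapping

print(grade_for_display(0.55, decode, trim, sdr))      # the same grade viewed on SDR
print(grade_for_display(0.55, decode, trim, hdr))      # ...and on the HDR option
```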

Where a production needs to create deliverables for different kinds of display – as almost all do – the most detailed work is generally done on a display matching the most capable option. Often that will mean a high resolution, high dynamic range display. Once that version is finalized, the process of creating the less-demanding versions may be at least partially automated, although standard dynamic range versions of productions originally finished in HDR are likely to require at least some manual adjustment for the best results.
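A deliberately crude sketch of that down-conversion shows why the automated pass rarely survives untouched: compressing an HDR master's highlights into the SDR range inevitably flattens the very detail the HDR grade was built around. The numbers and the roll-off below are invented for illustration and bear no relation to any standardized transform.

```python
SDR_PEAK = 100.0     # nominal SDR reference white, in nits
HDR_PEAK = 1000.0    # peak brightness of the hypothetical HDR master, in nits

def hdr_to_sdr_nits(hdr_nits: float, knee: float = 80.0) -> float:
    """Pass lower levels through; roll off everything above the knee."""
    if hdr_nits <= knee:
        return hdr_nits
    headroom = SDR_PEAK - knee          # what remains above the knee in SDR
    excess = hdr_nits - knee            # how far above the knee the HDR value sits
    return knee + headroom * excess / (excess + headroom)

# Diffuse white survives almost untouched, while a 1000-nit specular highlight
# is squeezed just under the SDR ceiling and loses most of its sparkle.
print(hdr_to_sdr_nits(90.0), hdr_to_sdr_nits(HDR_PEAK))
```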

If all this sounds like a potential minefield of manufacturer-specific complexity, it is. We haven’t even mentioned the deeper topics of scene- or display-referred grading, ACES, calibration, and other factors which can all either help or hinder. This complexity arises from a lack of standardization. Camera manufacturers are often keen to promote their brightness and color processing as a feature, whereas a unified system would probably not involve much, if any, compromise in the real-world performance of cameras. At the same time, there’s a format war going on in the world of HDR, as well as a lot of technological change in both professional and consumer displays.

In the end, the actual work of grading, the key issues of altering color and brightness, is a matter of creative interpretation. Fashions change, and work which was considered cutting edge a few years ago might be laughed at now – unless or until fashions change back again. Consistency is a key craft skill of colorists, just as it is of cinematographers, but as long as the equipment is properly set up, making pretty pictures is just as much a matter of opinion. 
