HDR For Cinematography - Part 2
In this second installment of our extended article on HDR for cinematography, we look at the practical aspects and applications of HDR.
Greater Flexibility
Companies such as Netflix place great demands on cinematographers for the productions they supply. Dolby Vision is the norm at 4K p60 with full-resolution 4:4:4 color sampling at 12 bits.
Even with these formats, the resolution of the cameras still exceeds that of the playout and broadcast system. Furthermore, cameras are going to improve at a greater rate than our ability to change the broadcast formats, thus demanding greater flexibility in the system.
Camera manufacturers have provided their own solutions to acquisition by using versions of logarithmic transfer functions to map the 14-bit video from the camera sensor to something more manageable such as 10- or 12-bit 4:4:4. Transfer functions such as S-Log from Sony, LogC from Arri, Canon Log, and Blackmagic Log all help squeeze as much information as possible into the 10- or 12-bit distribution and recording system, to maintain compatibility with existing infrastructures.
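To illustrate the principle, here is a minimal Python sketch of a log-style OETF compressing linear sensor light into 10-bit code values. The curve and its constants are invented purely for demonstration; they are not Sony's, Arri's, Canon's, or Blackmagic's published formulas, which add carefully designed linear toes and tuned constants.

```python
import numpy as np

def toy_log_oetf(linear, black=0.01, white=8.0, bits=10):
    """Map linear scene light (0.18 = mid grey) to integer code values.

    Illustrative curve only, not any manufacturer's published formula.
    """
    linear = np.maximum(linear, 0.0)
    # Logarithmic compression: equal multiplicative steps in light
    # (f-stops) become roughly equal additive steps in code value.
    encoded = (np.log2(linear + black) - np.log2(black)) / \
              (np.log2(white + black) - np.log2(black))
    return np.round(encoded * (2 ** bits - 1)).astype(int)

# Mid grey (18%), 90% reflectance white, and a specular highlight 4 stops up
print(toy_log_oetf(np.array([0.18, 0.90, 2.88])))
```

The point of the shape is that several stops of highlight range, which would be clipped by a straight linear 10-bit encoding, survive in the upper code values for later grading.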
It’s also worth remembering that HLG and PQ are completely different from the log formats leaving the camera. For example, a live broadcaster might use Hybrid Log-Gamma for their production, so they would only use the HLG output. When recording a movie production, one of the camera log formats might be used instead and then processed in post.
Recording Log Formats
Even though PQ is used as a delivery and transmission format by the broadcaster, the cinematographer will still need to record in a log format for later post-processing.
This adds another interesting challenge: the cinematographer had much greater dynamic range and latitude when shooting with one of the 14-bit cameras for SDR productions. The camera-log recordings allowed the color grader to effectively lift detail from the shadows and highlights during color correction, as the recording holds much more information than can be seen by the naked eye, making conversion to SDR easier. Now that the cinematographer is thinking in HDR terms, the latitude for error is much reduced, so they must focus more on making sure the images are correct during acquisition.
There was a time when the cinematographer knew they could fix a problem in post, as there was a much higher margin for error when shooting for SDR. As we move to HDR productions, however, this margin of error has been almost completely removed. There is still some latitude, as the camera provides 14-bit data and a company such as Netflix requires 12 bits, but there’s not much in it.
Viewing Conditions
HLG and PQ are the two distribution formats playing out in the HDR arena. Although both have their strong points, HLG is proving the most applicable to live productions. The system is scene-referred, so broadcasters cannot make any assumptions about the viewer’s home television or mobile device.
Consequently, the signal-to-light relationship must be maintained. HLG is often graded and shaded for 1,000 cd/m², but a limit is nonetheless imposed.
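The HLG curve itself is published in ITU-R BT.2100. As a reference sketch, the following Python implements the scene-referred HLG OETF using the standard’s constants:

```python
import numpy as np

# HLG OETF constants from ITU-R BT.2100
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(scene_linear):
    """BT.2100 HLG OETF: normalized scene light E in [0, 1] -> signal E'.

    Below 1/12 the curve is a square root (gamma-like, SDR compatible);
    above it a logarithmic segment compresses the highlights.
    """
    e = np.asarray(scene_linear, dtype=float)
    return np.where(e <= 1.0 / 12.0,
                    np.sqrt(3.0 * e),
                    A * np.log(12.0 * e - B) + C)

# The breakpoint at 1/12 maps to signal 0.5; peak scene light maps to 1.0
print(hlg_oetf([1.0 / 12.0, 1.0]))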
The ICtCp color method is still a color-difference system similar to YCbCr; however, it takes advantage of some of the adaptive aspects of the human visual system to provide color subsampling that exhibits fewer artifacts during post-production processing.
It’s worth remembering that the potential brightness of a home television, often expressed in nits, is not intended to be the brightness of the whole screen. If we were to sit close to a 1,500-nit television displaying a full-screen peak white signal, the experience would certainly be uncomfortable. Instead, the maximum brightness of a television or monitor refers to the brightness of specular highlights and peak transients.
This leads to some interesting situations for cinematographers. HLG works well for live events as it is scene-referred and there is still a direct relationship between the light level of the scene and the HDR signal level. PQ is display-referred and allows the cinematographer to make some fundamental assumptions about the viewing environment.
Although PQ can work in the live environment, it certainly excels in the making of high-end movies. Metadata established during the grading and editing process provides information about how the image should be displayed. The viewer’s television or mobile device then uses this information to calibrate the screen so that it displays the cinematographer’s images as intended, often referred to as “artistic intent”.
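Display-referred here has a precise meaning: a PQ code value requests an absolute luminance from the screen. A sketch of the PQ EOTF from SMPTE ST 2084 / BT.2100, using the published constants:

```python
import numpy as np

# PQ EOTF constants from SMPTE ST 2084 / ITU-R BT.2100
M1 = 0.1593017578125
M2 = 78.84375
C1 = 0.8359375
C2 = 18.8515625
C3 = 18.6875

def pq_eotf(signal):
    """PQ EOTF: non-linear signal E' in [0, 1] -> display luminance in cd/m^2.

    Unlike HLG, the output is an absolute luminance: a given code value
    always requests the same light level from the display.
    """
    e = np.asarray(signal, dtype=float) ** (1.0 / M2)
    y = (np.maximum(e - C1, 0.0) / (C2 - C3 * e)) ** (1.0 / M1)
    return 10000.0 * y  # PQ is defined up to 10,000 cd/m^2

# Full-scale signal targets 10,000 cd/m^2; a signal of ~0.75 is roughly 1,000
print(pq_eotf([0.75, 1.0]))
```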
Color Subsampling Opportunities
PQ even facilitates a different method of color subsampling that helps maintain image quality during post-production. Although ICtCp can be used with HLG, the need to make HLG compatible with existing live infrastructures greatly restricts its use. Cinematography doesn’t suffer this restriction: after the day’s shoot, the rushes are taken to the post house for grading and later editing, generally using software-based systems that are not real-time critical.
ICtCp is similar to YCbCr in that it is a color-difference system: “I” is the intensity (luma) component, Ct is the blue-yellow (tritanopia) color component, and Cp is the red-green (protanopia) color component. It differs from YCbCr in that it improves color subsampling and hue linearity. The key to ICtCp is that it provides color uniformity by taking advantage of aspects of the human visual system, optimizing for lines of constant hue, uniformity of just-noticeable differences, and constant luminance. YCbCr introduces distortions into saturated colors when subsampled, due to its non-constant luminance; this does not occur in ICtCp thanks to its nearly constant-luminance representation.
Mimicking the human eye, ICtCp involves three distinct operations. The incoming light is captured by three types of cones with peak sensitivities at the long (L), medium (M), and short (S) wavelengths. This captured linear light is converted into a non-linear signal to simulate the adaptive cone response of the HVS. Finally, these non-linear signals are processed by a color-differencing system in three pathways to simulate the light-dark (intensity), yellow-blue (tritan isoluminant), and red-green (protan isoluminant) responses.
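Those three operations can be written down directly. The sketch below follows the BT.2100 matrices and the PQ non-linearity for the PQ flavor of ICtCp; feeding it a neutral grey shows the two color-difference components collapsing to zero, the constant-luminance behavior described above.

```python
import numpy as np

# BT.2100 matrices for the PQ flavour of ICtCp
RGB_TO_LMS = np.array([[1688, 2146,  262],
                       [ 683, 2951,  462],
                       [  99,  309, 3688]]) / 4096.0
LMS_TO_ICTCP = np.array([[ 2048,   2048,     0],
                         [ 6610, -13613,  7003],
                         [17933, -17390,  -543]]) / 4096.0

M1, M2 = 0.1593017578125, 78.84375
C1, C2, C3 = 0.8359375, 18.8515625, 18.6875

def pq_encode(y):
    """PQ inverse EOTF: normalized linear light [0, 1] -> signal [0, 1]."""
    yp = np.asarray(y, dtype=float) ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2

def rgb_to_ictcp(rgb_linear):
    """Linear BT.2020 RGB (normalized to [0, 1]) -> ICtCp, per BT.2100.

    1. Mix RGB into LMS cone-like responses.
    2. Apply the PQ non-linearity to mimic the adaptive cone response.
    3. Color-difference matrix -> intensity I, tritan axis Ct, protan axis Cp.
    """
    lms = RGB_TO_LMS @ np.asarray(rgb_linear, dtype=float)
    return LMS_TO_ICTCP @ pq_encode(lms)

# A neutral grey: Ct and Cp come out exactly 0, leaving only intensity I
print(rgb_to_ictcp([0.18, 0.18, 0.18]))
```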
The major benefit of ICtCp is found in post-production, where multiple stages of image processing are performed. Conversions between RGB and YCbCr demonstrate significant artifacts, and these are greatly reduced with ICtCp conversions.
As cameras and monitors improve, the non-linearities seen in YCbCr quickly become visible, but processing in ICtCp mitigates this.
No Longer Shackled to YCbCr
The ICtCp method can be used quite happily by cinematographers if they decide to do so, as they are not shackled by the same real-time constraints as broadcasters.
Cinematographers also need new methods of monitoring. For the first time in nearly fifty years we have made a significant change to the color space: Rec.2020 has much greater vibrancy than Rec.709, especially in the greens and reds. Consequently, anybody working in television must now think more carefully about color space, especially about out-of-gamut errors.
On-screen displays showing potential color-gamut errors are ideal, as they are much more descriptive and easier to use, especially in the field. False color mode highlights areas of the picture where the colors exceed the target color gamut.
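As a rough illustration of the idea, the sketch below flags Rec.2020 pixels that cannot be represented in Rec.709. The conversion matrix values are rounded approximations of the commonly published coefficients, and a real monitor would paint the flagged pixels in a false color rather than return a mask.

```python
import numpy as np

# Approximate linear BT.2020 -> BT.709 matrix (rounded coefficients)
BT2020_TO_BT709 = np.array([[ 1.6605, -0.5876, -0.0728],
                            [-0.1246,  1.1329, -0.0083],
                            [-0.0182, -0.1006,  1.1187]])

def gamut_false_color(img_2020, tol=1e-4):
    """Return a boolean mask of pixels outside the Rec.709 gamut.

    img_2020: HxWx3 array of linear Rec.2020 RGB in [0, 1]. Pixels whose
    converted Rec.709 values fall outside [0, 1] cannot be shown in 709
    and would be highlighted (e.g. painted magenta) on the monitor.
    """
    img_709 = img_2020 @ BT2020_TO_BT709.T
    return np.any((img_709 < -tol) | (img_709 > 1.0 + tol), axis=-1)

# A saturated Rec.2020 green lands outside Rec.709; a neutral grey does not
frame = np.array([[[0.0, 0.8, 0.0], [0.5, 0.5, 0.5]]])
print(gamut_false_color(frame))  # [[ True False]]
```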
Linear Displays
The key luminance references used in HDR are 90% reflectance white and 18% grey. Displays that can reverse the OETF of the camera, that is, the log transfer function used, allow the cinematographer to continuously view the linear image from the camera without having to be concerned with the camera’s transfer-function characteristics.
Look-up tables (LUTs) are a convenient method of converting from the log image to a linear display and further determine how the data is presented to the cinematographer. Consequently, the luminance can be displayed in either nits or f-stops.
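A minimal sketch of that workflow, using an invented log curve rather than any manufacturer’s: a 1D LUT is tabulated from the curve, inverted by interpolation, and the recovered linear value is then reported both in nits (assuming a hypothetical 1,000 cd/m² monitor) and in stops relative to 18% grey.

```python
import numpy as np

def build_inverse_lut(oetf, size=1024):
    """Tabulate a camera log curve and invert it into a 1D LUT.

    `oetf` maps linear scene light to a normalized [0, 1] signal; the LUT
    holds one linear value per signal code, found by interpolation, much
    as a hardware LUT box would.
    """
    linear = np.linspace(0.0, 8.0, size * 16)   # oversample the scene range
    signal = oetf(linear)                        # monotonically increasing
    codes = np.linspace(0.0, 1.0, size)
    return np.interp(codes, signal, linear)

# Invented log curve standing in for a manufacturer's transfer function
toy_oetf = lambda x: (np.log2(x + 0.01) - np.log2(0.01)) / \
                     (np.log2(8.01) - np.log2(0.01))

lut = build_inverse_lut(toy_oetf)
signal = toy_oetf(np.array([0.18]))              # camera signal for 18% grey
linear = np.interp(signal, np.linspace(0.0, 1.0, len(lut)), lut)

peak_nits = 1000.0                               # assumed monitor peak luminance
print(f"{linear[0] * peak_nits:.0f} nits")       # luminance readout
print(f"{np.log2(linear[0] / 0.18):+.2f} stops") # exposure relative to 18% grey
```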
The advent of HDR and WCG is not only providing broadcasters with new and improved images to enhance the viewer’s immersive experience, but also provides new opportunities for cinematographers to deliver higher-quality images than would traditionally have been possible in live television.
Cinematographers can use features of HDR and WCG that are not applicable to broadcasters, as there is no great need to maintain compatibility with such systems, opening up a whole new range of opportunities.