Creative Technology: Getting It Right In Camera

With the advent of log recording, higher resolutions, and large-format cameras, DOPs are increasingly entertaining the notion that just about anything can be ‘fixed’ or finished in post.

At first glance, there appears to be great truth in this, as the downstream power of cinematographers continues to grow exponentially. Today, with relative ease, we can change day to night, tamp down high-contrast scenes captured in bright sun, and remove intruding or distracting objects like utility wires with a few clicks of the mouse. So yes, given today’s RAW and lightly compressed recording formats like ProRes 4444, we have more ability than ever to crop, adjust, and remediate images post-camera. The question is: Does it still make sense to get it right in camera?

Gone are the days of hauling a hoard of 100 image-control filters to set. Within certain limits, most straightforward color and white balancing tasks can be achieved post-camera.

For quasi-routine tasks like image stabilization and wire removal, post-camera remediation is certainly reasonable, and most DOPs have adopted the approach to tackle such chores. Today, most folks agree that subtle adjustments to white balance, color, contrast, and noise level can safely be addressed in the post-production environment. More significant shifts in one or more color channels, however, are another matter entirely, as such dramatic changes can increase noise and degrade fine detail and color fidelity.

For DOPs, the issue becomes really knowing our limits downstream: how much post-camera maneuvering is possible or practical.

Of course, baking in the show LUT on set would obviate the need for most post-camera shenanigans. While some cameras and workflows allow for capturing multiple streams with and without an applied show LUT, the fact is that some tweaking downstream is pretty much a foregone conclusion.

The amount of camera stabilization, for example, that is achievable in post depends on many factors, including frame size, resolution, and codec. DOPs must assess the degree of post-amelioration desired and figure in their particular camera setup and recording parameters.

The flicker from out-of-sync TV sets, computer monitors, and discontinuous lighting such as neon signs is another area of concern that requires attention in camera. DOPs should strive to reduce or eliminate the flicker through use of the variable, clear scan, or synchroscan shutter, like that found in later model Panasonic Lumix GH cameras. Shooting 24p in 50Hz countries? Set the camera shutter (in degrees) to 172.8º. Shooting 30p in 50Hz countries? Set the camera shutter to 108º, 216º, or 324º. The use of the variable or synchroscan shutter is the key to avoiding flicker from a field frequency mismatch.
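The arithmetic behind those numbers is straightforward: a flicker-free exposure must span a whole number of half-cycles of the mains supply, so the usable shutter angles are multiples of 180º × frame rate ÷ mains frequency. A minimal sketch of that calculation (the function name is ours, not a camera menu item):

```python
def flicker_free_angles(frame_rate: float, mains_hz: float) -> list[float]:
    """Shutter angles up to 360 degrees whose exposure spans whole half-cycles
    of the mains supply, i.e. t = n / (2 * mains_hz) and
    angle = 360 * frame_rate * t = 180 * n * frame_rate / mains_hz."""
    angles = []
    n = 1
    while 180.0 * n * frame_rate / mains_hz <= 360.0:
        angles.append(round(180.0 * n * frame_rate / mains_hz, 1))
        n += 1
    return angles

print(flicker_free_angles(24, 50))  # [86.4, 172.8, 259.2, 345.6]
print(flicker_free_angles(30, 50))  # [108.0, 216.0, 324.0]
```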

The Panasonic Lumix GH6 features a synchroscan shutter that eliminates flicker from computer monitors and other asynchronous light sources. Addressing such issues in-camera obviates the need for less-than-ideal post-camera solutions. Some earlier model Lumix GH series cameras also had this feature.

Hoping to remove flicker in post is a tactic fraught with peril. While software solutions exist and may work on occasion, they more often work poorly or not at all. For DOPs, the effectiveness of a post solution is a function of the flicker cadence; a regular and predictable pattern from frame to frame is much easier to ameliorate in software. Severe flicker, like the kind typically encountered in urban nightscapes illuminated by neon or mercury vapor, can produce widely varying exposure from frame to frame. It is the varying, underexposed frames, lacking detail in deep, impenetrable shadows, that cannot be fixed in post.

There is also the matter of performing significant cropping after the initial image capture, a practice that has gained popularity owing to today’s very large frame sizes. If applied excessively, this narrowing of field of view without a corresponding reduction in depth of field produces a highly unnatural, potentially audience-alienating effect. The cropping of scenes in post is not the same as using a longer lens on the camera!
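To see why, compare a heavy post crop with a genuinely longer lens. In the thin-lens sketch below (the helper names and sample values are ours, purely illustrative), a 2x crop of a 25 mm shot matches the field of view of a 50 mm lens, but at the same stop the 50 mm lens delivers roughly half the depth of field:

```python
import math

def horizontal_fov_deg(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view for a given focal length and sensor width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def approx_dof_mm(focal_mm: float, f_number: float, subject_mm: float, coc_mm: float) -> float:
    """Total depth of field, thin-lens approximation (subject well inside hyperfocal)."""
    return 2 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

sensor_w = 36.0   # full-frame sensor width, mm
coc = 0.03        # circle of confusion for full-frame viewing, mm
subject = 3000.0  # subject distance, mm
crop = 2.0        # 2x crop applied in post

# Field of view: a 2x crop of a 25 mm shot matches an uncropped 50 mm lens.
print(horizontal_fov_deg(25, sensor_w / crop))  # ~39.6 degrees
print(horizontal_fov_deg(50, sensor_w))         # ~39.6 degrees

# Depth of field at f/2.8: the crop keeps the 25 mm optics (its CoC tightens
# with the extra enlargement), while the 50 mm lens is genuinely shallower.
print(approx_dof_mm(25, 2.8, subject, coc / crop))  # ~1210 mm in focus
print(approx_dof_mm(50, 2.8, subject, coc))         # ~605 mm in focus
```

The exact figures depend on the assumptions above, but the asymmetry is the point: the crop narrows the view without buying the shallower focus a longer lens would deliver.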

The objectionable flicker in time-lapse scenes, or in scenes containing discontinuous light sources, is almost always best addressed during original image capture. The flicker’s irregular cadence produces significant underexposure from frame to frame that precludes a quick and easy digital fix downstream.

Getting it ‘right’ in camera requires minimizing the noise that can deleteriously impact video quality downstream. Understandably, we are reluctant to apply NR in-camera, since this can strip fine detail along with the noise.

If a reduced frame size and resolution on output is a viable option, DOPs can adopt a strategy of oversampling during image capture. Shooting 4K for HD release? The downscaled output averages four captured pixels into each HD pixel, diluting any noisy pixels and producing a much cleaner picture in the HD stream.
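A rough numerical sketch of the effect (synthetic data, not a real camera pipeline): box-averaging each 2x2 block of a UHD frame into one HD pixel cuts uncorrelated sensor noise roughly in half rather than removing it outright.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((2160, 3840), 0.5)                  # flat mid-grey "UHD" frame
noisy = clean + rng.normal(0.0, 0.02, clean.shape)  # add uncorrelated sensor-like noise

# Average each 2x2 block into a single HD pixel (3840x2160 -> 1920x1080).
hd = noisy.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(round(noisy.std(), 3))  # ~0.02  noise level in the 4K frame
print(round(hd.std(), 3))     # ~0.01  noise after downscaling (std falls by ~sqrt(4))
```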

In most productions, reducing or eliminating noise is a laudable goal that can be facilitated through use of a physical, image-enhancing ‘diffusion’ filter. Filters like Schneider’s Digicon and Tiffen’s many iterations of GlimmerGlass can serve as excellent grain-tightening, noise-reduction utilities.

To DOPs, getting it ‘right’ in camera is critical to avoid tough-to-resolve problems downstream. Camera setup (black level, LUT, frame size, etc.) and physical ‘grain-tightening’ diffusion and polarizer filters are prime considerations. The flattering look of a Schneider Digicon, containing many irregularly interspersed elements, is difficult or impossible to achieve via a generalized software solution.

In broad strokes, a scene’s contrast, look, and feel can also be addressed digitally by adjusting the camera LUT or black level. Ideally, such approaches should be used in tandem with a measured downstream strategy, as most DOPs will invariably tweak, however slightly, the contrast or milkiness of scenes during grading and color correction.
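As a toy illustration only (not any vendor’s LUT math), lifting the black level and applying a gentle tone curve to normalized pixel values shows the kind of contrast and ‘milkiness’ shifts being described:

```python
import numpy as np

frame = np.linspace(0.0, 1.0, 11)      # stand-in pixel values, shadows to highlights

lifted = 0.05 + frame * (1.0 - 0.05)   # raise the black level: shadows go "milky"
graded = np.clip(lifted, 0, 1) ** 0.9  # gentle gamma lift as a stand-in show LUT

print(np.round(frame, 2))
print(np.round(graded, 2))             # blacks sit higher, midtones brighten slightly
```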

It is critical to note that detail not captured in the original image is lost forever and cannot be restored later. Accordingly, many DOPs get it ‘right’ in camera by utilizing a polarizing filter for nearly every setup. The improved rendering of sky and clouds, reduced glare off glass surfaces, and enhanced texture in actors’ skin are usually desirable effects that cannot be satisfactorily reproduced or approximated in post. Suffice it to say, the polarizer is the only physical filter that can increase resolution and the level of detail in the captured image.

In years past, for most DOPs, ‘fixing’ an image or finishing it in post entailed a complex process that was impractical and pricey. With the advent of low-cost digital tools such as Adobe’s AI-powered neural filters and vector stabilizers, post-camera processes have become an integral part of our modus operandi. Savvy DOPs would do well to understand that getting it ‘right’ in camera is still an eminently worthwhile and critical goal as we grapple with the promise and limitations of our evolving craft.
