Narrow-Gauge Film Use By Documentarians - Part 2

Content producers often prefer to shoot or record original material. Documentarians, on the other hand, typically must rely on material recorded by others, much of it stored on film stock; Regular 8mm and Super 8mm are common formats. Working with this older technology is a challenge that requires special techniques.

To this point the questions asked, and hopefully answered, have not dealt with the technical choices that need to be made when specifying the type of transfer you want the lab to perform. The fundamental questions are: how much information can be extracted from a film frame, and what type of transfer will accomplish this goal?

The latter question focuses on the way Regular 8 and Super 8 frame rates can be matched to video frame rates. The former involves the image resolution and chroma sampling to which the film will be transferred. Part 2 of this article deals with these technical challenges. Part 1 of this series can be found here.

Matching Film to Video Frame Rates

In all cases except 24fps to 24p, a frame-for-frame film-to-video transfer will result in a motion speed-up. Depending on the capabilities of your editing software, there are three potential solutions available to prevent this speed-up. Each method has its strengths and weaknesses.
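
To quantify the speed-up: a frame-for-frame transfer multiplies apparent motion by the ratio of the two rates:

\[
\text{speed-up} = \frac{f_{\text{video}}}{f_{\text{film}}}\,, \qquad \frac{29.97}{18} \approx 1.67\,, \qquad \frac{29.97}{16} \approx 1.87
\]

That is, 18fps film mapped one-to-one onto 29.97p plays about 67% too fast, and 16fps film about 87% too fast.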

Frame Sampling:  Software outputs a frame at the video frame rate. At each point where a frame is output, the software grabs the film frame nearest in time. Figure 1 schematically illustrates how 16 film frames (upper row) might be converted to 30 video frames (lower row).

In this drawing the gray frames represent video frames that might contain a copy of the previous film frame (red = late) or a copy of the next film frame (green = early). The choice of film frame, late or early, depends on the timing of the beginning and end of each film frame in relation to the beginning and end of each video frame.

Figure 1: Conversion from 16fps to 29.97p using Frame Sampling.

Each of the gray frames, by creating a pair of identical video frames, creates judder. Although the converted video cadence may be rough, each output frame will be a clear film frame. Figure 2 illustrates how 18 film frames (upper row) might be converted to 30 video frames (lower row). Again, each gray frame creates a pair of identical video frames, and thus judder.

Figure 2: Conversion from 18fps to 29.97p using Frame Sampling.

Figure 3 shows a possible worst-case distribution of judder frames after conversion. With this many judder frames, the cadence would be very rough.

Figure 3: Potential judder frames created during conversion from 18fps to 29.97p.
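
For readers who want to see the mechanics, below is a minimal sketch of frame sampling in Python. It assumes the scanned film frames are already loaded into a list, which is a simplification; a real transfer tool or NLE works on image sequences or video streams.

```python
# A minimal sketch of frame sampling (nearest-frame selection).
# Assumes film_frames is a list of already-scanned frames; this is a
# simplification of how a real transfer tool or NLE operates.
def sample_frames(film_frames, film_fps, video_fps):
    """For each output video frame, grab the film frame nearest in time."""
    duration = len(film_frames) / film_fps          # clip length in seconds
    num_video_frames = round(duration * video_fps)  # frames needed at video rate
    output = []
    for i in range(num_video_frames):
        t = i / video_fps                           # timestamp of video frame i
        nearest = round(t * film_fps)               # film frame nearest that time
        nearest = min(nearest, len(film_frames) - 1)
        output.append(film_frames[nearest])
    return output
```

Wherever two consecutive video frames map to the same film frame, that frame is duplicated; those duplicates are the gray judder frames in Figures 1 through 3.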

Frame Blend:  This is a variation of the prior solution: when a video frame must be output at a point that lies between two film frames, the output is a blend (yellow) of those two frames. While the cadence will be smooth, many output frames, depending on the amount of motion between film frames, will be blurred. The greater the difference between frames, the greater the blur. Figures 4 and 5 show 16fps and 18fps converted to 30fps using frame blending.

Figure 4: Conversion from 16fps to 29.97p using Frame Blend.

Figure 5: Conversion from 18fps to 29.97p using Frame Blend.
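
A comparable sketch of frame blending, again assuming the frames are already in memory as numpy arrays; the blend weight is simply each output frame's fractional position between the two nearest film frames.

```python
import numpy as np

# A minimal sketch of frame blending. Assumes film_frames is a list of
# equally sized numpy image arrays; a simplification for illustration.
def blend_frames(film_frames, film_fps, video_fps):
    """Blend the two nearest film frames, weighted by temporal position."""
    frames = [f.astype(np.float32) for f in film_frames]
    num_video_frames = round(len(frames) / film_fps * video_fps)
    output = []
    for i in range(num_video_frames):
        pos = i * film_fps / video_fps       # fractional film-frame position
        lo = min(int(pos), len(frames) - 1)  # earlier film frame
        hi = min(lo + 1, len(frames) - 1)    # later film frame
        w = pos - int(pos)                   # weight of the later frame
        mixed = (1.0 - w) * frames[lo] + w * frames[hi]
        output.append(mixed.astype(np.uint8))
    return output
```

When the weight is near 0 or 1 the output is essentially a clean film frame; mid-range weights produce the blurred blends shown in yellow.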

Optical Flow:  This technique feeds a series of frames through motion analysis that generates a set of motion vectors. Based upon these vectors, software creates output frames on a different time scale, where each output frame (yellow) is composed of pixels moved to their predicted locations.

While this very time-consuming technique can produce excellent results, optical flow can sometimes generate artifacts. Nevertheless, it is especially useful for the most difficult conversions, where the frame rates are quite close. Ideally your NLE should use your computer’s GPU for this process. Even using a GPU, real-time playback will require rendering. Figure 6 shows 24fps converted to 30fps using optical flow.

Figure 6: Conversion from 24fps to 29.97p using Optical Flow.
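
To make the idea concrete, here is a rough sketch using OpenCV’s Farneback estimator; the library choice and the simple backward-warp approximation are mine for illustration, and real transfer tools use far more sophisticated (often proprietary) motion estimation. Shortcuts like this single-direction warp are exactly where the artifacts mentioned above tend to come from.

```python
import cv2
import numpy as np

# A rough sketch of optical-flow retiming using OpenCV's Farneback
# estimator. Real NLEs and labs use far more sophisticated methods.
def interpolate_frame(frame_a, frame_b, t):
    """Synthesize a frame at fractional time t (0..1) between two frames."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel motion vectors from frame A to frame B.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp approximation: sample frame A a fraction t back along
    # the motion vectors, placing pixels near their predicted locations.
    map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```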

Image Resolution

The fundamental quality question is how much information can be extracted from a film frame. This question can be answered by tests or by a little mental effort. We know 35mm negatives are transferred to 2K digital intermediates. Four strips of 8mm film fit within the width of a 35mm frame, so each strip would carry slightly less than SD resolution. Even with a 4K digital intermediate, each 8mm strip will have less than 1280x720 resolution.
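
The arithmetic behind that estimate, assuming a 2K scan spans roughly 2048 pixels across the width of a 35mm frame:

\[
\frac{2048}{4} = 512 \text{ px per strip} < 720 \text{ px (SD width)}\,, \qquad \frac{4096}{4} = 1024 \text{ px per strip} < 1280 \text{ px}
\]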

So why do labs offer 2K/FHD and 4K/UHD transfer resolutions? As I learned when, five decades after shooting 8mm film, I decided to create a video: it was now an HD world. I found the quality loss from upscaling SD to HD, even using an entirely digital path, to be too great. Figure 7 shows my solution: I first upscaled the SD video to 960x720 and then centered the result within a 1920x1080 black matte.

Figure 7: Export of FHD (1080p23.98) from an SD film transfer.
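
A minimal sketch of that upscale-and-matte step, assuming a standard-definition source frame and using OpenCV and numpy (my tool choices for illustration; any NLE can do the same with its scale and position controls):

```python
import cv2
import numpy as np

# A minimal sketch of the upscale-and-matte step: SD up to 960x720,
# centered on a 1920x1080 black matte. Assumes a 3-channel (BGR) frame.
def matte_to_fhd(sd_frame):
    upscaled = cv2.resize(sd_frame, (960, 720),
                          interpolation=cv2.INTER_LANCZOS4)
    canvas = np.zeros((1080, 1920, 3), dtype=upscaled.dtype)  # black matte
    x = (1920 - 960) // 2    # 480 pixels in from the left edge
    y = (1080 - 720) // 2    # 180 pixels down from the top
    canvas[y:y + 720, x:x + 960] = upscaled
    return canvas
```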

By transferring film to 2K/FHD, there may well be no increase in image detail, but 2K/FHD will not need to be upscaled during your edit. A 4K/UHD transfer can be mixed with 4K/UHD video. Or, it can be employed within a 2K/FHD production as a source for pans, scans, zooms, or reframes.
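
The reframe case is easy to see in code: a 1920x1080 window cropped from a 3840x2160 frame is already a native-resolution FHD image, so a 2X punch-in needs no upscaling. A sketch, with illustrative offsets:

```python
# A 1920x1080 crop from a 3840x2160 numpy frame: a 2X punch-in with no
# upscaling. The window position (x, y) is illustrative; any in-bounds
# offset works, and animating it produces a pan or scan.
def crop_fhd_window(uhd_frame, x=960, y=540):
    return uhd_frame[y:y + 1080, x:x + 1920]
```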

To test this concept, I transferred one cartridge of 50 ISO Super 8 film to both FHD and UHD. These transfers were done at no cost by Pro8mm in Los Angeles. (A big thank you to Rhonda Vigeant.) I shot with my Canon 814 (Figure 8) at the same locations in Paris, 50 years to the month after I had shot there with my Bolex H8.

Figure 8: How cameras should look. Almost no plastic. No menus!

Because I would be color grading the transfers, I requested a One-Light scan rather than a more expensive supervised Scene-to-Scene scan. See Figure 9.

Figure 9: Scene-to-Scene versus One-Light transfer.

The FHD and UHD transfers of 12-bit 4:4:4 RGB data from Pro8mm’s film scanner were made to ProRes 4444. ProRes 4444 can carry 12-bit data, so the only loss of quality would come from the compression of the RGB components. A transfer to ProRes 4444 can certainly be considered a “high quality” transfer.

Two other FHD transfers were made of a second Super 8 cartridge: one to ProRes 422 HQ (10-bit) and the other to a DPX image sequence. A DPX transfer is composed of a series of uncompressed pictures, one for each scanned film frame. See Figure 10.

Figure 10: One picture from a DPX file.

These transfers were made to check the transfer quality of Super 8 film to the 10-bit 4:2:2 “normal quality” of ProRes 422 HQ and to the “super quality” DPX format.

Part 3 of this article series will examine what differences, if any, can be seen among the different-quality transfers. The primary comparison will be between the ProRes 4444 transfers. For an HD edit, will a quality difference be found between the FHD transfer and the UHD transfer? And how will a 2X digital zoom from UHD into HD look?

Part 3 will also cover what you must tell your lab about how you want Super 8 and Regular 8mm “framed” during transfer: Over-scanned 16:9, 4:3 within 16:9, or 16:9. This decision will determine how you must treat your transfers during editing.

Steve Mullen.
