Why RAW? What’s the Deal?
Demosaic in the camera or in post? That's the RAW question.
The announcement by Blackmagic of their new RAW camera file format reminds us that there is a choice between recording RAW and regular video with digital cinematography cameras. What is RAW, what are the pros and cons, and why should cinematographers capture RAW files rather than conventional video? At a first pass, a RAW image file consists of the data from the image sensor with some basic processing, but it cannot be viewed as a regular red, green and blue (RGB) video file.
RAW files are associated with single-sensor cameras, where the use of a color filter array (CFA) over the photosites, often referred to as a Bayer filter, means that the output of the sensor must be demosaiced into conventional RGB color channels. The RGBG Bayer pattern samples color at roughly the density of 4:2:2; demosaicing interpolates the mosaic up to a full 4:4:4 RGB signal.
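As a rough illustration of why demosaicing is needed, the sketch below runs a naive bilinear demosaic over a tiny RGGB mosaic. Real cameras use far more sophisticated, proprietary algorithms; this only shows that each photosite captures a single color and the two missing channels must be interpolated from neighbors.

```python
# Illustrative bilinear demosaic of a tiny RGGB Bayer mosaic.
# This is a toy sketch, not any camera's actual algorithm.

def bayer_channel(x, y):
    """Which color an RGGB-patterned photosite at (x, y) samples."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic_pixel(mosaic, x, y):
    """Average same-colored neighbors to estimate all three channels."""
    h, w = len(mosaic), len(mosaic[0])
    sums = {"R": [], "G": [], "B": []}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                sums[bayer_channel(nx, ny)].append(mosaic[ny][nx])
    return tuple(sum(v) / len(v) for v in (sums["R"], sums["G"], sums["B"]))

# A flat mid-gray scene: every photosite reads 0.5 regardless of its filter.
mosaic = [[0.5] * 4 for _ in range(4)]
print(demosaic_pixel(mosaic, 1, 1))  # -> (0.5, 0.5, 0.5)
```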
In regular video cameras, the color channels are sub-sampled to 4:2:2 and output as 10 or 8-bit SDI. The signal can also be compressed as 4:2:2 or 4:2:0, typically with H.264/AVC or with an editing codec like ProRes or DNxHD (both of which can also compress 4:4:4).
All Baked In
However, this process bakes in many of the parameters like gain and color setup, and a good deal of the raw sensor data is truncated and discarded. For live capture to an SDI output, shading adjustments can be made on the fly and paint controls allow for color correction to match the lighting conditions on the day.
In post-production it would be ideal if the colorist had all the data from the sensor to work with. This requires a greater bit depth than the 10 bits of SDI; moreover, the Rec. 709 gamma, with only around seven stops of dynamic range, does not allow the capabilities of modern sensors to be fully exploited.
A camera like the Alexa outputs a 16-bit signal with a dynamic range of over 14 stops. The question is how to deliver that 16-bit signal to grading in the most efficient way.
The Constraints
There are two constraints on the conveyance of data from the sensor to the color grading application. One is the data rate, the other the processing power required.
Demosaicing, color space conversion, noise reduction, sharpening, lens correction and all the other processes that can be applied to the raw sensor data consume considerable processing resources. In a camera, the vendor designs application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs) that can process the data in real time with minimal power.
In post-production this processing is done in software, and can use considerable CPU resources, especially for multi-layered timelines. One solution from Red is the Red Rocket, which offloads the processing to a dedicated accelerator card.
More and more, editors want to work near the filming location, usually with laptops. These just don’t have the power of dedicated workstations. The workaround is to use fractional resolutions: 2:1, 4:1, etc.
This suggests that demosaicing is best done in the camera, but that triples the data rate. That calls for high-performance storage with high write speeds, and lots of it, all of which comes with a price tag to match.
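The tripling follows from simple arithmetic: a RAW file stores one sample per photosite, while a demosaiced 4:4:4 RGB frame stores three samples per pixel. A back-of-envelope calculation, using an illustrative 4K frame size, 24 fps and 12-bit samples (assumed figures, ignoring container overhead and audio):

```python
# Back-of-envelope data rates: one sample per photosite (RAW) versus
# three samples per pixel (demosaiced 4:4:4 RGB). The 4K / 24 fps /
# 12-bit figures are illustrative assumptions, not any specific camera.

width, height, fps, bits = 4096, 2160, 24, 12
photosites = width * height

raw_bps = photosites * bits * fps       # RAW: one sample per photosite
rgb_bps = photosites * 3 * bits * fps   # RGB: three samples per pixel

print(f"RAW:       {raw_bps / 8 / 1e6:,.0f} MB/s")
print(f"RGB 4:4:4: {rgb_bps / 8 / 1e6:,.0f} MB/s")  # three times the RAW rate
```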
Reducing the Data Rate
There are simple ways to reduce the RGB data rate. The color channels can be sub-sampled to 4:2:2 or 4:2:0, but this throws away color information and can limit what is possible in the grade without running into artifacts.
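The savings from sub-sampling are easy to quantify by counting samples in each 2x2 block of pixels: luma stays at full resolution, while the two chroma channels are carried at reduced resolution.

```python
# Samples per 2x2 pixel block under common chroma subsampling schemes.
# 4:4:4 keeps full color; 4:2:2 halves the horizontal chroma resolution;
# 4:2:0 halves it both horizontally and vertically.

def samples_per_2x2(scheme):
    luma = 4  # Y is always carried at full resolution
    chroma = {"4:4:4": 4, "4:2:2": 2, "4:2:0": 1}[scheme]
    return luma + 2 * chroma  # two chroma channels (Cb and Cr)

full = samples_per_2x2("4:4:4")
for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, f"{samples_per_2x2(scheme) / full:.0%} of full-color data")
```

So 4:2:2 carries two-thirds of the full-color data and 4:2:0 carries half, which is where the bandwidth saving, and the loss of grading headroom, comes from.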
Hence the conundrum, demosaic in the camera to ease post-production processor requirements, or demosaic in post to ease the storage and bandwidth requirements.
What Blackmagic are proposing is a halfway house: do some processing in the camera with efficient custom chips, do the remainder in the edit workstation or laptop, and use an intermediate file format closer in size to RAW than RGB.
What is RAW?
RAW is often understood to be the raw data from the sensor before demosaicing into full plane red, green and blue images. However, a RAW file is not simply the raw data from the sensor. It is usually log encoded and may also be compressed. It has a certain amount of processing applied, and that varies by camera model.
Linear and log
The signal from the sensor has a linear response to light. The human visual system perceives brightness in a non-linear fashion. Psychovisual testing shows perceived brightness increases approximately as the log of the luminous flux.
Logarithmic encoding and gamma encoding are both methods of distributing values perceptually, allowing them to be stored at reduced bit depths.
If a gamma law is applied to the linear-light signal in the camera, the display applies the inverse gamma, reproducing the original linear-light intensity values.
It is serendipity that the gamma of the original displays, cathode ray tubes, approximated the eye's perception of brightness. The reason log or gamma curves are applied to the camera signal is that the resulting code values are distributed perceptually evenly, allowing them to be coded at reduced bit depths.
How Many Bits?
The aim is to deliver to the grading application as much useful data as possible. High-end cameras like the Alexa or Venice perform analog-to-digital conversion at 16-bit resolution. How many bits are useful depends upon the dynamic range of the sensor. Lower cost cameras may be noisy, such that it is not efficient to deliver more than 14 bits to the grading application. But this is linear coding. Log coding can drop a couple of bits without perceptual effects. Hence we reach the 12-bit log that is very common, or even 10-bit log if only mild color grading is to be applied.
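Why log coding can afford to drop bits comes down to relative quantization error. A sketch, using a generic log2-based curve over 14 stops (an assumption for illustration, not any vendor's actual curve), compares the step size of 16-bit linear and 12-bit log coding deep in the shadows:

```python
# Relative quantization error in the shadows: 16-bit linear versus
# 12-bit log. The log curve is a generic log2 mapping over 14 stops,
# an illustrative assumption rather than a real camera curve.

STOPS = 14
LIN_LEVELS = 2 ** 16
LOG_LEVELS = 2 ** 12

def lin_step(signal):
    # One linear code step as a fraction of the signal level:
    # the relative error balloons as the signal gets darker.
    return (1 / LIN_LEVELS) / signal

def log_step(signal):
    # Log coding spends STOPS / LOG_LEVELS stops per code value,
    # giving the same *relative* error at every signal level.
    return 2 ** (STOPS / LOG_LEVELS) - 1

shadow = 2 ** -12  # a deep shadow, 12 stops below clipping
print(f"16-bit linear relative step: {lin_step(shadow):.4f}")
print(f"12-bit log relative step:    {log_step(shadow):.4f}")
```

In this toy model the 12-bit log code has a far smaller relative error in the deep shadows than 16-bit linear, which is why log encoding lets a couple of bits be dropped without visible banding.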
Uncompressed/compressed
One of the most-used techniques to lower the data rate is compression. The RAW data can be compressed in the camera and decompressed before demosaicing, as Red does. Blackmagic has also opted to use compression, with a choice of constant bit rate or constant-quality variable bit rate.
Processing options
Apart from the demosaic process, raw processing involves several other operations:
- Gain/ISO, analog and/or digital
- White balance and tint (green-magenta)
- Lens correction including chromatic aberration
- Colorimetric interpretation (color space matrix conversion)
- Gamma or log transform to a lower bit depth
- Low-pass filter for Moiré suppression
- Noise reduction
- Anti-aliasing
- Sharpening
- Spatial transform from sensor size to standard picture dimensions like 4096 x 2160
The nature of these processes is very much the ‘secret sauce’ of the camera manufacturer and will be specific to each sensor design.
Colorimetric interpretation
The values of the R, G and B channels in the raw data are arbitrary, depending on factors like the spectral absorption characteristics of the color filter array. To create the final image format, the data must be converted to a standard color gamut like Rec. 709 or Rec. 2020.
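In practice this conversion is a 3x3 matrix multiply per pixel. The matrix below is made up for illustration; real matrices are derived from the CFA's spectral response and are specific to each sensor.

```python
# Sketch of colorimetric interpretation: a 3x3 matrix maps the camera's
# arbitrary sensor RGB into a standard gamut. The coefficients below are
# invented for illustration; real matrices are sensor-specific.

CAM_TO_REC709 = [
    [ 1.60, -0.45, -0.15],
    [-0.30,  1.50, -0.20],
    [ 0.05, -0.55,  1.50],
]

def to_rec709(cam_rgb):
    """Apply the 3x3 conversion matrix to one camera RGB triple."""
    return tuple(
        sum(m * c for m, c in zip(row, cam_rgb)) for row in CAM_TO_REC709
    )

# Each row of the matrix sums to 1.0, so a neutral gray in camera RGB
# stays neutral after conversion.
print(to_rec709((0.5, 0.5, 0.5)))
```

The white-balance constraint shown in the comment, rows summing to 1.0 so neutrals stay neutral, is a common property of such matrices.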
Noise reduction, anti-aliasing and sharpening
The sensor and the associated electronics add a level of noise to the image data. The regular spatial sampling of the sensor array, plus the sub-sampling of color in CFA sensors, creates aliasing artifacts. The camera manufacturers use proprietary algorithms to reduce these artifacts, reduce noise, and sharpen the image.
Blackmagic RAW
The Blackmagic developers have taken a new look at RAW workflows and have designed a format that retains the advantages of RAW, with smaller file sizes and the opportunity for the colorist to extract the maximum from the sensor, while reducing the horsepower needed from the workstation. The novel methodology is to split the demosaicing between the camera and the edit workstation. At a minimum, cameras have to decode CFA files to feed the viewfinder, and if the camera records video or has a video output, then all the necessary processing for a RAW decode is already onboard. Transferring some of the processing load to the camera relieves the workstation.
Grant Petty, CEO of Blackmagic, has explained that their goals in developing a new RAW format included removing the need for a hardware decoder board in the workstation. Cinema DNG didn't work out for the company; what was needed was a new RAW format that was more than just a folder full of frames, and it needed to be free of patent issues.
Blackmagic RAW is not just a container for video data: the RAW files include metadata with information about the sensor and its color science, as well as the camera settings.
The basic demosaic is a simple process. What needs processing power is the interpolation to create the RGB frames and associated edge reconstruction algorithms, and noise management. If some of this can be shifted to the camera then advantages are gained. This is especially so with noise management as any compression applied to the video data will be more efficient the less noise present.
The Blackmagic RAW process encodes using a custom non-linear 12-bit space and applies compression. Suitably designed compression of the RAW data produces fewer artifacts than compressing a conventional video signal with H.264 or similar.
Users are offered a choice of constant bit rate (CBR) or constant quality with a variable bit rate. CBR offers compression ratios of 3:1, 5:1, 8:1 and 12:1. Constant quality has two settings: Q0 and Q5.
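The storage implications of the CBR ratios are straightforward to estimate. The sketch below assumes a 4.6K sensor, 24 fps and 12-bit samples for the sake of the arithmetic; these are illustrative assumptions, not published Blackmagic figures.

```python
# Illustrative storage math for constant-bit-rate RAW compression.
# The 4608x2592 / 24 fps / 12-bit figures are assumptions made for
# this example, not official Blackmagic specifications.

width, height, fps, bits = 4608, 2592, 24, 12
uncompressed_mbs = width * height * bits * fps / 8 / 1e6

print(f"Uncompressed: {uncompressed_mbs:,.0f} MB/s")
for ratio in (3, 5, 8, 12):
    print(f"{ratio}:1 -> {uncompressed_mbs / ratio:,.0f} MB/s")
```

Even the mildest 3:1 ratio cuts the write-speed requirement on the media by two-thirds, which is where the easing of demands on memory cards comes from.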
The company has the Blackmagic RAW player available for download (MacOS now, Windows to come) and an SDK for developers designing applications that can read the RAW files.
The record menu from the Blackmagic Ursa Mini Pro showing the RAW settings.
Other camera manufacturers have chosen a number of methods for RAW encoding, with the most popular being some flavour of log encoding to 12 or 10 bits.
ARRIRAW
ARRI uses a proprietary log encoding for its RAW files; the ARRIRAW log curve is similar to Cineon (the log format developed for film transfer). The files are 12-bit with a claimed 14+ stops of latitude.
The Alexa also has the option to record with DNxHD or ProRes codecs, preserving the full sensor quantization with 12-bit logarithmic encoding if required. The Amira supports ProRes.
Canon Cinema RAW
Many Canon cameras can output a RAW signal, Canon Cinema RAW. This is encoded to 10 or 12 bits using a log function, or in the case of the C700 a special transfer function optimized for the camera's sensor.
Cinema DNG
BMD use the Cinema DNG format as a container for data encoded with a 'Film' gamma, a modified version of the standard Cineon curve. The modifications to the Cineon curve are designed to emphasize the strengths of the sensors used by the Blackmagic Design cameras.
The result is flat-contrast, wide-gamut image data that preserves image detail with a wide latitude for adjustment in post.
A Codex RAW recorder docked to a Panasonic VariCam Pure.
Panasonic V-RAW
The VariCam 35 can record 4K and UHD RAW, uncompressed V-Log 12-bit, up to 30fps. At frame rates higher than 30fps, recording is 10-bit. Panasonic uses the Codex recorder, which docks to the rear of the VariCam, to record RAW.
For VariCam LT users looking to capture RAW files, Panasonic has released firmware for the VariCam LT that provides a 12-bit 4K RAW output to record to a Convergent Design Odyssey 7Q or 7Q+ recorder.
Red Redcode RAW
Red’s RAW format is Redcode Raw. Compression is used to reduce file sizes with tunable ratios allowing fidelity to be balanced with storage requirements.
In post, the decompression and demosaic process can be off-loaded from the workstation to one or more Red Rocket cards. The decoded files can be output at 8-, 10- or 16-bit resolution for downstream grading.
Sony RAW and X-OCN
Sony is unusual in offering 16-bit linear RAW with the Venice, F65, F5 and F55 cameras, which can be recorded with the AXS-R7 recorder docked to the camera. To tame the very high data rate of 16-bit RAW, Sony has developed a compressed RAW format, 16-bit eXtended Tonal Range — Original Camera Negative (X-OCN). It uses the same OP1a MXF wrapper as the Sony RAW, XAVC, SR File and MPEG2 formats. Both picture and sound are contained within one file wrapper for easy file management.
There are two record modes of X-OCN: Standard (ST), which preserves the image information and processing robustness of 16-bit scene linear, and Light (LT), with smaller file sizes.
The Sony AXS-R7 RAW recorder docks to the rear of the camera.
Other Codecs
Another popular means of delivering video from the camera to editing and grading is a proprietary editing codec, such as Apple ProRes or Avid DNxHD. Both formats have been licensed by camera vendors, so the camera files can be dropped straight onto the timeline. The big advantage of these formats is that they are designed to ease the processing load in the edit workstation; and without the need to transcode, they simplify and speed the workflow.
Summary
RAW files allow the colorist to extract the maximum from the original sensor data, information that can be lost in the conversion within the camera to regular 10-bit video formats like SDI. The RAW file has one-third the data rate of the demosaiced RGB image, and can be further compressed to ease workflows with compact file sizes. The alternative of compressing the video with a codec like H.264 throws away the subtleties of an image that the sensor has captured. Whether it is rescuing a highlight, pulling detail out of the shadows, or applying a "look", the colorist will benefit from the RAW file. But processing RAW in the grading or edit workstation requires considerable processing resources, and this is especially an issue with laptops.
The new approach from Blackmagic is to split the processing between the camera and the workstation. The partial demosaic in the camera offers a way around the classic conundrum of RAW workflows. The addition of compression eases the demands on the memory cards, with reduced write speeds and smaller files.
DaVinci Resolve 15.1 supports Blackmagic RAW and a public beta is available for the Ursa Mini Pro camera.