HDR: Part 19 - Creative Technology - Bayer And Beyond
It’d be easy to think that when Bryce Bayer’s name appeared on the Kodak patent for single-sensor color cameras in 1976, the idea was a new one. Sufficiently new to be patentable, perhaps, but the notion of covering a sensor with a pattern of primary-colored filters actually goes back to the earliest days of color photography.
There are a lot of ways to do it, as we’ve seen in the world of film and TV with designs like Sony’s F35 and, more recently, the Blackmagic Ursa Mini Pro 12K, both of which use color filter arrays that aren’t what Bayer proposed. Bayer’s original design, though, seems to keep coming back.
Do these new ideas help, or are they just different embodiments of the same compromise?
The first image-making technology that looked recognizably like Bayer’s design actually predates digital imaging. The earliest color photography used a black-and-white photographic plate with a mosaic of color filters overlaid on it. Some processes used random speckle patterns (Autochrome) and others used strikingly Bayer-style alternating squares (Paget or Finlay plates), but the approach was simple: take the photograph with that filter matrix facing the subject, so the plate sees filtered light, then view the result through the same filter with a backlight. The result is a color photo. It was very insensitive, but pretty. Autochrome in particular produced a gentle, easy-to-like look that kept it in use long after technically better color materials were available.
Variations On Bayer
The Paget and Finlay designs, which used regular squares, weren’t identical to Bayer’s pattern. Bayer specified two green filters for each red and blue, whereas both of those early color photo systems used more blue, and the colors weren’t always specified as red, green and blue in any case. Autochrome used a reddish-orange rather than a true red, and its greens leaned toward cyan compared to modern designs, in much the same way that modern imaging sensors can use various primary dyes to achieve different results. Bayer himself proposed a cyan, magenta and yellow filtered version, and there have been many others.
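To make that two-to-one ratio concrete, here’s a minimal Python sketch of Bayer’s smallest repeating unit tiled across a tiny hypothetical sensor; the array names and the toy sensor size are assumptions for illustration, not taken from any real datasheet.

```python
import numpy as np

# A toy illustration of Bayer's smallest repeating unit, tiled across a
# hypothetical 6 x 8 photosite sensor. The tile and sensor size are made up
# for the example; real sensors document their own layouts.
TILE = np.array([["G", "R"],
                 ["B", "G"]])

sensor = np.tile(TILE, (3, 4))                     # 6 rows x 8 columns of filters
print(sensor)
print("green fraction:", np.mean(sensor == "G"))   # 0.5: two greens per red and blue
```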
As early as 2004, Sony’s Cyber-shot DSC-F828 camera used a sensor advertising a “halving of color reproduction error” through the use of what was described as an emerald filter, but which most English speakers would probably describe as teal or cyan (possibly because the Japanese word ao, 青, covers both blue and green). The smallest repeating unit of the color filter array was still two photosites square, containing red, green, blue and cyan filters. Kodak itself has proposed several designs including red, green, blue and white, which reveals a particular goal of this sort of thing: the color filters naturally absorb a proportion of the light, reducing sensitivity. A white (essentially unfiltered) photosite sees everything, and Kodak’s designs use four-by-four repeating patterns of filters.
Fujifilm has used a six-by-six repeating pattern on its X-Trans CMOS devices, which use red, green and blue but cluster green filters together in blocks of four, while ensuring there’s a filter of every color in every row and column separated by no more than two photosites. Another Fujifilm sensor, dubbed EXR, uses diagonal patterns of filters (as did the Paget and Finlay photographic processes), and Sony has done something similar in its F65 camera, which uses a conventional Bayer color filter array but effectively rotates the entire sensor to run the rows and columns diagonally. Sensors for cellphones have been called “quad Bayer,” with the same filter pattern as a Bayer sensor but with each filter covering four photosites. The camera can then average the four photosites for lower noise, and so higher effective sensitivity in low light, or treat them independently for higher resolution on well-lit subjects.
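As a rough sketch of that quad Bayer trade-off, the Python below averages each 2x2 block of same-filtered photosites, assuming the blocks are aligned to even row and column indices; the function name and toy data are invented for the example.

```python
import numpy as np

# A rough sketch of the quad Bayer low-light mode: each 2x2 block of
# photosites shares one filter color, so averaging each block (assumed
# to be aligned to even row/column indices) trades resolution for noise.

def bin_quad_bayer(raw):
    """Average each 2x2 block of same-filtered photosites."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
raw = rng.normal(100.0, 5.0, size=(8, 8))      # toy raw frame with noise

binned = bin_quad_bayer(raw)
print(raw.shape, "->", binned.shape)           # (8, 8) -> (4, 4)
print("noise std:", raw.std(), "->", binned.std())   # roughly halved
```

Averaging four photosites cuts the random noise standard deviation by about half, which is where the low-light sensitivity gain comes from; reading them individually simply keeps the full-resolution mosaic for brighter scenes.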
Unscrambling A Filter Array
In short, a lot of ways of photographing full-color images on sensors only capable of seeing in black and white have been tried. What’s changed since the 1900s is that those color filter arrays are no longer a physical part of the image, so we’re not using the same filters to view the image as were used to photograph it. As we’ve seen, there are a lot of different types of filter, and therefore a lot of mathematics between the data captured by a camera and the monitor on which the image is displayed. Different approaches have different benefits and drawbacks, optimizing variously for sharpness, freedom from artifacts, noise, color fidelity and more.
The most basic demosaic algorithms tend not to do very well at any of those things. With Bayer’s original design, for every pixel in the final image we have just one of the red, green or blue components. We can estimate the two we don’t have by averaging the nearest photosites of the missing colors, with the exact choice of neighbors depending on the specifics. It’s quick and easy and will give us a result that, overall, looks fairly reasonable on photographic scenes that don’t contain too many sharp edges. As we might expect, once we do start shooting sharp edges, we start to see colored artifacts as those edges interact with the pattern of filters on the sensor.
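A minimal sketch of that idea, for the green channel only, might look like the following; it assumes an RGGB layout with greens where row plus column is even, ignores image borders, and isn’t meant to represent any particular camera’s processing.

```python
import numpy as np

# The simplest kind of demosaic, sketched for the green channel only:
# at each red or blue photosite, estimate green as the average of the
# four green neighbors. Assumes an RGGB layout with greens where
# row + column is even, and skips the image borders for brevity.

def bilinear_green(raw):
    green = raw.astype(float)
    h, w = raw.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 1:              # a red or blue site in this layout
                green[y, x] = (raw[y - 1, x] + raw[y + 1, x] +
                               raw[y, x - 1] + raw[y, x + 1]) / 4.0
    return green
```

On smooth areas this holds up reasonably well; at a sharp edge the average mixes values from both sides, which is exactly where the colored artifacts mentioned above come from.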
Slightly less primitive techniques rely on assumptions about how the real world looks. One particularly useful approach is adaptive homogeneity-directed demosaicing, which works on the basis that the solution containing the fewest sharp edges – the most homogeneous one – is likely to be closest to ideal. Often, homogeneity-directed algorithms calculate the values they need by averaging nearby pixels in several directions – that is, they’ll average the values above and below the target pixel, or the ones to each side, or diagonally, or in other directions – and essentially choose the result that’s smoothest over some small area of the image.
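A heavily simplified sketch of the directional part of that idea is below: at a single red or blue photosite it picks between horizontal and vertical green estimates based on which axis varies least. The published adaptive homogeneity-directed algorithm does considerably more, comparing whole candidate images for homogeneity, so treat this only as a flavor of the approach.

```python
import numpy as np

# Heavily simplified directional interpolation: a stand-in for the idea
# behind homogeneity-directed demosaicing, not the published algorithm.
# At a red or blue photosite, form horizontal and vertical green
# estimates and keep whichever axis varies least.

def directed_green(raw, y, x):
    horiz = (raw[y, x - 1] + raw[y, x + 1]) / 2.0
    vert = (raw[y - 1, x] + raw[y + 1, x]) / 2.0
    grad_h = abs(float(raw[y, x - 1]) - float(raw[y, x + 1]))
    grad_v = abs(float(raw[y - 1, x]) - float(raw[y + 1, x]))
    return horiz if grad_h < grad_v else vert   # interpolate along the smoother axis

# Toy example: across a vertical edge, the vertical neighbors agree,
# so the vertical estimate is the one that gets kept.
raw = np.array([[10, 10, 90],
                [10, 10, 90],
                [10, 10, 90]], dtype=float)
print(directed_green(raw, 1, 1))   # 10.0, taken from the vertical pair
```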
More advanced techniques, as are commonly used in modern, real-world applications, try to detect edges in the image data and maintain them, without introducing unwelcome brightness or color artifacts around those edges. This is where we encounter one of the most common compromises in modern color cameras: the fact that the red, green and blue filters probably aren’t as red, green and blue as people expect them to be. If we make the filters paler, every photosite on the sensor can see at least something of the whole scene, which makes it easier to detect edges in the image, no matter what color those edges are.
Compromises
Of course, the result of using less saturated filters is a less saturated picture. Normal color can be recovered to an extent with mathematical matrix operations, which can be summarized as “turning up the saturation.” That in turn leads to an increase in noise and can create issues with color accuracy, and that’s why at least some modern single-chip cameras can see the deep blue of a blue LED as purplish. The balance between sharpness, colorimetry and noise is one that can be tuned to taste by manufacturers, but there is no ideal solution.
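As a sketch of what those matrix operations look like, here’s a made-up three-by-three correction matrix applied to one desaturated pixel; the coefficients are invented purely for illustration, since real cameras derive theirs from measured filter responses.

```python
import numpy as np

# A made-up 3x3 color correction matrix applied to one desaturated pixel.
# The values are illustrative only.

CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])   # each row sums to 1.0 so greys stay grey

pixel = np.array([0.5, 0.45, 0.4])     # washed-out RGB straight off the sensor
print(CCM @ pixel)                     # channels spread further apart: more saturation

# The norm of each row indicates how much the correction amplifies noise
# in that output channel; values above 1.0 mean the noise gets worse.
print(np.linalg.norm(CCM, axis=1))
```

The off-diagonal negative terms are what pull the saturation back up, and they are also why the noise in each corrected channel ends up larger than the noise that came off the sensor.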
The other reason manufacturers sometimes skimp on the saturation of the color filters is sensitivity. Red, green and blue primary color filters, assuming they’re highly saturated, absorb around two thirds of the light falling on them, costing the camera more than a stop of light. As we’ve seen, the filters are rarely as saturated as they could be, but they’re still deep enough colors to absorb a significant amount of light. This is one way in which Bayer’s ideas about cyan, magenta and yellow filters are interesting, because those filters (assuming similar saturation) only absorb half as much light. At the time, the technology to produce a cyan-magenta-yellow filter array didn’t exist, and Bayer wasn’t able to pursue the idea.
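The arithmetic behind that claim is simple enough to sketch with idealized filters, assuming a saturated red, green or blue filter passes roughly one third of white light and a cyan, magenta or yellow filter roughly two thirds; real dyes are messier, so the numbers are only indicative.

```python
from math import log2

# Idealized filter transmissions: a saturated red, green or blue filter
# passes about one third of white light; cyan, magenta or yellow passes
# about two thirds. Real dyes are messier, so this is indicative only.
rgb_transmission = 1 / 3
cmy_transmission = 2 / 3

print(f"RGB filter array loses ~{-log2(rgb_transmission):.1f} stops")   # ~1.6
print(f"CMY filter array loses ~{-log2(cmy_transmission):.1f} stops")   # ~0.6
```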
All of these problems affect different filter array designs to varying degrees. Some of them (particularly those with unfiltered white photosites) have better sensitivity. Some have better sharpness. Some have better color reproduction. All of these are massively variable depending on the behavior of the demosaic algorithm and the results inevitably represent the essential engineering compromises between sensitivity, color accuracy, sharpness, noise and aliasing. The algorithm, and the priorities of the people who designed it, are so influential that minute comparisons of one camera against another, even if they have the exact same basic resolution, can be very difficult.
Color Imaging In 2020
If there’s a way around all this, it may be through sheer resolution. Something like Blackmagic’s Ursa Mini Pro 12K will almost always be working at a much higher resolution than any display its pictures are ever shown on. That keeps the errors small compared to the overall size of the frame, as well as averaging out noise and aliasing. Blackmagic’s design includes white photosites, presumably to offset the sensitivity compromise of such a high resolution and therefore such small individual photosites, but it also ensures there are red, green and blue filters in every row and every column. Perhaps not by accident, the six-by-six repeating pattern fits exactly 2048 times across the width of the sensor – 2048 patterns of six photosites is 12,288 photosites, which is where the 12K figure comes from. Even with the most primitive demosaic algorithms – and there’s nothing to suggest Blackmagic’s is primitive – the camera can hardly fail to have sufficient resolution for more or less any job from TV to IMAX.
The compromise, despite those unfiltered photosites, is a stop less dynamic range than the 4.6K version of the same camera, and that’s a fairly familiar story. As we’ve seen, a lot of variations on Bayer have been tried, and at some point we might start to suspect it’s a zero-sum game. We can trade off sensitivity for color reproduction, aliasing for sharpness, resolution for aliasing, noise for all of the above, but at some point there’s only so much performance available from a given slab of silicon. The mainstream still uses straightforward Bayer sensors, and most of the fundamental improvements have come from things like backside illumination and improved processing. Still, with 12K now available on a Super-35mm sensor without any serious compromise, at least as far as conventional film and TV goes, it’s hard to complain about the state of play, and the long-term lesson from all this may simply be that we’re beyond needing to worry about it too much.