HDR: Part 13 - Cameras And The Zero-Sum Game

For a long time, selecting camera gear has been an exercise in weighing compromises. For twenty years, digital cinema cameras have never quite had everything we wanted, and the choice often boiled down to deciding which shortcomings we could best live with. That’ll always be true to a degree, but for the last year or two it’s felt like we’re arriving somewhere. We can’t have everything, but we can have more than enough, and those compromises are boiling down to a zero-sum game.

What does that mean? Well, make a bigger-chip camera for lower noise and we end up using longer lenses to achieve the same field of view. Longer lenses magnify things more, so out-of-focus areas look more out of focus, which is where we get the idea that longer lenses reduce depth of field. So we might have to stop down to get back to the same depth of field, to make accurate focus achievable. Stop down, though, and we’ve darkened the picture, so we might have to select a higher sensitivity. Higher sensitivity on a digital camera just means gain, which increases noise, compromising exactly the things we were trying to achieve with a bigger sensor to begin with.
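
To make that chain concrete, here’s a minimal Python sketch of the usual crop-factor arithmetic. The sensor widths (roughly 24.9mm for super-35, 36mm for full frame) and the 35mm f/2 starting point are illustrative assumptions, not figures from any particular camera:

```python
import math

# Illustrative sensor widths in mm; real cameras vary slightly.
SUPER_35_WIDTH = 24.9
FULL_FRAME_WIDTH = 36.0
CROP = FULL_FRAME_WIDTH / SUPER_35_WIDTH  # roughly 1.45x

focal_s35 = 35.0  # a 35mm lens on super-35...
stop_s35 = 2.0    # ...shooting at f/2

# Focal length the full-frame camera needs for the same field of view.
focal_ff = focal_s35 * CROP

# f-stop the full-frame camera must use for roughly the same depth of
# field, treating the crop factor as the equivalence ratio.
stop_ff = stop_s35 * CROP

# Light lost by stopping down: exposure falls with the square of the
# f-number, and each stop is a halving.
stops_lost = 2 * math.log2(stop_ff / stop_s35)

print(f"Full frame needs {focal_ff:.0f}mm at f/{stop_ff:.1f}")
print(f"Stopping down costs {stops_lost:.1f} stops of light")
```

That stop or so of lost light is exactly what ends up being paid back as gain, and therefore noise.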

There are other solutions. We can simply add more light, though that’s an expensive approach that might be a big ask of a production that’s already paying for a big-chip camera. And it’s not even particularly effective; double the amount of light and you’ve bought the focus puller a stop, which is welcome, but not enough to offset the difference in depth of field between a super-35 and full-frame sensor in otherwise equivalent circumstances. Quadruple the amount of light and – well – that’s two stops. That’s great, but it’s well beyond the means of most productions.
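
The arithmetic behind that is logarithmic, which is what makes lighting such a blunt instrument: every additional stop doubles the lighting budget all over again. A quick illustration:

```python
import math

# Each stop gained requires doubling the light all over again.
for multiple in (2, 4, 8, 16):
    stops = math.log2(multiple)
    print(f"{multiple:>2}x the light buys {stops:.0f} stop(s)")
```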

Optical Quality

The vicious circle continues in other ways. Instead of improving noise, we might want a bigger sensor to improve resolution without having to accept a noise penalty. The problem is, to achieve, say, f/2, a lens must (at the very least) have a diameter that’s equal to half its focal length. But the focal length is longer, for an equivalent field of view, on our larger sensor. The lens must therefore be larger – which is intuitive enough – to achieve the same f stop at the same field of view and with the same quality. That bigger lens will be considerably more expensive. If it isn’t sufficiently expensive, the optical quality might begin to suffer, compromising the improved resolution we wanted to begin with.
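
As a quick sanity check on that geometry, here’s the same arithmetic in Python. The f-number is focal length divided by aperture diameter, and the focal lengths are the illustrative super-35 and full-frame-equivalent figures from earlier:

```python
# f-number N = focal length / aperture diameter, so D = f / N.
def min_diameter(focal_mm: float, f_number: float) -> float:
    """Smallest aperture diameter that can deliver the given f-number."""
    return focal_mm / f_number

d_s35 = min_diameter(35.0, 2.0)  # super-35 example lens
d_ff = min_diameter(50.6, 2.0)   # full-frame equivalent focal length

print(f"Super-35: {d_s35:.1f}mm of aperture; full frame: {d_ff:.1f}mm")
# Glass area (and much of the cost) scales with the square of diameter.
print(f"Area ratio: {(d_ff / d_s35) ** 2:.1f}x")
```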

That’s a zero-sum game: a combination of practicalities that leaves us with a certain maximum level of image quality. No matter which compromises we choose, we’re trading one thing off against another.

Larger Sensors

Going any further down the road of bigger sensors probably isn’t an idea with much of a future. More light and better glass help, but costs can only escalate so far. There certainly doesn’t seem to have been too much of a push for even larger sensors in mainstream cinematography. There have been digital medium-format camera backs – generally not covering the full medium-format frame, but still very large – which will shoot video, and perhaps it’s only a matter of time before Imax steps in with a sensor the size of a 15-perf 65mm negative. The purpose of Imax is not really restraint or moderation, after all. Still, yet-bigger chips seem likely to stay in their specialist niche.

Is there a better solution? Sure, though it applies to every imaging sensor ever made. All you have to do is find a way to make each square micron of sensor area more sensitive to light, without compromising anything else.

Improving Sensitivity

That is what sensor manufacturers’ R&D departments spend their days trying to do. One key figure of merit is “quantum efficiency.” Ideally, a photosite on a sensor would capture every photon that struck it and convert that photon into an electron. Real-world designs are not quite that perfect. Equally, modern sensors have built-in hardware which converts the number of electrons captured, which is fundamentally an analog signal, into a digital signal. Doing that creates noise we’d prefer wasn’t there. The reality is that a competitive sensor in 2020 can record light levels up to a few tens of thousands of photons per frame, per photosite, with a handful of electrons in read noise.
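
As a rough illustration of how those figures interact, here’s a toy photosite model. The quantum efficiency, full-well capacity and read noise below are assumptions in the spirit of the numbers above, not any real sensor’s datasheet; shot noise is taken as the square root of the electron count, with read noise added in quadrature:

```python
import math

QUANTUM_EFFICIENCY = 0.6  # fraction of photons converted to electrons
FULL_WELL = 30_000        # electrons the photosite can hold
READ_NOISE = 3.0          # electrons RMS added by the readout chain

def snr_db(photons: float) -> float:
    """Signal-to-noise ratio, in dB, for a given photon count per frame."""
    electrons = min(photons * QUANTUM_EFFICIENCY, FULL_WELL)
    noise = math.sqrt(electrons + READ_NOISE ** 2)  # shot + read noise
    return 20 * math.log10(electrons / noise)

for photons in (100, 1_000, 10_000, 50_000):
    print(f"{photons:>6} photons -> SNR {snr_db(photons):5.1f} dB")
```

The read noise matters most in the shadows; the full well caps the highlights, and the distance between the two is the sensor’s dynamic range.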

Naturally, we’d like more, which is where scale helps. The simplest way to improve things is to make the photosite bigger so more photons are likely to hit it and more will fit in it, which demands a bigger sensor for the same resolution. We’ve done that, though. Now we have to find a way to achieve higher sensitivity without just scaling things up; to break that zero-sum game.
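
Because photosite area grows with the square of the pitch, the benefit of scaling up is easy to put a number on; the 1.45 figure below is the approximate super-35 to full-frame crop factor used earlier:

```python
import math

def stops_gained(pitch_ratio: float) -> float:
    """Extra stops of captured light when the photosite pitch grows
    by pitch_ratio, since area goes with the square of the pitch."""
    return math.log2(pitch_ratio ** 2)

print(f"{stops_gained(1.45):.1f} stops")  # super-35 to full-frame pitch
print(f"{stops_gained(2.0):.1f} stops")   # doubling the pitch
```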

Various approaches to doing that have been tried. One is to maximize the amount of the sensor that’s covered in photosites, minimizing the extra circuitry that’s around each one. This is why many sensors have a rolling shutter, as the extra electronics for global shuttering take up more space. We can also make more room for photosite area by separating the photosites from the electronics and stacking them in layers. People have even put tiny arrays of lenses on the front of sensors to focus light on the active areas, though that can cause problems with lenses that fire light at the sensor at anything other than a right angle. By far the most popular approach, for all sorts of reasons, is to reduce the saturation of the filters which allow the sensor to see in color, compromising color performance for sensitivity.
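
To see why weakening the color filters is such a tempting trade, consider a toy model: less saturated filters pass more light, but their red, green and blue responses overlap, so recovering saturated color takes a stronger correction matrix, and that matrix amplifies noise. The matrices below are invented purely for illustration, not any camera’s actual calibration:

```python
import numpy as np

# Rows describe how much of each scene primary leaks into each channel.
strong_cfa = np.array([[1.0, 0.1, 0.1],   # well-separated filters:
                       [0.1, 1.0, 0.1],   # little channel overlap,
                       [0.1, 0.1, 1.0]])  # but less total light passed

weak_cfa = np.array([[1.0, 0.6, 0.6],     # desaturated filters: more
                     [0.6, 1.0, 0.6],     # light, heavy channel overlap
                     [0.6, 0.6, 1.0]])

def noise_gain(cfa: np.ndarray) -> float:
    """RMS factor by which the color-correction matrix (the inverse of
    the filter response) amplifies uncorrelated per-channel noise."""
    correction = np.linalg.inv(cfa)
    return float(np.sqrt((correction ** 2).sum(axis=1)).max())

print(f"Strong filters: noise amplified {noise_gain(strong_cfa):.2f}x")
print(f"Weak filters:   noise amplified {noise_gain(weak_cfa):.2f}x")
```

The extra light captured by the weak filters is partly given back as chroma noise once the image is corrected to look properly saturated.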

Zero-Sum Choice

The best of those ideas are fundamental advances, and they can deliver genuine improvement. They come very slowly, though. Yes, we can pay more money for more performance, tolerating a lower yield and more rejects in sensor manufacturing for a design that really pushes the envelope, but as with anything, the last 5% of the performance costs the last 50% of the money. Real progress in sensor design comes much less frequently than the camera market needs it to, so we trade off size, resolution, color performance – and when we’re selecting a camera for a job, we dance around the zero-sum game.
