Sensors and Lenses - Part 3

There’s a terrible tendency in cinematography to concentrate too much on the technology, overlooking creative skills that often make a huge contribution. In the last two pieces of this series we’ve gone into some detail on the historical background to current camera technology. In this last piece on the art and science of sensors and lenses, we’re going to consider what difference all this makes in the real world.

The size of the imaging sensor we choose affects the most fundamental things about photography. A lens, in the end, simply projects an image onto a surface, and the size of that surface determines how much of the projected image we see. So, despite common claims to the contrary, what a bigger sensor actually does, all else being strictly equal, is show us a wider field of view. In many discussions of the subject, though, it’s taken as read that the larger sensor gives us a shallower depth of field. That’s true if, and only if, we change lenses to achieve the same field of view. Achieving what we might consider the same frame on two cameras with different sensor sizes requires a longer lens on the larger sensor, and a longer lens has reduced depth of field.
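As a quick illustration, here’s a minimal sketch in Python of the standard angle-of-view relationship; the sensor widths are approximate, so treat the output as indicative rather than camera-specific:

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Angle of view across the sensor width (thin lens, subject at infinity)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The same 50mm lens on two differently sized sensors (widths are approximate).
for name, width_mm in [("Super-35, ~24.9mm wide", 24.9), ("Full frame, ~36mm wide", 36.0)]:
    print(f"{name}: {horizontal_fov_deg(width_mm, 50):.1f} degree field of view")
```

The larger sensor sees roughly 40 degrees to the smaller sensor’s 28: a wider field of view from the same glass, exactly as described above.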

One useful way to think about this is to compare the real physical size of the sensor with the display we’re watching. If we’re viewing the results on the same 55-inch TV, the image from the smaller sensor must be blown up more, in terms of sheer physical dimensions, in order to fill it. That enlarges details and makes out-of-focus areas look more out of focus than they otherwise would. There are other ways to frame it, all equally valid, but in the end a larger sensor will, in practice, produce shallower depth of field for the same field of view.
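That enlargement is easy to put numbers on. A 55-inch 16:9 display is roughly 1218mm wide, so the sensor-to-screen magnification is simply one width divided by the other (sensor widths again approximate):

```python
display_width_mm = 1218.0  # approximate width of a 55-inch 16:9 screen

for name, sensor_width_mm in [("Super-35", 24.9), ("Full frame", 36.0)]:
    magnification = display_width_mm / sensor_width_mm
    print(f"{name}: enlarged about {magnification:.0f}x to fill the screen")
```

Roughly 49x against 34x: every out-of-focus blur circle recorded on the smaller sensor is stretched that much further on the same screen.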

Now we’ve changed the magnification of the lens to achieve the same frame, we’ve also changed a lot of other things. The magnification of a lens is, of course, proportional to its focal length. The f-number of a lens – its speed – is equal to its focal length divided by the diameter of its entrance pupil. Here, “entrance pupil” means the size of the aperture as viewed through the front elements, so magnifying lenses in front of the aperture can make a lens faster by presenting a bigger target for the light to hit. That works up to (in theory) the real physical diameter of the front of the lens, so the front of a 50mm f/2 lens must be at least 25mm across – and, in reality, considerably more than that.

At the same time, we need a big enough image to cover the sensor, which also tends to make lenses larger; it’s no surprise that glass designed for larger formats tends to be physically larger and often very, very much more expensive.
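Those two constraints together put a hard floor under lens size. A quick sketch of the entrance pupil arithmetic (the 280mm at 2.8 anticipates a lens mentioned below; treating T-stops as f-stops here is a simplification):

```python
def min_entrance_pupil_mm(focal_length_mm: float, f_number: float) -> float:
    """N = f / D, so the entrance pupil diameter D = f / N sets a minimum front size."""
    return focal_length_mm / f_number

print(min_entrance_pupil_mm(50, 2.0))   # 25.0 - a 50mm f/2 front can be no smaller
print(min_entrance_pupil_mm(280, 2.8))  # 100.0 - why long, fast large-format primes get heavy
```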

Figure 1 – the top diagram shows the relative sensor positions for a lens focusing on a scene with a viewing angle of 35 degrees. From the thin lens approximation formula (1/f = 1/u + 1/v), we can see that if u, the distance from the scene to the lens, stays the same and v increases from v₁ to v₂ because a larger sensor is being used, then by definition the focal length f must also increase. This is why a lens with a longer focal length is required when a larger sensor is used and the viewing angle is kept constant (see figure 2 for more details).

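A quick numerical check of the caption’s argument, with illustrative distances that are not taken from the figure:

```python
def focal_length_mm(u_mm: float, v_mm: float) -> float:
    """Thin lens approximation: 1/f = 1/u + 1/v, rearranged to f = u*v / (u + v)."""
    return (u_mm * v_mm) / (u_mm + v_mm)

u = 3000.0  # subject held 3m from the lens
for v in (10.0, 26.0):  # image distance grows when a larger sensor is fitted
    print(f"v = {v:.0f}mm -> f = {focal_length_mm(u, v):.1f}mm")
```

With u held constant, f tracks v almost exactly at these distances: the bigger sensor demands the longer focal length.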

It’s possible to have too much of a good thing, though, and it’s at this point that unequivocal engineering realities begin to give way to subjective interpretation, because there’s no single amount of depth of field that is somehow correct. Invariably, when the subject of a shot is a human, the eyes are the target for focus, because humans habitually look one another in the eye. When the focus puller needs to ask which eye the director would like in focus, depth of field is probably too shallow. At the risk of offering an opinion, a shot which doesn’t clearly delineate the edge of the subject against the background is likely to lack depth and separation; we are, after all, talking about what’s invariably a two-dimensional artform.

There will always be exceptions, but the public demand is often for very fast lenses, and even lenses which cover very large sensors can be very fast. Arri’s Signature Primes open up to T1.8 all the way to 125mm, with T2.8 available on the 280mm. A 280mm lens creates, of course, a rather wider field of view on a large sensor than it would on a super-35mm sensor, and the image will be magnified less for display, as we discussed above, but it’s still a downright intimidating challenge for the focus puller. The option is nice, but it’s crucially important for everyone involved to understand that super-low f-numbers are, like super-large sensors, a possibility, not a target, and almost certainly not both at once.

How bad is it? Fine, we’ve matched field of view, and in doing so we’ve reduced depth of field on our larger sensor. If we want to maintain the same depth of field, assuming we’ve gone from a sensor roughly the size of super-35mm to a sensor roughly the size of a full-frame still photo negative, the difference in depth of field is equivalent to closing the lens down one or perhaps two stops, depending on the specifics. Instinctively, that doesn’t feel too bad; it’s only one or two more notches on the lens, but that’s up to four times the light, and that’s significant.
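A rough check of that arithmetic, using the usual equivalence rule that matching depth of field across formats means multiplying the f-number by the crop factor (sensor widths approximate):

```python
import math

crop = 36.0 / 24.9              # full frame vs. super-35 width, roughly 1.45
stops = 2 * math.log2(crop)     # f-number scales with crop, so stops = 2*log2(crop)
light = 2 ** stops              # each stop is a doubling of light

print(f"Close down ~{stops:.1f} stops, needing ~{light:.1f}x the light")
```

With these particular widths it comes out nearer one stop and roughly double the light; real sensors vary enough in dimensions that the two-stop, four-times end of the range is reachable too.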

In reality, most people shooting with larger sensors are not significantly increasing the amount of light they’re using. There’s perhaps an argument that the sensitivity and noise performance of digital cinematography has made us lazy; in the middle of the twentieth century the effective sensitivity of processes such as three-strip Technicolor was sometimes in the single digits, and light sources were vastly less efficient, far bulkier and more demanding on crew and support equipment. It’s been said that despite the modern interest in shallow depth of field, a standard shooting stop for a typical late-twentieth-century feature film shot on 35mm was f/4, which produces a depth of field similar to a 2/3” video camera with the lens wide open.
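The same crop-factor rule makes that comparison easy to test, this time between super-35mm film and a 2/3” sensor (widths approximate):

```python
crop = 24.9 / 9.6   # super-35 width vs. 2/3-inch sensor width, about 2.6
print(f"f/4 on 35mm is roughly f/{4 / crop:.1f} on a 2/3-inch camera")
```

f/1.5 is indeed wide-open territory for a 2/3” lens, which squares with the f/1.3 figure below.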

Figure 2 – For a constant viewing angle of 59 degrees, the focal length can be seen to increase as the sensor size increases from a 2/3” sensor to the Arri Alexa 65. For example, to achieve a 59-degree viewing angle on a 2/3” sensor a focal length of 10mm is required, and to achieve the same viewing angle on a Blackmagic URSA sensor a lens with a focal length of 26mm is required (all focal lengths are rounded to the nearest millimetre).

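These figures can be reproduced if the 59-degree angle is taken across the sensor diagonal. A minimal sketch, assuming approximate diagonals of 11mm for a 2/3” sensor, 29.1mm for the URSA (taking the 4.6K sensor) and 59.9mm for the Alexa 65:

```python
import math

def focal_for_angle_mm(diagonal_mm: float, angle_deg: float) -> float:
    """Focal length for a chosen diagonal angle of view: f = (d/2) / tan(angle/2)."""
    return (diagonal_mm / 2) / math.tan(math.radians(angle_deg) / 2)

for name, diag_mm in [("2/3-inch", 11.0), ("Blackmagic URSA 4.6K", 29.1), ("Arri Alexa 65", 59.9)]:
    print(f"{name}: {focal_for_angle_mm(diag_mm, 59):.0f}mm for a 59-degree view")
```

That yields 10mm, 26mm and 53mm respectively, matching the values quoted in the caption.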

And, as we know, it is far from coincidental that “wide open” can mean f/1.3 on a 2/3” video lens, because it’s so much easier to build very fast lenses for smaller sensors. Zeiss’s DigiPrime range was built for digital cinematography on 2/3” cameras and uniformly achieves T1.6 out to 70mm, a reasonably long lens on such a small sensor – and the lenses are not that large.

Either way, the solution to the concerns of focus pullers faced with a large format shot at 125mm and T1.8 is not, usually, more light, because light levels are so often controlled more by budget than by photographic need. Sometimes, the solution is to select a different – higher – sensitivity in the camera. Two stops equate to about 12dB of gain in the language of broadcast camera engineering, more than most people would want to apply to, say, a newsgathering camera. In 2019, though, a lot of digital cinematography cameras are capable of effective sensitivities in the thousands of ISO to begin with.
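The stops-to-decibels conversion is worth seeing once: video gain is a voltage ratio expressed as 20·log10, and a stop is a doubling of light, so:

```python
import math

def stops_to_db(stops: float) -> float:
    """One stop doubles the light; as voltage gain that's 20*log10(2), about 6dB per stop."""
    return stops * 20 * math.log10(2)

print(f"{stops_to_db(2):.1f}dB")  # two stops, about 12dB
```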

Does this solve the problem? Well, it depends on whether all that sensitivity is already being leveraged to reduce the lighting budget (or to make larger locations practical, to shoot more with practical lighting, or for any other reason). Is any production likely to back down from those things in order to give the focus puller an easier time? In the end, it might be the case that larger sensors are at least something of a zero-sum game. As we saw in part 2, larger sensors give us the option of more sensitivity, more dynamic range or better noise performance. If we push sensitivity beyond where we normally would, there’s a risk we’ve traded off the very advantages we were seeking from a larger sensor in the first place.

In the end it’s great that there are enough camera choices for most productions to have what they want, even if the fight between f-number, sensitivity and depth of field can become something of a stalemate. If it’s any consolation to the first assistants of the world, the move toward larger sensors has happened more or less at the same time as a massive increase in the quality of on-set monitoring, although even the most experienced will admit that the best monitor will only tell us once something has already gone soft.

As to how the average focus puller is to deal with the new reality, when productions want all the advantages of larger sensors without the increase in light level, the solution is often simple: concentrate harder, people. Think of it as an opportunity to shine.
