Presented In CinemaScope
Electronic camera manufacturers have spent something like the last twenty years trying to make digital cameras that shoot pictures that look like real movies. Now they’re making cameras with larger and larger sensors, the better to simulate the sort of cameras that shot some of the greatest mid-twentieth-century movies, in the days of 70mm and VistaVision.
But what does a real movie even look like, and why?
What we like is a matter of opinion. What we consider cinematic is a matter of conditioning, which makes engineers frown, because human beings can be conditioned to like almost anything, and their desires don’t always accord very closely with engineering ideals. Film grain, while beloved of purists, is just noise by any other name. Larger sensors reduce depth of field, which is popular, but blur is not a faithful representation of the scene. Nonetheless, the business of cinematography is, in part, dedicated to fulfilling the audience’s expectations, and what the audience expects is the result of an imaging area around an inch wide.
Unlike a lot of advances in technology, the genesis of 35mm film was fairly straightforward. It was launched, with a specified width of 1-3/8” (almost exactly 35mm), straight into what might reasonably be called the first format war, which took place in the late nineteenth and early twentieth centuries. At the time, film widths spanned more or less any number between 8mm and 75mm. When Thomas Edison (yes, that Thomas Edison), in collaboration with his employee William Dickson, introduced the format in 1892, he probably didn’t anticipate that it would still be occupying the minds of camera designers 128 years later. He did, however, understand the value of a single, standardized format – and perhaps also the value of a patentable standard format. It’s quite possible that he chose that film width for no reason other than that nobody else had picked that particular number.
Figure 1: Edison's original design specified 1-3/8" wide film, which is almost exactly 35mm, and four sprocket holes per frame. This drawing shows a modern sprocket hole shape, but Edison's design was startlingly close.
Some sources suggest that it was created by splitting a 2.75” (effectively 70mm) roll film in half; certainly 70mm film had been shot (at the Henley Regatta in the UK) perhaps as early as 1894. Others say that Edison had independently decided on the size and was forced to rather inefficiently cut down 40mm film to suit, although those records fail to state why the 1-3/8” gauge was originally chosen, or why it wouldn’t have made more sense to use the slightly wider rolls as supplied. In those days, splitting commercially-manufactured film rolls into narrower strips to make them go further was common, though the 61mm-wide 120 roll film was developed around the same time – in 1901 – specifically to be less expensive than glass plates or large sheets of film. Either way, by the mid-1890s, Edison had arranged to buy film split to his preferred size, laying the foundation for the sensor sizing we still use today.
At the time of its invention there was nothing particularly special about 35mm film; it was one of many. It was always a motion picture format first, and was later adopted as an even more affordable route to still photography. Edison developed the format for his single-viewer Kinetoscope machine, which was first shown in 1893. Kinetoscope films look strikingly similar to modern 35mm: they were specified to have four sprocket holes per image, with an image filling the horizontal space between the sprockets. Those decisions – the width of the film, the layout of the sprockets and the number of sprockets per image – created something very close to what we think of as the 1.33:1 silent aperture, and with it almost all modern moving image frames, including television.
Figure 2: Before the invention of optical sound, this – the silent or full aperture – was the standard image placement for all 35mm production.
Edison’s patents didn’t last. If they had, perhaps 35mm would not have gained the popularity it did. Edison attempted to exert huge control via his Motion Picture Patents Company, which was quite straightforwardly intended to protect incumbents against the threat of competition. The MPPC’s dominance was greatly diminished after a series of court battles in the 1910s, and after cautious but determined non-compliance from independent producers.
One side-effect of this is that producers concerned about legal difficulties were eager to put some distance between themselves and Edison’s New Jersey base. They were also keen to operate in a jurisdiction felt (at least then) to be less than enthusiastic about the enforcement of patent law, and one with plentiful sunshine to illuminate scenes for the insensitive stock of the time. So the reason four-perf 35mm film became the standard, and the reason modern cameras have sensors around 25mm wide, is tangled up with the reason Hollywood is where it is.
Trimming a vertical edge from Edison’s frame to make room for optical sound might be expected to make the frame even squarer. In fact, the opposite happened. Silent films had used every last bit of space between the sprocket holes for picture, and while that might make sense, the rather uneven projection practices of the pre-sound era often made for equally uneven framing and could leave splices visible. When, in the late 1920s, it became clear that optical sound would become popular, the Society of Motion Picture Engineers (which would later become the modern SMPTE) acted to standardize a 20.3 by 15.2mm frame, which essentially cropped the top and bottom of the image and allowed the 1.33:1 aspect ratio to be maintained. Soon after, the nascent Academy of Motion Picture Arts and Sciences (of which Thomas Edison was the first honorary member) proposed modifications which very slightly widened the image to 1.375:1. This was accepted by the SMPE, creating what we still know as the Academy ratio.
Figure 3: This is the famous Academy ratio frame, which defined the shape of the television screen and was the basis of almost all professional moving image production for almost a century. The optical soundtrack is shown in blue.
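Since the frames in question differ by only a few millimetres, it is worth checking the arithmetic. Here is a minimal sketch in Python; the 20.3 × 15.2mm figure is taken from the text above, while the silent aperture and Academy camera aperture dimensions are commonly quoted approximations rather than values given in this article.

```python
# A minimal sketch of the aspect ratio arithmetic described above.
# The SMPE 1930 figure (20.3 x 15.2mm) comes from the text; the other
# dimensions are commonly quoted approximations, not article figures.

frames_mm = {
    "Silent (full) aperture":  (24.9, 18.7),  # approx. 0.980in x 0.735in
    "SMPE 1930 sound frame":   (20.3, 15.2),  # height cropped to hold ~1.33:1
    "Academy camera aperture": (22.0, 16.0),  # approx. 0.866in x 0.630in
}

for name, (width, height) in frames_mm.items():
    print(f"{name}: {width / height:.3f}:1")

# Roughly:
#   Silent (full) aperture:  1.332:1
#   SMPE 1930 sound frame:   1.336:1
#   Academy camera aperture: 1.375:1
```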
Things remained stable until the early 1950s, when the perceived danger of competition from television (which itself had adopted a 1.33:1 aspect ratio to suit existing films) led to an explosion of interest in widescreen images. The famous example is CinemaScope, a term which is still sometimes used – with dubious accuracy – to refer to aspect ratios near 2.39:1, however they are photographed or distributed. Properly, the name refers to a process developed by 20th Century Fox, one of a flurry of widescreen and specialist presentation formats of the period that included the three-lens Cinerama system, Paramount’s VistaVision, and the 70mm format developed by Mike Todd.
Figure 4: This is the famous VistaVision frame, which was developed to improve 35mm filmmaking in general but struggled as a general-purpose format. It was brought back into limited use for visual effects-heavy sequences where the extra quality was important. Inset in red is the much smaller, squarer Academy frame for comparison.
Todd, who had been involved with Cinerama, collaborated with the American Optical Company to create Todd-AO in the hope that it would be easier to shoot. It was perhaps inspired by the rather different 70mm Fox Grandeur process of the late 1920s, and was effectively re-implemented by Panavision as Super Panavision 70. These 5-perforation, 65mm-negative formats are generally fractionally smaller than the full sensor on an Alexa 65, but the line of inspiration is clear, and cinematographers may choose to crop the image in camera or in post-production.
The idea of using an anamorphic lens to horizontally compress an image dates back to Henri Chrétien’s work of the 1920s, and Fox involved Chrétien himself in the development of its first anamorphic camera equipment. The degree of compression has almost always been 2:1, so we might expect that the Academy ratio would simply be doubled to around 2.75:1, but Fox defined extra magnetic soundtracks to create the first stereo sound-on-film format. These reduced the ratio to 2.55:1, and then optical soundtracks were added, creating the 2.35:1 ratio and establishing another frame that survives – albeit fractionally modified to 2.39:1 to hide splices – to this day.
Figure 5: CinemaScope uses (very nearly) the Academy frame. The red object is circular in reality and has been compressed by the lens.
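The arithmetic behind those numbers is straightforward: the projected aspect ratio is simply the ratio captured on the negative multiplied by the lens squeeze. A minimal sketch, assuming the 2:1 squeeze and the ratios quoted above (the on-negative ratios are derived from them for illustration, not taken from any standard):

```python
# A minimal sketch of the anamorphic arithmetic: projected aspect ratio is the
# on-negative aspect ratio multiplied by the horizontal squeeze of the lens.

SQUEEZE = 2.0  # CinemaScope's 2:1 horizontal compression

def projected(captured_aspect: float, squeeze: float = SQUEEZE) -> float:
    return captured_aspect * squeeze

# A full Academy-shaped frame would unsqueeze to roughly 2.75:1...
print(f"Academy frame with 2x squeeze: {projected(1.375):.2f}:1")

# ...so frames narrowed for magnetic and then optical soundtracks must have
# captured roughly these ratios on the negative:
for wide in (2.55, 2.35):
    print(f"{wide}:1 projected implies about {wide / SQUEEZE:.3f}:1 on the negative")
```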
The simpler, commoner 1.85:1 ratio was developed almost simultaneously as a less expensive alternative that avoided cumbersome anamorphic lenses. It’s hard to find a reliable source on where the 1.85:1 figure came from, other than that several ratios were tried. The approach of simply masking the Academy frame to create a wider image was, with some justification, criticized for wasting film stock, but it established probably the most popular and widely-used shooting and distribution format of the twentieth century.
Figure 6: The black area is cinema widescreen, the format that defined most of the late twentieth century. The Academy area is shown in red.
Modern electronic cameras, though, tend to use something approaching the silent aperture, at least in width, and define the height of their sensors to create something like a 1.85:1 frame. The term “super 35” arises from the fact that the space reserved for the optical soundtrack is only needed on release prints, and in camera is entirely wasted. Another 1950s format, Superscope 235, sought to use the entire width of the film, moving the lens over slightly to keep its center in the middle of the frame. The resulting larger image improved quality, but complicated making a print for theatrical release, which would usually use a contact printer that could not move or scale the image.
Superscope 235, later simply called Super 35mm (perhaps after the Super 8mm and Super 16mm formats), was revived in situations where the film would not be printed, such as television broadcast, and in other situations where the bigger negative outweighed the inconvenience of requiring an optical printer to rescale the image for distribution. In the 1980s the format was used for the cockpit shots in Top Gun, a film otherwise shot largely with anamorphic lenses, which would not fit in the average fighter cockpit. Later, when digital intermediate and film scanning became possible, the necessary resizing could be done electronically while retaining the benefits of a bigger negative.
Figure 7: Superscope 235 was an attempt to maximize image quality when shooting for wider screens without anamorphic lenses. Its modern descendant, super 35, uses the whole silent aperture (shown in red), on the basis that almost all films are now finished digitally and the space for optical sound need not be reserved in camera.
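It is easy to put a number on the benefit. A minimal sketch, assuming a roughly 24.9mm-wide silent aperture against roughly 22mm of usable width once soundtrack space is reserved (both commonly quoted approximations, not figures from this article), shows how much more negative area a full-width widescreen extraction enjoys:

```python
# A minimal sketch of the Super 35 / Superscope advantage: a widescreen
# extraction uses the full negative width and crops the height, so a wider
# negative gives proportionally more image area. Widths are assumptions.

TARGET_ASPECT = 2.39  # a common widescreen extraction

def extraction_area_mm2(neg_width_mm: float, aspect: float = TARGET_ASPECT) -> float:
    height_mm = neg_width_mm / aspect
    return neg_width_mm * height_mm

super35_area = extraction_area_mm2(24.9)   # full silent-aperture width
reserved_area = extraction_area_mm2(22.0)  # width with soundtrack space reserved

print(f"Full-width extraction:          {super35_area:.0f} mm^2")
print(f"Soundtrack-reserved extraction: {reserved_area:.0f} mm^2")
print(f"Advantage: {super35_area / reserved_area:.2f}x the negative area")
```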
This is what modern digital cameras are, in effect, mimicking – a system based on Edison’s work of the 1890s, modified by the introduction of sound and widescreen, but still ultimately founded on nineteenth-century Kinetoscope machines. A minority of cameras, such as the Alexa Plus 4:3, implement the full 1.33:1 area for compatibility with true anamorphic lenses. There is a lot of variation due to the practicalities of making microchips, with almost as many sensor sizes as there are digital cameras, but many of them are described as “super 35.” Bigger chips are largely based on bigger formats – VistaVision ran 35mm film horizontally for a 1.5:1 aspect ratio, and 65mm was designed for printing to 70mm release prints. Both were obsolete by the mid to late twentieth century other than for special purposes, but cameras designed to approximate those frame sizes are, of course, popular. That popularity says something about the desire for spectacle that still inhabits the film industry today, and in many ways the barrier to greater cinematic pageantry is no longer the capability of the cameras; it’s the willingness of exhibitors to show the results on an appropriately gigantic screen.