The Relentless Pace Of Evolution In Imaging Technology

Phil Rhodes shares his personal perspective on the sometimes-staggering pace of change in new commodity technologies that are disrupting professional media production.

Speculating about future film and TV production technology is dangerous, because speculations about future hardware assume that there will be hardware to speculate about. That’s a prediction we’ve often got wrong, because no matter how outlandish our expectations, we’ve normally anticipated that rooms full of expensive, custom-built hardware are likely to be replaced by rooms full of better expensive, custom-built hardware. The reality, meanwhile, has been that more and more technology has been overtaken by the ability of information technology, from workstations to cell phones to cloud resources that aren’t even physically present, to emulate the features of what was once a building full of hardware.

Whether that means a computer, or someone else’s computer in a warehouse at the end of a network connection, what we might call workstationisation has been universal in post production for longer than many careers, but replacing many on-set tools really required smartphones to make computers accessible to people who work on their feet. That has accelerated hugely in the last five years, subverting ever more granular pieces of production technology, right down to monitoring and lighting control devices. Few gaffers, in 2022, want to walk over to a lighting console when a cellphone will do the same job, and the modern digital imaging technician effectively replaces the laboratory contact a cinematographer might once have relied on, offering more or less the same functionality on a laptop.

The Past

The input of collaborating people is still what really matters, although the vast change in working practices sometimes prompts questions about how carefully the change has been considered, and whether some of the things which are now done on set really need to be done in such an expensive and potentially rain-sodden environment. What’s most powerful about the commoditization of more and more tools, meanwhile, is that it makes them increasingly software-based, increasingly affordable, and increasingly capable of doing staggering things based on a level of research and development which no one industry could ever have funded alone. The downsides, at the same time, are complexity, questionable reliability, and doubts about the suitability of what are really consumer devices for a professional environment.

There are two main reasons all this matters. The first is that some of the most fundamental technical underpinnings of film and television technology arise from the behavior of the equipment which was originally used to implement them. One good example involves the way brightness is represented in TV and film images, and it’s worth understanding.

From the very first days of electronic moving pictures, the signal level on a wire, whether digital or analog, was not directly related to the amount of light entering a camera’s lens. Double the brightness of the scene, and the signal level doesn’t double with it. That slightly counter-intuitive approach comes from the need to transmit the pictures by radio, which would leave too much noise and grain in the shadowy areas of the picture if the signal level directly followed the light level.

So, early television equipment was designed, in effect, to boost the brightness of dark parts of the scene before transmission, to be reduced again at the receiver. The huge coincidence here is that the natural behavior of cathode ray tubes, as used in old TV receivers, just happened to create exactly the required effect to display a normal picture. Now that we don’t use cathode ray tubes any more, we’re building displays for phones, tablets and workstations which actually have to include electronics specifically designed to emulate the same behavior, so that pictures made according to TV standards established decades ago display properly on modern equipment.
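
To make that concrete, here’s a minimal sketch in Python of the round trip, using a simple power law rather than the piecewise curves real standards such as Rec. 709 actually define; the exponents are illustrative.

```python
import numpy as np

# Scene-referred light, normalized to the range 0-1. Note that doubling
# the light from 0.2 to 0.4 does not double the encoded signal below.
light = np.array([0.05, 0.2, 0.4, 1.0])

# Camera-side encoding boosts the darker values, keeping them clear of
# transmission noise. A simple 1/2.2 power law stands in for the
# piecewise curves real standards define.
signal = light ** (1 / 2.2)

# A CRT's electron gun happened to follow roughly the inverse power
# law, undoing the boost for free. Flat panels must emulate this step
# in electronics.
displayed = signal ** 2.2

print(signal)     # dark values sit much higher up the signal range
print(displayed)  # back, near enough, to the original light levels
```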

The Present

Emulating pre-existing things inevitably involves at least some degree of additional cost in design and manufacturing, extra bulk and weight, processing delays, and potential compromises to reliability and power consumption. In display technology, that’s often reasonably trivial, since any given device would invariably need color and brightness processing anyway, to be a useful tool on a modern film set. In other situations – particularly where computer code is involved – the compromises may be more influential.

One place this sort of compromise becomes visible is with wireless communications. Commodity IT has brought us several tiers of radio communications, in the form of Bluetooth, wireless Ethernet, and various cellphone systems. All of these systems have levels of sophistication (automatic negotiation, security, routing, etc.) which would have been impossibly expensive to develop for a market as small as film and TV. At the same time, the only way to access most of those technologies is via a device running a full, general-purpose operating system, such as a smartphone or laptop.

Those platforms give us access to a whole raft of other services, things like video compression and wonderfully fluid and usable user interface components that are streets ahead of anything found on an old-school equipment case with some switches on it. It wasn’t long before people started asking questions about whether it’s possible to, say, use the excellent display panel on a cellphone in conjunction with its wireless Ethernet connectivity to create a wireless monitoring system that would otherwise require advanced custom-built hardware.

And the answer is, well, yes, it’s possible to stream video over computer networks, so it is very much possible to use a cellphone as a field monitor, though there are some fairly significant caveats which are emblematic of the wider issues. The first is that all of the technologies we’re using here, from the wireless networking to the video encoder to the compression and decompression algorithms, are required to be very modular. That’s what makes them so useful, but it also creates latency. Very often, information will be passed from module to module in the form of a video frame, or a sequence of several video frames, which generally means an absolute minimum of several frames of delay, no matter how fast the electronics are. For that reason, while some devices have done some very careful engineering to keep these delays as small as possible, it’s often difficult to operate a camera using such a display.
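
As a back-of-the-envelope illustration – the stage names and buffer depths below are invented for the example, not measurements of any real device – even modest per-module buffering adds up fast:

```python
# Glass-to-glass latency of a frame-buffered pipeline. Every figure
# here is illustrative; real devices vary widely.
FRAME_RATE = 25.0                  # frames per second
FRAME_MS = 1000.0 / FRAME_RATE     # 40 ms per frame at 25 fps

# Whole frames each module buffers before handing data on.
stages = {
    "sensor readout":       1,
    "video encoder":        2,   # many encoders want look-ahead frames
    "network send/receive": 1,
    "decoder":              1,
    "display refresh":      1,
}

total = sum(stages.values())
print(f"{total} buffered frames = {total * FRAME_MS:.0f} ms delay")
# Six frames at 25 fps is 240 ms -- an eternity for a camera operator
# trying to follow action on the screen.
```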

Solutions In Old Tech

Wireless video links are not a randomly-chosen example. There are purpose-built displays and wireless video links which avoid delay by using joint source and channel coding. That intimidating bit of jargon simply means that the device encodes the video signal for transmission on a radio link as a single step. A cellphone might apply the H.264 codec to the video (the source), then transmit it (the channel) using wireless Ethernet. Joint coding techniques combine both operations into one, essentially using a radio transmission technique that itself de-emphasizes less important information, creating approximately the same result more quickly.
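
To give a flavor of the joint approach, here’s a toy numerical sketch, loosely in the spirit of academic analog schemes such as SoftCast rather than any shipping product: the transform coefficients themselves become the channel symbols, so there is no separate quantize, entropy-code and modulate chain.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

# A toy one-dimensional "image line": a smooth signal plus fine detail.
x = np.sin(np.linspace(0, 3 * np.pi, 64)) + 0.1 * rng.standard_normal(64)

# Source and channel coding in one step: transform, then send the
# coefficients directly as analog channel symbols, scaled so important
# (large) coefficients are favored with transmit power.
coeffs = dct(x, norm="ortho")
scale = np.sqrt(np.abs(coeffs)) + 1e-6
symbols = coeffs / scale                 # these go straight to the radio

noise = 0.05 * rng.standard_normal(64)   # the channel adds noise
received = (symbols + noise) * scale     # receiver undoes the scaling
x_hat = idct(received, norm="ortho")

print(f"mean squared error: {np.mean((x - x_hat) ** 2):.5f}")
# Noise lands on each coefficient in proportion to its own size, so
# quality degrades gracefully as the channel worsens instead of
# falling off a digital cliff -- and nothing waits on entropy coding.
```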

The idea is not new, arising in the early 90s from studies of video encoding for broadcast distribution. Even so, the concept of doing two things in one operation might reasonably be compared with exactly the situation we discussed earlier, where the cathode ray tube both converts the video signal back to light and normalizes its brightness. Not only does it do those things vanishingly quickly, with a delay of mere nanoseconds to account for the propagation of electrons through the device, but it does both of them at once. Doing it on the GPU, much less the CPU, of a phone is a wonderful application of multi-purpose technology, but it inevitably takes longer.

The Future

All of that is a microcosm of a set of concerns which apply to IT not only in film and TV production, but also in latency-critical applications like VR, and delay is not the only problem which afflicts information technology in applications which were once served by specific equipment. One major issue that sometimes surprises filmmakers is that cellphones are designed to spend the vast majority of their time in a very low-power standby state. Try to use one as a monitor and it quickly becomes clear that its power subsystem was not designed to support ten or twelve hours of continuously active operation.

Commodity problems often mean commodity solutions – USB power banks, to stay with that example, are available off the shelf and much more affordable than cinema camera batteries. The broader solution to all of it, though, is exactly the same one that made it possible to handle video on a pocket phone or even a workstation in the first place: sheer, unadulterated performance, and not just of the CPU. Making networks faster might mean streaming video with less compression, circumventing the multi-frame minimum delays imposed by certain codecs.
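
Some rough arithmetic – ballpark figures, not a claim about any particular link – shows why network speed is the lever:

```python
# What "less compression" asks of the network. Ballpark figures only.
width, height, fps = 1920, 1080, 25
bits_per_pixel = 20                     # 10-bit 4:2:2 sampling

uncompressed = width * height * fps * bits_per_pixel / 1e6
print(f"uncompressed 1080p25: about {uncompressed:.0f} Mb/s")

# The faster the link, the gentler the compression can be.
for link_mbps in (20, 100, 1000):       # streaming, good WiFi, gigabit
    print(f"{link_mbps:>5} Mb/s link needs roughly "
          f"{uncompressed / link_mbps:.0f}:1 compression")
```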

Even now, it’s possible for people to stream video from set to editorial quickly enough for a meaningful real-time feedback loop to develop between production and post. Similarly, more than one company has shown that a television studio floor and its associated gallery can be separated using the public internet, reducing the amount of equipment that has to travel for an outside broadcast and allowing long-suffering vision mixers and graphics people to work sane hours and sleep in their own beds. As technologies like 5G roll out – and they’re not really rolled out yet, despite what providers would like us to believe – the ability to do things remotely and to send less adulterated media to more remote people can only improve.

The Limits

There are just too many reasons for the hardware to get faster and better, and if most of those reasons are to do with making Candy Crush Saga even prettier, it barely matters. As that happens, it will become possible to implement more and more things in commodity hardware; the skyrocketing performance of cellphone cameras is another example. The reason this doesn’t scare the big cinema camera companies, however, remains the same: while the technology evolves, humans remain largely the same. Human fingers need buttons to push, which is why those remote studio galleries are full of essentially low-technology, USB-connected control surfaces with the usual keypads and crossfaders of a vision mixer which has had its core functions deferred into software.

There’s also the small matter of human desire to consider. LED video walls and virtual environments make it possible to produce hugely convincing in-camera composites rendered on off-the-shelf gaming hardware, but then some people like travelling. Some people like designing sets. Certainly, focus pullers might need a better user interface than can reasonably be attached to a phone. Film and TV are an art form, or at least they ought to be, and some of the artists involved have processes they use simply because they like them.

The inevitable march of technology, the financial constraints faced by producers and a sort of artisanal inclination toward familiar tools seem likely to make this a constant balancing act. Since advances in commodity technology seem inevitable regardless, this is, happily, a situation that doesn’t have to involve downsides for anyone – so long as we see it coming and act accordingly. The film industry held out for longer than it really needed to against the march of digital imaging, so we do have a track record of waiting until the moment is right. Here’s hoping that record can be maintained.
