Part 22 - Creative Technology - Non Standard

Read too much film and TV industry technical literature and it’s easy to get the impression that everything about the technology is built to carefully considered specifications. As Philo Farnsworth’s wife was probably aware while she sat in as a test subject and he tinkered with the electronics, it’s often the other way around.

Television was invented first, and the specification was then written around what could be built. That’s been the case ever since, and as a result, a lot of the things we work with aren’t quite as set in stone as we might think. Such as:

How Bright Is The Light?

Instinctively, photographic exposure seems pretty simple. You thumb the meter and it gives you a number. You might aim the bulb in a certain direction, shade it with your hand, or spot meter a specific object, and you might interpret that number, mentally increasing or decreasing it to fulfil some goal, but the measured intensity isn’t subject to interpretation. That’s the exposure it metered.

It sounds simple, but it isn’t. Measuring the intensity of light falling on a meter’s sensor is certainly something we can do in a definitive way. We measure light in lux, and the definition of the lux itself relies on, among other things, the behaviour of the human eye. While the human eye is something of a moving target, being made out of unreliable biology, science gives us numbers that approximate the average human eye reasonably well. Those numbers are fixed, and thus so is the value of a lux.

Getting from a raw reading in lux to an exposure value is a simple enough thing, although most people will be happy to let the meter do the mathematics. For the sake of completeness, exposure value is the binary logarithm of light in lux multiplied by the ISO sensitivity, all divided by 330, which is something you can do with a dozen keystrokes on a calculator.

Some people will have immediately started wondering where the 330 comes from.

Simply put, it’s the meter calibration constant, chosen here to sit halfway between the numbers used by Minolta (who like 320) and Sekonic (who use 340). Why the difference? Historically, the calibration constant was determined by showing a lot of people a lot of photographs and asking which looked correctly exposed. The gap between 320 and 340 is a very small difference. The constant was eventually standardised in ISO 2720:1974, which permits values up to 540. That’s a rather larger difference, and the meter might not even tell you which it’s using.
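For the curious, the whole calculation fits in a few lines of Python. This is just a sketch of the arithmetic described above, not any manufacturer’s firmware, with the calibration constant left as a parameter so the effect of 320 versus 340 is visible:

    from math import log2

    def exposure_value(lux, iso, c=330.0):
        # EV as described above: the binary logarithm of
        # (lux * ISO / calibration constant).
        return log2(lux * iso / c)

    # 1000 lux at ISO 100, with Minolta's and Sekonic's constants:
    print(exposure_value(1000, 100, c=320))   # ~8.29 EV
    print(exposure_value(1000, 100, c=340))   # ~8.20 EV

The two manufacturers disagree by less than a tenth of a stop; the full range ISO 2720 permits, though, spans about three-quarters of a stop.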

So, next time you’re carefully metering an exposure and worrying about a quarter of a stop either way, ponder that the numbers you’re using are based on a lot of people’s opinions, decades ago, not some piece of precision mathematics.

How Loud Is The Sound?

There have always been a huge number of ways of metering audio signals. Digital audio required some new standards, and those standards were duly developed. Documents such as SMPTE RP 155, which dates all the way back to 1990, define reference levels in the same terms they always had been – how big the sine wave of a reference tone should be to produce a certain voltage coming out of a socket.

To be fair, audio levels were already slightly nightmarish from a standardisation perspective. Assuming that 0dBu, represented by 0.775 volts of signal, is alignment level, British engineers would set tone to the number 4 on a peak programme meter designed to the BBC standard ratified as IEC IIa. Other PPMs, to the IEC IIb or Nordic standards, would label the same point on the scale “test”, surrounded by values in dBu. Volume units, the standard behind the slightly infamous VU meter, were already defined differently under French standards than elsewhere; the difference here was a large 6dB, with VU meters reading that same 0dBu alignment tone as either +2 or -4 depending on their nationality. That’s complicated enough.

Naturally, digitisation of audio represented a valuable opportunity to clean this up, and that opportunity was neatly missed by both the European Broadcasting Union and the SMPTE. EBU R68 states that reference level is -18dBFS (that is, 18 decibels below the full scale value the file can represent). SMPTE RP155 describes a slightly different situation, where a reference level of +4dBu (one more notch on a BBC PPM, for those following along at home) is represented by a signal at -20dBFS. The SMPTE approach has more headroom but, again, that’s a very large 6dB offset that is more than enough to make something sound very wrong.
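To make the mismatch concrete, here’s a minimal sketch in Python, assuming the straight-line mapping between analogue level in dBu and digital level in dBFS that each document implies:

    # EBU R68: the 0dBu alignment level sits at -18dBFS.
    def dbfs_ebu(dbu):
        return dbu - 18.0

    # SMPTE RP155: +4dBu sits at -20dBFS, so 0dBu lands at -24dBFS.
    def dbfs_smpte(dbu):
        return dbu - 24.0

    tone = 0.0   # a 0dBu alignment tone
    print(dbfs_ebu(tone), dbfs_smpte(tone))   # -18.0 versus -24.0

Feed the same alignment tone through both mappings and the resulting files disagree by exactly that 6dB.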

Of course, calling this section “How Loud Is The Sound?” is something of a misnomer. Human perceptions of loudness are connected to signal levels, but only in a very roundabout way. Measures such as loudness units try to standardise that, in documents such as EBU R 128 and ISO 80000-8, but even then the target values are anywhere between -16 and -20 depending on programme content.

How Big Is The File?

There was a time when at least some film and TV companies specified recording bitrates in contracts. That’s always been a dubious way of ensuring picture quality, with modern codecs squashing a lot more picture quality into a given bitrate than older ones. When computers first became able to handle video at any level at all, in the early 90s, they used codecs such as Radius Cinepak. It could pack something vaguely recognisable as video onto a single-speed, generation-one CD-ROM, which could deliver about 150 kilobytes per second. A modern codec could achieve far better results with that bandwidth. The limitation was not the data rate from the CD; it was the ability of the computers of the time to decode a complex codec.

Even if we specify the codec, it doesn’t help that much. Many modern codecs are specified in terms of the decoder – that is, any recorded file is correct so long as the reference software will play it back. Most modern codecs use a variety of techniques to achieve compression, such as predicting new frames from previously decoded ones. A good implementation of a codec can use all those techniques. A less good one might use fewer, producing a poorer picture while still consuming as much bitrate, and using the same codec, as a better file.

But none of this matters if we select, say, a 50-megabit file and don’t actually get one – and that’s, well, normal. You can easily record a file of a given bitrate in a camera, then examine that file using software such as VLC, which will present a measurement of the real bitrate. The number can be lower than expected. Much lower, perhaps. It depends on the picture content, but some real-world files advertised as fifty-megabit can measure as low as 35 even on fast action scenes.
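Anyone can replicate the check, since the average rate is just the file size in bits divided by the running time. A sketch in Python, with hypothetical numbers; note that this measures the whole muxed file, audio and container overhead included, so the video stream alone is lower still:

    import os

    def measured_bitrate_mbps(path, duration_seconds):
        # Average muxed bitrate: size in bits over running time.
        bits = os.path.getsize(path) * 8
        return bits / duration_seconds / 1_000_000

    # A hypothetical "50 megabit" clip, 60 seconds long, 290MB on disk:
    # 290e6 bytes * 8 / 60s comes to roughly 38.7Mb/s - well short of 50.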

Why does that happen? We’ll skip the details, but the problem is usually called rate control, and it is not an exact science. Most codecs need to decide how much compression to apply before starting the mathematical work of actually applying it, yet the size of the resulting data can’t be calculated without doing that work. If a frame ends up taking less space than it could have, its quality has been compromised more than it needed to be, but the only way to recover that quality is to do the compression all over again, using gentler settings.

Some devices can redo compression like that to get better numbers, but only to a point; at some stage, the compressed data must be written to a file to make room for more frames as the camera continues shooting. Worse, if we compress a frame too gently and it ends up larger than it should be, then we have a serious problem. We might exceed the data rate the flash card can sustain and drop frames. Modern flash is big and fast, but we might still hit trouble with playback on constrained devices like cellphones, whose hardware video decoders can struggle with nonstandard files.
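Here’s a deliberately toy illustration of that feedback loop in Python – not any real codec’s algorithm – in which a stand-in encoder only reveals its output size once the work is done, so the quantiser can only react one frame late:

    TARGET_BITS = 50_000_000 / 25.0   # a hypothetical 50Mb/s at 25fps

    def encode(complexity, q):
        # Stand-in encoder: output size falls as the quantiser q rises.
        return complexity * 2 ** (-q / 6.0)

    q = 26.0
    for complexity in [4e7, 4e7, 9e7, 9e7, 3e7]:   # made-up scene activity
        bits = encode(complexity, q)
        # React after the fact: overshoot means a harsher q next frame;
        # undershoot means bits, and quality, were left on the table.
        q += 4.0 * (bits / TARGET_BITS - 1.0)
        print(f"{bits / 1e6:.2f} Mb this frame, next q = {q:.1f}")

When the made-up complexity jumps, the first difficult frame overshoots; when it falls again, the easy frame is squeezed harder than it needed to be.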

This encourages designers to be very conservative when setting up camera codecs, especially in more cost-constrained designs. Things differ when we’re compressing data to be uploaded to, say, YouTube. There, the process is not really time-constrained and we can go over the file more than once – an approach often called two-pass encoding. This makes for much better rate control, but even then, the selected bitrate is a maximum, not a guarantee.
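Under the same toy model, a sketch of why the second pass helps: the first pass measures how hard each frame is, and the second divides the bit budget in proportion, so the total lands on target without per-frame guesswork:

    complexities = [4e7, 4e7, 9e7, 9e7, 3e7]   # from a first analysis pass
    budget = len(complexities) * 2_000_000     # 2Mb per frame on average

    total = sum(complexities)
    for c in complexities:
        print(f"{budget * c / total / 1e6:.2f} Mb")   # hard frames get more

Even then, a sensible encoder treats each allocation as a ceiling; frames that simply don’t need their share leave bits unused, which is one more reason delivered files come in under the advertised number.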

How Wide Is The Shot?

Lenses breathe – they zoom in and out slightly whenever the focus changes. Lenses intended for film and TV work are usually designed to minimise it, as it can be distracting during focus pulls, though most still do it to at least a small degree. Lenses for stills, on the other hand, often aren’t optimised around breathing, because a still image can’t show a change in size. Assumptions based on the behaviour of movie lenses, where breathing makes a negligible difference to frame size, don’t always apply to stills glass.

And in 2021, lenses really designed for stills are very often being pressed into service on moving-image cameras. This is especially true now that many camera manufacturers are implementing advanced autofocus on motion picture cameras, based on the very effective systems designed for stills, which requires the company’s own electronic lenses with fast servos built in.

At first glance this seems like a plan with no drawbacks. Stills lenses are comparatively lightweight, inexpensive and often boast fantastic sharpness and contrast. Among the things we sacrifice for that are real, mechanically-linked, reliable manual focus pulling and stepless aperture adjustment. Those compromises are fairly well known. What’s less often mentioned is the sheer amount of breathing that goes on in many stills lenses. Common midrange zooms, often supplied with higher-end cameras, may advertise a long end of somewhat over 100mm, but at the extremes of focus may not actually achieve a triple-digit effective focal length.
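One way to put a number on it is to measure the horizontal field of view and work backwards to an effective focal length. A sketch in Python, using the Super 35 sensor width mentioned below; the two angles are hypothetical measurements rather than any specific lens:

    from math import tan, radians

    def effective_focal_length(sensor_width_mm, hfov_degrees):
        # The usual field-of-view relation: f = (w / 2) / tan(hfov / 2).
        return (sensor_width_mm / 2.0) / tan(radians(hfov_degrees) / 2.0)

    print(effective_focal_length(24.89, 13.5))   # ~105mm at infinity focus
    print(effective_focal_length(24.89, 14.9))   # ~95mm at close focus

A degree and a half of breathing is all it takes to knock a nominal 105mm long end out of triple digits.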

It seems increasingly likely that problems like this are better solved by using very high-resolution sensors and correcting the issue in digital signal processing, which doesn’t complicate the lens design and creates few other problems. Still, particularly where we’re trying to match shots, a lot of very popular modern lenses suffer from the fact that, at the extremes of focus, the long end of a 105mm zoom is simply not 105mm anymore.

The Good Old Days

As we’ve seen, a good few of the things many people assume are completely definitive might actually be based on quite a lot of ad-hoc working practices and rules of thumb. Maybe it’s inevitable; as technology improves, things become more complex, and might confound old assumptions.

That’s particularly true of technical minutiae such as the flicker visible on camera when shooting certain types of mains-powered practical lighting, particularly the gas discharge lamps used in industrial facilities, or even fluorescent tubes. In decades past, if these things flickered at all, they would routinely flicker at a predictable multiple of the local mains frequency, which is 50 or 60Hz depending on the territory. Gas discharge lighting has used electronic ballasts for some time, and is now being supplanted very quickly by LED; in both cases, the electronics may still introduce flicker at mains frequency, but may equally work at whatever frequency the designer found convenient. Similar issues attend shooting video monitors; there were well-known techniques for shooting CRT displays, but they don’t work for modern TFT-LCD and OLED panels at all.
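At least the old flicker rules of thumb can be computed rather than memorised: any exposure lasting a whole number of flicker periods sees the same total light every frame. A sketch in Python, assuming the classic case of discharge lighting pulsing at twice a 50Hz mains supply:

    def safe_shutter_angles(flicker_hz, fps, max_angle=360.0):
        # Exposures that are whole multiples of the flicker period
        # average the variation out; express each as a shutter angle.
        period = 1.0 / flicker_hz
        n, results = 1, []
        while 360.0 * fps * n * period <= max_angle:
            results.append((n * period * 1000.0, 360.0 * fps * n * period))
            n += 1
        return results

    # 100Hz flicker (50Hz mains, two pulses per cycle) at 24fps:
    for exposure_ms, angle in safe_shutter_angles(100.0, 24.0):
        print(f"{exposure_ms:.1f}ms -> {angle:.1f} degrees")

The familiar 172.8-degree shutter falls straight out of the arithmetic – but only if the light really does flicker at a multiple of mains, which, as above, is no longer a safe assumption.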

Even camera sensor sizes are no longer entirely standard. Cameras described as “Super-35mm” may have sensors close to the 24.89 by 18.66mm of a true Super 35mm frame, but in practice there can be noticeable differences; where sensors are larger than standard, there may be coverage issues – even if just soft corners – with lenses designed for a more standards-observant world.
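The coverage arithmetic, at least, is straightforward: the lens’s image circle must span the sensor diagonal. A quick sketch, with a hypothetical slightly oversized sensor for comparison:

    from math import hypot

    def required_image_circle(width_mm, height_mm):
        # The sensor diagonal the lens's image circle must cover.
        return hypot(width_mm, height_mm)

    print(required_image_circle(24.89, 18.66))   # true Super 35: ~31.1mm
    print(required_image_circle(26.2, 19.6))     # hypothetical larger sensor: ~32.7mm

A lens built to just cover the true 31.1mm diagonal may show dark or soft corners on the larger chip.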

In the end, this is why we try to be objective when designing our technology – it’s why we have standards and systems of measurement in the first place – but it’s also why camera tests, and tests of other film and TV gear, have never been more critical.
