The Resolution Revolution
We can now capture video in much higher resolutions than we can transmit, distribute and display. But should we?
Resolution is not really a natural phenomenon. In this digital age, it’s something we’ve imposed on nature. That’s not to say the concept is entirely unknown in the analog world: we happily talk about “resolving power” and about optical and electronic transfer functions. These concepts still apply in the digital world, by and large, but with the additional complication of spatial quantization: pixels, in other words.
You could argue that film has a resolution, and it does, in the sense that there’s a threshold below which meaningful information is lost in the noise floor, but it’s not the same as a pixel grid. Stretching the point almost to breaking, you could draw an analogy between film grain (the light-sensitive crystals in the emulsion) and pixels. That might hold for an individual frame (essentially a photograph), but moving images rely on multiple sequential frames, and each of those frames has a completely different pattern of grain.
Ironically, this continuous variation can increase resolution over time, in a process akin to “dithering”. Fixed-shape pixels impose a hard limit on resolution, but with film, these “chemical pixels that aren’t really pixels” are different in every frame, which means that much finer details can be resolved than you would expect from measuring the grain size (or its average) alone. The concept remains valid in the digital domain, where adding noise can smooth out color contour lines in, say, 8-bit video by forcing small variations in color values between successive frames. The same process can increase apparent quality in digital audio, too.
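To make the idea concrete, here is a minimal NumPy sketch, not taken from the article, of dithering a smooth gradient down to a handful of quantization levels. A single hard-quantized frame shows contour banding, while averaging a series of independently dithered “frames” recovers the smooth ramp, which is roughly what ever-changing film grain does for moving images. The bit depth, frame size and frame count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth horizontal ramp, values in [0, 1], standing in for one frame.
gradient = np.tile(np.linspace(0.0, 1.0, 480), (270, 1))

def quantize(frame, bits):
    """Round every sample to the nearest of 2**bits representable levels."""
    levels = 2 ** bits - 1
    return np.round(np.clip(frame, 0.0, 1.0) * levels) / levels

# Hard quantization: long runs of identical values, i.e. visible contour bands.
banded = quantize(gradient, bits=4)

# Dithered quantization: add noise of about one quantization step, different
# for every "frame", before rounding (what grain does for free on film).
step = 1.0 / (2 ** 4 - 1)
dithered_frames = [
    quantize(gradient + rng.uniform(-step / 2, step / 2, gradient.shape), bits=4)
    for _ in range(32)
]

# Averaging the dithered frames over time recovers the smooth ramp far more
# closely than any single hard-quantized frame can.
temporal_average = np.mean(dithered_frames, axis=0)
print("mean error, hard-quantized frame: ", np.abs(banded - gradient).mean())
print("mean error, 32-frame dither average:", np.abs(temporal_average - gradient).mean())
```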
In this context, it’s worth mentioning that while VHS was an objectively sub-optimal video recording system, the visible resolution of an individual paused frame was much worse than the subjective resolution of the moving picture.
Analog television didn’t have pixels, but it was definitely quantized in at least two senses: in time and in vertical resolution. From the introduction of 405-line TV in the UK, pictures were captured and broadcast twenty-five times per second, or fifty fields per second once interlacing is taken into account. And even though systems before the late Nineties were strictly end-to-end analog, they were sliced up into horizontal lines. The ultimate resolution was determined by the television system itself and by the available analog wireless bandwidth. So, mathematically, it is perfectly valid to talk about resolution in this way, but it is still significantly different from resolution in a digital system.
For most of its history, film had higher resolving power than any broadcast video technology. One of the great joys of the rollout of HD TV was being able to watch old films in higher fidelity than we had ever seen them, not only because of the increased resolution of the HD system, but because of the increased resolution of film scanning systems. If you don’t remember films looking that good in your local cinema in your childhood, remember that the print you were watching was probably several generations removed from the original negative, and its quality suffered accordingly.
Fast-forward to today, and you can buy a 12K cinema camera and an 8K domestic television.
Broadcast standards led the way on video resolution until around 2006. The only deviation was with early Sony CineAlta cameras which, while still essentially HD in resolution, could be switched to the more film-friendly 24p. Then RED Digital Cinema burst onto the scene with the RED One, a fully working 4K video camera whose difficult workflow didn’t deter early adopters.
4K was an essential breakthrough because it finally broke free of the broadcast barrier. You might think the advance went unnoticed by the majority of TV viewers, not least because only now are 4K programs becoming widely available through VOD services like BBC iPlayer. In fact, the difference was noticeable even to viewers in SD, never mind HD. The reason is that the more information you feed a typical video compression system, the better the result.
This was first noticeable when the Canon EOS 5D Mk II was picked up by filmmakers for its ability to shoot HD video with a full-frame sensor and a film-like look. Suddenly, photographers started using the terms “video” and “cinema” in the same breath. The makers of the popular hospital drama House shot an episode with the 5D Mk II, which aired on May 17th 2010, and it had an entirely different visual feel. This was partly due to the full-frame aesthetic, but also because, even when delivered in SD, the digital TV codecs of the time could do a better job with the additional information fed to them from the higher-resolution source material. Parenthetically, the episode also looked “natural” and “dramatic” because the small digital cameras were unobtrusive and could go anywhere, and because of the shallow depth of field from the full-frame sensor.
Yet what was holding everything back was “broadcast standards”. This was a legacy of the analog domain, where everything had to comply with the same standard or it would never work. HD standards were an example: a mix of 720p and 1080i, different aspect ratios, a spread of frame rates (both integer and fractional), and a choice of interlaced or progressive scanning were all options.
The moment video became computer data, moving images could be any size and any frame rate, subject only to the capabilities of the computer systems involved. Rather like film, which is not bound to any distribution format, computer-based video wasn’t tied to any single standard either.
This was the starting pistol for digital cinematography: capturing to the medium of digital video with the intention of matching the cinematic qualities of film. The RED One had a complicated and completely unique workflow. The camera’s output was REDCODE RAW, a compressed format taken directly from the sensor. As such, it needed work before it could even be viewed, but the result was a 4K image with impressive color depth and accuracy. 4K matched the perceived quality of cinema and allowed filmmakers to take on serious productions without the cost and inflexibility of film.
Digital cinema cameras haven’t stopped increasing in resolution. 8K is now commonplace, and 12K and even 17K cameras are available off the shelf. Digital cinematographers have never had it so good. But viewers can’t buy 17K televisions. Three or four years ago, 8K televisions started to become available, but they are not commonplace even now, partly because there is very little 8K content, but mainly because people can’t see the difference. On the face of it, that’s surprising: after all, 8K has four times the pixels of 4K. But that only amounts to a 2x increase in linear resolution. It also means that with only a slight loss of sharpness, a touch of camera shake or any other degradation, the image effectively ceases to be 8K and drops back to a much lower resolution. At normal viewing distances on, say, a 42” screen, 8K doesn’t look any different to 4K to a casual viewer. From a commercial perspective, broadcasters find it hard to justify going beyond 4K; globally, HD remains the default resolution for broadcasting.
This is disappointing for resolution enthusiasts, especially since 8K viewed on a very large screen is an incredible experience. If you imagine watching 4K on a 65” screen, then to see 8K at the same visual resolution you would need a grid of four 65” screens (in effect, a single 130” screen) viewed from the same distance.
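As a rough back-of-the-envelope check on that screen-size arithmetic, the sketch below uses assumed numbers that are not from the article: a 16:9 panel, a 2.5 m viewing distance, and visual acuity of roughly 60 pixels per degree. It shows that 8K on a 65” screen packs detail far beyond what the eye resolves at that distance, while 8K on a 130” screen at the same distance offers the same pixels per degree as 4K on a 65” screen, just spread over four times the picture area.

```python
import math

def pixels_per_degree(diagonal_inches, horizontal_pixels, distance_m):
    """Horizontal pixels packed into one degree of visual angle at the screen
    centre, assuming a 16:9 panel."""
    width_m = diagonal_inches * 0.0254 * 16 / math.hypot(16, 9)
    pixel_m = width_m / horizontal_pixels
    degrees_per_pixel = math.degrees(2 * math.atan(pixel_m / (2 * distance_m)))
    return 1.0 / degrees_per_pixel

VIEWING_DISTANCE_M = 2.5   # assumed living-room viewing distance

for label, pixels, diagonal in [
    ("4K on a 65-inch screen ", 3840, 65),
    ("8K on a 65-inch screen ", 7680, 65),
    ("8K on a 130-inch screen", 7680, 130),
]:
    ppd = pixels_per_degree(diagonal, pixels, VIEWING_DISTANCE_M)
    print(f"{label}: about {ppd:.0f} pixels per degree")
```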
Ultra-high video resolutions are not going away, though. They give content creators new options, and while consumers do not see the full resolution directly on their screens, they still benefit from the higher-quality sources.
The advantages of ultra-high resolution video fall into two main groups.
The first is avoiding digital artefacts. The moment you impose a regular grid on reality, which is an inevitable part of the digitization process, you are vulnerable to aliasing. This type of artefact is often thought of as “staircasing”, most easily visible on diagonal lines. Perhaps counter-intuitively, aliasing is everywhere, not just in the most obvious places. It degrades the picture and can only be reduced in two ways: by filtering, which loses detail, or by increasing video resolution to the point where aliasing will be far less significant to the eye.
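Here is a small NumPy sketch, with purely illustrative numbers, of that second remedy. A diagonal stripe pattern finer than the output grid is rendered twice at the same delivery size: once by point sampling directly, and once at four times the resolution before being box-filtered down. The direct version folds the detail into bold, false low-frequency stripes, while the oversampled version stays close to its true local average.

```python
import numpy as np

def stripes(height, width, cycles):
    """Diagonal sine stripes with `cycles` periods across the frame."""
    y, x = np.mgrid[0:height, 0:width]
    phase = (x + y) / (height + width)       # varies along a 45-degree diagonal
    return 0.5 + 0.5 * np.sin(2 * np.pi * cycles * phase)

OUT_H, OUT_W, FACTOR = 270, 480, 4
CYCLES = 675   # about 0.9 cycles per output pixel: finer than the grid can carry

# Point sampling at the delivery resolution: the detail aliases into a coarse,
# entirely false stripe pattern.
direct = stripes(OUT_H, OUT_W, CYCLES)

# Render at 4x the resolution, then average each 4x4 block into one output pixel.
hires = stripes(OUT_H * FACTOR, OUT_W * FACTOR, CYCLES)
oversampled = hires.reshape(OUT_H, FACTOR, OUT_W, FACTOR).mean(axis=(1, 3))

# The aliased render invents large false contrast; the oversampled render sits
# close to mid-grey, which is the honest rendering of unresolvable detail.
print("contrast (std) of direct render:     ", round(float(direct.std()), 3))
print("contrast (std) of oversampled render:", round(float(oversampled.std()), 3))
```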
The second is the flexibility that comes from being able to zoom digitally into an image that has a higher resolution than the delivery format without losing quality. This is an incredible boon for live production, where you can cover the same area with fewer cameras. It also allows for image stabilization, where the edges of the picture are sacrificed to “hold” the shot in place, again without the viewer noticing.
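The sketch below shows the principle of that kind of punch-in, using a synthetic UHD frame in NumPy to stand in for decoded video: an HD delivery window is cut pixel-for-pixel from the larger source, so a reframe or a stabilization offset simply moves the crop rectangle rather than scaling the picture. The frame contents and the offsets are invented for illustration.

```python
import numpy as np

SRC_H, SRC_W = 2160, 3840   # UHD source frame
OUT_H, OUT_W = 1080, 1920   # HD delivery window

def punch_in(frame, centre_x, centre_y):
    """Cut an HD-sized window centred on (centre_x, centre_y), clamped so the
    window never leaves the source frame. No scaling is involved."""
    x0 = int(np.clip(centre_x - OUT_W // 2, 0, frame.shape[1] - OUT_W))
    y0 = int(np.clip(centre_y - OUT_H // 2, 0, frame.shape[0] - OUT_H))
    return frame[y0:y0 + OUT_H, x0:x0 + OUT_W]

# A synthetic UHD frame standing in for one decoded frame of camera footage.
frame = np.random.default_rng(0).random((SRC_H, SRC_W), dtype=np.float32)

# A stabilizer (or a vision mixer operator) supplies per-frame offsets; these
# values are invented, wobbling around the centre of the frame.
for dx, dy in [(0, 0), (35, -12), (-28, 19)]:
    crop = punch_in(frame, SRC_W // 2 + dx, SRC_H // 2 + dy)
    print(crop.shape)   # always (1080, 1920): a full-quality HD picture
```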
There are other advantages, too. Shooting and storing video at a higher resolution than the broadcast format futureproofs the content should higher distribution resolutions become standard. And, as mentioned earlier, compressing video from a higher-resolution source allows codecs to do a better job, increasing the perceived quality even when the result is viewed at a lower resolution.
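As a hedged illustration of that “master high, deliver lower” idea, the snippet below shells out to ffmpeg to downscale a notional UHD master to HD at encode time, so the encoder starts from an oversampled, relatively clean source. The file names are placeholders and the encoder settings are plausible defaults rather than anything prescribed by the article.

```python
import subprocess

# Downscale a (hypothetical) UHD master to HD with a high-quality Lanczos
# filter and encode with x264 at a quality-targeted rate. Requires ffmpeg
# to be installed and on the PATH.
subprocess.run(
    [
        "ffmpeg",
        "-i", "uhd_master.mov",                  # placeholder high-resolution master
        "-vf", "scale=1920:1080:flags=lanczos",  # downscale as part of the encode
        "-c:v", "libx264", "-crf", "18",         # H.264, quality-based rate control
        "-c:a", "copy",                          # pass audio through untouched
        "hd_delivery.mp4",
    ],
    check=True,
)
```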
Exponential progress in computation, compression, bandwidth and storage means that resolutions that would once have seemed like science fiction are now routinely available. This, along with HDR, means that today’s video quality is easily comparable to, and quite possibly exceeds, that of the very best film stock.
While there are still technical and commercial reasons for limiting video distribution to HD and sometimes 4K, capturing and storing video at much higher resolutions keeps it compatible with whatever video systems come next, and preserves a priceless record of our civilization for viewers in the far future.