Is Gamma Still Needed?: Part 3 - The Program Chain, Surround Brightness Ratio (SBR) & The HVS

There are two components of gamma that have quite different purposes. One of them is always necessary because displays and their surroundings never match the brightness of the original scene. The other is a compression technique.

If one considers viewing an object directly with the naked eye, the space through which the light travels is linear. Consider a camera alongside the viewer, driving a display alongside the object that is equally as bright as the original scene. To make the display resemble the direct view, the overall transfer function, now known as the Optical-to-Optical Transfer Function (OOTF), would have to be linear.

In one branch of photography based on film, prints are produced. These are intended to be viewed using reflected ambient light, under which they will appear as bright as their surroundings. When the print is produced from the negative, the gamma of the printing process is designed substantially to cancel the gamma of the negative so that the center of the overall light transfer function obtained is linear or nearly so.

However, movie film, transparencies, television and computers are designed such that the screen emits light, either by generating it internally or by reflecting the light from a projector, and ambient light is undesirable. Such images need to be viewed in subdued lighting conditions because the power that the screen can develop may be limited and because ambient light falling on the screen reduces contrast.

It was established long ago that if light-emitting or reflecting screens in areas of subdued lighting are used to display pictures that are linear representations of the original scene, they appear to the HVS to be lacking in contrast. This is an example of what is known as the surround effect, which takes place within the human visual system. An object of interest surrounded by a dark area appears to have less contrast. When surrounded by bright areas the reverse is true.

For such viewing, unity overall gamma is not a goal, and some overall gamma greater than unity is always present in movies, transparencies and electronic screens as a function of the peak-to-surround brightness ratio (SBR), that is, the ratio between the peak brightness of the display and the brightness of the surroundings.

Work by the BBC suggests that the overall gamma, or OOTF, fits the following expression:

Overall Gamma = 1 + 0.2 log10 SBR

Overall gamma is the video equivalent of a loudness control in audio that equalizes the signal when replayed at less than the usual volume. When the display is as bright as the surroundings, as in looking at a photograph, gamma reverts to unity, as one would expect.
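The BBC expression quoted above is simple enough to state directly in code; the following is an illustrative sketch, and the SBR values used are examples rather than figures from any standard.

```python
import math

def overall_gamma(sbr: float) -> float:
    """Overall system gamma (OOTF exponent) as a function of the
    surround brightness ratio, per the expression quoted above."""
    return 1.0 + 0.2 * math.log10(sbr)

# A display exactly as bright as its surroundings needs unity gamma:
print(overall_gamma(1.0))    # 1.0
# A display much brighter than its surround needs more overall gamma:
print(overall_gamma(100.0))  # 1.4
```

Note that the unity case falls out of the expression automatically, since log10 of 1 is zero.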

Fig.1 is representative of most electronic imaging systems. Light from the scene is converted to an electrical signal and that conversion has an Optical-to-Electrical Transfer Function (OETF). That electrical signal somehow arrives at a display having an Electrical-to-Optical Transfer Function (EOTF).

In Fig.1a) the traditional CRT-based television system is shown. The EOTF of the CRT has a gamma of 2.5. The OETF (the correction at the camera) has a gamma of typically 0.45. The EOTF of the CRT is not fully compensated by the OETF of the camera, which means that the OOTF is not linear.

OOTF gamma in traditional television is typically about 1.1 - 1.2, whereas in the darker environment of the cinema it is more like 1.5.
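Those figures compose neatly: cascading two power-law transfer functions multiplies their exponents, which is why a 0.45 OETF driving a 2.5 EOTF gives an OOTF gamma of about 1.1. A minimal sketch using the nominal exponents from the text:

```python
def oetf(light: float, gamma: float = 0.45) -> float:
    """Camera gamma correction: scene light (0..1) to signal."""
    return light ** gamma

def crt_eotf(signal: float, gamma: float = 2.5) -> float:
    """CRT transfer function: signal (0..1) to screen light."""
    return signal ** gamma

# Cascading two power laws multiplies the exponents, so the
# overall OOTF is light ** (0.45 * 2.5) = light ** 1.125.
light = 0.5
assert abs(crt_eotf(oetf(light)) - light ** 1.125) < 1e-12
```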

Fig.1b) shows an example where a modern non-CRT display that has a native linear EOTF is in use with a traditional TV signal. Prior to the display a gamma function must be applied to the video signal to compensate for the gamma that would have been introduced by the CRT. In other words the receiver has to simulate a CRT and oppose most of the gamma correction inherent in the signal, leaving only an appropriate overall OOTF to compensate for surround effect.
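As a rough sketch of the arithmetic involved (the 0.45 source gamma and 1.2 target OOTF are the nominal figures from the text; real receivers are more sophisticated than a single power law), a linear-EOTF display would apply the quotient of the two exponents:

```python
SOURCE_GAMMA = 0.45  # gamma correction already in the legacy signal
TARGET_OOTF = 1.2    # desired overall gamma for a dim TV surround

# A linear-EOTF display must apply whatever exponent turns the
# 0.45-encoded signal into the target OOTF: 1.2 / 0.45 ~= 2.67,
# close to, but not exactly, the CRT's 2.5.
decode_gamma = TARGET_OOTF / SOURCE_GAMMA

def display_process(signal: float) -> float:
    """Non-linearity the receiver applies ahead of a linear display."""
    return signal ** decode_gamma

scene_light = 0.25
signal = scene_light ** SOURCE_GAMMA
assert abs(display_process(signal) - scene_light ** TARGET_OOTF) < 1e-12
```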

When the CRT was the only form of display, a fixed SBR could be assumed and a fixed overall OOTF could be applied. Today the camera can no longer know the transfer function of the display, where it is situated, or how bright it is, so a fixed rendering intent at the camera is unlikely to be optimal.

Fig 1. At a) the OETF at the source has a gamma of about 0.45 which does not quite compensate for the EOTF of the CRT, giving an OOTF gamma of about 1.2.  At b) the source is unchanged, but a non-linear process must precede the linear display to remove most of the effect of gamma correction.  At c) a linear system is shown in which the overall gamma is determined at the display, which alone knows the surround lighting conditions.

Now that the CRT is obsolete and the need to compensate for its non-linearity has gone away, what effect does gamma have today? Fig.1b) shows that at the transmitting end, nothing has changed. After the linear camera sensor, non-linearity is encoded in the electrical domain as usual. The difference is that in order for a linear display to be compatible with legacy video signals the gamma compensation must be removed again in the electrical domain by a decoder, prior to the display. Gamma used in that way is no more and no less than a compression codec and must be assessed in that context.

As will be seen, gamma has all the attributes of a compression system. These include being lossy, introducing artifacts, making production difficult, suffering generation loss and concatenating poorly with other compression codecs such as MPEG and its successors. We also expect all that to get worse if the compression factor is increased.

The CRT also had limited dynamic range and modern display technologies are much better in that respect, as indeed are modern cameras. The cameras can produce more information and the screens can display it. The difficulty is that legacy television production and delivery systems were optimized for the limited dynamic range of the CRT and could not handle the increased information.

Fig.1c) shows an alternative in which the camera and the display are linear, and the EOTF applied at the display exists for the sole purpose of correcting for the surround effect, which can only be known at the display. Such a linear-light system has many advantages, especially for production.

High Dynamic Range (HDR) came about to deal with developments in camera and display technology. It may be as well to consider what dynamic range means. Unfortunately there seems to be no single definition of dynamic range in video and as a result it is necessary to be extremely cautious when making comparisons. Dynamic range is often measured in stops, used in the photographic sense, where one stop is a doubling and the number of stops of dynamic range is the base-2 logarithm of the ratio, whatever that ratio might be.
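A stop being a base-2 logarithm is simple enough to put in code; the ratios below are merely example figures.

```python
import math

def stops(ratio: float) -> float:
    """Dynamic range in photographic stops: the base-2 log of the ratio."""
    return math.log2(ratio)

print(stops(2.0))    # 1.0 — one doubling is one stop
print(stops(100.0))  # ~6.64 stops for a 100:1 ratio
```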

The definition of dynamic range adopted by the ITU was based on the relationship between the peak value of the signal and the size of the first quantizing step up from black. This is unsatisfactory because it fails to address the dynamic range of analog signals such as emerge from sensors, and it fails to take into account the psycho-optics of the dark parts of images.

The definition of dynamic range found, for example, in BBC White Paper 309 is the ratio between the brightest and darkest parts of the screen. We see immediately that dynamic range defined in that way is entirely controlled by the capabilities of the display. The full dynamic range could be obtained with a single bit that is one for white and zero for black, which tells nothing about the picture quality.

Dynamic range defined in that way is perhaps useful only for the binary condition found in printing, where black ink on white paper manages to absorb about 99 percent of the incident light whereas the paper reflects most of it, to give a dynamic range of about 100:1, which is a little under 7 stops.

A better definition of dynamic range for television is given in BBC White Paper 283 as the ratio between the brightest and darkest parts of a grey scale that is portrayed without visible artifacts. Dynamic range defined in that way becomes a subjective measurement that tells us how well the whole range of brightness is being reproduced with respect to the sensitivity of human vision. That might advise us what signal-to-noise ratio we might need and if and how it might change with brightness.

The most efficient system from an information theory standpoint is one that adopts perceptual uniformity, which means that at no screen brightness is the signal quality significantly better than it needs to be. There are many ways of obtaining perceptual uniformity and gamma is only one of them.
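As a sketch of why gamma achieves a degree of perceptual uniformity (the 0.45 exponent and 8-bit depth are illustrative choices here, not taken from any particular standard): quantizing in a gamma-corrected domain trades precision near white, where vision tolerates the loss, for precision near black, where it does not.

```python
def quantize(x: float, bits: int = 8) -> float:
    """Round x (0..1) to the nearest of 2**bits - 1 equal steps."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

def linear_error(light: float) -> float:
    """Relative error when light is quantized in the linear domain."""
    return abs(quantize(light) - light) / light

def gamma_error(light: float, g: float = 0.45) -> float:
    """Relative error when light is gamma-encoded, quantized, decoded."""
    decoded = quantize(light ** g) ** (1.0 / g)
    return abs(decoded - light) / light

# Near black, 8-bit linear coding puts its coarsest relative steps
# exactly where the HVS is most sensitive; gamma coding does not.
print(linear_error(0.01), gamma_error(0.01))
```

Running this shows the relative quantizing error near black is far smaller with gamma coding than with linear coding at the same bit depth, which is the sense in which gamma acts as a perceptual compression scheme.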

Once a perceptual approach is adopted, we are straight back into the subject of psycho-optics, without which little progress can be made. We disappear into the arcane world of research into the HVS where names such as Barten, de Vries, Rose, Weber, Fechner, Schreiber, Moore and Stevens are quoted.

It does not matter which of the works of these good people we read, there is one common denominator, which is that they are all discussing the effects of light entering a human eye and the reaction to it. In the case of electronic imaging, that light reaching the eye comes from one place only, and that is the display. It is only at the display that we can possibly know the surround brightness ratio and the overall gamma needed. Only the display knows the dynamic range of which it is capable and whether the signal it receives contains enough information to explore that dynamic range without artifacts. 
