Camera Lenses Part 2: Types Of Optical Distortion

High quality lens design is a scientific art. In Part two of this four-part series, John Watkinson explains some of the issues involved in optics and camera lenses.


This article was first published in 2015. It and the rest of this mini series have been immensely popular, so we are re-publishing it for those who missed it first time around.


A lens is a mapping device that transfers a two-dimensional view of a real object to an image on our sensor. In most cases the sensor is flat; electronic sensors in TV cameras are all flat, so the lens ought to map the flat sensor to a flat object plane, such that if we tried to shoot a flat surface square to the optical axis it would be in focus everywhere. A pinhole camera can do that, but real lenses sometimes fall short. Frequently the shortcoming is that for a planar object, the image plane in which focus is achieved is roughly spherical. As focus is usually assessed near the centre of the image, the result is usually that the focus appears to fall off towards the corners, although it could be the other way round.
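To put a rough number on that falloff, here is a minimal sketch, assuming the surface of best focus is a sphere: the sag of a sphere of radius R at image height h is approximately h²/(2R), which is how far focus drifts away from a flat sensor towards the corners. The radius and image heights used are illustrative assumptions, not figures for any real lens.

```python
# Rough sketch: defocus across a flat sensor when the surface of best
# focus is curved. The sag of a sphere of radius R at image height h is
# approximately h^2 / (2R), which is how far the curved focal surface
# falls away from the flat sensor. The radius below is purely
# illustrative, not taken from any real lens.

def field_curvature_defocus_mm(image_height_mm: float, radius_mm: float) -> float:
    """Approximate axial focus error at a given image height."""
    return image_height_mm ** 2 / (2.0 * radius_mm)

# Super 35-ish sensor: the corner sits roughly 15 mm from the centre.
for h in (0.0, 5.0, 10.0, 15.0):
    dz = field_curvature_defocus_mm(h, radius_mm=100.0)
    print(f"image height {h:4.1f} mm -> focus error {dz:5.3f} mm")
```

The error grows with the square of the image height, which is why the centre can look sharp while the corners quietly let go.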

The lens in the human eye has a fair approximation to a spherical image plane, but that works just fine because the sensor is also spherical; a good shape for a swivelling eye. On another occasion we can look at the consequences of watching video from fixed sensors with a swivelling eye, but for now, back to the plot.

Lenses are usually designed in such a way that the field is reasonably flat. The more off-axis the light gets, the longer the focal length should be. One way of flattening the field is to insert an element that is shaped to perform correction. That works on a fixed focal length lens, but not in a zoom.

The elements in TV lenses are all axisymmetrical. In other words, the three-dimensional shape of the lens is determined by taking a two-dimensional section and rotating it about the optical axis. As a result, what happens in an image is more a function of the distance from the centre than it is of direction. If you ever find a TV lens that is directional, for example in focus at one side but not the other, then one of the elements is off centre or tilted, either because it hasn’t been assembled properly or because it has been beaten up.

Given the axisymmetric nature of lenses, they never have any trouble mapping circular objects provided the centre of the object is on the optical axis. However, things might be different if the object were not centred. Imagine an onion in which all of the rings are exactly the same thickness. We cut it in half and use it as a test chart. On the sensor, we ought to find a series of rings all the same thickness. Chances are we won’t. In some cases, the rings might get wider as we move outward; in others they might get narrower. In others still, they might do one thing near the centre and the other near the edge.

Halved onions make for boring television, so what will be the effect of this distortion on some other shapes? Let’s shoot a square object with a lens in which the rings get further apart as the distance from the optical axis increases. The result is that the square is no longer comprised of straight lines. The corners are pulled out until it resembles a pin-cushion. And if the reverse is the case, you’ve guessed it, the square becomes barrel-shaped. When shooting an off-centre circle with a lens like that, things can easily go pear-shaped.
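In image processing, both behaviours are commonly approximated with a polynomial radial model; a minimal sketch of the single-coefficient version follows, where the sign of the coefficient selects pin-cushion or barrel. The coefficient values and test point are made up for illustration.

```python
# Minimal sketch of the common single-coefficient radial distortion
# model: r' = r * (1 + k1 * r^2). A positive k1 pushes points outward
# more the further they are from the axis (pin-cushion); a negative k1
# pulls them inward (barrel). Coordinates are normalised so the frame
# corner sits near radius 1. The k1 values are illustrative only.

def distort(x: float, y: float, k1: float) -> tuple[float, float]:
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# Watch the corner of a square move under each kind of distortion.
corner = (0.7, 0.7)
for k1, name in ((0.2, "pin-cushion"), (-0.2, "barrel")):
    dx, dy = distort(*corner, k1)
    print(f"{name:11s}: corner {corner} -> ({dx:.3f}, {dy:.3f})")
```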

In natural subjects that don’t have straight lines, these distortions are hard to detect on still photographs. But in television, we have the ability to pan the camera. In that case even natural subjects will reveal distortion because objects change their shape and size as the pan progresses. The mechanism is called the globe effect and it isn’t fully understood. It appears that lens distortion messes with human depth perception so that the screen no longer appears flat when the camera pans, but instead appears to bow out towards the viewer.

That doesn’t happen when we pan with our own eyes, so it may be that the human visual system can compensate when it is doing the panning, but not when the panning is done artificially.

Distortion should be better controlled in TV lenses than it is in general photographic lenses, whereas in practice the reverse seems to be the case. TV cameras tend to have wide range zoom lenses, and the downside of the ability to change the focal length is that distortion rears its head, generally barrelling at the wide end of the range. Still cameras have restricted range zooms to limit the distortion, and some photographers, the author included, won’t touch a zoom lens for serious still picture work.

The fisheye lens has a problem, which is that with a 180 degree field of view the object plane is essentially infinite. Clearly the only way an infinite object gets mapped onto a finite frame is if there is non-linearity. The barrel distortion is inevitable. Fisheye lenses tend not to be used on TV cameras. The distortion gets boring and pans are scary.
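A short worked comparison shows why. A distortion-free (rectilinear) lens maps a ray arriving at angle theta from the optical axis to an image radius of f·tan(theta), which heads off to infinity as theta approaches 90 degrees; the common equidistant fisheye projection uses f·theta instead, which stays finite at the cost of bending straight lines. The focal length below is an arbitrary assumption.

```python
import math

# Why a 180-degree field can't be rectilinear: the distortion-free
# mapping is r = f * tan(theta), which diverges at theta = 90 degrees.
# The equidistant fisheye mapping r = f * theta stays finite.
# f is an arbitrary illustrative focal length in mm.

f = 10.0
for deg in (0, 30, 60, 80, 89):
    theta = math.radians(deg)
    rectilinear = f * math.tan(theta)
    equidistant = f * theta
    print(f"{deg:2d} deg: rectilinear r = {rectilinear:8.2f} mm, "
          f"fisheye r = {equidistant:5.2f} mm")
```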

Canon fisheye lens. This unique style of lens offers still photographers a stunning viewpoint on imagery.


The range of wavelengths over which we humans can see forms a tiny part of the electromagnetic spectrum, and spans just under an octave. In other words, the shortest wavelength we can see, at the blue end of the rainbow (roughly 400nm), is about half the wavelength of the red light at the other end (roughly 700nm, a ratio of 1.75:1 against the 2:1 of a full octave). In an ideal world, lens elements would behave in exactly the same way over the whole span of visible colours/wavelengths. In the real world they don’t, and the characteristic is called dispersion.

All lenses work by refraction: the bending of light where it enters or leaves. The bending is caused by the speed of light being less in the lens than in air. Dispersion occurs because the speed of light in the lens material isn’t constant. It varies with wavelength, and as a result the amount of refraction becomes a function of the wavelength. The visible result is called chromatic aberration, and it shows up in two main ways. The first is that the focal length of the lens becomes a function of the colour. This is called axial, or longitudinal, chromatic aberration. It is possible for one of two objects having different colours to be out of focus even though they are at the same distance from the camera. The human eye is pretty lousy in this respect, especially in the blue, so before you jump on a lens manufacturer make sure it’s not your own eyes that are the cause.
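For a feel of the numbers, here is a minimal sketch using Cauchy’s empirical approximation n(λ) = A + B/λ², with coefficients roughly those quoted for common BK7 crown glass; treat the figures as illustrative rather than definitive.

```python
# Dispersion in one line of maths: Cauchy's empirical approximation
# n(lambda) = A + B / lambda^2 says the refractive index rises as the
# wavelength shortens, so blue light is bent more than red. The A and B
# values are roughly those quoted for BK7 crown glass; illustrative only.

def cauchy_index(wavelength_um: float, a: float = 1.5046, b: float = 0.00420) -> float:
    return a + b / wavelength_um ** 2

for name, wl in (("blue", 0.45), ("green", 0.55), ("red", 0.65)):
    print(f"{name:5s} {wl * 1000:.0f} nm: n = {cauchy_index(wl):.4f}")
```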

Two types of chromatic aberration are referred to as Longitudinal Chromatic Aberration and Lateral Chromatic Aberration and can occur concurrently. Image courtesy bestspottingscopereviews


The other problem dispersion causes is colour fringes where there should simply be a step in the brightness of the image. What has happened is that the different colour components in the image, red, green and blue, have not lined up perfectly because the magnification of the lens is not constant across wavelengths. Instead of a luminance step where R, G and B all change together, you get colour fringes where they change at different places. In principle this can be fixed electronically, by adding artificial distortions that make the three primary images register properly.
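As a sketch of that electronic fix, assuming the residual error is a simple per-channel magnification difference, each channel can be resampled with its own radial scale so the three register again; real cameras calibrate such factors per lens and per zoom/focus position, and the values below are invented for illustration.

```python
# Sketch of electronic lateral-CA correction: if the red and blue
# images are magnified slightly differently from green, resample each
# channel with its own radial scale factor so the three register again.
# The per-channel scale factors here are hypothetical.

def register_channel(x: float, y: float, scale: float) -> tuple[float, float]:
    """Map an output pixel back to where to sample the source channel."""
    return x * scale, y * scale

# Hypothetical magnification errors relative to the green channel.
scales = {"red": 1.0012, "green": 1.0, "blue": 0.9991}

edge = (0.9, 0.0)  # a point near the frame edge, normalised coordinates
for channel, s in scales.items():
    sx, sy = register_channel(*edge, s)
    print(f"{channel:5s}: sample source at ({sx:.4f}, {sy:.4f})")
```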

In television we have it easy working with less than an octave of light wavelengths. Pity the poor astronomer who wants to look at infra-red and ultra-violet as well. That is one reason astronomical telescopes use mirrors. Because they reflect rather than refract, there is no dispersion.

The problem of dispersion in lenses was known back in the days of Robert Hooke and Isaac Newton, and various solutions have been adopted. The most common is the so-called achromatic doublet, in which a single lens is replaced by a pair of elements made from different types of glass having different refractive indices and dispersions. One of the lenses is convex and one is concave, so the dispersion of one is in the opposite direction to the other. One lens is stronger than the other so there is a net focusing effect. The trick is that the strong lens is made of lower dispersion glass than the weak lens, in proportion to their strengths, so the dispersion of one is exactly opposed by the dispersion of the other.
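The classic thin-lens design rule makes that trade explicit. Dispersion is summarised by the Abbe number V (higher V means lower dispersion), and the sketch below splits the doublet’s total power between the two glasses so their chromatic shifts cancel. The Abbe numbers are commonly quoted values for BK7 crown and F2 flint; the target focal length is an arbitrary assumption.

```python
# Classic thin-lens achromat rule. For a doublet of total power phi,
# splitting the power as
#   phi1 = phi * V1 / (V1 - V2),   phi2 = -phi * V2 / (V1 - V2)
# makes the chromatic focal shifts of the two elements cancel
# (phi1/V1 + phi2/V2 = 0) while the powers still sum to phi.

def achromat_powers(total_power: float, v1: float, v2: float) -> tuple[float, float]:
    phi1 = total_power * v1 / (v1 - v2)
    phi2 = -total_power * v2 / (v1 - v2)
    return phi1, phi2

f_total_mm = 100.0            # target focal length of the doublet
phi = 1.0 / f_total_mm        # power in 1/mm
# Commonly quoted Abbe numbers: BK7 crown ~64.2, F2 flint ~36.4.
phi1, phi2 = achromat_powers(phi, v1=64.2, v2=36.4)
print(f"crown element: f = {1 / phi1:6.1f} mm (convex, strong)")
print(f"flint element: f = {1 / phi2:6.1f} mm (concave, weaker)")
```

Note that the crown element comes out convex and strong while the flint element is concave and weaker, which is exactly the arrangement described above.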

Achromatic doublet lenses have significantly better optical performance than singlet lenses in visible imaging and laser beam manipulation applications. Image courtesy Globalspec.com


In practice the best that can be done is that the dispersion is cancelled perfectly at two wavelengths typically spaced well apart in the visible spectrum, and there is imperfect cancellation elsewhere. Some fine lenses have been made working on this principle.

Another more recent option has been to employ low dispersion glass that minimises the initial problem and makes it easier to correct.
