Color and Colorimetry – Part 7 – CIE XYZ

The rg color space served to document the chromaticity gamut of the HVS, and so was a great step forward in understanding color and color vision. However, it was based on one particular set of primaries. As no set of real primaries can embrace the whole of the HVS gamut, it is inevitable that the color matching functions have negative excursions. The CIE set out to remedy that by taking the same color matching experimental data and representing it in a different way. The color space they developed is a cube having three orthogonal axes, X, Y and Z. The white point was defined as the equal energy point, otherwise known as Illuminant E (E for equal energy).

In an RGB color cube, the three axes are mutually at right angles, like the co-ordinates of space. A real TV set has real primary phosphors that can only produce varying, positive amounts of light, so it is not possible to get outside the cube. The problem the CIE had was that the color matching experiment showed the need to simulate going outside the RGB cube, by shining some primary light on the test side of the screen. That meant the color matching curves had negative values, in particular a large negative amount of red in the region of 500nm. To get rid of the negative values, the RGB cube is mapped into XYZ space in such a way that the R, G and B axes are turned inwards.

Fig.1a) shows the CIE matrix that maps from RGB to XYZ. The green function in RGB is not that different from the luminous efficiency function of the HVS, so the CIE decided that it would be useful if XYZ space were defined in such a way that the axis nearest to G, which was Y, would be one and the same as the luminance axis. Luminance has been represented by Y in television ever since. After mapping, one of the color matching functions would be the same as the luminous efficiency function of the HVS. To arrange that, the coefficients used to calculate Y from RGB in Fig.1 are the luminance factors needed to scale the normalized RGB color matching curves back to the actual luminosities of the primaries. Luminance cannot have negative values, of course, but in order to have no negative values in X or Z either, the chromaticity space is made larger than rg space. As a result, a significant part of XYZ space is outside the gamut of human vision and is meaningless.

Fig.1.  The matrix function used to convert from RGB to XYZ space. The center row, Y, represents the luminous efficiency function of the HVS and Y has come to denote luminance ever since. Z contains only a small amount of G and is mostly B, so the Z and B axes are close together.

The effect of the mapping on B is quite small, as the B and Z axes do not diverge much. For simplicity, we can start by neglecting the divergence of B and Z and consider only the XY plane. Fig.1b) shows that the R axis is where G and B are zero, so if we put R = 1 into the matrix, it appears at X = 2.7689 and Y = 1.0000. Similarly, the G axis is where R and B are zero, so if we put G = 1 into the matrix, it appears at X = 1.7517 and Y = 4.5907. If we add the vectors from the origin to those R and G points, we can locate yellow. The former square of black, red, yellow and green, which was one face of the RGB cube, has become a diamond: the red-green axis has been squeezed up with respect to the black-yellow axis. The whole point of the distortion is that the color matching functions, which had negative excursions with respect to RGB, have none in XYZ space.
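Those numbers can be checked by applying the matrix directly. A minimal sketch in Python (the matrix coefficients are the commonly published CIE 1931 values; the helper name is mine):

```python
# CIE 1931 RGB -> XYZ matrix, rows giving X, Y and Z.
# Coefficients as commonly published; treat them as illustrative.
M = [
    [2.7689, 1.7517, 1.1302],  # X
    [1.0000, 4.5907, 0.0601],  # Y
    [0.0000, 0.0565, 5.5943],  # Z
]

def rgb_to_xyz(r, g, b):
    """Map a CIE RGB triple into XYZ using the matrix above."""
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in M)

# R = 1 (G = B = 0) lands at X = 2.7689, Y = 1.0, as in Fig.1b).
print(rgb_to_xyz(1, 0, 0))   # (2.7689, 1.0, 0.0)
# G = 1 (R = B = 0) lands at X = 1.7517, Y = 4.5907.
print(rgb_to_xyz(0, 1, 0))   # (1.7517, 4.5907, 0.0565)
```

Note that each column of the matrix is simply where one primary axis ends up in XYZ space, which is exactly the "turning in" of the axes described above.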

The R axis had to be turned in quite a lot because of the large negative value of R at 500nm. The G and B color matching functions don't have the extreme negative values that R exhibits, but they needed to be turned in to ensure that the neutral axis would pass through the equal energy Illuminant E white point. Fig.1b) has X and Y orthogonal, and the mapping has turned in the R and G axes so they are less than 90 degrees apart. On the other hand, if we distorted the figure such that R and G were orthogonal, the X and Y axes would be turned outwards.

As explained in an earlier piece, RGB space was converted to rg space as shown in Fig.2. The RGB axes were intersected by a unit plane of constant luminance; an equilateral triangle, which is the chromaticity plane. That plane was then projected on to the rg plane, where the chromaticity space became a right-angled triangle with the primaries of the color matching experiment at the corners. The gamut of the HVS goes outside the triangle, requiring negative values. The same approach is taken with XYZ space, which is converted to xyz space, where x = X / (X + Y + Z), y = Y / (X + Y + Z) and z = Z / (X + Y + Z). That plane corresponds to x + y + z = 1, so the equal energy white point is where all three are 0.33... The unit plane is projected onto the xy plane such that it too becomes a right-angled triangle, whose hypotenuse is the line y = 1 - x.
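The projection onto the unit plane is easy to sketch in code (a minimal illustration; the function name is mine):

```python
def xyz_to_xy(X, Y, Z):
    """Project an XYZ triple onto the unit plane X + Y + Z = 1,
    returning the (x, y) chromaticity. z is not returned because
    it follows from x + y + z = 1 as z = 1 - x - y."""
    s = X + Y + Z
    return X / s, Y / s

# Equal-energy white has X = Y = Z, so it lands at x = y = 1/3
# (and hence z = 1/3 as well).
print(xyz_to_xy(1.0, 1.0, 1.0))
# Scaling X, Y and Z together changes luminance, not chromaticity:
print(xyz_to_xy(5.0, 5.0, 5.0))
```

Both calls print the same point, which illustrates why chromaticity is a two-dimensional quantity: the projection deliberately throws the luminance information away.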

Fig.2. The rg space representation of the HVS is primary dependent and as a result many visible colors, especially greens, fall outside the rg triangle.

Now we can look at how CIE XYZ really works. Fig.3a) shows rg space with its large negative area to the left of the rgb triangle. The figure also shows the x and y axes, which are drawn just touching the rg gamut so that negative values cannot exist. If Fig.3a) is transformed, or distorted, according to the mapping functions of Fig.1, such that x and y are at right angles, the propeller blade shape of rg space becomes the familiar horseshoe shape of xy space shown in Fig.3b). When the RGB color matching functions are transformed by the RGB to XYZ matrix, new color matching functions x, y and z are obtained, which the CIE calls the standard observer. These are positive only, which was the goal of the exercise. The y function is the same as the luminous efficiency function of the HVS. Fig.4 shows how the horseshoe diagram was obtained from the color matching functions. It is simply a matter of picking a wavelength, reading off x, y and z from the color matching curves and calculating x = x / (x + y + z) and y = y / (x + y + z). The monochromatic locus touches x = 0.15 at 400nm and touches the y-axis at about 505nm. The peak of the HVS response at 550nm is where the locus touches y = 1 - x, at x = 0.33..., y = 0.66... The white point is where x = y = z = 0.33...
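The procedure for one point on the locus can be sketched as follows, using approximate CIE 1931 standard observer values at 550nm (rounded table figures, given here only for illustration):

```python
def cmf_to_xy(xbar, ybar, zbar):
    """Turn standard observer readings at one wavelength into a
    point on the chromaticity diagram:
    x = xbar/(xbar + ybar + zbar), y = ybar/(xbar + ybar + zbar)."""
    s = xbar + ybar + zbar
    return xbar / s, ybar / s

# Approximate CIE 1931 standard observer values at 550nm
# (rounded, illustrative figures):
x, y = cmf_to_xy(0.4334, 0.9950, 0.0087)
print(round(x, 3), round(y, 3))   # roughly 0.302 0.692
```

Repeating this for every visible wavelength traces out the horseshoe; mixtures of wavelengths can only fall inside it, never outside.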

Fig.3a). The effect of the RGB to XYZ transform and the projection into xy space is to create new axes that are outside the HVS gamut in rg space so there can be no negative values.

Fig.3b). Distortion of a) to make x and y orthogonal results in the familiar horseshoe curve with the equal energy point at 0.33..., 0.33...

It is quite possible to make a real camera with real filters having the standard observer responses shown. It would output an XYZ format video signal that could reproduce absolutely any color the HVS can see. The Y output would be the same as that of a truly panchromatic black and white camera, whereas the X and Z signals would together determine the hue and saturation. It should be clear that many combinations of x and y are illegal, as they are outside the horseshoe. Any color visible to a human and analyzed with the standard observer filters must fall somewhere in the horseshoe.

The x filter, with its two peaks, would be difficult but not impossible to implement, but in practice the problem is not the camera; it's the display. There is presently no display technology that can reproduce the whole of the HVS gamut. All real color reproduction systems cover only a subset of xy space, which can then be used to compare them.

Fig.4. The horseshoe curve is derived from the color matching functions, known as the standard observer, as shown here. Examples are given for three different wavelengths.

This means that the CIE chromaticity diagram cannot be reproduced correctly in color on any known video, computer, film or printing system, but that doesn't seem to stop people trying. The result is that every colored version of the CIE chromaticity diagram you have ever seen is incorrect. The other common mistake is to say that CIE xy has virtual primaries. It does not. Primaries cannot exist outside the horseshoe and the three axes were simply chosen to depict the HVS gamut in a particular way. In fact, the whole point of the CIE chromaticity diagram is that it is independent of any primaries, so any attempt to introduce them is defeating the object.

With the CIE xy depiction, had a different set of wavelengths been used in the color matching experiment, R, G and B would have been different, and so would the color matching functions. However, the matrix needed to locate the equal energy point and to prevent negative excursions would also be different, so the final result would be the same. As a result, CIE xy is truly independent of any primary colors, and so can be used to compare them.
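As a concrete illustration of using xy as the common, primary-independent ground for comparison, the primaries of two quite different RGB systems can be placed on the same diagram straight from their matrices. A minimal sketch (the matrix coefficients are commonly published values; the helper names are mine):

```python
def xyz_to_xy(X, Y, Z):
    """Project XYZ to (x, y) chromaticity."""
    s = X + Y + Z
    return round(X / s, 4), round(Y / s, 4)

def primary_chromaticities(M):
    """Each column of an RGB-to-XYZ matrix is the XYZ of one primary
    at full drive, so projecting each column gives that primary's
    xy chromaticity. Returns [(xR, yR), (xG, yG), (xB, yB)]."""
    return [xyz_to_xy(M[0][c], M[1][c], M[2][c]) for c in range(3)]

# CIE 1931 RGB (700 / 546.1 / 435.8nm primaries), illustrative values.
M_CIE = [[2.7689, 1.7517, 1.1302],
         [1.0000, 4.5907, 0.0601],
         [0.0000, 0.0565, 5.5943]]

# sRGB (Rec.709 primaries, D65 white), standard published matrix.
M_SRGB = [[0.4124, 0.3576, 0.1805],
          [0.2126, 0.7152, 0.0722],
          [0.0193, 0.1192, 0.9505]]

print(primary_chromaticities(M_CIE))   # red near (0.7347, 0.2653)
print(primary_chromaticities(M_SRGB))  # red near (0.64, 0.33)
```

Two unrelated primary sets, defined decades apart, land on one shared diagram, which is precisely the comparison that only a primary-independent space makes possible.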
