Fixes the Build: A Crash Course in Color


Author: Kevin Todisco

At SIGGRAPH I had the pleasure of attending three (!!) different talks about color science and its applications in digital media creation.  Ever since I worked with HDR I’ve been particularly interested in understanding color science more, and it seems to have received renewed focus as new display technology emerges that expands the color gamuts our digital screens can reproduce.  So, allow me to condense several courses’ worth of material on color science into an easily-digestible Fixes the Build post.

So, You Thought You Knew Color

We’ll start with the basics.  We have two dominant systems of depicting color – color models and color wheels.

Color model – Creation of colors from a small set of primaries and either additive or subtractive mixing.  RGB, for example, is additive, and CMYK is subtractive.

Color wheel – Shows relationships between colors and describes color harmony.  Worth noting – in a major case of coincidence, right after returning from SIGGRAPH, I encountered a trivia question that asked what color is opposite blue on the color wheel.  You bet I got it.

Perception of Color

Human color perception is imperfect.  Like, wayyyy imperfect.  And I’m not just talking about colorblindness.  As we’ll see, we reproduce colors in the digital world by tricking our brains.

Human eyes have cone cells that respond to short, medium, and long wavelengths of light.  The “average” response curves are depicted here:

But these curves differ from person to person, so each of us perceives color slightly differently from everyone else.

An actual color that we perceive in the real world has what’s known as a spectral power distribution (SPD), which describes the power of the reflected light at each wavelength across the visible spectrum.  For example, this is the spectral power distribution for this ocean-blue color:

This power distribution can be matched against the wavelength sensitivities of the human visual system to generate a visual response curve for a particular color.  Now here’s where things get crazy: different SPDs can generate the same visual response curves!  For example:

This phenomenon is known as metamerism, and it is the fundamental basis for how we reproduce real world colors on digital displays – we recreate the color using intensities of red, green, and blue light that generate the same visual response curve as a real-world color, and voila!  Color reproduction.

By determining the tristimulus values (triplets of values for a given color representation, such as RGB, XYZ, or HSV) needed to reproduce light at each wavelength, we can generate color matching functions.
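As a rough sketch of that idea, here’s how an SPD gets projected onto tristimulus values.  The single-Gaussian curves below are illustrative stand-ins for the real CIE color matching functions, which are tabulated data (and whose x̄ curve has a second lobe in the blue that this toy version omits):

```python
import math

def gauss(wl, mu, sigma):
    return math.exp(-0.5 * ((wl - mu) / sigma) ** 2)

def cmf(wl):
    """Toy single-Gaussian stand-ins for the CIE 1931 x/y/z-bar curves."""
    x_bar = 1.06 * gauss(wl, 599.8, 37.9)
    y_bar = 1.00 * gauss(wl, 556.1, 46.1)
    z_bar = 1.78 * gauss(wl, 449.8, 24.4)
    return x_bar, y_bar, z_bar

def spd_to_xyz(wavelengths, spd):
    """Integrate an SPD against the matching functions to get X, Y, Z."""
    X = Y = Z = 0.0
    for wl, power in zip(wavelengths, spd):
        xb, yb, zb = cmf(wl)
        X += power * xb
        Y += power * yb
        Z += power * zb
    return X, Y, Z

# A flat ("equal energy") SPD across the visible range:
wls = range(380, 781, 5)
X, Y, Z = spd_to_xyz(wls, [1.0] * len(wls))
```

Any two SPDs that integrate to the same X, Y, Z here are metamers: they look identical to us even though the physical light differs.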

The other funky thing about our perception of color is that it changes depending on viewing conditions.  Different surrounding conditions can change perceived color and even contrast:

Therefore, if we want to describe color we either need a representation that eliminates human vision as a factor or we need to define an idealized viewing condition.

Describing Color

Now that we know all this, how should we actually describe color?  As engineers, or even non-engineers working in digital media, we’re extremely used to specifying an RGB triplet and calling it a day.  I’ll throw a wrench into that now: if you display the same RGB triplet on two different screens, do you think you’ll perceive the same color?

Nope.  We’ll see why shortly.

One of the most well-known, science-y ways of describing color is a system based on human color vision.  In 1931, the International Commission on Illumination (CIE) created what’s known as the 1931 CIE chromaticity diagram and the xyY color space.  The chart was created using the science of colorimetry: the quantification of color as measured by both the physical properties of surfaces and illumination and the visual properties of observers.  The CIE color model describes colors in a device-invariant way.

Along with the chromaticity diagram, the CIE also defined the XYZ color matching functions, based on imaginary color primaries X, Y, and Z.  They’re imaginary because X, Y, and Z are not physically realizable colors, but they can be combined to match all visible colors, and they have other useful properties that make our reproduction of color easier.

Lastly, these were all based on a 2° standard observer, meaning the colors were observed through a 2-degree field of view, under standard illuminants, commonly either D65 or D50, denoting the color of light that a blackbody gives off at approximately 6500K and 5000K, respectively.
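Getting from XYZ tristimulus values down to the x and y coordinates that the chromaticity diagram actually plots is just a normalization.  A minimal sketch, using the commonly published D65 white point values:

```python
def xyz_to_xy(X, Y, Z):
    """Project XYZ tristimulus values onto the CIE 1931 chromaticity plane.

    x and y locate the color on the diagram; the original Y carries
    luminance, which is why the full space is called xyY.
    """
    total = X + Y + Z
    return X / total, Y / total

# D65 white point in XYZ (normalized so Y = 1.0):
x, y = xyz_to_xy(0.9505, 1.0, 1.0890)
# x ≈ 0.3127, y ≈ 0.3290 -- the familiar D65 chromaticity coordinates
```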


Color in Displays

Displays have lots of different settings that alter how they emit light and reproduce color.  All displays have a color gamut, which is the collection of colors that they can display.  The standard for high-definition television defines a color gamut referred to as Rec. 709, after the recommendation from the International Telecommunication Union’s Radiocommunication Sector (ITU-R) first defined in 1990.  It’s shown here on the left.


The standard defines red, green, and blue primaries in the CIE XYZ color space, and the triangle is the range of colors reproducible by the HDTV standard.  Recent advancements brought on the advent of HDR, which expands the standard to an even larger color space known as Rec. 2020, in the middle above.  However, most displays today can only go so far as the DCI-P3 color space, a common color space for digital movie projection.  It’s shown on the right and represents something of a middle ground between 709 and 2020.

The same RGB triplet pumped to displays with different color gamuts will end up displaying different colors, because where that triplet falls on the chromaticity diagram depends on the primaries of the gamut it’s displayed in.  Also, even between two of the same model of display, settings like color temperature, brightness, and contrast will change our perception of the color between the two screens.  AND it changes even further depending on viewing environment!  This is why reference monitors and professional display calibration can cost thousands and even tens of thousands of dollars.
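You can see the gamut dependence directly in the math.  The matrices below are the published linear-RGB-to-XYZ matrices for Rec. 709/sRGB and Display P3 (both D65, values rounded); feeding the same triplet through each lands on a different point in XYZ:

```python
# Linear RGB -> XYZ conversion matrices (rounded published values,
# both gamuts using a D65 white point).
REC709_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]
P3_TO_XYZ = [
    [0.4866, 0.2657, 0.1982],
    [0.2290, 0.6917, 0.0793],
    [0.0000, 0.0451, 1.0439],
]

def rgb_to_xyz(rgb, matrix):
    """Matrix-multiply a linear RGB triplet into XYZ."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in matrix]

pure_red = (1.0, 0.0, 0.0)
red_709 = rgb_to_xyz(pure_red, REC709_TO_XYZ)  # one color...
red_p3 = rgb_to_xyz(pure_red, P3_TO_XYZ)       # ...and a visibly different one
```

Same bytes, different light: (1, 0, 0) on a P3 display is a deeper, more saturated red than on a 709 display.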

Engineering Applications

Of course, the applications of color science in games are primarily in graphics, but since this is an engineering blog, I wanted to touch on it.  We always do our lighting calculations using RGB values, but this is technically incorrect and can cause incorrect visual results.  A truly physically-correct representation would describe lights and materials using spectral power distributions, compute the interactions between the two, then apply the appropriate color matching functions and desired display model to get the output RGB values for the target display.  The reason we don’t do this is that it’s more computationally expensive, both in time and memory.
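A toy demonstration of why multiplying RGB values is technically wrong, using three-sample “spectra” and a made-up spectral-to-RGB projection matrix (all numbers here are illustrative, not real colorimetric data): interacting per wavelength and then projecting is not the same as projecting first and multiplying channel-wise.

```python
import numpy as np

# Made-up projection from three spectral samples to RGB (illustrative only).
M = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
])

light = np.array([1.0, 0.5, 0.2])     # toy light SPD, 3 wavelength samples
material = np.array([0.2, 0.9, 0.4])  # toy reflectance spectrum

# Physically motivated: the light-surface interaction happens per
# wavelength, and only the final result is projected to RGB.
spectral_result = M @ (light * material)

# What RGB renderers do instead: project both to RGB up front, then
# multiply channel-wise in RGB space.
rgb_result = (M @ light) * (M @ material)

# The two disagree whenever the projection mixes wavelengths across
# channels -- which any realistic set of matching functions does.
```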

Many of the top production renderers, such as Arnold and RenderMan, are actually RGB-based.  Manuka and the open-source pbrt are spectral renderers.  I think it would be really interesting if we started to explore spectral rendering in games as hardware gives us more and more processing power.

Mind-Bending Stuff


What is white?

Ok, what comes to mind when I ask you to imagine the color white?  You probably looked right around this text and pointed to the background and said “that’s white right there.”  And you’d be right.  But now click on this image:

If you view it fullscreen, your brain now thinks that this gray color is reference white, because there is nothing brighter or more dominant in the surrounding conditions.  The same thing happens on a projector screen:

A projector screen by itself is white, but a brighter white projected onto it will make your brain see the surrounding screen as black.

Laser Primaries

You may note from the color gamut charts above that Rec. 2020’s primaries sit right on the outline of the visible spectrum of light.  This means that the 2020 primaries are pure, single-wavelength colors, which could really only be accurately reproduced by laser light.  So, if you ever see a TV that touts Rec. 2020 color gamut coverage, it’s worth asking if the TV is using laser projection, just to be a pain to a salesman.

Magenta Isn’t Real

Another interesting thing about the CIE diagram – the line of colors between the blue end of the spectrum and the red end are all non-spectral colors.  The curve of the visible light spectrum runs around the top of the color space in the diagram, and the rest of the outline depicts colors that have no corresponding physical wavelength.  In short, there is no wavelength of light that is magenta.  The hand-wavy reason why we perceive magenta anyway is that our visual system operates on a circular color system, while physical colors lie on a linear spectrum.  The video linked does a good job of visualizing this.


This is only the tip of the iceberg.  Other people have done a great job of explaining color science, so I don’t want to try to be exhaustive here, and I would strongly encourage you to read more if you enjoyed this post.

Here are the materials for the three talks that I attended at SIGGRAPH:

Fun fact, the latter two (and one of the panel members of the first) were given by researchers at RIT, right here in New York.

I’ve also written some code to experiment with the math of converting between color spaces and analyzing color in images.  I’ve put it up on GitHub:
