Neltner Labs

Fun Projects and Art

How to convert from HSI to RGB+white.

This post covers, with example code, how to convert optimally from the HSI colorspace (previously recommended as the best colorspace for working with LED lighting) to the RGB+white colorspace.

The punchline is that adding white light to an LED fixture dramatically increases the accuracy of pastel colors and the quality of unsaturated colors generally. The devil, however, is in the details.

First, some background.

As mentioned in a previous post, HSI colorspace is a space where hue is intuitively the “color”, saturation is how far the color is from white, and intensity is the “brightness”. It is the most intuitive way to describe projected LED light because the power output of the light does not vary as you change hue or saturation at constant intensity.

But wait, what is a colorspace anyway? Well, to understand that we really need to delve fairly deep into how the eye works.

Background on Color Spaces

Fundamentally, the eyes are a very odd sensing system. The ears do a frequency-based analysis of incoming pressure waves and report all of the dominant frequencies to the brain for interpretation – if we hear two frequencies of different pitches, they sound distinct. This isn’t quite as true for harmonics of a sound, which affect the timbre rather than sounding like distinct pitches, but the basic idea is that we can pick out independent sounds with different pitches fairly easily.

The eyes, on the other hand, do spatial and frequency-based sensing; however, they throw away much of the information about the specific frequencies detected. For instance, if you look at any particular spot, you will see a single color – not a spectral map of the complete visible spectrum coming from that point. This is great for the purposes of vision; it would be rather difficult, I think, to walk around while receiving that much information. However, this means that the eye behaves very strangely in the presence of multiple colors from the same location.

The classical example of this effect is the additive color wheel. You mix red light and green light, you get what appears to be yellow light. But how is this possible? If yellow is a frequency of light, how does mixing red (620nm) and green (530nm) produce yellow (590nm) light? There is certainly no physical process that does this sort of mixing in general.

In fact, the idea that red and green combine to form yellow is a trick of the mind only. You may think you’re seeing yellow light, but in fact you are seeing independent red and green light, and your brain is converting that information into the appearance of yellow! Very strange. This trick is summed up in the Chromaticity Diagram (pulled from Wikipedia). On this diagram, pure frequencies are displayed along the outer border from 460 to 700nm. When you mix two colors, you draw a line between their positions on the border, and the ratio of the two tells you where in the diagram your apparent color lies. For example, if you combine 520nm green light with 620nm red light in a 50-50 ratio, you will have what appears to be yellow light. Likewise, if you combine 620nm red light and 490nm cyan light in a 50-50 ratio, you will have what appears to be approximately white light.

image

The subset of the chromaticity diagram covered by sRGB colorspace. The grey areas represent colors which cannot be reproduced on an RGB display. (Source: Wikipedia)

This explains how an RGB cluster of LEDs can produce so many apparent colors of light – they aren’t actually producing those other frequencies of light; instead they are tricking the eyes into thinking that they are producing those other frequencies of light. To quote wikipedia:

The choice of primary colors is related to the physiology of the human eye; good primaries are stimuli that maximize the difference between the responses of the cone cells of the human retina to light of different wavelengths, and that thereby make a large color triangle.

The normal three kinds of light-sensitive photoreceptor cells in the human eye (cone cells) respond most to yellow (long wavelength or L), green (medium or M), and violet (short or S) light (peak wavelengths near 570 nm, 540 nm and 440 nm, respectively). The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of different colors, while being most sensitive (overall) to yellowish-green light and to differences between hues in the green-to-orange region.

One good point here is that the responses peak at yellow, green, and violet – not at red, yellow, and blue (the traditional triad of primary absorptive colors). The explanation for the difference between absorptive and additive colors is long and well explained elsewhere.

image

The absorption of the cones in the eye. (Source: Wikipedia)

And to continue quoting Wikipedia:

Since the likelihood of response of a given cone varies not only with the wavelength of the light that hits it but also with its intensity, the brain would not be able to discriminate different colors if it had input from only one type of cone. Thus, interactions between at least two types of cone is necessary to produce the ability to perceive color. With at least two types of cones, the brain can compare the signals from each type and determine both the intensity and color of the light.

For example, moderate stimulation of a medium-wavelength cone cell could mean that it is being stimulated by very bright red (long-wavelength) light, or by not very intense yellowish-green light. But very bright red light would produce a stronger response from L cones than from M cones, while not very intense yellowish light would produce a stronger response from M cones than from other cones (counterintuitively, a “strong response” here refers to a large hyperpolarization, since rods and cones communicate that they are being stimulated by not firing). Thus trichromatic color vision is accomplished by using combinations of cell responses…

Many historical “color theorists” have assumed that three “pure” primary colors can mix all possible colors, and that any failure of specific paints or inks to match this ideal performance is due to the impurity or imperfection of the colorants. In reality, only imaginary “primary colors” used in colorimetry can “mix” or quantify all visible (perceptually possible) colors; but to do this the colors are defined as lying outside the range of visible colors: they cannot be seen. Any three real “primary” colors of light, paint or ink can mix only a limited range of colors, called a gamut, which is always smaller (contains fewer colors) than the full range of colors humans can perceive.

Implications of Color Theory

The implication of this theory is that you can, in principle, use a small number of primary colors to approximate the full gamut of colors visible to the eye. Although no three single primary colors can quite cover the full chromaticity diagram, and the best choice of primaries is a qualitative question, what is not under debate is that there are three primary colors. This is not a matter of opinion; it is a matter of biology and physics. You detect roughly three frequencies of light, so, approximately speaking, you need three numbers to represent any color. Maybe those three numbers are red, green, and blue intensities (the RGB colorspace). Maybe those three numbers are hue, saturation, and intensity (the HSI colorspace). But what really throws a wrench into colorspaces is a fourth color.

Why is this? Well, let’s take a concrete example. Let’s add a cyan-green emitter at, say, 520nm to our spectrum. Based on the chromaticity diagram above, this is a fantastic idea! Look at how much more of the eye’s detection region you can capture! But okay, now how do you represent white?

Oh boy.

Now there are – literally – an infinite number of ways to represent one point on the spectrum. You could mix yellow with blue and red, blue with red and green, cyan with red and blue… for any value of cyan intensity you add, there is a corresponding amount of red, green, and blue that can be added to produce white. And by that, I mean that to a human eye, it will be biologically indistinguishable.

Wow.

To break it down, you could have red, green, and blue all equal in intensity to make white. But by mixing in an infinitesimally small amount of 520nm light and slightly decreasing the other channels to compensate, you can still make a perceptually identical color.

This is a big problem. It’s called degeneracy. By adding an additional unnecessary dimension, you’ve created a problem where you have to artificially constrain one of the variables in order to rigorously define the others.

Now, I would be remiss in my duties as a friend of quite a few mathematicians if I did not at least mention that it is not always degenerate. Just often. For instance, if you want 520nm perceptual color, you have no choice except for 100% 520nm light and 0% red, green and blue. However, as soon as you move off of 100% 520nm light, the selection again becomes completely degenerate.

On to the solution for RGB+white!

Now that we’ve established the fundamental theoretical problem, let’s look at the specific case of the MyKi light. By now you’re probably convinced that there is no clean way to convert from HSI colorspace to RGB+white colorspace!

And in a way, you’re right. The solution is to constrain the problem based on intrinsic imperfections present in all LED lighting systems. These imperfections show up as three distinct problems:

  1. No two LEDs are exactly the same.
  2. Different colors of LEDs are perceptually different in brightness.
  3. It is impossible to place two high-power LEDs close enough together that their projected spotlights can be considered overlapping.

If you’ve ever attempted to produce white light using an RGB LED array, you will inevitably find that the perceptual quality of that light is… off. To be more specific, the color balance is never quite right, there are always shadows with variations in hue, and the precise color of white is poorly defined.

In order to fix this, there is a constraint that we can apply which changes our conversion from degenerate to fully defined. This constraint is as follows:

Non-saturated hues will be generated by mixing of a fully saturated hue with white.

That’s all! Why is this a great constraint? First, the white from the white LED is carefully manufactured to be a very good approximation of “real” white. Second, it means that we will never be attempting to balance red, green, and blue against each other.

The end result, as you can see in the MyKi light, is that the colors produced have the same apparent hue when fully saturated as they do when only slightly saturated.

Saturation is defined, somewhat vaguely, as the ratio of colorfulness to brightness. In practice, we will assume this means a linear relationship: a 50% saturated color is 50% “color” and 50% white. This fits with traditional definitions, but is far more intuitive than definitions based on trigonometric functions in the context of an actual light that has actual white to add in.

So, what we do in order to produce a fully optimized conversion from non-degenerate HSI colorspace to degenerate RGB+white colorspace is to first convert to a fully saturated, non-degenerate RGB colorspace, and then simply mix linearly with white.

Awesome! Hopefully that mathematical logic is clear, but what I’m essentially saying is that by adding in white, color conversion actually gets easier due to the way that saturation is defined. Let’s try an example.

Take fully saturated red. Hue is zero degrees, intensity is 1, saturation is 1.

Now, let’s try to make it less saturated – a nice pink.

In the traditional HSI->RGB colorspace conversion, you would do this by scaling back on red while adding in equal parts of green and blue. Weird, but true. And complicated: you have to keep the sum of red, green, and blue constant while adjusting the difference between the red channel and the green and blue channels to hit 50% saturation. Even this fairly straightforward example is not easy math.

Now compare to the new HSI->RGB+white conversion. In this case, you just… mix red with white. red goes to 0.5, white goes to 0.5. Done.

In every conversion, at most two colored LEDs are on; their ratio produces a fully saturated hue, which is then mixed with real white to reach the appropriate saturation. It’s more intuitive, easier to calculate, and it looks a LOT better.

Find it easier to interpret a code example? Here you go!

And as a reminder, if you happened to find this post useful and arrived here via some source like hacker news or facebook, please upvote it so that others can find it as well!

First, the original HSI->RGB function:

// Function example takes H, S, I, and a pointer to the 
// returned RGB colorspace converted vector. It should
// be initialized with:
//
// int rgb[3];
//
// in the calling function. After calling hsi2rgb
// the vector rgb will contain red, green, and blue
// calculated values.

#include <math.h>
#define DEG_TO_RAD(X) (M_PI*(X)/180)

void hsi2rgb(float H, float S, float I, int* rgb) {
  int r, g, b;
  H = fmod(H,360); // cycle H around to 0-360 degrees
  if(H < 0) H += 360; // fmod can return negative values
  H = DEG_TO_RAD(H); // Convert to radians.
  S = S>0?(S<1?S:1):0; // clamp S and I to interval [0,1]
  I = I>0?(I<1?I:1):0;
    
  // Math! Thanks in part to Kyle Miller.
  if(H < 2.09439) {
    r = 255*I/3*(1+S*cos(H)/cos(1.047196667-H));
    g = 255*I/3*(1+S*(1-cos(H)/cos(1.047196667-H)));
    b = 255*I/3*(1-S);
  } else if(H < 4.188787) {
    H = H - 2.09439;
    g = 255*I/3*(1+S*cos(H)/cos(1.047196667-H));
    b = 255*I/3*(1+S*(1-cos(H)/cos(1.047196667-H)));
    r = 255*I/3*(1-S);
  } else {
    H = H - 4.188787;
    b = 255*I/3*(1+S*cos(H)/cos(1.047196667-H));
    r = 255*I/3*(1+S*(1-cos(H)/cos(1.047196667-H)));
    g = 255*I/3*(1-S);
  }
  rgb[0]=r;
  rgb[1]=g;
  rgb[2]=b;
}

Next, the version where instead of mixing R, G, and B to get unsaturated colors we use a white LED.

// This version is modified by the addition of a white channel. It computes
// the fully saturated color, then mixes in white to lower the saturation.
// The caller should declare:
//
// int rgbw[4];
//
// Saturation is defined as "the ratio of colorfulness to brightness", so we
// apply a simple ratio: the color channel values are scaled by S while the
// white LED is set to (1-S)*I.
 
// This will maintain constant brightness because in HSI, R+B+G = I. Thus, 
// S*(R+B+G) = S*I. If we add to this (1-S)*I, where I is the total intensity,
// the sum intensity stays constant while the ratio of colorfulness to brightness
// goes down by S linearly relative to total Intensity, which is constant.

#include <math.h>
#define DEG_TO_RAD(X) (M_PI*(X)/180)

void hsi2rgbw(float H, float S, float I, int* rgbw) {
  int r, g, b, w;
  float cos_h, cos_1047_h;
  H = fmod(H,360); // cycle H around to 0-360 degrees
  if(H < 0) H += 360; // fmod can return negative values
  H = DEG_TO_RAD(H); // Convert to radians.
  S = S>0?(S<1?S:1):0; // clamp S and I to interval [0,1]
  I = I>0?(I<1?I:1):0;
  
  if(H < 2.09439) {
    cos_h = cos(H);
    cos_1047_h = cos(1.047196667-H);
    r = S*255*I/3*(1+cos_h/cos_1047_h);
    g = S*255*I/3*(1+(1-cos_h/cos_1047_h));
    b = 0;
    w = 255*(1-S)*I;
  } else if(H < 4.188787) {
    H = H - 2.09439;
    cos_h = cos(H);
    cos_1047_h = cos(1.047196667-H);
    g = S*255*I/3*(1+cos_h/cos_1047_h);
    b = S*255*I/3*(1+(1-cos_h/cos_1047_h));
    r = 0;
    w = 255*(1-S)*I;
  } else {
    H = H - 4.188787;
    cos_h = cos(H);
    cos_1047_h = cos(1.047196667-H);
    b = S*255*I/3*(1+cos_h/cos_1047_h);
    r = S*255*I/3*(1+(1-cos_h/cos_1047_h));
    g = 0;
    w = 255*(1-S)*I;
  }
  
  rgbw[0]=r;
  rgbw[1]=g;
  rgbw[2]=b;
  rgbw[3]=w;
}