Three algorithms for converting color to grayscale

How do you convert a color image to grayscale? If each color pixel is described by a triple (R, G, B) of intensities for red, green, and blue, how do you map that to a single number giving a grayscale value? The GIMP image software has three algorithms.

The lightness method averages the most prominent and least prominent colors: (max(R, G, B) + min(R, G, B)) / 2.

The average method simply averages the values: (R + G + B) / 3.

The luminosity method is a more sophisticated version of the average method. It also averages the values, but it forms a weighted average to account for human perception. We’re more sensitive to green than other colors, so green is weighted most heavily. The formula for luminosity is 0.21 R + 0.72 G + 0.07 B.
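The three formulas can be sketched in a few lines of JavaScript, for a single (R, G, B) pixel with channel values in 0–255:

```javascript
// Lightness: average of the most and least prominent channels.
function lightness(r, g, b) {
  return (Math.max(r, g, b) + Math.min(r, g, b)) / 2;
}

// Average: plain mean of the three channels.
function average(r, g, b) {
  return (r + g + b) / 3;
}

// Luminosity: weighted mean using the perceptual weights from the post.
function luminosity(r, g, b) {
  return 0.21 * r + 0.72 * g + 0.07 * b;
}
```

For example, a pure green pixel (0, 255, 0) scores 127.5 under lightness, 85 under average, and about 184 under luminosity, which is why bright greens stay bright under the perceptual weighting.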

The example sunflower images below come from the GIMP documentation.

Original image: color photo of a sunflower
Lightness: sunflower converted to grayscale using the lightness algorithm
Average: sunflower converted to grayscale using the average algorithm
Luminosity: sunflower converted to grayscale using the luminosity algorithm

The lightness method tends to reduce contrast. The luminosity method works best overall and is the default method used if you ask GIMP to change an image from RGB to grayscale from the Image -> Mode menu. However, some images look better using one of the other algorithms. And sometimes the three methods produce very similar results.

Update: See More on colors and grayscale for more details and more examples.

22 thoughts on “Three algorithms for converting color to grayscale”

  1. Always worth seeing whether each channel on its own produces good results too! You can also get some really interesting shots by looking at just one channel :-)

  2. Perhaps this is obvious to others, but what algorithm does actual film “use” to convert colors? An old-school analog (i.e. not digital) camera changes real world colors into b&w photographs, but using which of these methods, if any? I’m guessing it may vary with camera and film settings, and likely has changed over time, but is there one method it tends to favor over the others?

  3. The answer is that none of the above methods are used for film. Film, or any system sensitive to the electromagnetic spectrum, can be considered as reducing an infinite number of frequencies (well, infinite depending on your quantum view of the world, I guess) to a smaller number, in a manner similar to a weighted-sum histogram.

    In film or a digital sensor there is a spectral sensitivity function that determines which frequencies of light get absorbed by, and sensitise, the emulsion, based upon its chemical structure. Through various means this gets converted either to silver halide crystals or, in colour film, to the different dyes that are (eventually) formed.

    You then shine a light through the resultant film and bounce it off a screen, and your eyes view the result. In mathematical terms this is a highly non-linear transformation from scene to screen, a composition of many reductions from spectral to n-dimensional ‘records’, all of whose intensity functions are non-linear.

    In film the actual ‘colours’ vary mostly with the stock used, but some effects vary with the intensity of exposure, so the camera settings can affect the results, though this is minor compared to the stock and the development.

    Re: the magic numbers, they are based upon a mixing assumption. They say what linear combination of R, G, and B you would perceive as being without colour, i.e. neutral. Depending on the exact colours of R, G, and B you will get different proportions. The mixing also assumes that the individual colour channels are linear; otherwise you’re not really that close to how our eyes behave. What you consider neutral is also heavily affected by the surroundings in which you view it: your eyes can adjust their own gain functions to compensate for a certain amount of change, so a piece of paper can look white to you even if the colour of the light shining on it changes, although under highly coloured light you will still see it as coloured.

  4. I did some testing with all three methods mentioned and found that the luminosity method is best as long as the picture is not too blue. So I tried this:

    1. for every pixel, I first check whether the blue component is greater than both the green and the red

    2. if it is, I use the lightness method (or the average method – both are good); if not, I use the luminosity method for the current pixel.

    This algorithm gives very good results. It comes very close to Photoshop’s convert to Grayscale “mystic” algorithm :)
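    A sketch of the per-pixel rule described above, reusing the lightness and luminosity formulas from the post (the function name is hypothetical):

    ```javascript
    // Hybrid grayscale: use lightness when blue dominates the pixel,
    // otherwise use the perceptual luminosity weighting.
    function hybridGray(r, g, b) {
      if (b > g && b > r) {
        // Blue-dominant pixel: lightness method.
        return (Math.max(r, g, b) + Math.min(r, g, b)) / 2;
      }
      // Otherwise: luminosity method.
      return 0.21 * r + 0.72 * g + 0.07 * b;
    }
    ```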

  5. Sir, thanks a lot. Wish you all the best for your future work, and I hope you will always help us by providing such data.

  8. I just use the average method to grayscale a button (disabled state) in an HTML5 canvas project:

    var context = canvasEl.getContext('2d'),
        imageData = context.getImageData(0, 0, canvasEl.width, canvasEl.height),
        data = imageData.data,
        width = imageData.width,
        height = imageData.height,
        index, average, x, y;

    for (y = 0; y < height; y++) {
      for (x = 0; x < width; x++) {

        // Pixel data is row-major RGBA, four bytes per pixel.
        index = (y * width + x) * 4;
        average = (data[index] + data[index + 1] + data[index + 2]) / 3;

        // Write the average back into R, G, and B; alpha is left untouched.
        data[index] = average;
        data[index + 1] = average;
        data[index + 2] = average;
      }
    }

    context.putImageData(imageData, 0, 0);

    Thanks for the post.
