McAce wrote:Could you please elaborate on the filtering part? (I guess the second column of your table refers to that.) Which part of the conversion requires that and what is sampled exactly?

Also, don't you lose a dimension when converting to XYZ first and then to a 4-dimensional space? Why not convert from the spectrum to the 4D space directly?

Each of those color spaces was defined from its center wavelengths.

Color space, Nb bands, Center wavelengths[4] (for 3-band spaces the first 0.0f entry is an unused placeholder)

{ "Digital Camera", 3, { 0.0f, 470.0f, 550.0f, 630.0f } },

{ "ITU Rec 709", 3, { 0.0f, 435.0f, 548.0f, 700.0f } },

{ "Equispaced RGB", 3, { 0.0f, 450.0f, 550.0f, 650.0f } },

{ "AC1C2", 4, { 456.0f, 490.0f, 557.0f, 631.0f } },

{ "RYGB", 4, { 460.0f, 515.0f, 570.0f, 625.0f } },

{ "Wide RYGB", 4, { 455.0f, 515.0f, 575.0f, 635.0f } },

{ "IOGU", 4, { 435.0f, 510.0f, 585.0f, 660.0f } },

{ "Prime colors", 3, { 0.0f, 450.0f, 530.0f, 600.0f } },

{ "Prime colors 2", 3, { 0.0f, 450.0f, 540.0f, 605.0f } },

{ "Color Scanner", 3, { 0.0f, 473.0f, 532.0f, 635.0f } },

{ "Sharp Colors", 3, { 0.0f, 450.0f, 540.0f, 620.0f } } };

Looking at the code, I have to modify my spectra-to-color-space conversion description. For those "experimental" color spaces for which no color space matrices were published, I did not convert the spectra to XYZ. Instead, I converted the spectra directly to 3- or 4-band representations and used those representations for the illumination calculations. While doing the conversion, I also computed the from-XYZ and to-XYZ color space transformation matrices.

The spectra were converted using one of these methods:

Sampled : A single sample was taken at each provided central wavelength.

Gaussian_Filtered : Spectrum samples were weighted by a Gaussian filter centered at the provided wavelength, with a width chosen so that the summed response of all the filters was as flat as possible.

Interpolated : A pyramid filter where the central wavelength is weighted 1 and the neighboring central wavelengths are weighted 0, with linear interpolation in between.

Splitted : A box filter that includes all wavelengths between the midpoints of adjacent central wavelengths.

SmoothStep_Filtered : A filter between a box and a Gaussian. I programmed this method after observing that typical digital camera filters are flatter and wider than a Gaussian but not box-shaped. I adjusted the width of the smoothstep to visually match the typical shape of digital camera filters.
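Until I find the graphics, the per-wavelength weights of those filter shapes could be sketched roughly like this (a simplified sketch, not the original code; the sigma, halfWidth and edge parameters are assumptions chosen only to show the shapes):

```cpp
#include <algorithm>
#include <cmath>

// Gaussian_Filtered: weight falls off around the band's center wavelength.
// sigma is an assumption; the original tuned it for a flat summed response.
float gaussianWeight(float lambda, float center, float sigma) {
    float d = (lambda - center) / sigma;
    return std::exp(-0.5f * d * d);
}

// Interpolated: pyramid filter, weight 1 at the center wavelength and 0 at
// the neighboring central wavelengths, linear in between.
float pyramidWeight(float lambda, float center, float prevCenter, float nextCenter) {
    if (lambda <= prevCenter || lambda >= nextCenter) return 0.0f;
    if (lambda < center) return (lambda - prevCenter) / (center - prevCenter);
    return (nextCenter - lambda) / (nextCenter - center);
}

// Splitted: box filter covering everything between the midpoints to the
// neighboring central wavelengths.
float boxWeight(float lambda, float center, float prevCenter, float nextCenter) {
    float lo = 0.5f * (prevCenter + center);
    float hi = 0.5f * (center + nextCenter);
    return (lambda >= lo && lambda < hi) ? 1.0f : 0.0f;
}

// SmoothStep_Filtered: flat top of half-width (halfWidth - edge), smooth
// falloff to 0 at (halfWidth + edge); flatter and wider than a Gaussian
// but not box-shaped.
float smoothStepWeight(float lambda, float center, float halfWidth, float edge) {
    float d = std::fabs(lambda - center);
    float t = std::clamp((halfWidth + edge - d) / (2.0f * edge), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);  // classic smoothstep polynomial
}
```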

I will try to find graphics of those filters and post them.

McAce wrote:I've been thinking of each color space I know in terms of "a small set of functions (of wavelength) for which I store the coefficients needed to create a color spectrum as a linear combination of those functions". So I've always wondered why we don't just use your "dividing the visible bandwidth into 3 equal bands" method for rendering. It is the 3-dimensional analogue of high-dimensional spectral rendering.

The conversion from a reduced number of bands back to a Spectrum is not defined: an infinite number of conversions are possible. A linear combination of the capture filters weighted by those coefficients will give you a Spectrum if you know the shape of the capture filters, but the probability that this Spectrum looks like the original one is nearly nil. It is even lower if you don't know the shape of the capture filters, and lower still if the coefficients are the result of some color transformation.
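That linear-combination reconstruction could be sketched like this (assuming, for illustration only, Gaussian capture filters; the result is merely one of the infinitely many spectra that produce those coefficients, not the original):

```cpp
#include <cmath>
#include <vector>

// Reconstruct *a* spectrum from band coefficients as a linear combination of
// the capture filters. Gaussian filter shapes and the sigma value are
// assumptions for illustration; a different filter shape would give a
// different (equally valid) spectrum for the same coefficients.
std::vector<float> bandsToSpectrum(const std::vector<float>& coeffs,
                                   const std::vector<float>& centers,
                                   float sigma,
                                   float lambdaMin, float lambdaMax,
                                   int nSamples) {
    std::vector<float> spectrum(nSamples, 0.0f);
    for (int i = 0; i < nSamples; ++i) {
        float lambda = lambdaMin + (lambdaMax - lambdaMin) * i / (nSamples - 1);
        for (std::size_t b = 0; b < coeffs.size(); ++b) {
            float d = (lambda - centers[b]) / sigma;
            spectrum[i] += coeffs[b] * std::exp(-0.5f * d * d);
        }
    }
    return spectrum;
}
```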

The reduction of dimensionality necessarily involves a loss of information: several wavelengths are weighted together into one coefficient, and the individual per-wavelength weights are lost.
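A minimal sketch of that loss, using a box filter over one band (the sample values are made up for illustration): a flat spectrum and a spiky one produce the same coefficient, so the per-wavelength weights cannot be recovered from it.

```cpp
#include <vector>

// Project a sampled spectrum onto one band by averaging its samples
// (a box filter over the band). Two very different spectra can yield
// the same coefficient — they become metamers under the reduction.
float bandCoefficient(const std::vector<float>& samples) {
    float sum = 0.0f;
    for (float s : samples) sum += s;
    return sum / static_cast<float>(samples.size());
}
```

For example, the flat spectrum {0.5, 0.5, 0.5, 0.5} and the spiky spectrum {1, 0, 1, 0} both reduce to the same coefficient 0.5.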