a small chat about demosaicing

Started by chmee, October 13, 2014, 12:44:22 AM


chmee

hey. so, i'm about to implement something like demosaiced output and i'm trying to decide which algorithm is good/better. what are the "best" algorithms today? it's not only about canon ML, but also about a future implementation for apertus MLV data. let's have a small talk about it.

thanks and regards chmee
[size=2]phreekz * blog * twitter[/size]


Levas

I have a line-skipping Canon 6D.
For raw video I often get the best results with LMMSE (linear minimum mean square error) demosaicing.
LMMSE gets rid of most of the color aliasing in high-contrast areas.
So for line-skipped video I think LMMSE is often the best choice.

I think it's another story for Canon 5D Mark III video, full-res silent pictures and the Axiom Beta.
Although I don't know whether the Axiom uses line skipping for its high-FPS readout?
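To make the interpolation baseline concrete for readers following along: below is a minimal bilinear demosaic of an RGGB mosaic. This is not LMMSE itself - LMMSE adds a directional, noise-aware estimate on top of this kind of interpolation - and the function names and the normalized-convolution trick are just one illustrative way to write it:

```python
import numpy as np

def conv3x3(img, kernel):
    # Naive 3x3 convolution with reflected borders (no SciPy dependency).
    p = np.pad(img, 1, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic.

    raw: 2-D float array where raw[0,0] is R, raw[0,1]/raw[1,0] are G
    and raw[1,1] is B, tiled over the sensor. Returns an (H, W, 3) image.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    box = np.ones((3, 3))
    out = np.empty((h, w, 3))
    for i, mask in enumerate((r_mask, g_mask, b_mask)):
        # Normalized convolution: average only the known samples of this
        # channel inside each 3x3 window, then keep the measured values
        # where the CFA actually sampled this channel.
        interp = conv3x3(np.where(mask, raw, 0.0), box) / conv3x3(mask.astype(float), box)
        out[..., i] = np.where(mask, raw, interp)
    return out
```

The color aliasing Levas mentions comes precisely from this step: each channel is interpolated independently from sparse samples, so high-contrast edges alias differently in R, G and B.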

g3gg0

something more theoretical, even experimental, which is (far?) beyond my mathematical capabilities.

when converting bayer RGGB -> XYZ -> RGB we always assume that the tristimulus values "R", "G" and "B" are single-frequency reference points at some "x nm" wavelength in both the RAW and RGB colorspaces.
(see http://en.wikipedia.org/wiki/File:CIE1931xy_CIERGB.svg )

but the CFA filters are not that narrow! they have a very wide sensitivity range and are - unlike tristimulus values - correlated.
check the spectral response of a CFA pattern, like here for the D90: http://vitabin.blogspot.it/2013/04/spectral-response-of-nikon-dslrs-d90.html
so every RAW RGGB pixel's component is correlated with the values of the other components of that pixel.
the conclusion is that there is no 1:1 mapping between the real spectral color that occurred and its RGGB representation.
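One way to see that correlation in practice: the usual camera-RGB -> sRGB step is a 3x3 matrix whose off-diagonal entries exist exactly because of the cross-channel leakage described above. The matrix below is made up for illustration and belongs to no real camera:

```python
import numpy as np

# Hypothetical camera-RGB -> sRGB matrix; the numbers are illustrative
# only. Rows sum to 1 so neutral grey stays neutral. The large
# off-diagonal terms are the point: because the CFA bands overlap, each
# output channel must subtract the light that leaked into the other
# raw channels.
CAM_TO_SRGB = np.array([
    [ 1.8, -0.6, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.1, -0.5,  1.4],
])

def camera_to_srgb(raw_rgb):
    """Apply the 3x3 matrix to an (..., 3) array of demosaiced camera RGB."""
    return np.asarray(raw_rgb) @ CAM_TO_SRGB.T
```

A pure raw "R" sample maps to a mix of all three output channels - the 3x3 matrix is only a least-squares compromise for the many-to-one spectral mapping, which is the 1:1-mapping problem in a nutshell.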

if we had an accurate spectral response curve, like the one on the webpage above, couldn't we recover, for every single pixel, a probability map of which (spectral) color
would have led to the RGGB values we find in the raw pixel values?
using this statistical data, recovering unusual color phenomena and doing a more perfect CA correction should be possible, shouldn't it?

the downside: this recovery is very time- and memory-consuming.
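A toy sketch of the recovery idea, under loudly hypothetical assumptions (Gaussian response curves instead of measured ones, 31 wavelength bins, and ridge regularization standing in for a real per-pixel statistical prior):

```python
import numpy as np

# Toy spectral model -- every number here is hypothetical. Each row of S
# is one CFA channel's sensitivity over 31 wavelength bins; the broad,
# overlapping Gaussians mimic measured curves like the D90 plot above.
wl = np.linspace(400.0, 700.0, 31)                      # nm

def band(center, width=50.0):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

S = np.stack([band(600.0), band(540.0), band(460.0)])   # "R", "G", "B" rows

def recover_spectrum(rgb, lam=1e-3):
    """Ridge-regularized least squares: the spectrum x minimizing
    ||S @ x - rgb||^2 + lam * ||x||^2.

    3 equations against 31 unknowns is wildly underdetermined, so this
    returns only one plausible candidate (roughly the minimum-norm one),
    not *the* spectrum -- which is exactly why a per-pixel probability
    map would be needed for a serious reconstruction.
    """
    A = S.T @ S + lam * np.eye(len(wl))                 # normal equations + ridge
    return np.linalg.solve(A, S.T @ np.asarray(rgb))
```

The ambiguity (metamerism) is where the memory cost comes from: a faithful version would carry a distribution over spectra per pixel rather than a single 31-vector.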
Help us with datasheets - Help us with register dumps
magic lantern: 1Magic9991E1eWbGvrsx186GovYCXFbppY, server expenses: [email protected]
ONLY donate for things we have done, not for things you expect!

chmee

example 40D / 50D: [spectral response graph, image not preserved]

example 5D / 5DII: [spectral response graph, image not preserved]

from: http://www.astrosurf.com/buil/50d/test.htm

the idea seems logical. the "green" CFA curve (lying between R and B) is able to improve the R and B values (simply put, the color accuracy) because of its crossover range.

SpcCb

Quote from: g3gg0 (...)
but the CFA filters are not that narrow! they have a very wide sensitivity range and are - unlike tristimulus values - correlated.
check the spectral response of a CFA pattern, like here for the D90: http://vitabin.blogspot.it/2013/04/spectral-response-of-nikon-dslrs-d90.html
so every RAW RGGB pixel's component is correlated with the values of the other components of that pixel.
the conclusion is that there is no 1:1 mapping between the real spectral color that occurred and its RGGB representation.
Indeed.
Even with a very narrow-band filter or source, each pixel is not illuminated only by its corresponding wavelength; the red pixel receives part of the light from the green and blue wavelengths, etc.
Plus, regular demosaicing takes information from contiguous pixels to determine the RGB values of each pixel.

Quote from: g3gg0 if we had an accurate spectral response curve, like the one on the webpage above, couldn't we recover, for every single pixel, a probability map of which (spectral) color
would have led to the RGGB values we find in the raw pixel values?
using this statistical data, recovering unusual color phenomena and doing a more perfect CA correction should be possible, shouldn't it?
(...)
Theoretically yes, it is possible to reduce color artefacts (I mean the false colors generated), but you would have to introduce a diffraction model into the calculation (which is why AMaZE demosaicing is cleverly done), and some other things too, because of the nature of light.

Quote from: chmee (...)
the idea seems logical. the "green" CFA curve (lying between R and B) is able to improve the R and B values (simply put, the color accuracy) because of its crossover range.
Just a note about these QE graphs: be careful, they do not represent the intrinsic QE of the sensors; the analyses are made with filters (IR-cut, VAF, ...) in front of the sensors (we can see the difference with the 40D mod on the same page).