Lately I've been studying the Bayer filter and some demosaicing algorithms on my own.
I think those included in MlvApp are great and the results are generally stunning.
What I'm not sure about is the correctness of the common way to proceed with videos recorded using the 1x3 binning pattern.
In my opinion, demosaicing the video and then doing a horizontal stretch (or a vertical shrink) loses part of the information captured in the raw image.
Personally, I would first expand the raw image by unbinning the pixels with an ad-hoc algorithm, and then apply one of the existing demosaicing algorithms (e.g. AMaZE) to the resulting raw data.
I could be wrong, but if you look at the pixels that are binned... they sit in slightly different spatial positions compared to a normal Bayer pattern.
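To make the idea concrete, here is a minimal sketch of what I mean by unbinning in the raw domain, assuming 1x3 binning averages three same-colour photosites spaced two sensor rows apart (so the binned frame is still a valid Bayer mosaic). The function name, the row mapping, and the nearest-neighbour fill are my own assumptions for illustration, not how MLV App actually stores the data:

```python
import numpy as np

def unbin_1x3_nearest(binned: np.ndarray) -> np.ndarray:
    """Hypothetical nearest-neighbour unbinning of a 1x3 vertically
    binned Bayer mosaic.

    Assumption: binned row b (parity p = b % 2) was produced from
    sensor rows 6*(b//2) + p + {0, 2, 4}, i.e. three same-colour
    photosites two rows apart. We replicate the binned value back
    into those three positions; a real unbinner would interpolate
    between neighbouring binned rows instead of replicating.
    """
    h, w = binned.shape
    out = np.zeros((h * 3, w), dtype=binned.dtype)
    for b in range(h):
        base = 6 * (b // 2) + (b % 2)   # first sensor row of this binned row
        for k in (0, 2, 4):             # the three same-colour sites
            out[base + k, :] = binned[b, :]
    return out
```

The point of the sketch is only that the expansion happens on the mosaic itself, so the demosaicer (AMaZE or whatever) then sees samples at roughly their original sensor positions instead of a post-demosaic stretch.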
I have an idea about how to do the unbinning, but I would like to have some suggestions on where to make the changes to the code.
BTW, I am a software developer, so my problem is not in coding, but not all the project code is clear to me.
If someone could help me get started with testing this, that would be great.