As far as I can understand these fragments, the idea is to:
- shoot [n] images
- detect motion between them (optical flow)
- overlay the raw images accordingly
If you have some motion (due to vibration etc.), different Bayer pixels now overlap at each scene position, so you get a good estimate of what R, G and B are at a specific "image" location.
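The steps above can be sketched roughly like this. This is only a toy illustration of the principle, not anyone's actual implementation: it assumes an RGGB Bayer pattern, estimates whole-pixel shifts with phase correlation (a crude stand-in for real optical flow, which would be sub-pixel and local), and uses periodic `np.roll` alignment to keep the example short. All function names are made up for the sketch.

```python
import numpy as np

def bayer_mask(shape, channel):
    """Boolean mask of RGGB sensor sites for 'R', 'G' or 'B'."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    if channel == 'R':
        return (y % 2 == 0) & (x % 2 == 0)
    if channel == 'B':
        return (y % 2 == 1) & (x % 2 == 1)
    return (y + x) % 2 == 1  # both green sites

def estimate_shift(ref, img):
    """Integer (dy, dx) such that img == np.roll(ref, (dy, dx)),
    found via the peak of the phase-correlation surface."""
    cps = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(cps / (np.abs(cps) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h  # wrap negative shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def merge_bayer(frames, shifts):
    """Overlay shifted raw mosaics on the reference grid.
    Each raw sample keeps the colour of its sensor site, so frames
    taken with different shifts fill in the missing channels."""
    h, w = frames[0].shape
    acc = np.zeros((h, w, 3))
    cnt = np.zeros((h, w, 3))
    masks = {c: bayer_mask((h, w), c) for c in 'RGB'}
    for frame, (dy, dx) in zip(frames, shifts):
        # undo the motion: roll the frame AND its Bayer pattern back
        aligned = np.roll(frame, (-dy, -dx), axis=(0, 1))
        for ci, c in enumerate('RGB'):
            m = np.roll(masks[c], (-dy, -dx), axis=(0, 1))
            acc[..., ci][m] += aligned[m]
            cnt[..., ci][m] += 1
    rgb = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return rgb, cnt
```

With four frames whose relative shifts cover all four parities, e.g. (0,0), (0,1), (1,0), (1,1), every output pixel receives at least one R, G and B sample, which is exactly the "different Bayer pixels overlay" effect described above; random hand motion would only cover the parities statistically, so real merges need more frames and a count map like `cnt` to know where the estimate is trustworthy.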
I wonder whether it was meant to shift the sensor automatically (didn't Hasselblad already do this using piezo actuators?) or whether unsteady hands would deliver the required motion.
The latter only makes sense if you have motion *estimation*, because with piezo you already know the motion.
We could capture short 4-frame MLV files, like with silent picture, and do the post-processing on the PC, but...
only *AFTER* someone has proven that it makes sense.
