MLRawViewer manages to get rid of them with its vertical stripe fix, but instead introduces artifacts like horizontal stripes and chunky vertical stripes in black/dark spots and/or high-contrast sources. Is it in any way possible to combine the best of both worlds? It is my understanding that MLRawViewer uses OpenGL shaders with a combination of different programming languages.
Unfortunately, MLRawViewer just locks up my current machine (bringing down the entire operating system), so it's not straightforward for me to play with this code. Otherwise I'd probably maintain this program, to some extent.
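For what it's worth, the general idea behind a vertical stripe fix can be sketched in a few lines of Octave (this is a naive per-column gain normalization, not MLRawViewer's actual shader; real fixes also treat each Bayer column phase separately, and read_raw is the same helper used below):

a = double(read_raw('M08-1044-cut_000000.dng'));
black = 2047;
col_med = median(a - black);                % per-column median, above black level
gain = median(col_med) ./ max(col_med, 1);  % normalize each column to the frame median
fixed = (a - black) .* gain + black;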
This shot could have been exposed +0.5 EV higher, maybe +1, and that's a big maybe.
To see how it would have looked at +1 EV, do this:
exiftool M08-1044-cut_000000.dng -WhiteLevel -BlackLevel
White Level : 16200
Black Level : 2047
exiftool M08-1044-cut_000000.dng -WhiteLevel=9123
How did I come up with this number? (2047 + 16200) / 2.
Proof that taking the average of black and white is equivalent to increasing the exposure by 1 stop (regarding overall look and highlight clipping point, but not noise levels) is left as an exercise for the reader.
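A hint for the exercise: the midpoint (black + white) / 2 equals black + (white - black) / 2, so it cuts the usable range above black exactly in half, and halving the linear headroom is one stop at the clipping point. A quick Octave check:

black = 2047; white = 16200;
new_white = floor((black + white) / 2)    % = 9123, same as black + (white - black)/2
(white - black) / (new_white - black)     % ~= 2, i.e. half the headroom = one stop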
Result...
octave:1> a = read_raw('M08-1044-cut_000000.dng');
octave:2> prctile(a(:), [1 5 10 50 90 99 99.9 99.99 100])'
ans =
2054.0 2066.0 2076.0 2176.0 3285.0 6054.0 8156.0 8828.6 9113.0
That means not a single pixel would have been clipped in this frame by increasing the exposure by 1 stop (in other words, no information would have been lost). You should be able to grade this file identically to the original (please try).
To simulate 2 stops, try white level 5585; for 1.5 stops, 7050. The math is left as an exercise (a quick check follows below).
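The general formula is white' = black + (white - black) / 2^EV; a one-liner in Octave reproduces the numbers above:

black = 2047; white = 16200;
ev = [1 1.5 2];
floor(black + (white - black) ./ 2.^ev)   % -> 9123 7050 5585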
octave:3> prctile(a(1:2:end,1:2:end)(:), [1 5 10 50 90 99 99.9 99.99 100])'
ans =
2050.0 2059.0 2067.0 2126.0 2767.0 4212.5 5019.0 5236.2 5364.0
That means, with 2 stops, no pixel on the red channel would have been clipped (therefore, no highlights should be clipped to pure white).
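For reference, the indexing above picks one Bayer site per 2x2 cell; assuming an RGGB CFA layout (which is what makes odd rows / odd columns the red channel), the four channels separate like this:

r  = a(1:2:end, 1:2:end);   % odd rows, odd cols (red, assuming RGGB)
g1 = a(1:2:end, 2:2:end);
g2 = a(2:2:end, 1:2:end);
b  = a(2:2:end, 2:2:end);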
My JPEGs (0, +1, +1.5, +2 from white level, +3, +2, +1.5, +1 from exposure slider; for other settings, change extension to .ufraw):
[attached JPEGs]
The highlights in the last two (where some channels are clipped) are desaturated (they do not retain the original colors, but they are not clipped to white either). That's how highlight recovery works in ufraw. To my knowledge, many commercial raw processing programs perform a lot better in this area (curious to see how Danne would render this).
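To give a rough idea of the desaturation approach (a hedged sketch only, not ufraw's actual algorithm; r, g, b are assumed to be white-balanced linear values normalized so 1.0 is the clipping point):

lum = 0.2126*r + 0.7152*g + 0.0722*b;    % Rec. 709 luminance as the gray target
t = max(max(r, g), b);                   % how close the brightest channel is to clipping
w = min(max((t - 0.9) / 0.1, 0), 1);     % blend weight ramps in over the top 10%
r = (1-w).*r + w.*lum;
g = (1-w).*g + w.*lum;
b = (1-w).*b + w.*lum;

Near-clipped pixels get pulled toward gray instead of being clamped, which is why they look washed out rather than blown to pure white.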
BTW, a bit off topic: a line above in the link you posted:
> The read noise can be isolated by taking a "black frame" image, an exposure with the lens cap on and the highest
> available shutter speed; there are thus no photons captured, and only the electronic noise from reading the sensor remains.
Is this the proper way of doing a dark frame? Setting the shutter speed to the highest available?
Not sure how exactly you reached this conclusion; this page (next time, make sure the article you are talking about can be identified, e.g. with a link) mentions the highest shutter speed as a way to analyze the read noise component (without e.g. thermal noise, which depends on exposure time, or PRNU, which depends on the number of photons captured), not as the best way to capture dark frames.
I'd take the dark frames under the same conditions as the original clip. Probably some settings (like shutter speed) won't matter much, but I didn't experiment with it.
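If you do experiment, averaging many dark frames gives a cleaner master dark; something like this in Octave (the dark_*.dng naming scheme is hypothetical, read_raw as above, and frame.dng stands for whatever frame you are correcting):

files = glob('dark_*.dng');              % hypothetical dark frame file names
acc = 0;
for i = 1:numel(files)
  acc = acc + double(read_raw(files{i}));
end
master_dark = acc / numel(files);
% to apply: clean = double(read_raw('frame.dng')) - master_dark + black;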
I have some vintage lenses that show quite big variations in vignetting at different f-stops, from f/1.4 up to f/5.6.
A flat frame would correct the vignetting in those lenses as well, but you will need one averaged flat MLV per aperture. Repeat for each ISO and some usual shutter speeds, and you'll probably end up with a 3D matrix of correction frames. That would definitely require some sort of automation.
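The correction itself is straightforward once you have the averaged flats; a minimal Octave sketch (file names are hypothetical, read_raw as above):

black = 2047;
flat = double(read_raw('flat_f1.4_iso100.dng'));  % averaged flat for this aperture/ISO
raw  = double(read_raw('M08-1044-cut_000000.dng'));
signal = max(flat - black, 1);                    % flat response above black level
gain = mean(signal(:)) ./ signal;                 % per-pixel gain, brightens the corners
corrected = (raw - black) .* gain + black;

The hard part, as you say, is the bookkeeping across the aperture/ISO/shutter matrix, not the math.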