Dual ISO - massive dynamic range improvement (dual_iso.mo)

Started by a1ex, July 16, 2013, 06:33:50 PM


Audionut

Try using the 20bit build from a couple of pages back.

I'm not seeing the problems you describe.
https://dl.dropboxusercontent.com/u/34113196/ML/forum_stuff/_MG_2268.jpg

Note that by using such a strong filter, the green and blue channels receive very little information.

a1ex

I've tested the 20-bit version with a few infrared files, but these two are even stronger.

The result is a little noisier than I expected. In the channel mixer, I found this ratio to be close to the sweet spot, noise-wise: R=1, G=0.6, B=0.1 (at auto WB 1, 6.5, 20).

What crop window did you use for the test sample? I can't identify it in the CR2s...

edit: after playing a bit with your sample, here are some tips:

- your files have a very high SNR in the red channel, but quite poor in green and blue channels; unlike regular pictures, where the green channel is cleaner
- therefore, the first tip would be to disable chroma smoothing (--no-cs)
- related: internally I use a temporary demosaic to get a better interpolation in half-res areas; however, this demosaicing will take some noise from the green/blue channels and put it into the red channel, so let's disable this step (--mean23, which averages only pixels of the same color)
- in your flower sample, you have out-of-focus shadows, so we don't need to recover full-res detail there (--no-alias-map)
- in your flower sample, the resolution is not critical either, so let's also try --no-fullres
- in some cases, stripe correction may be less than ideal, so let's try disabling it (--no-stripe-fix)

Results (click for full-res):


Script used to render these crops (you may find it useful for trying out things):
compare-options.py

Kalman analysis (optimal averaging formula, see http://robocup.mi.fu-berlin.de/buch/kalman.pdf ):

- let's assume constant additive noise, in arbitrary units normalized to get stdev=1 for the red channel
- green channel will have stdev=6.5 per pixel (from auto WB)
- blue channel will have stdev=20 (from auto WB)
- now, let's mix one red, two greens and one blue (as with dcraw -h) => green channel has stdev=6.5/sqrt(2) = 4.6 per RGGB cell
- variances: 1, 21, 400
- optimal blending factors (after auto wb, without color matrix): 1, 0.048, 0.0025.
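The arithmetic in the list above can be checked numerically. Here is a short numpy sketch of the inverse-variance (Kalman) weighting, under the same assumptions (constant additive noise, stdev normalized to 1 for red); the exact figures differ slightly from the rounded ones above:

```python
import numpy as np

# Per-pixel noise stdev after auto WB (multipliers 1, 6.5, 20),
# normalized so the red channel has stdev = 1.
wb = np.array([1.0, 6.5, 20.0])

# Averaging the two greens of an RGGB cell divides the green stdev by sqrt(2).
stdev = wb * np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])
variance = stdev ** 2            # approximately [1, 21, 400]

# Kalman filtering of a constant signal reduces to inverse-variance
# weighting; normalize so the red factor is 1.
factors = (1.0 / variance) / (1.0 / variance[0])
# factors is approximately [1, 0.047, 0.0025], matching 1, 0.048, 0.0025 above
```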

Now, look at tip #2 from above: debayering will usually do some kind of channel mixing (depending on the algorithm), which will introduce noise. Let's minimize this. Under the additive noise hypothesis (same noise stdev for all pixels), this means the noise from each pixel should be mixed in equal proportions, so let's do the debayer at UniWB (multipliers 1,1,1). This is not exactly a good idea for color images, but it should be close to optimal noise-wise. In this case, the optimal blending factors would be 1, 0.3, 0.05, since we no longer have WB applied.
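A quick sanity check of the UniWB factors: without WB applied, each channel value is 1/multiplier of its white-balanced counterpart, so the mixer factor has to absorb the multiplier. A numpy sketch, under the same constant-additive-noise assumption as the analysis above:

```python
import numpy as np

wb = np.array([1.0, 6.5, 20.0])               # auto WB multipliers
# Variances after WB, per RGGB cell (two greens averaged): ~ [1, 21, 400]
variance = (wb * np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])) ** 2
factors_awb = (1.0 / variance) / (1.0 / variance[0])

# At UniWB the channels carry no WB gain, so scale each blending
# factor back up by the corresponding WB multiplier.
factors_uniwb = factors_awb * wb
# factors_uniwb is approximately [1, 0.31, 0.05] -- close to 1, 0.3, 0.05
```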



Proof that the difference is caused by debayering: develop with --shrink=2 (half-res without debayer), both WB settings, and the noise difference will vanish (auto-wb vs uniwb).

Exercise to the reader: show (mathematically) that if you use AHD debayering performed at WB 1,6.5,20, and color matrix applied, the optimal blending factors for the channel mixers are close to 1,0.6,0.1 (my initial guess).

dpt90

Would it be safe to use Dual ISO mode for timelapses, taking maybe 300-500 shots at a time? Will this affect my sensor at all?

Daniel

dsManning

Quote from: dpt90 on May 31, 2014, 01:19:50 PM
Would it be safe to use Dual ISO mode for timelapses, taking maybe 300-500 shots at a time? Will this affect my sensor at all?

Daniel

No abnormal sensor exposure, and the same shutter count as a regular timelapse.  So this improves on HDR timelapses in the shutter-count department: instead of 2+ exposures (usually 3 or more) per frame for an HDR, you get increased DR with one shot.  Just a heads up: as with HDR timelapse, your post-processing time increases dramatically.

dpt90

Quote from: dsManning on May 31, 2014, 05:14:30 PM
No abnormal sensor exposure, and the same shutter count as a regular timelapse.  So this improves on HDR timelapses in the shutter-count department: instead of 2+ exposures (usually 3 or more) per frame for an HDR, you get increased DR with one shot.  Just a heads up: as with HDR timelapse, your post-processing time increases dramatically.

I know the drawbacks/benefits of doing HDR in post lol.... What I mean is: will shooting in Dual ISO mode put more strain on my camera's sensor at all? I've just heard that Dual ISO mode can put a lot of stress on the sensor, causing problems with it. Or should I not even worry about it?

Walter Schulz

Quote from: dpt90 on May 31, 2014, 06:06:49 PM
I've just heard that Dual ISO mode can put a lot of stress on the sensor, causing problems with it.

May I ask, where you heard it and who told the tale? Link, please!

Ciao
Walter

Hey

Alex mentions it on the original post, page 1. I admit I got scared too, as I don't exactly understand what it's doing.

Walter Schulz

He wrote about possible hazards to the sensor, esp. frying it.
He doesn't say anything about additional stress during Dual ISO operation under standard conditions per se.

a1ex

Side effects noticed by me: there seem to be more hot pixels than usual during long exposures, and the ISO alternation confuses the feedback loop a bit.

About temperature: anybody can run an experiment and record it during a timelapse, for example, with and without dual iso; if there are differences, such a test should reveal them. I didn't run any temperature comparison, so I can't tell.

I've taken around 20,000 pictures with dual ISO on the 5D3 by now (including some timelapses that I have yet to postprocess), so I guess it should be fine. Of course, that's not a guarantee.

dpt90

Quote from: Walter Schulz on May 31, 2014, 06:12:10 PM
May I ask, where you heard it and who told the tale? Link, please!

Ciao
Walter

Mostly hearsay from people who haven't used ML before, and it's also mentioned in the original post!

This is just my first time trying the nightly builds! I didn't want to have a bad experience lol....

dpt90

Quote from: a1ex on May 31, 2014, 10:56:37 PM
Side effects noticed by me: there seem to be more hot pixels than usual during long exposures, and the ISO alternation confuses the feedback loop a bit.

About temperature: anybody can run an experiment and record it during a timelapse, for example, with and without dual iso; if there are differences, such a test should reveal them. I didn't run any temperature comparison, so I can't tell.

I've taken around 20,000 pictures with dual ISO on the 5D3 by now (including some timelapses that I have yet to postprocess), so I guess it should be fine. Of course, that's not a guarantee.


This is good enough confirmation for me about it being safe to use. :)

Audionut

Those noise differences from debayering at different WB are quite remarkable.  Using the optimal blending, I prefer --cs2x2, as it retains good detail and doesn't suffer from those black spots.

One thing I am not certain on.

Quote from: a1ex on June 01, 2014, 08:18:09 AM
The color resolution in a debayered image is usually lower, but most algorithms are able to reconstruct full-resolution luma by exploiting inter-channel correlation.

It doesn't make sense to me why the luma resolution has to be reconstructed (at all).  Are you talking about the efficiency differences in the color filter array?

It seems to me that we should have full luma resolution, with the problems being the color resolution (since we only have 1/4 resolution for R and B, and 1/2 resolution for G), and the noise problems associated with white-balancing that color information.


I tried playing with filters about 12 months ago, for the express purpose of reducing the sensitivity of the G channel (ETTR'ing RB :) ), but since it was some cheap POS filter from China, the results were not spectacular.  I don't even seem to have sample images available anymore, but I do still have the filter.  Might have to break it out and re-check some things.

a1ex

Quote from: Audionut on June 01, 2014, 04:00:37 PM
It doesn't make sense to me why the luma resolution has to be reconstructed (at all)

With a naive demosaic algorithm that only interpolates from pixels of the same color, you don't get full luma resolution. You would get a luma resolution similar to upsampling some image by sqrt(2).

To interpolate missing green pixels, by using information from red and blue (since you don't have other hints), you need to make assumptions about how your captured data behaves locally (in the neighborhood used for interpolation). A common assumption is that chroma doesn't vary much locally, therefore we can smooth it out without much perceived loss in quality, and use the local variations in red/blue channels to reconstruct luma data.
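To make the "naive demosaic" point concrete, here is a toy numpy sketch (hypothetical code, not cr2hdr's actual interpolation) that fills missing green samples only from same-color neighbors, in the spirit of --mean23. Because it ignores the red/blue samples entirely, any high-frequency luma detail captured only at those sites is lost:

```python
import numpy as np

def interp_green_naive(bayer):
    """Fill missing green samples on an RGGB mosaic by averaging the four
    orthogonal neighbors (which are all green sites).  No inter-channel
    information is used, so luma detail present only in the red/blue
    samples cannot be recovered.  Border handling is crude (edge
    replication), so values on the outermost rows/columns are biased."""
    h, w = bayer.shape
    green = bayer.astype(float).copy()
    ys, xs = np.mgrid[0:h, 0:w]
    # In RGGB, R sits at (even, even) and B at (odd, odd): green is missing there.
    missing = (ys % 2) == (xs % 2)
    padded = np.pad(green, 1, mode='edge')
    avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    green[missing] = avg[missing]
    return green
```

On a flat field or a smooth gradient this is exact, but across a sharp edge the averaged greens blur: that is the sqrt(2)-upsampling-like softness described above, which smarter algorithms avoid by borrowing high frequencies from red and blue.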

Quote
In the second step the colors are updated by transferring information from the green to the red and blue channels. [...] It subsumes the observation, underlying most algorithms, that the high frequencies of the three channels are very similar.

Some stuff worth reading:
http://www4.comp.polyu.edu.hk/~cslzhang/paper/conf/demosaicing_survey.pdf
http://www.arl.army.mil/arlreports/2010/ARL-TR-5061.pdf
http://www.ipol.im/pub/art/2011/bcms-ssdd/

So, I call it "reconstructed" because it's not there from the beginning, but it's guessed (and most of the time it's guessed very well, unlike blind upsampling).

poromaa

A good debayering algorithm could probably also solve the chromatic aliasing caused by line skipping, maybe by using some heuristic methods for recognising "unlikely" pixel values. Combine this with some "super-resolution" methods (using several frames, as a1ex has linked to before), and the result could probably get very good.

Audionut

Quote from: a1ex on June 01, 2014, 05:32:35 PM
With a naive demosaic algorithm that only interpolates from pixels of the same color, you don't get full luma resolution. You would get a luma resolution similar to upsampling some image by sqrt(2).

To interpolate missing green pixels, by using information from red and blue (since you don't have other hints), you need to make assumptions about how your captured data behaves locally (in the neighborhood used for interpolation). A common assumption is that chroma doesn't vary much locally, therefore we can smooth it out without much perceived loss in quality, and use the local variations in red/blue channels to reconstruct luma data.

Why not process it like a video signal?  Brightness-match the signal based on the known properties of the CFA, i.e. green pixels have a unit of 1, blue has a unit of 0.8, red has a unit of 0.7 (for instance).

This way we have full-resolution luma detail, and we know that chroma isn't so important (perceptually), so we can interpolate that.  It seems too easy, so clearly I have some reading to do.  ;)


a1ex

If you shoot a monochrome image, yes, that will work.

Audionut

So why not a 2 stage process?

After the full luma resolution has been obtained, go back and process the chroma. 


edit:  Wait, I think I am missing the importance of "monochrome".

Luiz Roberto dos Santos

Quote from: a1ex on May 31, 2014, 10:32:48 AM
I found this ratio to be close to the sweet spot, noise-wise: R=1, G=0.6, B=0.1 (at auto WB 1, 6.5, 20).

Trying to find it for days... thanks.

Quote from: a1ex on May 31, 2014, 10:32:48 AM
What crop window did you use for the test sample?

The crop is 1:1, but this sample is from another image (the stripes are the same as in the others). If you want to play with other files, I put some more here (note: they're just random shots, for testing purposes).

Quote from: a1ex on May 31, 2014, 10:32:48 AM
Script used to render these crops (you may find it useful for trying out things):
compare-options.py

It will be very useful for some tests.



I don't know if it makes sense, but I get the best results with the IGV algorithm on B&W photos [?].
Thanks for this huge explanation, a1ex; I obtain a good result now. And @Audionut, thanks too.

Quote from: Audionut on June 01, 2014, 04:00:37 PM
I tried playing with filters about 12 months ago, for the express purpose of reducing the sensitivity of the G channel (ETTR'ing RB :) ), but since it was some cheap POS filter from China, the results were not spectacular.  I don't even seem to have sample images available anymore, but I do still have the filter.  Might have to break it out and re-check some things.

So, about the color filters: is it not a good deal to use them on digital? I bought this one to use with film, like Ilford HP5+, but I thought: why not use it on digital too?
Ansel Adams was a big user of this filter (091 dark red), and he said it amplifies (on analog) certain shades of gray.
Of course, on digital it doesn't work the same way, but I really like cutting 600nm and below from the spectrum.
For maximum information, is the 'correct' way not to use a color filter at all, or to use a violet one (regarding the discussion of ETTR'ing RB, with little absorption of the green channel)?

a1ex

Can you share the final result and the settings? Just curious.

IGV looks interesting. I'm not yet convinced to switch to RawTherapee though (I simply couldn't get the same look as from ufraw+enfuse, which fits my taste). I might have better luck with Darktable (they started from the ufraw codebase and still use a few key routines from there), but I had some trouble installing it.

nachordez

For me, after trying all the possibilities on Linux, Darktable has become THE tool. Worth trying, I think. Really useful, comfortable, and full of powerful tools... I clearly prefer it to RawTherapee, personally.
EOS 600D  /  OpenSuse 13.1

ShaunWoo

Hey guys, I've got a problem. My last few shoots were perfect: camera raw, no crop mode, with Dual ISO, processed with cr2hdr20 fine, no flickering, all perfect. Now I had a 3-day shoot, and when I tried to process the dual ISOs I got this error:

bright dark detection error, iso blending didnt work

Has anyone had this problem before? What do I do?

I also tried to process with the old cr2hdr and didn't have any problems with the above, but it did flicker, which is unacceptable for this production.

I'm uploading these DNG files for you to diagnose if you can, please. I'm really confused, as some files process fine and some don't, just with this cr2hdr20, but I need this one because it doesn't cause flickering (with the same levels parameter).


heres the files:
https://db.tt/VYeGuQoU

So, just to recap:
- cr2hdr.exe processes all files fine but flickers
- cr2hdr-20bit.exe doesn't process many files and gives the error: "bright dark detection error, iso blending didnt work"
- the shortcut file is what you drag the DNG files onto to use the same levels parameter
- M23-1529.000000: won't process with 20bit
- M23-1529.000001: won't process with 20bit
- M23-1529.000002: will process with 20bit
- M23-1529.000003: will process with 20bit

Marsu42

Concerning graymax: it's known that the algorithm completely misses if the shot is underexposed. Now, some very smart people would say "just don't underexpose!", but for simple /me, and probably others who expose for highlight safety: perhaps there is a way to adjust the AWB detection for these cases? If required, I can provide some dual_iso samples.

a1ex

Quote from: Marsu42 on June 03, 2014, 04:45:34 PM
It's known ...

By whom and by what method is it known?

I didn't notice major issues in my samples, except for those requiring more than +10 EV of correction, like this one (and I hope you don't underexpose like that). However, this algorithm looks for a large area of gray color, so it sometimes gets it completely wrong (but not because of exposure).

The graymed algorithm is a bit closer to the traditional gray-world hypothesis; maybe it's worth trying that too.
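For readers unfamiliar with the gray-world hypothesis mentioned above, here is a minimal numpy sketch. This is hypothetical illustration code, not cr2hdr's actual graymax/graymed implementation; it only shows the underlying idea that the scene is assumed to average out to neutral:

```python
import numpy as np

def grayworld_multipliers(r, g, b, estimator=np.mean):
    """Gray-world hypothesis: the scene averages to neutral gray, so the
    WB multipliers are whatever scales each channel's average to match
    green's.  Passing estimator=np.median gives a more outlier-robust
    variant, closer in spirit to a median-based rule."""
    return estimator(g) / estimator(r), 1.0, estimator(g) / estimator(b)

# Toy scene with a warm cast: red reads 2x green, blue reads 0.5x green.
rng = np.random.default_rng(0)
g = rng.uniform(0.2, 0.8, size=(64, 64))
r, b = 2.0 * g, 0.5 * g
mr, mg, mb = grayworld_multipliers(r, g, b)
# mr = 0.5 and mb = 2.0: exactly the multipliers that neutralize the cast
```

A scene dominated by one color (a flower close-up, a forest) violates the gray-world assumption, which is consistent with the failures on nature shots discussed below.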

Marsu42

Quote from: a1ex on June 03, 2014, 05:08:32 PM
By whom and by what method is it known?

Sorry for not providing an exact quote; I remembered you saying as much after introducing the dual_iso AWB, so I thought it wasn't necessary to provide exact proof. For my shots, graymax starts failing at ~2 EV of underexposure and results in a tint that's off the chart, usually magenta.

Quote from: a1ex on June 03, 2014, 05:08:32 PMThe graymed algorithm is a bit closer to the traditional gray-world hypothesis; maybe it's worth trying that too.

OK, I'll try that for a change. The reason for the failure is probably that it fails to find neutral (gray) areas, which might not be surprising in nature shots.

a1ex

So, if you set the camera on a tripod and shoot the same test scene at different exposure settings, you get different auto WB results?

I'd call this one a bug.