Dual ISO - massive dynamic range improvement (dual_iso.mo)

Started by a1ex, July 16, 2013, 06:33:50 PM



Levas

Found the Lightroom plugin cr2hdr v2.2 BETA 6 in the topic you linked.
And with 20bit export, the image is clean, no more stripe pattern !!!  :D

How come the 20-bit export works and the normal export doesn't  ???

Levas

Huh, wait, that's not the whole story.
I processed all frames now, and some of them are good and some of them are not (horizontal stripes).
These are frames shot within the same clip; they come from the exact same MLV file...

Uploaded 2 original frames from the same MLV clip to the following folder, along with their images processed by cr2hdr 20-bit.
One has horizontal stripes, one doesn't... ???

https://drive.google.com/folderview?id=0B1BxGc3dfMDaZWxfLTlVQmxnU1k&usp=sharing

How come, and is there a fix?

a1ex

In which of the two files do you see stripes?

( I'm not that good at pixel peeping, can't find any :P )

Levas

The last one has stripes

M27-1603_frame_000200-Processed-with20bitcr2hdr.DNG

Frame 200
You can see it below the reflecting sun on the car hood.
But if you download the file and push exposure, you can see it through the entire frame; I did that for you and took a screenshot, see "stripes.png"  :P
Also uploading a screenshot of frame 199 with the same exposure push, see "NoStripes(frame199).png" in the google drive folder.


a1ex

The problem is here: https://bitbucket.org/hudson/magic-lantern/src/f12976885e6168813b69dd9acdde719f25045a67/modules/dual_iso/cr2hdr.c?at=cr2hdr-20bit#cl-1520

If you have octave, run it with --iso-curve to see where the problem comes from.

Suggestions welcome (math skills needed).

Levas

Too many formulas for bedtime  :P, will look at it tomorrow

But I see it handles each frame separately; for dual ISO photos this makes sense.
For dual ISO video clips, it makes more sense to "find the factor and the offset for matching the bright and dark images" only for the first frame and treat all subsequent frames the same.

Just thinking out loud / dreaming:
I'm seeing a GUI (in the Lightroom CR2HDR plugin) with a slider for manually matching the dark and bright areas  :D
By viewing a realtime preview and moving the slider to match and get rid of any lines...
(probably takes a load of programming work  :P )



a1ex

I had some luck with this patch:

diff -r f12976885e61 modules/dual_iso/cr2hdr.c
--- a/modules/dual_iso/cr2hdr.c Tue Jul 15 08:45:33 2014 +0300
+++ b/modules/dual_iso/cr2hdr.c Thu Aug 28 00:39:38 2014 +0300
@@ -1546,10 +1546,11 @@
             int pa = raw_get_pixel_20to16(x, y-2) - black;
             int pb = raw_get_pixel_20to16(x, y+2) - black;
             int pi = (pa + pb) / 2;
+            if (ABS(pa - pb) > 500) pi = clip;             /* if it's likely to be aliasing, discard this pixel */
             if (pa >= clip || pb >= clip) pi = clip;
             interp[x + y * w] = pi;
             int pn = raw_get_pixel_20to16(x, y) - black;
-            native[x + y * w] = pn;
+            native[x + y * w] = pi < clip ? pn : clip;      /* if interpolation failed, discard this pixel */
         }
     }
     
@@ -1582,8 +1583,8 @@
     //~ int bmed = kth_smallest_int(tmp, n, n/4);

     /* also compute the range for bright pixels (used to find the slope) */
-    int b_lo = kth_smallest_int(tmp, n, n*98/100);
-    int b_hi = kth_smallest_int(tmp, n, n*99.9/100);
+    int b_lo = kth_smallest_int(tmp, n, n*95/100);
+    int b_hi = kth_smallest_int(tmp, n, n*99/100);

     /* median_dark */
     n = 0;


The fitting algorithm is, in a nutshell:
- place the bright pixel values on the X axis and the dark ones on the Y axis
- discard clipped pixels
- take the median of bright pixels, the median of dark pixels, and draw this point on the graph
- choose the highlights (between two high percentile values)
- take the median of bright and dark pixels from highlights, and draw this second point on the graph
- draw a line through these points - its slope gives the ISO, and its offset is the black level difference between the two exposures

The patch also discards pixels likely to be aliasing, which give poor matches (these are usually outliers), and increases the number of pixels considered highlights (which should filter highlight outliers a bit better).

Can you suggest a better algorithm for robust linear regression?

Levas

"Can you suggest a better algorithm for robust linear regression?"

Knowing that dual ISO processing has been developed and fine-tuned for probably more than a year... I doubt it  :P.
But I will take a look at the code and maybe I will come up with something  :D

Levas

Ok, here's an idea.
I don't know exactly what goes on in the code, so I might be wrong:

-I assume all pixels of the frame are read (so if you have a 20 megapixel image, it takes all 20 megapixels for calculating the difference between the dark and bright image areas).
-I'm sure white-clipped pixels are discarded; I assume this also means black-clipped pixels are discarded.

Maybe the problem appears when many pixels are discarded.
I don't see anywhere in the code that it compensates in the other ISO when throwing away clipped pixels.
I guess that if you have to discard about half a million white-clipped pixels in the bright area (higher ISO), you should also throw away half a million pixels in the dark area: the exact same amount, taking the pixels with the highest brightness values. This shifts the median of the non-clipped ISO, and I hope it brings it closer to the median of the other ISO.

If you don't compensate for throwing away clipped pixels in the non-clipped ISO part of the image, you are not comparing identical images.

EDIT: Of course the same goes for clipped shadow pixels: if you throw away half a million black-clipped pixels in the lower ISO, you also have to throw away the same amount of pixels in the higher ISO, taking the pixels with the lowest brightness values.

a1ex

There are two overlapping images: bright and dark. If a pixel from the bright image is clipped, the corresponding pixel from the dark image is not used.

However, to get perfect overlap, I've interpolated the two images with a very simple algorithm - so the two images are aliased (you will always compare a real pixel with an interpolated one). In this case, I believe the aliasing resulted in many outliers - the median, as used in midtones, has no trouble filtering them out, but in highlights there are only a few pixels selected, so outliers are more likely to cause trouble.

I ran a few tests over some images with the same problem, and the algorithm still struggles; what worked with the stock 20-bit version now has stripes after the above changes. So I'm not there yet.

Levas

"If a pixel from the bright image is clipped, the corresponding pixel from the dark image is not used."
It's all even more cleverly designed than I expected  :o

Just to understand.
The median from the midtones is no problem: lots of overlap and accuracy.
But you also need to know the exact difference in the highlights (so the brightness difference between the ISOs is not linear through the range?)

The whole situation with dual ISO is: you use the highlights from the lower ISO and have them clipped in the higher ISO... and yet you need a way to compare the highlights... ironic...

Levas

"but in highlights there are only a few pixels selected, so outliers are more likely to cause trouble."
Can't you build something in like: if fewer than x (for example 100) pixels are usable for comparing highlights, start again with slightly lower brightness values, until you get more than 100 pixels to compare?
In the worst case this moves you toward the midtones and still won't give good results, but maybe it does work for most images?

a1ex

It's linear, and the thing we are talking about is just a linear regression. But you have a ton of outliers, so least squares is not going to work.

Here's the data set for your image (frame 200, cr2hdr-20bit unmodified; left is linear, before matching, right is log, after matching):



The green dots are what the algorithm selected as "highlights" (wrong choice).

Here's the correct choice (frame 199):


Levas

You can see in the right graph of frame 200 that the red line is off; indeed, the highlight point is wrong. The blue dots start to cluster above the red line from the middle of the graph towards the highlights.

I tried to process the already-processed frame 200 with CR2HDR again (the processed image does still look like a dual ISO file  ;D), but it spits out the exact same result.





Levas

Just thinking out loud (dual ISO is about a year old, so probably a lot has already been discussed during development; sorry if these things have already been suggested  :) )

"I've interpolated the two images with a very simple algorithm"
I don't see the need for comparing two exact overlays pixel by pixel  :-\ (probably missing something important here  :P)


If I knew nothing about dual ISO and someone gave me one of those raw dual ISO images and asked for the brightness difference between the dark and light lines, I would do something like this:

-Read all the dark pixels (let's say 10 megapixels) and sort them by value, then throw out all values below 100 (for example); let's say 100,000 pixels are thrown away because they are too low in value.
-Read all the bright pixels (the other 10 megapixels) and sort them by value, then throw out all values above 15000 (for example); let's say 200,000 pixels are thrown away because they are too high in value.

So you end up with two strings of numbers (100, 100, 100, 101, 101, 110 ... 14322, 14322, 14355, etc.)

Now compensate for throwing away pixels.
For the bright value string, discard the first 100,000 numbers (since they are sorted by value, this throws away the 100,000 darkest pixels in the bright image).
For the dark value string, discard the last 200,000 numbers (this throws away the 200,000 brightest pixels in the dark image).

Now you have two strings of values to compare.
The difference in average of these two strings is the brightness difference...
EDIT: so no pixel-by-pixel difference, but the average of all "bright" pixels compared to the average of all "dark" pixels.

But I'm probably overlooking something, because you're using two points (midtones and highlights).
So maybe you can take the median (or average) of the first half of the two strings for the midtone point,
and the median (or average) of the second half for the highlight point.

a1ex

That was one of the first algorithms I tried, but it's worth revisiting. Updated the log graphs (the magenta line is the result of comparing histograms directly at various percentiles).

This curve is biased at the sides (most likely because of different amounts of noise in the two exposures), but the middle part seems unbiased. Worth trying on the other tricky test images.

Levas

Comparing histograms directly makes more sense to me than comparing interpolated images with each other.
The middle of the magenta curve does a good job  :D, you can see it nails it for frame 200!


Levas

You probably know this already.

But I believe there are formulas to get the exact slope from the middle of the magenta line.
Had this in math in high school (more than 15 years ago  :P ); I believe you need the first derivative and set it equal to zero ??? or something like that  :P

ed.jenner

Yeah, second derivative - or take the pink curve where the second derivative is smaller than a threshold.  Not sure if it is going to work though - depends on exactly what that curve looks like and how close you need to get.

One way I have gotten around fitting data somewhat like this is iterative least squares.  What I did was:

Do least squares, compute the standard deviation.
Remove outliers more than 2 SD from the fit.
Refit.
Remove outliers more than 1.5 SD from the new fit (using the SD computed from the initial fit, not a new SD).
Refit.

This could work depending on the data and how much computation you can afford; moving in small steps does better, but you may only need 3 iterations, at SD 2, 1.75, 1.2, say.

You could also compute the pink curve, then only use the data where its second derivative is below a certain threshold (not sure how variable this will be with different ISO combinations though), and then use iterative least squares on the remaining data.

If the pink and red lines are supposed to always overlap in the center, you could also compute the pink curve, choose say 40 points (or however many make sense) on that curve, and then use iterative LS to fit a line to those data.  That would be a lot quicker than using all the data, and you could do enough iterations to (hopefully) guarantee fitting the center part.

Not sure if any of this is actually helpful, but just in case...




a1ex

Estimating the derivative locally is not exactly accurate.

Another problem with that curve is that - if the two exposures have different amounts of noise or aliasing - it becomes biased. The median is fine, but percentiles at the extremes may not be. This is why my previous algorithm first selected some highlights, then took their median. Here's a test case to show it (the magenta curve is biased, but the highlight median is not):

Levas

Am I right in saying that dual ISO means you can only compare the green channel, because it's available in both the dark and bright lines?
Red and blue pixels are split between the dark and bright images, so comparing brightness levels of either red or blue pixels is probably meaningless (a.k.a. noise)?

a1ex

I compare all of them, assuming they are all linear data.

Levas

It's all linear data.
But I can imagine you get biased results, because they have different filters in front: only red or only blue light passes.
A blue sky will appear bright in the blue channel and very dark (if it appears at all) in the red channel.

Let's say you have 3 stops of difference in dual ISO; I can imagine that for the blue sky you get more than 3 stops of difference between the red and blue channels because of the selective filters.

a1ex

So? I compare reds with reds and blues with blues, not reds with blues.

Levas

But dual iso means alternating lines, so the bright image contains the blue pixels and the dark image contains the red pixels.

You're using interpolated images for comparing, so the difference may not be that big of a problem  ???
And the results you get with CR2HDR-20bit are amazing, so it does a really good job.

But comparing interpolated data still sounds like a crazy idea; using created/fake data and treating it as hard actual data can't be the best solution.
And on the other hand it works like a charm for most dual ISO images  :D, weird  ???