How to view RAW histograms after taking the image?

Started by heyjoe, September 23, 2017, 04:56:25 PM

a1ex

Before including the updated code in regular nightlies, may I ask for testing feedback from owners of the other cameras present in the crop_rec_4k builds? (700D, EOSM, 100D).

Reason: other sensors might clip to white in different ways, and the current heuristic makes a tight assumption: that the clipping point is harsh and spans only one or two levels (not more). I'm not sure whether this holds true on other camera models.
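Roughly, the assumption could be sketched like this (bin layout, names and thresholds are illustrative, not the actual code):

#include <stdint.h>

#define HIST_BINS 16384     /* one bin per 14-bit raw level (assumed) */

/* "Harsh" clipping: nearly all saturated pixels pile up in at most
 * one or two adjacent levels, so the spike should tower over its
 * left-hand neighbors. The threshold is a placeholder. */
static int is_harsh_clip(const uint32_t hist[HIST_BINS], int peak)
{
    uint32_t spike = hist[peak];
    if (peak + 1 < HIST_BINS)
        spike += hist[peak + 1];                  /* allow a 2-level span */

    uint32_t neighbors = (peak >= 2) ? hist[peak - 1] + hist[peak - 2] : 0;
    return spike > 100 * (neighbors + 1);         /* spike dwarfs neighbors */
}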

heyjoe

Quote from: a1ex on October 04, 2017, 05:55:09 PM
Reason: other sensors might clip to white in different ways, and the current heuristic makes a tight assumption: that the clipping point is harsh and spans only one or two levels (not more). I'm not sure whether this holds true on other camera models.

I don't know if this may help, but here is a video about RawDigger which discusses highlight histogram shapes (scroll to 2:40). The speaker says a "bell" shape is typical for Canon, and also talks about how the saturation value changes at different exposure parameters (which you mentioned in an earlier reply).

DeafEyeJedi

Quote from: a1ex on October 04, 2017, 05:55:09 PM
Before including the updated code in regular nightlies, may I ask for testing feedback from owners of the other cameras present in the crop_rec_4k builds? (700D, EOSM, 100D).

Reason: other sensors might clip to white in different ways, and the current heuristic makes a tight assumption: that the clipping point is harsh and spans only one or two levels (not more). I'm not sure whether this holds true on other camera models.

Done.
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

a1ex

Tested some long exposures (30 seconds at higher ISOs) - the sensor no longer clips harshly, so my algorithm thinks the images are not overexposed (even when the overexposure is obvious to the human eye on the histogram).

Will push a fix shortly.

heyjoe

Quote from: a1ex on October 07, 2017, 11:20:15 AM
Will push a fix shortly.
Good. Please let us know if we need to test anything again.

Audionut

Quote from: heyjoe on October 07, 2017, 12:16:22 PM
Good. Please let us know if we need to test anything again.

Test the latest code changes.  Does the latest fix actually work?  Did it break something? 

Test as if you are doing QC for a company.

heyjoe

Quote from: Audionut on October 09, 2017, 01:31:42 AM
Test the latest code changes.  Does the latest fix actually work?
Where is it? Has it been published? What exactly is new to test?

Quote
Did it break something? 
I definitely prefer not to test for the purpose of answering this particular question. A non booting camera is the last thing anyone needs.

Quote
Test as if you are doing QC for a company.
I have been doing it so far. Not sure what you are implying.

Audionut

Quote from: heyjoe on October 09, 2017, 02:34:58 AM
Where is it? Has it been published?

Here.  Specifically here and here.

In that first link, on the right hand side next to the date, you can see that the fix was pushed to the crop_rec_4k and iso_research branches, not the nightlies (unified branch).  In other words, the semi-stable nightly build doesn't yet have the fix applied, because...........it needs people to test it first.

So, if you really want to help the project (no implications, simply statements (perhaps I should find a way to reword this statement)), the next (easy) step is to go here.  If you have a 5D3, 100D, 700D or EOSM, yay, we have a winner, look at "4K raw video recording; lossless compression".  You will notice the following in the changelog for the latest builds there:

Raw backend: allow non-harsh clipping for white level
(should fix clip warnings at long exposures with high ISOs, e.g. 30" ISO 6400)


So these builds have the latest code change applied.

If you don't have one of the above mentioned cameras, and still (really, really, really*) want to help, then you'll have to learn how to compile the source code, compile the iso_research branch and test as if you were.............

*  If you really want to help, but cannot or will not follow the above, then sorry, you can't help.  Maybe next time.

Quote from: heyjoe on October 09, 2017, 02:34:58 AM
What exactly is new to test?

The raw histogram.

a1ex has pushed some new code changes regarding the raw histograms, to fix an issue he noticed (while testing as if he was doing QC for a company) regarding high ISOs and long exposures. This code change may have fixed the issue, it may have only fixed the issue on his camera, it may have broken something else (maybe the histogram no longer works on a 6D).  And so, to beat that dead horse, this needs testing as if..........

Quote from: heyjoe on October 09, 2017, 02:34:58 AM
I definitely prefer not to test for the purpose of answering this particular question. A non booting camera is the last thing anyone needs.

Whoa, slow down there.  I didn't mean the latest fix might shatter your camera into a million pieces (although it may), only that the histogram might not even work any longer on your camera after this code change.  The histogram might be broken.  Maybe the histogram menu item no longer displays.  Maybe some other minor little thing (that won't result in a non booting camera) might be broken.  Someone needs to test this.  You asked "Please let us know if we need to test anything again".

Quote from: heyjoe on October 09, 2017, 02:34:58 AM
Not sure what you are implying.

Most of the time we focus on something simple, or expect someone else to do all of the work, or maybe we just aren't entirely sure exactly how to help efficiently.  So don't just take a single photo, observe that the histogram appeared to work, and call it a day.  Test that sucker.  Try and find a repeatable test case where the histogram doesn't work.
If you find something broken, good, tell a1ex while he's in the mood to play with this specific piece of ML code.  If you don't find anything broken, even better, make sure you tell a1ex that also, as it will build his confidence to push this fix into the nightly builds so that everyone can enjoy the fruits of your work.

Most importantly, if my post offended you, go back and read it with a sense of humor.  :D  It's not personal.  Perhaps you cannot help further, that's fine.  But someone else quietly reading this thread may be able to help, and so the steps I have outlined may be useless for you, but...........

Audionut signing off, good day Sir.

heyjoe

Thanks for the clarifications. I just needed to know when the changes are compiled and ready for testing, as I still don't know how this whole system of development works.

Unfortunately I cannot test "as if doing QC test for a company" because:

- companies testing all possible scenarios provide the necessary equipment for it (which I don't have)
- people at companies who test have a lot of fully dedicated time for it (which I don't have)

So it would be quite silly to pretend that I am doing something which I am not. What I can do is to repeat the tests done so far (for consistency). Perhaps I won't be able to set up a scene for testing ISOs above 6400 with exposures longer than 30" as I normally work with sufficient light (e.g. strobes or daylight) and such a low-light scenario would be quite out of my range.

I will test the latest crop_rec_4k build and write again.

Danne

Checking the branches (code) is an easy way to follow what's happening:
https://bitbucket.org/hudson/magic-lantern/branches/

The recent changes will always be on top. ML is fully open source and changes to the code are published all the time. Here is the crop_rec_4k branch for instance:
https://bitbucket.org/hudson/magic-lantern/branch/crop_rec_4k

Learning how to compile and set up a source tree is a great way to start digging into the code (I can hardly follow 1 percent of the stuff but it's nice to read and compile it nevertheless :P):
Mac
http://www.magiclantern.fm/forum/index.php?topic=16012.0
Win
http://www.magiclantern.fm/forum/index.php?topic=15894.0


Audionut

Equipment needed.

- Camera............check
- Grey matter between ears...........check

We are not a company, we are a bunch of guys continually doing stuff in our free time.  a1ex tends to just have a bit more spare time than the rest of us.

The best part of this whole thing, is that we are not some company.  We don't have a hill for shit to roll down when things go wrong, shareholders screaming and bitching and moaning, middle managers overstepping their pay grade, or all of the other negative things associated with a company.  We understand that each person is helping, just because.  Not for monetary gain, but for reasons that supersede money.  So when we say, test as if doing QC for a company, we don't mean, do this and we will employ you as QC, we mean, spend a little extra effort with that grey matter between your ears than you might otherwise have applied.  Test outside of your range and don't just consume.

Cheers.

Audionut

That last post sounds a little harsh.  Feel free to tell me I'm being a prick via PM.  I promise that's not my intention, but I would rather have you vent some steam, and continue to support the project further, than get discouraged and pack your bags.

heyjoe

Tested as promised (CR2 files and histogram screenshots were attached for each setting):

ISO 160: 0.8s, 1s, 1.3s
ISO 100: 1.6s, 2s, 2.5s
ISO 6400, 30s: f/8.0, f/7.1, f/9.0, f/10.0

heyjoe

Tested in the last 3 days in a real shoot (can't provide raw files, sorry):

- Some shots for which ML shows E0.1 are slightly clipped (when viewed in RawDigger afterwards)
- During shooting for some shots ML (QR) shows both not full ETTR and clipping, e.g. E0.3 and at the same time overexposure indication (dots in histogram). The strangest case was E0.9 with OE dots.
- Another case while shooting: I see E0.4 and I increase exposure time by 1/3 EV (e.g. from 1/160 to 1/125). Then I take the same picture again (same light and composition, nothing changed) and I get an overexposure indication. Expected: E0.1

Audionut

I think we need to remember that the WL (white level) calculation is being done on a tiny little ARM processor, and needs to be in real time.

Quote from: heyjoe on October 22, 2017, 05:39:51 PM
- Some shots for which ML shows E0.1 are slightly clipped (when viewed in RawDigger afterwards)

I think we need to accept here that anything between E0.0 and E0.2 may in fact be clipped.  Maybe only slightly, or maybe not at all.

Quote from: heyjoe on October 22, 2017, 05:39:51 PM
- During shooting for some shots ML (QR) shows both not full ETTR and clipping, e.g. E0.3 and at the same time overexposure indication (dots in histogram). The strangest case was E0.9 with OE dots.

Reproducible?  I hope to have some time in the coming week to actually use my DSLR.

Quote from: heyjoe on October 22, 2017, 05:39:51 PM
- Another case while shooting: I see E0.4 and I increase exposure time with 1/3EV (e.g. from 1/160 to 1/125). Then I take the same picture again (same light and composition, nothing changed) and I get overexposure indication. Expected: E0.1

This one has been around since forever.  Don't forget that electronic aperture is not always consistent.
Again, I think we need to allow E0.0>E0.2 tolerance.  Either settle for very slightly underexposed, or bump shutter 1/3EV and settle for possible very slight overexposure.
Or find someone with the coding talent and desire to increase accuracy.  We're really splitting hairs though, since the current implementation is decades better than the JPG-based histo that Canon provides.

For 1 and 3, when you have time to pixel peep the histogram, you have time to shoot two images 1/3 EV apart, to cover all bases, and get that warm fuzzy feeling inside knowing that you shot ETTR as close as possible.  It's what I do.
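For reference, the E0.x hint is essentially the log2 headroom between the brightest raw value and the assumed white level - something like this sketch (names are illustrative, not ML's actual code):

#include <math.h>

/* E0.x ~ how many stops you could still push to the right before
 * hitting the assumed white level. If the real white level sits
 * below the assumed one, a small positive hint can still be
 * clipped - hence the ~0.2 EV tolerance discussed above. */
static float ettr_hint_ev(int max_value, int white_level, int black_level)
{
    if (max_value >= white_level)
        return 0.0f;                              /* at/past clipping */
    return log2f((float)(white_level - black_level)
               / (float)(max_value  - black_level));
}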

Thanks for your time, the second bug needs further investigation.

BTW, I caught your last post before deletion.  Thanks for your words.

heyjoe

Quote from: Audionut on October 28, 2017, 02:04:48 PM
I think we need to remember that WL calculation is being done on a tiny little ARM processor, and needs to be in real time.
Of course. But it is not real time in the sense that it happens at the time of exposure - it happens right after capture, right? Or are you saying that better accuracy would need some additional computation power which would make the displaying of clipping and histograms much slower?

Quote
I think we need to accept here that E0.2 > E0.0 = clipped.  Maybe only slightly, or maybe not at all.
The problem is that it actually indicates underexposure (incomplete ETTR) for an actually clipped image, i.e. one may think that everything is fine when data loss (clipping) is actually occurring.

Quote
Reproducible?  I hope to have some time in the coming week to actually use my DSLR.
Should be. I have seen it at least 50 times.

Quote
This one has been around since forever.  Don't forget that electronic aperture is not always consistent.
Again, I think we need to allow E0.0>E0.2 tolerance.  Either settle for very slightly underexposed, or bump shutter 1/3EV and settle for possible very slight overexposure.
Or find someone with coding talent and desire to increase accuracy.  We're really splitting hairs though, since the current implementation is decades better then the JPG based histo that Canon provides.

For 1 and 3, when you have time to pixel peep the histogram, you have time to shoot two images 1/3EV apart, to cover all bases, and get that warm fuzzy feeling inside knowing that you shot ETTR as close as possible.  It's what I do

Thanks for your time, the second bug needs further investigation.
Considering all you say, and what I asked earlier (about the possibility of having raw histograms in Play mode, not only in QR): what do you think about using libraw to decode the CR2 file? RawDigger is based on libraw, so I was just wondering if that could make the whole thing easier to code (and hopefully more accurate).

a1ex

If it's reproducible, please upload some samples. The CR2 that caused the issue should be enough.

For decoding the CR2, I only need some really basic parsing (retrieving the StripOffsets tag from the CR2 header). Decoding the image data is going to be very slow on this CPU (several seconds, maybe even 1 minute), but I'll reuse Canon's decoder (very fast, running on JPCORE - probably some sort of DSP). The code should be small, without dependencies on large external libraries (but you can reuse code from them if the license allows it).
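Sketched roughly, that basic parsing could look like this (assuming the usual little-endian CR2 layout with the raw-IFD pointer at byte 12, and no error handling - illustrative only, not the actual code):

#include <stdint.h>
#include <string.h>

static uint16_t get16(const uint8_t *p) { return p[0] | (p[1] << 8); }
static uint32_t get32(const uint8_t *p)
{
    return p[0] | (p[1] << 8) | (p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Walk the CR2/TIFF header to the IFD holding the raw sensor data
 * and return the StripOffsets (tag 0x0111) value, i.e. where the
 * compressed raw stream starts in the file. */
uint32_t cr2_strip_offset(const uint8_t *buf)
{
    if (memcmp(buf, "II", 2) != 0 || get16(buf + 2) != 42)
        return 0;                       /* not a little-endian TIFF */

    uint32_t raw_ifd = get32(buf + 12); /* CR2: pointer to the raw IFD */
    uint16_t entries = get16(buf + raw_ifd);

    for (uint16_t i = 0; i < entries; i++)
    {
        const uint8_t *e = buf + raw_ifd + 2 + 12 * i;
        if (get16(e) == 0x0111)         /* StripOffsets */
            return get32(e + 8);        /* single strip: value inline */
    }
    return 0;                           /* tag not found */
}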

The accuracy issue comes from the clipping point not being known (it's variable). Canon code has a heuristic, but it's pretty conservative; their metadata can be 0.38 EV below the true clipping level (only from testing a few settings; if you brute-force the entire parameter space, you might find even more pathological cases). We have two cases:

- if the maximum value from your image is below Canon's heuristic, we can assume the image is not overexposed (I hope so)
- if it's above Canon's heuristic, it might be overexposed, or it might be not; we don't know for sure. In this case, I use the histogram heuristic to decide whether the image is clipped or not (whether there's a peak on the right side of the histogram). This is not 100% accurate, and I'm looking for counterexamples where my heuristic gives the wrong results.
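A minimal sketch of that two-case decision (the names and the histogram-shape check are placeholders, not the real implementation):

enum clip_verdict { NOT_CLIPPED, CLIPPED };

static enum clip_verdict check_overexposure(int max_value,
                                            int canon_white_hint,
                                            int right_peak_detected)
{
    /* case 1: safely below Canon's conservative level */
    if (max_value < canon_white_hint)
        return NOT_CLIPPED;

    /* case 2: ambiguous zone -- let the histogram shape decide */
    return right_peak_detected ? CLIPPED : NOT_CLIPPED;
}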

That was the problem of deciding whether the image is overexposed or not.

The second problem - given an image that's not overexposed, how far can you push it to the right?

This one is harder because... if the image is not overexposed, you don't know the clipping point. We have Canon's heuristic, which is a lower bound (off by some 0.1 ... 0.4 stops, depending on exposure settings). So I just use that level as a reference (unless the max value is already above that level).

Of course, you could take test images at any possible ISO x any possible shutter speed, write down the white level, check repeatability (different bodies of the same model might give different clipping points, they might change with temperature and so on), repeat for all other camera models supported by ML. Or you could imagine some sort of learning algorithm that uses the results from past autodetections (on overexposed images) to predict the clipping point on non-overexposed ones.
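Such a learning scheme might be as simple as a small cache keyed by exposure settings (purely hypothetical - nothing like this exists in the codebase):

#define MAX_SLOTS 64    /* e.g. one slot per (ISO, shutter) bucket */

static int learned_white[MAX_SLOTS];    /* 0 = nothing learned yet */

/* store a white level autodetected from a clearly overexposed shot */
static void remember_white(int slot, int detected_white)
{
    if (slot >= 0 && slot < MAX_SLOTS)
        learned_white[slot] = detected_white;
}

/* on a non-overexposed shot, prefer a learned value over the hint */
static int predict_white(int slot, int canon_hint)
{
    int w = (slot >= 0 && slot < MAX_SLOTS) ? learned_white[slot] : 0;
    return w ? w : canon_hint;
}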

Note: the white level also varies with the aperture, but these variations are well understood (it's a digital gain that can be canceled). This simplifies the problem by "only" one order of magnitude.
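Canceling that gain would then be a one-liner, assuming the per-aperture gain (in EV) is known for the camera (hypothetical sketch):

#include <math.h>

/* undo the aperture-dependent digital gain so white levels taken at
 * different apertures become comparable; gain table is camera-specific */
static float white_without_aperture_gain(float white_level, float gain_ev)
{
    return white_level / exp2f(gain_ev);
}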

The current approach doesn't learn anything - it attempts to figure out everything from the current image (from scratch).

Audionut

Oh that's right, I forgot about the guessing game you have to play.  When you have all of the data you speak of, raw_diag comes in handy once one knows the actual clipping values.   :)

heyjoe

Quote from: a1ex on October 28, 2017, 03:58:28 PM
If it's reproducible, please upload some samples. The CR2 that caused the issue should be enough.
As I explained, I cannot provide CR2 files because it was a commercial shoot and there are legal agreements involved. Sorry. But I am sure you can reproduce it if you try. Most of the time I was shooting at ISO 160 f/8.0 and the main variable was exposure time (through which I controlled ETTR). Of course I made several exposures (+ and -) to make sure I have non-clipped files too.

Quote
The accuracy issue comes from the clipping point not being known (it's variable).
Yes, I thought about that and I remember you mentioned it.

Quote
Canon code has a heuristic, but it's pretty conservative; their metadata can be 0.38 EV below the true clipping level (only from testing a few settings; if you brute-force the entire parameter space, you might find even more pathological cases).
I have actually been thinking about making something like this - creating a map of the entire parameter space in order to find the maximum values for all possible combinations of ISO, shutter speed and f-stop. But I am afraid that:
1) I may burn my sensor through so many overexposure tests
2) I have no lens faster than f/2.8
3) I am not sure if this is lens dependent, i.e. if for example a 70-200/2.8 at 70mm will give the same saturation values as a 24-70/2.8 at 70mm, or if focal length plays a part in all that
4) it may take a long time which I don't have
5) it may turn out to be pointless
6) heuristic based on shape of the highlights may work better (what you currently do afaik)

Didn't you mention that you have found a register which contains the saturation value? Doesn't that help?

Quote
We have two cases:

- if the maximum value from your image is below Canon's heuristic, we can assume the image is not overexposed (I hope so)
- if it's above Canon's heuristic, it might be overexposed, or it might be not; we don't know for sure. In this case, I use the histogram heuristic to decide whether the image is clipped or not (whether there's a peak on the right side of the histogram). This is not 100% accurate, and I'm looking for counterexamples where my heuristic gives the wrong results.
Hm. What if you loop programmatically generated histograms through your algorithm and see if any particular shape gives a wrong result? Would that help to improve the heuristic?
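For example, something like this host-side harness (a rough sketch; looks_clipped() stands for whatever detector is under test, and all names are made up):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define HIST_BINS 16384

/* the detector under test -- link against the real heuristic */
int looks_clipped(const uint32_t hist[HIST_BINS]);

/* noisy baseline plus a synthetic "clip spike" of chosen width/height */
static void synth_histogram(uint32_t hist[HIST_BINS],
                            int pos, int width, uint32_t height)
{
    for (int i = 0; i < HIST_BINS; i++)
        hist[i] = rand() % 50;
    for (int i = 0; i < width && pos + i < HIST_BINS; i++)
        hist[pos + i] = height;
}

int main(void)
{
    static uint32_t hist[HIST_BINS];
    for (int w = 1; w <= 16; w *= 2)      /* sweep the spike width */
    {
        synth_histogram(hist, 15000, w, 100000);
        printf("spike width %2d -> %s\n",
               w, looks_clipped(hist) ? "clipped" : "not clipped");
    }
    return 0;
}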

Quote
The second problem - given an image that's not overexposed, how far can you push it to the right?

This one is harder because... if the image is not overexposed, you don't know the clipping point. We have Canon's heuristic, which is a lower bound (off by some 0.1 ... 0.4 stops, depending on exposure settings). So I just use that level as a reference (unless the max value is already above that level).

Of course, you could take test images at any possible ISO x any possible shutter speed, write down the white level, check repeatability (different bodies of the same model might give different clipping points, they might change with temperature and so on), repeat for all other camera models supported by ML. Or you could imagine some sort of learning algorithm that uses the results from past autodetections (on overexposed images) to predict the clipping point on non-overexposed ones.
Which adds 7), 8)... etc., and the whole thing needs many man-hours and many camera bodies to test completely. That's what I meant when I said that testing as if for a company is not possible due to limited resources.

Quote
Note: the white level also varies with the aperture, but these variations are well understood (it's a digital gain that can be canceled). This simplifies the problem by "only" one order of magnitude.
Yes, you mentioned that. But if you start to account for that as a well known parameter - would that improve the situation?

Quote
The current approach doesn't learn anything - it attempts to figure out everything from the current image (from scratch).
I understand. Hm. How about a different approach:

Can you make (a menu option for) a larger histogram, e.g. a full-screen zoom of the highlight/shadow areas? Then one would be able to see better for oneself what is going on, and hopefully read the results of the heuristic using human intelligence instead of relying only on clip warnings and one number (which may not be as accurate as we have seen). Currently the histogram is really microscopic and cannot be used for any significant evaluation, but if it could be, that may be helpful I think. This is similar to the UniWB method, but considering that you can show the actual raw histograms, it will be far more accurate. To avoid obstructing the image too much it may draw just the contour of the histogram as an overlay (similar to RawTherapee). What do you think:



Another thing that comes to mind: is it possible to instruct the camera not to change the saturation value when exposure parameters change, i.e. to use a fixed maximum for any set of parameters and then things will be very simple?

Audionut

Quote from: heyjoe on October 28, 2017, 06:50:05 PM
But I am afraid that............and the whole thing needs many man hours and many camera bodies to test completely. That's what I meant that testing as if for a company is not possible due to limited resources.

That's the entire point.  No one wants to do all of that, not just you, for the reasons you have outlined, but because it's splitting hairs.  Who in their right mind would devote so much time and effort for 1/3 EV?  Oh, and then someone has to actually maintain that code (for the life of the project, until someone else gets the shits and removes it, or a better solution presents itself).

In an ideal world with unicorns and fairies, it would be wonderful to also have an extremely accurate histogram.  In the real world, we have to accept limitations.  Just saying  :D

heyjoe

Quote from: Audionut on October 28, 2017, 07:10:38 PM
That's the entire point.  No one wants to do all of that, not just you, for the reasons you have outlined, but because it's splitting hairs.  Who in their right mind would devote so much time and effort for 1/3 EV?  Oh, and then someone has to actually maintain that code (for the life of the project, until someone else gets the shits and removes it, or a better solution presents itself).

In an ideal world with unicorns and fairies, it would be wonderful to also have an extremely accurate histogram.  In the real world, we have to accept limitations.  Just saying  :D
Yes, I understand what you are saying, and that's why my first suggestion is based on the fact that the code which creates the histogram is already available. So all that may be needed is an option to draw it bigger. Then, if the top 1 EV of the raw channel histograms can be zoomed so that one can visually see the clipping, we can rely on human intelligence, not only on heuristics (which may be limited in particular situations).
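The zoom itself should be cheap, since the top 1 EV is just the upper half of the signal range above black - a rough sketch (display width and names are made up):

#include <stdint.h>

#define HIST_BINS 16384
#define SCREEN_W  720       /* assumed display width */

/* Re-bin the top 1 EV (the upper half of the signal range above
 * black) across the full display width. */
static void zoom_top_1ev(const uint32_t hist[HIST_BINS],
                         int black, int white, uint32_t out[SCREEN_W])
{
    int lo = black + (white - black) / 2;   /* 1 EV below clipping */
    int span = white - lo;

    for (int x = 0; x < SCREEN_W; x++)
        out[x] = 0;
    for (int i = lo; i <= white && i < HIST_BINS; i++)
        out[(i - lo) * (SCREEN_W - 1) / span] += hist[i];
}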

Audionut

Pretty sure that has been discussed at some time, but I can't find the discussion with a quick search.
I like the idea.  I think I would stick to that display most of the time, with ETTR hint in case the data falls outside of the display.  Zebras for shadows.



Topic Split:  Full-screen histogram WIP

heyjoe

Thanks for explaining. Sounds good. Please let us know when there is a build to test.

BTW, I wonder why the heuristic makes mistakes if it is based on that same info. Does it also use the compressed CDF for its calculations?

Audionut

Quote from: a1ex on November 02, 2017, 09:11:39 PM
Median marker is 5 stops above the noise floor, shadow markers are at 2 stops above (more or less arbitrary choices).

This one had me scratching my head.  Being stupid has its advantages; one has so many moments of enlightenment.  "Ah, there it is".  I like the white level indicator on the EV scale also.  Oh, and the median markers for both the darkest and brightest channel.

My suggestions.

Move the DR and ETTR hints out of the data area to the top left.  A cleaner solution would be to have transparent backgrounds while in the top left, but the overlay may get lost under a number of circumstances.  Having said this, I don't have my camera here to check, but I can't recall a time where the overlays have been lost in the image.  Otherwise they could be made smaller, but I have 20/20 vision.

I'd like to see the white level printed.  If you decide to move the indicators mentioned above, I could see the value fitting there nicely.  Otherwise, meh, I guess it would just be in a similar vein to yamlmo, and satisfying nitpickers.

Quote from: heyjoe on November 02, 2017, 11:57:58 PM
Could you please explain how we would use those while shooting?

http://www.magiclantern.fm/forum/index.php?topic=8539.msg80044#msg80044

Quote from: heyjoe on November 02, 2017, 11:57:58 PM
That offset is what I was asking for previously. Makes things much more readable. If you can add it globally left and right it would be great.

That offset on the right hand side is data, it's not arbitrary.

Quote from: heyjoe on November 02, 2017, 11:57:58 PM
Another thing: having channel histograms on top of each other makes it a little difficult to say which channels are clipped and which not. Sometimes one may want to clip one channel but not another.

Look very closely at how the color changes in the different examples where clipping has occurred. 

http://www.magiclantern.fm/forum/index.php?topic=12096.0#post_Histogram
Quote from: Audionut on May 29, 2014, 05:41:00 PM
The colors in the histogram, represent the color channel in the camera (Red, Green, Blue).
You will also notice, that ML displays Cyan, Magenta and Yellow.  If you look at the color chart below, you can see that Yellow falls in between Green and Red, and hence, Yellow represents data in both the Green and Red channels.  Cyan being the data from Green and Blue channels, and Magenta being the data from Blue and Red channels.  White indicates data from all color channels.

Consider this example.


Let's use the zoomed histogram top right for simplicity.  You can see where the red channel is clipped (white), the blue channel is clipped (cyan) and the green channel is clipped (green).

Of course, white actually indicates that all of the channels have been clipped in that region, not just red.  But it's the data above the white area that allows you to determine that the red channel clipping has finished with the end of the white marking, because above this white area is cyan (blue and green channels).  And concurrently that blue channel clipping finishes with the cyan indication, because above this area is green (only the green channel is left).
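In other words, the overlay color is just the OR of the clipped channels' primary colors (a sketch of the rule, not ML's drawing code):

#include <stdint.h>

enum { CH_R = 1, CH_G = 2, CH_B = 4 };

/* 0xRRGGBB overlay color for a region where the given channels clip */
static uint32_t clip_color(int channels)
{
    uint32_t c = 0;
    if (channels & CH_R) c |= 0xFF0000;
    if (channels & CH_G) c |= 0x00FF00;
    if (channels & CH_B) c |= 0x0000FF;
    return c;   /* CH_R|CH_G = yellow, CH_G|CH_B = cyan, all = white */
}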

heyjoe

Quote from: Audionut on November 03, 2017, 03:55:15 AM
http://www.magiclantern.fm/forum/index.php?topic=8539.msg80044#msg80044
My question was actually: how do we use 0.1% and 1% or any % values while shooting, considering that we have exposure controls, not %-controls.

Quote from: Audionut on November 03, 2017, 03:55:15 AM
That offset on the right hand side is data, it's not arbitrary.
Of course. I am just asking for offsetting the 16383 and the 0 from the screen edges. A thin dotted line can indicate the 16383 and the 0.

Quote
Look very closely at how the color changes in the different examples where clipping has occurred. 
I know that. But while shooting (especially outdoors or in bright environment) pixel peeping to evaluate the color of a thin line on a small LCD is hardly the best usability. Evaluating shape is much more straightforward. Hence the suggestion (and the whole discussion about being able to evaluate histogram shape as a human, not just heuristically).

Don't you think my sketched suggestion would be much more readable and optimal for a small screen space? (Please bear in mind that if the image is underexposed the highlights histogram will be simply a flat line and the fact that the right edge of the CDF will be lower will not be a problem. And for the shadows vice versa. So it is an "automatic un-clutter".)