
Messages - heyjoe

Audionut, I was trying to collaborate by sharing things from experience with someone who was interested in turning that into ML functionality and it was going quite well for some time. Then you came in the thread with a series of unrealistic demands, innuendos and all the micro trolling which I was trying not to respond to.

I understand that it is difficult for you to accept that you may actually be talking to someone who can share useful suggestions and you would rather abuse him (with a "level of respect"), call his clarifications "that shit" or "big ego" etc. However that is contrary to the idea of communing and the aggression in your latest reply has gone too far.

Thank you a1ex. Great work. Forgive me if I have said anything inappropriate.
Quote from: Audionut on November 03, 2017, 03:29:02 PM
If you have 500 pixels that are below 10%, and you want those 500 pixels to be above 10%, how do you move those 500 pixels? 
While shooting I don't count pixels or make calculations based on counts. Personally I watch my light, composition, focus and ETTR. If I see visually (not numerically, as a pixel count) an area which is important and must not be overexposed, I correct exposure, obviously.

Hint:  You already have the controls necessary on your camera, and don't need a special % control.

My red for emphasis.
I have already read the post from the link. I definitely didn't read the next posts.
Thanks for the additional explanations. I am not sure if I will ever use that while shooting.

Quote
Well, 0 is a misnomer.  The black level in Canon's is very close to 2048.  This leaves headroom for proper evaluation of the noise sigma.  The option in rawdigger is subtract black.  This is on by default.  Try turning it off if you want accurate data from the CR2.
It is interesting that you are critical about me not reading thoroughly another thread yet it seems you have not read my posts in the current one (e.g. #8) :) But I appreciate the explanations.
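For reference, the black level in the quote is why raw values must have it subtracted before any EV math. A minimal sketch, assuming illustrative Canon-like levels (black ≈ 2048 as in the quote, white ≈ 15000 is a made-up value):

```python
import math

# How far a raw value sits below the clipping point, after black subtraction:
#   EV = log2((white - black) / (value - black))
# black=2048 matches the quote; white=15000 is an illustrative value only.
def ev_below_clipping(value, black=2048, white=15000):
    signal = max(value - black, 1)   # subtract black, avoid log of <= 0
    return math.log2((white - black) / signal)

print(ev_below_clipping(15000))  # 0.0 -> at the clipping point
print(ev_below_clipping(8524))   # 1.0 -> exactly one stop below
```

This also shows why leaving black subtraction off in RawDigger changes the numbers you read from a CR2.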

Quote
In practice Canon's only hit 16383 on fast lenses, when in fast mode.
Which doesn't make it an impossible value.

Quote
Why do you need some offset on the left hand side anyway.  Is the idea to waste valuable space?
Because the edge of the screen is the border between two physically different materials. When you shoot outdoors there are all kinds of reflections, and on the edges of plastic they can be even stronger and more obstructing. As someone who has optimized a lot of UI/UX I can say that it is not a good idea to place important info/graphics in an area with compromised visibility and rely on the viewer to stare harder.

Quote
At ISO 100 (best case scenario), the bottom three stops are useless anyway.
How is that calculated?

Do you really need to know what's going on there, apart from, those pixels in that area are unusable.

Quote
If you need to know how many pixels are in that area, use the % indicators.
Maybe I will have to get used to that approach.
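For what it's worth, such a % indicator boils down to counting raw pixels below a threshold. A minimal hypothetical sketch; the function name and the Canon-like levels (black = 2048, white = 15000) are illustrative, not the actual ML code:

```python
# Hypothetical sketch of a % indicator: the fraction of raw pixels that
# fall below a threshold expressed as a percentage of the usable range.
# Levels (black=2048, white=15000) are illustrative Canon-like values.
def percent_below(samples, black, white, pct):
    threshold = black + (white - black) * pct / 100.0
    below = sum(1 for s in samples if s < threshold)
    return 100.0 * below / len(samples)

# Made-up raw samples: two of the five sit below 10% of the range.
samples = [2100, 3000, 8000, 14000, 14900]
print(percent_below(samples, 2048, 15000, 10))  # -> 40.0
```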

Quote
Pixel peeping on a small LCD is not the best usability, period.  Maybe go see an eye doctor.
Or maybe kindly consider that one has suggested a UI optimization in order to avoid pixel peeping.

Quote
No, I don't.  I would wonder why the red channel is brighter than the green channel.  The red channel is around 50% as efficient as the green channel, and yet is at some arbitrary level above the green channel in your sketched suggestion.  With the current implementation as shown by a1ex, I can immediately see where each color channel is relative to the others.
Considering that the shape of channel histograms depends on the scene which is shot + the fact that all 4 channels show the same DR when testing, I wonder what you are talking about. I also wonder how this is related to the essence of the suggestion which is about placement and size of UI elements.

If the image is underexposed, then we don't need to worry which color channels are clipped.  Don't you think.
You are mixing my posts and constructing from them a new post with a totally different meaning which I never had in mind. So you are missing the point of what I said and trying to ridicule it based on a misunderstanding.

The idea of having separate channel histogram is related to highlights clipping, not to underexposure. And my earlier post explains that having clipped only one or two channels may be ok in certain situations.

I should probably add some emojis in there somewhere, but meh, you don't seem to place much effort in reading the links presented to you.  Tit for tat.  :D
I have read every link shared, even the ones which don't answer clearly a question which was asked.

Considering the jabs you like to make from time to time: If you consider my contribution to this thread worthless, disrespectful to the freedom of software or time wasting I am ready to shut up for good and still be thankful to the developers.
Quote from: Audionut on November 03, 2017, 03:55:15 AM
My question was actually: how do we use 0.1% and 1% or any % values while shooting, considering that we have exposure controls, not %-controls.

Quote from: Audionut on November 03, 2017, 03:55:15 AM
That offset on the right hand side is data, it's not arbitrary.
Of course. I am just asking for offsetting the 16383 and the 0 from the screen edges. A thin dotted line can indicate the 16383 and the 0.

Look very closely at how the color changes in the different examples where clipping has occurred. 
I know that. But while shooting (especially outdoors or in bright environment) pixel peeping to evaluate the color of a thin line on a small LCD is hardly the best usability. Evaluating shape is much more straightforward. Hence the suggestion (and the whole discussion about being able to evaluate histogram shape as a human, not just heuristically).

Don't you think my sketched suggestion would be much more readable and optimal for a small screen space? (Please bear in mind that if the image is underexposed the highlights histogram will be simply a flat line and the fact that the right edge of the CDF will be lower will not be a problem. And for the shadows vice versa. So it is an "automatic un-clutter".)
General Development / Re: Full-screen histogram WIP
November 02, 2017, 11:57:58 PM
It's getting better and better :)

Quote from: a1ex on November 02, 2017, 09:11:39 PM
Some examples with markers:
Could you please explain how we would use those while shooting?

BTW the white rectangles can be visually problematic: imagine shooting architecture - a building with lots of small white tiles or something else with repeating pattern.

Showing 12 stops below 16383. A small space appears on the right because the white level is usually a bit lower (about 13000 - 15000).
That offset is what I was asking for previously. Makes things much more readable. If you can add it globally left and right it would be great.

Another thing: having channel histograms on top of each other makes it a little difficult to say which channels are clipped and which are not. Sometimes one may want to clip one channel but not another. How about an option to have them separate (below each other)?

Also, considering there is so much space on the top left and bottom right of the CDF, it seems to me the display can be optimized by using a non-uniform X log scale, i.e. the bottom and top 2-3 EV stretched horizontally and the middle more compressed. Then the CDF can be stretched to the top (with a small edge offset) and a large zoom of the shadow and highlight histograms placed on the top left and bottom right. Quick sketch:

What do you think?
Thanks for explaining. Sounds good. Please let us know when there is a build to test.

BTW I wonder why the heuristic makes mistakes if it is based on that same info. Does it also use compressed CDF for calculations?
General Development / Re: Full-screen histogram WIP
November 01, 2017, 05:47:11 PM
Quote from: a1ex on November 01, 2017, 01:53:28 PM
False, see #86. With log scaling on X, the movement is always a translation (one notch on the grid = 1 EV).
Hm. You are right. I didn't think about the X-log.
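The translation property is easy to verify numerically. A small illustrative sketch (made-up values):

```python
import math

# Doubling the exposure multiplies every (black-subtracted) sample by 2;
# on a log2 X axis that is a constant shift of exactly 1 EV, so the whole
# histogram translates instead of stretching.
samples = [100.0, 400.0, 3200.0]               # made-up raw signals
before = [math.log2(s) for s in samples]
after = [math.log2(2.0 * s) for s in samples]  # +1 EV of exposure

shifts = [a - b for a, b in zip(after, before)]
print(shifts)  # every bin moves by 1.0, regardless of its level
```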

The current scaling already lets you evaluate the exposure with a resolution of 1/60 EV (if you count pixels), or ~0.1 EV (if you look at it). I don't see the need for further zooming on X.
Sounds great. Could you at least add a little spacing (few pixels) left and right of the graphics to avoid having vertical lines being placed right at the edge of the LCD?

Already done... 1 month ago ;)

The exposure hints from Auto ETTR are also CDF-based.
That's great! I guess I was confused because of that:
Quote from: a1ex on November 01, 2017, 12:50:56 AM
I'm not used to it either - I've read about it today ;)

BTW - please try cutting down the "splitting hairs".
Please don't look at it this way. The usability suggestions are based on the actual ETTR tests. BTW what is the function of the small vertical colored ticks?

Quote from: Audionut on November 01, 2017, 02:20:09 PM
Compression of the CDF graph does the exact opposite.  It expands the highlights and shadows.
Are you sure? The compressed versions show shorter flat areas for clipped shadows and highlights.
General Development / Re: Full-screen histogram WIP
November 01, 2017, 01:39:32 PM
Quote from: a1ex on November 01, 2017, 12:50:56 AM
Yes, but there are scenes where you have to sacrifice some highlights (think something with strong lights, or any scene where you may want to use Dual ISO). In this case, guess where you have to look - at shadows and midtones!
But if you have to overexpose highlights, then midtones become highlights, so you still evaluate highlights and control how much you overexpose. It is still highlights control, is it not?

Quote
Left and right translation is essential - you'll know how the histogram will look after adjusting the exposure. At a glance.
That's true. The problem is that we don't have a 30" LCD on the camera to see everything :) So I am just thinking about using the available space for the most critical things. BTW if CDF is used - the exposure movement becomes more complex, not just left/right.

Quote
I'm not used to it either - I've read about it today ;)

It does! Have you read this?
Yes, we also studied that at university. I just mean that from a usability viewpoint it may need special education for the end user :) Also, from which picture is it really simpler to evaluate clipping peaks:

Yes, the CDF shows a flat area at the end and for non-clipped ETTR one should aim for not having any flat area on the right but I suppose it needs testing in real conditions. So maybe the best thing to do is to have an option in the menu what to display: histogram or CDF.

Another thing which comes to mind: perhaps with CDF it will be difficult to evaluate shadows due to the log X axis and the linear nature of raw data which always has less values in shadows. Which again comes to screen space - 80% of the CDF (or histogram) displays midtone data which is not critical for exposure control. Won't it be better to have a smaller full CDF/histo and a zoom of the critical areas?
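As an illustration of how a CDF can drive an exposure hint (a hypothetical sketch only, not the actual ML implementation): find the level below which e.g. 99.9% of pixels fall, and report its EV distance to the white level — the remaining ETTR headroom.

```python
import math

# Hypothetical CDF-based ETTR hint: the EV headroom between the 99.9th
# percentile of the (black-subtracted) samples and the white level.
# Names and thresholds are illustrative assumptions.
def ettr_hint(samples, white, keep=99.9):
    ordered = sorted(samples)
    idx = min(int(len(ordered) * keep / 100.0), len(ordered) - 1)
    level = max(ordered[idx], 1)
    return math.log2(white / level)

# Made-up samples: the brightest pixel is 3 EV below white -> push +3 EV.
print(ettr_hint([500, 800, 1200, 1600, 1875], white=15000))  # -> 3.0
```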

Linear X is not trivial in the current codebase (the histogram is built with log assumption from the beginning); will leave it as an exercise after committing the code :)
I am sure you know how to make it :)

Quote from: a1ex on November 01, 2017, 01:40:04 AM
Markers on the bottom?
Or maybe a thin diagonal straight line?

Quote from: a1ex on November 01, 2017, 01:40:04 AM
The zoomed view also shares its X axis with the rest of the graph (that's why it's placed above).
Hm. But what is it zooming then? My initial idea was to have it stretched on both X and Y (damn.. I am repeating myself too much :) ) Another thing: having the vertical clipping line right at the edge of the screen makes it difficult to read. Maybe it needs some spacing. Just imagine shooting in a bright sunny day and having to read all that visual info.

Quote from: a1ex on November 01, 2017, 11:19:47 AM
Yeah, that's the difference between log and linear histograms. CDF is linear - maybe it's worth "compressing" it somehow?
I think that compressing CDF in a way which reduces the critical areas (highlights and shadows) is contrary to what we need. It is also contrary to the advantage of CDF to display outliers much better than a histogram.

BTW: How about a new heuristics based on CDF? :)
General Development / Re: Full-screen histogram WIP
November 01, 2017, 12:25:57 AM
Quote from: a1ex on October 31, 2017, 12:59:41 PM
May I have the second image for tests?
Unfortunately I can't send the raw file for which this histogram is shown, but I can try to shoot another one with slight clipping of highlights and shadows if that is what you need for testing. Please let me know. Or it may be faster to use one of the studio samples from dpreview or imaging-resource.com. This one seems to have slight clipping of shadows and highlights.

Quote from: Audionut on October 31, 2017, 08:53:17 PM
That's not exactly an accurate example as the display shows 17 stops.
Yes, because as the screenshot shows it is using the Auto setting for the EV scale on the X-axis. With 14-bit raw files we obviously can't have values above 2^14.

What is the usefulness of the shadow area being presented in the manner that rawdigger does?
Evaluation of shadow clipping based on the actual raw data. It is not necessary to have it identical (jagged). I just mean to have a "zoom" of the shadows too.

Personally I prefer the grey marking that ML currently displays.  I don't need to see the lack of bits being represented the way that rawdigger shows, only that there are simply not enough bits there, and that there is data being contained within.

I would like to see the current histogram shadow area being user adjustable along the lines of the zebra underexposure option.  So as a user I can decide, I don't want any data contained within the bottom three stops, and the grey overlay on the histogram will represent these bottom three stops, and I can then move the data in the histogram as needed with exposure.
Of course we don't want data in the lowest bits but sometimes it is not possible to avoid it. Example: in an outdoors shoot we can't control the fill light in the shadows of the far away trees like we can in a studio. So being able to evaluate that visually is a plus. Don't you think?

Quote from: a1ex on October 31, 2017, 10:47:46 PM
The linear zoom on the Y axis in the last stop makes sense - it's easier to judge the clipping status in tricky cases.
What cases? Generally log Y is better. If Y is linear it makes small clippings unnoticeable. Example (using this CR2):

Left - log X/ linear Y; Middle - log X/ log Y; Right - linear X / log Y:

However, linear scaling on the X axis does not - at least to me. Why?
Just look at the examples. It shows much better the peaks in clipping. Log X smooths it because logarithm naturally compresses the levels horizontally.
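The visibility argument can be put in numbers. A small sketch with made-up bin counts:

```python
import math

# A clipping spike of 300 pixels next to a midtone bin of 100000 pixels is
# ~0.3% of the plot height on a linear Y axis, but about half the height
# on a log Y axis -- which is why log Y makes small clippings noticeable.
mid, spike = 100_000, 300
linear_height = spike / mid                       # fraction of full scale
log_height = math.log10(spike) / math.log10(mid)  # fraction on log Y axis

print(round(linear_height, 4))  # -> 0.003
print(round(log_height, 2))     # -> 0.5
```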

With logarithmic scaling on X:
- you can always read the offset in EV
- when exposure changes, the entire histogram translates horizontally.
But the whole idea of having bigger histogram of highlights is the visual (human intelligence) evaluation of the shape of the highlights. While shooting we are not really interested in what is in the middle of the histogram and how it translates left or right. We want to maximize the use of the sensor where we have most levels (ETTR), right?

<CDF, smoothing etc..>
I agree that the raw without smoothing creates cluttering, so it looks better with spreading + smoothing (as long as that doesn't affect accuracy).
CDF... I don't know. Personally I am not used to it (perhaps most people aren't either). It looks like a lot of space is wasted just to display a smooth S-like line which doesn't really say what adjustment of exposure might be needed for ETTR. Perhaps it may be useful for shadows only.

Is it possible to see an example using the CR2 from above showing (similar to my ugly sketch):
Left 1/2 of the graphic: LogX/LogY of shadows (with smoothing)
Right 1/2 of the graphic: LinearX/LogY of highlights (the top 1EV, no smoothing)

I am really curious if this will work.
General Development / Re: Full-screen histogram WIP
October 31, 2017, 12:32:22 PM
BTW thinking further: for visual evaluation of histogram clipping in highlights it may be good to have (an option for) linear horizontal scale (like RawDigger) because with logarithmic it may be difficult to evaluate a hard peak vs smooth (non clipped) on the LCD. Example to illustrate what I mean:

Log horizontal and vertical (like ML currently) shows a fairly smooth shape which may be mistaken for non-clipped:

But with linear horizontal see how well the clipping of green shows:

And if that is combined with a display of only the top 1EV I think there is simply no way to make a mistake (vertical grid lines spaced at 1/3EV may be helpful here in order to know how much to change exposure controls):

For evaluation of shadows it may be necessary to display a log/log histogram of several EV. This image is not a good example as it doesn't have many shadows, but here is another example of an image which has both highlights and shadows clipping:

So I suppose ideally on the LCD we would need only the last 2 graphics combined in one, e.g. something like this (rough ugly sketch, text is just for descriptive purposes):

Perhaps the shadows can be smaller, so that there is more room for highlights histogram, for example 25% width for shadows 75% for highlights or something like that.
General Development / Re: Full-screen histogram WIP
October 31, 2017, 09:48:50 AM
Quote from: a1ex on October 30, 2017, 10:26:45 PM
Something like this?
Yes! :)
I am really happy to see how things are improving. Great work. I also like the DR indication.

QuoteA bit cluttered for my taste after enabling the zebras, but I'm not sure what to change.
Perhaps add options in the menu, so the user can choose which elements to display to their own taste. FWIW colored zebras don't work really well while shooting. Example: shoot something green (leaves) and a green zebra on top of it is very difficult to see. Pretty much the same happens with the other channels, so maybe another form of zebra display could be good too, e.g. inverted colors: magenta for G, green for R, yellow for B. Or maybe blinking.

Quoteedit: the top bar somehow vanished, but to me it looks cleaner without it; what about making the EV grid lines smaller as well?
Yes, looks better.

Can you also make an option to display only the top and/or bottom 1EV of the histogram but zoomed at full screen? While shooting we don't actually need full statistical analysis of the image (full histogram). What we need is to check exposure at the extreme highlights and shadows.

Do you think it would be possible (worth it) to have (an option) for separate display of the two G channels as histograms? Sometimes they are not the same and clip differently so I've been asking myself what the current G channel histogram actually displays.
Quote from: Audionut on October 28, 2017, 07:10:38 PM
That's the entire point.  No one wants to do all of that, not just you, for the reasons you have outlined, but because, it's splitting hairs.  Who in their right mind would devote so much time and effort for 1/3EV.  Oh, and then someone has to actually maintain that code (for the life of the project, until someone else gets the shits and removes it, or a better solution presents itself).

In an ideal world with unicorns and fairies, it would be wonderful to also have an extremely accurate histogram.  In the real world, we have to accept limitations.  Just saying  :D
Yes, I understand what you are saying and that's why my first suggestion is based on the fact that the code which creates the histogram is already available. So all that may be needed is an option to draw it bigger. Then if the top 1EV of raw channel histograms can be zoomed so that one can see visually the clipping, we can rely on human intelligence, not only on heuristics (which may be limited in particular situations).
Quote from: a1ex on October 28, 2017, 03:58:28 PM
If it's reproducible, please upload some samples. The CR2 that caused the issue should be enough.
As I explained I cannot provide CR2 files because it was a commercial shoot and there are legal agreements involved. Sorry. But I am sure you can reproduce it if you try. Most of the time I was shooting at ISO 160 f8.0 and the main variable was exposure time (through which I controlled ETTR). Of course I made several exposures (+ and -) to make sure I have non-clipped files too.

The accuracy issue comes from the clipping point not being known (it's variable).
Yes, I thought about that and I remember you mentioned it.

Canon code has a heuristic, but it's pretty conservative; their metadata can be 0.38 EV below the true clipping level (only from testing a few settings; if you brute-force the entire parameter space, you might find even more pathological cases).
I have actually been thinking about making something like this - creating a map of the entire parameter space in order to find the maximum values for all possible combinations of ISO, speed, F-stop. But I am afraid that
1) I may burn my sensor through so many overexposure tests
2) I have no lens wider than 2.8
3) I am not sure if this is lens dependent, i.e. if for example 70-200/2.8 at 70mm will give the same saturation values as 24-70/2.8 at 70mm or if focal length plays a part in all that
4) it may take a long time which I don't have
5) it may turn out to be pointless
6) heuristic based on shape of the highlights may work better (what you currently do afaik)

Didn't you mention that you have found a register which contains the saturation value? Doesn't that help?

We have two cases:

- if the maximum value from your image is below Canon's heuristic, we can assume the image is not overexposed (I hope so)
- if it's above Canon's heuristic, it might be overexposed, or it might be not; we don't know for sure. In this case, I use the histogram heuristic to decide whether the image is clipped or not (whether there's a peak on the right side of the histogram). This is not 100% accurate, and I'm looking for counterexamples where my heuristic gives the wrong results.
Hm. What about looping programmatically generated histograms through your algorithm and seeing if any particular shape gives a wrong result? Would that help to improve the heuristic?
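For illustration, here is a toy version of such a "peak on the right side" heuristic — a hedged sketch only, not the actual ML code; the threshold ratio is arbitrary:

```python
# Toy clipping heuristic: flag the image as clipped when the last
# histogram bin is much taller than its immediate neighbours.
# NOTE: illustrative only -- not the ML heuristic; ratio=4.0 is arbitrary.
def looks_clipped(hist, ratio=4.0):
    last, prev = hist[-1], max(hist[-4:-1])
    return prev > 0 and last > ratio * prev

print(looks_clipped([50, 40, 30, 20, 500]))  # spike at the end -> True
print(looks_clipped([50, 40, 30, 20, 10]))   # smooth roll-off  -> False
```

Synthetic histograms like these could then be fed to the real heuristic to hunt for shapes that produce wrong answers.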

The second problem - given an image that's not underexposed, how far you can push it to the right?

This one is harder because... if the image is not underexposed, you don't know the clipping point. We have Canon's heuristic, which is a lower bound (off by some 0.1 ... 0.4 stops, depending on exposure settings). So I just use that level as reference (unless the max value is already above that level).

Of course, you could take test images at any possible ISO x any possible shutter speed, write down the white level, check repeatability (different bodies of the same model might give different clipping points, they might change with temperature and so on), repeat for all other camera models supported by ML. Or you could imagine some sort of learning algorithm that uses the results from past autodetections (on overexposed images) to predict the clipping point on non-overexposed ones.
Which adds 7), 8)... etc., and the whole thing needs many man-hours and many camera bodies to test completely. That's what I meant: testing as if for a company is not possible due to limited resources.

Note: the white level also varies with the aperture, but these variations are well understood (it's a digital gain that can be canceled). This simplifies the problem by "only" one order of magnitude.
Yes, you mentioned that. But if you start to account for that as a well known parameter - would that improve the situation?

The current approach doesn't learn anything - it attempts to figure out everything from the current image (from scratch).
I understand. Hm. How about a different approach:

Can you make (a menu option for) a larger histogram, e.g. a full-screen zoom of the highlight/shadow areas? Then one would be able to see better for oneself what is going on and hopefully read the results of the heuristic using human intelligence instead of relying only on clip warnings and one number (which may not be as accurate as we see). Currently the histogram is really microscopic and cannot be used for any significant evaluation, but if it could be, that may be helpful I think. This is similar to the UniWB method, but considering that you can show the actual raw histograms, it will be far more accurate. To avoid obstructing the image too much it may draw just the contour of the histogram as an overlay (similar to RawTherapee). What do you think:

Another thing that comes to mind: is it possible to instruct the camera not to change the saturation value when exposure parameters change, i.e. to use a fixed maximum for any set of parameters and then things will be very simple?
Quote from: Audionut on October 28, 2017, 02:04:48 PM
I think we need to remember that WL calculation is being done on a tiny little ARM processor, and needs to be in real time.
Of course. But it is not real time in the sense that it happens at the time of exposure - it happens right after capture, right? Or are you saying that better accuracy would need some additional computation power which would make the displaying of clipping and histograms much slower?

I think we need to accept here that E0.2 > E0.0 = clipped.  Maybe only slightly, or maybe not at all.
The problem is that it actually indicates underexposure (incomplete ETTR) for an actually clipped image, i.e. one may think that everything is fine when there is actually data loss (clipping) occurring.

Reproducible?  I hope to have some time in the coming week to actually use my DSLR.
Should be. I have seen it at least 50 times.

This one has been around since forever.  Don't forget that electronic aperture is not always consistent.
Again, I think we need to allow E0.0>E0.2 tolerance.  Either settle for very slightly underexposed, or bump shutter 1/3EV and settle for possible very slight overexposure.
Or find someone with coding talent and desire to increase accuracy.  We're really splitting hairs though, since the current implementation is decades better than the JPG based histo that Canon provides.

For 1 and 3, when you have time to pixel peep the histogram, you have time to shoot two images 1/3EV apart, to cover all bases, and get that warm fuzzy feeling inside knowing that you shot ETTR as close as possible.  It's what I do

Thanks for your time, the second bug needs further investigation.
Considering all you say and what I asked earlier (about the possibility of having raw histograms in Play mode, not only in QR): What do you think about using libraw to decode the CR2 file? RawDigger is based on libraw so I was just wondering if that could make the whole thing easier to code (and hopefully more accurate).
Tested in the last 3 days in a real shoot (can't provide raw files, sorry):

- Some shots for which ML shows E0.1 are slightly clipped (when viewed in RawDigger afterwards)
- During shooting for some shots ML (QR) shows both not full ETTR and clipping, e.g. E0.3 and at the same time overexposure indication (dots in histogram). The strangest case was E0.9 with OE dots.
- Another case while shooting: I see E0.4 and I increase exposure time with 1/3EV (e.g. from 1/160 to 1/125). Then I take the same picture again (same light and composition, nothing changed) and I get overexposure indication. Expected: E0.1
Tested as promised:


ISO 160, 0.8s

ISO 160, 1s

ISO 160, 1.3s

ISO 100, 1.6s

ISO 100, 2s

ISO 100, 2.5s

ISO 6400, 30s, f/8.0

ISO 6400, 30s, f/7.1

ISO 6400, 30s, f/9.0

ISO 6400, 30s, f/10.0
Thanks for the clarifications. I just needed to know when the changes are compiled and ready for testing, as I still don't know how this whole system of development works.

Unfortunately I cannot test "as if doing QC test for a company" because:

- companies testing all possible scenarios provide the necessary equipment for it (which I don't have)
- people at companies who test have a lot of fully dedicated time for it (which I don't have)

So it would be quite silly to pretend that I am doing something which I am not. What I can do is to repeat the tests done so far (for consistency). Perhaps I won't be able to set up a scene for testing ISOs above 6400 with exposures longer than 30" as I normally work with sufficient light (e.g. strobes or day light) and such low light scenario would be quite out of my range.

I will test the latest crop_rec_4k build and write again.
Quote from: Audionut on October 09, 2017, 01:31:42 AM
Test the latest code changes.  Does the latest fix actually work?
Where is it? Has it been published? What exactly is new to test?

Did it break something? 
I definitely prefer not to test for the purpose of answering this particular question. A non booting camera is the last thing anyone needs.

Test as if you are doing QC for a company.
I have been doing it so far. Not sure what you are implying.
Quote from: a1ex on October 07, 2017, 11:20:15 AM
Will push a fix shortly.
Good. Please let us know if we need to test anything again.
Feature Requests / Automatic UniWB
October 05, 2017, 12:09:19 PM
ML says WB is set to UniWB when I set it to custom WB which is indeed UniWB achieved through Guillermo's method:

dcraw -v -w -q 3 -T -4 _MG_5751.CR2
Loading Canon EOS 5D Mark III image from _MG_5751.CR2 ...
Scaling with darkness 2046, saturation 15488, and
multipliers 1.001967 1.006883 1.000000 1.006883

From that a series of questions arose:

1. How does ML know this is UniWB and not just any custom WB?
2. Does it detect that it is close to perfect UniWB (as in my case the error is under 1%)?
3. Does ML know what is the perfect UniWB for a given camera body (libraw's tools seem to know as FastRawViewer has a setting for it)
4. If the answer to 3 is "Yes, it is possible to read that" - the next logical question is:
5. Is it possible to make a function which automatically sets the accurate UniWB for the particular camera body with 0% error?
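Regarding question 2: the error relative to perfect UniWB can be computed directly from the multipliers in the dcraw output above. A minimal sketch:

```python
# Largest deviation of the WB multipliers from 1.0, as a percentage.
# The multipliers are the ones dcraw printed for _MG_5751.CR2 above.
def uniwb_error_pct(multipliers):
    return max(abs(m - 1.0) for m in multipliers) * 100.0

mults = [1.001967, 1.006883, 1.000000, 1.006883]
print(round(uniwb_error_pct(mults), 4))  # -> 0.6883, i.e. under 1%
```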
Quote from: a1ex on October 04, 2017, 05:55:09 PM
Reason: other sensors might clip to white in different ways, and the current heuristic makes a tight assumption: that the clipping point is harsh and spans on one or two levels (not more). I'm not sure whether this holds true on other camera models.

I don't know if this may help, but here is a video about RD which mentions highlight histogram shapes (scroll to 2:40). The speaker says a "bell" shape is typical for Canon and also talks about the changing of the saturation value at different exposure parameters (which you mentioned in an earlier reply).
Quote from: a1ex on October 04, 2017, 01:06:27 AM

Obviously some misunderstanding. I thought you were talking about ML's docs. Never mind. I am quite happy with what this thread has led to. Hopefully after all the testing this code is ready to go into the regular versions :)

Don't be afraid to read the docs.
I am not and that's why I asked for a link. However all I find is the user guide and the Lua api doc which I suppose is not what you mean?
Quote from: a1ex on October 03, 2017, 10:55:41 PM
ISO 100 1.6" is the only one obviously different - to me - from RawDigger. Taking a closer look:
What does this code do? Looks like some statistical analysis but I can't grasp the details.

I see that ML shows significant overexposure G=29% for ISO 160 1" and G=58% for ISO 160 1.3" but in RD green is not overexposed. What is the reason for this difference? 29% and 58% is quite big. For ISO 100 the values are with not more than 1% difference from RD, so just like in the older test ISO 100 seems to work more accurately.

This is a very easy coding task, does not require any camera programming, just reading existing docs and source codes around the net (and time to debug it).
Can you provide links to the necessary docs? I am curious to see if I have completely forgotten to code in C (it's been many years) :)
CR2 files

ISO 160, 0.8s

ISO 160, 1s

ISO 160, 1.3s

ISO 100, 1.6s

ISO 100, 2s

ISO 100, 2.5s

The result in QR is almost perfect! Great work. The difference seems biggest in the G channel in ISO 160 shots.

There is some discrepancy in LiveView, perhaps due to different calculation.

Do you think you can make the raw ETTR warnings to work in Play mode?
Thanks! I will test a little later and post results.