How to view RAW histograms after taking the image?

Started by heyjoe, September 23, 2017, 04:56:25 PM


heyjoe

Hi,

My question is about photo, not video.

I recently tried ML for the first time and I see it displays RAW histograms in live view. Unfortunately this doesn't help me to ETTR with studio strobes.

Is it possible to view raw histogram and or otherwise check for raw channel over/under exposure *after* taking the shot?

The camera is 5D3 in case that matters.

Walter Schulz

Overlay tab:
Global Draw ON, all modes
Histogram RAW RGB, Log/Linear -> RAW EV indicator ETTR hint

RAW histogram visible in Image Review only. Not via Play button.


heyjoe

Is there anything else I need to set? It seems there is something wrong with these exposure indicators.

I see overexposure indication for image with max channel value 9200 (in RawDigger) but that is definitely far below saturation limit (which is about 11400 for 5D3). Why does ML show overexposure for image which is actually underexposed (not ETTR)?

It also seems these are not really raw because when I change the WB they change too.

I hope someone can clarify.

heyjoe

Here is an example which shows the difference between ML and RawDigger.

The image is correctly exposed to the right without clipping but ML shows that it is overexposed.




a1ex



On this image, ML assumes the clipping point at 9960 for ISO 160 and at 13200 for ISO 100. It doesn't know the true clipping point, which is not the same across camera models and is affected by many factors, including exposure time and aperture (!), so it's trying to guess it from the brightest pixels in the image - raw.c:autodetect_white_level().

There are useful hints in this thread: http://www.magiclantern.fm/forum/index.php?topic=10111.0.
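
For context, the clipping warnings and the raw EV / ETTR hint are only as good as this assumed clipping point: the hint is essentially the distance, in stops, between the brightest pixel and the white level. A rough sketch of that computation (not the actual ML code; 2048 is the usual 5D3 black level, 15283 is roughly the 5D3 white level at full-stop ISOs, and the brightest-pixel value in main() is just illustrative):

#include <math.h>
#include <stdio.h>

/* Rough sketch: how far the exposure could be pushed before the brightest
 * pixel reaches the assumed clipping point. Not the actual ML code. */
static double ettr_hint(int max_pixel, int white_level, int black_level)
{
    return log2((double)(white_level - black_level))
         - log2((double)(max_pixel   - black_level));
}

int main(void)
{
    /* 2048: usual 5D3 black level; 15283: ~5D3 white at full-stop ISOs;
     * 12000: illustrative "brightest pixel" value */
    printf("room to push exposure: %.2f EV\n", ettr_hint(12000, 15283, 2048));
    return 0;
}

So if the assumed white level is lower than the true one, the hint (and the zebras) will report clipping earlier than it actually happens.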

heyjoe

Thanks for looking at the file. BTW how were you able to view the histogram and overexposure indication? Walter previously said this is possible only during image review, i.e. only after taking the shot.

Quote from: a1ex on September 24, 2017, 12:52:05 AM
On this image, ML assumes the clipping point at 9960 for ISO 160 and at 13200 for ISO 100.
Where do these values come from?

Quote
It doesn't know the true clipping point, which is not the same across camera models
I downloaded and installed ML 5D Mark III 123. How come a version specific for that model does not know the clipping points for it (considering also the factors you mention)?

Quote
and is affected by many factors, including exposure time and aperture (!), so it's trying to guess it from the brightest pixels in the image - raw.c:autodetect_white_level().
My test confirms that saturation values depend on these factors too but the values for ISO 160 are about 11400 in RawDigger which is quite far from 9960. Why is there such a huge difference?

My goal is to correctly ETTR, using the best of what the sensor is capable of, that's why I need correct feedback while shooting. Using this method I have found that ISO multiples of 160 give slightly better DR compared to "native" ones. I also read your article but I can't find a way to set those "best ISOs" which you recommend. How can I do it? (And is it applicable for photo at all?)

The article also says:
Quote
Dial ISOs from ML menu/keys. ISO 160 from ML is better than ISO 160 from Canon controls.
but my test shows that there is no difference. Could you please explain?

Unfortunately using the standard Canon histograms (yes, with UniWB) can't give me accurate feedback - I always find a discrepancy between what they show and what RawDigger shows, so this results either in underexposure or overexposure. So I turned to ML hoping to be able to expose correctly to the right. But now when I see that the raw zebras are not really raw (they change when I change the WB setting) and that difference in saturation values which you explain - I wonder what to do.

What would you recommend to someone who needs correct raw ETTR for photo using the best of what the sensor is capable of?

a1ex

Quote from: heyjoe on September 24, 2017, 03:54:19 PM
BTW how were you able to view the histogram and overexposure indication?

Very easy - I've emulated the image capture process in QEMU and used your CR2 as input data.

Quote
Walter previously said this is possible only during image review, i.e. only after taking the shot.

That's exactly what I did - on the virtual camera.

Quote
Where do these values come from?

From dbg_printf's from raw.c.

Quote
I downloaded and installed ML 5D Mark III 123. How come a version specific for that model does not know the clipping points for it (considering also the factors you mention)?

The findings from the ISO research thread are not yet included in ML, sorry about that. The code is generic; I was hoping to cover all ~15 (soon ~20) camera models with the autodetection.

In theory, the autodetected white level should be exact iff there are overexposed pixels in the image, or underestimated by about 0.3 stops in the worst case, if there is no overexposure. In tricky cases like this, I would have expected the white level to be assumed close to the brightest pixel, therefore the overlays showing very little overexposure (if any). That didn't happen, so I may need to re-think the white level heuristic.

Currently I don't know why the clipping point decreases at longer exposures (back then, when the white level was hardcoded, there was a bug about zebras no longer showing overexposure on long exposures - IIRC on 5D3). For aperture, I know how to find the digital gain and do the math, and I also know how to override it (see iso-regs.mo from the ISO research thread), but it's not implemented in the mainline.

Quote
My test confirms that saturation values depend on these factors too but the values for ISO 160 are about 11400 in RawDigger which is quite far from 9960. Why is there such a huge difference?

Probably the autodetection did not work well on this test image - will take a second look (just can't promise when, as this is a hobby project done on nights and weekends).

The autodetection looks for some confirmation (it doesn't simply take the brightest pixel, as that one is likely a hot pixel and these can have values above the regular clipping point, at least on some models; don't remember if 5D3 is affected).

So yeah - it's not perfect, manpower is an issue and contributions are welcome.

Quote
I have found that ISO multiples of 160 give slightly better DR compared to "native" ones.
On recent models, they do - about 0.1 stops according to my measurements.

On older models (5D2 generation), they don't - they are the same.

http://www.magiclantern.fm/forum/index.php?topic=9867.0

Quote
Dial ISOs from ML menu/keys. ISO 160 from ML is better than ISO 160 from Canon controls.

That's ancient stuff, long before we even had access to raw data in ML. I planned to write a newer version, based on the findings from the CMOS/ADTG thread, just never managed to complete it.

That means most of that stuff actually applies to how Canon renders the raw data (i.e. picture styles). Back then, I noticed a nicer highlight rolloff when overriding the digital gain from ML, rather than choosing an intermediate ISO from the menu. Now I know there are a lot more parameters that change with ISO, and I've only understood a small subset of them.

When overriding digital ISO gain from ML, the other parameters will be inherited from the ISO selected in Canon menu. The same happens with dual ISO, and that's the reason ISO 100/1600 gives different results than ISO 1600/100.

Quote
but I can't find a way to set those "best ISOs" which you recommend. How can I do it? (And is it applicable for photo at all?)

Movie -> Image fine-tuning. Not applicable to photos.

BTW - are you interested in experimenting with the ISO research tools and hopefully reviving that thread? Most of that stuff applies to photos, and there is a significant DR improvement that can be achieved. You may start with the raw_diag and iso-regs modules, and maybe cross-check the results with other software.

heyjoe

Thanks for the detailed explanations. Sorry to repeat my main question, but this is the actual thing I am looking for; I hope you can share some thoughts: :)

Quote from: heyjoe on September 24, 2017, 03:54:19 PM
What would you recommend to someone who needs correct raw ETTR for photo using the best of what the sensor is capable of?

Running ML in QEMU sounds really interesting. I am using KVM/QEMU to run guest OSes on my Linux host, so I am curious to learn how to run ML that way too. Is the thread you linked to the thing I need to read? Or is there some sort of step-by-step procedure?

Quote from: a1ex on September 24, 2017, 09:29:55 PM
Very easy - I've emulated the image capture process in QEMU and used your CR2 as input data.

That's exactly what I did - on the virtual camera.
It would be nice to be able to do it when pressing the Play button of the camera. Is that possible to implement? Also, is there a way to have bigger histograms? These look super small. I can't really imagine how I would get visual feedback if I shoot outdoors on a bright sunny day. Maybe we need a beep to tell us "It is overexposed"? :)

Quote
From dbg_printf's from raw.c.
I was rather wondering how they were determined before being put in your code, i.e. what physics they are based on and why they differ so much from the actual values.

Quote
The findings from the ISO research thread are not yet included in ML, sorry about that. The code is generic; I was hoping to cover all ~15 (soon ~20) camera models with the autodetection.
...
So yeah - it's not perfect, manpower is an issue and contributions are welcome.
I understand it is a work in progress. Of course I would be glad to help with what I can. However, I am not a programmer per se (I only do some coding for simple scripts and the web). Is it really difficult to put in the right saturation values and recompile? Or does it need more research in order to get them really accurate?

Quote
On recent models, they do - about 0.1 stops according to my measurements.

On older models (5D2 generation), they don't - they are the same.

http://www.magiclantern.fm/forum/index.php?topic=9867.0
Yes, I saw this thread which took me to the wikia article. My findings are in this spreadsheet. It seems the difference is smaller than 0.1 stops.

Quote
Movie -> Image fine-tuning. Not applicable to photos.
Is it possible to make it available for photos too?

Quote
BTW - are you interested in experimenting with the ISO research tools and hopefully reviving that thread? Most of that stuff applies to photos, and there is a significant DR improvement that can be achieved. You may start with the raw_diag and iso-regs modules, and maybe cross-check the results with other software.
You mean the thread about ISO 160 multiples, or something else? Please clarify what your idea is. I have been experimenting with RawDigger for the last few days, trying to answer my main question. If you think any test would help you make ML do what I want - of course I would be glad to help.


BTW is it possible to program the sensor to capture light logarithmically and record it to raw data, instead of linearly? I mean not the log-encoded files which some cameras output after converting raw data, but to create/program a real logarithmic sensor. That would be a revolution. I have been searching for info about logarithmic sensors, but the only ones I found were in some CCTV cameras which have very high noise.

----
ETA: Why am I not getting any email notifications about updates on the thread? I have the following settings:

Turn notification on when you post or reply to a topic. = On (Instantly, for the 1st reply for replies and moderation).

a1ex

Sorry I didn't include more links - my display is defective, so I am/was using a smartphone to mirror the main desktop (so I can type quickly, but reading or looking up links is slow).

First post from QEMU thread gives you a README.

CMOS/ADTG aka ISO research: http://www.magiclantern.fm/forum/index.php?topic=10111

Hardcoding clipping point: I know how to find it for short exposure times, but I don't know how it changes with long exposures. I only know it gets lower, from past bug reports, but didn't do a controlled experiment to find out by how much.

Quote
I was rather wondering how they were determined before being put in your code, i.e. what physics they are based on and why they differ so much from the actual values.

The white level detection is also explained in the iso160 thread. It starts from a low guess, then scans the image for brighter pixels, then backs off a bit - all this to make sure it shows the clipping warnings no matter what the correct white level might be. Your example case is either a bug or an edge case (didn't look yet).

In other words, I was just trying to come up with something that works on all the supported ML cameras.
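
For illustration, the idea is roughly this (a sketch only, not the actual raw.c code - the real version also needs some confirmation so that a lone hot pixel doesn't win, as mentioned earlier):

#include <stdio.h>

/* Sketch of the white level autodetection idea: start from a low guess,
 * raise it whenever brighter pixels are found, then back off a little so
 * the clipping warnings still show up even if the estimate is off.
 * The two constants are illustrative, not the values used in raw.c. */
#define INITIAL_GUESS   10000
#define SAFETY_MARGIN     100

static int autodetect_white_sketch(const unsigned short *raw, int n)
{
    int white = INITIAL_GUESS;
    for (int i = 0; i < n; i++)
        if (raw[i] > white)
            white = raw[i];         /* brighter pixel found - raise the estimate */
    return white - SAFETY_MARGIN;   /* err on the side of showing warnings */
}

int main(void)
{
    unsigned short sample[] = { 2048, 8000, 12950, 13100, 13102 };
    int n = sizeof sample / sizeof sample[0];
    printf("estimated white level: %d\n", autodetect_white_sketch(sample, n));
    return 0;
}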

Quote
Is it possible to make it available for photos too?

Sure, but the digital gain will get burned into the CR2 (so you'll be losing levels without gaining additional highlight detail in the raw file). You will gain more highlight detail in the JPEG preview, but I don't see that as a good enough reason to implement and maintain this "feature". Rather, the CMOS/ADTG tricks do actually capture additional highlights (effectively increasing DR), and that's currently available in the iso_regs module for 5D3 (just not very user-friendly, but at least you can tweak all these sensor gains from the menu).

Quote
It would be nice to be able to do it when pressing the Play button of the camera. Is that possible to implement?

https://www.magiclantern.fm/forum/index.php?topic=8309.0 (old answer)
https://www.magiclantern.fm/forum/index.php?topic=15088.msg186141#msg186141 (updated)

Quote
BTW is it possible to program the sensor to capture light logarithmically and record it to raw data, instead of linearly? I mean not the log-encoded files which some cameras output after converting raw data, but to create/program a real logarithmic sensor.

The closest approximation I can come up with, besides dual ISO, would be this:
https://www.magiclantern.fm/forum/index.php?topic=19315.0

Alternating short/long exposures is doable, but not trivial. There are routines for doing arithmetic on raw buffers on Canon's image processor - documented here:
https://www.magiclantern.fm/forum/index.php?topic=13408.

If you can understand the sensor internals, you can probably change all its registers from adtg_gui. However, besides tweaking some gains at various amplifier stages, and overriding LiveView resolution to get 3K/4K/fullres LV, I wasn't able to find anything useful for controlling the clipping point beyond what's already documented in the ISO research thread. I'm not saying there isn't - I'm just saying I'm not familiar with sensor electronics, so maybe I don't know where to look.

There are some CMOS registers that appear to adjust the black sun protection, but didn't look much into them.

So, feel free to grab adtg_gui and understand/document what some of these registers do.

Quote
My findings are in this spreadsheet.

It is my understanding that log2(max/stdev) is only a rough approximation, especially at high ISOs. Rather, I prefer to read it from the SNR curve - detailed answer: https://www.magiclantern.fm/forum/index.php?topic=13787.0

Also, how are you computing the noise stdev? There are many choices: from OB areas (not reliable, just an extremely rough approximation), from one dark frame (includes both fixed and random components), from the difference of two dark frames (you'll get the random noise component * sqrt(2), assuming it's Gaussian), or from the difference of two regular images (so you can estimate the noise at various signal levels - enough information for plotting the SNR curve).
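
For illustration, the dark-frame-pair variant boils down to something like this (a sketch on plain arrays, not the raw_diag code; the frames below are tiny made-up samples):

#include <math.h>
#include <stdio.h>

/* Sketch: estimate the random read noise from two dark frames. The fixed
 * pattern cancels in the difference; what remains is the random component
 * times sqrt(2), assuming it's Gaussian. Not the raw_diag code. */
static double read_noise_from_dark_pair(const unsigned short *a,
                                        const unsigned short *b, int n)
{
    double mean = 0, var = 0;
    for (int i = 0; i < n; i++) mean += (double)a[i] - b[i];
    mean /= n;
    for (int i = 0; i < n; i++) {
        double d = ((double)a[i] - b[i]) - mean;
        var += d * d;
    }
    return sqrt(var / (n - 1)) / sqrt(2.0);   /* stdev of one frame's random noise */
}

int main(void)
{
    /* made-up dark frame values around a 2048 black level */
    unsigned short dark1[] = { 2049, 2046, 2050, 2047, 2048, 2051 };
    unsigned short dark2[] = { 2047, 2048, 2049, 2046, 2050, 2048 };
    double sigma = read_noise_from_dark_pair(dark1, dark2, 6);
    /* DR in the log2(max/stdev) sense discussed above, with 15283/2048 as
     * illustrative white/black levels - toy numbers, not a real measurement */
    printf("read noise: %.2f DN, DR ~ %.2f EV\n",
           sigma, log2((15283.0 - 2048.0) / sigma));
    return 0;
}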

For DR measurements, I have some confidence in raw_diag's 2-frame SNR analysis (a method inspired by Roger Clark's); however, white level is still detected using heuristics (which may fail if the clipping is not harsh). For FWC and read noise, I have a feeling finding them from the SNR curve may be an ill-conditioned problem.

heyjoe

Quote from: a1ex on September 25, 2017, 07:35:40 AM
Sorry I didn't include more links - my display is defective, so I am/was using a smartphone to mirror the main desktop (so I can type quickly, but reading or looking up links is slow).

First post from QEMU thread gives you a README.
Thanks. I will look into it.

Quote
CMOS/ADTG aka ISO research: http://www.magiclantern.fm/forum/index.php?topic=10111
I already read this. My test confirms a slight improvement in DR after installing ML. Here are some calculated values and graphs showing the difference (diff = value_with_ML - value_before_ML):



Quote
Hardcoding clipping point: I know how to find it for short exposure times, but I don't know how it changes with long exposures. I only know it gets lower, from past bug reports, but didn't do a controlled experiment to find out by how much.

The white level detection is also explained in the iso160 thread. It starts from a low guess, then scans the image for brighter pixels, then backs off a bit - all this to make sure it shows the clipping warnings no matter what the correct white level might be. Your example case is either a bug or an edge case (didn't look yet).

In other words, I was just trying to come up with something that works on all the supported ML cameras.
What do you need to make it work accurately? I would be glad to provide test data if you don't have the time for it. Just let me know what I have to do.

Quote
Sure, but the digital gain will get burned into the CR2 (so you'll be losing levels without gaining additional highlight detail in the raw file). You will gain more highlight detail in the JPEG preview, but I don't see that as a good enough reason to implement and maintain this "feature". Rather, the CMOS/ADTG tricks do actually capture additional highlights (effectively increasing DR), and that's currently available in the iso_regs module for 5D3 (just not very user-friendly, but at least you can tweak all these sensor gains from the menu).
I am curious to try how it would work on photos but I don't find any module called iso_regs in the ML menu?

Quote
https://www.magiclantern.fm/forum/index.php?topic=8309.0 (old answer)
https://www.magiclantern.fm/forum/index.php?topic=15088.msg186141#msg186141 (updated)
Thanks. So it is the light button that does the thing.

BTW what is the reason for zebras to be affected by the WB setting? This means they are not really raw (although they are set to RGB RAW). Is that a bug?

Quote
The closest approximation I can come up with, besides dual ISO, would be this:
https://www.magiclantern.fm/forum/index.php?topic=19315.0

Alternating short/long exposures is doable, but not trivial. There are routines for doing arithmetic on raw buffers on Canon's image processor - documented here:
https://www.magiclantern.fm/forum/index.php?topic=13408.

If you can understand the sensor internals, you can probably change all its registers from adtg_gui. However, besides tweaking some gains at various amplifier stages, and overriding LiveView resolution to get 3K/4K/fullres LV, I wasn't able to find anything useful for controlling the clipping point beyond what's already documented in the ISO research thread. I'm not saying there isn't - I'm just saying I'm not familiar with sensor electronics, so maybe I don't know where to look.

There are some CMOS registers that appear to adjust the black sun protection, but didn't look much into them.

So, feel free to grab adtg_gui and understand/document what some of these registers do.
I am not sure if we are talking about the same thing. The two links don't mention anything about logarithmic raw data. What I am talking about is not a way to do HDR, but using the actual DR of the sensor - only instead of having it convert the light to linearly encoded data, recording the raw data logarithmically. The advantages of that would be:


  • No need for ETTR because we will have the same number of levels for any f-stop
  • The benefits in post-production which follow that.
  • Maybe even smaller raw files (because we don't really need 4000 levels per f-stop).

Of course that would perhaps also require customized raw conversion. I am getting the idea for this from a program called 3DLUTCreator, which you may be familiar with. It has this unique functionality: when you open a raw file in it, it

1) converts it to LogC.tiff (using Alexa's LogC formula), which is a 16-bit TIFF with UniWB and LogC-encoded data
2) works with that TIFF to color grade it

This gives the possibility to encode the whole raw data into the LogC.tiff file (which can contain up to 21 EV of DR) and also a fairly uniform number of levels per f-stop (fairly uniform, because the LogC formula is not an ideal logarithm, so they are not exactly the same for each f-stop).

So all that got me thinking that if the raw file itself is logarithmic, that would probably save us from all the issues arising from the linearity of raw data today.
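
Just to illustrate what I mean about levels per f-stop - a toy calculation with round numbers (black at 2048, white at 15000, a hypothetical 10-bit log output, and a plain log2 curve rather than any particular LogC formula):

#include <math.h>
#include <stdio.h>

/* Toy comparison: raw code values per stop below clipping, for linear
 * 14-bit data vs. a generic logarithmic encoding. Illustrative numbers
 * only - not the ARRI LogC formula. */
int main(void)
{
    const double white = 15000, black = 2048;   /* round example values */
    const int out_bits = 10;                    /* hypothetical log-encoded depth */
    const double total_ev = log2(white - black);
    const int log_levels = (int)(((1 << out_bits) - 1) / total_ev);

    for (int stop = 0; stop < 6; stop++) {
        double hi = (white - black) / pow(2, stop);
        int linear_levels = (int)(hi / 2);      /* each stop spans the upper half of [0, hi] */
        printf("stop %d below clipping: %5d linear levels, %4d log levels\n",
               stop + 1, linear_levels, log_levels);
    }
    return 0;
}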

Your links reminded me of another thing though. Have you seen this:

https://www.nasa.gov/feature/revolutionary-camera-recording-propulsion-data-completes-groundbreaking-test

Quote
It is my understanding that log2(max/stdev) is only a rough approximation, especially at high ISOs. Rather, I prefer to read it from the SNR curve - detailed answer: https://www.magiclantern.fm/forum/index.php?topic=13787.0
I don't know why it should be a rough approximation. I am using this method as described by the libraw developers. You can check the link which I gave; it describes the method. I will look at your link too.

Quote
Also, how are you computing the noise stdev?
I don't know how exactly RawDigger calculates the sigma values. So far I have tried two kinds of dark noise samples:
1) from a shot with the lens caps on (used in the test to calculate the differences in the above graphics)
2) from the dark (hidden) pixel areas (recommended in the libraw developer's article, used in the Google spreadsheet which I shared in a previous post)

The difference between 1) and 2) is not very big. 2) seems to give more uniform values across channels, while 1) seems to show a bigger difference in DR between R, G, B, G2.

Quote
For DR measurements, I have some confidence in raw_diag's 2-frame SNR analysis (a method inspired by Roger Clark's); however, white level is still detected using heuristics (which may fail if the clipping is not harsh). For FWC and read noise, I have a feeling finding them from the SNR curve may be an ill-conditioned problem.
I am not familiar with Roger Clark's method and I don't know what FWC is. Links?

So considering everything said so far:
Quote
What would you recommend to someone who needs correct raw ETTR for photo using the best of what the sensor is capable of?

:)

I also hope you can check why I am not getting any notifications about replies in the thread. I have them turned on in my settings.

a1ex

Quote
I already read this. My test confirms a slight improvement in DR after installing ML.

You may want to double-check your test methods - with regular nightly builds, there is no change in DR at all.

Quote
I am curious to try how it would work on photos but I don't find any module called iso_regs in the ML menu?

Last time I checked, the search box was operational...

Roger Clark: http://www.magiclantern.fm/forum/index.php?topic=10111.msg117955#msg117955

Quote
Have you seen this:

https://www.nasa.gov/feature/revolutionary-camera-recording-propulsion-data-completes-groundbreaking-test

Yeah - want me to prepare that for next year's April 1st? :)

Quote
The difference between 1) and 2) is not very big.

At least on the sensor used by Apertus, the difference is *huge*. On Canons... check yourself with raw_diag (in particular, at higher ISOs).

Quote
BTW what is the reason for zebras to be affected by the WB setting? This means they are not really raw (although they are set to RGB RAW). Is that a bug?

https://www.chiark.greenend.org.uk/~sgtatham/bugs.html

Enough chit-chat for now.

heyjoe

Quote from: a1ex on September 25, 2017, 02:16:55 PM
https://www.chiark.greenend.org.uk/~sgtatham/bugs.html

Enough chit-chat for now.

That still doesn't answer my main question because obviously ML does not show the real raw histogram clipping. I provided all the info you asked for, so I don't see why this link. The 'chit-chat' arose from the fact that my main question was (and still is) unanswered. I won't repeat it again though. Thank you for your time.

a1ex

You said:

Quote from: heyjoe on September 25, 2017, 01:45:05 PM
BTW what is the reason for zebras to be affected by the WB setting? This means they are not really raw (although they are set to RGB RAW). Is that a bug?

I'm not aware of such behavior, nor can I reproduce it, so it's your duty to provide some sort of proof. That was the reason for the link.

As for your main question, I'm afraid I don't understand it. What should I recommend?

On 5D3, at full stop ISOs, the autodetected white level can be underestimated by log2(15283-3000-2048) - log2(15283-2048) = 0.37 stops (worst case), iff there are no overexposed pixels in the image. So, you may get false clipping warnings within that limit. If that's good enough for you, then just use it.

Otherwise, for short exposures, you can override the digital gain to 512 with iso_regs (to disable aperture-related variations), then hardcode the white level to whatever raw_diag or RawDigger tells you (15283-1 for full stop ISOs), or maybe a few units lower if the clipping warnings are not there. For longer exposures, find out how the clipping point changes, write it down and find the pattern or some limits. For relatively slow lenses (such as f/2.8), you can ignore the effects of digital gain - just use the white level obtained with a manual lens. From f/4.0 and beyond, there's no more digital gain trickery.
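
Just to illustrate, such a hardcoded override would be little more than a lookup by ISO - a sketch with only the values quoted in this thread (short exposures, digital gain forced to 512); everything else would still need measuring:

#include <stdio.h>

/* Sketch only: hardcoded white levels for 5D3.123 at short exposures.
 * Only the values quoted in this thread are filled in; other ISOs, longer
 * exposures and other cameras would need their own measurements. */
static int hardcoded_white_level(int iso)
{
    switch (iso)
    {
        case 100: case 200: case 400:
        case 800: case 1600: return 15283 - 1;   /* full-stop ISOs */
        case 160:            return 13307;       /* measured at ISO 160, 1/8" */
        default:             return -1;          /* not measured - fall back to autodetection */
    }
}

int main(void)
{
    printf("ISO 100 -> %d, ISO 160 -> %d, ISO 320 -> %d\n",
           hardcoded_white_level(100), hardcoded_white_level(160),
           hardcoded_white_level(320));
    return 0;
}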

However, hardcoding a bunch of camera-specific constants may quickly lead to a maintenance nightmare, so I'd prefer to either avoid it, or have some way to check their correctness - e.g. with this. That would require sample images at all the relevant ISOs, covering various exposure times and apertures, and from more than one camera, to ensure repeatability; then integrating all this stuff into the test suite. Or, a way to fix the white level to some constant value (like 16000 or whatever) regardless of the other parameters (model-dependent, and the influence of exposure time is not yet understood).




Edit: looking through raw_diag sources, there is a register that appears to return Canon's white level (which probably ends up in the CR2 EXIF - didn't check that). It's the second number displayed by raw_diag:

int white = autodetect_white_level();
int canon_white = shamem_read(0xC0F12054) >> 16;
...
bmp_printf(..., "White level: %d (%d) ", white, canon_white);


Some values (autodetected by raw_diag vs Canon, manual lens, 5D3.123):

15283, 14582 (ISO 1600, 1/8" - off by 0.08 EV)
14090, 11995 (ISO 1600, 30"  - off by 0.28 EV)
13307, 11388 (ISO 160, 1/8"  - off by 0.27 EV)
13308, 10694 (ISO 160, 4"    - off by 0.38 EV) [!]
12620, 10694 (ISO 160, 30"   - off by 0.29 EV)
15284, 14582 (ISO 100, 30"   - off by 0.08 EV)


Would that be a better approximation? If this trick works on other camera models, I'm tempted to use that instead of my autodetection (as Canon's heuristic does not depend on the image contents). My heuristic gives exact results if there are overexposed pixels, and underestimates otherwise; Canon's always underestimates by the amounts from the above table, regardless of whether the image is overexposed or not.

Do you have the patience to get a matrix of these values? You may use a Lua script if you wish.




FYI, the max values from your raw file are:


octave:1> a = read_raw('_MG_5911.CR2');
octave:2> prctile(a(:), [10 50 90 99 99.9 99.99 99.999 100])'
ans =
    3353    8279   11886   12639   12848   12943   13006   13102


and here's how the clipping warnings would look with white level 11388 (Canon's heuristic for your settings):


heyjoe

Quote from: a1ex on September 25, 2017, 04:41:53 PM
You said:

I'm not aware of such behavior, nor can I reproduce it, so it's your duty to provide some sort of proof. That was the reason for the link.
Obviously neither you nor I are mind readers, so it would be easier if you asked directly, to avoid misunderstanding. Anyway, here is the proof:

1) WB set to UniWB:

In LiveView:



After taking the shot:



2) WB set to 5000K:

LiveView:


After taking the shot:



As you can see:

- the zebras and histograms in image review are different for different WB
- the zebras in LiveView are different (histograms look the same) for different WB
- the zebras, histograms and values for the same WB are different in LiveView and in image review (Play > Light).

CR2 files:

https://drive.google.com/open?id=0B2Mb7hSVSnnFN3NuSkVVNjdRUTQ

Let me know if you need any more info about this.


Quote
As for your main question, I'm afraid I don't understand it. What should I recommend?
A way to correctly ETTR (even if it would mean not using ML).

Quote
On 5D3, at full stop ISOs, the autodetected white level can be underestimated by log2(15283-3000-2048) - log2(15283-2048) = 0.37 stops (worst case), iff there are no overexposed pixels in the image. So, you may get false clipping warnings within that limit. If that's good enough for you, then just use it.
I can get within that range without ML (using UniWB JPG histograms) but I am looking for something more accurate, as the JPG histograms are really bad (I have tried so many variations in Picture Style but they still don't show the saturation accurately).

Quote
Otherwise ...
Unfortunately I am not a programmer, and if I dare to touch your code I risk damaging something. As you correctly pointed out - this would be a maintenance nightmare and there would be more and more questions. That's why, considering your much greater expertise, I was hoping that we could diagnose the issue together and that you, knowing your code, could fix it properly (or advise on a different way to ETTR correctly).

BTW I am thinking about another solution that may work for ETTR: Wouldn't it be much easier to simply display a zoomed portion of the rightmost zone of the raw histogram? Then one can simply evaluate by the shape of it (e.g. a spike) and see if there is clipping of a particular channel or not. In that case you won't need to detect or fix any values. It would be camera independent. All you would have to do is to display the portion of the histogram bigger and zoomed. What do you think?

Quote

13307, 11388 (ISO 160, 1/8"  - off by 0.27 EV)

For which channel is that? Compared to the previously mentioned max of 9960, this looks much closer to what RawDigger shows.
BTW in the spreadsheet which I shared (shot at ISO 160, 0.5") I notice a pattern in the saturation values (without black subtraction):

- for all ISO x160 multiples: 13519-13526
- for all other ISOs and ISO 20000: 15518-15530
- for ISO 25600: 16376

Does that mean they are pretty much fixed (and can probably be hardcoded)?

Quote
Do you have the patience to get a matrix of these values? You may use a Lua script if you wish.
I don't know Lua and I don't understand everything you do (I installed ML just 2 days ago). So please provide the steps for what you need me to do.

Quote
FYI, the max values from your raw file are:


octave:1> a = read_raw('_MG_5911.CR2');
octave:2> prctile(a(:), [10 50 90 99 99.9 99.99 99.999 100])'
ans =
    3353    8279   11886   12639   12848   12943   13006   13102


and here's how the clipping warnings would look with white level 11388 (Canon's heuristic):


I don't know why but RawDigger shows different values:



and again, according to RD this shot is a tad underexposed (considering RD's saturation of 11477), by about 0.05 EV. So if such overexposure is shown on the LCD screen, it would make the photographer underexpose even more, which is contrary to ETTR.

a1ex

Quote
- the zebras and histograms in image review are different for different WB
- the zebras in LiveView are different (histograms look the same) for different WB
- the zebras, histograms and values for the same WB are different in LiveView and in image review (Play > Light).

You are looking at YUV-based zebras. Try setting "Use RAW zebras: Always".

Also, when the histogram doesn't have vertical bars (stops), it's YUV-based.

Quote
Wouldn't it be much easier to simply display a zoomed portion of the rightmost zone of the raw histogram? Then one can simply evaluate by the shape of it (e.g. a spike) and see if there is clipping of a particular channel or not. In that case you won't need to detect or fix any values. It would be camera independent. All you would have to do is to display the portion of the histogram bigger and zoomed.

Good point; however, I have some ideas to detect the presence of such a spike when checking the white level (a better heuristic). On Canons, the clipping is harsh (many pixels at the maximum value), with the possible exception of a few hot pixels.
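
Roughly along these lines (a sketch only, with made-up thresholds - not what would actually go into raw.c):

#include <stdio.h>

#define NBINS 16384   /* one bin per 14-bit raw value */

/* Sketch: look for a harsh clipping spike in the raw histogram by scanning
 * from the top and skipping the few isolated counts that a handful of hot
 * pixels would produce. The threshold is made up for illustration. */
static int find_clipping_spike(const unsigned int hist[NBINS], unsigned int total_pixels)
{
    unsigned int spike_threshold = total_pixels / 10000 + 16;   /* "many pixels" */
    for (int v = NBINS - 1; v > 0; v--)
        if (hist[v] >= spike_threshold)
            return v;          /* likely the clipping point */
    return -1;                 /* no harsh clipping found */
}

int main(void)
{
    static unsigned int hist[NBINS];   /* zero-initialized */
    hist[13306] = 3;        /* a few hot pixels above the real clipping point */
    hist[13100] = 25000;    /* harsh clipping spike */
    printf("clipping spike at %d\n", find_clipping_spike(hist, 1000000));
    return 0;
}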

heyjoe

Quote from: a1ex on September 25, 2017, 09:20:23 PM
You are looking at YUV-based zebras. Try setting "Use RAW zebras: Always".
I have "Use RAW zebras: Photo" and the help info says "Will use RAW RGB after taking a pic". After reading your current reply I set it to "Always" but looking at the shots which I shared with Play->Light still shows different zebras and histograms.

Quote
Also, when the histogram doesn't have vertical bars (stops), it's YUV-based.
In Histogram type I have RAW-based (RGB) for which the help info says "Will use RAW RGB in LiveView and after taking a pic"

Quote
Good point; however, I have some ideas to detect the presence of such a spike when checking the white level (a better heuristic). On Canons, the clipping is harsh (many pixels at the maximum value), with the possible exception of a few hot pixels.
Are you suggesting to simply wait for (the) next version?

a1ex

See replies #1 and #11.

About hardcoding:

Adobe assumes a white level of 13100 for your CR2, and 15000 for most (all?) CR2's from 5D3 taken at full-stop ISOs. They get it wrong at long exposures (tested with Adobe DNG Converter 8.2 and 9.12). This shows they are not looking at Canon's white level tag (whatever that might be).

Dcraw/ufraw use 15488 (0x3c80) - obviously wrong (only correct at full-stop ISOs with fast apertures, and maybe some extremes like ISO 25600). They probably just took it from the first 5D3 sample they've got.

Rawspeed (used by darktable) has:

<Camera make="Canon" model="Canon EOS 5D Mark III" decoder_version="2">
<Sensor black="2060" white="15700" iso_list="400 500 1600"/>
<Sensor black="2060" white="15200" iso_list="100 125 200 250 800 2000"/>
<Sensor black="2060" white="15100" iso_list="2500 10000"/>
<Sensor black="2060" white="13700" iso_list="160 320 1250"/>
<Sensor black="2060" white="14200" iso_list="640 5000"/>


Unfortunately, these are not correct either.

If the white level used by the raw processor is higher than the true value, it will result in pink highlights; if it's too low, it will clip useful highlight detail (so, ideally it should be just a bit below the real value). Cross-check with:

15283 (ISO 1600, 1/8")
14090 (ISO 1600, 30")
13307 (ISO 160, 1/8")
13308 (ISO 160, 4")
12620 (ISO 160, 30")
15284 (ISO 100, 30")


Other raw processors likely hardcode their own values.

So, no matter what method we choose for the white level, it probably won't match what most common raw processors will actually use, unless you override it manually with exiftool (after converting the CR2 to DNG). Besides, many raw processors use wrong white levels of their own, and that DNG conversion plus manual override is the only workaround I know.

(that was yet another can of worms I forgot about...)

heyjoe

Quote from: a1ex on September 25, 2017, 11:44:29 PM
See replies #2, #11.
Reply #1 is from Walter.
Reply #2 is from me.
If my replies are not counted, reply #2 is from you: "CR2, please"
If only your replies are counted, then I can't relate reply #2 to anything regarding histograms/zebras discrepancies.
Considering that, I can't be sure which one is #11.
Please disambiguate.

I don't use ACR (I use libraw through 3DLC) and I am not looking to calibrate a perfect match between raw converters but:

Does what the raw converter assumes as white have anything to do with correct ETTR? To my mind: as long as we can tune exposure in raw conversion for a correct histogram of the output file, it should not matter what the absolute values at the input are - all that matters is right exposure. Or am I missing something?

ETA: You edited your answer, so I will have to add too:
What do the histogram/zebras displayed as visual feedback on the camera LCD have to do with raw converters? When you display the zebra/histogram you don't change the values in the CR2 file; you just compare them to a value and decide whether to show them as clipped or not. Right?

a1ex

Typo - reply #1 (as printed by the forum).

Check the crop_rec_4k builds in a few hours.


heyjoe

Quote from: a1ex on September 26, 2017, 01:31:52 AM
Typo - reply #1 (as printed by the forum).
Thanks.

So if I understand correctly: the raw histogram and raw clip warnings show only in LiveView and in the instant image review (but not in Play+Light).

Unfortunately that doesn't explain why the zebras in LiveView are non-raw, because it contradicts what one has set in the menu: Use RAW zebras = Always. I do see that the help info says "if possible", but it is not quite clear what that "if" is based on (and I couldn't find anything in the user guide). From a user's perspective, one should get what one has set in the menu.

I also think it would be useful to have an option in the menu to enforce raw histograms and clip warnings in the Play+Light view as well. Can you do this?

Quote
Check the crop_rec_4k builds in a few hours.
It seems you have put the new heuristic into the code, which is great, and I am curious to test it.

Do I simply download and unzip magiclantern-crop_rec_4k.2017Sep26.5D3123.zip onto the card? My concern is the warning at the top of the page:

"The following builds are works in progress, known to have rough edges.
Please test thoroughly before considering them for serious work."


Are those crop_rec_4k builds less safe (potentially able to cause damage)? Would you recommend rather waiting for the new functionality to be included in the main builds? I am just being cautious not to damage my camera with something not fully tested.

a1ex

In your LiveView screenshots, the histogram is raw; just double-check "Use RAW zebras" is really "Always". If that still doesn't give raw zebras, it's a bug, but I'm unable to emulate LiveView in QEMU yet. A video might be useful to figure out what's going on.

Safety-wise, they are about the same. These systems don't have an MMU, any task can write anywhere in RAM, and Canon code saves its settings at shutdown by... reflashing the ROM.

The crop_rec_4k branch has an experimental safety check (as a bug there caused Canon settings to be overwritten on a few cameras, and a few of them did not boot as a result - luckily all recovered). That check will not prevent Canon code from writing garbage into the ROM (I still don't know how to prevent that), but will make it a little less likely to do so (by disabling their ROM reflashing after a crash, or when you take the battery out). So, until that safeguard gets into mainline, crop_rec_4k is *probably* a little safer.

In any case, the strongest safety net we have is the ability to emulate the firmware in QEMU (with the user's ROM), so if something goes really wrong (such as camera not booting or acting weird), I should be able to look into it. And, of course, the ROM dumper from bootloader.

Quote from: heyjoe on September 26, 2017, 10:09:31 AM
Would you recommend rather waiting for the new functionality to be included in the main builds?

Well, somebody *has* to test this before it goes into mainline :D