I have written this in the hope of helping the broader ML community better understand some of the ‘feature requests’, in this case ETTR (Expose To The Right).
Obviously it is written from an ‘IMO’ stance, as ETTR and other strategies to maximize dynamic range, or captured image-data fidelity, are not universally agreed upon.
First, to maximize the captured data’s fidelity for post-processing, I believe we need to try to accomplish several things with our exposures (beyond ensuring they are in focus, etc.): minimize noise, maximize the signal-to-noise ratio (S/N), and capture the most tonal information on each sensor element (RGBG). However, accomplishing all of these at once, for a real-world scene, is near impossible.
For example, to minimize noise we should only shoot with the camera cooled to its lowest operating temperature, e.g. to minimize dark-current noise. The longer we shoot, and the hotter the day, the larger this noise contribution becomes.
To maximize S/N we should seek to capture the maximum number of photons, and no more, i.e. achieve a Full Well situation. However, although we may be able to do this for a real scene, it will only be achieved in the few sensor elements covering the brightest part of the scene, i.e. a very small percentage of the captured image. For instance, the subject of the scene may sit in the mid-tones or lower, not in the specular highlight that is creating the Full Well condition.
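To put some rough numbers on this, here is a minimal sketch of shot-noise-limited S/N; the full-well capacity used is an assumed, illustrative figure, not a measured 5DMkIII value:

```python
import math

# Illustrative sketch: with photon shot noise alone, SNR = N / sqrt(N) = sqrt(N),
# so the more photons a sensor element collects, the better its signal-to-noise.
full_well = 60_000          # assumed full-well capacity in electrons (hypothetical)

for stops_below in range(0, 6):
    electrons = full_well / (2 ** stops_below)
    snr = math.sqrt(electrons)              # shot-noise-limited SNR
    print(f"{stops_below} stops below Full Well: {electrons:8.0f} e-  ->  "
          f"SNR ~ {snr:5.1f} ({20 * math.log10(snr):.1f} dB)")
```

Only the brightest tones get the best-case S/N; a mid-tone subject several stops down is already working with far fewer photons.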
I believe we all now know that DSLR cameras do not capture and process light the way our eyes or film do. The sensor’s response is linear, and this is why ETTR and bracketing strategies have apparent merit: they try to get the maximum tonal gradation into the capture without ‘blowing out’ important data.
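This linearity is the usual arithmetic behind ETTR: in a linear raw file, each stop down from saturation has only half as many raw levels available to describe it. A small sketch, assuming a 14-bit raw file:

```python
# Why ETTR: a linear 14-bit raw file has 2**14 = 16384 levels, and each stop
# down from saturation has only half as many levels as the stop above it.
bits = 14                       # assumed raw bit depth
levels = 2 ** bits

for stop in range(1, 7):
    upper = levels // (2 ** (stop - 1))
    lower = levels // (2 ** stop)
    print(f"Stop {stop} below saturation: {upper - lower:5d} raw levels")
```

Push the exposure to the right (without clipping important data) and the scene’s tones land where the raw file has the most levels to spend on them.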
So far so good.
I think bracketing is not ‘contentious’ because we are usually on a tripod and at the base/lowest ISO, i.e. where we can guarantee that some of the sensor elements capturing the scene information we deem important will reach their Full Well level, albeit only a few percent of them, unless we ‘over-bracket’.
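As an illustrative example of what such a bracketed set looks like in exposure terms (the base shutter speed and EV steps below are assumptions, not ML defaults):

```python
# A tripod bracket at base ISO: only the shutter speed changes, one value per
# exposure-value (EV) step.  Base shutter and EV offsets are illustrative.
base_shutter_s = 1 / 60          # the 'metered' middle exposure
ev_offsets = [-2, 0, +2]         # a typical 3-frame bracket

for ev in ev_offsets:
    shutter = base_shutter_s * (2 ** ev)    # +1 EV doubles the exposure time
    print(f"{ev:+d} EV: {shutter:.4f} s  (~1/{round(1 / shutter)})")
```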
I think the issue comes when we introduce the ISO factor, i.e. when shooting a handheld bracketing set or a single handheld ETTR exposure. In both cases we may need to increase the ISO to achieve a usable shutter speed, e.g. keeping the slowest bracket faster than 1/focal-length, or faster than, say, 1/50s.
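A rough sketch of that trade-off, assuming the 1/focal-length rule of thumb and illustrative starting values:

```python
import math

# If the ETTR exposure at base ISO wants a shutter speed slower than the
# handheld limit, each doubling of ISO buys one stop of shutter speed.
# All values below are illustrative, not camera-specific.
focal_length_mm = 50
base_iso = 100
metered_shutter_s = 1 / 6        # e.g. what ETTR asks for at base ISO

handheld_limit_s = 1 / focal_length_mm           # the 1/FL rule of thumb
stops_needed = max(0, math.ceil(math.log2(metered_shutter_s / handheld_limit_s)))
required_iso = base_iso * (2 ** stops_needed)

print(f"Handheld limit ~1/{focal_length_mm}s; metered {metered_shutter_s:.3f}s")
print(f"Need {stops_needed} stop(s) faster -> roughly ISO {required_iso}")
```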
I for one take a lot of handheld 3-frame brackets on my 5DMkIII and am confident that increasing the ISO will not create too many issues in post-processing. However, from my reading on sensors, I will not increase the ISO above about 1600-3200, as this moves me from the region where the camera’s (read) noise sources dominate into the region where the sensor’s photon limitation dominates, i.e. I am simply not capturing enough photons at high ISO. This transition varies from camera to camera, but the bottom line is that if we follow an ETTR strategy, there is an upper ISO limit we should all be aware of.
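A toy model of why such a limit exists; the full-well and read-noise figures below are assumed, illustrative values, not measured 5DMkIII data:

```python
import math

# Toy model, not measured data: engineering dynamic range per ISO is roughly
# log2(clipping level / read noise), both in electrons.  Raising ISO halves the
# electrons available before clipping; it only pays off while the input-referred
# read noise is still falling.  Once the read noise flattens out (shot-noise
# limited), each further ISO stop simply costs a stop of captured light.
full_well_base_e = 60_000        # assumed full-well capacity at base ISO
base_iso = 100

# assumed input-referred read noise in electrons; the flattening above ~1600
# is the typical shape, but the exact numbers here are illustrative only
read_noise_e = {100: 33, 200: 20, 400: 12, 800: 7, 1600: 4.5, 3200: 3.5, 6400: 3.2}

for iso, rn in read_noise_e.items():
    clip_e = full_well_base_e * base_iso / iso
    dr_stops = math.log2(clip_e / rn)
    print(f"ISO {iso:5d}: clip {clip_e:7.0f} e-, read noise {rn:4.1f} e-, "
          f"DR ~ {dr_stops:4.1f} stops")
```

In this kind of model the dynamic range falls slowly while read noise is still improving, then drops roughly a stop per ISO stop once it flattens, which is where raising ISO further stops paying off.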
In conclusion, I believe ML is on the right track by giving the user choices to maximize DR and S/N with respect to the scene, i.e. extended bracketing (although with my 5DMkIII this is less important than it was with my 50D) and a RAW LV histogram (a transformational feature).
Finally, IMHO, using all the ML features without understanding the camera system’s limitations could bring disappointment.