Yeah, that's probably the easiest.
I will double-check everything again and then send you samples of the unprocessed sequence and the processed one as x264 encodes, okay?
That sounds really complicated, and without much benefit. You'd also need to analyze the raw data, and I only have access to a preview. However, if you have a proposal or description of how to do that mathematically, I'd certainly be willing to try and implement it.
AFAIK white balancing means shifting the multiplier (<1, 1, or >1) of each channel in linear gamma, right? I just read this thread again, and at the beginning you say that you assume a gamma curve for the deflickering, is that correct? Why don't you use that curve to get somewhat close to linear and then compare the histograms of adjacent images with each other to find out if there is an offset? This is basically the same as deflickering, just done separately for each color channel, right?
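To make this concrete, here is a rough Python sketch of what I mean. The power-law gamma and the use of channel medians (instead of comparing full histograms) are just my assumptions, not how the script actually works:

```python
import numpy as np

GAMMA = 2.2  # assumed power-law gamma; the script's actual curve may differ

def channel_gains(prev_img, curr_img, gamma=GAMMA):
    """Estimate per-channel multipliers that match curr_img's white
    balance to prev_img. Inputs: float arrays in [0, 1], shape (H, W, 3)."""
    prev_lin = np.power(prev_img, gamma)  # undo gamma, roughly linear now
    curr_lin = np.power(curr_img, gamma)
    gains = []
    for ch in range(3):
        # median is more robust than mean against moving subjects
        ref = np.median(prev_lin[..., ch])
        cur = np.median(curr_lin[..., ch])
        gains.append(ref / max(cur, 1e-6))
    # apply the gains in linear space, then re-encode with 1/gamma
    return gains
```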
Please see the first 50 seconds of this video for an illustration of the issue I am talking about - I am aware that you need to do much more funky processing to get the results they are getting, but fixing the WB would be a good start for less severe sequences.
PaulJBis has already implemented undo:
https://github.com/davidmilligan/BridgeRamp/pull/1
I noticed that after I installed the latest version of the script, but I am not really sure how to use it. Maybe Paul can chime in and explain it to me?
There's no way to tell the difference analytically, so there's no way to do anything other than simply remove the flicker. Seems like it would look better without flicker anyway, even if the flicker being eliminated was 'natural'.
I don't think flicker should be completely removed in those cases because then it looks too unnatural.
Just for my understanding: the number of passes run by the script mostly targets the precision of the deflickering and not the averaging, right?
I see that an algorithm has a hard time distinguishing between the two cases, which is why I am asking for an option to adjust the deflicker strength. I will try to rephrase the post I linked to.
Red graph = average image brightness (per frame)
Green graph = brutally averaged brightness target
Blue graph = gentle brightness target for pleasing and natural results
Black bars = keyframes 1., 2. and 3.
With the current algorithm, everything between keyframes 1 and 2 would get averaged out (represented by the green graph), right?
The blue graph represents my idea of a deflickering strength: at 100% strength the blue graph would look like the green one, and at 0% it would be no different from the red one. At a strength of around 40%, the curve would look roughly like it does in my illustration.
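In rough Python, the blue graph would just be a blend of the other two. The function name is made up, and for simplicity I am assuming the "brutal" green target is a flat average between two keyframes:

```python
import numpy as np

def blended_target(raw_brightness, strength=0.4):
    """Blend the per-frame brightness (red graph) with its hard average
    (green graph) to get a gentle target (blue graph).
    strength=1.0 gives the fully averaged green curve,
    strength=0.0 leaves the red curve unchanged."""
    raw = np.asarray(raw_brightness, dtype=float)
    averaged = np.full_like(raw, raw.mean())  # flat target between keyframes
    return strength * averaged + (1.0 - strength) * raw
```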
Maybe it is possible to adjust high-frequency and low-frequency smoothing separately (I hope this is the right terminology; see the sketch below for what I mean):
High-frequency (HF) flicker would be single frames that are just off (because the AV mode made a bad decision, or whatever the reason is). These spikes obviously need to be eliminated.
Low-frequency (LF) flicker would be the smoothing of the curve on a wider scale: when the sequence gets darker because of passing clouds for, say, 50 frames between keyframes 1 and 2, we want it to get a bit darker (0.5 EV) but not as dark as the RAW files (1.5 EV), and there should be a gentle roll-off into this.
Removing HF flicker would only take into account the brightness of adjacent images, while LF deflickering would also take into account the original brightness of the RAW file.
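Here is a rough Python sketch of the two-stage idea; the window sizes, strength value, and function name are placeholders I invented:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def smooth_brightness(raw, hf_window=3, lf_window=51, lf_strength=0.6):
    """Two-stage deflicker target for a 1-D per-frame brightness curve.
    1. HF: a short median filter removes single-frame spikes outright.
    2. LF: blend toward a wide moving average, so slow changes
       (passing clouds) are softened but not flattened completely."""
    raw = np.asarray(raw, dtype=float)
    no_spikes = median_filter(raw, size=hf_window)           # HF pass
    low_pass = uniform_filter1d(no_spikes, size=lf_window)   # LF trend
    return lf_strength * low_pass + (1.0 - lf_strength) * no_spikes
```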
If the number of passes is doing what I assume above, running multiple passes would improve the precision of the high-frequency (and to some extent also the low-frequency) deflicker, but not average everything out until it has the same brightness.
Hopefully my idea is clearer now.
JB