Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Joachim Buambeki

Pages: [1] 2 3 4
Thanks for your reply and the insight. I could live with the gradient (if it acts like a real graduated ND and doesn't degrade the image in any other way; I use ND grads all the time anyway), but a rolling shutter of 1/4 sec is unacceptable.
According to this post, the 5D3 is only twice as fast, and all the others are equally bad or even worse. This pretty much makes my idea unfeasible, I think... :'(

I might have to bite the bullet and save up for an URSA Mini or something in that league.

Does someone have any other ideas regarding this?

Hey guys,

I am wondering whether I can get a 360° (or very close) shutter with full-res silent pictures and intervals of around 0.5 sec, to capture truly continuous motion.
Is there significant processing (down) time between the shots, or is it possible to achieve this? Given that I use a fast enough card, data rates will not be the issue, right?
Rolling shutter should not be an issue at such shutter speeds, or am I mistaken?

I would like to go full frame and have my eye on a 5D2 (preferably) or 6D, but I am open to other suggestions in a similar price range as well if those cameras aren't suitable for this task.
Why would you pick the 6D over the 5D2, or vice versa? I assume the 5D2 has faster write speeds with its CF cards, despite the 6D having the better sensor overall.

Looking forward to your replies and suggestions! TIA :)


There's no need to use exiftool. The script is already using Adobe's built in XMP library.
If ExifTool works reliably and Adobe's API doesn't, I would consider using ExifTool. I'm not saying that ExifTool definitely does, but I have never managed to get this script working reliably over the course of more than a year, on completely different systems.

Exiftool is able to do that.
Now that you mention it, I remember that commercial software also relies on ExifTool. Would it be possible to call an external app like ExifTool from within the script?
Or maybe there is some documentation that says what ExifTool does, and then this could be recreated in the script?

Yes, this is a well-known issue. I usually create XMP files using the Lightroom rc_xEmP plugin, which works fine, and AE accepts them.
Either have XMP files directly generated by dmilligan's script, or figure out how to avoid ACR processing in AE.
Would it be possible to create those sidecar files with the script instead of with another plugin for another piece of software?

Ramping temperature isn't working because you need to set it to "Custom" for all photos (this doesn't happen automatically, yeah I know: "Can you make this automatic?")

For DNGs Adobe stores the metadata in the DNG itself so there will be no sidecars. If it's not working here's what I would do: select all the DNGs in the ACR dialog. Make some type of change and synchronize it to all of them. This will sort of "initialize" the XMP metadata within the DNG, otherwise the script may not be able to write the metadata correctly (yeah I know: "Can you make this automatic?" Well I'm already calling the function to init the metadata in the xmp API, and it basically seems to not always work. I have no idea why and there's nothing I can do about it, but at least this workaround seems to work)

As for ramping colour temperature: that explains why the setting did not work until I set it to a certain value in ACR for all pictures. Maybe this should be an option in the script: set it to Custom if not set?

Commercial software seems to have found a way to correctly initialise metadata, so there might be a way.

I've never seen the purgeFolderCache error and I have no idea what it could be.


Analysis size refers to how big the image is scaled down to before computing the histogram. The smaller you make it the faster the script but the histogram will be less accurate. (... rescaling the image down makes the histogram calculation much faster)
Did you try cascading the resolution when multiple passes are going to be used? Make the first pass faster by running it at a lower resolution and the later passes at a higher resolution for the histogram calculation. Maybe this saves some time in the end (and increases precision in the last pass if it is run at an even higher resolution)?
Also: Is it possible to let the script run as many passes as needed instead of showing the warning that more passes are needed?
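To make the analysis-size/cascading idea concrete, here is a rough sketch in plain JavaScript. This is NOT the actual script's code, and `downscale`, `histogram` and `percentile` are hypothetical helper names; it just shows why a smaller analysis size makes the histogram step faster (fewer pixels visited per pass):

```javascript
// Hypothetical sketch of the analysis-size idea: the histogram is built
// from a downscaled copy of the luma data, so a smaller analysis size
// means fewer pixels to visit on each pass.

function downscale(pixels, width, height, factor) {
  // Nearest-neighbour sampling: keep every `factor`-th pixel in x and y.
  var out = [];
  for (var y = 0; y < height; y += factor) {
    for (var x = 0; x < width; x += factor) {
      out.push(pixels[y * width + x]);
    }
  }
  return out;
}

function histogram(pixels, bins) {
  // Count how many pixel values (assumed 0..bins-1 integers) fall in each bin.
  var h = [];
  for (var i = 0; i < bins; i++) h.push(0);
  for (var j = 0; j < pixels.length; j++) h[pixels[j]]++;
  return h;
}

function percentile(hist, total, p) {
  // Return the bin where the cumulative count first reaches p (0..1).
  var target = p * total, cum = 0;
  for (var i = 0; i < hist.length; i++) {
    cum += hist[i];
    if (cum >= target) return i;
  }
  return hist.length - 1;
}
```

Cascading would then just mean calling `downscale` with a large `factor` on the first pass and smaller factors on later passes.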

I recently encountered a glitch in the script's behaviour: it either did not adjust at all or the deflickering was poor. It took me a while to track down the cause, but basically, in Adobe Bridge you have to set "Options for thumbnail quality and preview generation" to "Always High Quality".


Any other setting gave me either unpredictable results (like a constant +1 stop in exposure for embedded thumbnails if I had shot at -1 EV) or left a lot of flickering. I have not done more research into why; this is just for anyone who encounters the same problem. (Bridge CS6 + ACR 8.4.)
Maybe it would be good to put things like that, and the DNG and colour temperature issues, on the main page of the script under known bugs and limitations.

Also, why don't you explain the settings and options on the site? The documentation is a bit lacking at the moment, IMO.


I also get this error after the first pass now:

After pressing OK it stays stuck with purging the cache. (I have plenty of free HDD space if that is a possible cause).

I am also trying to break the script in other ways:
When I pick three keyframes (start + middle + end) and choose Ramp Multiple, it only ramps the exposure and clarity, but not the temperature that was also changed in the middle keyframe. I tried that a few times and made sure everything is checked in the "Ramp Multiple" dialogue.

When you want to ramp and deflicker, you have to ramp multiple first and then deflicker, right? What about combining that into one process?

EDIT - More testing: 3 keyframes; start underexposed, middle overexposed with a high colour temperature, end underexposed with the same colour temperature as the start.
When applying the script to CR2 files instead of DNGs, the ramping works as expected when imported into AE (except for colour temperature, as mentioned above). I assume AE needs the sidecar files for some reason; otherwise it applies the settings of the first frame to the entire sequence. I can't verify that, because I don't know how to create .xmp files when processing DNG files.

In case you missed that in the last posts: CC2014, OSX

I am not sure if this is the RAM cache, but the problem is still there after clearing the "Media & Disk Cache" in the preferences. If there were still something in the disk cache, it should be visible as blue lines instead of green, right? I don't see those blue lines after importing the sequence again in a new project.

Had you already loaded and viewed part of the sequence in AE? If so, it's because you are seeing AE's preview cache (notice the green areas above the timeline). When you make changes inside AE, it knows to clear the preview cache. But you made changes outside of AE that it wasn't aware of. Simply clear the preview cache.

I suspected something like that and made sure this isn't what is causing it. I quit AE and imported the sequence again (actually did that multiple times) but the sequence was still screwed up.

Looks like you selected the undo data file to be deflickered (which is not an image).
Of course, I just pressed Cmd+A to select all files...
Could you modify the script to only process image files and ignore everything else (like undoData, subfolders, etc.)?

I tried it again and now it gets interesting:
In Bridge you can see the adjustments made by the script in the thumbnails and when you open the files in ACR, but After Effects ignores the adjustments. Not all of them, though, which is really weird: the adjustments seem to be taken into account for about half of the sequence, but then it stops. Do you have any explanation for that behaviour?
I then rendered the images from the ACR dialogue in Bridge and put the image sequence through compressor to see if the changes are applied - they are (though there still is some shimmering - not sure if I would call it flicker).

I can send you the DNGs for examination via Dropbox if you need them, they are only 200 MB.


Hi David,

I just wanted to process the files to show that deflicker doesn't work, but the problems with the script and me seem to never end, no matter what OS or version of the software:

Before applying deflicker, all I did was process the downrezzed DNGs (created from CR2 files); the first and last frames were marked with 1 star and their exposure was lowered (a small amount for the first keyframe and a larger amount for the second one). After pressing OK, the script stays stuck at the process shown.
Also, Bridge is behaving very buggily (with DNG, CR2 and NEF files). I don't think it did this when freshly installed, only for the last few days. It almost always crashes at some point. Restarting the computer doesn't help either. It always asks me to purge the cache when started again.
I don't remember it doing that when I started writing in this thread again two weeks ago.

Any ideas, David? I would really like to get this working, because it is basically all I need to process my timelapses, but it seems it is not meant to be. :-(


PS: Did you get my PM?

Yeah, that's probably the easiest.
I'll double-check everything again and then send you samples of the unprocessed sequence and the processed one as an x264 encode, okay?

That sounds really complicated, and without much benefit. You'd also need to analyze the raw data, I only have access to a preview. However, if you have a proposal or description of how to do that mathematically, I'd certainly be willing to try and implement it.
AFAIK white balancing is shifting the multiplier (<1, 1 or >1) of each channel in linear gamma, right? I just read this thread again, and at the beginning you say that you assume a gamma curve for the deflickering; is that correct? Why don't you use that curve to get somewhat close to linear, and then compare the histograms of adjacent images with each other to find out if there is an offset? This is basically the same as deflickering, just done separately for each colour channel, right?
Please see the first 50 seconds of this video to illustrate the issue I am talking about. I am aware that they need much more funky processing to get the results they are getting, but fixing the WB would be a good start for less severe sequences.
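To sketch what I mean in plain JavaScript (hypothetical, not the script's API, and assuming a simple power-law gamma rather than ACR's real tone curve): undo the gamma, compare each channel's linear mean against a reference frame, and the ratio is the per-channel correction multiplier. That is deflickering, run once per colour channel:

```javascript
// Hypothetical sketch of per-channel WB deflickering. Assumes a plain
// power-law encoding gamma of 2.2, which is only an approximation.

var GAMMA = 2.2;

function toLinear(v) {
  // Undo the assumed encoding gamma for a 0..1 value.
  return Math.pow(v, GAMMA);
}

function channelMultipliers(refMeans, frameMeans) {
  // refMeans / frameMeans: [r, g, b] channel means of the encoded image.
  // Returns the linear-space multiplier that would match each channel of
  // this frame to the reference frame.
  return frameMeans.map(function (m, i) {
    return toLinear(refMeans[i]) / toLinear(m);
  });
}
```

A frame whose blue channel came out darker than the reference would get a blue multiplier > 1, pulling its white balance back in line.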

PaulJBis has already implemented undo:
I noticed that after I installed the latest version of the script, but I am not sure how to actually use it. Maybe Paul can chime in and explain it to me?

There's no way to tell the difference analytically, so there's no way to do anything other than simply remove the flicker. Seems like it would look better without flicker anyway, even if the flicker being eliminated was 'natural'.
I don't think flicker should be completely removed in those cases, because then it looks too unnatural.
Just for my understanding: the number of passes run by the script mostly affects the precision of the deflickering, and not the averaging, right?
I see that an algorithm has a hard time distinguishing between the two cases; that is why I am asking for an option to adjust the deflicker strength. I will try to rephrase the post I linked to.

Red graph = average image brightness (per frame)
Green graph = brutally averaged brightness target
Blue graph = gentle brightness target for pleasing and natural results©
Black bars = keyframes 1., 2. and 3.
With the current algorithm, everything between keyframes 1. and 2. would get averaged out (represented by the green graph), right?
The blue graph represents my idea of the deflickering strength: at 100% strength the blue graph will look like the green one, and at 0% strength the blue graph will be no different from the red one. At a strength of 40%, the graph looks roughly like my illustration.
Maybe it is possible to adjust high-frequency and low-frequency smoothing (I hope this is the right terminology) separately:
High-frequency (HF) flicker would be single frames that are just off (because AV mode made a bad decision, or whatever the reason is). These spikes obviously need to be eliminated.
Low-frequency (LF) flicker would be the smoothing of the curve on a wider scale: when the sequence gets darker because of passing clouds for, let's say, 50 frames between keyframes 1 and 2, we want it to be a bit darker (0.5 EV) but not as dark as the RAW files (1.5 EV), and there should also be a gentle roll-off into this.

Removing HF flicker would only take into account the brightness of adjacent images, while LF deflickering would also take into account the original brightness of the RAW file.

If the number of passes does what I assume above, running multiple passes would improve the precision of the high-frequency (and to some extent also the low-frequency) deflicker, but not average everything out until it all has the same brightness.

Hopefully my idea is clearer now.
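A minimal sketch of the red/green/blue graph idea in plain JavaScript (hypothetical, not the script's code; `movingAverage` and `blendedTarget` are names I made up): the green graph is a fully smoothed brightness target, and the strength simply blends between the measured red graph and the smoothed green one.

```javascript
// "Blue graph" sketch: blend the measured per-frame brightness (red
// graph) with a fully smoothed target (green graph) by a strength
// factor. strength = 1 gives the green graph, 0 leaves the red one.

function movingAverage(values, radius) {
  return values.map(function (_, i) {
    var sum = 0, n = 0;
    for (var j = i - radius; j <= i + radius; j++) {
      if (j >= 0 && j < values.length) { sum += values[j]; n++; }
    }
    return sum / n;
  });
}

function blendedTarget(brightness, radius, strength) {
  var smooth = movingAverage(brightness, radius);
  return brightness.map(function (b, i) {
    return b + strength * (smooth[i] - b);
  });
}
```

A wide `radius` would correspond to the LF smoothing, and a narrow one to killing the HF spikes.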


Hi Paul,

I just rediscovered this thread and found your reply to me buried a few pages back, so please excuse my late reply!

I'm late to this, but...: if I understand you correctly, the underlying problem here is that Adobe Camera Raw can't pass to AE a true 32-bit image with all the dynamic range of the original RAW file. Hence your desire to do everything (even things like gradients) "before" AE, in the RAW processing phase.

Well, a possible solution might be here. In the last comment to this thread:

Stu mentions a tutorial about how to get the most dynamic range from Camera Raw:

I wonder if you might find it useful.

First, let me say thanks for those two links. I am not sure how exactly the info in the second link translates to the recent version of ACR. The problem is that when the sliders are at zero, the image is far from being linear. The problem was discussed by Christian Bloch here as well. I assume the settings he recommends approximate a linear gamma pretty well, though. Writing this post, I would think that one should be able to find out fairly precisely which settings create a linear gamma in PV2012, using Photoshop and the colour picker to compare against the same image processed with PV2010. I need to look into that when I find time...


David, I would like to revisit the deflickering topic. Please excuse me if I just don't understand what you are saying:

Technically it's not the average, but the median (or some other percentile of your choosing, median is simply the 50th percentile), which is much more statistically robust than the mean. Notice how the median is the same between the two different statistical distributions on this graph, while the mean and mode are different (a histogram is a type of statistical distribution)

Your two different parameters are exactly the same thing in terms of this algorithm. You can either correct the exposure completely to the target, or not, by some amount. Change this by adjusting the coefficient I mentioned. Make it smaller. 2 / log(2) is the best estimation of the ACR black box that I've found; it would represent 100% deflicker (or at least as close as we can get by guessing). 1 / log(2) would be like 50% deflicker. 0 / log(2) would be no deflicker at all.
Yes, that should be possible assuming Br always uses the sidecars and doesn't ever store the metadata directly in the files, which has always been the case in my experience.

There are basically two different scenarios for deflickering:
1. A sequence with exposure changes that are totally unwanted, for example if shot in AV mode. Here you want to totally eliminate the changes in brightness across the sequence.
2. A sequence that is technically perfect (shot in M, lens with no aperture flicker, etc.) but with natural light flicker caused by passing clouds that make the image too dark.

I understand you can eliminate flicker for case 1 with the script, but with case 2 I don't want the changes in brightness to be totally eliminated, just weakened (not 1 or 2 EV as in the RAW sequence, but roughly 0.4 to 0.8 EV in the render) to recreate how the human eye/brain perceives it when on location. How does that work with the current algorithm? This is what I was talking about in this post and the illustration of the curve.
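If I read your quoted explanation correctly, the per-frame correction is something like the sketch below (hypothetical plain JavaScript; my reading of your description, not the actual script): match the chosen percentile to the target, scaled by the coefficient, so shrinking the coefficient is exactly the "strength" I am asking about.

```javascript
// My reading of the quoted explanation (a sketch, not the script): the
// correction is the log of the ratio between the target and the frame's
// percentile value, scaled by a coefficient. 2 / log(2) would be 100%
// deflicker, 1 / log(2) roughly 50%, and 0 no deflicker at all.

var evCoefficient = 2 / Math.log(2); // David's ACR black-box estimate

function exposureCorrection(frameValue, targetValue, coefficient) {
  // frameValue / targetValue: where the chosen percentile (e.g. the
  // median) falls on the histogram.
  return coefficient * Math.log(targetValue / frameValue);
}
```

With `evCoefficient` the full correction is applied; halving the coefficient would leave half of the natural brightness change in place.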


Will take a while, sorry.
I just realised my CPU fan isn't working anymore, and probably hasn't been for the last few days either...
So the general lagginess in Bridge and the random crashes I had were likely caused by that.
I'll get back to you ASAP.

Hi David,

It took me a bit longer than expected to get everything running again... I hope you'll excuse my slightly delayed update on my problem.  ::)
I see you kept working on the script in the meantime; I'm really glad to see it still being maintained.
The good news is that I got a new machine that should serve me much better than the old one. The bad news is that I still have the same problem with the same sequence I tried the last time I wrote in this forum. I did a three-pass deflicker with percentiles of .7 and .25 (the value that targets the even foreground), but the flickering is still much worse after deflickering than in the original sequence (where there was *almost* none). All of this on a fresh installation of CC2014 on OSX.
I can send you a small encode of the sequence via Dropbox if you need it.

Did you ever think about a feature to stabilize/deflicker white balance? When the sun is covered by clouds the colour temperature becomes cooler (right?!), could that be eliminated by looking at the histogram to adjust the WB?

Could you add a feature to the context menu that auto-creates a certain number of keyframes? Like selecting the sequence, and then the script creates n keyframes spread evenly over the whole sequence. For a sunset, editing just the start and end point isn't sufficient.
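Something like this trivial sketch is all I am imagining for the spacing (hypothetical plain JavaScript, not part of the script; `evenKeyframes` is a made-up name):

```javascript
// Sketch of the requested auto-keyframe feature: given the number of
// frames in a sequence, return n keyframe indices spread evenly from
// the first frame to the last.

function evenKeyframes(frameCount, n) {
  if (n < 2) return [0]; // need at least a start and an end keyframe
  var idx = [];
  for (var i = 0; i < n; i++) {
    idx.push(Math.round(i * (frameCount - 1) / (n - 1)));
  }
  return idx;
}
```

The script would then mark the frames at those indices as keyframes before ramping.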

What about an option to initialise metadata? That would make it easier to deal with animating the gradient, but you could also add the author of the image and other metadata (location, for example) without Lightroom.

What about an option to create a backup of the XMP data within the sequence folder, just called "XMP Snapshot - date and time" or something? That is how I back up my XMP files, and it would save me quite a few clicks in the process. :-)

About the context menu:
IMHO it would be easier to spot the script options if they weren't separated by the dividing lines. If you don't want to change that, can you tell me how to do it myself (I would also add an asterisk or something), please? I guess different colours are out of the question because Adobe doesn't allow that?

Keep up the good work, David!



Fixed the white thingy
Deflickering is still producing random results with the latest version.

UPDATE 2: Seems like Lightroom cannot read the created .XMP files. It just does nothing when importing the files or when updating from XMP metadata. Unreadable XMP+RAW.

Please try "Metadata->Read Metadata from files" and see if that works.

added backup xmp option.
This keeps getting better and better!

It seems I still have issues with the deflickering not working correctly, but I need to rule out errors on my side first before starting to complain.

Your two different parameters are exactly the same thing in terms of this algorithm. You can either correct the exposure completely to the target, or not, by some amount. Change this by adjusting the coefficient I mentioned. Make it smaller. 2 / log(2) is the best estimation of the ACR black box that I've found; it would represent 100% deflicker (or at least as close as we can get by guessing). 1 / log(2) would be like 50% deflicker. 0 / log(2) would be no deflicker at all.
If I understand you correctly you would just need a "strength slider" that changes the coefficient in the script and that would take care of my "blue graph" in my improvised painting, right?

Yes, that should be possible assuming Br always uses the sidecars and doesn't ever store the metadata directly in the files, which has always been the case in my experience.
That should only be true when working with DNG files, right? With native RAW files like CR2 or NEF, Adobe can't store any metadata within the file format and has to store it in sidecar files, if I am not mistaken.

I can create and synchronize the radial gradients just fine (ACR 8.2.094). They don't actually work with the script, though; I figured the tag names would be the same (I didn't actually test the radial ones), but they're not, so I'll add them. It should be pretty much the same code as for the linear ones, just with slightly different tags.
Thanks for letting me know. I will wait for the next update with the radial filters and the fixes for itsskin's and my deflickering problems before I start trying to isolate the problems only I seem to have.

I can't help with any debugging info, because I still don't understand how to create it, but I am having the same issue as itsskin.
My keyframes are in the range of ±0.3 EV, and the deflickered frames are SEVERELY underexposed in one pass and overexposed in the next.
See here.

I also noticed that Bridge seems very unstable/unresponsive since the version of your script with gradient ramping.

If the last bug were addressed, where the script ignores the first keyframe as per my previous post, it would be 100% awesome.
That's what I managed to get with current version:
Lookin' good!

I am still trying to get the animated radial gradient to work; the linear one works as expected, based on my observations so far.
I am starting to suspect there is a bug in Bridge itself:
If I create a radial gradient (for this particular example I chose one with -3 EV, to make it REALLY visible) and press "sync everything" (including local adjustments), NOTHING happens.

Can you replicate that issue in some way David?

Says here you get it bundled with Adobe Premiere Pro CS6.
I have it in my application folder; maybe it could work as a standalone application?
What version of Adobe are you using? Mac?
Thanks, I'll look into it, stuff like that is always giving me a headache...
I am on Win 8.1 64-bit, first CC version.

Two feature requests:

Could you implement a check for settings that can't be ramped?
If there are conflicting settings, a dialogue could pop up where the user decides which one to keep. If that is not possible with the script, a reminder of what to change would be good.
Camera calibration
Lens profile
If you forget to sync those earlier, this screws up your shot.

Would it be possible to create snapshots of the current .xmp data? If I decide to alter the settings but want to keep the old ones as well, it would be nice to back them up with a right click.
If that is possible, what do you think of the idea of creating a folder with the .xmp files and implementing a dialogue where one can load stored sets of .xmp files?
That would be handy for quickly trying and comparing different developing styles.

In terms of the way this algorithm works, I can't make sense of what you mean by this. The algorithm simply matches where a particular percentile falls on the histogram from one frame to the next. I'm not sure what you mean by 'exposure graph'.
Okay, let me call it the average brightness of each frame.
If I am not mistaken, when there is flicker between keyframes, the algorithm evens it out no matter what, especially with multiple passes.
With my proposed method, it would just reduce the flicker, leading to a more natural result. This is important when clouds block the sunlight and the exposure decreases significantly: for a natural result one would like a darker image in the shadow, but not 3 EV darker, as it was in reality.

Red graph = average image brightness (per frame)
Green graph = brutally averaged brightness target
Blue graph = gentle brightness target for pleasing and natural results©
Black bars = keyframes
The blue graph could be adjusted to anything in between, from being like the red one (no change to the brightness target) to being like the green one (smoothed out totally).
The deflickering strength then determines how hard the algorithm tries to match that target graph.

Do you understand now what I mean?

I could easily implement something like this (see the 'evCoefficient' at the top of the script, simply make it smaller), but...
If you're already making two different layers, you could just only partially blend the ground of fully deflickered footage (i.e. your layer mask makes the sky of the original 100% opaque, and the ground 50% or whatever). That should be basically the same I should think.
That would be possible, but in my case that means adding 50% more render time to that particular render, and having to manage another iteration of the same sequence (3 instead of 2). I am also not sure if the blending would lead to the same natural-looking results that you get with proper deflickering, because the exposure slider in ACR acts really film-like (in the sense of how chemical film behaves).
