Topics - skrk

#1
I did some searching re: decimal seconds with the intervalometer, and found some threads, but they are mostly ~4 years old, and I'm wondering what the latest information on this subject is.

My understanding is that the available clock only has 1-second resolution, so the intervalometer can't be more precise than that; is that still the case?

My understanding is that there are script-based (or custom code-based) ways around this issue, but that they won't be time-accurate; is that still the case?
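(For context on the accuracy worry: the failure mode I imagine is a script that sleeps a fixed amount between shots, so timing error accumulates over hundreds of frames; scheduling each shot against an absolute clock keeps the error bounded. A rough illustration in Python, not ML's actual scripting API; trigger_shot is a hypothetical stand-in:)

```python
import time

INTERVAL = 1.5   # desired interval in seconds
NUM_SHOTS = 900

def trigger_shot():
    """Hypothetical stand-in for whatever actually fires the shutter."""
    pass

# A naive time.sleep(INTERVAL) after each shot lets per-shot overhead
# accumulate. Scheduling against absolute deadlines avoids that drift.
start = time.monotonic()
for i in range(NUM_SHOTS):
    deadline = start + i * INTERVAL
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)
    trigger_shot()
```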

The main concerns for me are the typical ones: the difference between e.g. a 1s and a 2s interval is a full factor of two in capture rate (and thus in apparent motion speed), and retiming the resulting speed in post is obviously not a good option. It's also nice to be able to precisely control the length of the resulting time lapse when you know how long you'll be filming (e.g. this event will take 20 minutes and I want the time lapse to be 30 seconds long at 30fps, so I want 1.33s per image). It's also nice to get the maximum frame rate out of the camera (my 100D can't do a 1s interval but can do 2s, so it'd be nice to try 1.5s).
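(The 1.33s figure is just the event length divided by the number of frames the clip needs; a trivial helper for reference:)

```python
def required_interval(event_seconds, clip_seconds, fps):
    """Shooting interval so an event of event_seconds plays back
    as a clip of clip_seconds at the given frame rate."""
    return event_seconds / (clip_seconds * fps)

# 20-minute event, 30 s clip at 30 fps: 1200 / 900 = 1.33 s per image
print(required_interval(20 * 60, 30, 30))
```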

I have done some raw video recording, and am aware of the FPS override, but AFAIK I can't shoot anywhere close to 4K in that mode (on my 100D). That method also means I can't post-process the RAW files in Canon DPP (which means no lens optimizations for me). It also limits the time I can record on a given card due to file size (when compared to shooting JPG).

Any hope? Maybe this has been figured out and I'm just not aware of the fix/script/etc?

Thanks for any ideas!
#2
Hi - I am doing some simple processing of frames from raw video taken with ML.

Currently I use MLVProducer to remove the focus pixels and convert all the frames to 16-bit TIFF in sRGB.

I process these with custom code: it converts the 16-bit integer TIFFs to floating point, does the math, and writes them back out as 16-bit integer TIFFs. The noise floor is a big issue (long story: these are low-light exposures, so most frames are very dark), so I do a lot of work to remove or work around the noise (fixed-pattern noise (FPN) removal, an adjustment curve, etc.)
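Roughly, the per-frame step looks like this (a simplified sketch using numpy and tifffile; the FPN estimate and the curve here are placeholders, not my real code):

```python
import numpy as np
import tifffile

def process_frame(path_in, path_out, fpn_map):
    # Load the 16-bit integer TIFF and promote to floating point in [0, 1].
    frame = tifffile.imread(path_in).astype(np.float64) / 65535.0

    # Subtract a fixed-pattern-noise estimate (e.g. an averaged dark frame),
    # clamping so the noise floor doesn't go negative.
    frame = np.clip(frame - fpn_map, 0.0, 1.0)

    # Placeholder for the adjustment curve and the rest of the math.
    frame = np.clip(frame ** 0.9, 0.0, 1.0)

    # Write back out as a 16-bit integer TIFF.
    tifffile.imwrite(path_out, np.round(frame * 65535.0).astype(np.uint16))
```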

Everything works fine, but I want to make sure I'm doing as much as I can in terms of the noise floor.

It occurred to me that I might be throwing away dynamic range by converting the RAW video to sRGB -- is that correct? Does this happen when MLVProducer converts RAW video to sRGB, or is that compression (or lack of it) just controlled by the sliders? In other words: is a (small) dynamic range built into the definition of the sRGB color space? (My impression is that Rec.709 does have DR built into it?)

If DR is being compressed, would it be smarter for me to instead convert the frames to some kind of log format? Then when I convert to floating point I could reverse the gamma curve and process as normal without having thrown away dynamic range?
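(For concreteness, "reverse the gamma curve" for sRGB would mean applying the standard sRGB decoding to the normalized pixel values before doing the math, and re-encoding afterwards; the transfer function itself is fixed by the sRGB spec:)

```python
import numpy as np

def srgb_to_linear(c):
    """Standard sRGB decoding: encoded values in [0, 1] -> linear light."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(x):
    """Standard sRGB encoding: linear light in [0, 1] -> encoded values."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)
```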

My main concern is that when I convert to sRGB, I'm squashing the dynamic range and thereby combining the noise floor with the lowest end of my image data. Am I crazy?

Any help is appreciated, thanks!