Topics - Jonneh

#1
Through a strange (and wonderful) turn of events, I've ended up with two 5D3s in my hands, and I plan to have some fun with them for an upcoming short film. Things I'd like to play with include high dynamic range capture, stitching for double resolution, fusing lens effects, and fusing different parts of the EM spectrum. Most of these require close to pixel-level alignment of objects in the frame for good results. In this context, I'm not too worried about parallax error, since most scenes would be shot at long distance, but inter-frame synchronisation between the two cameras is going to be an obstacle.

Based on examples I've seen (e.g. https://joancharmant.com/blog/sub-frame-accurate-synchronization-of-two-usb-cameras/), I think I'll need sub-millisecond synchronisation to get acceptable alignment and avoid excessive loss of detail/double images. At 25fps the frame period is 40ms, so with an effectively random start of recording I can expect anywhere from 0 to 20ms of offset to the nearest frame. Launching randomly (that is, random with respect to this temporal resolution), I reckon I would need about 50 attempts to have a 90% chance of happening upon a sub-millisecond inter-frame difference between the two cameras' streams, which is not practical.
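
To make that estimate explicit, here is a minimal sketch of the arithmetic; it assumes the offset to the nearest frame boundary is uniformly distributed over half the 40ms frame period and that attempts are independent:

```python
import math

# Assumed model (mine, not from the linked article): at 25 fps the frame
# period is 40 ms, and the offset to the nearest frame boundary is taken
# to be uniform on [0, 20] ms.
frame_period_ms = 1000 / 25          # 40 ms
max_offset_ms = frame_period_ms / 2  # 20 ms
target_offset_ms = 1.0               # sub-millisecond goal

# Probability that a single random start lands within the target offset.
p_hit = target_offset_ms / max_offset_ms  # 0.05

# Attempts needed for a 90% chance of at least one hit:
# 1 - (1 - p)^n >= 0.9  =>  n >= log(0.1) / log(1 - p)
attempts = math.ceil(math.log(0.1) / math.log(1 - p_hit))
print(attempts)  # 45, consistent with "about 50 attempts"
```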

Two questions then arise: 1) how I can know whether I've achieved this, and 2) how this hit rate can be improved upon.

For 1), I can use the process described in the above link: set up a strobe synced to the frame rate and play with the duration such that I can align the resulting banding between the two cameras.
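
As a back-of-the-envelope illustration of how the banding translates into a time offset (my own sketch, not taken from the article, and assuming the rolling shutter reads the sensor top to bottom at a constant rate), the vertical position of the strobe band in each camera's frame can be converted to a time within the readout and the two compared:

```python
def offset_from_bands(row_cam_a, row_cam_b, total_rows, readout_time_ms):
    """Estimate the inter-camera offset from the vertical position of the
    strobe band in each frame, assuming a rolling shutter that sweeps the
    sensor top to bottom at a constant rate over readout_time_ms."""
    time_a = row_cam_a / total_rows * readout_time_ms
    time_b = row_cam_b / total_rows * readout_time_ms
    return time_b - time_a  # positive means camera B lags camera A

# Hypothetical numbers: 1080 active rows, ~30ms readout, bands 20 rows apart.
print(offset_from_bands(500, 520, 1080, 30.0))  # ~0.56ms
```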

For 2), does anyone have any ideas about which of ML's features could be leveraged to get close to sub-millisecond synchronisation, such that I might only need a few attempts to get a very small gap between the two streams? I'm wondering whether the pre-record option in the RAW video menu, combined with the recording trigger option, might ready the buffer such that, on launching recording with a Y-split remote release cable or similar, the latency would be low enough to consistently give low single-digit millisecond offsets, from which point I could simply retrigger manually until I get something acceptable.

P.S. I'm optimistically assuming that there won't be any precession (drift) of one stream relative to the other over time, but I have no idea whether this will indeed be the case.
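
If anyone knows how well the two cameras' frame clocks hold together, I'd be glad to hear; even a small mismatch adds up over a take. A rough illustration, with the oscillator mismatch in parts per million being purely hypothetical:

```python
# Illustrative only: if the two frame clocks differ by mismatch_ppm,
# the inter-camera offset grows by that fraction of the elapsed time.
def drift_ms(mismatch_ppm, take_length_s):
    return mismatch_ppm * 1e-6 * take_length_s * 1000

# A hypothetical 10 ppm mismatch would eat a 1ms budget in about 100s.
print(drift_ms(10, 100))  # 1.0
```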
#2
Raw Video / Post-hoc dark frame generation
May 26, 2021, 07:05:24 AM
I have seen good results from dark-frame subtraction (in MLV App) whether the dark frames were generated immediately before or after the clip being processed, or some significant time afterwards, so this is something of a theoretical aside: what factors determine the pattern of noise on the sensor, such that generating dark frames under different conditions from those of the clip of interest would produce suboptimal results?

I imagine the area of the sensor being sampled (resolution and centering), shutter speed, and ISO will be the most important factors (all possible to reproduce at any time), but what role do factors such as sensor temperature, or long-term changes in the noise pattern itself, play, such that long-after-the-fact dark-frame generation and subtraction might produce suboptimal results?
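
For reference, my mental model of the operation is just an averaged master dark subtracted from each clip frame, roughly as sketched below; this is my own simplification, not necessarily what MLV App does internally, and it ignores black-level handling and hot-pixel treatment:

```python
import numpy as np

def subtract_master_dark(clip_frames, dark_frames):
    """Average the dark frames into a master dark and subtract it from every
    clip frame. Both inputs are raw sensor data with shape
    (n_frames, height, width); black-level handling is ignored here."""
    master_dark = dark_frames.astype(np.float64).mean(axis=0)
    corrected = clip_frames.astype(np.float64) - master_dark
    return np.clip(corrected, 0, None)  # keep values non-negative
```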