
Messages - Tom C. Hall

#1
General Help Q&A / Sync'ed timelapse
November 04, 2014, 11:01:59 PM
I have two 60Ds that I'd like to do synced timelapses with. If I let them both run, they drift over time. Is there a way I can run the shutter port from one to the other, so that one camera runs the intervalometer itself and the second just fires in step?
#2
Quote from: g3gg0 on November 08, 2013, 12:49:50 PM
the main problem I see is to measure the delay of the audio driver etc. (DMA, I2S, DAC and the other way respectively ADC, I2S, DMA).
I don't know how to *reliably* quantify the delay in every single step.
we barely know how they work, and getting a clear picture of all the delays would take about two weeks, I guess.
plus we would need measurement equipment (e.g. a PicoScope)

when knowing all this, all we can do with software and the current understanding would be some kind of soft sync, which is not a real genlock.
soft sync means we measure how much we are off the source clock and try to correct the delays so the exposure start is close to the audio trigger.
still, the real exposure start can also jitter, and we don't know how much that is, again.

audio signal --> ADC --> I2S --> DMA --> software --> exposure timer --> software --> exposure start

and every single element in that queue is probably running asynchronously with unknown delays and jitter.

If the delays are constant, it would be pretty straightforward to dial in the offset, maybe with a menu function. The trick I used to test sync is a swinging weight on a string: the difference in timing is very easy to see, because one camera will see the weight going one way while the other camera sees it after it has changed direction and is going the other way.

Would something like a PID controller work for calculating the difference? It also might not be necessary to run it over sound; what about through the shutter port?
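
To make the PID idea concrete, here is a rough sketch in plain C of how such a loop might trim the slave camera's trigger offset toward the measured drift. None of this touches Magic Lantern's actual code; the gains, the one-update-per-frame timing and the fake 800 µs drift are all made-up numbers for illustration.

#include <stdio.h>

/* Hypothetical sketch: a PID loop that trims the slave camera's trigger
 * offset so the measured drift (slave start minus master start, in
 * microseconds) is driven toward zero.  Plain C, no Magic Lantern APIs. */

typedef struct {
    double kp, ki, kd;     /* gains; would need tuning on real hardware */
    double integral;       /* accumulated error                          */
    double prev_error;     /* error from the previous update             */
} drift_pid;

static double pid_update(drift_pid *pid, double error, double dt)
{
    pid->integral += error * dt;
    double derivative = (error - pid->prev_error) / dt;
    pid->prev_error = error;
    return pid->kp * error + pid->ki * pid->integral + pid->kd * derivative;
}

int main(void)
{
    drift_pid pid = { .kp = 0.5, .ki = 0.1, .kd = 0.05 };
    double offset_us = 0.0;        /* correction applied to the slave trigger */
    double true_drift_us = 800.0;  /* pretend the slave starts 800 us late    */

    for (int frame = 0; frame < 20; frame++) {
        double measured_error = true_drift_us - offset_us;   /* what we observe   */
        offset_us += pid_update(&pid, measured_error, 1.0);  /* dt = one frame    */
        printf("frame %2d: error %7.1f us, offset %7.1f us\n",
               frame, measured_error, offset_us);
    }
    return 0;
}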
#3
Duplicate Questions / Re: Magic Lantern and 3D
December 13, 2013, 03:18:49 AM
I think they are writing some kind of paper on it. I'm waiting to hear more.
#4
Quote from: g3gg0 on November 08, 2013, 12:49:50 PM
the main problem I see is to measure the delay of the audio driver etc. (DMA, I2S, DAC and the other way respectively ADC, I2S, DMA).
I don't know how to *reliably* quantify the delay in every single step.
we barely know how they work, and getting a clear picture of all the delays would take about two weeks, I guess.
plus we would need measurement equipment (e.g. a PicoScope)

when knowing all this, all we can do with software and the current understanding would be some kind of soft sync, which is not a real genlock.
soft sync means we measure how much we are off the source clock and try to correct the delays so the exposure start is close to the audio trigger.
still, the real exposure start can also jitter, and we don't know how much that is, again.

audio signal --> ADC --> I2S --> DMA --> software --> exposure timer --> software --> exposure start

and every single element in that queue is probably running asynchronously with unknown delays and jitter.

Soft sync sounds much better than the turn-it-over-and-hope approach we have now. Unfortunately I have been very busy lately, so I haven't made much progress. I am interested to hear what the ECUAD people have done, Patryk.
#5
Duplicate Questions / Re: Magic Lantern and 3D
November 15, 2013, 11:56:00 PM
Quote from: S3Dcentre on November 06, 2013, 11:56:01 PM
We have had development of this feature over the past months here at the S3Dcentre. Once we have been able to confirm this data as a reliable workflow, we will be publishing our results on the forum with the developers of ML and on our website for the stereoscopic cinema community.

Get in touch with us, Patryk.

What method are you using for genlock? I was beginning to embark on my own side project but I'd definitely like to hear about what you guys have done.
#6
Duplicate Questions / EXT Master Clock
October 30, 2013, 09:12:46 PM
I have been wanting to use Canon EOS cameras with genlock for a while now. Being able to use them cheaply for Stereo 3D as well as 360 video panorama, lightfield research, video stitching, time slice, HDR video and other odd applications could be revolutionary.

In order to scan continuously in perfect sync, both cameras require an external reference clock. While I have done plenty of tests with the dual photo trigger method, it's unusable for anything more than slow-moving subjects. Using audio input on a single channel, you could feed the cameras a square wave at the appropriate Hz (the falling edge being the trigger point for the clock). Signals for other frame rates could also be easily generated. I found this to generate short test tones.
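
As a rough illustration of what such a test tone could look like, here is a small stand-alone C sketch that writes a 10-second mono 16-bit 48 kHz WAV containing a 23.976 Hz square wave. The file name, level and duration are arbitrary choices, and the byte layout assumes a little-endian machine.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: write a mono 16-bit 48 kHz WAV containing a
 * 23.976 Hz square wave whose falling edge could serve as the frame
 * trigger.  Header fields are written with fwrite of native integers,
 * so a little-endian host is assumed. */

#define SAMPLE_RATE 48000
#define SECONDS     10
#define FRAME_HZ    (24000.0 / 1001.0)   /* 23.976... Hz */

static void write_le32(FILE *f, uint32_t v) { fwrite(&v, 4, 1, f); }
static void write_le16(FILE *f, uint16_t v) { fwrite(&v, 2, 1, f); }

int main(void)
{
    const uint32_t n_samples  = SAMPLE_RATE * SECONDS;
    const uint32_t data_bytes = n_samples * 2;
    FILE *f = fopen("sync_23976.wav", "wb");
    if (!f) return 1;

    /* Minimal RIFF/WAVE header for PCM, mono, 16-bit. */
    fwrite("RIFF", 1, 4, f); write_le32(f, 36 + data_bytes);
    fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f); write_le32(f, 16);
    write_le16(f, 1);                 /* PCM              */
    write_le16(f, 1);                 /* mono             */
    write_le32(f, SAMPLE_RATE);
    write_le32(f, SAMPLE_RATE * 2);   /* byte rate        */
    write_le16(f, 2);                 /* block align      */
    write_le16(f, 16);                /* bits per sample  */
    fwrite("data", 1, 4, f); write_le32(f, data_bytes);

    for (uint32_t i = 0; i < n_samples; i++) {
        double phase = i * FRAME_HZ / SAMPLE_RATE;
        phase -= (uint32_t)phase;                    /* fractional part of the cycle */
        int16_t s = (phase < 0.5) ? 20000 : -20000;  /* high half, then low half:    */
        fwrite(&s, 2, 1, f);                         /* the drop is the trigger      */
    }
    fclose(f);
    return 0;
}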

The camera listening to this audio input would only trigger photos or start video recording in time with the pulse. For video, the camera could be trusted to run an arbitrary number of frames (say 24) before waiting and rechecking the pulse during a closed shutter. I'm not sure if that is possible without some very complicated timer0/timer1 mathematics, or whether it would just become a stop/record/new-video-file mess. With this, in theory, the cameras' sensor phase should be very close. It may be necessary to include very fine phase adjustments anyway, to account for lags from sending the signal through DAs, wireless sound or very long cables. Adjusting the phase would be very easy by just viewing a strobe and comparing the rolling shutters.
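
A toy sketch of the edge detection the camera would need is below: it scans a buffer of signed 16-bit samples for the first high-to-low transition, with a small hysteresis band so noise around zero doesn't fire false triggers. In a real port the buffer would have to come from the camera's audio path, which this sketch doesn't attempt; the thresholds and the fake buffer are made up.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: find the falling edge of the incoming square wave in
 * a buffer of signed 16-bit audio samples.  The hysteresis band between
 * HIGH_THRESH and LOW_THRESH keeps noise from producing false triggers. */

#define HIGH_THRESH   8000
#define LOW_THRESH   -8000

/* Returns the sample index of the first high-to-low transition, or -1. */
static int find_falling_edge(const int16_t *buf, int n)
{
    int was_high = 0;
    for (int i = 0; i < n; i++) {
        if (buf[i] > HIGH_THRESH)
            was_high = 1;
        else if (was_high && buf[i] < LOW_THRESH)
            return i;               /* the signal dropped: this is the trigger */
    }
    return -1;
}

int main(void)
{
    /* Fake buffer: high for six samples, then low. */
    int16_t buf[] = { 20000, 20000, 20000, 20000, 20000, 20000,
                      -20000, -20000, -20000, -20000 };
    int edge = find_falling_edge(buf, (int)(sizeof buf / sizeof buf[0]));
    printf("falling edge at sample %d\n", edge);   /* prints 6 */
    return 0;
}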


  • External clock source via audio input
  • Switchable sync modes: NTSC 23.976 Hz / 29.97 Hz, PAL 25 Hz
  • Per-camera phase adjustment
  • Video and timelapse intervalometer slaving
  • A more advanced function would have the headphone jack generating a sync signal

An interesting application would be to slave the intervalometer to this sync for locked timelapse photography when the cameras are too far apart to be tethered, for photographing very time-sensitive moments, and for multiple focal lengths or positions.

I am not a programmer in the slightest, just a camera technician and stereographer, but I have some programmer friends and am trying to wrap my head around how these cameras work. I am still reading heavily into the Magic Lantern wiki to pull out as much understanding as I can and approach this in the most intelligent way.
#7
I was thinking of alternatives to running the pulse for every frame. Relying on a pulse per frame could make things difficult: in the case of a power failure, the camera may behave in unexpected ways. I know from experience shooting stereo with DSLRs that they drift a whole frame pretty consistently over 5 minutes. By pure guesswork, that makes me feel like the first 24 frames, if started from the same start signal, would be for all practical purposes locked.

Would it be possible to wait and check an external audio clock for its start pulse, then recheck every 24 frames by holding and waiting for the next pulse to drop?

Of course that limits the option to speed ramp or use alternate frame rates (other than multiples of 23.976 or 29.97), but that's also how it works on a system like the Red Epic. I'm also reading about hardware options to convert a standard genlock signal into something that can go into a mic-level input. That way, off-the-shelf genlock devices could lock the cameras. Of course, just playing an MP3 from an iPhone is also a very appealing aspect.
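
Here is a minimal sketch of the recheck-every-24-frames loop described above. wait_for_falling_edge() and capture_frame() are just stand-ins for whatever the camera side would actually provide, stubbed out so the example builds; nothing here is a real Magic Lantern call.

#include <stdbool.h>
#include <stdio.h>

#define FRAMES_PER_SYNC 24

/* Placeholders, not real Magic Lantern functions; stubbed so the sketch runs. */
static bool wait_for_falling_edge(void) { return true; }   /* would block on the pulse */
static void capture_frame(void)         { }                /* would expose one frame   */

static void synced_record_loop(int groups)
{
    for (int g = 0; g < groups; g++) {
        /* Hold with the shutter closed until the next pulse drops, so every
         * group of 24 frames starts on an edge shared by all cameras. */
        if (!wait_for_falling_edge())
            break;

        for (int i = 0; i < FRAMES_PER_SYNC; i++)
            capture_frame();          /* free-run for 24 frames, then re-sync */

        printf("group %d: %d frames captured\n", g + 1, FRAMES_PER_SYNC);
    }
}

int main(void)
{
    synced_record_loop(3);   /* simulate three sync groups */
    return 0;
}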

I am not a programmer myself, but I am trying to read code and convince my programmer friends to help me with this project. I have access to half a dozen professional 3D beamsplitters at any given moment, but getting two Epics or Alexas for low-budget projects is always more trouble than it's worth. Being able to shoot with DSLRs would really open up 3D to the low-budget market.
#8
Duplicate Questions / Re: Magic Lantern and 3D
October 29, 2013, 07:45:55 PM
The way I suggested doing it would be via audio input, as a sort of genlock pacemaker that the camera would record frames against. I am not a programmer, so I am still not sure how to implement it. I've read there are plenty of ways to adjust the movie recording timing, but nothing about controlling when the frames are actually fired by the camera.

http://www.magiclantern.fm/forum/index.php?topic=9069.0
#9
Another idea I had today would be to have a function, if the camera has a headphone jack, to play this sync signal out as a master clock, with the other cameras as receivers. There could also be a function to adjust the timing on this signal input, to fine-tune the delay and get perfect sync.
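
The fine-tune delay could be as simple as a signed offset applied between the detected edge and arming the exposure, something like the sketch below. delay_us() and arm_exposure() are hypothetical placeholders (stubbed so the example runs), and handling a negative offset properly would really mean predicting the next edge and firing early.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical sketch of the fine-tune delay idea: a signed phase offset in
 * microseconds, set from a menu, applied between the detected sync edge and
 * arming the exposure.  The two helpers are placeholders, not real calls. */

static void delay_us(int64_t us) { printf("waiting %lld us\n", (long long)us); }
static void arm_exposure(void)   { printf("exposure armed\n"); }

static int64_t phase_offset_us = 350;   /* user-set; positive fires later than the edge */

/* Called whenever the falling edge of the sync tone is detected. */
static void on_sync_edge_detected(void)
{
    if (phase_offset_us > 0)
        delay_us(phase_offset_us);
    /* A negative offset is glossed over here: it would require predicting
     * the next edge and starting the exposure ahead of it. */
    arm_exposure();
}

int main(void)
{
    on_sync_edge_detected();
    return 0;
}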
#10
Would it be possible for the audio input frequency to be used for live FPS override? Kind of like a sync pulse generator, but over sound instead of an electrical input.
http://en.wikipedia.org/wiki/Video-signal_generator#Sync_pulse_generators_.28SPG.29

A sound file patterned off the tri-level sync wave would be pretty easy to generate and then play back as an MP3.


The possibilities would be interesting, such as 3D sync or any sort of multi-cam shooting, as well as ramping the wave to go from very low FPS to the highest FPS possible. Using any live pitch-adjustment tool, you could speed ramp the FPS in real time. It would be like ramping the motor on an old ARRIFLEX S for sudden slow motion; the movie Hot Fuzz used that trick a lot in the final shootout.
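
One rough way to turn the tone into a live frame rate: measure the spacing between successive falling edges and smooth it a little so a pitch ramp doesn't jitter the rate. The sketch below works in audio samples at an assumed 48 kHz and never touches the camera's real FPS-override timers; the edge positions and smoothing factor are made up for illustration.

#include <stdio.h>

/* Hypothetical sketch: derive a live FPS value from the spacing of detected
 * sync edges, so ramping the tone's pitch would ramp the frame rate.  Edge
 * positions are sample indices at 48 kHz; light exponential smoothing keeps
 * a pitch ramp from thrashing the result. */

#define SAMPLE_RATE 48000.0

static double smoothed_fps = 0.0;

/* Call with the sample indices of two consecutive falling edges. */
static double update_fps(long prev_edge, long cur_edge)
{
    double period_samples = (double)(cur_edge - prev_edge);
    double fps = SAMPLE_RATE / period_samples;

    if (smoothed_fps == 0.0)
        smoothed_fps = fps;                              /* first measurement   */
    else
        smoothed_fps = 0.8 * smoothed_fps + 0.2 * fps;   /* smooth small jitter */
    return smoothed_fps;
}

int main(void)
{
    /* Fake edges ramping from 24 fps (2000-sample period) down toward 12 fps. */
    long edges[] = { 0, 2000, 4000, 6200, 8600, 11400, 14800, 18800 };
    for (int i = 1; i < (int)(sizeof edges / sizeof edges[0]); i++)
        printf("edge %d -> about %.2f fps\n", i, update_fps(edges[i - 1], edges[i]));
    return 0;
}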

It could be played back from any sort of audio device, like an iPhone, or sent wirelessly via normal audio transmission.

Of course, it would disable any sound recording. Or maybe it would just use one side of the stereo input, and the second stereo channel would still be open to normal audio. It would also have to be limited to the frame rates the cameras are capable of.