Can I manipulate pixels when recording video based on another input?

Started by plaurits, August 27, 2015, 01:29:55 PM


plaurits

Hi,

I have just come across magic lantern and I think it might be the solution to my problem. Can someone please help me evaluate the feasibility of this?

I'm building a 3D scanner that uses a laser and an old 550D (T2i) together with some simple triangulation to calculate the distance to objects. For this I need to turn the laser on and off and know exactly in which frames the laser is on and in which frames it is off. I could just take still images, but that is too slow and it would wear the camera out in no time, so I need video. However, I need to somehow record into the video when the laser is turned on and off.
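The triangulation mentioned above could look roughly like the following sketch. The numbers here (baseline, focal length in pixels) are purely illustrative assumptions, not measured values for a 550D, and `laser_distance` is a hypothetical helper name:

```python
def laser_distance(baseline_m, focal_px, offset_px):
    """Distance to the surface hit by the laser, via simple triangulation.

    baseline_m: distance between the laser and the camera's optical axis (metres)
    focal_px:   camera focal length expressed in pixels
    offset_px:  horizontal pixel offset of the laser dot from the image centre
    """
    return baseline_m * focal_px / offset_px

# Illustrative example: 10 cm baseline, 2900 px focal length (made-up value),
# laser dot 58 px from the image centre -> 5.0 m to the object.
print(laser_distance(0.10, 2900.0, 58.0))
```

The nearer the object, the larger the pixel offset, which is why knowing exactly which frames have the laser on matters for isolating the dot.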

So my question is: can I manipulate a pixel area in the actual video being saved, based on an input like the mic? For example, changing the colour of a 10×10 area in the top-left corner depending on input from the left or right channel of the microphone jack? Or maybe from another input, like USB?

I really appreciate any help you can provide.

dmilligan

If you can pipe your signal to the mic jack, then why not just use the recorded audio track to figure it out?
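Sketching out dmilligan's suggestion: once the audio track is extracted from the video, each frame corresponds to a fixed-length slice of samples, and thresholding the slice's amplitude gives a per-frame on/off flag. This is a minimal illustration with made-up numbers; `laser_states` is a hypothetical helper, and a real script would read the samples with something like the `wave` module after demuxing the audio:

```python
def laser_states(samples, sample_rate, fps, threshold):
    """Return a per-frame laser on/off list from a mono audio channel.

    For each video frame, take the audio slice covering that frame's
    duration and compare its mean absolute amplitude to the threshold.
    """
    per_frame = int(sample_rate / fps)
    states = []
    for start in range(0, len(samples) - per_frame + 1, per_frame):
        chunk = samples[start:start + per_frame]
        level = sum(abs(s) for s in chunk) / len(chunk)
        states.append(level > threshold)
    return states

# Synthetic example: three "frames" of audio at 6 samples per frame,
# loud / silent / loud.
fake = [0.9] * 6 + [0.0] * 6 + [0.8] * 6
print(laser_states(fake, sample_rate=180, fps=30, threshold=0.5))
```

Because the audio and video come out of the same recording, the per-frame mapping above sidesteps most of the sync worries: frame *n* simply owns samples `n*per_frame` through `(n+1)*per_frame - 1`.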

plaurits

Because I would prefer to process the recorded data as a frame stack only and not have to mess around with audio/video libraries and syncing problems and what not.

dmilligan

"Messing around" with audio libraries (which have nice, well-documented APIs and run on a computer you can easily debug on) is far easier than trying to do anything on the camera. Doing anything in camera is extremely difficult: nothing is documented, everything requires reverse engineering, debugging is extremely difficult, everything is real-time, etc. If you can avoid doing stuff in camera, you certainly should.

You will have to deal with exactly the same issues you speak of anyway ("mess around with audio/video libraries, syncing problems, and what not"), except they will all be far more difficult to deal with, because you have no good API documentation and you're running on a real-time embedded computer with very limited debugging capabilities.

plaurits

I was just naively hoping that something similar to what I was describing already existed for Magic Lantern. I don't know, something like a stereo VU meter saved directly into the video stream. Since that doesn't seem to be the case, then of course you are right.