Silent Pictures with the EOS M

Started by Meloware, February 22, 2017, 03:14:23 AM


Walter Schulz

Thanks, Daniel!
Will test it ASAP and report back.

Walter Schulz

@a1ex: Quick question ... should it look like this on 7D and 650D after power on?
No settings changed, just copied build to bootable card and started up in photo mode.
-> Flickering sound meters at bottom.

Download (sorry for the language mixup)

EDIT: Visible in all ML menus in photo mode, GD off/on doesn't matter (restart applied).

a1ex

audio_meters_are_drawn_common() always returns 1 (true), so the audio meters are always displayed after this change.

They may fight with other GUI elements when they are not expected.

Walter Schulz

Thanks!

First test with 650D and 7D:
FRSP with Audio RemoteShot working.

Meloware

Walter Schulz:
Quote from: Walter Schulz
First test with 650D and 7D:
FRSP with Audio RemoteShot working.
Is there a build for the 650D which implements forced LiveView (with audio trigger)? Did I miss this?

I am trying to determine the status of dfort's Forced LiveView modifications. I am currently using dfort's
magiclantern-Meloware.2017Mar03.EOSM202 build.

Is the modified source for this build available?
Are these features going to make it into the unified branch?


This build fixed the problem of the camera dropping into standby (while in FRSP) from the timer set by the LCD display powersave value. I have found that the same problem exists with Silent Pictures in Simple mode.

Is it easy to prevent the powersave timeout in Silent/Simple resolution as well?

Currently, with the EOS M, the total exposure cycle for FRSP (recording to MLV) is about 6.5 seconds. I'd love to have this faster, but it is actually similar to the cycle time of my 650D and a true remote half-shutter trigger. The Simple resolution (with the EOS M and audio trigger) cycle time is about 2.6 seconds. It is harder to measure with the 650D, but it may be faster. I realize the memory chips used may affect this, but both of these cameras only seem to support first generation SD cards.

Is there any likely way of improving this cycle (or write) time, in the near future?

The write cycle times for these SD cards increase as the memory fills. Without access to a "camera ready" signal, the worst case has to be guessed at and set ahead of time. I had hoped to re-task the flash shoe as a "camera ready" status output, but any flash-related code I can find is intimately tied to the shutter operation.

Is there any known possibility of forcing a flash event, without involving the shutter?

The only other possibility I can see is generating some kind of flashing semaphore with the status LED and detecting it with a photocell and external electronics.

The Lua scripting has a well-documented API. I have not seen any documentation for an API which directly accesses the Canon functions (presumably what would be needed when writing modules).

Could you please tell me where the Canon functions are defined? Are they all known?

My camera had to sit on the shelf for two years, before I found my needed functionality in Magic Lantern. Thank you for the support and I hope to see great stuff continue to be developed.

dfort

Quite a few questions. I can answer some.

Quote from: Meloware on March 11, 2017, 12:26:15 AM
...a build, for the 650D, which implements Forced LiveView (with audio trigger)? Did I miss this?

I am trying to determine the status of  dfort's Forced LiveView modifications. I am currently using dfort's
magiclantern-Meloware.2017Mar03.EOSM202 build.

Is the modified source for this build available?
Are these features going to make it into the unified branch?


We are running a test for a1ex to see if the camera can still "listen" while in paused LiveView mode. So far it is looking good. I experimented with substituting a call to SoundDevActiveIn for force_liveview, but haven't found the right way to do this yet.

For now I'd recommend keeping the build I made with force_liveview for your film restoration project, even though it probably takes a bit longer before the camera is ready for the next exposure.

The source I'm working on is here: https://bitbucket.org/daniel_fort/magic-lantern/branch/silent_FRSP_fix

This needs some more tinkering with the code before it is ready for a pull request and then tested under various conditions before merging into unified.

a1ex

Cycle time: your best bet is to record raw video at some low frame rate - this method is already well optimized for file writing speed. If your hardware cannot work with exact timing (which would allow some open-loop sync), you can probably discard some frames in post, but you'll need to find the transitions between frames (basically some sort of clock recovery from the video stream). An advantage of this method is increased dynamic range (I don't know if you need that, but since you asked about flash, I assume you do): if you average multiple digitally recorded frames of the same static scene - in your case, one film frame - you get one output frame with less noise, therefore more DR.

There is even hardware support for adding video frames in Canon's image processor (therefore, averaging a few LiveView frames to save a single frame can probably be done) - see EekoAddRawPath.
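The averaging idea is easy to sketch numerically. Below is a toy host-side simulation in plain Python (not ML code; the pixel values and noise level are invented for illustration): averaging N captures of the same static film frame reduces the noise by roughly sqrt(N), which is where the extra dynamic range comes from.

```python
import random

def capture_frame(true_value=1000.0, noise=50.0, n_pixels=500):
    """Simulate one noisy LiveView capture of a static scene."""
    return [random.gauss(true_value, noise) for _ in range(n_pixels)]

def average_frames(frames):
    """Per-pixel average of several captures of the same scene."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def noise_std(pixels, mean=1000.0):
    """Estimate the noise around the known true pixel value."""
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5

random.seed(1)
single = noise_std(capture_frame())
averaged = noise_std(average_frames([capture_frame() for _ in range(16)]))
print(single / averaged)  # roughly sqrt(16) = 4, i.e. about 2 stops less noise
```

With hardware support like EekoAddRawPath the summing could happen inside the image processor rather than in post, but the arithmetic is the same.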

I'm also considering a modification to raw recording to save only those frames where half-shutter is pressed (or even only one frame on half-shutter transition).

Flash event: no idea how to trigger it without taking a picture. This may help, but I'm afraid the decision appears to be taken by the MPU (the secondary processor that controls a lot of I/O), probably to sync it with shutter motion. There might be some debug commands, but currently the MPU firmware is not very well understood.

Where are Canon functions defined? The sticky topics on the Reverse Engineering board should answer this; however, it's probably best to get started with something easier.

Meloware

a1ex: thank you, and dfort, for your patience with my somewhat ignorant questions. I very much appreciate your efforts to solidify the use of LiveView and audio triggers; I didn't fully realize the recent posts on this thread were working on just that.

I am not sure you understood my context in mentioning "cycle time" and the use of a flash shoe without also triggering the shutter. My work with these Canon cameras and ML is to produce a poor man's movie film transfer and digital restoration system. ML allows budget-minded, professional-quality transfer, and Blender's compositor feature allows an amazing set of post-production tools before the result is sent for final rendering in Avidemux.

A ten-minute sound film requires 14,400 individual frames to be captured in the camera. This number of cycles would destroy the shutter of my 650D, and to my horror, the mirrorless EOS M still employs mechanical components which would quickly wear out under this kind of use. My only option, with these cameras, is ML and the Silent Pictures feature.

Although I am recording raw video, the system is synchronized to only take one exposure for each frame of film.

With so many frames, it takes days of frame-by-frame image capture to digitize an average movie. My definition of "cycle time" is the interval required for the camera to capture an image, save it, and then recover and be ready for the next frame. The MLV_Rec feature is great and allows frame capture at a rate superior to saving individual DNG images. My film capture system would still benefit greatly from any improvement in the time it takes to save images.

The flash shoe question was related to my quest to find a way for the camera to indicate to my external equipment that it is ready for the next exposure. As an SD card fills, it takes longer and longer to save each successive frame. Right now, my system has to "guess" when the camera is ready before it initiates an exposure. This worst-case guess is what defines my cycle time for each frame. If I could generate a machine-readable "yes, I'm ready" signal, that would be my trigger to initiate another exposure (rather than using a worst-case dumb timer). I had hoped I might use the flash shoe switch closure to signal this. If that use isn't possible, my only other option would be to flicker the camera's status LED in a machine-readable pattern and have a photocell or photodiode sense the event and signal my equipment.
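The cost of a worst-case dumb timer versus a true "camera ready" handshake is easy to put numbers on. A toy Python sketch (the save times below are invented for illustration, not measured):

```python
def transfer_time(n_frames, save_times, worst_case=None):
    """Total capture time for a film: either wait a fixed worst-case
    interval per frame (dumb timer), or wait exactly as long as each
    save actually takes (a 'camera ready' handshake)."""
    if worst_case is not None:
        return worst_case * n_frames        # dumb timer
    return sum(save_times[:n_frames])       # ready-signal handshake

# Invented numbers: saves slow from 2.0 s to 2.6 s as the card fills.
n = 14400                                   # one 10-minute film at 24 fps
saves = [2.0 + 0.6 * i / n for i in range(n)]
dumb = transfer_time(n, saves, worst_case=2.7)  # timer must cover the worst save
smart = transfer_time(n, saves)
print(round((dumb - smart) / 3600, 1))      # hours saved per film: 1.6
```

Over 14,400 frames, even a few tenths of a second of worst-case padding per frame adds up to hours per film.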

Between you and dfort, I appreciate you addressing most of my earlier questions. I will wait now for you guys to have something to test.

The only remaining issue from my earlier post is the problem of preventing my EOS M from dropping into power save standby when recording in Silent Pictures/Simple resolution. This mode would be very useful and fast for 8mm capture; Simple resolution may be about all standard 8mm needs, given its very small frame area. It seems that the camera drops into power save standby as defined by the LCD timeout value. Could this be overridden when using the camera in forced LiveView? Keep your priorities, but please also consider this one in due time.

a1ex

This mod might be useful as well: http://www.magiclantern.fm/forum/index.php?topic=17069.msg181236#msg181236
(half-shutter trigger for mlv_lite; experimental build already available). You should be able to simply connect your trigger signal to half-shutter, and mlv_lite will record one frame on every half-shutter transition (with buffering, fast write speeds etc). In mlv_lite, powersave is not an issue.

To get a machine-readable signal that the camera is ready, you can probably use the display backlight, the card LED or a beep. This may require minor changes to the silent picture code. Not sure if there is any usable signal on the external monitor cables - it's pretty easy to switch the display between external and internal monitors from code, so if you can get an electrical signal out of that, it might be a good option.

On old cameras there is a blue LED, which is pretty strong and not used by Canon firmware under normal conditions. That might be an option as well.

It's also possible to interface the camera with an Arduino over USB.

dfort

Those are some good suggestions -- for other cameras. The EOSM is severely limited when it comes to remote triggering: there is no jack to plug in a remote, and I don't think there has been much success using the USB port. There is the IR remote, but most users seem to be using AUDIO_REMOTE_SHOT.

According to @Meloware it takes longer to save an image as the card fills up so perhaps the camera should be the master and the film scanner the slave? Not sure about the EOSM having a blue LED but maybe there is a way to turn the focus light on and off? I got this idea from a movie I worked on years ago where they used a laser pointer synced to the camera's shutter so that they could get the actors' eye line to follow a CGI character that obviously wasn't there during the shoot.

I have been experimenting with SoundDevActiveIn but haven't been successful. The only thing that is working for me is force_liveview. I was thinking of creating a menu option for Full-res that always goes to LiveView instead of leaving the camera in QR or paused LV.

Maybe call it "Full-res with LiveView" ?

Meloware

I am beginning to think about building other render stations for some of the 8mm formats. My controller was built out of my TTL parts drawer, and it proves how far behind the times I am. A Raspberry Pi (or Arduino, as a1ex suggested) would be far easier to use these days, and I would be able to publish a design which would be easier for everyone to duplicate. All the clock, sense, and control signals could be handled by the Pi, which could even support different modes of operation. The only other components would be the projector's optical shutter position sensor, the stepping motor, the $12 Chinese stepping motor controller, and a power supply.

A big question is whether the camera can be made to send ML control messages out of its USB port. I agree with dfort that the camera needs to be the master in this system. Everything else can wait until the camera is ready, but without a "camera ready" signal, the system has to rely on a worst-case dumb timer. Has anybody been able to make any use of the camera's USB port? All of the cameras in the ML family have USB, right? This has some real potential. Please speak up if you have experience accessing the camera's USB interface. I am a long way from trying to tackle this myself.

As an alternative, the code-pulsed LED is awkward and primitive, but doable with known ML function calls. A Raspberry Pi could help here, too, but a true USB connection would obviously be superior. The IR remote is only an input, right? Even if it were used as an exposure trigger, it still wouldn't serve as a "camera ready" status signal.
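On the controller side, decoding a code-pulsed LED from a photocell comes down to simple edge counting. A minimal Python sketch of the idea, as it might run on a Raspberry Pi (the threshold, trace values, and "three blinks = ready" convention are all hypothetical):

```python
def count_pulses(samples, threshold=0.5):
    """Count rising edges in a photocell sample stream:
    each LED off->on transition above the threshold is one pulse."""
    pulses, was_on = 0, False
    for s in samples:
        on = s > threshold
        if on and not was_on:
            pulses += 1
        was_on = on
    return pulses

# Hypothetical trace: three blinks of the status LED mean "camera ready".
trace = [0.1, 0.9, 0.9, 0.1, 0.8, 0.1, 0.1, 0.95, 0.2]
if count_pulses(trace) == 3:
    print("camera ready - trigger next exposure")
```

Using a multi-pulse code (rather than a single flash) makes the signal distinguishable from the LED's ordinary card-activity blinking.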

In the meantime, thanks for the continued effort on the LiveView issue.

dfort

Ok -- here we go:



This works with audio remote. It is also great if you want to keep an eye on the intervalometer counter. Getting this working was challenging, and it could probably use some finessing, but it works, so I'm putting up an EOSM test build on my download site:

https://bitbucket.org/daniel_fort/magic-lantern/downloads/

Source code is here:

https://bitbucket.org/daniel_fort/magic-lantern/branch/silent_FRSP_fix

I'll make a pull request once I test it out on the 700D and 5D3.

a1ex

What about one of these?

- add an option that would be valid only when full-res is selected (similar to slit-scan mode)
- listen to Powersave in LiveView setting (avoids YAMLMO, but may be less intuitive and increases coupling)

In both cases, you avoid the need to check for both full-res modes.

We can also hide the options that are not valid in the current mode, rather than just graying them out.

Also, it's probably best to keep the image review delay as configured in the Canon menu. However, that's a bit tricky, as the delayed_call routine runs from a timer interrupt rather than a regular task. That means checking whether we are in LiveView and sending a fake keypress are OK, but waiting for the action to finish, or any kind of delay (msleep, waiting on semaphores, or long computation), is not.

So, you'll need a routine similar to display_off_if_qr_mode() - except the actual switch to LiveView would probably have to be delegated to silent_pic_polling_cbr. I somehow doubt "pressing" the LiveView button will cause Canon firmware to exit the paused LiveView mode, but it might work for QuickReview (so I'd expect this method - sending a LiveView press from the timer interrupt - to work only with image review enabled). The other one - delegating the LV switching action to the polling CBR - should work in both modes.

dfort

Had to Google YAMLMO - Yet Another Magic Lantern Menu Option.

I'll look into your suggestions. I tested it with various image review settings in the Canon menu and it works fine with all of them.

Meloware

This works great for my purposes! Using my EOS M, dfort's build, and a SanDisk Ultra HC1 SD memory:
Full Resolution cycle time (from audio trigger to ready for next exposure) is about 6.47 seconds (saved to MLV video file).
Simple Resolution cycle time is about 2.7 seconds.

Theoretically, this means a ten-minute sound film can be copied in about 26 hours (at full resolution). (Don't gasp! That is actually practical when expensive equipment isn't tied up doing the work.)
8mm movie frames are much lower resolution, and those transfers may not benefit from FRSP. A Silent/Simple resolution transfer will be about 60% faster than FRSP.
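For reference, the arithmetic behind those figures, using the cycle times measured above (plain Python):

```python
frames = 10 * 60 * 24                 # 10-minute sound film at 24 fps = 14,400
frsp_cycle, simple_cycle = 6.47, 2.7  # measured cycle times in seconds

frsp_hours = frames * frsp_cycle / 3600
print(round(frsp_hours))                        # 26 hours at full resolution
print(round(1 - simple_cycle / frsp_cycle, 2))  # 0.58 - Simple is ~60% faster
```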

With Silent/Simple mode (and LiveView), the camera is no longer dropping into standby, based on the LCD powersave timer. This can't be an accident! You fixed that, as well, right?

I'd like to develop this application to the point where I can define it and publish it. With these features, ML can now offer an important, affordable way for historical and non-profit organizations to perform digital transfer of their movies and create digital collections of fragile books and documents! My earlier post mentioned using an Arduino or Raspberry Pi as a next-generation system controller. Arduino seems more inclined to market kits with specific functions, whereas I need to quickly design and build a hardware controller. The Pi may be the way to go.

I now have an Oracle VM capable of building ML. I was able to compile unified and run it in my camera. I haven't had the nerve to modify any code yet, but as I learn more I hope to have some intelligent optimization questions. Unfortunately, the older cameras don't seem to support the high-speed modes of newer, faster SD cards.

Thank you again for the great support, and I hope to soon be able to offer some video and documentation of the valued uses for this feature!

dfort

Quote from: Meloware on March 16, 2017, 12:45:41 AM
With Silent/Simple mode (and LiveView), the camera is no longer dropping into standby, based on the LCD powersave timer. This can't be an accident! You fixed that, as well, right?

a1ex fixed that and it is now in unified.

https://bitbucket.org/hudson/magic-lantern/commits/c46ffb21e7a7bb2fab56f83f97b1fd121a8dfcd6?at=unified

He also started a topic on writing modules starting from a simple "Hello World" module. I already got a lot out of it.

http://www.magiclantern.fm/forum/index.php?topic=19232.0

Meloware

An update:
I am grateful for the modifications made to ML in support of this thread. I am still planning to make a video explaining my setup, but in the meantime it might be okay to post a link to an 8mm movie film I copied with the EOS M, using these special Magic Lantern builds. My final post-processing was done in Blender, and the resulting frame sequence was encoded with Avidemux and uploaded to YouTube. Please enjoy!  https://youtu.be/CZevC9i2JGU

Teamsleepkid

Amazing that you could do that - capture 8mm film to archive it. Looks great.
EOS M

DeafEyeJedi

Amazing work @Meloware, and thanks for sharing with us what @dfort could help make possible on your end. An excellent way to archive ancient films. This is evolution in the making!
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109