Custom PTP commands (on EOS M)

Started by natschil, September 25, 2019, 11:19:52 AM



natschil

First of all: thank you for making magic lantern, it is incredibly useful and I couldn't imagine buying a camera that doesn't support it.

I'm trying to build a setup for digitizing old slide film. I have an EOS M (original) running Magic Lantern, and a slide projector that I've modified so that I can easily step through slides. I've bought an old manual-focus macro lens, for which I intend to build a mechanism for autofocusing via a computer and an electric motor. My only problem: the EOS M does not seem to work well with certain PTP commands. So I've decided to write my own PTP handler that does the things I need. I essentially only need (a) a way to trigger taking pictures from my laptop, and (b) a way to get a small piece of the live view (for focusing) to the laptop.

I've successfully compiled the PTP code and written a "Hello World" handler with which I can communicate with the camera. This works. However, it seems like the camera goes into some kind of PTP lockup mode whenever it is plugged into USB, which means that lens_take_picture() doesn't work: the picture gets taken only once the USB cable is unplugged. This is rather annoying. I am able to take pictures in bulb mode, but take_bulb_pic(1) results in a very long exposure, which takes so long that the PTP call times out.

Before I start trying to reverse-engineer the EOS-M firmware in the manner of https://research.checkpoint.com/say-cheese-ransomware-ing-a-dslr-camera/, I was wondering if anyone here could offer some suggestions/insight. I'm aware that I could work around (a) by audio remote shot/IR remote and (b) by doing some kind of focus-checking on-camera and filming the live-view with a  webcam, but both of those approaches are *very* hacky and I feel like there must be a more elegant way to do this via ptp.

a1ex

Things to try:
- the simplest way to take a picture is: call("Release");
- alternative way: trigger the shutter button with SW1/SW2 + some delays (may have to be done from another DryOS task)
- try schedule_remote_shot() to offload the  lens_take_picture() to shoot_task
- debug lens_take_picture, maybe it's waiting for some preconditions (such as "busy" status, as reported by Canon firmware)

If you get it working, I'm pretty sure others will be interested :)

natschil

> The simplest way to take a picture is: call("Release");

Yes, that worked! I've now also implemented a command to return the raw live view, and apart from some minor issues that seems to work as well. I'm not sure what the Magic Lantern philosophy on this kind of code is: it does seem to be, in some sense, reimplementing features that were potentially disabled for the EOS M. But if this is not a problem, I'm happy to clean up and share my code.

I was quite surprised by how well the PTP code worked. According to some old forum posts, the PTP code should not work with newer versions of Magic Lantern (there was even a $200 bounty posted in one thread), but I had no problems using it whatsoever.

natschil

I'm now able to capture live view and take pictures.

However, it turns out that after the first (live view) picture, the images generated are corrupted (visually very similar to the corruption seen in the noisiest images of https://www.magiclantern.fm/forum/index.php?topic=8190.0). Weirdly enough, if I restart the PTP session (i.e. rerun the Python program that calls my PTP handler), the first image is again fine, and the rest are corrupted.

I currently capture live view via:

    force_liveview();
    raw_lv_request();
    msleep(50);
    raw_lv_release();
    msleep(50);
    raw_update_params();
    return &raw_info;

I then call create_dng_header(&raw_info) and create_thumbnail(&raw_info), reverse the bytes in the frame, and memcpy everything into a buffer that gets sent back via PTP.

Am I doing something obviously wrong? I've seen a number of posts in the forum regarding corrupted images, but it's not clear to me what the underlying cause is.

a1ex

Quote from: natschil on November 17, 2019, 11:27:40 PM
Am I doing something obviously wrong?

Yes.

First one - raw_lv_request() should be called before using the live raw stream, and raw_lv_release after you are done with it. Keep in mind there may or may not be other tasks using this raw stream - such as the raw-based image overlays, if any of them is enabled. The LiveView raw stream is a shared resource, managed with reference counting (using these two functions).

Second one - by default, there is only one Bayer buffer, written over and over, from a DMA channel. If you turn off the raw stream, and no other tasks are using it, you can get away with that - which appears to be what you are trying to do. You can, however, redirect the image buffer for the next frame, into your own buffer(s).

The silent picture module allocates memory for each frame, so you can use that as reference.

Third, reversing the bytes (for DNG) is done in-place, destroying the original image. The corruption you are seeing might be that - easy to check. If you want to reuse the old image after reversing the bytes, you may need to memcpy first into your buffer, and reverse only that. If you also need to modify the raw_info structure (which is global), you will need a local copy of that structure, too.

Fourth, you may want to use the lossless compression (which is much faster, doesn't destroy the input buffers, and the result is directly usable in a DNG). You will need to work on top of some crop_rec_4k* branch for that.

And while you are at it, you may also want to check the full sensor readout presets from Danne's builds.

natschil

Thanks very much for the detailed response, and for the references to lossless compression and the full sensor readout presets!
Currently I'm only trying to get somewhat reliable transfer of live-view images, so that I have a proof-of-concept system, which now works. It is quite slow, so I will look at making it faster if need be (the end goal is to move the camera forwards and backwards on a rail based on the contrast of the live-view image, computed on an external computer, in order to have rudimentary autofocus).

I tried registering a vsync hook using the code below, but that did not work:

MODULE_CBRS_START()
    MODULE_CBR(CBR_VSYNC, get_single_frame, 0)
MODULE_CBRS_END()

Then I put my callback into vsync_func() in state_object.c, and that worked. However, I feel like this isn't the cleanest way to do things. Is there something like the aforementioned macro that works outside of modules?

a1ex

The MODULE_CBR code only works for modules; if you are modifying core code, it won't work. Do you have a repository with your changes?

There is another CBR backend (ml-cbr), that could be useful for refactoring all these hardcoded callbacks. That code is very much untested (only used in one place), and a bit too complex for my taste. Also, for many of these callbacks, their order matters, so... without a good test suite in place, it's hard to start doing major refactors.

Quote
(the end goal is to move the camera forwards and backwards on a rail based on the contrast of the live-view image (computed on an external computer), in order to have a rudimentary autofocus).

There is a "contrast" estimation computed by Canon firmware very quickly, for every single LiveView frame; the silent picture module uses it for the "best shots" mode, but... last time I checked, I couldn't find that info on 700D, EOS M and the like. Probably moved somewhere else. In any case, implementing a simple image processing algorithm directly in the camera, operating at reduced resolution or whatever, might be faster than transferring every single frame via USB.

So... what about interfacing with a RPi or some other development board with USB host support? Maybe you could drive the rail directly from there. What's your current hardware setup?

natschil

>The MODULE_CBR code only works for modules; if you are modifying core code, it won't work. Do you have a repository with your changes?

I currently don't; all my code is still very hacky (see https://gist.github.com/natschil/800920c9adbf30d375494c73a7676377), so I haven't made a repository yet (I also have zero experience with Mercurial).

>In any case, implementing a simple image processing algorithm directly in the camera, operating at reduced resolution or whatever, might be faster than transferring every single frame via USB.

I agree; my reasoning for transferring it via USB has mainly been that the development pipeline on the PC is shorter, as I can use any toolkits and don't need to recompile and transfer data to the card every time I make a change (I wasn't able to get QEMU to emulate the EOS M successfully).

>So... what about interfacing with a RPi or some other development board with USB host support? Maybe you could drive the rail directly from there. What's your current hardware setup?

Yeah, that makes sense. Currently my hardware setup consists of an Arduino to move the rail and control slide switching, but really all the Arduino does is map USB from my PC to digital output pins.

names_are_hard

@natschil - you don't have to use Mercurial to build Magic Lantern. I found Mercurial horrible to use too, so I pulled out the dependencies. Here's a git-based repo:
https://bitbucket.org/stephen-e/ml_200d/src/master/

This is hackish work; I didn't know how to export from hg to git at the time, so there's not much commit history. I suppose I should do a cleaner job and submit a PR removing the hg dependencies from the build system. It's kind of crazy that the build system depends on which source-control tool you use!

natschil

@names_are_hard: thanks, I will have to look into that!

natschil

> There is a "contrast" estimation computed by Canon firmware very quickly, for every single LiveView frame;

@a1ex: Is this the code in focus.c, or more specifically PROP_HANDLER(PROP_LV_FOCUS_DATA)? It seems like update_focus_mag() is never called, and focus_value_raw is always zero. I can't tell why exactly; what do I need to do to get this property handler to be called?

a1ex

Short answer: I don't know. This works on older models, such as 5D3 or 60D. Very old models (5D2, 550D) report this once every few frames, but during AF, this gets reported once every frame, so the processing power is there.

I can probably figure it out if I had a couple of days to look into it (which I don't have atm).

natschil

Ok, that makes sense. My lens doesn't have AF (or an AF chip), so maybe that is part of what is causing the issue. How are the PROP_HANDLER_FOO_BAR-type functions actually called? Does the PROP_HANDLER macro somehow register a function with the kernel to be called automatically at certain times, or does the user have to call the property handler somehow? It seems like this particular property handler is never called, as debugging code I put into it never runs.

>I can probably figure it out if I had a couple of days to look into it (which I don't have atm).

That's perfectly understandable. I'll keep prodding around a bit; maybe I'll find something.

natschil

Btw, is there some kind of "higher-level" code documentation somewhere that provides a rough overview of how Magic Lantern works? If so, I should probably read that first. I've been looking in the wiki, but can't seem to find anything.

a1ex

Yes, properties are hooks (functions) called by Canon firmware when various events happen. They are used for settings, but also for internal parameters or events. There are some notes on the old wiki: https://magiclantern.wikia.com/wiki/Properties

Some lower-level notes are in the QEMU guide: https://bitbucket.org/hudson/magic-lantern/src/qemu/contrib/qemu/HACKING.rst

natschil

Since the focus doesn't really change between slides, I'd now like to implement something like what the ETTR module does. However, I don't want to work in RAW mode (and I have no control over the aperture, as I'm using a manual-focus lens), so it seems easiest to just reimplement what I need.

I'm able to compute histograms and change exposure fine, via

prop_request_change( PROP_SHUTTER, &newshutter, 4);

However, it seems to take a while for these changes to propagate: while raw_shutter gets updated almost instantaneously, the histograms take around 0.1-0.3 s to reflect the new shutter value, despite my calling hist_build(). Is there some way to check whether the histogram is still for the old shutter value? Alternatively, is there some other way to change the shutter or compute histograms? It seems that on the EOS M, can_set_frame_shutter_timer() returns 0; the relevant line is:

    if (!RECORDING_H264) return 0;  /* EOS-M is stubborn, http://www.magiclantern.fm/forum/index.php?topic=5200.msg104816#msg104816 */

though the link doesn't seem to explain what is going on.

troma

Hello natschil, I am working on a 3D scanner with multiple EOS M bodies (right now triggered via IR signal), and I will also be modifying PTP commands and will push an experimental branch for it. I have no experience with Mercurial at all, but I do with git, so I am studying Mercurial right now, and it doesn't look that hard so far.

I can only see a single file of your work, so I will probably end up duplicating lots of work that you have already done, and in the end, if those changes get merged, the duplicates will be discarded.

It would be great if your work became a new branch, so other interested people could build on it. If you contact me (you can email this account), I can do the Mercurial branch work (or at least try).

Greedence

Hi @troma, out of curiosity, did you create the PR?