Astrophotography Ideas - Autoguide output

Started by cgapeart, July 27, 2012, 12:18:10 AM


cgapeart

I will be the first to admit this is probably pushing the envelope, but what the hey...

Astrophotographers like me love ML because it gives us a remarkable amount of control over the camera -- very useful when you are already pushing the limits.

As a backgrounder, an autoguider is basically a loop which identifies stars in the images and tracks them from frame to frame.  If a star shifts, that means that the tracking mount (telescope or otherwise) is not perfectly aligned or not running at exactly the right speed.  The autoguider generates a signal which is sent to the tracking mount to correct the tracking errors and allow the final images to be crisp, clear, and free of star trails.  Commercial and open source solutions exist, typically involving a laptop, a webcam or astronomy-specific USB camera, and dedicated guiding software.
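To make the idea concrete, the core of a guider is a loop along these lines (just a sketch; get_star_centroid() and send_correction() are hypothetical placeholders for the star detection and the mount interface, not real functions):

struct point { float x; float y; };

extern struct point get_star_centroid(void);                 /* placeholder: measure the guide star position */
extern void send_correction(float pulse_x, float pulse_y);   /* placeholder: send a correction to the mount */

void guide_loop(struct point ref, float gain_x, float gain_y)
{
    while (1)
    {
        struct point now = get_star_centroid();
        float dx = now.x - ref.x;    /* drift in pixels since the reference frame */
        float dy = now.y - ref.y;

        /* scale the pixel error into a correction pulse for the mount */
        send_correction(dx * gain_x, dy * gain_y);

        msleep(1000);                /* one correction cycle per second or so */
    }
}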

Now, I have been in the code enough to know that writing a guide routine hooked into the intervalometer would be possible.  The camera has more than enough computing power, and often a small selected portion of the frame is all that is needed.  There are many examples of code and the mathematical work behind it for me to work from.  The question then is this: how can I get a signal out of the camera to some kind of glue electronics (that I will have to invent)?

Obviously I don't want to tear apart the camera to do this - the idea here is to strap on one more feature and need one fewer set of cables, batteries, computers, etc. when out trying to image.

My best guess at the moment would be something on the sound output side.  Frequency or voltage signals could be read by a microcontroller such as an Arduino, which in turn would communicate with the tracking system.  Alternatively, I was thinking about toggling the microphone phantom power on and off, as I should be able to see that as a voltage signal.

I have started working through the audio IC documentation, but so far I haven't seen anything that makes it clear how to feed the audio IC sound data -- it's got to be there, but I don't have it yet.

While I am still doing some research, I was wondering what any of the other developers thought in general of other ways of getting signals out of the camera?

a1ex

One easy method to get some signal out of the camera is to toggle mic power. Plug the headphones in the mic jack, toggle mic power from ML menu and you'll see what I mean.

If your other device can do USB hosting, PTP is a great option - ML can do this.

cgapeart

From what I have seen in my research, I am leaning that way.  I only need to send 5 bits of information -- toggling the mic power to send RS232-style info is probably the way to go.

A sound signal would be too hard to do without a lot of reverse engineering.  I found the code for making the camera beep - call("StartPlayWave") - but a part of the debugging code using that command has a comment that if you do it 105 times on the 60D, the camera locks up.

In theory, there must be a way to fill a memory buffer, and set up the processor and sound chip to play back that sound -- i.e. what would happen when you play back a movie.  In practice I would have to figure that out (which would make many people happy...), whereas toggling the phantom power on the external mic jack should be easy to do.

USB PTP hosting would be possible, but would take quite a bit more work on the controller side - the goal is to not have a full computer involved.  I suppose a RaspberryPi could work, mind you.

I think this weekend I will experiment with being able to send a signal this way, and report back.  If I can get that to work, I will have to dive into the r&d on the star detection and tracking code.


a1ex

Quote
A sound signal would be too hard to do without a lot of reverse engineering.  I found the code for making the camera beep - call("StartPlayWave") - but a part of the debugging code using that command has a comment that if you do it 105 times on the 60D, the camera locks up.

Right, figuring out how StartPlayWave works and how to do it without memory leaks or whatever is going on will be a very nice addition. Maybe even enabling beeps on old cameras without this function?

Quote
USB PTP hosting would be possible, but would take quite a bit more work on the controller side - the goal is to not have a full computer involved.  I suppose a RaspberryPi could work, mind you.
+1 about RaspberryPi. I have one, so probably it's a good idea to research it.

Quote
If I can get that to work, I will have to dive into the r&d on the star detection and tracking code.

This sounds very interesting - the main challenge IMO is aligning the reference frames between the image and the autoguider.


cgapeart

On the sound signal side, it looks like it is some kind of DMA style transfer.  The audio chip itself has a serial data line -- once it is set up and all the bitrates are set, the data is pumped in or out of it.  Searching through old archives, it sounds like that task might be what the ASIF is all about.  In any case, I am not going to go there for the moment.

The raspberry pi as a USB host might make a lot of sense, using the PTP extensions.  The CHDK PTP libraries already exist, etc.

I don't have one, but I do have a number of Arduinos -- guess what I will likely stick with ;)

In terms of the image analysis and tracking, I was looking at the open source code of PHD.  It's really straightforward and well commented.  Porting it would mostly be a matter of working with the VRAM buffer instead of PHD's image buffer.  http://code.google.com/p/open-phd-guiding/source/browse/trunk/image_math.cpp is my starting point.
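The guts of it boil down to finding the star's centroid in a small crop of the buffer - roughly like this (my own rough take on the usual background-subtracted weighted average, not code lifted from PHD, and assuming an 8-bit grayscale crop):

#include <stdint.h>

typedef struct { float x; float y; } centroid_t;

/* intensity-weighted centroid of a star within a small 8-bit grayscale crop */
centroid_t find_centroid(const uint8_t *buf, int w, int h, uint8_t background)
{
    float sum = 0, sx = 0, sy = 0;

    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            int v = buf[y * w + x] - background;   /* subtract the sky background level */
            if (v <= 0) continue;                  /* ignore pixels at or below the background */
            sum += v;
            sx  += v * x;
            sy  += v * y;
        }
    }

    centroid_t c = { sum > 0 ? sx / sum : 0, sum > 0 ? sy / sum : 0 };
    return c;                                      /* sub-pixel star position within the crop */
}

Frame-to-frame drift is then just the difference between the current centroid and the one measured when guiding started.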

The hard part will be balancing the long-exposure time of the astro-photos with the need to make tracking adjustments.  The more I look at that problem, the more I see that the camera can be either taking the long exposure images or using the liveview buffer to track, but not both at the same time.

Since I don't happen to have 2 cameras, that leaves me wondering if I should bother.

Nonetheless, this is what I have so far for generating a simple serial output with the mic power port.  This is untested, but you can see where I am going:

//Added to the end of audio.c

// Assumes bit 0x04 in AUDIO_IC_SIG1 is the mic power bit (as used elsewhere in audio.c):
// power on = mark (logical 1), power off = space (logical 0)
#define SendBit(x) do { audio_ic_write(AUDIO_IC_SIG1 | 0x10 | ((x) ? 0x04 : 0x00)); msleep(10); } while(0)

void startMicPwrSerial()
{
    // Configure for external mic, with the mic power on to hold the signal high (idle/mark)
    audio_ic_write( AUDIO_IC_SIG1
               | 0x10
               | 0x04               // power up, mic power on, no gain
               );

    audio_ic_write( AUDIO_IC_SIG2
               | 0x04               // external, no gain
               | ( lovl & 0x3) << 0 // line output level
               );

    // Set the line idle
    SendBit(1);
}


void micPwrSerialsendByte(unsigned char msg)
{
    // 100 baud = 10 ms per bit (MIN_MSLEEP)
    // Idle is high (mark)
    // N81 = 10 bits total: 1 start, 8 data, 1 stop
    // LSB first

    // Start bit
    SendBit(0);

    // Data bits
    for (int i = 0; i < 8; i++)
    {
        SendBit(msg & (1 << i));
    }

    // Stop bit, leaves the line idle
    SendBit(1);
}

void endMicPwrSerial()
{
    // Restore the mic power setting chosen in the ML audio menu
    audio_ic_write( AUDIO_IC_SIG1
               | 0x10
               | (mic_power ? 0x4 : 0x00)
               ); // power up, no gain

    audio_ic_write( AUDIO_IC_SIG2
               | 0x04               // external, no gain
               | ( lovl & 0x3) << 0 // line output level
               );
}


Essentially it is an RS232-style 100 baud N81 signal: mic power on is mark, off is space.
The start bit is a transition from mark to space, 1 bits are mark, 0 bits are space, and a stop bit of mark returns the line to idle.

100 baud was chosen because each bit is 10 ms long, which is the defined MIN_MSLEEP.  I have no doubt this can be sped up and made to work at a more standard baud rate, but the slow rate might also be kinder to the audio chip.  I really have no idea what the circuitry behind the mic power line is, and if there is any capacitance on the line it will need a certain amount of time to switch.

I am planning on attaching the mic output to an analog input on the microcontroller.  Based on the spec, mic power is probably around 2.5 volts.  That won't hit the high threshold for a digital input (roughly 3 volts with 5 volt TTL logic), so an analog voltage measurement is needed.  If I used a 3.3 volt microcontroller, I could probably get away with a digital input.  The line would need a 20 kOhm or so pull-down resistor - the goal is not to pull any current from the camera, just to read a voltage.  However, when the mic power is off, the pin is specified as hi-Z, which is equivalent to not being connected to anything - neither an analog nor a digital input will see that clearly as a 1 or a 0, which is why the pull-down is needed.
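On the microcontroller side, the receiver would be something like this Arduino-style sketch (untested; the A0 pin, the ~1.5 V threshold and the 100 baud timing are assumptions taken from the paragraph above):

const int  PIN_MIC_PWR = A0;     // mic power line, with the ~20 kOhm pull-down to ground
const int  THRESHOLD   = 300;    // ~1.5 V on a 5 V / 10-bit ADC; adjust to taste
const long BIT_US      = 10000;  // 100 baud -> 10 ms per bit

int lineHigh() { return analogRead(PIN_MIC_PWR) > THRESHOLD; }

void setup() { Serial.begin(9600); }

void loop()
{
    if (lineHigh()) return;            // line is idle (mark), nothing to receive yet

    // falling edge: should be a start bit; check again in the middle of the bit cell
    delayMicroseconds(BIT_US / 2);
    if (lineHigh()) return;            // just a glitch, not a real start bit

    unsigned char value = 0;
    for (int i = 0; i < 8; i++)        // 8 data bits, LSB first, each sampled mid-cell
    {
        delayMicroseconds(BIT_US);
        if (lineHigh()) value |= (1 << i);
    }
    delayMicroseconds(BIT_US);         // step over the stop bit
    Serial.write(value);               // pass the decoded byte on, e.g. to a PC for testing
}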

a1ex

So you can get true one-way RS232 communication? That's great!


cgapeart

I wouldn't call it true one-way RS232.  For starters, the signal levels are not RS232 compatible, and as I mentioned, some tweaks are needed to get access to the signal.

Also, it's at a non-standard baud rate.  That could be improved with more knowledge about timing on that processor.  At 10 ms per bit - aka 100 bits per second - it's pretty damn slow for the moment.  N81 RS232 serial is 10 bits per character transmitted, so the best case scenario is 10 characters per second.

The msleep command appears to have a commented minimum of 10 ms (MIN_MSLEEP).  If msleep could run at 1 ms, 1000 baud would be possible.  There is also a limited tolerance for jitter in the timing.  msleep, if I don't miss my guess, is a handoff to the operating system, and there is probably not much guarantee about how close to the mark the sleep timer hits.  To bit-bang serial data reliably, a tight loop or an interrupt driven by the processor's timers would be needed.  More precision would be needed to generate baud rates that could be read by a normal RS232 UART (assuming the line levels were adapted).  Normally, baud rates are generated by dedicated timers, using a combination of prescalers and maximum count values to divide the processor clock down to the timing needed for the desired baud.

For reference:
100 baud = 10 milliseconds per bit
1000 baud = 1 millisecond
2400 baud = 416.67 microseconds
4800 baud = 208.33 microseconds

and so on.  The numbers are ugly in decimal, but if you use a power-of-2 processor speed (16 MHz, 32 MHz, etc.), it's not too hard to turn these into straight base-2 divisions and counts.

Most receivers sample the value of each bit halfway between when the bit starts and when it ends.  The receiver's bit timer is started when the start bit is received, so it resets for every byte.  The amount of allowable timing error comes down to ensuring the sample point of the last data bit still lands well inside the window established by the timer.  The last data bit is the 9th bit transmitted, and the stop bit just sets the line back to idle until the next transmission.

If 9 * abs(timer error) > 0.5 * bit time, then it is possible to read incorrect data.
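Plugging in the numbers for 100 baud (plain arithmetic, nothing camera specific):

#include <stdio.h>

int main(void)
{
    double bit_time_ms = 10.0;                 /* 100 baud */
    double budget_ms   = 0.5 * bit_time_ms;    /* half a bit of slack at the 9th sample point */
    double per_bit_ms  = budget_ms / 9.0;      /* error allowed per bit period */

    printf("total budget: %.1f ms, per-bit budget: %.2f ms (%.1f%% of a bit)\n",
           budget_ms, per_bit_ms, 100.0 * per_bit_ms / bit_time_ms);
    return 0;
}

So at 100 baud, each bit period can only be off by roughly half a millisecond before the receiver risks sampling the wrong bit - which is exactly why msleep jitter matters.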

Anyhow, the point is that it might be possible to bit-bang out data at a reasonable rate, but there are lots of pitfalls:

  • How accurately/precisely can we measure or delay a fixed amount of time to clock out the bits, without breaking the OS by taking over a processor timer that is already in use?
  • How long does it take to change the mic power register in the audio chip, and is it a constant delay, or does it depend on the OS getting around to passing the value on?
  • Can the audio chip handle turning the power on and off like that?  If the signal is read by a high-Z input with practically zero current flow, it should be possible, provided the supply behind the mic-power pin doesn't mind being switched on and off rapidly.  It would really suck to destroy the audio IC in the experiment.  Best case scenario, only the mic power would be damaged; worst case, the entire audio IC is broken and the camera is unusable because the audio chip stops responding to the OS.

In some ways, I would feel more confident if a solution could be made that uses a structured sound sample that is played out with an appropriate interface circuit.  That would allow the code to use the audio chip to simply play back the sound instead of making it do something bizarre by rapidly turning on and off the mic power.

You can play with the code I provided if you like, but the standard you-get-to-keep-both-parts warranty applies.  I am not confident enough to consider trying it out on my camera for the moment. 

Plus, I haven't reviewed the code carefully against the RS232 signalling documentation, so it's likely I don't have the mark/space stuff reliably aligned with the 1/0 state.

a1ex

There are timer interrupts that can be used here - g3gg0 investigated them a while ago. Msleep is not precise.

I saw quite a bit of time-critical code done with semaphores - probably a give_semaphore called from some interrupt, and a take_semaphore with the meaning of "wait until some operation is done" (stuff like DMA transfers, SIO communication, memory allocation via RscMgr).

But figuring out how to play back a custom sound is much more elegant and avoids these timing issues.

a1ex


cgapeart

Now it's just a matter of encoding.

Also, for some of the long-standing questions about making an MP3 player, that code is a big part of what you would need.  Is there any kind of event from the audio system to let you know when the buffer is complete?  For the serial code, that would indicate when the byte has been sent, so the next one can be prepared.  For general audio playback, it would tell you to swap to the (pre-prepared) next buffer.

I will have to actually look at how to encode the data into the sound, and how to decode it.  I hadn't expected that to become an option so quickly.



cgapeart

Just a thought for Raspberry Pi owners...  If the output is made to match a standard ham radio AFSK 1200 baud signal, there should be pre-existing software modems available for Linux that should be portable to the Raspberry Pi.  Ham (amateur radio) operators use this kind of encoding for old-school radio BBSes and satellite uplinks.  There is an active community, and I have seen lots of examples of using the computer's sound card to both receive and transmit data instead of dedicated hardware.

That would be awesome, because the data sent from the camera that way could be readily usable with any Linux-based serial program, without extra programming work on that side.

It also has the advantage of letting the modulation code on the camera be developed and tested without risking the audio IC on untested hardware: just connect the audio monitor cable from the camera into the sound card on a PC.

1200 baud is still weak, but it's the limit for the standard FSK system.  There may be other modulation techniques that can be used to get higher baud.

1200 baud would be plenty for what I had in mind.  It would also be usable (but slow) for an ML debug-style logging console.  2400 and 4800 baud might be possible, but 4800 is probably the upper limit for any kind of simple FSK encoding, because each tone has to play long enough to be detectable and the highest tone the camera can generate is 24 kHz (based on the 48 kHz sample rate).
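For the camera side, generating the tones is just filling a sample buffer - something like the sketch below (only the modulation math; how a buffer like this actually gets handed to the audio IC is still the open question).  Bell 202-style AFSK uses 1200 Hz for mark and 2200 Hz for space:

#include <math.h>
#include <stdint.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE      48000
#define AFSK_BAUD        1200
#define SAMPLES_PER_BIT  (SAMPLE_RATE / AFSK_BAUD)   /* 40 samples per bit */

static double afsk_phase = 0.0;   /* keep the phase continuous across bit boundaries */

/* render one AFSK bit into buf; returns the number of 16-bit samples written */
int afsk_write_bit(int16_t *buf, int bit)
{
    double freq = bit ? 1200.0 : 2200.0;             /* mark / space tone */
    double step = 2.0 * M_PI * freq / SAMPLE_RATE;

    for (int i = 0; i < SAMPLES_PER_BIT; i++)
    {
        buf[i] = (int16_t)(30000.0 * sin(afsk_phase));
        afsk_phase += step;
        if (afsk_phase > 2.0 * M_PI) afsk_phase -= 2.0 * M_PI;
    }
    return SAMPLES_PER_BIT;
}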

I will be looking at what my encoding options are based on the goal of having a PC or something like a Raspberry Pi running readily available HAM modem/TNC software as the receiving end.

While I am on that, I am curious what other uses a serial line out would have, and what baud rate would be needed for those ideas?

I suspect that to make this really workable without holding up regular camera operations, it will be necessary to create a new task as well.  Data transmission would be handed off to that task and buffered up, to be sent as soon as possible.
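The shape of that would be roughly a ring buffer plus a low-priority task that drains it - a sketch only; serial_send_byte() stands in for whatever the low-level output routine ends up being, and the task_create() call in the comment follows the pattern used elsewhere in ML:

#define TX_BUF_SIZE 256

static volatile unsigned char tx_buf[TX_BUF_SIZE];
static volatile int tx_head = 0;
static volatile int tx_tail = 0;

extern void serial_send_byte(unsigned char c);   /* placeholder for the real output routine */

/* callable from anywhere in ML: queue a byte, dropping it if the buffer is full */
void serial_queue_byte(unsigned char c)
{
    int next = (tx_head + 1) % TX_BUF_SIZE;
    if (next == tx_tail) return;                 /* full - drop rather than block the caller */
    tx_buf[tx_head] = c;
    tx_head = next;
}

/* background task: drain the buffer without holding up normal camera operation */
static void serial_tx_task(void *unused)
{
    while (1)
    {
        while (tx_tail != tx_head)
        {
            serial_send_byte(tx_buf[tx_tail]);
            tx_tail = (tx_tail + 1) % TX_BUF_SIZE;
        }
        msleep(20);
    }
}

/* started once at init, something like:
 *   task_create("serial_tx", 0x1f, 0x1000, serial_tx_task, 0);
 */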

Still, this is all just a playground for thoughts and ideas. 

Pyrofer

I thought the standard way to get serial out of a camera was to flash a LED on the body?
We have one easily available for that.
You can also just flash the screen or part of the screen, that works too. :)

Any more progress on this? I like the idea of auto-guide.

Could you grab the framebuffer when recording video at all for tracking?

a1ex

@cgapeart: custom beeps are fully working on 60D :)

I don't know how to play a continuous sound, but a 5-second wave works very well.

g3gg0

@cgapeart:

can you live with an LED-driven (optocoupled) solution?
but first we have to check whether the LED pin is low-pass filtered with an R/C or not.

if yes, can you give this code a try? i don't have enough time at work to check that.
- add it to debug.c (replacing run_test)
- click "don't click me"
- attach a phototransistor to the LED
- measure whether the LED is pulsed with 9600 baud alternating bits (4800 Hz)


uint32_t timer_hook_func = 0;
uint32_t timer_hook_counter = 0;

uint32_t timer_hook(uint32_t parm1, uint32_t parm2, uint32_t parm3, uint32_t parm4)
{
    uint32_t (*callee)(uint32_t, uint32_t, uint32_t, uint32_t) =
        (uint32_t (*)(uint32_t, uint32_t, uint32_t, uint32_t)) timer_hook_func;
    uint32_t *led_port = ((uint32_t*)0xC0220134);
    uint32_t led_status = 0;

    timer_hook_counter++;

    /* blink LED */
    led_status = *led_port & (~0x02);
    led_status |= ((timer_hook_counter % 2) << 1);
    *led_port = led_status;

    /* get as close to 10 ms as we can get. for longer use, we need fraction correction */
    if((timer_hook_counter % 96) == 0)
    {
        timer_hook_counter = 0;
        return callee(parm1, parm2, parm3, parm4);
    }
    return 0;
}

void run_test()
{
    /* let system timer tick with 104 µs tick rate */
    timer_hook_func = *((uint32_t*) 0x40000720);
    *((uint32_t*) 0x40000720) = (uint32_t) &timer_hook;
    *((uint32_t*) 0xC0210208) = 103;

    while(1)
    {
        msleep(50);
        bmp_printf(FONT_SMALL, 10,10, "Function: 0x%08X", timer_hook_func);
        /* bmp_printf(FONT_SMALL, 10,30, "R0:%08X R1:%08X R2:%08X R3:%08X",
           hijack_parm1, hijack_parm2, hijack_parm3, hijack_parm4); -- hijack_parm* are not defined in this snippet */
        bmp_printf(FONT_SMALL, 10,50, "Calls: %d", timer_hook_counter);
    }

    return;
}



if this works, we have quite a good solution for a few reasons:
- we are optocoupled and don't have to worry about destroying the audio IC
- we don't need a lot of work to make audio IC transfers run from high-resolution timer interrupts
- it is simple ;)

br,
g3gg0

cgapeart

Quote from: g3gg0 on July 31, 2012, 09:18:14 PM
@cgapeart:

can you live with an LED-driven (optocoupled) solution?
but first we have to check whether the LED pin is low-pass filtered with an R/C or not.

if yes, can you give this code a try? i don't have enough time at work to check that.
- add it to debug.c (replacing run_test)
- click "don't click me"
- attach a phototransistor to the LED
- measure whether the LED is pulsed with 9600 baud alternating bits (4800 Hz)

Yah, that would be simpler, wouldn't it!  I will give it a shot at some point in the future, but I am heading out on a trip tomorrow, and I won't have a chance to play with it for a few weeks.

I am in 100% agreement that this makes more sense -- given the 'flasher' ROM dumps, it's even a proven solution. 

Quote from: Pyrofer on July 31, 2012, 02:03:24 PM
I thought the standard way to get serial out of a camera was to flash a LED on the body?
We have one easily available for that.
You can also just flash the screen or part of the screen, that works too. :)

Any more progress on this? I like the idea of auto-guide.

Could you grab the framebuffer when recording video at all for tracking?

Yes, I was thinking along those lines.  The problem is that using video on the DSLR for image stacking isn't really that useful: the H.264 video compression throws out most of the fine detail on a frame-to-frame basis, and the SNR when trying to record dark skies at high ISO means there really isn't much signal left.  The camera could be used as a dedicated guide camera, but then you couldn't use it to capture the pictures.  It would only make sense if you had an even better DSLR or a dedicated astronomy camera for the imaging.

I am afraid the auto-guider idea is probably going to gather dust at this point.

In terms of something that could be of value to ML developers and users, I can suggest that creating a working serial port, even if it's only one-way, makes sense - it never hurts to have one more tool in the toolbox.

Otherwise, my DSLR is just a little too expensive to be used as a glorified microcontroller for robotics projects.  The camera needs to be able to take pictures, and I don't see a way to do that and track at the same time.



cgapeart

Once the serial stuff is written, I do have another direction that might be worth looking at:  With my camera attached to a telescope, it is very hard to achieve good focus.  The knob is very touchy, and everything shakes, so you have to wait for it all to settle.

At one time I had a motorized focuser on my telescope.  If an ML-based software autofocus were available in live view while setting up the shot -- as discussed in http://www.magiclantern.fm/forum/index.php?topic=1492.0 -- having the camera control the motorized focuser instead of sending commands to a non-existent EOS lens might just do a better job than I can by hand.  Actually, "might" is an understatement: I frequently spend 5-10 minutes playing with the focus only to be dissatisfied with the results.
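For reference, the focus metric itself is simple - e.g. a sum of horizontal pixel differences over a crop around the target star; the focuser would be stepped through its range and the position with the highest score kept.  A sketch only, operating on a hypothetical 8-bit grayscale crop pulled out of LiveView:

#include <stdint.h>
#include <stdlib.h>

/* higher score = sharper image (more fine detail / tighter stars in the crop) */
uint32_t focus_score(const uint8_t *buf, int w, int h)
{
    uint32_t score = 0;

    for (int y = 0; y < h; y++)
        for (int x = 1; x < w; x++)
            score += abs(buf[y * w + x] - buf[y * w + x - 1]);

    return score;
}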

g3gg0

ok i tested my blink function.

a) the frequency seems to be 4800 Hz, so we have 9600 baud (~2% accuracy); 19200 might also be possible using that method
b) about the LED's maximum frequency, i cannot tell if it will work or not. i just had an IR LED as a "detector" on the oscilloscope.
it produces so little energy when the red LED lights it that i think the input capacitance of my oscilloscope is causing the slopes on the plot.


i think i should get a powered photodiode instead of that LED ;)

g3gg0

here's my attempt at an RS232-compatible transmit function.
i will build an optocoupler to verify that it works.



#define BITBANG_BAUDRATE       9600
#define BITBANG_TIME_NORMAL    10000
#define BITBANG_TIME_TRANSFER  (1000000/BITBANG_BAUDRATE)

uint32_t bitbang_orig_handler = 0;
uint32_t bitbang_time = 0;
uint32_t bitbang_bits = 0;
uint32_t bitbang_tb = 0;
uint32_t bitbang_hook_counter = 0;


void bitbang_main()
{
    uint32_t *led_port = ((uint32_t*)0xC0220134);
    uint32_t led_status = 0;

    /* shift the next bit out onto the LED pin */
    led_status = (*led_port) & (~0x02);
    led_status |= ((bitbang_tb & 0x01) << 1);
    (*led_port) = led_status;

    bitbang_tb >>= 1;
    bitbang_bits--;
}

void bitbang_transmit(uint32_t data, uint32_t bits)
{
    /* called from user space. we have to wait until the previous transfer is finished */
    while(bitbang_bits)
    {
        msleep(1);
    }

    bitbang_tb = data;
    bitbang_bits = bits;
}

void bitbang_transmit_rs232(uint8_t character)
{
    uint32_t data = 0;

    /* invert data bits */
    character ^= 0xFF;

    /* we transmit in RS232 mode 8,N,1. add 4 idle bits and a start bit */
    data |= (1 << 4);
    data |= ((uint32_t)character << 5);

    /* no parity, stop bits are zero, so we are done */
    bitbang_transmit(data, 5 + 8 + 1);
}

uint32_t bitbang_timer_hook()
{
    uint32_t *timer_reg = ((uint32_t*)0xC0210208);
    uint32_t (*handler)() = (uint32_t (*)()) bitbang_orig_handler;

    /* for debugging */
    bitbang_hook_counter++;

    /* check if we have to transmit data */
    if(bitbang_bits)
    {
        /* use high timer rate while shifting out the bits */
        *timer_reg = BITBANG_TIME_TRANSFER - 1;

        /* call the bitbanging routine */
        bitbang_main();

        /* get as close to 10 ms as we can get. for longer use, we need fraction correction */
        if(bitbang_time > BITBANG_TIME_NORMAL)
        {
            bitbang_time -= BITBANG_TIME_NORMAL;
            handler();
        }

        /* when we get called the next time, time has advanced by BITBANG_TIME_TRANSFER */
        bitbang_time += BITBANG_TIME_TRANSFER;
    }
    else
    {
        /* no data to transmit, low timer rate, directly call original handler */
        *timer_reg = BITBANG_TIME_NORMAL - 1;
        handler();
    }

    return 0;
}

void run_test()
{
    /* install bitbang handler */
    bitbang_orig_handler = *((uint32_t*) 0x40000720);
    *((uint32_t*) 0x40000720) = (uint32_t) &bitbang_timer_hook;

    int loops = 0;
    while(1)
    {
        msleep(100);
        bmp_printf(FONT_SMALL, 10,50, "Calls: %d", bitbang_hook_counter);

        if((loops++ % 10) == 0)
        {
            bitbang_transmit_rs232(0xAA);
        }
    }
}



g3gg0

success.
i am able to use the LED to drive some photodiode/opamp circuitry that amplifies the signal.
i still have to increase the amplification, cancel the overshoot, add a schmitt trigger and feed that into a MAX232, but as a proof of concept this is fairly good :)

not sure if i will do that. i don't need it ;)
http://www.youtube.com/watch?v=ZSymgFdn4Fc

circuit:


cgapeart

I just got back from vacation, and I see that this has gone from idea to proof of concept very quickly.

It's been a long time since I played with op-amps, and I will have to get out to the electronics store to pick one up.  Were you planning on working the circuit up to TTL levels in a single stage for a MAX232 solution, or were you thinking of a second stage?  For TTL, if I remember correctly, the high signal has to be greater than about 3.5 volts, and the low side below 0.7 or so.  A second stage wired as a straight comparator with appropriate Vs+/Vs- supply voltages might work just fine.  The LM358 is a dual op-amp - it should be straightforward from there.

g3gg0

yeah, looks doable.
i would use the 2nd opamp (currently its inputs are GNDed) to set the decision point with a voltage divider and a potentiometer.
optionally drive it as a schmitt trigger (signal + feedback to the minus input, i think).

i would feed that into a MAX232, which creates the right voltages.

the feedback capacitor should be somewhere around 1 pF; optionally filter the output of the 1st opamp.
that's required to cancel the ringing.

i never made anything with opamps before ;)

SDX

I tried the optocoupler technique.  The only problem I'm facing is that the camera has some issues with the timing when sending signals.  I'm only working with milliseconds, which is way too imprecise to work with.  In the end I didn't manage to get anything out of it - I didn't get anything usable at the expected baud rate, and I'm guessing it wasn't possible.


Now, today I found something that makes me want to work on this again.  Check this out: https://github.com/felis/PTP_2.0

It is an implementation of PTP for Arduino in combination with a USB host shield.  If you aren't that much into Arduino boards, just take an ATmega, burn the Arduino bootloader onto it and combine it with a MAX3421E.
If you like Arduino, get the $20 host controller (basically just a MAX3421E breakout).


The guy behind this has made some very impressive stuff with it, e.g. a focus stacking assistant and a wireless camera controller.
His blog: http://www.circuitsathome.com/camera-control/

Now imagine all the possibilities with this and Magic Lantern on the camera:
- Using a distance sensor for proper autofocus while filming
- Connecting a GPS receiver and saving GPS data as a log for each photo (since we can't write into EXIF, I think)
- Using external controls for your camera. A potentiometer as a follow focus, anyone?
- Letting the camera control external devices, such as timelapse sliders. Simply enter the step size and movement speed on-camera and start the ML intervalometer. Or for autoguiding ;)
- Making your camera and other attached equipment (sliders, pan-tilt heads) controllable through almost anything: Bluetooth, WiFi, old-fashioned radio waves...
- Much, much more

Well, I was just thinking out loud..