Sending full silent picture over USB (EOS M)

Started by troma, June 14, 2021, 01:24:32 PM



troma

Hello, I am working on using several EOS M cameras for 3D scanning with digital projectors.

I have used @natchil's code (https://www.magiclantern.fm/forum/index.php?topic=24498.msg223063#msg223063) and I can control the camera over USB using PTP commands just fine.

I want to modify the code so it sends the entire full-res raw picture over the cable. That way I have a fully online scanner without having to store anything on the card, even though it will probably take a long time.

There are a couple of things that I don't understand. For example, the code works if I replace the ptp-chdk.c/h code in the src folder with my own, but exactly the same code does not work as a module. I simplified it down to just the version and the shooter code (3 lines) and the problem remains: as a module the PTP handler is never activated, but in src it is.

Probably somebody knows what is happening.

Another thing that is happening: the silent-picture DNG file produced on the EOS M has the wrong dimensions and as a result is corrupted, while the MLV file looks right. I have not found a post that explains why.

I plan to use a fixed-size buffer that I reuse in small chunks in order to send the big picture buffer.
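The chunked-transfer idea above can be sketched in Python; the helper name and chunk size are my own illustration, not anything from the ML/PTP code:

```python
def iter_chunks(buf, chunk_size):
    """Yield successive chunk_size-byte slices of buf; the last may be shorter."""
    for off in range(0, len(buf), chunk_size):
        yield buf[off:off + chunk_size]

# A 10-byte buffer sent in 4-byte chunks produces three transfers.
chunks = list(iter_chunks(b"0123456789", 4))
```

The same loop works whether the chunks go out as individual PTP data phases or are reassembled on the host side.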

Odds are someone has done something similar before, so any hints are welcome.

troma

Ok, I have studied different source codes. I use mlrawviewer because it is easy to understand, but the program looks outdated, with the last commit done in 2014.

I see there is a discussion, starting in 2015, about adding compression to DNG files in camera, and that this option was going to be added to silent mode. So I suspect DNG files are compressed and MLV files are not, and that is the reason I cannot view the DNG files.

Alex talks about opening the compressed DNG with the Adobe converter, which is a 600 MB beast with no Linux support. I have old Macs with "Preview" (macOS Sierra) that could not handle the compression, although in theory mlrawviewer could handle lossless JPEG compression.

I want to open those files on Linux and get a visible output of the transferred picture, so I will try to update the mlrawviewer source code to Python 3 and see if I can decode those DNGs.

troma

Ok, I made some progress thanks to the info I received on Discord. I can take silent pictures and send the raw image buffer over the USB port. It takes 3 seconds.

Now I am recreating the DNG on the computer. Instead of creating a DNG in the camera and sending it, I prefer to ask the camera for things like exposure, focus and white level, and reconstruct the DNG on the computer, because I am only really interested in the buffer.

I am not interested in the DNG at all, because I process the buffer, extract the info I want (the light patterns, checkerboard coordinates and so on) and then discard it.

But I am generating the DNG so I can test the validity of the buffer, and so I can make mlrawviewer capable of reading those files.

Now I know the unified silent mode is not compressing images, and the problem with mlrawviewer is probably related to the halfword byte-order reversal that ML does. Soon I will know what is happening as I dive deeper into the code.

troma

I have studied a little more.

One of the interesting things about the official DNG specification is that a reader should be able to handle either endianness. But the spec fixes the byte order in two specific places, where it must be big-endian:

1. The TIFF tag with the opcodes info.
2. The image (strip) data, when the number of bits per sample is not a multiple of 8 (i.e. not 8, 16 or 32).

As we use 14 bits per channel, we have to use big-endian, so I suppose that is the reason for the "ReverseBytesorder()" function in chdk-dng.c, since the camera works natively in little-endian.
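The reversal itself is just a pairwise byte swap of 16-bit halfwords; here is a minimal Python sketch of that operation (my own helper, equivalent in spirit to what the camera does, assuming an even-length buffer):

```python
def reverse_byte_order_16(buf: bytes) -> bytes:
    """Swap each pair of bytes, turning little-endian 16-bit halfwords
    into big-endian ones. The swap is its own inverse."""
    out = bytearray(buf)   # assumes len(buf) is even
    out[0::2] = buf[1::2]  # high bytes into even positions
    out[1::2] = buf[0::2]  # low bytes into odd positions
    return bytes(out)

swapped = reverse_byte_order_16(b"\x01\x02\x03\x04")  # b"\x02\x01\x04\x03"
```

Applying it twice gives back the original buffer, which is handy for round-trip testing on the host.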

Personally I don't like spending camera CPU cycles converting a big amount of data only to undo the conversion later on the computer, so I am converting the image buffer I get over USB to "wasteful" 16 bits, so I can use little-endian, and then compressing it.

I suspect there is going to be far more software able to handle 16-bit little-endian compressed DNGs than 14-bit ones. Anyway, I need this thing to work as fast as possible and don't have time to deal with endianness, at least until it works.
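For reference, unpacking 14-bit big-endian (MSB-first) samples into plain 16-bit values only needs a small bit accumulator. This is my own sketch of the general technique, not ML's code:

```python
def unpack_14bit_be(raw: bytes, count: int):
    """Unpack `count` 14-bit big-endian samples from a packed byte string.
    The resulting ints can be stored directly as 16-bit little-endian."""
    samples = []
    acc = 0     # bit accumulator
    nbits = 0   # number of valid bits currently in acc
    for byte in raw:
        acc = (acc << 8) | byte
        nbits += 8
        while nbits >= 14 and len(samples) < count:
            nbits -= 14
            samples.append((acc >> nbits) & 0x3FFF)  # top 14 bits
            acc &= (1 << nbits) - 1                  # keep the remainder
    return samples

# Two samples (0x3FFF and 0x0001) packed into 28 bits plus 4 padding bits.
vals = unpack_14bit_be(b"\xff\xfc\x00\x10", 2)  # [0x3FFF, 0x0001]
```

This is the slow-but-obvious version; on the host that is fine, which is exactly the point of not doing it on the camera.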

heder

Quote: There are a couple of things that I don't understand. For example, the code works if I replace the ptp-chdk.c/h code in the src folder with my own, but exactly the same code does not work as a module. I simplified it down to just the version and the shooter code (3 lines) and the problem remains: as a module the PTP handler is never activated, but in src it is.

Hi.

Post the mini module code, please. Modules are installed at runtime and called via hooks at runtime. Your metadata section needs to be correct.

troma

Hello heder.

Right now I am working on the computer side, creating DNGs on the computer from the image buffer sent by a ptp-chdk.c/h derivative, but with my own custom PTP_OC_EOSM command and handler.

Once this is working I will go back and try to make it a module again. My idea was to use a Lisp program to inspect what is happening in an emulator like QEMU, but there are too many things I don't know about the ML architecture for that.

Could you tell me what you are calling "metadata", please?

I created the module by making a new directory (ptp_eosm) and adding it to Makefile.modules.User.

I copied another Makefile and changed the name. It compiles and loads on the camera fine with make, and cleans with make clean.

I don't know if I have to do anything else.

It must be something really stupid, because even the most basic thing, like sending the version alone, did not work. I made the smallest program possible in order to isolate the problem, but could not find what it was.

If I read and study the ML makefiles and sources I will probably figure out what is happening on my own, but I don't know what to search for, or where to start.

When I try again I will post my attempts.

troma

A very strange thing is happening.

I am creating a DNG from scratch, loading the metadata from a DNG I took with the EOS M and combining it with the buffer I download over USB.

So basically I take most default values from that DNG, take the buffer, reverse the byte ordering and create a new DNG following the TIFF spec (a DNG is a TIFF). I preserve the 14 bits per sample.
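The skeleton of such a file is small. Here is a hedged Python sketch of the fixed little-endian TIFF header and one inline IFD entry; the tag and value are illustrative examples, not the full DNG tag set a real file needs:

```python
import struct

def tiff_header_le(first_ifd_offset: int = 8) -> bytes:
    """Little-endian TIFF header: byte-order mark 'II', magic number 42,
    and the offset of the first IFD. A DNG starts exactly like this."""
    return struct.pack("<2sHI", b"II", 42, first_ifd_offset)

def ifd_entry(tag: int, typ: int, count: int, value: int) -> bytes:
    """One 12-byte IFD entry; `value` must fit inline in the 4-byte field."""
    return struct.pack("<HHII", tag, typ, count, value)

header = tiff_header_le()           # 8 bytes: II, 42, first IFD at offset 8
width = ifd_entry(256, 4, 1, 5184)  # tag 256 = ImageWidth, type 4 = LONG
```

A complete file would follow the header with an entry count, the sorted entries, a next-IFD offset of 0, and then the strip data the entries point at.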

The order I save things is different from the one used by chdk.

What is the strange thing? The strange thing is that IT WORKS!! I can see it in Shotwell (the Linux default viewer), I can see it in mlrawviewer just fine, and I can see it in RawTherapee!!

kayman1021

Very nice!
Just curious, are any focus pixels present on the pictures captured this way?
EOS 100D, Xiaomi A1

troma

@kayman1021: Focus pixels are only present in liveview images (lower resolution and video).
In full-res raw I suppose they are eliminated by the hardware automatically; that is at least what I have read in the forum topics.

I personally don't see any focus pixels, but I am not an expert. I will post some pictures in the future so you can view them yourselves.

Danne

Interesting. Could you upload a DNG file produced this way?

troma

@Danne: I will when my vacation ends. Right now I don't have enough bandwidth.