5DII as scientific instrument

Started by ManlyC, October 25, 2013, 12:02:13 PM

ManlyC

I'm currently working on a PhD in the field of microfluidics.
To measure the dispersion in microfluidic chips I'm making movies of fluorescent dyes using a terribly expensive Hamamatsu camera.
This camera has a 14-bit 128 by 128 pixel sensor and generates text files which can easily be opened in calculation software like Matlab.
Each number in the text file represents an intensity at a given location on the sensor.
From this time stack of matrices I make very interesting graphs :-)

I can't use the normal video function since it gives me only 256 levels.
So I was trying to use my 5DII to make 14-bit 720p movies at 10 frames per second or so.
This gives me RAW files which I have to convert into multiple DNG files, and after that I extract the numbers from each of them.
This is a rather time-consuming procedure.

So my question is: how would I develop 5DII firmware that records raw video and saves every frame as a text file?
Where do I start?

a1ex

My suggestion is to learn to work with binary files. Let dcraw be your friend, especially dcraw -4 -D.

ManlyC

Quote from: a1ex on October 25, 2013, 12:21:51 PM
My suggestion is to learn to work with binary files. Let dcraw be your friend, especially dcraw -4 -D.

Dcraw works fine; it makes a TIFF file from a DNG.
The TIFF file I can easily read in Matlab to make graphs from it.
But I still need to run raw2dng first and then dcraw for every frame.
New questions:
Is there a way to convert RAW files into a multipage TIFF?
Why -C with dcraw?

Remaining question:
Where to start if I want my camera to record in text, multipage TIFF or even Matlab .mat files?


g3gg0

A hint:
if you write a .bat, .cmd or .sh script, it is no real effort to process the files.
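
For example, here is a rough sketch of the whole pipeline in Python (the same idea works as a .sh loop, or driven from Matlab). The file names and the DNG naming are placeholders, so adjust them to whatever raw2dng actually writes on your system, and make sure raw2dng and dcraw are on your PATH:

    import glob
    import subprocess

    import numpy as np
    import tifffile                       # any 16-bit TIFF reader works; Matlab's imread does too
    from scipy.io import savemat

    # Split the Magic Lantern .RAW into one DNG per frame ("M00-0000.RAW" is a placeholder).
    subprocess.run(["raw2dng", "M00-0000.RAW"], check=True)

    frames = []
    for dng in sorted(glob.glob("*.dng")):
        # -4: linear 16-bit output, -D: no demosaicing (untouched sensor values), -T: write TIFF
        subprocess.run(["dcraw", "-4", "-D", "-T", dng], check=True)
        frames.append(tifffile.imread(dng.replace(".dng", ".tiff")))

    stack = np.stack(frames)                              # shape: (frames, height, width)
    savemat("dispersion_stack.mat", {"frames": stack})    # open in Matlab with load()

If you prefer a multipage TIFF over a .mat file, tifffile.imwrite("stack.tiff", stack) on the same array gives you that instead.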

maxotics

If I can elaborate on A1ex's suggestion, which I completely agree with: part of the expense of those specialized scientific cameras is that they test and calibrate every pixel.  You know you're getting 128x128 near-perfect readings.  So you may need to do some calibration work to get good results from any consumer camera.

Are you familiar with any development environments, Windows, Linux, Mac?  Any languages? 

maxotics

Here is some code I'm working on to work with RAW image data.  It's cribbed from A1ex and g3gg0's work (which I have to dumb down before I can deal with it).  This code requires some code from g3gg0's MLV viewer.  Anyway, if this looks like something you can work with, then there is almost nothing you couldn't do!  Their code is 100x better, by the way, but mine might be easier to learn from.

The purpose of my code is to work with the RAW data so I can isolate and interpolate around the "focus pixels" of the EOS-M.  It may be overkill for what you need, or it might be important if you need to isolate each pixel color channel.

As they said, a thorough understanding of dcraw is good.  I recommend this article: http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm



    // Needed at the top of the source file (this is a Windows Forms project and relies on
    // g3gg0's MLV viewer sources for RAWHelper, BitUnpackCanon, Debayer and LockBitmap):
    //   using System;
    //   using System.IO;
    //   using System.Text;
    //   using System.Drawing;
    //   using System.Drawing.Imaging;
    //   using System.Runtime.InteropServices;
    //   using int16_t = System.Int16;
    //   using int32_t = System.Int32;
    //   using uint32_t = System.UInt32;

    // STRUCTS OF RAW
    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    struct raw_footer
    {
        [MarshalAsAttribute(UnmanagedType.ByValTStr, SizeConst = 4)]
        public string magic;
        public int16_t xRes;
        public int16_t yRes;
        public int32_t frameSize;
        public int32_t frameCount;
        public int32_t frameSkip;
        public int32_t sourceFpsx1000;
        public int32_t reserved3;
        public int32_t reserved4;
        public raw_info_2 raw_info;
    }

    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    struct raw_info_2
    {
        public int api_version;            // increase this when changing the structure
        public uint32_t buffer;               // points to image data

        public int height, width, pitch;
        public int frame_size;
        public int bits_per_pixel;         // 14

        public int black_level;            // autodetected
        public int white_level;            // somewhere around 13000 - 16000, varies with camera, settings etc

        public raw_info_crop2 jpeg;
        public raw_info_active_area2 active_area;

        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 2)]
        public int[] exposure_bias;       // DNG Exposure Bias (idk what's that)
        public int cfa_pattern;            // stick to 0x02010100 (RGBG) if you can
        public int calibration_illuminant1;

        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 18)]
        public int[] color_matrix1;      // DNG Color Matrix

        public int dynamic_range;          // EV x100, from analyzing black level and noise (very close to DxO)
    }

    struct raw_info_crop2
    {
        public int x, y;           // DNG JPEG top left corner
        public int width, height;  // DNG JPEG size
    }

    struct raw_info_active_area2
    {
        public int y1, x1, y2, x2;
    }
    // END STRUCTS FOR RAW





      private void btnLoadFrame1Proc_Click(object sender, EventArgs e)
        {
            String FileName = "C:\\Files2013_RawFootage\\EOSM\\20131022WalkSigma\\M22-1252.RAW";
           
            raw_footer Footer;
           
            TB_Status.AppendText("Opening: " + FileName + "\n");

            if (File.Exists(FileName))
            {

                BinaryReader Reader;
                Reader = new BinaryReader(File.Open(FileName, FileMode.Open, FileAccess.Read, FileShare.Read));

                 /* file footer data */
                int headerSize = Marshal.SizeOf(typeof(raw_footer));
               
                byte[] buf = new byte[headerSize];
               
                Reader.BaseStream.Position = Reader.BaseStream.Length - headerSize;

                // Should read headersize into buf
                if (Reader.Read(buf, 0, headerSize) != headerSize)
                {
                    throw new ArgumentException();
                }

                Reader.BaseStream.Position = 0;

                string type = Encoding.UTF8.GetString(buf, 0, 4);
                if (type != "RAWM")
                {
                    throw new ArgumentException();
                }
                TB_Status.AppendText("type: " + type + "\n");

                Footer = RAWHelper.ReadStruct<raw_footer>(buf);
 
                TB_Status.AppendText("Frame width: " + Footer.xRes + "\n");
                TB_Status.AppendText("Frame height: " + Footer.yRes + "\n");
                TB_Status.AppendText("Frame size: " + Footer.frameSize + "\n");
                TB_Status.AppendText("Frame count: " + Footer.frameCount + "\n");
                TB_Status.AppendText("Frame info: " + Footer.raw_info + "\n");

                /* go to first frame */
                Reader.BaseStream.Position = 0;

                FrameBuffer = new byte[Footer.frameSize];


                for (int campos = 0; campos < camMatrix.Length; campos++)
                {
                    camMatrix[campos] = (float)Footer.raw_info.color_matrix1[2 * campos] / (float)Footer.raw_info.color_matrix1[2 * campos + 1];
                }

                Bitunpack.BitsPerPixel = Footer.raw_info.bits_per_pixel;

                Debayer.Saturation = 0.12f;
                Debayer.Brightness = 4000;
                Debayer.BlackLevel = Footer.raw_info.black_level;
                Debayer.WhiteLevel = Footer.raw_info.white_level;
                Debayer.CamMatrix = camMatrix;

                /* simple fix to overcome an mlv_dump misbehavior: it simply doesn't scale white and black level when changing bit depth */
                while (Debayer.WhiteLevel > (1 << Footer.raw_info.bits_per_pixel))
                {
                    Debayer.BlackLevel >>= 1;
                    Debayer.WhiteLevel >>= 1;
                }

                // These arrays need to be size of frame
                // before going to Bitunpack and Debayer
                PixelData = new ushort[Footer.yRes, Footer.xRes];
                RGBData = new pixelType[Footer.yRes, Footer.xRes, 3];

                /* read RAW block into byte[] array */
                Reader.Read(FrameBuffer, 0, Footer.frameSize);

                /* first extract the raw channel values.
                 * This stuff is complicated: it unpacks Canon's
                 * packed blocks of pixel data (see "BitUnpackCanon.cs")
                 * into the ushort[,] array PixelData.
                 */
                Bitunpack.Process(FrameBuffer, 0, Footer.frameSize, PixelData);

                /* then debayer the pixel data
                 * we take the ushort[,] PixelData
                 * and put it into
                 * [r,g,b] array
                 */

                // We send an array 720x1280, and get back 720x1280x3
                Debayer.Process(PixelData, RGBData);

                /* and transform into a bitmap for displaying
                 * See LockBitmap.cs
                 */
                CurrentFrame = new System.Drawing.Bitmap(Footer.xRes, Footer.yRes, PixelFormat.Format24bppRgb);
                LockBitmap = new LockBitmap(CurrentFrame);

                LockBitmap.LockBits();

                double mydouble;

                int posimage = 0;
                for (int y = 0; y < Footer.yRes; y++)
                {
                    for (int x = 0; x < Footer.xRes; x++)
                    {
                        /* scale the 14-bit raw value down to 8 bits and then into TV
                         * black/white levels (16..235); do the arithmetic in an int so
                         * the multiplication cannot silently overflow a ushort */
                        int scaled = PixelData[y, x] >> (Footer.raw_info.bits_per_pixel - 8);
                        scaled = scaled * (235 - 16) / 256 + 16;

                        mydouble = Math.Max(0, Math.Min(255, scaled));

                        /* classify the pixel by its position in the Bayer mosaic.
                         * Row parity picks the red/green vs. blue/green row, column
                         * parity picks the pixel within the row.  The exact phase
                         * (which corner is red) depends on the camera's CFA layout,
                         * so swap the tests if the colors come out wrong. */
                        if (y % 2 == 0)
                        {
                            // red/green row
                            if (x % 2 == 0)
                            {
                                // green
                                mydouble = 100;
                                LockBitmap.Pixels[posimage++] = (byte)(mydouble); // blue
                                LockBitmap.Pixels[posimage++] = (byte)(mydouble); // green
                                LockBitmap.Pixels[posimage++] = (byte)(mydouble); // red
                            }
                            else
                            {
                                // red
                                LockBitmap.Pixels[posimage++] = (byte)(0);        // blue
                                LockBitmap.Pixels[posimage++] = (byte)(0);        // green
                                LockBitmap.Pixels[posimage++] = (byte)(mydouble); // red
                            }
                        }
                        else
                        {
                            // blue/green row
                            if (x % 2 == 0)
                            {
                                // blue
                                LockBitmap.Pixels[posimage++] = (byte)(mydouble); // blue
                                LockBitmap.Pixels[posimage++] = (byte)(0);        // green
                                LockBitmap.Pixels[posimage++] = (byte)(0);        // red
                            }
                            else
                            {
                                // green
                                mydouble = 100;
                                LockBitmap.Pixels[posimage++] = (byte)(mydouble); // blue
                                LockBitmap.Pixels[posimage++] = (byte)(mydouble); // green
                                LockBitmap.Pixels[posimage++] = (byte)(mydouble); // red
                            }
                        }
                    }
                }


                LockBitmap.UnlockBits();

                Bitmap bmp = new Bitmap(CurrentFrame);
               
                pictureBox1.Image = bmp;

                TB_Status.AppendText("bmp pixelformat is: " + bmp.PixelFormat + "\n");
                TB_Status.AppendText("frame width is: " + bmp.Width + "\n");
                TB_Status.AppendText("frame height is: " + bmp.Height + "\n");

                Reader.Close();

            }
        }


ManlyC

Quote from: maxotics on October 25, 2013, 02:17:43 PM
If I can elaborate on A1ex's suggestion, which I completely agree with: part of the expense of those specialized scientific cameras is that they test and calibrate every pixel.  You know you're getting 128x128 near-perfect readings.  So you may need to do some calibration work to get good results from any consumer camera.

No need for color calibration; I have a bandpass filter that only lets the fluorescent signal pass.
Linear response and 16384 discrete levels instead of 256 are important.

Quote from: maxotics on October 25, 2013, 02:17:43 PM
Are you familiar with any development environments, Windows, Linux, Mac?  Any languages?

I'm on Debian Linux.

maxotics

Quote from: ManlyC on October 25, 2013, 03:05:26 PM
No need for color calibration; I have a bandpass filter that only lets the fluorescent signal pass.
Linear response and 16384 discrete levels instead of 256 are important.

I'm on Debian Linux.

I work on Windows, so I can only be of limited use to you.  As g3gg0 says, you need to create/find/modify a script to process your DNGs through dcraw in a way that meets your needs.  I've read the article I posted above many times.  I believe that if you master dcraw (or some variant of it), you can do what you want to do.

ManlyC

OK, I understand: no need to write custom firmware to satisfy my needs.
Just play with raw2dng and dcraw, maybe some scripting.

Thanks to the Magic Lantern developers and the forum members for this useful information on turning my 5DII into a scientific instrument.

CBGoodBuddy

Quote from: ManlyC on October 25, 2013, 03:05:26 PM
No need for color calibration; I have a bandpass filter that only lets the fluorescent signal pass.
Linear response and 16384 discrete levels instead of 256 are important.

Wavelength calibration is only one issue.  There are others.

The 5DII has a Bayer pattern filter, which is how red, green and blue are measured.  Depending on the wavelength of interest, you will probably want to use only the pixels of the same color as your fluorescent light.  For example, if your light is at 550 nm, include only the green pixels (50% of the sensor) and exclude the 25% red and 25% blue pixels.
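
If it helps, here is a minimal Python sketch of that selection, run on a frame converted with dcraw -4 -D -T (the RGGB phase and the file name are assumptions, so check them against your own frames):

    import numpy as np
    import tifffile

    # One undemosaiced frame straight from dcraw -4 -D -T (placeholder file name).
    mosaic = tifffile.imread("frame_000000.tiff").astype(np.float64)

    # Assuming an RGGB layout:  row 0: R G R G ...   row 1: G B G B ...
    g1 = mosaic[0::2, 1::2]    # green pixels on the red/green rows
    g2 = mosaic[1::2, 0::2]    # green pixels on the blue/green rows
    green = np.concatenate([g1.ravel(), g2.ravel()])   # the ~50% of samples you keep for ~550 nm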

In video mode, there is also line-skipping, unless you can go into crop mode, so consider that.

Each individual pixel might have a different response than the others.  You need a way to normalize that out.  In my field of work we call that flat-fielding.  At the very least you need to be able to recognize dead or hot pixels and exclude them.  Really there are two effects: pixel sensitivity (flat field effects) and dark current.   Pixel sensitivity produces varying output samples in different pixels under the same light stimulus.  Dark current produces different output samples in dark conditions (due to varying leakage current in the sensor elements).
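
For illustration, a rough sketch of that correction (Python; dark_*.tiff and flat_*.tiff are placeholder names for capped-lens frames and uniformly lit frames, both converted with dcraw -4 -D -T):

    import glob
    import numpy as np
    import tifffile

    # Average a stack of dark frames (lens capped) and flat frames (uniform illumination).
    dark = np.mean([tifffile.imread(f) for f in sorted(glob.glob("dark_*.tiff"))], axis=0)
    flat = np.mean([tifffile.imread(f) for f in sorted(glob.glob("flat_*.tiff"))], axis=0)

    gain = flat - dark
    gain /= np.mean(gain)                    # normalize to unit average gain

    frame = tifffile.imread("frame_000000.tiff").astype(np.float64)
    corrected = (frame - dark) / gain        # dark-subtracted, flat-fielded frame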

In short, if laboratory accuracy is important, you should spend some time characterizing your instrument in dark and in uniform lighting conditions, and also understanding the spatial sampling you lose due to Bayer and line-skipping.

CB

SpcCb

Agreed with CB.
We use a similar process here in astronomy with bandpass filters (e.g. Ha, OIII) through the Bayer matrix on a 5D MkII.
Plus, don't forget the RON (read-out noise) in the process.