Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - g3gg0

General Help Q&A / MOVED: EOS 60D Dead
« on: November 06, 2021, 11:26:07 AM »

Camera-specific Development / Canon 5DS / 5DS R
« on: February 10, 2018, 01:14:29 PM »
Just for the record: I pushed some 5DS experiments.

Booting the firmware is a bit different now, but it works.
The code is hardcoded right now and just meant as an experiment / documentation.

I wasn't able to display anything meaningful yet.
I could write into some YUV buffers or modify the graphics processor's RAM,
but nothing really usable so far.


It is possible to compile Magic Lantern and QEMU on Windows without any third-party programs like Cygwin, MSYS or VirtualBox, solely by using Windows' native Linux compatibility layer.

Magic Lantern

For those who didn't know, Microsoft added wrappers that allow Linux code to execute properly.
You just have to enable it, as described on Microsoft's website.
This gives you "bash", the famous Linux shell, directly within Windows.

OS Preparation

After you have installed Ubuntu, you should install a few standard tools.

Depending on the Windows 10 installation you have, you might be able to simply execute "bash" via Win+R or a menu entry called Bash or Ubuntu etc.
Then, in bash run: 

Code: [Select]
sudo apt-get update
sudo apt-get install make gcc gcc-arm-none-eabi mercurial gcc-mingw-w64 python3-docutils zip

There were also cases where you had to install python2 - your mileage may vary.

Code: [Select]
sudo apt-get install python2


Directly clone Magic Lantern from the Mercurial repository using this command:
Code: [Select]
hg clone -u unified
It will download the latest version (the unified branch).


First determine the exact arm-gcc compiler version you have, either by executing
Code: [Select]
ls /usr/lib/gcc/arm-none-eabi/
or by entering
Code: [Select]
arm-none-eabi-gcc- [TAB] [TAB]

Then use your favorite text editor in either Linux or Windows and create a file named Makefile.user with only this content (version as determined above, including the leading dash):
Code: [Select]
GCC_VERSION=-<your gcc version>
ARM_PATH=/usr
Open a Windows shell in the folder where your makefiles are and run 'bash',
and you should be able to compile Magic Lantern on Windows at *native* compile speed :)

Here is an "all-in-one" script by a1ex, slightly modified:
Code: [Select]
# prepare system
sudo apt-get update
sudo apt-get install make gcc gcc-arm-none-eabi mercurial gcc-mingw-w64 python3-docutils zip

# download and prepare ML
hg clone -u unified
cd magic-lantern
echo "GCC_VERSION=-`ls /usr/lib/gcc/arm-none-eabi/`" > Makefile.user
echo "ARM_PATH=/usr" >> Makefile.user

# preparation complete, now build ML
cd platform/5D3.123
make zip

# desktop utilities
cd ../../modules/mlv_rec
make mlv_dump.exe
cd ../../modules/dual_iso
make cr2hdr.exe

# ports in progress (100D, 70D)
hg update 100D_merge_fw101 -C # use TAB to find the exact name
hg merge unified # or lua_fix or whatever (optional)
cd ../../platform/100D.101
make zip

# 4K with sound
hg update crop_rec_4k_mlv_snd -C
cd ../../platform/5D3.123
make clean; make zip

# quick build (autoexec.bin only, without modules)
cd ../../platform/5D3.123

# recovery (portable display test, ROM dumper, CPU info...)
hg update recovery -C
cd ../../platform/portable.000

QEMU (or: how to run the Canon OS within qemu within the linux environment within windows 10 on an x64 CPU)

If you were successful in compiling Magic Lantern, then why not compile QEMU next?

Install the missing packages (please review these):
Code: [Select]
sudo apt-get update
sudo apt-get install zlib1g-dev libglib2.0 autoconf libtool libsdl-console flex bison libgtk2.0-dev mtools
sudo apt-get install libsdl-console-dev

The last one - libsdl-console-dev - caused some trouble: I could not download some (unnecessary) DRM graphics drivers.
I used aptitude to inspect the status; don't ask me what I did, but aptitude asked whether I wanted to examine its recommendations and I accepted them.
Suddenly libdrm was held back and all the other packages got installed.

You probably have to switch to the qemu branch:
Code: [Select]
hg update qemu

Then it is time to compile QEMU using the script in contrib/qemu/.
Make sure your Magic Lantern path is named "magic-lantern", otherwise the script will abort.

A hint by a1ex (doesn't happen on my system):
for some reason, the script's output is truncated.
Opening a new terminal appears to fix it (?!).
If it still doesn't work: ./ |& tee install.log
then open install.log in a text editor to read it.

When it's done, do what it says:
    a) cd `pwd`/some_path_here
    b) run ../
    c) make -j4   (or the number of cores your CPU has)

If you now run the script, you get an error telling you:
Code: [Select]
qemu-system-arm: -chardev socket,server,nowait,path=qemu.monitor,id=monsock: Failed to bind socket to qemu.monitor: Operation not permitted

My assumption is that the unix domain socket implementation in WSL is either buggy or at least incompatible with QEMU.
So the script needs some patches before it runs - remove these lines:

Code: [Select]
    -chardev socket,server,nowait,path=qemu.monitor,id=monsock \
    -mon chardev=monsock,mode=readline \
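If you prefer not to edit the script by hand, the two offending lines can be stripped programmatically. This is a minimal Python sketch; the helper name and the substring-filtering approach are my own, not part of the original scripts:

```python
# Minimal sketch: drop the two monitor-socket lines that fail under WSL.
# Filtering by substring is an assumption, not the official fix.
def strip_monitor_socket(script_text: str) -> str:
    kept = [line for line in script_text.splitlines()
            if "qemu.monitor" not in line and "chardev=monsock" not in line]
    return "\n".join(kept) + "\n"
```

Run it over the script, check the result, and write it back (keep a backup of the original).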


Hardware and Accessories / MOVED: I'm selling my VAF-5D2b in Europe
« on: July 12, 2017, 02:43:03 PM »
This topic has been removed. No selling threads.

Raw Video / Solar Eclipse MLV filming?
« on: June 11, 2017, 04:20:55 PM »
Hello there,

Inspired by this SmarterEveryDay video, I am really curious whether someone plans to
catch some cool phenomena during the eclipse on August 21st in the States using their Canon cameras and make a cool video of it.

E.g. the so-called "shadow bands" would surely look better in MLV than on an iPhone camera, as seen on YouTube :D

Reverse Engineering / Register Map - We need your support!
« on: March 03, 2015, 09:23:12 PM »
Hi there.

I recently decided to make a "clean" database of the register map from our wiki.
I defined the data format and made a (win/.net) tool that prints a pretty representation of the registers, like in datasheets.

Code: [Select]

        <EngineDescription Name="SDCON">
                <Register Offset="0x000" Name="" Text="Unknown" Description="Set to 0x00 on init"/>
                <Register Offset="0x004" Name="" Text="Unknown" Description="Set to 0x01 on init"/>
                <Register Offset="0x008" Name="" Text="Unknown" Description="Set to 0x00 on init, 0x01/0xF1 before read/write, not used for status block. means: use DMA?"/>
                <Register Offset="0x00C" Name="" Text="Unknown" Description="Set to 0x14/0x13/0x12/0x11/0x02 on command, after writing regs +0x024, +0x020 and +0x010, with 0x11, registers +0x028/+0x02C is ignored probably"/>
                <Register Offset="0x010" Name="" Text="Status Register" Description="">
                        <RegisterField xsi:type="Bit" Pos="0" Name="" Text="Transfer finished" Description="" />
                        <RegisterField xsi:type="Bit" Pos="1" Name="" Text="Error during transfer" Description="" />
                        <RegisterField xsi:type="Bit" Pos="20" Name="" Text="DAT transfer data available in reg +0x06C?" Description="" />
                        <RegisterField xsi:type="Bit" Pos="21" Name="" Text="DAT transfer finished?" Description="" />
                </Register>
                <Register Offset="0x014" Name="" Text="Unknown" Description="Set to 0x03 before transfer start, 0x00 on ISR"/>
                <Register Offset="0x018" Name="" Text="Unknown" Description="Set to 0x08 on init"/>
                <Register Offset="0x020" Name="" Text="Command frame lower 32 bits" Description="needs 0x0001 being set (end bit)"/>
                <Register Offset="0x024" Name="" Text="Command frame upper 16 bits" Description="needs 0x4000 being set (transmission bit)"/>
                <Register Offset="0x028" Name="" Text="Unknown" Description="Written with 0x88/0x30/0x30 before CMD"/>
                <Register Offset="0x02C" Name="" Text="Unknown" Description="Written with 0x7F08/0x2701/0x80000000 before CMD"/>
                <Register Offset="0x034" Name="" Text="Data received lower 32 bits" Description=""/>
                <Register Offset="0x038" Name="" Text="Data received upper 16 bits" Description=""/>
                <Register Offset="0x058" Name="" Text="SD bus width" Description="">
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="0---0  1 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="0---1  4 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="1---0  8 bit" Description="" />
                </Register>
                <Register Offset="0x05C" Name="" Text="Write transfer block size" Description=""/>
                <Register Offset="0x064" Name="" Text="SD bus width" Description="">
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="0---0  1 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="0---1  4 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="1---0  8 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="20" End="27" Name="" Text="01100000  1 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="20" End="27" Name="" Text="01100000  4 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="20" End="27" Name="" Text="01110000  8 bit" Description="" />
                </Register>
                <Register Offset="0x068" Name="" Text="Read transfer block size" Description=""/>
                <Register Offset="0x070" Name="" Text="Some flags" Description="set to 0x39 before transfer">
                        <RegisterField xsi:type="Bit" Pos="0" Name="Transfer running" Text="" Description="" />
                </Register>
                <Register Offset="0x07C" Name="" Text="Read/Write transfer block count" Description=""/>
                <Register Offset="0x080" Name="" Text="Transferred blocks" Description=""/>
                <Register Offset="0x084" Name="SDREP" Text="Status register/error codes" Description=""/>
                <Register Offset="0x088" Name="SDBUFCTR" Text="Buffer counter?" Description="Set to 0x03 before reading/writing"/>
        </EngineDescription>

and the output will be
Code: [Select]
SDCON Engines
  0xC0C10000    SDCON0
  0xC0C20000    SDCON1
  0xC0C30000    SDCON2
  0xC0C40000    SDCON3
    +0x0000    Unknown
                Set to 0x00 on init
    +0x0004    Unknown
                Set to 0x01 on init
    +0x0008    Unknown
                Set to 0x00 on init, 0x01/0xF1 before read/write, not used for status block. means: use DMA?
    +0x000C    Unknown
                Set to 0x14/0x13/0x12/0x11/0x02 on command, after writing regs +0x024, +0x020 and +0x010, with 0x11, registers +0x028/+0x02C is ignored probably
    +0x0010    Status Register
      -------- -------- -------- -------X     Transfer finished
      -------- -------- -------- ------X-     Error during transfer
      -------- ---X---- -------- --------     DAT transfer data available in reg +0x06C?
      -------- --X----- -------- --------     DAT transfer finished?

    +0x0014    Unknown
                Set to 0x03 before transfer start, 0x00 on ISR
    +0x0018    Unknown
                Set to 0x08 on init
    +0x0020    Command frame lower 32 bits
                needs 0x0001 being set (end bit)
    +0x0024    Command frame upper 16 bits
                needs 0x4000 being set (transmission bit)
    +0x0028    Unknown
                Written with 0x88/0x30/0x30 before CMD
    +0x002C    Unknown
                Written with 0x7F08/0x2701/0x80000000 before CMD
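For illustration, the dashed bit rows in the output above can be produced like this; `bit_row` is a hypothetical helper, not part of the actual win/.net tool:

```python
# Render one RegisterField bit as a 32-bit mask row, MSB first,
# grouped into four octets like the pretty-printed output above.
def bit_row(pos: int, label: str) -> str:
    bits = ["-"] * 32
    bits[31 - pos] = "X"          # bit 0 is the rightmost character
    grouped = " ".join("".join(bits[i:i + 8]) for i in range(0, 32, 8))
    return f"      {grouped}     {label}"
```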

Now my question is: will there be some helpers who try to transfer the information from our wiki and from a1ex's adtg_gui into a single XML file?

if you want to help, then
 * pick the example registermap.xml
 * and the pretty-printer for win32.
 * go to the wiki/adtg_gui
 * and add a missing section to the XML file
 * check how it looks
 * post it here in the forum :)
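To get a feel for the data format, here is a minimal Python sketch of what such a pretty-printer does, assuming the XML shape shown above (the real tool is win/.net and far more complete; the SIO sample content here is hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample in the registermap.xml shape shown above.
SAMPLE = """<EngineDescription Name="SIO">
    <Register Offset="0x004" Name="" Text="Unknown" Description="Set to 0x01 on init"/>
    <Register Offset="0x084" Name="SDREP" Text="Status register/error codes" Description=""/>
</EngineDescription>"""

def pretty_print(xml_text: str) -> str:
    """Print one line per register, plus an indented description line."""
    root = ET.fromstring(xml_text)
    out = []
    for reg in root.iter("Register"):
        out.append(f"    +0x{int(reg.get('Offset'), 16):04X}    {reg.get('Text')}")
        if reg.get("Description"):
            out.append(f"                {reg.get('Description')}")
    return "\n".join(out)
```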

It's enough if you just post the

<EngineDescription Name="SIO">

and the corresponding group

<Group Name="SIO Engines" Engine="SIO" Device="Digic">
        <Engine Address="0xC0820000" Name="SIO0"/>
        <Engine Address="0xC0820100" Name="SIO1"/>
        <Engine Address="0xC0820200" Name="SIO2"/>
        <Engine Address="0xC0820300" Name="SIO3"/>
</Group>

Everyone can then merge the new ones into their XML.
It's not too complicated, and you get some insight into what is happening in those registers.
Maybe some of you have findings that will improve the register map?

thanks :)

Reverse Engineering / FRSP related infos
« on: January 22, 2015, 10:55:35 PM »
Some reverse engineering notes.

For FRSP, we first call FA_CreateTestImage to get a prepared image job.

Code: [Select]
struct struc_JobClass *cmd_FA_CreateTestImage()
  struct struc_JobClass *job; // r4@3
  unsigned int length; // [sp+Ch] [bp-44h]@3
  unsigned int *data_ptr; // [sp+10h] [bp-40h]@3
  struct struc_ShootParm tv; // [sp+14h] [bp-3Ch]@3

  DryosDebugMsg(0x90, 0x16, "FA_CreateTestImage");
  if ( !word_2771C )
  PROP_GetMulticastProperty(PROP_SHUTTER, &data_ptr, &length);
  tv.Tv = *data_ptr;
  tv.Tv2 = *data_ptr;
  PROP_GetMulticastProperty(PROP_APERTURE, &data_ptr, &length);
  tv.Av = *data_ptr;
  tv.Av2 = *data_ptr;
  PROP_GetMulticastProperty(PROP_ISO, &data_ptr, &length);
  tv.ISO = *data_ptr;
  tv.PO_lo = 185;
  tv.PO_hi = 0;
  tv.TP = 153;
  job = CreateSkeltonJob(&tv, FA_CreateTestImage_cbr);
  DryosDebugMsg(0x90, 0x16, "hJob(%#lx)(tv=%#x,av=%#x,iso=%#x)", job, (unsigned __int8)tv.Tv, (unsigned __int8)tv.Av, (unsigned __int8)tv.ISO);
  DryosDebugMsg(0x90, 0x16, "FA_CreateTestImage Fin");
  return job;

It sets factory mode and reads Tv, Av and ISO into a struct struc_ShootParm:

Code: [Select]
#pragma pack(push, 1)
struct __attribute__((packed)) __attribute__((aligned(1))) struc_ShootParm
{
  char Tv;
  char Av;
  char Tv2;
  char Av2;
  char ISO;
  char field_5;
  char unk_HI;
  char unk_LO;
  int field_8;
  int field_C;
  char field_10;
  char field_11;
  char WftReleaseCheck;
  char field_13;
  char field_14;
  char field_15;
  char field_16;
  char field_17;
  char field_18;
  char TP;
  char field_1A;
  char PO_hi;
  char PO_lo;
  char field_1D;
  char field_1E;
  char field_1F;
  char field_20;
  char field_21;
  char field_22;
  char field_23;
  char field_24;
  __int16 field_25;
  char field_27;
  char field_28;
  char field_29;
  char field_2A;
  char field_2B;
  int field_2C;
  char EshutMode__;
  char EshutMode_;
  char field_32;
  char field_33;
  int field_34;
  int field_38;
  int field_3C;
  char field_40;
};
#pragma pack(pop)
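To double-check the byte offsets implied by this packed layout, here is a partial ctypes rendering in Python; only a few named fields are reproduced, and the `_pad*` members are my own placeholders for the unnamed fields in between:

```python
import ctypes

class ShootParm(ctypes.Structure):
    # 1-byte packing, matching #pragma pack(push, 1) above
    _pack_ = 1
    _fields_ = [
        ("Tv",     ctypes.c_uint8),         # +0x00
        ("Av",     ctypes.c_uint8),         # +0x01
        ("Tv2",    ctypes.c_uint8),         # +0x02
        ("Av2",    ctypes.c_uint8),         # +0x03
        ("ISO",    ctypes.c_uint8),         # +0x04
        ("_pad05", ctypes.c_uint8 * 0x14),  # +0x05 .. +0x18 (unnamed fields)
        ("TP",     ctypes.c_uint8),         # +0x19
        ("_pad1A", ctypes.c_uint8),         # +0x1A
        ("PO_hi",  ctypes.c_uint8),         # +0x1B
        ("PO_lo",  ctypes.c_uint8),         # +0x1C
    ]
```

Note that CreateSkeltonJob below copies only the first 0x31 bytes of this struct into the job (memcpy_0(..., 0x31u)).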

and then calls CreateSkeltonJob to create a job for these parameters

Code: [Select]
struct struc_JobClass *__cdecl CreateSkeltonJob(struct struc_ShootParm *shootParam, int (__cdecl *cbr)(int, int))
  int v4; // r0@1
  const char *v5; // r2@1
  int v6; // r3@1
  struct struc_memChunk *v7; // r0@4
  struct struc_JobClass *job; // r5@4
  signed int jobField; // r0@4
  int v10; // r1@5
  int v11; // r0@6
  int v12; // r0@10
  const char *v13; // r2@10
  int v14; // r3@10
  struct struc_Container *v15; // r0@13
  struct struc_Container *v16; // r0@14
  signed int v17; // r0@16
  struct struc_memSuite *Mem1Component; // r0@21
  void *v20; // [sp+0h] [bp-38h]@1
  struct struc_memSuite *suite; // [sp+8h] [bp-30h]@1
  int data; // [sp+Ch] [bp-2Ch]@3

  v20 = shootParam;
  suite = 0;
  DryosDebugMsg(0x8F, 5, "CreateSkeltonJob (%#x)", cbr);
  SRM_AllocateMemoryResourceForJobObject(0x114C, SRM_AllocateMemoryResourceFor1stJob_cbr, &suite);
  v4 = TakeSemaphoreTimeout((void *)dword_27A44, 0x64);
  if ( v4 )
    v6 = v4;
    v5 = "SRM_AllocateMemoryResourceForJobObject failed [%#x]";
  data = v4;
  if ( v4 )
    goto LABEL_9;
  v7 = GetFirstChunkFromSuite(suite);
  job = (struct struc_JobClass *)GetMemoryAddressOfMemoryChunk(v7);
  memzero(job, 0x114Cu);
  jobField = 0;
    v10 = 0x31 * jobField;
    job->jobs[jobField++].job_ref = job;
    job->jobs[4 * v10 / 0xC4u].signature = "JobClass";
  while ( jobField < 3 );
  job->suite = suite;
  SRM_AllocateMemoryResourceForCaptureWork(0x40000, (int)SRM_AllocateMemoryResourceFor1stJob_cbr, (unsigned int *)&job->Mem1Component_0x4000_MEM1);
  v11 = TakeSemaphoreTimeout((void *)dword_27A44, 0x64);
  data = v11;
  if ( v11 || !job->Mem1Component_0x4000_MEM1 )
    v5 = (const char *)"SRM_AllocateMemoryResourceForCaptureWork failed [%#x, %#x]";
    v20 = suite;
    v6 = v11;
    DryosDebugMsg(0x8F, 6, v5, v6, v20);
    data = 5;
    prop_request_change(PROP_MVR_REC, &data, 4u);
    return (struct struc_JobClass *)&unk_5;
  SRM_AllocateMemoryResourceFor1stJob((int)SRM_AllocateMemoryResourceFor1stJob_cbr, (int)&job->ImageBuffer);
  v12 = TakeSemaphoreTimeout((void *)dword_27A44, 0x64);
  if ( v12 )
    v14 = v12;
    v13 = "SRM_AllocateMemoryResourceFor1stJob failed [%#x]";
  data = v12;
  if ( v12 )
    DryosDebugMsg(0x8F, 6, v13, v14);
    return (struct struc_JobClass *)&unk_5;
  memcpy_0(&job->ShootParam, shootParam, 0x31u);
  jobSetUnitPictType(job, job->DcsParam.PictType);
  job->cbr = cbr;
  job->cbr_ptr = &job->cbr;
  job->field_25C = 1;
  job->JobID = dword_27A24 + 1;
  v15 = CreateContainerWithoutLock("JobClass");
  job->FileContainer = v15;
  if ( (unsigned __int8)v15 & 1 || (v16 = CreateContainerWithoutLock("JobClass"), job->JobClassContainer = v16, (unsigned __int8)v16 & 1) )
    v14 = data;
    v13 = (const char *)"CreateContainerWithoutLock failed [%#x]";
    goto LABEL_18;
  v17 = Container_AddObject(job->FileContainer, "Mem1Component", (int)job->Mem1Component_0x4000_MEM1, 0x40000, (int)sub_FF0F2008, 0);
  data = v17;
  if ( v17 & 1 )
    v14 = v17;
    v13 = "AddObject failed [%#x]";
    goto LABEL_18;
  Mem1Component = job->Mem1Component_0x4000_MEM1;
  job->pLuckyTable = &Mem1Component[0x2600];
  DryosDebugMsg(0x8F, 5, "Mem1Component 0x%x pLuckyTable 0x%x", Mem1Component, &Mem1Component[0x2600]);
  if ( !powersave_count )
  return job;

the job structure is this one:

Code: [Select]
#pragma pack(push, 4)
struct struc_JobClass
{
  struc_JobClassListElem jobs[3];
  _BYTE gap24C[4];
  struct struc_memSuite *suite;
  int (__cdecl **cbr_ptr)(int, int);
  int (__cdecl *cbr)(int, int);
  int field_25C;
  int JobID;
  int field_264;
  int field_268;
  int ObjectID;
  int field_270;
  int field_274;
  int field_278;
  int Destination;
  struct struc_ShootParm ShootParam;
  struc_AfterParam AfterParam;
  __attribute__((aligned(4))) struct struc_DcsParam DcsParam;
  int ShootImageStorage;
  struct struc_memSuite *ImageMemory_0x4_JPEG_L;
  struct struc_memSuite *ImageMemory_0x1_JPEG_M;
  struct struc_memSuite *ImageMemory_0x1_JPEG_S;
  struct struc_memSuite *ImageMemory_0x40000000;
  struct struc_memSuite *ImageMemory_0x80000000;
  struct struc_memSuite *ImageMemory_0x40;
  struct struc_memSuite *ImageMemory_0x20;
  struct struc_memSuite *ImageMemory_0x10;
  struct struc_memSuite *ImageMemory_0x800;
  struct struc_memSuite *ImageMemory_0x200_JPEG_M1;
  struct struc_memSuite *ImageMemory_0x400_JPEG_M2;
  struct struc_memSuite *ImageMemory_0x100;
  struct struc_memSuite *ImageMemory_0x10000;
  struct struc_memSuite *ImageMemory_0x8000;
  struct struc_memSuite *ImageMemory_0x4000;
  struct struc_memSuite *ImageMemory_0x1000;
  struct struc_memSuite *ImageMemory_0x2000;
  struct struc_memSuite *ImageMemory_0x20000000_RAW;
  struct struc_memSuite *ImageMemory_0x10000000;
  struct struc_memSuite *ImageMemory_0x1000000;
  struct struc_memSuite *ImageMemory_0x80000;
  struct struc_memSuite *ImageMemory_0x400000;
  struct struc_memSuite *ImageMemory_0x100000;
  struct struc_memSuite *ImageMemory_0x200000;
  int field_F88;
  _BYTE gapF8C[140];
  int field_1018;
  int field_101C;
  struct struc_Container *FileContainer;
  void *JobClassContainer;
  struct struc_memSuite *Mem1Component_0x4000_MEM1;
  int field_102C;
  int field_1030;
  int DonePictType;
  int field_1038;
  struct struc_memSuite *ImageBuffer;
  int HDRCorrectImageBuffer;
  int HDRUnderImageBuffer;
  int HDROverImageBuffer;
  int field_104C;
  int field_1050;
  int field_1054;
  int field_1058;
  struct struc_memSuite *ImageMemory_0x2000000;
  int field_1060;
  int field_1064;
  int field_1068;
  _BYTE gap106C[116];
  int field_10E0;
  void *pLuckyTable;
  struct struc_LuckyParm LuckyParam;
  _BYTE gap1128[16];
  int BackupWbOutList;
  int BackupLensOutList;
  int BackupFnoOutList;
  int BackupLongExpNoiseReductionList;
  int BackupMultipleExposureSettingList;
};
#pragma pack(pop)

This job handle is then returned, and the capturing process is started using FA_CaptureTestImage:

Code: [Select]
void __cdecl cmd_FA_CaptureTestImage(struct struc_JobClass **hJob)
{
  struct struc_JobClass *job; // r4@1
  int fa_flag; // [sp+0h] [bp-10h]@1

  job = *hJob;
  DryosDebugMsg(0x90, 0x16, "FA_CaptureTestImage(hJob:%#lx)", *hJob);
  faGetProperty(PROP_FA_ADJUST_FLAG, &fa_flag, 4u);
  fa_flag |= 4u;
  faSetProperty(PROP_FA_ADJUST_FLAG, &fa_flag, 4u);
  if ( TakeSemaphoreTimeout(FactRC_Semaphore_2, 20000) & 1 )
    DryosDebugMsg(0x90, 6, "ERROR TakeSemaphore");
  fa_flag &= 0xFFFFFFFB;
  faSetProperty(PROP_FA_ADJUST_FLAG, &fa_flag, 4u);
  DryosDebugMsg(0x90, 0x16, "FA_CaptureTestImage Fin");
}

the exposure is started and data is retrieved with
Code: [Select]
signed int sht_FA_ReleaseStart()
{
  return StageClass_Post(ShootCapture->StageClass, ShootCapture, 1, 0, 0);
}

signed int sht_FA_ReleaseData()
{
  return StageClass_Post(ShootCapture->StageClass, ShootCapture, 2, 0, 0);
}

FA_ReleaseData calls the CBR FA_CreateTestImage_cbr(), which releases the semaphore FactRC_Semaphore_2.
This CBR was given to FA_CreateTestImage via CreateSkeltonJob(&tv, FA_CreateTestImage_cbr).

The data in the job could, imho, be read using GetImageBuffer():

Code: [Select]
void *__fastcall GetImageBuffer(struct struc_JobClass *job)
{
  void *result; // r0@2

  if ( job->jobs[0].signature == "JobClass" )
  {
    result = job->jobs[0].job_ref->ImageBuffer;
  }
  else
  {
    DryosDebugMsg(0x8F, 6, "GetImageBuffer failed");
    result = &byte_7;
  }
  return result;
}
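The same logic, rendered in Python for readability (the firmware code is C; the dict keys mirror the decompiled field names, and treating the error path as an else branch is my reading of the truncated listing):

```python
def get_image_buffer(job):
    # the job is valid if the first sub-job carries the "JobClass" signature
    if job["jobs"][0]["signature"] == "JobClass":
        return job["jobs"][0]["job_ref"]["ImageBuffer"]
    # on a signature mismatch, the firmware logs an error and returns a sentinel
    return None
```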

Raw Video / MLV-Recovery with PhotoRec
« on: October 15, 2014, 12:26:02 AM »
Christophe Grenier added MLV support to his great tool PhotoRec which recovers all important file formats.
So if you encounter card or file system trouble, download the latest 7.0-WIP version from his download page and recover as much as possible.

Thanks, Christophe!

Reverse Engineering / Datasheet sharing folder
« on: October 04, 2014, 12:38:43 AM »
If you want to help us organize the datasheets and service manuals related to canon cameras, there is a simple way to do so.

Install BitTorrent Sync and add these folders:

  Datasheets: BN5KE7A7OFOJ7LKOJQAUCFS3P5RWD5W5R (read only access)
  Contributions: ABBF35JA4KTEB7MOE6NSQEPIN2OVLPSKB (read/write access)

Our Datasheet directory contains all publicly accessible datasheets for devices found on Canon cameras,
or datasheets that are related to our reverse engineering work.

If you find a datasheet that might be interesting for us (PDF preferred), just copy it into the Contributions folder.
BitTorrent Sync will synchronize the folder with us - no need to upload it somewhere and share the download link etc.
You can place anything there that will help our reverse engineering work.

Of course, if it violates anyone's rights, we will remove it from that public folder, storing it in a safe place :)

Share Your Photos / Some of my favorites
« on: September 19, 2014, 10:41:35 PM »
some of my favorites :)

nothing photoshopped, just LR.
one or two of them have some strong effects applied, like grain and vignetting.

This is a statement about how the Magic Lantern team positions itself regarding copyleft discussions.

As some may have noticed, there was a lengthy discussion about the GPL and violations of it in post-processing tools designed to work with the files produced by Magic Lantern.
Let us first define why we use GPL and what it is for.  Please read this for a detailed and formal description.

This explanation is a condensed one to clarify our position.

We, the Magic Lantern developers, provide Magic Lantern and its suite of tools free of charge (free as in beer), and everything we give to you is the result of several thousand hours of work, either researching or programming.  Along with the binary versions, you get all of the source code for Magic Lantern and its suite of tools.

Our intention:
To drive forward the Magic Lantern project through open sourced development.  Be that through development of the core code, modules, post processing applications, or any other applications designed to work primarily with the Magic Lantern project.

The only things we ask in return:
  • Contribute back to the Magic Lantern project if you make improvements to it.
  • Honor our decision that this code is free, and help to establish and support the free nature of Magic Lantern.
  • If you use the code, or parts of it and distribute it (or even sell it), you must release this code (per the GPL).
  • Don't act against common sense.
Unfortunately, even after a lengthy discussion, there were authors who used our GPLed code in their binary-only tools without redistributing the source code of their tools, without even mentioning that they use GPL code, and without saying where they obtained that code (appropriate credit).  Not only is this a violation of the GPL, it is also rude to the developers who provided the original code.
There was no consensus during that discussion, so we were asked to write down what we clearly expected to be common sense.

We think it's time to start actions against such behavior:
Due to the nature of these binary applications and the actions of their developers, the Magic Lantern team cannot provide any assistance for these applications, and as such all related threads will now be closed.  The affected application developers are free to work with the Magic Lantern development team if they would like to move forward in helping the Magic Lantern project.
If no move forward is shown, these threads will be deleted, and the application developers can seek other avenues of support for their applications.

Closed source application developers who implemented their applications on their own, without re-using any of our GPL code, or those who got some exclusive permission (dual-licensed code) through the Magic Lantern developers, are of course not affected.
Naturally, application developers who implement their applications as open source are also not affected.

What does this mean for developers:
We prefer open sourced development, whether through the use of the code base already available from this project, or entirely on your own.
And of course we tolerate any closed source application as long as it doesn't violate GPL terms, even if it is commercial.
But we will definitely take action against commercial closed source tools that use GPLed code without first asking the affected devs for an exclusive license.

Compressed view of categories:
a) open source, using our code [preferred]
b) open source, not using our code [preferred]
c) closed source, not using our code [tolerated]
d) closed source, commercial, not using our code [tolerated]
e) closed source, using our code [asked to publish source, ban likely]
f) closed source, commercial, using our code [banned]

What does this mean for end users:
From now on, we discourage everyone from using the applications whose threads have been closed.
Using, testing and providing bug reports for the remaining applications helps drive the Magic Lantern project forward.
To clarify: only two tools fall into categories e) and f) and will face action; both of them are kind of "better wrapper GUIs".
The professional tools are not affected at all; they know how to behave.

If you have any questions or queries regarding the Magic Lantern source code (including in your own applications), or any licensing queries, please contact a1ex or g3gg0.

Respect the developers who provide original code!

ML developers and contributors
Code: [Select]
    Simon Dibbern

Raw Video Postprocessing / [deprecated] MLVFS windows client
« on: September 11, 2014, 03:54:18 PM »
Update 06.02.2016:

Please use the MLVFS FUSE driver linked below; it will soon have win32 builds that use the dokany VFS driver.
dokany is open source, actively developed, and allows using FUSE drivers without API changes.
(see: )

As long as there is no official release, you can use this build after you have installed dokany.




Inspired by the brilliant idea of the FUSE-based MLVFS driver from dmilligan and ayshih, which allows you to mount MLV files as directories,
I wondered how to make their code/idea available to us Windows users too.
The underlying system they use, called FUSE, is a file system extender for unix-like systems; it wraps normal file calls so that an application can do arbitrary stuff with them.
In the case of MLVFS, MLV files are simulated as directories that contain DNGs. If you read a DNG from such a virtual directory, it is created on-the-fly.
Unfortunately, there is no real alternative for Windows users to load the MLVFS daemon, which is designed for FUSE.
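The on-the-fly idea can be sketched in a few lines of Python; this is purely illustrative, not the actual MLVFS code, and the frame-naming scheme is an assumption:

```python
# Present an MLV file as a virtual directory of per-frame DNG names.
# The "<stem>_NNNNNN.dng" naming is an assumption for illustration.
def virtual_listing(mlv_name: str, frame_count: int) -> list:
    stem = mlv_name.rsplit(".", 1)[0]
    return [f"{stem}_{i:06d}.dng" for i in range(frame_count)]
```

Only when one of these virtual DNG entries is actually read does the driver decode the corresponding MLV frame.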

But I found a simple way to give Windows users the same experience as unix users get with MLVFS.
Back in my Symbian OS days, I created a WebDAV server that allows Symbian phones to mount directories on your Windows computer (see my old site).
So I could use my Symbian phone to browse directories on my computer at home (MP3s and such).

I've added MLV support, with MLVs browsable as virtual folders! Should work from WinXP up to Win8.

Supported (generated) file types:
 - 16 bit DNG
 - JPEG for previews
 - WAV in case of audio-MLV
 - RAW FITS with metadata for astrophotography (monochrome raw bayer mode), e.g. for DeepSkyStacker
 - a text file containing all important metadata in human/script readable form

 - you can select your MLV folder on HDD or memory card and browse it just like a normal directory - as soon as there is an MLV, it is simulated as a directory
 - any write access is redirected into a separate subfolder (<mlv_filename>.MLD), just like with the original FUSE driver
 - overwriting and modifying the virtual files is also possible - files then get copied into the virtual folder
 - deleting all files in the virtual folder will remove only the files in the .MLD subdirectory, so you will have a clean MLV again
 - MLV files are *never* modified when doing stuff with files in the directory, unless you delete the directory from its parent folder
 - you can enable/disable any file type separately

 - it is currently disabled, due to memory issues :(
   (I cannot catch the out-of-memory exception properly, as it may happen anywhere)
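The write-redirect rule from the list above can be sketched like this; the exact sidecar naming next to the MLV is an assumption based on the description:

```python
import os

# Redirect a write inside the virtual MLV "directory" into its .MLD sidecar.
# The naming rule (replace .MLV with .MLD) is an assumption for illustration.
def redirect_write(virtual_path: str) -> str:
    head, name = os.path.split(virtual_path)  # head ends with the .MLV "dir"
    base, _ext = os.path.splitext(head)
    return os.path.join(base + ".MLD", name)
```

For example, a write to "/cards/M01.MLV/000000.dng" would land in "/cards/M01.MLD/000000.dng", leaving the MLV file itself untouched.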

Select a drive letter that is free and press "Map"; it will connect the share to a network drive.
You can also do it manually using the shell by typing:
net use x: \\ (change letter and port accordingly)

If you close the window, it will minimize into systray, showing a star icon and run in background.

It also allows Windows computers to mount the shares as a local network drive.
So I extended this tool to be a bit more responsive and added MLV support using MLVViewSharp and the DNG code from dmilligan's MLVFS daemon.
Now you can browse MLV files as if they were directories, showing the frames as DNGs, JPGs for preview, and WAV if the file contains audio.

You can also save files and folders "into" that MLV file. All files get redirected into a separate directory named like the MLV file itself, with an extra "_store" suffix.

Just like with the unix version, you can (hopefully) use all your tools with that mapped network drive.
For instance, here I import the DNG frames using Lightroom:

Running WebDAVServer as Windows Service:
You can install the tool as a Windows Service which will automatically start on system boot.
To do this, first start the tool as Administrator (right-click -> Run as Administrator).
Set up all options like Path, Port and Auth – don't forget to press the “Write” button to save a default config.
Now you have written a default config that is always loaded whenever the server starts (both as service and as normal app)
To install the service, simply press “Install”.

If this was successful, the “Install” button goes inactive and the “Uninstall” button activates.
The buttons “Start” and “Stop” are for starting and stopping the service.

Since the service has no GUI, sometimes it makes sense to stop the service and use the normal mode instead.

Web Browser Access:
You can access the server with your web browser and browse the contents of your share as the phone would see it.
There are some debug and log views too (check the links on top). If authentication (username/password) is required, the log/debug views are crippled to prevent abuse.
Accessing MLV content using the web browser is not implemented yet. Does anyone need it?

Download the current version of the "MLV WebDAV Server" here
Download the source code on bitbucket

Important Hints:
By default, Windows is very sluggish when accessing WebDAV shares.
Please disable "Automatic proxy detection" in Internet Explorer, as Microsoft suggests here.
If you don't do that, accessing the mounted drive is very slow. It's a problem with Windows itself.
And yes, this is important for Chrome and Firefox users too ;)

If you want to use authentication on Windows Vista and above, you have to apply a registry patch that enables user/pass authentication.
Please install the fix from the microsoft article here.

If you get DLL errors, you might have to install the MSVCRT runtime libraries from here

This program is licensed under the GPL v2 license.
This code contains GPL code from MLVFS, a GPLed FUSE library for accessing MLV files.
To be specific, the whole RAW->DNG code was taken from there.

Modules Development / [experiment] [5D3, others?] massive mlv_play speedup
« on: September 01, 2014, 02:23:08 AM »
Recently I've been trying hard to use hardware engines to process raw video for playback.
Unfortunately I did not get far enough to achieve realtime playback.

But using a hardware engine, I was able to speed up processing at least a bit.

An example video with 600 frames, 24fps, 1920x1080 - 25 seconds of footage -
takes 96 seconds to play using "color" and "all", which means all frames are played with nice colors and no skipping.

This uses a tweak module "raw_twk" (see source in unified), which adds new methods to the latest
mlv_play to play raw with improved speed.

a) use it at your own risk
b) only compatible with mlv_play
c) 5D3 only yet

maybe this is stable enough to make use of it in ML core?

It makes use of the "ProcessPathForFurikake" DSUNPACK/DARK/PACK16/WDMAC16 engines, which receive the 14bpp raw stream and align it correctly into 16bpp.
This simplifies reading out the pixel data; the most CPU-expensive remaining step is RGB->YUV conversion.

For this reason I've also improved rgb2yuv:

left: original code, right: handcrafted assembly (there are also 6 words of constans not shown)
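For reference, the kind of conversion being optimized here looks roughly like this in portable C - integer BT.601 (studio range) with coefficients scaled by 256 so the division becomes a shift. This is a sketch for illustration, not the actual ML code or the assembly mentioned above:

```c
#include <stdint.h>

/* Fixed-point BT.601 RGB -> YCbCr (studio range, 16..235 luma).
 * Coefficients are scaled by 256; +128 rounds before the shift.
 * NOT the exact ML implementation - just the textbook integer form. */
static void rgb2yuv(uint8_t r, uint8_t g, uint8_t b,
                    uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    *y  = (uint8_t)((( 66 * r + 129 * g +  25 * b + 128) >> 8) +  16);
    *cb = (uint8_t)(((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128);
    *cr = (uint8_t)(((112 * r -  94 * g -  18 * b + 128) >> 8) + 128);
}
```

With three multiplies and a shift per component, the per-pixel cost adds up quickly at 1920x1080, which is why a hand-tuned assembly version pays off.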


Raw Video / RAW/MLV black level issues fix needs testers
« on: May 20, 2014, 01:22:23 AM »
More here:
For those who know how to compile ML, please test it so we can merge it.

Raw Video / FFMPEG now officially supports Magic Lantern Video
« on: April 22, 2014, 01:06:42 AM »
two days ago a patch was committed to ffmpeg official source code database that
adds support for our Magic Lantern Video (MLV) format produced by mlv_rec.

How did it come about?
The FFMPEG team applied for Google Summer of Code (GSoC) with raw bayer support in their libraries.
I talked to Peter and suggested he look at our video format as an open source raw video format that is free of any royalties.
So they could continue to improve their raw support with already existing footage in this simple video file format.

Peter then started to implement the format reader within only a few days :)
The nightly build of ffmpeg can already play .mlv videos using ffplay, just the coloring isn't finished yet.

the commit is here

a big thank you to Peter Ross and Michael Niedermayer for making this happen :)

General Development / [proposal] unified graphics interface
« on: February 22, 2014, 10:09:51 PM »
current state:
A thing that I personally consider a bit odd in the magic lantern core is how the graphics code works.
Everything, even fonts, is printed directly on screen. This can e.g. cause weird flickering when you redraw things.

When you want to "build" up a graphic and display it with a single operation, like a simple BitBlt, you
have no choice but to implement your drawing routines on your own.
For creating graphics, like the plots Alex is doing, you can only draw them on screen and then take a screenshot
(in the hope that nothing printed over your graphs while you built them).

Also, these operations are only possible in the bitmap buffer, which is an 8 bit per pixel indexed (palette) graphics buffer.
Drawing/printing into the vram is not supported at all.

For that reason I am starting a discussion about a new graphics backend that should cover all current and (hopefully) future needs.
I hope that you take part in the discussion and maybe we can find someone who is able to implement it.

@all devs:
Do you think this API will be useful and would cover all use cases we currently have or will face in the future?
(e.g. painting on ML's own back and front buffers before printing them on screen, etc.)

@future devs:
This code is really simple to implement. You don't have to know ML or Canon internals, and it can even be tested standalone on a computer.
Anyone who is interested in implementing it?

 - every operation happens on a "context" which tells the graph routines where they have to draw on (screen, ram, etc)
 - all necessary information for drawing must be accessible in the context being passed
 - every operation must support 8bpp and YUV color modes
 - the graphics type is designed to be compatible to the screen buffers (BMP, LV, HD) without any hacks
 - for easy usage, there are predefined contexts that are meant for printing on screen directly
   (e.g. when specifying CANON_BMP_FRONT the code will pick an internal graph_t which contains the screen's bmp configuration)

/* image data is stored in YUV422 packed, also known as YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr  */
#define PIX_FMT_YUYV422  0
/* image data is stored as 8 bits per pixel indexed. palette can be specified optionally */
#define PIX_FMT_PAL8     1

/* special cases: when specifying them, the routines will render on screen directly */
#define CANON_BMP_FRONT ((graph_t *) 1)
#define CANON_BMP_BACK  ((graph_t *) 2)
#define CANON_VRAM_HD   ((graph_t *) 3)
#define CANON_VRAM_PREV ((graph_t *) 4)

/* for any copy operation, specify how to proceed when destination dimensions differ from source */
#define COPY_MODE_CROP   0
#define COPY_MODE_SCALE  1
#define COPY_MODE_BILIN  2

typedef struct
{
    /* pointer to raw image data */
    void *data;
    /* pixel format as specified in PIX_FMT macros */
    uint32_t pixel_format;
    /* image dimensions */
    graph_size_t size;
    /* optional palette, especially important when saving or copying to YUV targets */
    graph_palette_t *palette;
    /* if non-NULL, the graphic will be locked when drawing on it (t.b.d) */
    void *lock;
} graph_t;

typedef struct
{
    /* image width in pixels, visible content only */
    uint32_t width;
    /* image height in pixels, visible content only */
    uint32_t height;
    /* how wide every pixel line is, given in pixels */
    uint32_t pitch;
    /* number of invisible pixels left of the image data */
    uint32_t x_ofs;
    /* number of invisible pixels above the image data */
    uint32_t y_ofs;
} graph_size_t;

/* draw a single dot, color depends on image format. either palette index or full YUV word. size=1 must be optimized */
uint32_t graph_draw_pixel(graph_t *ctx, uint32_t x, uint32_t y, uint32_t radius, uint32_t color);
uint32_t graph_draw_line(graph_t *ctx, uint32_t x1, uint32_t y1, uint32_t x2, uint32_t y2, uint32_t radius, uint32_t color);
uint32_t graph_draw_rect(graph_t *ctx, uint32_t x1, uint32_t y1, uint32_t x2, uint32_t y2, uint32_t line_color, uint32_t fill_color);

/* width/height may be zero for auto */
uint32_t graph_copy(graph_t *dst, graph_t *src, uint32_t x, uint32_t y, uint32_t width, uint32_t height, uint32_t copy_mode);

/* font_t is the font type we use with bmp_printf etc */
uint32_t graph_printf(graph_t *dst, uint32_t x, uint32_t y, font_t font, char *msg, ...);

/* can be used to get the palette of the canon screen */
graph_palette_t *graph_get_palette();

    data pointer is pointing here
  |         ^                                      |
  |         | y_ofs                                |
  |         |                                      |
  |       __v______________________________        |
  | x_ofs|                          ^      |       |
  |<---->|                   height |      |       |
  |      |                          |      |       |
  |      |       (image content)    |      |       |
  |      |                          |      |       |
  |      |             width        |      |       |
  |      |<-------------------------|----->|       |
  |      |__________________________v______|       |
  |                                                |
  |                     pitch                      |

e.g. either call
    graph_draw_pixel(CANON_BMP_FRONT, 10, 20, 1, COLOR_WHITE);
    graph_draw_pixel(my_own_graph, 10, 20, 1, COLOR_WHITE);
where 'my_own_graph' is a pointer to a custom graph context.
this may be displayed on screen later or saved using appropriate routines.

Code: [Select]
/* init sample graphic */
graph_t *my_own_graph = graph_alloc(PIX_FMT_PAL8, 1024, 768);

/* set a dot (width 1) */
graph_draw_pixel(my_own_graph, 10, 20, 1, COLOR_WHITE);

/* draw an ellipse, width 2 */
graph_draw_circle(my_own_graph, 90, 90, 40, 80, 2, COLOR_WHITE);

/* save it */
graph_save_bmp(my_own_graph, "ML/DATA/PLOT.BMP");

/* width/height may be zero for auto */
graph_copy(CANON_BMP_FRONT, my_own_graph, 0, 0, 0, 0, COPY_MODE_CROP);

Modules Development / io_crypt - encrypt your photos while you shoot them
« on: February 02, 2014, 12:36:25 AM »
Status: experimental, need your testing!

Short description:
io_crypt is a module which automatically encrypts .CR2 and .JPG files as you shoot them.
The original file content is never written to the card, so there is no way to restore the image content by reading raw sectors etc.
You can choose between different modes and security levels.
This was formerly discussed there and has been requested a few times already.

Detailed description:
This module hooks the file I/O operations for your SD and CF card and installs custom read/write routines instead.
These custom r/w operations encrypt your file content before the card's real write handler is called.
There is no additional task to do after you shot the image - just shoot as usual and your files are encrypted.

There are two possible modes:
 - Password
    Before you shoot images, you have to enter a password which is used for all images.
    The password gets fed into an LFSR (Linear Feedback Shift Register) to shuffle the bits and produce a 64 bit file key.
    advantage: you can enter different keys, one per "session" or "access level", and share them accordingly
    disadvantage: you have to enter the key every time you power on the camera (storing it would be insecure, of course)

 - RSA
    Before you start your shooting, you create an RSA public/private key pair via the menu.
    (edit: this takes up to 10 minutes with a 4096 bit key!!)
    Then you copy the private key from your card (ML/DATA/IO_CRYPT.KEY), store it in a safe place and delete it from your card (!!).
    You need the private key only for decrypting (on the computer), and the public key only for encrypting (on the camera).
    Using the internal PRNG, a separate file key is generated for every image and encrypted using RSA.
    advantage: no password must be entered - power on and shoot. every image has a different, random "password"
    disadvantage: you have to prepare yourself a bit by copying and deleting the encryption keys correctly

In both modes, the file content is encrypted using an XOR operation with the output of a 64-bit LFSR that was pre-loaded with the file key and the current block number.
To make random access feasible and the encryption fast enough, the keys are used blockwise.
This obviously weakens the encryption a lot and makes it possible to recover the 64 bit block encryption key using known-plaintext attacks.
The good thing - known-plaintext attacks are only suitable for file content that has a predictable pattern, like the file header.

Still, the encryption I implemented is *not* military grade, although it is (imho) safe enough for a normal individual.
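To illustrate the principle, here is a toy version of the blockwise LFSR XOR scheme. The feedback polynomial and the key/block mixing are made up for this sketch - they are NOT io_crypt's actual ones. Since XOR is symmetric, the same function encrypts and decrypts:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy sketch of the scheme described above: a 64-bit Galois LFSR is seeded
 * from (file key XOR block number) and its output is XORed over the block.
 * Polynomial and seeding are hypothetical - NOT the io_crypt implementation. */
static uint64_t lfsr64_next(uint64_t *state)
{
    /* clock the LFSR 64 times to produce a fresh keystream word;
     * taps 64,63,61,60 (a commonly cited maximal-length polynomial) */
    for (int i = 0; i < 64; i++)
    {
        uint64_t lsb = *state & 1;
        *state >>= 1;
        if (lsb)
            *state ^= 0xD800000000000000ULL;
    }
    return *state;
}

/* XOR one block with the keystream; calling it twice restores the data */
static void crypt_block(uint8_t *buf, size_t len,
                        uint64_t file_key, uint64_t block_num)
{
    uint64_t state = file_key ^ block_num;
    for (size_t i = 0; i < len; i++)
    {
        if ((i % 8) == 0)
            lfsr64_next(&state);  /* refill the 8-byte keystream word */
        buf[i] ^= (uint8_t)(state >> ((i % 8) * 8));
    }
}
```

This also makes the known-plaintext weakness visible: XORing a predictable header with its ciphertext directly reveals the keystream for that block.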

    The block size that is encrypted with the same 64 bit key.
    Larger is faster but less secure; smaller values slow down saving. Choose accordingly.
    Ask for password on startup
    If you are in Password mode, the camera will ask for the password right after poweron.
    When disabled, you have to enter the menu manually and set the key - otherwise no pictures will be encrypted.

    RSA Keysize
    Choose the largest value that you can tolerate. The larger the size, the longer key generation will take (up to 10 minutes...).
    Saving will also slow down a bit with larger keys.

Image review:
Canon caches the images you have shot until you power off the camera or the cache gets full (5-10 images).
As long as the images are in the cache, you can review them without any problem, even if you change the key.

In RSA mode you currently can *not* review images other than those in the cache. Not sure if I will implement that at all.
In Password mode, you can view images when you set the correct password.

After you copied the files onto your computer, you can decrypt them with io_decrypt, which is not (yet) available precompiled, but you can get it from the repository.

./io_decrypt <in_file> [out_file] [password]

If you want to decrypt password-protected files (LFSR64), you have to supply the encryption password on the command line.
For RSA encrypted files, the private key ML/DATA/IO_CRYPT.KEY must be in the current directory.

The module contains some camera specific memory addresses, so it has to be ported for every model.
Cameras that are supported: 7D, 5D3, 60D, 600D, 650D
Next cameras being added: 5D2, 6D
If you have a different model and want to use/test the module, please post it here.

1. Do not do any illegal stuff with it.
2. It is meant e.g. for reporters whose security depends on the footage not being revealed, or for securing sensitive information.
3. Don't rely on it. It will surely fail at some point and your footage will be gone.
4. Don't cry when something goes badly wrong.

You can always download my latest build there.
Here is the Windows console decrypter.

 - Show fake images instead of the standard Canon error screen
 - background encryption for unsupported models: will scan, encrypt and save the images in the background while your camera is idle

Here is a _very_ simple and hackish MLV viewer to check your footage.
It is also available as an OSX app.

It will read uncompressed MLV files and display the frames at just a few frames per second.
This tool was programmed in C# on Windows, but it uses nothing Windows-specific, so it should run on any OS using mono (positive reports from Linux and Mac OS X).

please remember:
 - these tools are just a PROOF OF CONCEPT
 - it is not meant as a production tool
 - I used it to check what is necessary to decode and view RAW/MLV files; it's just my playground
 - it has bugs!
 - it will most likely not be continued
 - I shared it as a last-resort tool in case you need something like that


 - just drop the .mlv or .m00, .raw, .r01, ... file into the program window
 - shows the video in full res using bilinear demosaicing
 - other debayering methods (e.g. fast ones) are available (right click onto image)
 - ramps exposure up/down if there is under/overexposure (so it may not be accurate enough for some of you)
 - has no white balance algorithm
 - just tested on 5D3; other cameras have different bayer patterns - didn't check them yet
 - it used the coefficients from the raw info block, so color weighting should be correct - this caused trouble and is disabled
 - the scene is scaled to TV black and white levels (16..235) for a better looking playback
 - updated to work with files that have less than 14 bpp (e.g. when mlv_dump was used to reduce size)
 - supports both .mlv and old .raw file format

it has a white balancing feature:
 - press and hold SHIFT
 - press the LEFT mouse button
 - the image will get displayed 1:1
 - move to an area that should be gray
 - release the LEFT mouse button
it will pick an 8x8 pixel area and use it as white balance reference after debayering and kelvin correction.
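The gray-pick idea can be sketched like this. This is an illustration only - averaging the patch and normalizing the gains to the green channel is my assumption, not necessarily what MLVViewSharp does internally:

```c
#include <stdint.h>

/* Sketch of a gray-patch white balance pick: average an 8x8 patch of an
 * already-debayered interleaved RGB image and derive per-channel gains
 * that would make that patch neutral. Hypothetical helper, not the
 * actual MLVViewSharp code. */
typedef struct { float r, g, b; } wb_gains_t;

static wb_gains_t pick_gray(const uint8_t *rgb, int stride_px, int x, int y)
{
    float sum_r = 0, sum_g = 0, sum_b = 0;

    for (int dy = 0; dy < 8; dy++)
        for (int dx = 0; dx < 8; dx++)
        {
            /* 3 bytes per pixel, row length stride_px pixels */
            const uint8_t *p = &rgb[((y + dy) * stride_px + (x + dx)) * 3];
            sum_r += p[0];
            sum_g += p[1];
            sum_b += p[2];
        }

    /* normalize to green: multiply R and B so the patch averages match G */
    wb_gains_t gains = { sum_g / sum_r, 1.0f, sum_g / sum_b };
    return gains;
}
```

Applying the returned gains to every pixel then makes the picked area neutral gray.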

camera color matrices are also used now, which should result in better colors

to disable all correction post processing:
 - right click to get context menu
 - there is an option to disable color correction


 - just select the folder to browse on the left pane
 - you can select multiple files on the right pane as you are used to (CTRL click etc)
 - the only thing you can do yet is RIGHT CLICK and choose anything you want
 - for opening with MLV Viewer, please assign the .mlv extension to MLVViewSharp.exe
 - for every visible file, it creates a thread, so this might overload your computer when you have several hundred files in one directory (I hope you sort your footage better than this...)
 - selected files play back at maximum speed, unselected ones play slower (1 fps)
 - selecting a file causes it to play back from the beginning
 - you cannot set WB or debayering in the preview window (it's simple to add, but I don't see why anyone would need it)

 - when a file is selected, you can CLICK and HOLD the left mouse button on the icon and DRAG left and right to seek in the file

As modules have become really useful lately and some versioning and updating issues have come up, we should
think about handling variants of modules, different revisions and automated module updating.

A year ago, every dev released a customized autoexec.bin; today we all use basically the same autoexec.bin, but vary the modules.
This is a really good transition and made us think more modularly, separating concerns into individual modules.

But before it gets messy with modules, it would make sense to introduce a repository with all the latest modules and even branches or variants of standard modules.
I don't have a 100% clear view of how to set up the repository and the branch system exactly; my view on this is still somewhat high-level.

implementation level:
 - heavy duty: using hg (bitbucket) and a separate repository called e.g. ml-modules
     - perfect branch / fork mechanism
     - perfect revisioning mechanism
     - basically no server cost, administration and implementation effort
     - updater client just needs the base URL, which even can be a fork (just like we already do with source)
     - dependency on bitbucket
      - getting revision lists etc might not be as simple as just downloading a URL (is there a JSON interface?)

 - light weight: simple upload system with some bash scripts...

functionality level:
 - show a list of all modules possible to install
 - get description of a module (extracted from .mo?)
   - contains dependencies, revision, hg link to source
   - contains a "provides" field? (like in all package managers to detect if two modules are doing the same thing)
 - get revision list and branches of a module
 - get module at specific revision / branch

So if we have this repository, we need an updater for the modules on the card.
Why do I come up with this idea right now?
See this thread.

I want a module which uses the Transcend WiFi card to update modules automatically or on demand.
As soon as the WiFi module in the 6D and 70D is understood well enough, this interface can also be used,
so owners of many camera models can make use of the repository system - even while on a journey, using their cell phones.

Of course we will first have to implement a windows/linux/mac updater (or one of those) which is the reference application and uses disk access.

any comments?
anyone who will check if this is doable using bitbucket?

General Development / [proposal] - Transcend WiFi SD driver
« on: September 11, 2013, 09:00:55 PM »
The Transcend WiFi cards recently got interesting after an article about rooting these devices was published.


They contain an ARMv5 instruction set ARM926EJ running at around 400 MHz with 32 MiB RAM,
plus an integrated 16 or 32 GiB SD card ;)
The operating system used is Linux with busybox and a bunch of reaaaallllyyy hackish shell scripts.
You can place an '' and it will get executed on startup... as root...

Unfortunately the WiFi speed is embarrassingly slow - I got 1 MBit/s, which is no fun with .cr2 files.
(no, don't even ask for raw video!)

Magic Lantern - module functionality
 - "Enable TrWiFi" / "Disable TrWiFi" - places or removes the magic lantern specific code
 - "Mode: DirectShare" / "Mode: Internet" - depending on the current mode, switch to the other one for either accessing the internet or tethering with a mobile phone

plus providing these functions to other modules:
 - int32_t trwifi_get_file ( char *url, char *dst_file )
   the file at the given URL is downloaded and copied from the linux system to the camera filesystem.
   as we can access the SD card from linux, but this would compete with our DryOS filesystem driver, we have to use files like B:/ML/DATA/TR_UPLNK.DAT and B:/ML/DATA/TR_DNLNK.DAT.
   both camera and linux will access these files without changing anything in the file structure to transfer data between each other.
   possible structure: [payload_size][payload] where the payload initially is a shell script that is executed by
   these shell scripts can use that comm channel for any arbitrary command specific to the script. the camera has to take care of communicating with the right commands.

 - char *trwifi_exec ( char *command )
   execute any command on linux side and return its stdout as string

I am not sure if it makes sense (or would be fun) to implement tcp/udp connect/read/write functionality or even PTP functionality (by forwarding to the DryOS PTP handler).
Tunneling through these files may be a bit slow and complicated.
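The [payload_size][payload] framing proposed above could be sketched like this on the DryOS side. The file name in the usage and the little-endian 32-bit length prefix are assumptions for illustration only:

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the proposed link-file framing: a uint32 payload size followed
 * by the payload bytes. Both sides rewrite the same fixed-name file, so the
 * on-disk structure never changes. Hypothetical helpers, not real ML code. */
static int link_write(const char *path, const void *payload, uint32_t size)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    fwrite(&size, sizeof(size), 1, f);  /* [payload_size] */
    fwrite(payload, 1, size, f);        /* [payload]      */
    fclose(f);
    return 0;
}

/* returns payload length, or -1 on error / oversized payload */
static int32_t link_read(const char *path, void *buf, uint32_t buf_size)
{
    uint32_t size = 0;
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    if (fread(&size, sizeof(size), 1, f) != 1 || size > buf_size)
    {
        fclose(f);
        return -1;
    }
    int32_t got = (int32_t)fread(buf, 1, size, f);
    fclose(f);
    return got;
}
```

A real implementation would additionally need some handshake (e.g. a sequence counter in the header) so each side knows when the other has produced fresh data.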

constructive feedback?

General Development / [Module/5D3] SMPTE experiment, not usable
« on: August 19, 2013, 01:52:01 AM »
can someone with SMPTE equipment try this module?

SMPTE output module

I developed it on the 5D3; other models with audio support will likely work too.
As I don't have any equipment and there are no free tools to read SMPTE, I cannot verify what it produces.

General Development / Task madness - can we do some cleanup?
« on: August 15, 2013, 11:49:41 PM »

This time I am requesting collaboration to analyze and clean up our task chaos.

While investigating the performance drop of CF writing in photo mode compared to playback mode,
which costs up to 7MiB/s of transfer speed, I recorded a timing trace of all task and ISR activations.
What annoyed me was the endless number of tasks for various more and less important things.

Let me show you a trace (please scroll horizontally using cursor keys in your browser):

I marked all ML tasks in red on the left column.
The horizontal axis is the execution time of course.
A red bar means, this item (task/ISR) is being executed at this time. If the activation is very short, you just see a black bar.

Zooming into two activations of ML tasks:

There you see that the tasks run only very briefly - just a few microseconds.

But even this short activation period costs execution time - about 2 * 10 microseconds for switching tasks.
Sometimes this is totally unnecessary, and we could save CPU execution time, battery power and maybe write performance for raw recording.
For example, why does the joypress task take ~15µs execution time plus 10µs context switch time every 20ms for nothing?
I never press the joystick, so why do I have to sacrifice 0.1% of the execution time?
Sum up all 30 tasks and this is at least 3% that might be unnecessary (yeah, in theory ;) )
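The estimate above is easy to reproduce: (15µs + 10µs) / 20ms ≈ 0.125%, i.e. roughly the 0.1% quoted, and about 3.75% for 30 such tasks. As a back-of-the-envelope check:

```c
/* Back-of-the-envelope check of the polling overhead quoted above:
 * a task that wakes every period_us and burns exec_us of work plus
 * switch_us of context switching eats this percentage of CPU time. */
static double polling_overhead_pct(double exec_us, double switch_us,
                                   double period_us)
{
    return (exec_us + switch_us) / period_us * 100.0;
}
```

With exec_us = 15, switch_us = 10 and period_us = 20000, this yields 0.125% per task; thirty similar tasks add up to ~3.75%, consistent with the "at least 3%" estimate.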

Another bad thing is that context switches take longer the more tasks are waiting to get activated.
At the moment some of the "unnecessary" msleep polling costs 924µs according to the image above.
That's a millisecond of delay for other tasks that *really* have to process stuff.
The CF write rate also seems to go down due to those activations.

So can we try to investigate task by task,
a) if we really need that task
b) if it really has to msleep(x) for just polling a variable
c) if the thing the task does can be achieved with timeout-less message queues

This is not a one-day task, but an ongoing process that may take weeks to clean up.


After Alex spent a lot of time finding out how we can squeeze out the last bit of performance while
writing raw video to SD and CF cards, I used the last days to think about how to structure the
raw videos to make post processing easier and the format more extensible.

The result is our next Magic Lantern Video format (.mlv), which I want you to look at.
Use it at your own risk.

for users:
mlv_rec: nightly download page.
mlv_dump: most recent nightly download page. (binary for WINDOWS only)

mlv_dump: or here (binaries for WINDOWS, LINUX and OSX)

for developers:
mlv file structures in C: here (LGPL)

preferred: you can export .dng frames from the recorded video using "mlv_dump --dng <in>.mlv -o <prefix>"
legacy mode: post processing is still possible with 'raw2dng' after converting the .mlv into the legacy .raw format using mlv_dump.

for details see the description below.
see the short video I made: it shows a bunch of the new (user-visible) features of this file format.

 - used for debugging and converting .mlv files
 - can dump .mlv to legacy .raw + .wav files
 - can dump .mlv to .dng  + .wav
 - can compress and decompress frames using LZMA
 - convert bit depth (any depth in range from 1 to 16 bits)

You can get a data reduction of ~60% with 12 bit files.
Downconverting to 8 bits gives you about 90% data reduction.
This feature is for archiving your footage.
Converting back to e.g. legacy raw doesn't need any parameters - it will decompress and convert transparently.

Code: [Select]
 -o output_file      set the filename to write into
 -v                  verbose output

-- DNG output --
 --dng               output frames into separate .dng files. set prefix with -o
 --no-cs             no chroma smoothing
 --cs2x2             2x2 chroma smoothing
 --cs3x3             3x3 chroma smoothing
 --cs5x5             5x5 chroma smoothing

-- RAW output --
 -r                  output into a legacy raw file for e.g. raw2dng

-- MLV output --
 -b bits             convert image data to given bit depth per channel (1-16)
 -z bits             zero the lowest bits, so we have only specified number of bits containing data (1-16) (improves compression rate)
 -f frames           stop after that number of frames
 -x                  build xref file (indexing)
 -m                  write only metadata, no audio or video frames
 -n                  write no metadata, only audio and video frames
 -a                  average all frames in <inputfile> and output a single-frame MLV from it
 -s mlv_file         subtract the reference frame in given file from every single frame during processing
 -e                  delta-encode frames to improve compression, but lose random access capabilities
 -c                  (re-)compress video and audio frames using LZMA (set bpp to 16 to improve compression rate)
 -d                  decompress compressed video and audio frames using LZMA
 -l level            set compression level from 0=fastest to 9=best compression

Code: [Select]
# show mlv content (verbose)
./mlv_dump -v in.mlv

# will dump frames 0 through 123 into a new file
# note that ./mlv_dump --dng -f 0 in.mlv (or ./mlv_dump --dng -f 0-0 in.mlv) will now extract just frame 0 instead of all of the frames.
./mlv_dump -f 123 -o out.mlv in.mlv

# prepare an .idx (XREF) file
./mlv_dump -x in.mlv

# compress input file
./mlv_dump -c -o out.mlv in.mlv

# compress input file with maximum compression level 9
./mlv_dump -c -l 9 -o out.mlv in.mlv

# compress input file with maximum compression level 9 and improved delta encoding
./mlv_dump -c -e -l 9 -o out.mlv in.mlv

# compress input file with maximum compression level 9, improved delta encoding, 16 bit alignment which improves compression and 12 bpp
./mlv_dump -c -e -l 9 -z12 -b16 -o out.mlv in.mlv

# decompress input file
./mlv_dump -d -o out.mlv in.mlv

# convert to 10 bit per pixel
./mlv_dump -b 10 -o out.mlv in.mlv

# convert to 8 bit per pixel and compress
./mlv_dump -c -b 8 -o out.mlv in.mlv

# create legacy raw, decompress and convert to 14 bits if needed
./mlv_dump -r -o out.raw in.mlv

Play MLV Files


baldand implemented an amazing video player that uses OpenGL and can convert your .raw/.mlv into ProRes directly.
Even I use it as my playback tool, so consider it the official player ;)



see here for a MLV player on windows

in-camera mlv_play:
the module is shipped with the pre-built binaries.
it is a plugin to play .raw and .mlv files in camera.
the discussion thread for this module is there

Drastic Preview:
the guys over at Drastic are currently implementing the MLV format and already have a working non-open beta version (I tried it already and I love it :) )
I am sure they will release a new version within the next weeks.

some technical facts:
 - structured format
 - extensible layout
 - as a consequence, we can start with the minimal subset (file header, raw info and then video frames)
 - multi-file support (4 GiB splitting is enforced)
 - spanning support (write to CF and SD in parallel to gain 20MiB/s)
 - out-of-order data support (frames are written in some random order, depending on which memory slot is free)
 - audio support
 - exact clock/frametime support (every frame has the hardware counter value)
 - RTC information (time of day etc)
 - alignment fields in every frame (can differ from frame to frame)

the benefit for post processing will be:
 - files can be easily grouped by processing SW due to UIDs and file header information (autodetect file count and which files belong to each other)
 - file contains a lot of shooting information like camera model, S/N and lens info
 - lens/focus movement can be tracked (if lens reports)
 - exact* frame timing can be determined from hw counter values (*=its accuracy is the limiting thing)
 - also frame drops are easy to detect
 - hopefully exact audio/video sync, even with frame drops
 - unsupported frames can be easily skipped (no need to handle e.g. RTC or LENS frames if the tool doesn't need them)
 - specified XREF index format to make seeking easier, even with out of order data and spanning writes
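The "easily skipped" property comes from every MLV block starting with a 4-character type and a uint32 total block size, so a parser can hop from block to block without understanding every type. A simplified walker could look like this - note that this header is stripped down for the sketch; the real mlv.h headers carry more fields (e.g. the per-block timestamp):

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified MLV block header: 4-char type plus total block size.
 * The real mlv.h structures contain additional per-block fields. */
typedef struct
{
    uint8_t  blockType[4];   /* e.g. "MLVI", "RAWI", "VIDF", ... */
    uint32_t blockSize;      /* total block size including this header */
} mlv_hdr_t;

/* Count the blocks in a file by skipping over each payload,
 * regardless of whether the block type is understood. */
static int mlv_count_blocks(FILE *f)
{
    int count = 0;
    mlv_hdr_t hdr;

    while (fread(&hdr, sizeof(hdr), 1, f) == 1)
    {
        if (hdr.blockSize < sizeof(hdr))
            break;                                        /* corrupt block */
        fseek(f, hdr.blockSize - sizeof(hdr), SEEK_CUR);  /* skip payload  */
        count++;
    }
    return count;
}
```

This is exactly why a minimal tool can process files written by a feature-complete recorder: anything it does not understand is just a sized blob to step over.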

why a custom format and not reuse e.g. .mov?
 - other formats are good, but none fits our needs
 - it's hard to make frames align to sector or EDMAC sizes
 - they don't support 14 bit raw bayer patterns out of the box
 - even when using a flexible container, nearly all sub blocks would need custom additions
 - this means a lot of effort to make the standard libs for those formats compatible
 - it's hard to implement our stuff in a clean way without breaking the whole format

That's the reason why I decided to come up with yet another format.
It is minimalistic when desired (especially the first implementation will only use a subset of the frames)
and can be extended step by step - while even the most minimalistic parser/post processing tool
can still process the latest video files where everything is implemented.

if you are a developer (ML or even 3rd party tools) - look over it and familiarize yourself with the format.
in case there is a bug or something doesn't make sense, please report it.
i would love to get feedback.

here is the link to the spreadsheet that served as a reference when designing the format:

implementer's notes
green = fully implemented
blue = implemented, but not 100%
red = not implemented yet, just defined

[MLVI] (once)
 - MLVI block is the first block in every .mlv file
 - the MLVI block has no timestamp, it is assumed to have timestamp value 0 if necessary
 - the MLVI block contains a GUID field which is a random value generated per video shoot
 - using the GUID a tool can detect which partial or spanning files belong together, no matter how they are named
 - it is the only block that has a fixed position, all other blocks may follow in random order
 - the fileCount field in the header may get set to the total number of chunks in this recording (the current in-camera implementation isn't doing this right)

[RAWI] (once, event triggered)
 - this block is known from the old raw_rec versions
 - whenever the video format is set to RAW, this block has to appear
 - this block exactly specifies how to parse the raw data
 - bit depth may be any value from 1 to 16
 - settings apply to all VIDF blocks that come after RAWI's timestamp (this implies that RAWI must come before VIDF - at least the timestamp must be lower)
 - settings may change during recording, even resolution may change (this is not planned yet, but be aware of this fact)

[VIDF] (periodic)
 - the VIDF block contains encoded video data in any format (H.264, raw, YUV422, ...)
 - the format of the data in VIDF blocks has to be determined using MLVI.videoClass
 - if the video format requires more information, additional format specific "content information" blocks have to be defined (e.g. RAWI)
 - VIDF blocks have a variable sized frameSpace which is meant for optimizing in-memory copy operations for address alignment. it may be set to zero or any other value
 - the data right after the header is of the size specified in frameSpace and considered random, unusable data. just ignore it.
 - the data right after frameSpace is the video data which fills up the rest until blockSize is reached
 - the blockSize of a VIDF is therefore sizeof(mlv_vidf_hdr_t) + frameSpace + video_data which means that a VIDF block is a composition of those three data fields
 - if frames were skipped, either a VIDF block with a zero-sized payload may get written or the block may be completely omitted
 - the format of the data in VIDF frames may change during recording (e.g. resolution, bit depth etc)
 - whenever in time line a new content information block (e.g. RAWI) appears, the format has to get parsed and applies to all following blocks

[WAVI] (once, event triggered)
 - when the audio format is set to WAV, this block specifies the exact wave audio format

[AUDF] (periodic)
 - see [VIDF] block. same applies to audio

[RTCI] (periodic, event triggered)
 - contains the current time of day information that can be gathered from the camera
 - may appear with any period, maybe every second or more often
 - should get written before any VIDF block appears, else post processing tools cannot reliably extract frame times

[LENS] / [EXPO] / ... (periodic, event triggered)
 - whenever a change in exposure settings or lens status (ISO, aperture, focal length, focus dist, ...) is detected, a new block is inserted
 - all video/audio blocks after these blocks should use those parameters

[IDNT] (once)
 - contains camera identification data, like serial number and model identifier
 - the camera serial number is written as a HEX STRING, so you have to convert it to a 64-bit INTEGER before displaying it

[INFO] (once, event triggered)
 - right after this header the info string with the length blockLen - sizeof(mlv_info_hdr_t) follows
 - the info string may contain any string entered by the user in format "tag1: value1; tag2: value2"
 - tag can for example be strings like take, shot, customer, day etc and value also any string

[NULL] (random)
 - ignore this block - it's just there to fill up write buffers and thus may contain valid or invalid data
 - timestamp is bogus

[ELVL] (periodic)
 - roll and pitch values read from the acceleration sensor are provided with this block

[WBAL] (periodic, event triggered)
 - all known information about the current white balance status is provided with this block

[XREF] (once)
 - this is the only block written after recording by processing software, not by the camera
 - it contains a list of all blocks that appear, sorted by time
 - the XREF block is saved to an additional chunk
 - files that only contain an XREF block should get named .idx to clarify their use
 - .idx files must contain the same MLVI header as all chunks, but only have the XREF block in them

[MARK] (event triggered)
 - on keypresses, like halfshutter or any other button, this block gets written, e.g. to supply video cutting positions
 - the data embedded in this block is the keypress ID you can get from module.h

[VERS] (any number, usually at the beginning)
 - right after this header a string follows that may get used to identify ML and module versions
 - should follow the format "<module> <textual version info>"
 - possible content: "mlv_play built 2017-07-02 15:10:43 UTC; commit c8dba97 on 2016-12-18 12:45:34 UTC by g3gg0: mlv_play: add variable bit depth support. mlv_play requires experi..."

possible future blocks:

 - in-camera black and noise reference pictures can be attached here (dark frame, bias frame, flat frame)
 - to be checked if this is useful and doable

[MLV Format]
 - the Magic Lantern Video format is a block-based file format
 - all information, no matter whether audio data, video data or metadata, is written as a data block with the same basic structure
 - this basic structure includes block type information, block size and timestamp (exception to this is the file header, which has no timestamp, but a version string instead)
 - the timestamp field in every block is a) to determine the logical order of data blocks in the file and b) to calculate the wall time distance between any of the blocks in the files
 - the file format allows multiple files (=chunks) which basically are in the same format with file header and blocks
 - chunks are either written sequentially (due to e.g. the 4 GiB file size limitation) or in parallel (spanning over multiple media)
 - the first chunk has the extension .mlv, subsequent chunks are numbered .m00, .m01, .m02, ...
 - there is no restriction what may be in which chunk and what not

 - to accurately process MLV files, first all blocks and their timestamps and offset in source files should get sorted in memory
 - when sorting, the sorted data can be written into a XREF block and saved to an additional chunk
 - do not rely on block order at all, no matter in which order the blocks were written into a file
 - the only reliable indicator is the timestamp in all headers

with my last commit, i fixed the IME system to interwork cleanly with menu etc.
(see )

the function to be called is:
Code: [Select]
extern void *ime_base_start(char *caption, char *text, int max_length, int codepage, int charset, t_ime_update_cbr update_cbr, t_ime_done_cbr done_cbr, int x, int y, int w, int h);

if a module wants to have a text entered by the user, it can now call the ime code like this:

Code: [Select]
static char text_buffer[100];

/* update CBR - called periodically with the current string.
   parameter lists are sketched here to match the t_ime_update_cbr / t_ime_done_cbr typedefs */
static unsigned int ime_base_test_update(void *ctx, char *text, int caret_pos, int selection_length)
{
    //bmp_printf(FONT_MED, 30, 90, "ime_base: CBR: <%s>, %d, %d", text, caret_pos, selection_length);
    return IME_OK;
}

/* done CBR - called once when the user selects OK or Cancel */
static unsigned int ime_base_test_done(void *ctx, char *text, unsigned int status)
{
    for(int loops = 0; loops < 50; loops++)
        bmp_printf(FONT_MED, 30, 120, "ime_base: done: <%s>, %d", text, status);
    return IME_OK;
}

static MENU_SELECT_FUNC(ime_base_test)
{
    strcpy(text_buffer, "test");
    ime_base_start("Enter something:", text_buffer, sizeof(text_buffer), IME_UTF8, IME_CHARSET_ANY, &ime_base_test_update, &ime_base_test_done, 0, 0, 0, 0);
}

the whole thing is running asynchronously. this means you call ime_base_start and that function immediately returns.
it captures all key events and prevents the ML menu from painting.
instead it is showing you a dialog to enter your text.

the specified update CBR (CallBackRoutine) is called periodically with the current string. it should return IME_OK if the string is acceptable.
(as soon as it's fully implemented, you can check whether it is a valid string, e.g. an email address, and return a value != IME_OK to grey out the OK button)
when the user selects OK or Cancel, the done CBR is called with the string and the status IME_OK or IME_CANCEL.

the x, y, w, h parameters are planned to specify the location where the caller code prints the text that is passed via update_cbr.
this way the caller code can take care of displaying the text somewhere and the IME just cares about the character selection.
but it is not implemented yet.

the code is still very fragile ;)
i planned to support different charsets, but not sure yet how to implement, or if it is necessary at all.
also the way the characters are displayed and the menu is drawn isn't final yet.
i think i should use Canon fonts as they look better.
also the "DEL" function cuts off the string at the deleted character. that can be fixed easily by using strncpy.

please test that code and improve it where it needs improvement.

Update (17.08.14)

you can place both ime_std and ime_rot in your module dir, or just one of them - whichever you prefer.
ime_base is always needed by both.

Reverse Engineering / ResLock stuff
« on: June 24, 2013, 11:33:56 PM »
i dug a bit into the ResLock stuff and will describe how i think it works.

Code: [Select]
struct struc_LockEntry
{
    char *name;
    int status;
    int semaphore;
    int some_prev;
    int some_next;
    unsigned int *pResource;
    int resourceEntries;
    void (*cbr)(struct struc_LockEntry *lockEntry, void *cbr_priv);
    void *cbr_priv;
};

struct struc_LockEntry *CreateResLockEntry(uint32_t *resIds, uint32_t resIdCount);
unsigned int LockEngineResources(struct struc_LockEntry *lockEntry);
unsigned int UnLockEngineResources(struct struc_LockEntry *lockEntry);
unsigned int AsyncLockEngineResources(struct struc_LockEntry *lockEntry, void (*cbr)(struct struc_LockEntry *lockEntry, void *cbr_priv), void *cbr_priv);

CreateResLockEntry:
registers a lock that will use semaphores to lock all the resources specified in a list.
when registering your lock, resIds[] is the list of resources to be locked and the number of entries in that list is passed as the second parameter.
the initial state of the lock is unlocked.

LockEngineResources:
locks a previously allocated LockEntry and its associated devices.

UnLockEngineResources:
unlocks a previously allocated LockEntry and its associated devices.

resId format:
resId = (block << 16) | (entry)

entry specifies the exact "device" in the given block, if any.
blocks are one of those:
 0x00 = EDMAC[0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x20, 0x21]
 0x01 = EDMAC[0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x28, 0x29, 0x2A, 0x2B]
 0x04 = HEAD
 0x36 = encdrwrap
 0x37 (max)
 (to be continued)

e.g. resId 0x1000C is block 0x01 and entry 0x0C. This is EDMAC 0x28 being locked whenever LockEngineResources is called with the LockEntry.
