Topics - g3gg0

#2
Camera-specific Development / Canon 5DS / 5DS R
February 10, 2018, 01:14:29 PM
just for the record, i pushed some 5Ds experiments.

booting the firmware is a bit different now, but works.
the code is hardcoded right now and just meant as experiment / documentation.

i wasn't able to display anything meaningful yet.
i could write into some YUV buffers or modify the graphics processor's RAM,
but nothing really usable so far.

THIS DOES NOT MEAN 5Ds WILL GET ML SOON!

https://bitbucket.org/hudson/magic-lantern/commits/branch/5Ds_experiments
#3
It is possible to compile Magic Lantern and QEMU on Windows without any third-party programs like Cygwin, MSYS or VirtualBox, by solely using Windows' native Linux compatibility layer.

Magic Lantern


For those who didn't know: Microsoft added a compatibility layer that allows Linux code to execute properly on Windows.
You just have to enable it, as described on Microsoft's website.
This gives you "bash", the famous Linux shell, directly within Windows.

OS Preparation

After you have installed Ubuntu, you should install a few standard tools.

Depending on the Windows 10 installation you have, you might be able to simply execute "bash" via Win+R, or via a menu entry called Bash or Ubuntu.
Then, in bash, run: 


sudo apt-get update
sudo apt-get install make gcc gcc-arm-none-eabi mercurial gcc-mingw-w64 python3-docutils zip


In some cases you also had to install python2 - your mileage may vary.


sudo apt-get install python2



Download

Clone Magic Lantern directly from the Mercurial repository using this command:

hg clone -u unified https://bitbucket.org/hudson/magic-lantern

This will download the latest version of the unified branch.


Configuration

First determine the exact arm-gcc compiler version you have, either by executing

ls /usr/lib/gcc/arm-none-eabi/

or by entering

arm-none-eabi-gcc- [TAB] [TAB]



Then use your favorite text editor, in either Linux or Windows, and create a file named Makefile.user with only this content (substitute the version you determined above):

GCC_VERSION=-4.8.2
ARM_PATH=/usr
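If you prefer not to hardcode the version, Makefile.user can also be generated from whatever toolchain is installed - a minimal sketch (the TOOLCHAIN_DIR default matches where the Ubuntu package puts it; adjust if yours differs):

```shell
# sketch: generate Makefile.user from the installed toolchain version
# TOOLCHAIN_DIR is an assumption - adjust if your distro installs elsewhere
TOOLCHAIN_DIR="${TOOLCHAIN_DIR:-/usr/lib/gcc/arm-none-eabi}"
GCC_VER="$(ls "$TOOLCHAIN_DIR" | head -n 1)"
printf 'GCC_VERSION=-%s\nARM_PATH=/usr\n' "$GCC_VER" > Makefile.user
```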



Open a Windows shell in the folder where your makefiles are and run 'bash'.
You should now be able to compile Magic Lantern on Windows at *native* compile speed :)



Here is an "all-in-one" script by a1ex, slightly modified:

# prepare system
sudo apt-get update
sudo apt-get install make gcc gcc-arm-none-eabi mercurial gcc-mingw-w64 python3-docutils zip

# download and prepare ML
hg clone -u unified https://bitbucket.org/hudson/magic-lantern
cd magic-lantern
echo "GCC_VERSION=-`ls /usr/lib/gcc/arm-none-eabi/`" > Makefile.user
echo "ARM_PATH=/usr" >> Makefile.user

# preparation complete, now build ML
cd platform/5D3.123
make zip

# desktop utilities
cd ../../modules/mlv_rec
make mlv_dump.exe
cd ../../modules/dual_iso
make cr2hdr.exe

# ports in progress (100D, 70D)
hg update 100D_merge_fw101 -C # use TAB to find the exact name
hg merge unified # or lua_fix or whatever (optional)
cd ../../platform/100D.101
make zip

# 4K with sound
hg update crop_rec_4k_mlv_snd -C
cd ../../platform/5D3.123
make clean; make zip

# quick build (autoexec.bin only, without modules)
cd ../../platform/5D3.123
make zip ML_MODULES_DYNAMIC=

# recovery (portable display test, ROM dumper, CPU info...)
hg update recovery -C
cd ../../platform/portable.000
make zip ML_MODULES_DYNAMIC=





QEMU (or: how to run Canon OS within QEMU within the Linux environment within Windows 10 on an x64 CPU)

If you were successful in compiling Magic Lantern, then why not compile QEMU next?



Install the missing packages (please review these):

sudo apt-get update
sudo apt-get install zlib1g-dev libglib2.0 autoconf libtool libsdl-console flex bison libgtk2.0-dev mtools
sudo apt-get install libsdl-console-dev


The last one - libsdl-console-dev - caused some trouble: I could not download some (unnecessary) DRM graphics drivers.
I used aptitude to inspect the status and, don't ask me exactly what I did, but aptitude asked whether I wanted to examine its recommendations and I accepted them.
Suddenly libdrm was held back and all the other packages got installed.

You probably have to switch to the qemu branch:

hg update qemu


Then it is time to compile QEMU using the script in contrib/qemu/install.sh.
Make sure your Magic Lantern directory is named "magic-lantern", or the script will abort.
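A quick sanity check before running the install script - a minimal sketch that just verifies the name of the current directory:

```shell
# the install script expects the checkout to be named exactly "magic-lantern"
if [ "$(basename "$PWD")" != "magic-lantern" ]; then
    echo "rename your checkout directory to magic-lantern first"
fi
```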

Hint by a1ex (doesn't happen on my system):
for some reason, the output from install.sh is truncated.
Opening a new terminal appears to fix it (?!).
If it still doesn't work: ./install.sh |& tee install.log
then open install.log in a text editor to read it.

When it's done, do what it says:
    a) cd `pwd`/some_path_here
    b) ../configure_eos.sh
    c) make -j4   (or the number of cores your CPU has)

If you now run run_canon_fw.sh, you get an error telling you:

qemu-system-arm: -chardev socket,server,nowait,path=qemu.monitor,id=monsock: Failed to bind socket to qemu.monitor: Operation not permitted


My assumption is that the Unix domain socket implementation in WSL is either buggy or at least incompatible with QEMU.
So the script run_canon_fw.sh needs a small patch before it runs - remove these lines:


    -chardev socket,server,nowait,path=qemu.monitor,id=monsock \
    -mon chardev=monsock,mode=readline \



enjoy!
#4
This topic has been removed. No selling threads.
#5
Raw Video / Solar Eclipse MLV filming?
June 11, 2017, 04:20:55 PM
Hello there,

Inspired by this SmarterEveryDay video, I am really curious whether someone plans to
catch some cool phenomena during the eclipse on August 21st in the States using their Canon cameras and make a cool video of it.

e.g. the so-called "Shadow Bands" would for sure look better with MLV than with an iPhone camera as seen on YouTube :D
#6
Hi there.

i recently decided to make a "clean" database of the register map from our wiki.
i defined the data format and wrote a (win/.net) tool that prints a pretty representation of the registers, like in datasheets.

example:


        <EngineDescription Name="SDCON">
            <Registers>
                <Register Offset="0x000" Name="" Text="Unknown" Description="Set to 0x00 on init"/>
                <Register Offset="0x004" Name="" Text="Unknown" Description="Set to 0x01 on init"/>
                <Register Offset="0x008" Name="" Text="Unknown" Description="Set to 0x00 on init, 0x01/0xF1 before read/write, not used for status block. means: use DMA?"/>
                <Register Offset="0x00C" Name="" Text="Unknown" Description="Set to 0x14/0x13/0x12/0x11/0x02 on command, after writing regs +0x024, +0x020 and +0x010, with 0x11, registers +0x028/+0x02C is ignored probably"/>
                <Register Offset="0x010" Name="" Text="Status Register" Description="">
                    <RegisterFields>
                        <RegisterField xsi:type="Bit" Pos="0" Name="" Text="Transfer finished" Description="" />
                        <RegisterField xsi:type="Bit" Pos="1" Name="" Text="Error during transfer" Description="" />
                        <RegisterField xsi:type="Bit" Pos="20" Name="" Text="DAT transfer data available in reg +0x06C?" Description="" />
                        <RegisterField xsi:type="Bit" Pos="21" Name="" Text="DAT transfer finished?" Description="" />
                    </RegisterFields>
                </Register>
                <Register Offset="0x014" Name="" Text="Unknown" Description="Set to 0x03 before transfer start, 0x00 on ISR"/>
                <Register Offset="0x018" Name="" Text="Unknown" Description="Set to 0x08 on init"/>
                <Register Offset="0x020" Name="" Text="Command frame lower 32 bits" Description="needs 0x0001 being set (end bit)"/>
                <Register Offset="0x024" Name="" Text="Command frame upper 16 bits" Description="needs 0x4000 being set (transmission bit)"/>
                <Register Offset="0x028" Name="" Text="Unknown" Description="Written with 0x88/0x30/0x30 before CMD"/>
                <Register Offset="0x02C" Name="" Text="Unknown" Description="Written with 0x7F08/0x2701/0x80000000 before CMD"/>
               
                <Register Offset="0x034" Name="" Text="Data received lower 32 bits" Description=""/>
                <Register Offset="0x038" Name="" Text="Data received upper 16 bits" Description=""/>
               
                <Register Offset="0x058" Name="" Text="SD bus width" Description="">
                    <RegisterFields>
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="0---0  1 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="0---1  4 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="1---0  8 bit" Description="" />
                    </RegisterFields>
                </Register>
                <Register Offset="0x05C" Name="" Text="Write transfer block size" Description=""/>
                <Register Offset="0x064" Name="" Text="SD bus width" Description="">
                    <RegisterFields>
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="0---0  1 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="0---1  4 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="0" End="4" Name="" Text="1---0  8 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="20" End="27" Name="" Text="01100000  1 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="20" End="27" Name="" Text="01100000  4 bit" Description="" />
                        <RegisterField xsi:type="Bits" Start="20" End="27" Name="" Text="01110000  8 bit" Description="" />
                    </RegisterFields>
                </Register>
                <Register Offset="0x068" Name="" Text="Read transfer block size" Description=""/>
                <Register Offset="0x070" Name="" Text="Some flags" Description="set to 0x39 before transfer">
                    <RegisterFields>
                        <RegisterField xsi:type="Bit" Pos="0" Name="Transfer running" Text="" Description="" />
                    </RegisterFields>
                </Register>
                <Register Offset="0x07C" Name="" Text="Read/Write transfer block count" Description=""/>
                <Register Offset="0x080" Name="" Text="Transferred blocks" Description=""/>
                <Register Offset="0x084" Name="SDREP" Text="Status register/error codes" Description=""/>
                <Register Offset="0x088" Name="SDBUFCTR" Text="Buffer counter?" Description="Set to 0x03 before reading/writing"/>
            </Registers>
        </EngineDescription>
       


and the output will be

SDCON Engines
-------------------
  0xC0C10000    SDCON0
  0xC0C20000    SDCON1
  0xC0C30000    SDCON2
  0xC0C40000    SDCON3
    +0x0000    Unknown
                Set to 0x00 on init
    +0x0004    Unknown
                Set to 0x01 on init
    +0x0008    Unknown
                Set to 0x00 on init, 0x01/0xF1 before read/write, not used for status block. means: use DMA?
    +0x000C    Unknown
                Set to 0x14/0x13/0x12/0x11/0x02 on command, after writing regs +0x024, +0x020 and +0x010, with 0x11, registers +0x028/+0x02C is ignored probably
    +0x0010    Status Register
      -------- -------- -------- -------X     Transfer finished
      -------- -------- -------- ------X-     Error during transfer
      -------- ---X---- -------- --------     DAT transfer data available in reg +0x06C?
      -------- --X----- -------- --------     DAT transfer finished?

    +0x0014    Unknown
                Set to 0x03 before transfer start, 0x00 on ISR
    +0x0018    Unknown
                Set to 0x08 on init
    +0x0020    Command frame lower 32 bits
                needs 0x0001 being set (end bit)
    +0x0024    Command frame upper 16 bits
                needs 0x4000 being set (transmission bit)
    +0x0028    Unknown
                Written with 0x88/0x30/0x30 before CMD
    +0x002C    Unknown
                Written with 0x7F08/0x2701/0x80000000 before CMD
...



now my question is: will there be some helpers who try to transfer the information from our wiki and from a1ex's adtg_gui into a single XML file?

if you want to help, then
* pick the example registermap.xml
* and the pretty-printer for win32.
* go to the wiki/adtg_gui
* and add a missing section to the XML file
* check how it looks
* post it here in the forum :)

it's enough if you just post the

<EngineDescription Name="SIO">
    .....     
</EngineDescription>

and the corresponding group

<Group Name="SIO Engines" Engine="SIO" Device="Digic">
    <Engines>
        <Engine Address="0xC0820000" Name="SIO0"/>
        <Engine Address="0xC0820100" Name="SIO1"/>
        <Engine Address="0xC0820200" Name="SIO2"/>
        <Engine Address="0xC0820300" Name="SIO3"/>
    </Engines>
</Group>

everyone can then merge the new ones into their XML.
it's not too complicated, and you can get some insight into what is happening in those registers.
maybe some of you have findings that will improve the register map?

thanks :)
#7
Reverse Engineering / FRSP related infos
January 22, 2015, 10:55:35 PM
some reverse engineering notes.

for FRSP we first call FA_CreateTestImage to get a prepared image job.


struct struc_JobClass *cmd_FA_CreateTestImage()
{
  struct struc_JobClass *job; // r4@3
  unsigned int length; // [sp+Ch] [bp-44h]@3
  unsigned int *data_ptr; // [sp+10h] [bp-40h]@3
  struct struc_ShootParm tv; // [sp+14h] [bp-3Ch]@3

  DryosDebugMsg(0x90, 0x16, "FA_CreateTestImage");
  exec_command_vararg("sht_FA_CreateTestImage");
  msleep(20);
  if ( !word_2771C )
  {
    SRM_ChangeMemoryManagementForFactory();
  }
  ++word_2771C;
  PROP_GetMulticastProperty(PROP_SHUTTER, &data_ptr, &length);
  tv.Tv = *data_ptr;
  tv.Tv2 = *data_ptr;
  PROP_GetMulticastProperty(PROP_APERTURE, &data_ptr, &length);
  tv.Av = *data_ptr;
  tv.Av2 = *data_ptr;
  PROP_GetMulticastProperty(PROP_ISO, &data_ptr, &length);
  tv.ISO = *data_ptr;
  tv.PO_lo = 185;
  tv.PO_hi = 0;
  tv.TP = 153;
  job = CreateSkeltonJob(&tv, FA_CreateTestImage_cbr);
  DryosDebugMsg(0x90, 0x16, "hJob(%#lx)(tv=%#x,av=%#x,iso=%#x)", job, (unsigned __int8)tv.Tv, (unsigned __int8)tv.Av, (unsigned __int8)tv.ISO);
  DryosDebugMsg(0x90, 0x16, "FA_CreateTestImage Fin");
  return job;
}


it sets factory mode and reads Tv, Av and ISO into a struct struc_ShootParm


#pragma pack(push, 1)
struct __attribute__((packed)) __attribute__((aligned(1))) struc_ShootParm
{
  char Tv;
  char Av;
  char Tv2;
  char Av2;
  char ISO;
  char field_5;
  char unk_HI;
  char unk_LO;
  int field_8;
  int field_C;
  char field_10;
  char field_11;
  char WftReleaseCheck;
  char field_13;
  char field_14;
  char field_15;
  char field_16;
  char field_17;
  char field_18;
  char TP;
  char field_1A;
  char PO_hi;
  char PO_lo;
  char field_1D;
  char field_1E;
  char field_1F;
  char field_20;
  char field_21;
  char field_22;
  char field_23;
  char field_24;
  __int16 field_25;
  char field_27;
  char field_28;
  char field_29;
  char field_2A;
  char field_2B;
  int field_2C;
  char EshutMode__;
  char EshutMode_;
  char field_32;
  char field_33;
  int field_34;
  int field_38;
  int field_3C;
  char field_40;
};
#pragma pack(pop)


and then calls CreateSkeltonJob to create a job for these parameters


struct struc_JobClass *__cdecl CreateSkeltonJob(struct struc_ShootParm *shootParam, int (__cdecl *cbr)(int, int))
{
  int v4; // r0@1
  const char *v5; // r2@1
  int v6; // r3@1
  struct struc_memChunk *v7; // r0@4
  struct struc_JobClass *job; // r5@4
  signed int jobField; // r0@4
  int v10; // r1@5
  int v11; // r0@6
  int v12; // r0@10
  const char *v13; // r2@10
  int v14; // r3@10
  struct struc_Container *v15; // r0@13
  struct struc_Container *v16; // r0@14
  signed int v17; // r0@16
  struct struc_memSuite *Mem1Component; // r0@21
  void *v20; // [sp+0h] [bp-38h]@1
  struct struc_memSuite *suite; // [sp+8h] [bp-30h]@1
  int data; // [sp+Ch] [bp-2Ch]@3

  v20 = shootParam;
  suite = 0;
  DryosDebugMsg(0x8F, 5, "CreateSkeltonJob (%#x)", cbr);
  SRM_AllocateMemoryResourceForJobObject(0x114C, SRM_AllocateMemoryResourceFor1stJob_cbr, &suite);
  v4 = TakeSemaphoreTimeout((void *)dword_27A44, 0x64);
  if ( v4 )
  {
    v6 = v4;
    v5 = "SRM_AllocateMemoryResourceForJobObject failed [%#x]";
  }
  data = v4;
  if ( v4 )
  {
    goto LABEL_9;
  }
  v7 = GetFirstChunkFromSuite(suite);
  job = (struct struc_JobClass *)GetMemoryAddressOfMemoryChunk(v7);
  memzero(job, 0x114Cu);
  jobField = 0;
  do
  {
    v10 = 0x31 * jobField;
    job->jobs[jobField++].job_ref = job;
    job->jobs[4 * v10 / 0xC4u].signature = "JobClass";
  }
  while ( jobField < 3 );
  job->suite = suite;
  SRM_AllocateMemoryResourceForCaptureWork(0x40000, (int)SRM_AllocateMemoryResourceFor1stJob_cbr, (unsigned int *)&job->Mem1Component_0x4000_MEM1);
  v11 = TakeSemaphoreTimeout((void *)dword_27A44, 0x64);
  data = v11;
  if ( v11 || !job->Mem1Component_0x4000_MEM1 )
  {
    v5 = (const char *)"SRM_AllocateMemoryResourceForCaptureWork failed [%#x, %#x]";
    v20 = suite;
    v6 = v11;
LABEL_9:
    DryosDebugMsg(0x8F, 6, v5, v6, v20);
    data = 5;
    prop_request_change(PROP_MVR_REC, &data, 4u);
    return (struct struc_JobClass *)&unk_5;
  }
  SRM_AllocateMemoryResourceFor1stJob((int)SRM_AllocateMemoryResourceFor1stJob_cbr, (int)&job->ImageBuffer);
  v12 = TakeSemaphoreTimeout((void *)dword_27A44, 0x64);
  if ( v12 )
  {
    v14 = v12;
    v13 = "SRM_AllocateMemoryResourceFor1stJob failed [%#x]";
  }
  data = v12;
  if ( v12 )
  {
LABEL_18:
    DryosDebugMsg(0x8F, 6, v13, v14);
    return (struct struc_JobClass *)&unk_5;
  }
  memcpy_0(&job->ShootParam, shootParam, 0x31u);
  GetCurrentDcsParam(&job->DcsParam);
  jobSetUnitPictType(job, job->DcsParam.PictType);
  job->cbr = cbr;
  job->cbr_ptr = &job->cbr;
  job->field_25C = 1;
  job->JobID = dword_27A24 + 1;
  v15 = CreateContainerWithoutLock("JobClass");
  job->FileContainer = v15;
  if ( (unsigned __int8)v15 & 1 || (v16 = CreateContainerWithoutLock("JobClass"), job->JobClassContainer = v16, (unsigned __int8)v16 & 1) )
  {
    v14 = data;
    v13 = (const char *)"CreateContainerWithoutLock failed [%#x]";
    goto LABEL_18;
  }
  v17 = Container_AddObject(job->FileContainer, "Mem1Component", (int)job->Mem1Component_0x4000_MEM1, 0x40000, (int)sub_FF0F2008, 0);
  data = v17;
  if ( v17 & 1 )
  {
    v14 = v17;
    v13 = "AddObject failed [%#x]";
    goto LABEL_18;
  }
  Mem1Component = job->Mem1Component_0x4000_MEM1;
  job->pLuckyTable = &Mem1Component[0x2600];
  DryosDebugMsg(0x8F, 5, "Mem1Component 0x%x pLuckyTable 0x%x", Mem1Component, &Mem1Component[0x2600]);
  irq_disable();
  if ( !powersave_count )
  {
    cmd_DisablePowerSave();
  }
  ++dword_27A24;
  ++powersave_count;
  irq_enable();
  return job;
}


the job structure is this one:


#pragma pack(push, 4)
struct struc_JobClass
{
  struc_JobClassListElem jobs[3];
  _BYTE gap24C[4];
  struct struc_memSuite *suite;
  int (__cdecl **cbr_ptr)(int, int);
  int (__cdecl *cbr)(int, int);
  int field_25C;
  int JobID;
  int field_264;
  int field_268;
  int ObjectID;
  int field_270;
  int field_274;
  int field_278;
  int Destination;
  struct struc_ShootParm ShootParam;
  struc_AfterParam AfterParam;
  __attribute__((aligned(4))) struct struc_DcsParam DcsParam;
  int ShootImageStorage;
  struct struc_memSuite *ImageMemory_0x4_JPEG_L;
  struct struc_memSuite *ImageMemory_0x1_JPEG_M;
  struct struc_memSuite *ImageMemory_0x1_JPEG_S;
  struct struc_memSuite *ImageMemory_0x40000000;
  struct struc_memSuite *ImageMemory_0x80000000;
  struct struc_memSuite *ImageMemory_0x40;
  struct struc_memSuite *ImageMemory_0x20;
  struct struc_memSuite *ImageMemory_0x10;
  struct struc_memSuite *ImageMemory_0x800;
  struct struc_memSuite *ImageMemory_0x200_JPEG_M1;
  struct struc_memSuite *ImageMemory_0x400_JPEG_M2;
  struct struc_memSuite *ImageMemory_0x100;
  struct struc_memSuite *ImageMemory_0x10000;
  struct struc_memSuite *ImageMemory_0x8000;
  struct struc_memSuite *ImageMemory_0x4000;
  struct struc_memSuite *ImageMemory_0x1000;
  struct struc_memSuite *ImageMemory_0x2000;
  struct struc_memSuite *ImageMemory_0x20000000_RAW;
  struct struc_memSuite *ImageMemory_0x10000000;
  struct struc_memSuite *ImageMemory_0x1000000;
  struct struc_memSuite *ImageMemory_0x80000;
  struct struc_memSuite *ImageMemory_0x400000;
  struct struc_memSuite *ImageMemory_0x100000;
  struct struc_memSuite *ImageMemory_0x200000;
  int field_F88;
  _BYTE gapF8C[140];
  int field_1018;
  int field_101C;
  struct struc_Container *FileContainer;
  void *JobClassContainer;
  struct struc_memSuite *Mem1Component_0x4000_MEM1;
  int field_102C;
  int field_1030;
  int DonePictType;
  int field_1038;
  struct struc_memSuite *ImageBuffer;
  int HDRCorrectImageBuffer;
  int HDRUnderImageBuffer;
  int HDROverImageBuffer;
  int field_104C;
  int field_1050;
  int field_1054;
  int field_1058;
  struct struc_memSuite *ImageMemory_0x2000000;
  int field_1060;
  int field_1064;
  int field_1068;
  _BYTE gap106C[116];
  int field_10E0;
  void *pLuckyTable;
  struct struc_LuckyParm LuckyParam;
  _BYTE gap1128[16];
  int BackupWbOutList;
  int BackupLensOutList;
  int BackupFnoOutList;
  int BackupLongExpNoiseReductionList;
  int BackupMultipleExposureSettingList;
};
#pragma pack(pop)


this job is then returned to silent.mo which will start the capturing process using FA_CaptureTestImage


void __cdecl cmd_FA_CaptureTestImage(struct struc_JobClass **hJob)
{
  struct struc_JobClass *job; // r4@1
  int fa_flag; // [sp+0h] [bp-10h]@1

  job = *hJob;
  DryosDebugMsg(0x90, 0x16, "FA_CaptureTestImage(hJob:%#lx)", *hJob);
  SCS_FaSetSkeltonJob(job);
  faGetProperty(PROP_FA_ADJUST_FLAG, &fa_flag, 4u);
  fa_flag |= 4u;
  faSetProperty(PROP_FA_ADJUST_FLAG, &fa_flag, 4u);
  msleep(20);
  sht_FA_ReleaseStart();
  exec_command_vararg("sht_FA_ReleaseStart");
  msleep(20);
  sht_FA_ReleaseData();
  exec_command_vararg("sht_FA_ReleaseData");
  if ( TakeSemaphoreTimeout(FactRC_Semaphore_2, 20000) & 1 )
  {
    DryosDebugMsg(0x90, 6, "ERROR TakeSemaphore");
  }
  fa_flag &= 0xFFFFFFFB;
  faSetProperty(PROP_FA_ADJUST_FLAG, &fa_flag, 4u);
  DryosDebugMsg(0x90, 0x16, "FA_CaptureTestImage Fin");
}


the exposure is started and data is retrieved with

signed int sht_FA_ReleaseStart()
{
  return StageClass_Post(ShootCapture->StageClass, ShootCapture, 1, 0, 0);
}
signed int sht_FA_ReleaseData()
{
  return StageClass_Post(ShootCapture->StageClass, ShootCapture, 2, 0, 0);
}


FA_ReleaseData calls the CBR FA_CreateTestImage_cbr(), which releases the semaphore FactRC_Semaphore_2.
this CBR was registered in FA_CreateTestImage via CreateSkeltonJob(&tv, FA_CreateTestImage_cbr).

the data in the job could, imho, be read using GetImageBuffer():


void *__fastcall GetImageBuffer(struct struc_JobClass *job)
{
  void *result; // r0@2

  if ( job->jobs[0].signature == "JobClass" )
  {
    result = job->jobs[0].job_ref->ImageBuffer;
  }
  else
  {
    DryosDebugMsg(0x8F, 6, "GetImageBuffer failed");
    result = &byte_7;
  }
  return result;
}



#8
Raw Video / MLV-Recovery with PhotoRec
October 15, 2014, 12:26:02 AM
Christophe Grenier added MLV support to his great tool PhotoRec, which recovers all important file formats.
So if you encounter card or file system trouble, download the latest 7.0-WIP version from his download page and recover as much as possible.

Thanks, Christophe!
#9
Reverse Engineering / Datasheet sharing folder
October 04, 2014, 12:38:43 AM
If you want to help us organize the datasheets and service manuals related to canon cameras, there is a simple way to do so.

install BitTorrent Sync and add these folders:

  Datasheets: BN5KE7A7OFOJ7LKOJQAUCFS3P5RWD5W5R (read only access)
  Contributions: ABBF35JA4KTEB7MOE6NSQEPIN2OVLPSKB (read/write access)

our Datasheet directory contains all publicly accessible datasheets for devices found on Canon cameras,
plus datasheets that are relevant to our reverse engineering work.

if you find a datasheet that might be interesting for us (PDF preferred), just copy it into the Contributions folder.
BitTorrent Sync will synchronize the folder with us - no need to upload it somewhere and share the download link.
you can place anything there that helps our reverse engineering work.

of course, if it violates anyone's rights, we will remove it from that public folder, storing it in a safe place :)
#10
Share Your Photos / Some of my favorites
September 19, 2014, 10:41:35 PM
some of my favorites :)

nothing photoshopped, just LR.
one or two of them have some strong effects applied, like grain and vignetting.











#11
This is a statement about how the Magic Lantern team positions itself regarding copyleft discussions.

As some may have noticed, there was a lengthy discussion about the GPL and violations of it in post-processing tools designed to work with the files produced by Magic Lantern.
Let us first define why we use GPL and what it is for.  Please read this for a detailed and formal description.

This explanation is a condensed one to clarify our position.

We, the Magic Lantern developers, provide Magic Lantern and its suite of tools on a free basis (free as in beer), and everything we give to you is the result of several thousand hours of work, either researching or programming.  Along with binary versions, you get all of the source code for Magic Lantern and its suite of tools.

Our intention:
To drive forward the Magic Lantern project through open-source development, be that through development of the core code, modules, post-processing applications, or any other applications designed to work primarily with the Magic Lantern project.

The only things we ask in return:

  • Contribute back to the Magic Lantern project if you make improvements to it.
  • Honor our decision that this code is free, and help to establish and support the free nature of Magic Lantern.
  • If you use the code, or parts of it and distribute it (or even sell it), you must release this code (per the GPL).
  • Don't act against common sense.
Unfortunately, even after a lengthy discussion, there were authors who used our GPLed code in their binary-only tools without redistributing the source code of their tool, and without even mentioning that they use GPL code and where they obtained that code (appropriate credit).  Not only is this a violation of the GPL, it is also rude to the developers who provided the original code.
There was no consensus during that discussion, so we were asked to write down what we clearly expected to be common sense.


We think it's time to start actions against such behavior:
Due to the nature of these binary applications, and the actions of their developers, the Magic Lantern team cannot provide any assistance for these applications, and as such all related threads will now be closed.  The affected application developers are free to work with the Magic Lantern development team if they would like to move forward in helping the Magic Lantern project.
If no progress is shown, these threads will be deleted, and the application developers can seek other avenues of support for their applications.

Closed source application developers who implemented their applications on their own, without re-using any of our GPL code, or those who got some exclusive permission (dual-licensed code) through the Magic Lantern developers, are of course not affected.
Naturally, application developers who implement their applications as open source, are also not affected.

What does this mean for developers:
We prefer open sourced development, whether through the use of the code base already available from this project, or entirely on your own.
And of course we tolerate any closed source application, as long as it doesn't violate GPL terms, even if it is commercial.
But we will definitely take action against commercial closed source tools that use GPLed code without first asking the affected devs for an exclusive license.

Compressed view of categories:
a) open source, using our code [preferred]
b) open source, not using our code [preferred]
c) closed source, not using our code [tolerated]
d) closed source, commercial, not using our code [tolerated]
e) closed source, using our code [asked to publish source, ban likely]
f) closed source, commercial, using our code [banned]

What does this mean for end users:
From now on, we discourage everyone from using those applications that have their threads closed.
Using, testing and providing your bug reports for the remaining applications, helps drive forward the Magic Lantern project.
To clarify: only two tools fall into categories e) and f) and will face actions against them; both of them are essentially "wrapper GUIs".
The professional tools are not affected at all - they know how to behave.

Contact:
If you have any questions or queries regarding the Magic Lantern source code (including in your own applications), or any licensing queries, please contact a1ex or g3gg0.

Respect the developers who provide original code!


ML developers and contributors

    Audionut
    Gr3g01
    MarsBlessed
    Marsu42
    OtherOnePercent
    Pelican
    Simon Dibbern
    Sticks
    [0xAF]
    a1ex
    a_d_
    af
    andrewjohncoutts
    andyperring
    antonynpavlov
    arm.indiana
    ayshih
    bnvm
    britom
    broscutamaker
    cbob5435
    chris.nz
    cjb
    count-magiclantern
    david.l.milligan
    dhessel
    dkelly11
    dlrpgmsvc
    escho
    flameeyes
    freemed
    g3gg0
    gary.mathews.93
    go
    grumpyriffic
    grzesiekpl
    hipescho
    housebox
    houz
    hudson
    info
    jarno.paananen
    joao_pedro_lx
    jordancolburn
    josepvm
    kedzierski.m
    kichetof
    kotyatokino
    leigh_tuck
    ltuck
    mahonrig
    mail
    marazmarci
    marcus
    me
    meeok
    michael.angle
    minimimi4649
    mk11174
    morghus
    nanomad
    nospam
    nsr204
    nviennot
    pdavis
    pel
    phil.a.mitchell
    piersg
    ppluciennik
    pravdomil.toman
    roald.frederickx
    rob
    rudison
    sc1ence
    scrizza
    sodapopodalaigh
    sven
    swinxx
    trsaunders
    ubbut
    up4
    viniciusatique
    vladimir.vyskocil
    w01f
    zloe
#12
Update 06.02.2016:

Please use the MLVFS FUSE driver linked below; it will soon have builds for win32 that use the dokany VFS driver.
dokany is open source, actively developed, and allows using FUSE drivers without API changes.
(see: https://github.com/dokan-dev/dokany )

as long as there is no official release, you can use this build after you have installed dokany.

BR,
g3gg0

---


Hello,

inspired by the brilliant idea of the FUSE MLVFS driver from dmilligan and ayshih, which allows you to mount MLV files as directories,
i wondered how to make their code/idea available to us Windows users too.
The underlying system they use, called FUSE, is a file system extender for unix-like systems that wraps normal file calls so that an application can do arbitrary stuff with them.
In the case of MLVFS, they simulate that MLV files are directories that contain DNGs. If you read a DNG from that virtual directory, it is created on-the-fly.
Unfortunately there is no real alternative for Windows users to load the MLVFS daemon that is designed for FUSE.

But i found a simple way to give Windows users the same experience as unix users get with MLVFS.
Back in my Symbian OS days, i created a WebDAV server that allows Symbian phones to mount directories on your Windows computer (see my old site).
So i could use my Symbian phone to browse directories on my computer at home (MP3s and such).

I've added MLV support and browsing it as virtual folders! Should work from WinXP up to Win8.

Supported (generated) file types:
- 16 bit DNG
- JPEG for previews
- WAV in case of audio-MLV
- RAW FITS with metadata for astrophotography (monochrome raw bayer mode) for e.g. DeepSkyStacker
- a text file containing all important metadata in human/script readable form

Features:
- you can select your MLV folder on HDD or memory card and browse it just like a normal directory - as soon as there is an MLV, it is simulated as a directory
- any write access is redirected into a separate subfolder (<mlv_filename>.MLD), just like with the original FUSE driver
- overwriting and modifying the virtual files is also possible - files get copied into the virtual folder then
- deleting all files in the virtual folder will remove only the files in the .MLD subdirectory, so you will have a clean MLV again
- MLV files are *never* modified when doing stuff with files in the directory, unless you delete the directory from its parent folder
- you can enable/disable any file type separately
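The write redirection above boils down to a simple path mapping rule. Here is a minimal C sketch of that rule (the real server is written in C#; the function name and path layout here are illustrative only):

```c
#include <string.h>

/* Map a path inside the virtual "VIDEO.MLV" directory to the real
   "VIDEO.MLD" sidecar directory next to the MLV file. */
static int redirect_write_path(const char *virtual_path, char *real_path, size_t max)
{
    const char *ext = strstr(virtual_path, ".MLV/");

    /* not inside a virtual MLV directory, or destination buffer too small */
    if (!ext || strlen(virtual_path) + 1 > max)
    {
        return -1;
    }

    strcpy(real_path, virtual_path);

    /* replace the 'V' in ".MLV/" with 'D' -> ".MLD/" */
    real_path[(ext - virtual_path) + 3] = 'D';

    return 0;
}
```

So a write to `X:/clips/M20_0001.MLV/000123.DNG` ends up in `X:/clips/M20_0001.MLD/000123.DNG` and the MLV file itself is never touched.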

Caching:
- it is currently disabled, due to memory issues :(
   (i cannot catch out of memory exception properly, as it may happen anywhere)



Select a drive letter that is free and press "Map", it will connect the share to a network drive.
You can also do it manually using the shell by typing:
net use x: \\127.0.0.1@8080 (change letter and port accordingly)

If you close the window, it will minimize into systray, showing a star icon and run in background.

Windows computers can also mount the shares as a local network drive.
So i extended this tool to be a bit more responsive and added MLV support using MLVViewSharp and the DNG code from dmilligan's MLVFS daemon.
Now you can browse the MLV files as if they were directories, showing you the frames as DNGs, JPGs for preview and a WAV if it contains audio.



You can also save files and folders "into" that MLV file. All files get redirected into a separate directory named after the MLV file itself, with an extra "_store" suffix.


Just like with the unix-version, you can use (hopefully) all your tools with that mapped network drive.
For instance here i import the DNG frames using LightRoom:




Running WebDAVServer as Windows Service:
You can install the tool as a Windows Service which will automatically start on system boot.
To do this, first start the tool as Administrator (right-click -> Run as Administrator).
Then set up all options like Path, Port and Auth - don't forget to press the "Write" button to save a default config.
Now you have written a default config that is always loaded whenever the server starts (both as service and as normal app).
To install the service, simply press "Install".

If this was successful, the "Install" button goes inactive and the "Uninstall" button activates.
The buttons "Start" and "Stop" are for starting and stopping the service.

Since the service has no GUI, sometimes it makes sense to stop the service and use the normal mode instead.

Web Browser Access:
You can access the server with your web browser and browse the contents of your share as the phone would see it.
There are some debug and log views too (check the links on top). If authentication is required (username/pass) the log/debug view is crippled to prevent abuse.
Accessing MLV content using the web browser is not implemented yet. Anyone who needs it?

Download:
Download the current version of the "MLV WebDAV Server" here
Download the source code on bitbucket

Important Hints:
Windows is by default very sluggish when accessing WebDAV shares.
Please disable "Automatic proxy detection" in your Internet Explorer, as microsoft suggests here.
If you don't do that, accessing the mounted drive is very slow. It's a problem with windows itself.
And yes, it's important for chrome and firefox users too ;)

If you want to use authentication on Windows Vista and above, you have to apply a registry patch that enables user/pass authentication.
Please install the fix from the microsoft article here.

If you get DLL errors, you might have to install the MSVCRT runtime libraries from here


Legals:
This program is licensed under the GPL v2 license.
This code contains GPL code from MLVFS, a GPLed FUSE library for accessing MLV files.
To be specific, the whole RAW->DNG code was taken from there.
#13
recently i've been trying hard to use hardware engines to process raw video for playback.
unfortunately i did not get far enough to say we have realtime playback.

but using a hardware engine, i was able to speed up processing at least a bit.

an example video with 600 frames, 24fps, 1920x1080, of 25 seconds length
takes 96 seconds using "color" and "all", which means it will play all frames with nice colors and no skipping.

this uses a tweak module "raw_twk" (see source in unified), which adds new methods to the latest
mlv_play to play raw with improved speed.

warning:
a) use it at your own risk
b) only compatible with mlv_play
c) THIS IS REALLY EXPERIMENTAL, DONT DARE TO POST HERE IF IT RUINED SOME SHOOT YOU MADE!
d) 5D3 only yet

maybe this is stable enough to make use of it in ML core?

it makes use of the "ProcessPathForFurikake" DSUNPACK/DARK/PACK16/WDMAC16 engines, which receive the 14bpp raw stream and align it correctly into 16bpp.
this eases the way we can read out the pixel data, and basically the most CPU-expensive thing left is rgb->yuv.

for this reason i've also improved rgb2yuv:

left: original code, right: handcrafted assembly (there are also 6 words of constants not shown)


source
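For reference, the rgb2yuv step can be sketched in plain C like this (JPEG/BT.601 full-range coefficients in 8 bit fixed point; the actual ML code is handcrafted ARM assembly and may use slightly different constants):

```c
#include <stdint.h>

/* Scalar reference of an RGB -> YUV conversion. Coefficients are scaled
   by 256; each row sums such that the result stays in 0..255, so no
   clamping is needed. */
static void rgb2yuv(uint8_t r, uint8_t g, uint8_t b,
                    uint8_t *y, uint8_t *u, uint8_t *v)
{
    *y = (uint8_t)((         77 * r + 150 * g +  29 * b) >> 8);
    *u = (uint8_t)((32768 -  43 * r -  85 * g + 128 * b) >> 8); /* +128 bias folded in */
    *v = (uint8_t)((32768 + 128 * r - 107 * g -  21 * b) >> 8); /* +128 bias folded in */
}
```

Doing this per pixel for 1920x1080 frames is exactly why the inner loop is worth hand-optimizing.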
#14
more here:
https://bitbucket.org/hudson/magic-lantern/pull-request/484/black-level-fix/diff
for those who know how to compile ML, please test it so we can merge that.
#15
two days ago a patch was committed to ffmpeg official source code database that
adds support for our Magic Lantern Video (MLV) format produced by mlv_rec.

how did it come to this?
the FFMPEG team applied for Google Summer of Code (GSoC) to add raw bayer support to their libraries.
i talked to peter and suggested he look at our video format as an open source raw video format that is free of any royalties.
so they could continue to improve their raw support with already existing footage in this simple video file format.

peter then started to implement the format reader within only a few days :)
the nightly build of ffmpeg can already play .mlv videos using ffplay, just the coloring isn't finished yet.

the commit is here

a big thank you to Peter Ross and Michael Niedermayer for making this happen :)

#16
current state:
a thing that i personally consider a bit odd in the magic lantern core is how the graphics code works.
everything, even fonts, is printed on screen directly. this can e.g. cause weird flickering if you redraw stuff.

when you want to "build" up a graphic and display it with only one operation, like a simple BitBlt, you
have no other choice than implementing your drawing routines on your own.
for creating graphics, like the plots alex is doing, you can only write them on screen and then do a screenshot.
(in the hope that nothing printed over your graphs while you built them)

also these operations are only possible in the bitmap buffer, which is an 8 bit per pixel indexed (palette) graphics buffer.
drawing/printing into the vram is not supported at all.


change:
for that reason i am starting a discussion about a new graphics backend that should cover all current and (hopefully) future needs.
i hope that you take part in the discussion and maybe we can find someone who is able to implement it.

@all devs:
do you think this API will be useful and would cover all use cases we currently have or will face in the future?
(e.g. painting on ML's own back and front buffers before printing them on screen etc?)

@future devs:
this code is really simple to implement. you don't have to know ML or canon. this can even be tested standalone on the computer.
anyone who is interested in implementing it?


fundamentals:
- every operation happens on a "context" which tells the graph routines where they have to draw on (screen, ram, etc)
- all necessary information for drawing must be accessible in the context being passed
- every operation must support 8bpp and YUV color modes
- the graphics type is designed to be compatible to the screen buffers (BMP, LV, HD) without any hacks
- for easy usage, there are predefined contexts that are meant for printing on screen directly
   (e.g. when specifying CANON_BMP_FRONT the code will pick an internal graph_t which contains the screen's bmp configuration)


prototypes:
/* image data is stored in YUV422 packed, also known as YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr  */
#define PIX_FMT_YUYV422  0
/* image data is stored as 8 bits per pixel indexed. palette can be specified optionally */
#define PIX_FMT_PAL8     1

/* special cases: when specifying them, the routines will render on screen directly */
#define CANON_BMP_FRONT ((graph_t *) 1)
#define CANON_BMP_BACK  ((graph_t *) 2)
#define CANON_VRAM_HD   ((graph_t *) 3)
#define CANON_VRAM_PREV ((graph_t *) 4)

/* for any copy operation, specify how to proceed when destination dimensions differ from source */
#define COPY_MODE_CROP   0
#define COPY_MODE_SCALE  1
#define COPY_MODE_BILIN  2


typedef struct
{
    /* pointer to raw image data */
    void *data;
    /* pixel format as specified in PIX_FMT macros */
    uint32_t pixel_format;
    /* image dimensions */
    graph_size_t size;
    /* optional palette, especially important when saving or copying to YUV targets */
    graph_palette_t *palette;
    /* if non-NULL, the graphic will be locked when drawing on it (t.b.d) */
    void *lock;
} graph_t;

typedef struct
{
    /* image width in pixels, visible content only */
    uint32_t width;
    /* image height in pixels, visible content only */
    uint32_t height;
    /* how wide every pixel line is, given in pixels */
    uint32_t pitch;
    /* number of invisible pixels left of the image data */
    uint32_t x_ofs;
    /* number of invisible pixels above the image data */
    uint32_t y_ofs;
} graph_size_t;


/* draw a single dot, color depends on image format. either palette index or full YUV word. size=1 must be optimized */
uint32_t graph_draw_pixel(graph_t *ctx, uint32_t x, uint32_t y, uint32_t radius, uint32_t color);
uint32_t graph_draw_line(graph_t *ctx, uint32_t x1, uint32_t y1, uint32_t x2, uint32_t y2, uint32_t radius, uint32_t color);
uint32_t graph_draw_rect(graph_t *ctx, uint32_t x1, uint32_t y1, uint32_t x2, uint32_t y2, uint32_t line_color, uint32_t fill_color);

/* width/height may be zero for auto */
uint32_t graph_copy(graph_t *dst, graph_t *src, uint32_t x, uint32_t y, uint32_t width, uint32_t height, uint32_t copy_mode);

/* font_t is the font type we use with bmp_printf etc */
uint32_t graph_printf(graph_t *dst, uint32_t x, uint32_t y, font_t font, char *msg, ...);

/* can be used to get the palette of the canon screen */
graph_palette_t *graph_get_palette();


/*
    data pointer is pointing here
   /
  |________________________________________________
  |         ^                                      |
  |         | y_ofs                                |
  |         |                                      |
  |       __v______________________________        |
  | x_ofs|                          ^      |       |
  |<---->|                   height |      |       |
  |      |                          |      |       |
  |      |       (image content)    |      |       |
  |      |                          |      |       |
  |      |             width        |      |       |
  |      |<-------------------------|----->|       |
  |      |__________________________v______|       |
  |                                                |
  |________________________________________________|
  |                     pitch                      |
  |<---------------------------------------------->|
 
*/



example:
e.g. either call
    graph_draw_pixel(CANON_BMP_FRONT, 10, 20, 1, COLOR_WHITE);
or
    graph_draw_pixel(my_own_graph, 10, 20, 1, COLOR_WHITE);
where 'my_own_graph' is a pointer to a custom graph context.
this may be displayed on screen later or saved using appropriate routines.


/* init sample graphic */
graph_t *my_own_graph = graph_alloc(PIX_FMT_PAL8, 1024, 768);

/* set a dot (width 1) */
graph_draw_pixel(my_own_graph, 10, 20, 1, COLOR_WHITE);

/* draw an ellipse, width 2 */
graph_draw_circle(my_own_graph, 90, 90, 40, 80, 2, COLOR_WHITE);

/* save it */
graph_save_bmp(my_own_graph, "ML/DATA/PLOT.BMP");

/* width/height may be zero for auto */
graph_copy(CANON_BMP_FRONT, my_own_graph, 0, 0, 0, 0, COPY_MODE_CROP);


#17
Status: experimental, need your testing!

Short description:
io_crypt is a module which automatically encrypts .CR2 and .JPG while you shoot them.
The original file content is never written to card, so there is no way to restore the image content by reading the raw sectors etc.
You can choose between different modes and security levels.
This was formerly discussed there and was requested already a few times.

Detailed description:
This module hooks the file-io operations for your SD and CF card and places custom read/write routines instead.
These custom r/w operations encrypt your file content before the card's real write handler is being called.
For you there is no additional task to do after you shot the image - just shoot as usual and your files are encrypted.

There are two possible modes:
- Password
    Before you shoot images, you have to enter a password which is being used for all images
    The password gets fed into a LFSR (Linear Feedback Shift Register) to shuffle the bits and get a 64 bit file key.
    advantage: you can enter different keys, one per "session" or "access level" and share them accordingly
    disadvantage: you have to enter the key every time you power on the camera (storing is insecure of course)

- RSA
    Before you start your shooting, you create an RSA public/private key pair via menu.
    (edit: this takes up to 10 minutes with a 4096 bit key!!)
    Then you copy the private key from your card (ML/DATA/IO_CRYPT.KEY), store it in a safe place and delete it from your card (!!).
    You need the private key only for decrypting (on the computer), the public key only for encrypting (on the camera).
    With the internal PRNG, a separate file key is generated for every image and encrypted using RSA.
    advantage: no password must be entered, just power on and shoot. every image has a different, random "password"
    disadvantage: you have to prepare yourself a bit by copying and deleting the encryption keys correctly

In both modes, the file content is encrypted using an XOR operation with the output of a 64-bit LFSR that was pre-loaded with the file key and the current block number.
To make random access feasible and the encryption fast enough, the keys are used blockwise.
This obviously weakens the encryption a lot and makes it possible to recover the 64 bit block encryption key using known-plaintext attacks.
The good thing - known-plaintext attacks are only suitable for file content that has a predictable pattern, like the file header.

Still, the encryption i implemented is *not* military grade. Although it is (imho) safe enough for a normal individual.
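To illustrate the scheme, here is a minimal sketch of blockwise LFSR/XOR stream encryption. The polynomial and the key/block mixing below are made up for demonstration purposes; io_crypt uses its own 64 bit LFSR variant:

```c
#include <stdint.h>
#include <stddef.h>

/* One step of a 64 bit Galois LFSR (example taps, NOT io_crypt's real polynomial) */
static uint64_t lfsr64_next(uint64_t state)
{
    uint64_t lsb = state & 1;
    state >>= 1;
    if (lsb)
    {
        state ^= 0xD800000000000000ULL;
    }
    return state;
}

/* XOR one block of data with the LFSR keystream. Seeding the LFSR from
   the file key and the block number is what makes random access possible:
   any block can be (de)crypted without processing the ones before it. */
static void crypt_block(uint8_t *buf, size_t len, uint64_t file_key, uint64_t block_num)
{
    uint64_t state = file_key ^ block_num;
    if (!state)
    {
        state = 1; /* an LFSR must never be seeded with all zeros */
    }
    for (size_t i = 0; i < len; i++)
    {
        state = lfsr64_next(state);
        buf[i] ^= (uint8_t)state; /* XOR is its own inverse: same call decrypts */
    }
}
```

This also shows the weakness mentioned above: whoever knows the plaintext of one block can recover that block's keystream directly.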

Options:
    Blocksize
    The block size that is being encrypted with the same 64 bit key.
    larger is faster, but insecure. smaller values slow down saving. choose.
   
    Ask for password on startup
    If you are in Password mode, camera will ask for password right after poweron.
    When disabled, you have to enter the menu manually and set the key - else no pictures will be encrypted.

    RSA Keysize
    Choose the largest value that you can tolerate. The larger the size, the longer key generation will take (up to 10 minutes...).
    Saving will also slow down a bit with larger keys.


Image review:
Canon caches the images you have shot until you power off the camera or the memory gets full (5-10 images).
As long as the images are in the cache, you can review them without any problem, even if you change the key.

In RSA mode you currently can *not* review images other than those in the cache. Not sure if i will implement it at all.
In Password mode, you can view images when you set the correct password.

Decryption:
After you copied the files onto your computer, you can decrypt them with io_decrypt, which is not yet available precompiled, but you can get it from the repository.

./io_decrypt <in_file> [out_file] [password]

If you want to decrypt password protected files (LFSR64), you have to supply the encryption password on commandline.
For RSA encrypted files, the private key ML/DATA/IO_CRYPT.KEY must be in the current directory.

Compatibility:
The module contains some camera specific memory addresses, so it has to be ported for every model.
Cameras that are supported: 7D, 5D3, 60D, 600D, 650D
Next cameras being added: 5D2, 6D
If you have a different model and want to use/test the module, please post it here.

Disclaimer:
1. Do not do any illegal stuff with it.
2. It is meant for e.g. reporters whose security depends on the footage not being revealed, or for securing sensitive information.
3. Don't rely on it. It will for sure fail at some point and your footage will be gone.
4. Don't cry when something goes badly wrong.


Download:
You can always download my latest build there
here is the windows console decrypter.


ToDo:
- Show fake images instead of the standard canon error screen
- background encryption for unsupported models. will scan, encrypt and save the images in background while your camera is idle.



#18
here is a _very_ simple and hackish MLV viewer to check your footage
it is also available as an OSX-App


it will read uncompressed MLV files and display the frames at just a few frames per second.
this tool was programmed in C# on windows, but it uses nothing windows-specific, so it should run on any OS using mono. (positive reports from linux and mac os x)

please remember:
- these tools are just a PROOF OF CONCEPT
- it is not meant as a production tool
- i used it to check what is necessary to decode and view RAW/MLV files, it's just my playground
- it has bugs!
- it will most likely not be continued
- i shared it as a last-resort tool in case you need something like that

MLVViewSharp:


notes:
- just drop the .mlv or .m00, .raw, .r01, ... file into the program window
- shows the video in full res using bilinear demosaicing
- other debayering methods (e.g. fast ones) are available (right click onto the image)
- ramps exposure up/down if there is under/overexposure (so it may not be accurate enough for some of you)
- has no white balance algorithm
- just tested on 5D3, other cameras have different bayer patterns - didn't check them yet
- it used the coefficients from the raw info block so color weighting would be correct, but this caused trouble and is disabled
- the scene is scaled to TV black and white levels (16..235) for better looking playback
- updated to work with files that have less than 14 bpp (e.g. when mlv_dump was used to reduce size)
- supports both .mlv and the old .raw file format

it has a white balancing feature:
- press and hold SHIFT
- press the LEFT mouse button
- the image will get displayed 1:1
- move to where you have a gray level
- release the LEFT mouse button
it will pick an 8x8 pixel area and use it as white balance reference after debayering and kelvin correction.
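The idea behind the gray-patch pick can be sketched like this: average the 8x8 area and derive per-channel gains that make it neutral. The function and type names are illustrative, not taken from MLVViewSharp (which is C#):

```c
#include <stdint.h>

typedef struct { float r, g, b; } wb_gains_t;

/* Compute white balance gains from an 8x8 patch of interleaved 8-bit RGB.
   Gains are normalized so green stays at 1.0, as usual for bayer sensors. */
static wb_gains_t wb_from_patch(const uint8_t *rgb, int stride, int x, int y)
{
    uint32_t sum_r = 0, sum_g = 0, sum_b = 0;

    for (int dy = 0; dy < 8; dy++)
    {
        for (int dx = 0; dx < 8; dx++)
        {
            const uint8_t *p = &rgb[(y + dy) * stride + (x + dx) * 3];
            sum_r += p[0];
            sum_g += p[1];
            sum_b += p[2];
        }
    }

    wb_gains_t gains;
    gains.r = sum_r ? (float)sum_g / (float)sum_r : 1.0f;
    gains.g = 1.0f;
    gains.b = sum_b ? (float)sum_g / (float)sum_b : 1.0f;
    return gains;
}
```

Multiplying each pixel's channels by these gains maps the picked patch to neutral gray.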

camera color matrices are also used now, which should result in better colors

to disable all correction post processing:
- right click to get context menu
- there is an option to disable color correction

MLVBrowseSharp:


- just select the folder to browse in the left pane
- you can select multiple files in the right pane as you are used to (CTRL click etc)
- the only thing you can do yet is RIGHT CLICK and choose anything you want
- for opening files with the MLV Viewer, please assign the .mlv extension to MLVViewSharp.exe
- for every file visible, it creates a thread, so this might overload your computer when you have several hundred files in one directory (i hope you sort your footage better than this...)
- selected files play back at maximum speed, unselected ones play slower (1 fps)
- selecting a file causes it to play back from the beginning
- you cannot set WB or debayering in the preview window (it's simple to add, but i don't understand why someone would need that)

- when a file is selected, you can CLICK and HOLD the left mouse button on the icon and DRAG left and right to seek in the file
#19
as modules got really useful lately and some versioning and updating issues came up, we should
think about handling variants of modules, different revisions and automated module updating.

a year ago, every dev released a customized autoexec.bin; today we all use basically the same autoexec.bin, but vary the modules.
this is a really good transition that made us think more modularly and separate concerns into individual modules.

but before the modules become a mess, it would make sense to introduce a repository with all the latest modules and even branches or variants of standard modules.
i have no 100% clear view of how to set up the repository and the branch system exactly, i still have a rather high-level view on this.

implementation level:
- heavy duty: using hg (bitbucket) and a separate repository called e.g. ml-modules
   pro:
     - perfect branch / fork mechanism
     - perfect revisioning mechanism
     - basically no server cost, administration and implementation effort
     - updater client just needs the base URL, which even can be a fork (just like we already do with source)
   con:
     - dependency on bitbucket
     - getting revision lists etc might be not as simple as just downloading an URL (is there a JSON interface?)

- light weight: simple upload system with some bash scripts...

functionality level:
- show a list of all modules possible to install
- get description of a module (extracted from .mo?)
   - contains dependencies, revision, hg link to source
   - contains a "provides" field? (like in all package managers to detect if two modules are doing the same thing)
- get revision list and branches of a module
- get module at specific revision / branch

so if we have this repository, we need an updater for the modules on the card.
why do i come up with this idea right now?
see this thread.

i want a mod_mgr.mo which uses the transcend wifi card to update modules automatically or on demand.
as soon as the wifi module in the 6D and 70D is understood well enough, this interface can also be used,
so owners of many camera models can make use of the repository system - even while being on a journey, using their cell phones.


of course we will first have to implement a windows/linux/mac updater (or one of those) which is the reference application and uses disk access.


any comments?
anyone who will check if this is doable using bitbucket?
#20
the transcend wifi cards recently got interesting, after an article about root'ing these devices was published.
(http://haxit.blogspot.com.es/2013/08/hacking-transcend-wifi-sd-cards.html)


internals:


they contain an ARMv5 instruction set ARM926EJ with somewhere around 400 MHz and 32 MiB RAM.
plus an integrated 16 or 32 GiB SD card ;)
the used operating system is linux with busybox and a bunch of reaaaallllyyy hackish shell scripts.
you can place an 'autorun.sh' and it will get executed on startup... as root...

unfortunately the wifi speed is embarrassingly slow - i got 1 MBit/s, which is no fun with .cr2 files.
(no, don't even ask for raw video!)

Magic Lantern - tr_wifi.mo module functionality
- "Enable TrWiFi" / "Disable TrWiFi" - places or removes autorun.sh with magic lantern specific code
- "Mode: DirectShare" / "Mode: Internet" - depending on current mode switch to the other one for either accessing internet or tethering with mobile phone

plus providing these functions to other modules:
- int32_t trwifi_get_file ( char *url, char *dst_file )
   the file at the given URL is downloaded and copied from the linux system to the camera filesystem.
   as we can access the SD card from linux, but this would compete with our DryOS filesystem driver, we have to use files like B:/ML/DATA/TR_UPLNK.DAT and B:/ML/DATA/TR_DNLNK.DAT.
   both camera and linux will access the files without changing anything in the file structure to transfer data between each other.
   possible structure: [payload_size][payload] where the payload initially is a shell script that is executed by autorun.sh
   these shell scripts can use that comm channel for any arbitrary command specific to the script. the camera has to take care of communicating with the right commands.

- char *trwifi_exec ( char *command )
   execute any command on linux side and return its stdout as string
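The proposed [payload_size][payload] framing could look like this on either side of the channel (sketch only: file names, byte order and error handling are assumptions, the real module would have to match whatever the linux side implements):

```c
#include <stdint.h>
#include <stdio.h>

/* Write one framed payload: 4 byte length (host byte order in this sketch),
   followed by the payload bytes. */
static int comm_write(const char *filename, const void *payload, uint32_t size)
{
    FILE *f = fopen(filename, "wb");
    if (!f) return -1;
    fwrite(&size, sizeof(size), 1, f);
    fwrite(payload, 1, size, f);
    fclose(f);
    return 0;
}

/* Read one framed payload back; returns the payload length or -1 on error. */
static int32_t comm_read(const char *filename, void *buf, uint32_t max_size)
{
    FILE *f = fopen(filename, "rb");
    if (!f) return -1;

    uint32_t size = 0;
    if (fread(&size, sizeof(size), 1, f) != 1 || size > max_size)
    {
        fclose(f);
        return -1;
    }

    int32_t read_bytes = (int32_t)fread(buf, 1, size, f);
    fclose(f);
    return read_bytes;
}
```

Both sides would poll "their" file and only act when a complete frame is present.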

i am not sure if it makes sense or would be fun to implement tcp/udp connect/read/write functionality or even PTP functionality (by forwarding to the DryOS PTP handler).
tunneling through these files may be a bit slow and complicated.


constructive feedback?

#21
can someone with SMPTE equipment try this module?

SMPTE output module

i developed it on the 5D3; it is likely that other models with audio support will work too.
as i don't have any equipment and there are no free tools to read SMPTE, i cannot test what it produces.
#22
Hi,

this time i am requesting collaboration to analyze and clean up our task chaos.

When investigating the performance drop of CF writing in photo mode compared to playback mode,
which causes up to 7MiB/s less transfer speed, i recorded a timing trace of all task and ISR activations.
What annoyed me was the endless number of tasks for various more and less important things.

Let me show you a trace (please scroll horizontally using cursor keys in your browser):


I marked all ML tasks in red on the left column.
The horizontal axis is the execution time of course.
A red bar means, this item (task/ISR) is being executed at this time. If the activation is very short, you just see a black bar.

Zooming into two activations of ML tasks:


There you see that the tasks are running very short. Only a few microseconds.



But even this short activation period costs execution time - about 2 * 10 microseconds for switching the tasks.
Sometimes this is totally unnecessary, and we could save CPU execution time, battery power and maybe write performance for raw recording.
Take for example the joypress task: it needs ~15µs execution time plus 10µs context switch time every 20ms, for nothing.
I never press the joystick, so why do i have to sacrifice 0.1% of the execution time?
Sum up all 30 tasks and this is at least 3% that might be unnecessary (yeah, in theory ;) )

One bad thing is that the context switches take longer the more tasks are waiting to get activated.
At the moment some of the "unnecessary" msleep-polling costs 924µs according to the image above.
That's a millisecond causing delay to other tasks that *really* have to process stuff.
Also the CF write rate seems to go down due to those activations.

So can we try to investigate task by task,
a) if we really need that task
b) if it really has to msleep(x) for just polling a variable
c) if the thing the task does can be achieved with timeout-less message queues

This is not a one-day task, but an ongoing process that may take weeks to clean up.
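To make point (c) concrete, here is a toy single-threaded model of the difference: instead of a task waking up every 20ms to poll a variable, it blocks on a FIFO queue and only runs when a message is posted. The real DryOS msg_queue API differs in names and blocking behavior; this only models the FIFO semantics:

```c
#include <stdint.h>

/* Tiny fixed-size FIFO message queue (16 entries). */
typedef struct
{
    uint32_t buf[16];
    int head;   /* next slot to read */
    int count;  /* number of queued messages */
} mqueue_t;

static int mq_post(mqueue_t *q, uint32_t msg)
{
    if (q->count >= 16)
    {
        return -1; /* queue full */
    }
    q->buf[(q->head + q->count) % 16] = msg; /* append at tail */
    q->count++;
    return 0;
}

static int mq_get(mqueue_t *q, uint32_t *msg)
{
    if (q->count == 0)
    {
        return -1; /* the real API would block the task here instead */
    }
    *msg = q->buf[q->head];
    q->head = (q->head + 1) % 16;
    q->count--;
    return 0;
}
```

A task built this way consumes zero CPU time until another task or an ISR actually posts an event, instead of burning a context switch every 20ms.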
#23
Hey.

after alex spent a lot of time finding out how we can squeeze out the last bit of performance while
writing raw video to SD and CF cards, i used the last days to think about how to structure the
raw videos to make post processing easier and the format more extensible.

the result is our next Magic Lantern Video format (.mlv) that i want you to look at.
use it at your own risk.

for users:
mlv_rec: nightly download page.
mlv_dump: most recent nightly download page. (binary for WINDOWS only)

mlv_dump: or here (binaries for WINDOWS, LINUX and OSX)

for developers:
mlv file structures in C: here (LGPL)

preferred: you can export .dng frames from the recorded video using "mlv_dump --dng <in>.mlv -o <prefix>"
legacy mode: post processing is still possible with 'raw2dng' after converting the .mlv into the legacy .raw format using mlv_dump.


for details see the description below.
see the short video i made: http://www.youtube.com/watch?v=A6pug1g-kNs
it shows a bunch of the new (user visible) features of that file format.

mlv_dump
- used for debugging and converting .mlv files
- can dump .mlv to legacy .raw + .wav files
- can dump .mlv to .dng  + .wav
- can compress and decompress frames using LZMA
- convert bit depth (any depth in range from 1 to 16 bits)

compression:
you can get a data reduction of ~60% with 12 bit files.
downconverting to 8 bits gives you about 90% data reduction.
this feature is for archiving your footage.
converting back to e.g. legacy raw doesn't need any parameters - it will decompress and convert transparently.
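Per sample, the bit depth conversion done by -b boils down to a shift (sketch only: the real mlv_dump also adjusts the black/white levels in the metadata accordingly):

```c
#include <stdint.h>

/* Shift one raw sample from src_bits to dst_bits depth.
   Reducing depth drops the lowest bits (lossy); increasing depth
   shifts up and leaves the lowest bits zero. */
static uint16_t depth_convert(uint16_t sample, int src_bits, int dst_bits)
{
    if (src_bits > dst_bits)
    {
        return sample >> (src_bits - dst_bits);
    }
    return (uint16_t)(sample << (dst_bits - src_bits));
}
```

This is also why -z improves compression: zeroed low bits make consecutive samples much more repetitive for LZMA.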

parameters:

-o output_file      set the filename to write into
-v                  verbose output

-- DNG output --
--dng               output frames into separate .dng files. set prefix with -o
--no-cs             no chroma smoothing
--cs2x2             2x2 chroma smoothing
--cs3x3             3x3 chroma smoothing
--cs5x5             5x5 chroma smoothing

-- RAW output --
-r                  output into a legacy raw file for e.g. raw2dng

-- MLV output --
-b bits             convert image data to given bit depth per channel (1-16)
-z bits             zero the lowest bits, so that only the specified number of bits contain data (1-16) (improves compression rate)
-f frames           stop after that number of frames
-x                  build xref file (indexing)
-m                  write only metadata, no audio or video frames
-n                  write no metadata, only audio and video frames
-a                  average all frames in <inputfile> and output a single-frame MLV from it
-s mlv_file         subtract the reference frame in given file from every single frame during processing
-e                  delta-encode frames to improve compression, but lose random access capabilities
-c                  (re-)compress video and audio frames using LZMA (set bpp to 16 to improve compression rate)
-d                  decompress compressed video and audio frames using LZMA
-l level            set compression level from 0=fastest to 9=best compression




examples:

# show mlv content (verbose)
./mlv_dump -v in.mlv

# will dump frames 0 through 123 into a new file
# note that ./mlv_dump --dng -f 0 in.mlv (or ./mlv_dump --dng -f 0-0 in.mlv) will now extract just frame 0 instead of all of the frames.
./mlv_dump -f 123 -o out.mlv in.mlv

# prepare an .idx (XREF) file
./mlv_dump -x in.mlv

# compress input file
./mlv_dump -c -o out.mlv in.mlv

# compress input file with maximum compression level 9
./mlv_dump -c -l 9 -o out.mlv in.mlv

# compress input file with maximum compression level 9 and improved delta encoding
./mlv_dump -c -e -l 9 -o out.mlv in.mlv

# compress input file with maximum compression level 9, improved delta encoding, 16 bit alignment which improves compression and 12 bpp
./mlv_dump -c -e -l 9 -z12 -b16 -o out.mlv in.mlv

# decompress input file
./mlv_dump -d -o out.mlv in.mlv

# convert to 10 bit per pixel
./mlv_dump -b 10 -o out.mlv in.mlv

# convert to 8 bit per pixel and compress
./mlv_dump -c -b 8 -o out.mlv in.mlv

# create legacy raw, decompress and convert to 14 bits if needed
./mlv_dump -r -o out.raw in.mlv



Play MLV Files

MLRawViewer

baldand implemented an amazing video player that uses OpenGL and is able to convert your .raw/.mlv into ProRes directly.
even i use it as my playback tool, so consider it the official player. ;)

see: http://www.magiclantern.fm/forum/index.php?topic=9560.0

MLV_Viewer

see here for a MLV player on windows



in-camera mlv_play:
the module mlv_play.mo is shipped with the pre-built binaries.
it is a plugin for file_man.mo to play .raw and .mlv files in camera.
the discussion thread for this module is there

Drastic Preview:
the guys over at drastic.tv are currently implementing the MLV format and already have a working non-open beta version. (i tried it already and i love it :) )
i am sure they will release a new version within the next few weeks.
http://www.drastic.tv/index.php?option=com_content&view=category&id=42&Itemid=79





some technical facts:
- structured format
- extensible layout
- as a consequence, we can start with the minimal subset (file header, raw info and then video frames)
- multi-file support (4 GiB splitting is enforced)
- spanning support (write to CF and SD in parallel to gain 20MiB/s)
- out-of-order data support (frames are written in some random order, depending on which memory slot is free)
- audio support
- exact clock/frametime support (every frame has the hardware counter value)
- RTC information (time of day etc)
- alignment fields in every frame (alignment can differ from frame to frame)

the benefit for post processing will be:
- files can be easily grouped by processing SW due to UIDs and file header information (autodetect file count and which files belong to each other)
- file contains a lot of shooting information like camera model, S/N and lens info
- lens/focus movement can be tracked (if lens reports)
- exact* frame timing can be determined from hw counter values (*=its accuracy is the limiting factor)
- also frame drops are easy to detect
- hopefully exact audio/video sync, even with frame drops
- unsupported frames can be easily skipped (no need to handle e.g. RTC or LENS frames if the tool doesn't need them)
- specified XREF index format to make seeking easier, even with out-of-order data and spanning writes

why a custom format and not reuse e.g. .mov?
- other formats are good, but none fits our needs
- it is hard to make frames align to sector or EDMAC sizes
- they don't support 14 bit raw bayer patterns out of the box
- even when using a flexible container, nearly all sub blocks would need custom additions
- this means a lot of effort to make the standard libs for those formats compatible
- it's hard to implement our stuff in a clean way without breaking the whole format

that's the reason why i decided to come up with yet another format.
it is minimalistic when desired (especially the first implementation will only use a subset of the frames)
and can be extended step by step - while even the most minimalistic parser/post processing tool
can still process the latest video files where all the stuff is implemented.

if you are a developer (ML or even 3rd party tools) - look it over and make yourself comfortable with the format.
in case there is a bug or something doesn't make sense, please report it.
i would love to get feedback.

here is the link to the spreadsheet that served as a reference when designing the format:
https://docs.google.com/spreadsheet/ccc?key=0AgQ2MOkAZTFHdHJraTVTOEpmNEIwTVlKd0dHVi1ULUE#gid=0

implementer's notes
green = fully implemented
blue = implemented, but not 100%
red = not implemented yet, just defined

[MLVI] (once)
- MLVI block is the first block in every .mlv file
- the MLVI block has no timestamp, it is assumed to have timestamp value 0 if necessary
- the MLVI block contains a GUID field which is a random value generated per video shoot
- using the GUID a tool can detect which partial or spanning files belong together, no matter how they are named
- it is the only block that has a fixed position, all other blocks may follow in random order
- the fileCount field in the header may get set to the total number of chunks in this recording (the current in-camera implementation isn't doing this correctly)
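for reference, the common header shared by all blocks can be sketched like this (a simplified sketch only; mlv.h in the ML source tree is authoritative, and MLVI itself replaces the timestamp with a version string, GUID etc):

```c
#include <assert.h>
#include <stdint.h>

/* common part of every MLV block (sketch):
   4 byte type tag, total size including header, and the ordering timestamp */
typedef struct {
    uint8_t  blockType[4];  /* e.g. "MLVI", "VIDF", "AUDF", "RTCI", ... */
    uint32_t blockSize;     /* total block size, including this header */
    uint64_t timestamp;     /* hardware counter based, used to order blocks */
} mlv_hdr_sketch_t;
```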

[RAWI] (once, event triggered)
- this block is known from the old raw_rec versions
- whenever the video format is set to RAW, this block has to appear
- this block exactly specifies how to parse the raw data
- bit depth may be any value from 1 to 16
- settings apply to all VIDF blocks that come after RAWI's timestamp (this implies that RAWI must come before VIDF - at least the timestamp must be lower)
- settings may change during recording, even resolution may change (this is not planned yet, but be aware of this fact)

[VIDF] (periodic)
- the VIDF block contains encoded video data in any format (H.264, raw, YUV422, ...)
- the format of the data in VIDF blocks has to be determined using MLVI.videoClass
- if the video format requires more information, additional format specific "content information" blocks have to be defined (e.g. RAWI)
- VIDF blocks have a variable sized frameSpace which is meant for optimizing in-memory copy operations for address alignment. it may be set to zero or any other value
- the data right after the header is of the size specified in frameSpace and considered random, unusable data. just ignore it.
- the data right after frameSpace is the video data which fills up the rest until blockSize is reached
- the blockSize of a VIDF is therefore sizeof(mlv_vidf_hdr_t) + frameSpace + video_data which means that a VIDF block is a composition of those three data fields
- if frames were skipped, either a VIDF block with a zero-sized payload may get written or the block may be completely omitted
- the format of the data in VIDF frames may change during recording (e.g. resolution, bit depth etc)
- whenever a new content information block (e.g. RAWI) appears in the timeline, its format has to get parsed and applies to all following blocks
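the size bookkeeping from the bullets above can be sketched in C (field names follow the text; the real mlv_vidf_hdr_t has additional fields such as crop/pan positions, so treat this as an illustration only):

```c
#include <assert.h>
#include <stdint.h>

/* simplified VIDF header - sketch, not the exact on-disk layout */
typedef struct {
    uint8_t  blockType[4];
    uint32_t blockSize;     /* sizeof(header) + frameSpace + video data */
    uint64_t timestamp;
    uint32_t frameNumber;
    uint32_t frameSpace;    /* padding before the video data, for alignment */
} vidf_sketch_t;

/* offset of the video data from the start of the block */
static uint32_t vidf_data_offset(const vidf_sketch_t *hdr)
{
    return sizeof(vidf_sketch_t) + hdr->frameSpace;
}

/* number of video data bytes in this block */
static uint32_t vidf_data_size(const vidf_sketch_t *hdr)
{
    return hdr->blockSize - sizeof(vidf_sketch_t) - hdr->frameSpace;
}
```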

[WAVI] (once, event triggered)
- when the audio format is set to WAV, this block specifies the exact wave audio format

[AUDF] (periodic)
- see [VIDF] block. same applies to audio

[RTCI] (periodic, event triggered)
- contains the current time of day information that can be gathered from the camera
- may appear with any period, maybe every second or more often
- should get written before any VIDF block appears, else post processing tools cannot reliably extract frame times

[LENS] / [EXPO] / ... (periodic, event triggered)
- whenever a change in exposure settings or lens status (ISO, aperture, focal length, focus dist, ...) is detected a new block is inserted
- all video/audio blocks after these blocks should use those parameters

[IDNT] (once)
- contains camera identification data, like serial number and model identifier
- the camera serial number is written as HEX STRING, so you have to convert it to a 64 bit INTEGER before displaying it
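that conversion might look like this (sketch; the length handling is mine - in the file the serial sits in a fixed-size char field that may not be NUL terminated):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* parse the HEX STRING serial from an IDNT block into a 64 bit integer */
static uint64_t idnt_parse_serial(const char *serial, size_t len)
{
    char buf[17] = { 0 };       /* 16 hex digits cover a 64 bit value */

    if(len > 16)
    {
        len = 16;
    }
    memcpy(buf, serial, len);

    return strtoull(buf, NULL, 16);
}
```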

[INFO] (once, event triggered)
- right after this header follows the info string, with length blockSize - sizeof(mlv_info_hdr_t)
- the info string may contain any string entered by the user in format "tag1: value1; tag2: value2"
- tags can for example be strings like take, shot, customer, day etc. and values can be any string

[NULL] (random)
- ignore this block - it's just there to fill some writing buffers and thus may contain valid or invalid data
- timestamp is bogus

[ELVL] (periodic)
- roll and pitch values read from the acceleration sensor are provided with this block

[WBAL] (periodic, event triggered)
- all known information about the current white balance status is provided with this block

[XREF] (once)
- this is the only block not written by the camera, but by processing software after recording
- it contains a list of all blocks that appear, sorted by time
- the XREF block is saved to an additional chunk
- files that only contain a XREF block should get named .idx to clarify their use
- .idx files must contain the same MLVI header as all chunks, but only have the XREF block in them

[MARK]
- on keypresses, like halfshutter or any other button, this block gets written, e.g. for supplying video cutting positions
- the data embedded into this block is the keypress ID you can get from module.h

[VERS] (any number, usually at the beginning)
- a string follows that may get used to identify ML and module versions
- should follow the format "<module> <textual version info>"
- possible content: "mlv_play built 2017-07-02 15:10:43 UTC; commit c8dba97 on 2016-12-18 12:45:34 UTC by g3gg0: mlv_play: add variable bit depth support. mlv_play requires experi..."


possible future blocks:

[BIAS]
[DARK]
[FLAT]
- in-camera black and noise reference pictures can be attached here (dark frame, bias frame, flat frame)
- to be checked if this is useful and doable




[MLV Format]
- the Magic Lantern Video format is a block-based file format
- every piece of information, no matter whether audio data, video data or metadata, is written as a data block with the same basic structure
- this basic structure includes block type information, block size and timestamp (the exception is the file header, which has no timestamp but a version string instead)
- the timestamp field in every block serves a) to determine the logical order of data blocks in the file and b) to calculate the wall-time distance between any of the blocks in the files
- the file format allows multiple files (=chunks) which basically are in the same format with file header and blocks
- chunks are either written sequentially (due to e.g. the 4 GiB file size limitation) or in parallel (spanning over multiple media)
- the first chunk has the extension .mlv, subsequent chunks are numbered .m00, .m01, .m02, ...
- there is no restriction on what may be in which chunk and what not
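a small helper for enumerating chunk names could look like this (my own sketch, not taken from the mlv_dump source):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* build the filename of chunk 'index' from the base .mlv name:
   chunk 0 is the .mlv itself, chunk 1 is .m00, chunk 2 is .m01, ... */
static void mlv_chunk_name(char *out, size_t out_size, const char *mlv_name, int index)
{
    strncpy(out, mlv_name, out_size - 1);
    out[out_size - 1] = '\0';

    if(index > 0)
    {
        char *ext = strrchr(out, '.');
        if(ext)
        {
            snprintf(ext, out_size - (size_t)(ext - out), ".m%02d", index - 1);
        }
    }
}
```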

[processing]
- to accurately process MLV files, first all blocks, with their timestamps and offsets in the source files, should get sorted in memory
- when sorting, the sorted data can be written into a XREF block and saved to an additional chunk
- do not rely on the block order at all, no matter in which order the blocks were written into a file
- the only reliable ordering indicator is the timestamp in the block headers
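building such an index boils down to one sort over all blocks of all chunks; a sketch (the entry layout here is my own simplification, the real XREF layout is defined in mlv.h):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* one index entry per block: when it happened and where it lives */
typedef struct {
    uint64_t timestamp;   /* taken from the block header */
    uint16_t fileNumber;  /* which chunk the block is in */
    uint64_t offset;      /* byte offset within that chunk */
} xref_sketch_t;

static int xref_compare(const void *a, const void *b)
{
    const xref_sketch_t *xa = (const xref_sketch_t *)a;
    const xref_sketch_t *xb = (const xref_sketch_t *)b;

    if(xa->timestamp < xb->timestamp) return -1;
    if(xa->timestamp > xb->timestamp) return 1;
    return 0;
}

/* sort all collected entries into logical (timestamp) order */
static void xref_sort(xref_sketch_t *entries, size_t count)
{
    qsort(entries, count, sizeof(xref_sketch_t), xref_compare);
}
```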
#24
with my last commit, i fixed the IME system to interwork cleanly with menu etc.
(see https://bitbucket.org/hudson/magic-lantern/commits/7225666a08fe3bcb39f1bc5e23d71b31736111e8 )




the function to be called is:

extern void *ime_base_start (char *caption, char *text, int max_length, int codepage, int charset, t_ime_update_cbr update_cbr, t_ime_done_cbr done_cbr, int x, int y, int w, int h );


if a module wants to have text entered by the user, it can now call the ime code like this:


static char text_buffer[100];

IME_UPDATE_FUNC(ime_base_test_update)
{
    //bmp_printf(FONT_MED, 30, 90, "ime_base: CBR: <%s>, %d, %d", text, caret_pos, selection_length);
    return IME_OK;
}

IME_DONE_FUNC(ime_base_test_done)
{
    for(int loops = 0; loops < 50; loops++)
    {
        bmp_printf(FONT_MED, 30, 120, "ime_base: done: <%s>, %d", text, status);
        msleep(100);
    }
    return IME_OK;
}

static MENU_SELECT_FUNC(ime_base_test)
{
    strcpy(text_buffer, "test");
   
    ime_base_start("Enter something:", text_buffer, sizeof(text_buffer), IME_UTF8, IME_CHARSET_ANY, &ime_base_test_update, &ime_base_test_done, 0, 0, 0, 0);
}


the whole thing is running asynchronously. this means you call ime_base_start and that function immediately returns.
it captures all key events and prevents the ML menu to paint.
instead it is showing you a dialog to enter your text.

the specified update CBR (CallBackRoutine) is called periodically with the current string. it should return IME_OK if the string is acceptable.
(as soon as it's implemented fully, you can check if it is a valid string, e.g. an email address, and return a value != IME_OK to grey out the OK button)

when the user selects OK or Cancel, the done CBR is called with the string and the status IME_OK or IME_CANCEL.

the x, y, w, h parameters are planned to specify the location where the caller code prints the text that is passed via update_cbr.
this way the caller code can take care of displaying the text somewhere and the IME just handles the character selection.
but this is not implemented yet.


the code is still very fragile ;)
i planned to support different charsets, but i am not sure yet how to implement them, or if they are necessary at all.
also the way the characters are displayed and the menu is drawn isn't final yet.
i think i should use canon fonts as they look better.
also the "DEL" function cuts the string at the deleted character. that can be fixed easily by using strncpy.

please test that code and improve it where it needs improvement.

Update (17.08.14)
ime_base
ime_rot
ime_std

you can place both ime_std and ime_rot in your module dir, or just one of them - whichever you prefer.
ime_base is always needed in either case

#25
Reverse Engineering / ResLock stuff
June 24, 2013, 11:33:56 PM
i dug a bit into the ResLock stuff and will describe how i think it works.


struct struc_LockEntry
{
  char *name;
  int status;
  int semaphore;
  int some_prev;
  int some_next;
  unsigned int *pResource;
  int resourceEntries;
  void (*cbr)(struct struc_LockEntry *lockEntry, void *cbr_priv);
  void *cbr_priv;
};

struct struc_LockEntry *CreateResLockEntry(uint32_t *resIds, uint32_t resIdCount);
unsigned int LockEngineResources(struct struc_LockEntry *lockEntry);
unsigned int UnLockEngineResources(struct struc_LockEntry *lockEntry);
unsigned int AsyncLockEngineResources(struct struc_LockEntry *lockEntry, void (*cbr)(struct struc_LockEntry *lockEntry, void *cbr_priv), void *cbr_priv);


LockEngineResources:
Lock a previously allocated LockEntry and its associated devices

UnLockEngineResources:
Unlock a previously allocated LockEntry and its associated devices

CreateResLockEntry:
registers a lock that uses semaphores to lock all the resources specified in a list.
resIds[] is the list of resources to be locked;
the number of entries in this list is passed as the second parameter.
the initial state of the lock is unlocked.

resId format:
resId = (block << 16) | (entry)

entry specifies the exact "device" in the given block, if any.
blocks are one of those:
0x00 = EDMAC[0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x20, 0x21]
0x01 = EDMAC[0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x28, 0x29, 0x2A, 0x2B]
0x04 = HEAD
0x36 = encdrwrap
0x37 (max)
(...to be continued)

e.g. resId 0x1000C is block 0x01 and entry 0x0C. This is EDMAC 0x28 being locked whenever LockEngineResources is called with the LockEntry.
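assuming the entry value is an index into the channel lists above (which matches the 0x1000C example), the mapping can be sketched like this:

```c
#include <assert.h>
#include <stdint.h>

/* resId = (block << 16) | entry */
static uint32_t reslock_make_id(uint32_t block, uint32_t entry)
{
    return (block << 16) | entry;
}

/* EDMAC channels behind blocks 0x00 and 0x01, copied from the lists above */
static const uint8_t edmac_block0[16] = {
    0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x10,
    0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x20, 0x21
};
static const uint8_t edmac_block1[16] = {
    0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x18, 0x19,
    0x1A, 0x1B, 0x1C, 0x1D, 0x28, 0x29, 0x2A, 0x2B
};

/* return the EDMAC channel a resId maps to, or -1 for non-EDMAC resources */
static int reslock_edmac_channel(uint32_t resId)
{
    uint32_t block = resId >> 16;
    uint32_t entry = resId & 0xFFFF;

    if(entry > 0x0F)
    {
        return -1;
    }
    switch(block)
    {
        case 0x00: return edmac_block0[entry];
        case 0x01: return edmac_block1[entry];
        default:   return -1;
    }
}
```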
#26
Please ask questions about how to compile or install the raw_rec module here.

Keywords: raw_rec, 14-bit raw, dng, video, module, installation


All questions regarding RAW VIDEO POST PROCESSING on any OS
HERE: http://www.magiclantern.fm/forum/index.php?topic=5404.0

All questions regarding RAW_REC MODULE COMPILATION/INSTALLATION/USAGE
HERE
#27
Please ask questions about how to post process RAW-Video here.

Keywords: raw2dng, 14-bit raw, dng, video

All questions regarding RAW VIDEO POST PROCESSING on any OS
HERE

All questions regarding RAW_REC MODULE COMPILATION/INSTALLATION/USAGE
HERE: http://www.magiclantern.fm/forum/index.php?topic=5405.0




raw2dng (reference implementation, command-line tool)

Source code: raw2dng.c and chdk-dng.c
Windows executable: raw2dng.exe (drag and drop the .raw file over the executable)
Mac executable: raw2dngOSX.zip (run in terminal, don't forget chmod +x)
Windows executable for cameras with pink dots (650D, 700D, EOS-M): raw2dng_cs2x2.exe (it does some chroma smoothing, which happens to remove the pink dots too; use it when PinkDotRemover doesn't work)






GUI application for MAC (scrax):
Thread
Just drag a raw file onto it and it will convert it to dng and save the files in a subfolder inside the .raw file's folder





Linux scripts
Bash script for conversion to mjpeg (needs ffmpeg and ufraw-batch): raw2avi.sh
#28
UPDATE:
Initially this thread was about my lv_rec module that allowed recording YUV422 and RAW video on an experimental basis.
In this module I found out a lot about the EDMAC (DMA controller) and wrote it down in our wiki. (http://magiclantern.wikia.com/wiki/Register_Map)
Since then we were able to use this high speed engine to copy portions of the image into our own buffers.

Meanwhile alex refactored all the code and optimized buffering, so that we are able to record 14 bit raw bayer data.
the result is a module named 'raw_rec' which he highly optimized to get the maximum out of our beloved canon cameras.

Since then we are constantly trying to improve the usability.
Our primary target is the 5D Mark III, but the devs are porting it to other models as you can see (thanks 1%, coutts, nanomad)

Yet this code is EXPERIMENTAL. It may cause random failures, ranging from data loss to a crashing camera.
As you know, ML is very stable, but sometimes code at this early stage causes unforeseen problems.
Prepare yourself for that before you go shooting. (a backup CF card, an ML-free SD card)

Key ingredients:
- canon has an internal buffer that contains the RAW data
- we understand the high speed DMA controller "EDMAC" a lot better now and know how to crop areas out of an image
- we know how to get the maximum rate out of the CF card and thus achieve up to 90MiB/s
- we provided a reference tool that converts the Magic Lantern .RAW movie into single .DNG frames plus a MJPEG script

All together sums up to the most advanced 14-bit RAW recording system people can get for less than 3 kEUR.
We will prepare a full article as soon as we see this code being stable enough for public testing.


Again. This is EXPERIMENTAL, so:
- bloody beginners and non-geeks should not touch the whole thing. wait until it is "beginner-proof". we will tell you on the website.
- you know that you are a bloody beginner when you have read the whole thread and still cannot get it to work.
- DON'T be disappointed if it doesn't work or we figure out the whole thing is unstable and/or unusable
- NO, there is no manual yet
- NO, there is no all-in-one tool that fits every use case
- NO, we don't have tutorials how to use it
- NO, not all models are supported yet ;)
- right now we are testing how well it works, what we have missed and what to improve
- you are welcome to post comparisons, experiences (both good and bad), or even deep analysis or just cool videos
- if you are a programmer and you see potential for improvements, grab the source and support :)

As always we want to remind you of these things:
- Magic Lantern Team will not be responsible or liable for any kind of direct or indirect damage to your camera. (nothing new anyway)
- The software provided for download is not related to Canon in any way
- Do not contact Canon about issues related with this software
- Do not blame Canon for not implementing this feature! Why? This feature is not stable enough that a company like Canon would ever release it.
- Prepare for footage loss due to frame drops, tinted frames, corrupted frames etc.
- THIS IS EXPERIMENTAL. Deal with it. Don't blame anyone.

About sensor heating rumors:
The only thing that could get warmer is the DIGiC and the CF circuitry, but i am sure that the power dissipation that reaches the sensor
through all that plastic housing will not cause any noticeable temperature rise.

in detail: when doing that many DMA transfers and that much CF writing, we may cause a bit more current drain (and power dissipation grows with the square of the current),
but we do not encode any H.264 while recording, so we use less power there.
it's *possible* that the CF writing will consume less energy than encoding H.264, which would result in *less* power consumption.
raw is being produced by the DIGiC for every single frame anyway. we "just" save it away.

still this is a *theory*, but i expect the consumption and the temperatures not to rise at all.




old post:
Currently i am working on a module that records YUV422 data to card.
This code will only work when compiled from repository (there is no release yet)

5D3: can record 1904x1274 @ 12.5 fps

here some example video:
https://docs.google.com/file/d/0BwQ2MOkAZTFHdU1tR1pITXFVVXM/edit?usp=sharing
(not sure how to make it look better and not take 600MiB)

here some sample images:
https://docs.google.com/file/d/0BwQ2MOkAZTFHdFFsV1BGU0Nmd2s/edit?usp=sharing

there are three major options
- Frame skipping: record every n-th frame. choose 2 on 5D3 in 25 fps mode to record with 12.5 fps *continuously*
- Single file: save some processing time by writing a single file. you have to split it later on your computer. (maybe the 422 converters will support this someday?)
- RAW mode: not working yet, just saving gibberish ;)

right now the module is not user-friendly. press start and it will record 2000 frames.
it will abort if the buffers are exhausted.
you can also abort by removing the battery ;)

All questions regarding RAW VIDEO POST PROCESSING on any OS
HERE: http://www.magiclantern.fm/forum/index.php?topic=5404.0

All questions regarding RAW_REC MODULE COMPILATION/INSTALLATION/USAGE
HERE: http://www.magiclantern.fm/forum/index.php?topic=5405.0

And this thread here should from now on just be used for video results, and open discussion.
NOT for any installation or processing help.
#29
General Development / placing ML into shoot memory
April 06, 2013, 01:43:18 PM
i decided to start a thread about this topic.

in canon's firmware we have three options how to allocate memory and where to place data.

1) malloc
2) AllocateMemory (MemoryManager)
3) AllocateMemoryResource (RscMgr, Srm)

1) malloc
for the first one, malloc, i am not sure where the memory is located or where it gets initialized. maybe alex can give a hint.


2) AllocateMemory
the second, AllocateMemory is a memory pool between 0x3D0000 and 0xD00000 on 7D and a few others.
its structures contain a reference to the string "MemoryManager".
it's initialized on startup right after "K250S READY" is written to debug port.
> AllocateMemory_Init(&off_3D0000, 0xD00000)
the structure for every memory block is 0x0C bytes big; it starts with a pointer to the next block, then a pointer to the previous block.

Magic Lantern was either placed in malloc or in AllocateMemory region by patching the end address for initialization calls


3) AllocateMemoryResource
the new method that was tested on 60D and 7D in the last days is based on shooting memory where images get buffered when shooting until they are saved to card.
the manager that handles all requests is called RscMgr (ResourceManager) and closely tied to Srm (StorageManager?)

this memory is usually fragmented, which means you cannot simply allocate 1 MiB in one piece.
well, this might succeed, but you are likely to get a list of memory blocks that are in sum your requested size.
the good thing is, you can allocate up to 250 MiB depending on your camera.

it is initialized like that:
>    v3 = SRM_Initialize(0x13, 0x40D00000u, 0x1F300000, (int)StartupSequencer_NotifyComplete, 0x20000, startupCacheFreeCallback, 0);
>    if ( v3 )
>        DryosDebugMsg(0x8B, 6, (char *)&"SRM_Initialize (%#x)", v3);

the value 0x40D00000 tells the start address (0x00D00000, accessed uncached) and 0x1F300000 its length (the rest of the memory).
so this memory region starts right after the AllocateMemory region.
some will notice that the LV buffers etc are also within that memory range. right. i am not sure yet how RscMgr manages
the memory chunks and which "application" gets which buffer.
it is likely that the RscMgr has some addresses like the LV buffers hardcoded.
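the 0x40000000 bit in that start address selects the uncacheable mirror of the RAM; ML uses macros along these lines (a sketch following ML's convention, simplified to 32 bit addresses):

```c
#include <assert.h>
#include <stdint.h>

/* the physical RAM appears twice in the address space:
   with bit 0x40000000 set, accesses bypass the data cache */
#define UNCACHEABLE(x) ((uint32_t)(x) | 0x40000000)
#define CACHEABLE(x)   ((uint32_t)(x) & ~0x40000000)
```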

as we now can patch the end of the RscMgr memory pool, simply by replacing 0x1F300000 with 0x1F200000, we
get some memory (2 MiB) at 0x1F200000 that remains unused.

what has to be checked is whether the LV buffers or some other memory users that RscMgr handles
use hardcoded addresses, and whether we cause trouble for them this way.
so far it looks stable. :)

#30
General Development / Flexible info screen
January 10, 2013, 12:34:19 AM
i added a new file lately: flexinfo.c

background:
the recent updates to all models' photo info screens are very good and i like the changes.
unfortunately, the more models get updated, the more complex and hackish the code gets.

before it gets a real mess, i decided to set up a flexible info screen that separates the code from model-specific settings and positions.
we now can also configure the positions via menu (for developers or power users) and add routines to load/save the setup into ini files.
stuff only interesting for real power users, like CF/SD card names, copyright strings etc etc will go into code, but not be printed or added to menu.
users can later load their power-user screen config and make ML fit their needs.


how does it work:
at the moment the function info_print_screen() is called from display_shooting_info().
it uses the configuration in info_config[] that is model specific.
the configuration is an array of "elements" that will be printed on screen at given coordinates.

it looks like this:

    /* print ISO range */
    { .string = { { INFO_TYPE_STRING, { ISO_RANGE_POS_X, ISO_RANGE_POS_Y, 2 }}, INFO_STRING_ISO_MINMAX, COLOR_YELLOW, INFO_COL_FIELD, INFO_FONT_MEDIUM } },

    /* entry 2 and 3, WB strings */
    { .string = { { INFO_TYPE_STRING, { WBS_POS_X, WBS_POS_Y, 2 }}, INFO_STRING_WBS_BA, COLOR_YELLOW, INFO_COL_FIELD, INFO_FONT_LARGE } },
    { .string = { { INFO_TYPE_STRING, { WBS_POS_X + 40, WBS_POS_Y, 2 }}, INFO_STRING_WBS_GM, COLOR_YELLOW, INFO_COL_FIELD, INFO_FONT_LARGE } },
   
    /* entry 4, battery_icon referenced as anchor */
    { .battery_icon = { { INFO_TYPE_BATTERY_ICON, { DISPLAY_BATTERY_POS_X, DISPLAY_BATTERY_POS_Y, 2 }}, DISPLAY_BATTERY_LEVEL_2, DISPLAY_BATTERY_LEVEL_1 } },
    { .battery_perf = { { INFO_TYPE_BATTERY_PERF, { -14, 0, 3, INFO_ANCHOR_LEFT | INFO_ANCHOR_TOP, 4 }}, /* 0=vert,1=horizontal */ 0, /* x size */ 12, /* y size */ 12 } },
    { .string = { { INFO_TYPE_STRING, { 0, 2, 2, INFO_ANCHOR_HCENTER | INFO_ANCHOR_BOTTOM, 4, INFO_ANCHOR_HCENTER | INFO_ANCHOR_TOP }}, INFO_STRING_BATTERY_PCT, COLOR_YELLOW, INFO_COL_FIELD, INFO_FONT_LARGE } },
    { .string = { { INFO_TYPE_STRING, { 0, 0, 2, INFO_ANCHOR_RIGHT | INFO_ANCHOR_TOP, 4 }}, INFO_STRING_BATTERY_ID, COLOR_YELLOW, INFO_COL_FIELD, INFO_FONT_LARGE } },

    /* entry 8, MLU string */
    { .string = { { INFO_TYPE_STRING, { MLU_STATUS_POS_X, MLU_STATUS_POS_Y, 2 }}, INFO_STRING_MLU, COLOR_YELLOW, INFO_COL_FIELD, INFO_FONT_MEDIUM } },
   
    /* entry 9, kelvin */
    { .string = { { INFO_TYPE_STRING, { WB_KELVIN_POS_X, WB_KELVIN_POS_Y, 2 }}, INFO_STRING_KELVIN, COLOR_YELLOW, INFO_COL_FIELD, INFO_FONT_MEDIUM_SHADOW } },
   
    /* entry 10, pictures */
    { .fill = { { INFO_TYPE_FILL, { 540, 390, 1, 0, 0, 0, 150, 60 }}, INFO_COL_FIELD } },
    { .string = { { INFO_TYPE_STRING, { 550, 402, 2 }}, INFO_STRING_PICTURES_4, COLOR_FG_NONLV, INFO_COL_FIELD, INFO_FONT_CANON } },


lets look closer at the first entry:

/* print ISO range */
{
    /* we are defining a new string to be printed */
    .string =
    {
        {
            /* it must be of the type STRING and match the .string initializer above */
            INFO_TYPE_STRING,
            /* print it at X, Y and Z. Z is the layer - the higher the number, the later it gets drawn and overwrites other items (like fills or other strings) */
            { ISO_RANGE_POS_X, ISO_RANGE_POS_Y, 2 }
        },
        /* print the ISO_MINMAX string there. we have dozens of other strings, see the header */
        INFO_STRING_ISO_MINMAX,
        /* foreground color */
        COLOR_YELLOW,
        /* background color is "FIELD" or "BG" or any other COLOR_ define */
        INFO_COL_FIELD,
        /* medium font size */
        INFO_FONT_MEDIUM
    }
},


there is not just ".string", but some other interface items that can be drawn.
this is the current list of defined elements:

.string / INFO_TYPE_STRING
print some string like ISO, Kelvin, WB, picture count, time, date, etc.
if the element is not available (e.g. the "MLU" string while MLU is disabled), the item will *not* get drawn.

.fill / INFO_TYPE_FILL
fill some area with specified color. useful for clearing some canon strings or symbols.
use the lowest Z values for such things and print the strings with higher Z values over it.

.battery_icon / INFO_TYPE_BATTERY_ICON
the battery icon Pelican made with some little changes to make it a bit more flexible.
in the initializer you can specify the red/yellow pct values.

.battery_perf / INFO_TYPE_BATTERY_PERF
also from Pelican the battery performance dots that tell you the battery health.
it is also a bit more flexible and can get configured to be printed horizontal/vertical and with custom dot sizes.

.icon / INFO_TYPE_ICON
not implemented yet, but why not put some code to paint icons from a file on screen.


(to be continued)
#31
Archived porting threads / Magic Lantern for 7D alpha 2
December 23, 2012, 11:30:14 PM
For raw video and autoboot, check this thread.

Merry christmas!


Its time to release the second alpha version of the 7D port.

We've enabled these features since alpha 1:
* Advanced Bracketing (HDR)
* Intervalometer
* Audio tags
* Bit Rate manipulation
* [EXPERIMENTAL] Modify card flush rate for higher bit rates
* [EXPERIMENTAL] Modify GOP size (down to ALL-I or up to 100 for better (?) details)
* a lot of minor fixes

Key Features:
* Audio meters while recording
* Zebras
* Focus peaking
* Magic Zooom (via half-shutter, or focus ring)
* Cropmarks, Ghost image
* Spotmeter
* False color
* Histogram, Waveform
* Vectorscope
* Movie logging
* Movie auto stop
* Trap focus
* LiveView settings (brightness, contrast...)
* Level indicator
* Image review tweaks (quick zoom)
* and some debug functions

But:
* If anything goes wrong, we don't pay for repairs. Use Magic Lantern at your own risk!

Known issues:
* You have to reload Magic Lantern every time you use it. (this is intentional)
* when using HDMI output, frame drops may happen (to be verified)
* make sure your battery/adaptor is chipped, else canon menu will abort "firmware update" (= loading ML)
* movie restart and video effects menus visible but not working

Installation
1) Update camera firmware to 2.0.3
2) Format your CF card from the camera
3) Extract contents of ML .zip into your card's root folder
4) Run "firmware upgrade" once again
5) Voilà. Magic Lantern. (press DELETE for menu)

Thanks to all who helped us with donations and bug reports.
We finally received a few IDA licenses and can improve Magic Lantern a lot now!


Main article:
Click here to read this article!

Download:
http://upload.g3gg0.de/pub_files/17248a00956f1e932457094756b2a3ba/magiclantern_7D_203_Alpha2.zip
Alternative: https://bitbucket.org/hudson/magic-lantern/downloads/magiclantern.7D.203.Alpha2.zip
#32
Okay..

As some guys asked for thorough tests of what the high bitrates reported in this thread really look like,
i decided to post an EXPERIMENTAL release of the 7D bitrate hack.
i repeat. EXPERIMENTAL.

what does that mean?
1) it is only intended for people with decent technical knowledge, because...
2) ... you have to be aware of the possible negative side effects like overheating ....
3) ... or some unrecoverable crash of your camera.
4) and it would only make sense to test if you know what bitrate means :)
5) although i don't think this will happen - the risk for your camera is higher than with the alpha
6) the usual "be warned" stuff i always tell you still applies ;)

okay. i have to say that, you know. :)

back to the bitrate hack.


what is it about?
with ML we can drive the bitrate up to factor 3.0x - then, depending on your card speed, recording stops.
this is because the recording buffers fill too fast.
these buffers are cleared about ONCE per SECOND - if you set 25 fps, they will get written to card after the 25th frame.
if we set e.g. a 10x rate, the buffer would be full after half a second or so.
that's the reason why recording stops.

what does this hack do?
it flushes the buffers more often. this is configurable.
i pre-set it to 4 frames which works quite well with my 30MiB/s CF card and a rate of < 9.0x.
on 7D this also requires some cache hacks in master firmware.
porting it to other models is a lot simpler. (imho)

what do you want to test?
a) test how far your card can go. set the bit rate higher and test high-detail scenes. report your highest stable bitrate and your card type (with benchmark speed).
b) check if the hack is worth the effort. is the video quality good? or would we be better off sticking with 3.0x, making this hack useless?

known issues:
- ERR70 may happen if your card is too slow and/or the flush rate is too low
- important: disable sound recording, else you will get an ERR70 too
- it seems to be VBR rather than CBR, although it says CBR


i really would love to see some deep analysis of the videos and your conclusion - is it worth implementing in all models?
or should i just drop this code and stick to other things?

here is the DL link for this experimental version.
#33
Archived porting threads / First 7D alpha released!
October 12, 2012, 10:36:53 PM





Finally, the first Magic Lantern release for the 7D is here!

It is still an early alpha version, so here are a few things you should know:

* it was primarily tested on one 7D, and a few days on three other 7D's;
* during those tests we took 1000 photos and gigabytes of videos;
* there were no crashes or strange behaviors during our tests;
* this release will not alter any data in your camera's permanent memory;
* this release will not directly alter any so-called "properties" (persistent camera settings);
* this means, some functions like HDR photos, HDR videos, bulb ramping etc will not work yet;
* it is not a firmware upgrade, despite the camera saying "Firmware update program";
* we have disabled all features that are not yet working perfectly;
* please don't beg for adding feature XYZ, it will be added as soon as it works without issues.

But:
* If anything goes wrong, we don't pay for repairs. Use Magic Lantern at your own risk!

Key Features:

* Audio meters while recording
* Zebras
* Focus peaking
* Magic Zoom (via half-shutter, or focus ring)
* Cropmarks, Ghost image
* Spotmeter
* False color
* Histogram, Waveform
* Vectorscope
* Movie logging
* Movie auto stop
* Trap focus
* LiveView settings (brightness, contrast...)
* Level indicator
* Image review tweaks (quick zoom)
* and some debug functions

Known issues:
* When using trap focus, opening the card door won't shut down the camera. Simply power off using the power switch.
* Formatting the card will also remove Magic Lantern files.
* You have to reload Magic Lantern every time you use it.
* video frame rates in LV are displayed too high (exactly 1.2x)
* make sure your battery is chipped, else canon firmware will abort "firmware update" (=loading ML)

Installation
1) Update camera firmware to 2.0.3
2) Format your CF card from the camera
3) Extract contents of ML .zip into your card's root folder
4) Run "firmware upgrade" once again
5) Voilà. Magic Lantern. (press DELETE for menu)

Technical Details
Why did it take so long to get Magic Lantern running on the 7D?

This is a long story. The workings of single-DIGiC cameras are already well understood. We know how to forge FIRs and we can execute code using this method. Our code gets executed without any interruption to the camera's proper function; we can hook into startup code and simply restart the camera, or update the bootflag needed for execution of autoexec.bin. The same applies to autoexec.bin if the bootflag is enabled.

But not so on the Dual-DIGiC 7D cameras.

One DIGiC is called "Master" and the other "Slave." All ML related stuff like GUI, LV etc is running on the Slave. The Master handles focusing, lens communication and some other related technical stuff. So there are two processors that both load the (forged) firmware update program which contains Magic Lantern. But we could not simply reboot the Slave into the normal firmware while the firmware update loader was executing. With some tricks, like patching the original firmware updater, it was possible to enable the bootflag for autoexec.bin. But even running Magic Lantern from autoexec.bin failed silently. This was the point where our first investigation started stuttering.

After some deeper investigation with new methods - let's call it "virtual flash patching", manually patching the processor's cache content - we found out that the Master is still running and waits for the Slave to send synchronization signals. If they don't arrive, the Master disables the Slave, where our code runs. From there it was a job of just two weeks to find out what to do and make Magic Lantern start up cleanly, and then another two weeks of updating all defines, macros and constants to get the important features running smoothly.

This alpha is a snapshot of what is working reliably enough to begin testing it widely.

Who was involved in developing Magic Lantern for the 7D?
Definitely everyone! All the features in Magic Lantern came from the many developers contributing to ML.

Beyond the usual suspects, though, there are two key players for the 7D port: Hudson and Indy. They spent an extraordinary amount of time getting the bootflag enabled and building .FIRs that would run perfectly. Without their hard work, there would definitely be no 7D version.

What is still missing?
We can run Magic Lantern from autoexec.bin, but we still cannot reliably enable the bootflag to execute it. This means virgin cameras will only be able to run the .FIR version of ML for now. We know ways to enable the bootflag, but they would involve copyright issues. And that's something we want to avoid.

Also missing is the FPS override feature. We are not sure if it will be possible here the way it has been on other models.

All HDR features, bulb ramping, and other features that require "properties" could possibly work as soon as we enable them. For now we will keep the risk at a minimum and slowly test feature after feature - your feedback is important at this stage.


Troubleshooting
After starting the firmware upgrade, if there is only a black screen, but auto-focus works:
* reinstall the firmware v2.0.3 from the links on the right;
* make sure the 7D000203.FIR checksum is correct.
  7D000203.FIR checksum
    SHA-1: 613439A489A46D2691FB54F0DB22232F17E2AA8E
    MD-5: 29AF55CF2B404D2A60220BC9CC579EFD
    WinMD5: www.winmd5.com

The camera shows no Magic Lantern, but the standard firmware:
* reinstall firmware again, format card and copy ML files again.

What is next?
We have to better understand what exactly the Master and Slave are doing. Which one processes MPEG data, which one compresses JPEGs, and what could Magic Lantern achieve by understanding this relationship? As lens communication seems to be handled by the Master - maybe we can change the lens protocol so you can use lenses that have known bugs, or even lenses with a totally different protocol? Maybe we can read the level sensor at higher rates and embed that data into images for automatic leveling? Or even embed it into videos?

But first we have to analyze the firmware. And for this we would love to buy a copy of IDA Pro with Hex-Rays Decompiler for ARM for our developers, and we cannot afford it easily without your help.

We gave our best, our time, and considerable knowledge. So please be kind and support our work!
Magic Lantern is a community effort, and you are now part of that community!

Click here to read this article!

Download
Magic Lantern for 7D alpha 1
Canon firmware v2.0.3
#34
I promised to have a look at how to add a GDB stub into magic lantern.
looked quite simple to do, so i implemented it.


as it is quite a big thing, please understand that i cannot publish a full guide to this tool.
also it is (very) far from perfect.
see it as a tool that helps you break into a specific function, get the memory/stack/register contents and
*eventually* continue execution.
the latter won't work reliably on the camera, as it is a complex realtime system.
we cannot simply halt some tasks without side effects like ERR70 or even a total lockup.

but i was able to test some basic breakpointing, changing registers and continuing in some test task.
in canon tasks it did sometimes lock up everything.

the way i implemented it does not perfectly match what gdb frontends expect.
that means we cannot simply set a BP wherever we want (no interrupts!) and wait for some task(s) to be stalled.
i intentionally decided not to do it the way frontends expect, as i had some special goals for that code.
i want it to be a swiss army knife for e.g. hooking code anywhere in ROM for testing purposes, capturing registers etc.

nevertheless it may help with inspecting "what are the parameters to this function?"
you can also add "watchpoints" - breakpoints that trigger once and save the registers.
this feature is not yet available via the GDB interface, so you must call the functions manually using
   gdb_add_bkpt(uint32_t address, GDB_BKPT_FLAG_WATCHPOINT | GDB_BKPT_FLAG_ARMED)
and then read the captured registers from gdb_breakpoints[pos].ctx
but you can very simply extend gdb.c to allow setting such watchpoints from your favorite GDB frontend.


be warned - you will have to repower your camera very often :)

here it is:
magic lantern code: http://upload.g3gg0.de/pub_files/8e155ddcc88ee690cd07b6c2da365807/gdbstub.zip
ptpcam: use the one inside the contrib folder of the repository (as of 2012-10-16)
(Merged) updated tasks.h: http://upload.g3gg0.de/pub_files/4015304003c3c336e66f651e1418439e/tasks.h
(Merged) ptpcam patch: http://upload.g3gg0.de/pub_files/92879a741f5b8863da832ca8fe9327db/gdb.patch

how it works:
* it adds two new PTP messages PTP_CHDK_GDBStub_Download and PTP_CHDK_GDBStub_Upload (defined for 600D)
* "void gdb_setup()" starts the processing task that handles GDB serial commands - call it from e.g. run_test
* ptpcam has 3 new commands "gdb s" to send gdb serial commands, "gdb r" to receive the response and "gdbproxy" to forward gdb commands between a network socket and the camera (for remote debugging using e.g. IDA)
* breakpoints are set using cache hacking - i place an undefined instruction that raises an UNDEFINED interrupt
* in the UNDEFINED interrupt handler, i stall the running process using a continuous msleep(1) and *store* the process context
* when that process is resumed, i use another UNDEF instruction and the handler to *restore* the process context where it was stalled

important notes:
* it is not possible to continue a process as long as the breakpoint is still active. deactivate the BP, add another one behind the current PC, continue, and then set the first one again
* there is no "single step" functionality - the camera will do nothing
* some tools (like IDA) update newly set breakpoints when e.g. "single step"ing and they fetch the current registers - so consider "single step" as some kind of "refresh" or "sync"
* do not (!) set breakpoints in interrupts. that won't work.
* if you just continue execution in e.g. IDA without setting any breakpoint, IDA will wait. and wait. and wait. ... until you kill the network connection by exiting ptpcam or killing IDA (this break could also be done with a menu in ML to break the wait in gdb.c:1159)
* this might also happen if the "continue" command failed for some reason (i warned you that it won't work reliably ;) )

how to debug:
* call "gdb_setup()"
* start "ptpcam --chdk"
* enter "gdbproxy"
* connect to localhost:23946 using your favorite debugger
* you will see all the registers 0x0000..
* set a breakpoint in the function you want to debug
* "sync" using single step to activate the breakpoint
* "sync" again to get current process registers - they will change if breakpoint is reached
* you must now disable the triggered breakpoint and set a new one where you want to stop again
* "continue" execution until BP is reached
* some frontends like IDA allow "step over" commands that automatically set a breakpoint after the current instruction - this is supported of course

example code that displays all breakpoints and registers on-screen:


    uint32_t loops = 0;

    while(1)
    {
        uint32_t line = 0;
        uint32_t bp = 0;
       
        bmp_printf(FONT_MED, 0, line++ * 20, "exc %08X, l 0x%08X", gdb_exceptions_handled, loops++);
       
        for(bp = 0; bp < GDB_BKPT_COUNT; bp++)
        {
            if(gdb_breakpoints[bp].flags & GDB_BKPT_FLAG_ENABLED)
            {
                uint32_t reg = 0;
           
                bmp_printf(FONT_MED, 0, line++ * 20, "BP#%d 0x%08X flags 0x%1X hits %d", bp, gdb_breakpoints[bp].address, gdb_breakpoints[bp].flags, gdb_breakpoints[bp].hitcount);
                for(reg = 0; reg < 15; reg+=2)
                {
                    bmp_printf(FONT_MED, 0, line++ * 20, "R%02d %08X R%02d 0x%08X", reg, gdb_breakpoints[bp].ctx[reg], reg+1, gdb_breakpoints[bp].ctx[reg+1]);
                }
                bmp_printf(FONT_MED, 0, line++ * 20, "CPSR %08X", gdb_breakpoints[bp].ctx[16]);
            }
        }       
        msleep(100);
    }


feel free to improve that tool. maybe it will get good enough to make it into the mainline tree someday :)
#35
Reverse Engineering / ARM + EOS Emulator
September 24, 2012, 12:08:57 AM
well, i think i can make it public.

1. ready-to-run package

Quote
i added GDB stubs to my emulator.
what does this mean?
you can use IDA Pro to connect to the emulator and step through code using breakpoints, dumping memory etc.
check the contents of the main routine at line 938 and adapt them to your firmware.

1. start TriX
2. select your firmware image as input file
3. click on "Scripts" tab ("General", "Scripts", "Editor")
4. click the lens at the bottom, right of "Script" and the textbox
5. choose "armulate_shell_eos.trx"
6. click "Add" button at bottom
7. click "Start" in the top toolbar
8. a few register/disassembly windows pop up
9. arrange them that you see every window
10. in the main dialog again where it asks you "Your choice", below is a text box. enter the number "16" and press enter


then in IDA just connect to localhost, port 23946 using gdb as debugger interface.

before connecting: in "Debugger Setup", "Set specific options" you should set "Max packet size" to 512,
and in the same window under "Memory map" you have to insert (right-click into the list) a new memory segment which
starts at 0x000000 and ends at 0xFFFFFFFE, base 0, 32 bit, read only. delete the old one, if one is defined.

enjoy :)

http://upload.g3gg0.de/pub_files/0e7cc977a512c2168003a4ceb0e82932/TriX_EOS.7z

2. do-it-all-yourself repository

1. get a SVN client (e.g. TortoiseSVN)
2. checkout http://svn.g3gg0.de/svn/default/trunk/nokia/TriX/  (user: trix, pass: trix)
3. get Visual Studio 2008 (v9.0)
4. get Qt SDK (e.g. i have v4.5.1) and build/install *
5. set environment variable QTDIR to your Qt-Dir (that contains bin, lib, include, tools, ...) *
6. open \platform\msvc\TriX.sln
7. rebuild all

* = if you cannot get the project "TriX" compiling because of Qt issues, but the plugins TriX_DisARM, TriX_ARMulate and TriX_HWemuEOS build fine, that is also okay.
the most important stuff for emulating canon firmware is in HWemuEOS anyway.
#36
ARM System Developer's Guide
Designing and Optimizing System Software

http://sundyn6410.googlecode.com/files/ARM%20System%20Developer%27s%20Guide.pdf

it's really worth reading :)