Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - a1ex

Academic Corner / Magic Lantern usage in academia
January 07, 2021, 05:48:05 PM
While preparing the application to Software Freedom Conservancy (see Applying for fiscal hosting for context), I was a bit surprised to find quite a few academic works mentioning Magic Lantern. I already knew about 2 papers mentioning Dual ISO, but apparently our software can be useful beyond the typical camera usage.

As such, today I've moved previous discussions and experiments related to scientific papers into this newly created Academic Corner - hoping something good might come out of this eventually.

I know, I know, too little, too late... ;)

There were some recent reports from Mac users who couldn't install ML on exFAT cards when using the latest version of macOS. The first report I've received was from a 60D user who just upgraded from Catalina to Big Sur. Some other reports followed shortly, so I've decided to take a closer look. Tests were done on macOS Big Sur 11.0.1.

Affected models:
- 550D, 60D, 600D, 700D, 100D, EOS M, 5D3 (both SD and CF), 200D, M50 (tested in QEMU).
- 500D does not support exFAT, so this problem does not apply here.
- Likely all other DIGIC 4..8 models that support exFAT (SD or CF, doesn't matter).
- DIGIC X models: not tested, but expecting them to behave just like DIGIC 8.
- PowerShot cameras: not tested.
- non-Canon hardware: not tested, but anything is possible :)

Steps to reproduce on real hardware:
1) Take a 64GB SD card (or larger)
2) Format it in the camera (550D or newer) => it will be formatted as exFAT.
3) Unzip ML files on the card from macOS Big Sur (which will create autoexec.bin, ML-SETUP.FIR and the ML directory)
4a) Attempt to install ML by running Firmware Update with ML-SETUP.FIR:

Result: "ML directory not found! Please copy all ML files."

The second screenshot contains some ad-hoc diagnostic output, showing that a FIO_FindFirst/FindNext from the camera is able to find the ML directory, but it's not able to read its contents.

4b) Make the card bootable manually (e.g. with a bootflag script such as make_bootable.sh) and attempt to run ML.
Result: ML will not be able to read the contents of the ML subdirectory. It won't be able to read its fonts, modules, scripts and so on.

Steps to reproduce in QEMU:
1) Create an empty 64GB image

rm -f sd64.img
truncate -s 64G sd64.img

2) Format it in the virtual camera.
Edit the QEMU launch script to use the newly created image, launch the emulation for your favorite DIGIC 4/5 camera (it must boot the GUI in the emulator and it must support exFAT) and format the image. Turn off the virtual camera.
3) Mount the card image into macOS Big Sur.
e.g. start from a macOS Big Sur guest running in QEMU and add these definitions to its launch command:

  -device ide-hd,bus=sata.3,drive=SDCard
  -drive id=SDCard,if=none,format=raw,file=/path/to/qemu-eos/sd64.img

4) Using macOS Big Sur, download ML for your favorite DIGIC 4/5 camera (that's all we've got for now) and unzip it on the virtual card.
5) Eject the virtual card and run the emulation.

Shortcut: here's a 64GB SD image already prepared for 60D, which you can use to reproduce this bug. Tip: decompress (unxz) it on a BTRFS filesystem - that way, it will only take a few megabytes on the disk ;)

Once confirmed on a fully supported camera, you can also test on newer models, such as 200D or M50, with a minimal test program, found on the digic6-dumper branch in minimal/qemu-fio, using the modified source: minimal.c

Compile with:

hg clone
cd magic-lantern/
hg up digic6-dumper -C
cd minimal/qemu-fio
wget -O minimal.c
# note: make install_qemu won't work on exFAT card images
# mount the 64GB SD image as EOS_DIGITAL, so "make install" will autodetect it
make MODEL=200D CONFIG_QEMU=y install
# run the emulation from the qemu-eos directory


The minimal test code linked earlier will output something like this:

Trying SD card...
    filename     size     mode     timestamp
--> DCIM         00020000 00000010 30/09/2017 12:15
--> MISC         00020000 00000010 30/09/2017 12:15
--> .fseventsd   00020000 0000003a 13/12/2020 22:33
--> .Trashes     00020000 0000003a 13/12/2020 22:33
--> autoexec.bin 00002900 00000020 13/12/2020 18:28
--> ._autoexec.b 00001000 00000022 13/12/2020 22:35
--> ML           00020000 00000038 03/07/2018 16:20
--> ML-SETUP.FIR 00008d5c 00000020 13/12/2020 17:14
--> ._ML-SETUP.F 00001000 00000022 13/12/2020 22:35
--> ._ML         00001000 00000022 13/12/2020 22:35
Trying DCIM dir...
    filename     size     mode     timestamp
--> 100CANON     00020000 00000010 30/09/2017 12:15
--> EOSMISC      00020000 00000010 30/09/2017 12:15
Trying ML dir...
FIO_FindFirstEx error 1, test failed.

Notice the attribute of the ML directory (0x38), created by macOS Big Sur, and compare it with the attribute of directories created by the camera (0x10). Thanks to Lorenzo33324 on the Discord channel for spotting the difference!

According to the official exFAT specification from Microsoft, the FileAttributes field defines the following bits:

- 0 -> 0x01: ReadOnly
- 1 -> 0x02: Hidden
- 2 -> 0x04: System
- 3 -> 0x08: Reserved1
- 4 -> 0x10: Directory
- 5 -> 0x20: Archive

In our case, the ML directory created by macOS Big Sur has the attributes set to 0x38, meaning: Archive, Directory, Reserved1. This is a problem - the directory created by Big Sur does not have valid attributes.

The exFAT driver from DryOS does not know how to interpret the Reserved1 bit, so it does not recognize ML (created by Big Sur) as a valid directory.

To confirm, I've mounted the SD card image under a Win10 virtual machine and ran the following command on the root directory of the card:

attrib -a ML

Result: the ML directory became readable for the above test code, which was run on the virtual camera.

You can get the same outcome by manually patching the attribute of the ML directory. In the attached 64GB card image (sd64.img), patch the byte at offset 0x20402A4 from 0x38 to 0x30, recompute the checksum (e.g. with fsck.exfat) and the ML directory becomes readable for the camera.

Caveat: the above workarounds will not "magically" fix your filesystem in order to use Magic Lantern. For that, you'd also have to modify the attributes of all subdirectories of the ML directory.

Another test you may want to run: delete the DCIM directory from the exFAT card and re-create it from Big Sur.
Expected outcome: the camera won't be able to save images.
On 60D, I've got an error at startup: "Card cannot be accessed". This will force you to format the card, losing any data you might have there. The filesystem was fully readable under Linux (FUSE exfat 1.3.0) and Windows 10.

ML bug? Apple bug? Canon bug?

As the issue can be reproduced with plain Canon firmware, it's clearly not a bug in ML.

Microsoft's specification marks these bits as reserved; implementations are not supposed to set them.

What happens is that:
1) Apple set a bit declared as "reserved" in the exFAT specification. Whether it was intentional or by mistake, I have no idea.
2) Canon interpreted that bit as "this is not a directory".

Therefore... both Apple and Canon seem to have misused the exFAT specification. Thanks g3gg0 ;)


The current version of macOS Big Sur - at least 11.0.1 - creates directories with invalid attributes on exFAT. While the filesystem drivers from other operating systems, like Windows or Linux, will tolerate these invalid attributes, the exFAT driver from Canon cameras will not.

In other words, the directories created from Big Sur 11.0.1 will not be recognized as valid directories in Canon EOS cameras.

This will affect users trying to install Magic Lantern on exFAT cards, regardless of camera generation, from DIGIC 4 up to at least DIGIC 8, and likely DIGIC X as well (not tested).

Regular users are unlikely to notice this bug, as triggering it requires the user to *create* a directory from macOS Big Sur, and to use it somehow in the camera.

This is not a ML bug. I can attempt to find a workaround as time permits, but... no guarantees.
General Chat / Applying for fiscal hosting
September 16, 2020, 09:19:57 PM
Topic split from

Quote from: nikfreak on September 03, 2020, 11:33:59 PM
Please just sign up for a patreon page to pay the bills. Then let's grab you at least a 5mkiv or whatever your heart wishes to do the magic work. Community will support it.

Well, given the recent evolution of the project (in particular, recent contributions), opening an individual Patreon page doesn't make sense to me. If we do some kind of fundraising, it has to be for the entire team of developers and contributors, not just for one individual developer. And I think I've found a much better tool for this purpose.

I'm looking at Open Collective. They offer something similar to a non-profit organization, but without the requirement to incorporate one - they call it a "virtual non-profit". It's fully transparent (everybody can see how we spend the money), they do all the paperwork for us (for a fee), and it's open to all contributors, not just to one particular person or to a closed group of core developers. Anyone can submit invoices to be reimbursed for project-related expenses, but the core team has to approve them. It even allows paying contributors for their time, as long as they can submit an invoice as a freelancer (but - depending on their country - they may need to register a local business or a sole proprietorship).

In other words, with Open Collective, even if I'm not available for some months (hopefully not years), the project will be able to continue without my direct involvement - the money from supporters won't be in my pockets, but available to the entire team of developers/contributors (whoever is still active in the community). That would be pretty difficult to achieve with Patreon.

Open Collective already offers fiscal hosting for several open source projects - both US-based (Open Source Collective) and EU-based (Open Collective Europe ASBL). Some projects hosted there:

- Qubes OS (US host)
- Mastodon (US host)
- Vue JS (US host)
- (US host)
- Tor (US host)
- Manjaro (EU host)
- many others

Here's our page on Open Collective - but it's not functional yet (you can't donate yet).

Also worth reading:
- What is Open Collective & how it works?
- Open Collective is a New, Transparent Way to Fund Open Source Projects
- Open Collective Docs - all of them :)
- The Value of Fiscal Sponsorship in FLOSS Communities (also covered on LWN)

I got in touch with Open Collective in spring 2019, but had to abandon the idea for a while (several unfinished projects in the pipeline, then the pandemic, etc.). Back then, they were very friendly and open towards our project, so... earlier this week I decided to resume the application process. They even offered to help with legal advice - I'll keep you posted once I have more details.

We still need to choose between the EU-based host (my personal preference), or the US-based one (which is specifically tailored to open source projects, and - according to OC admins - much better prepared for hosting our project). Last year I would have strongly leaned towards the EU-based host, primarily because of DMCA, but this is no longer a critical issue (in my opinion for now, to be confirmed).

Assuming this works out, i.e. if the level of support allows me to return to the project without risking my ability to pay the bills, I'll do just that - my job still allows some degree of flexibility. Otherwise, if the donations are not enough to partially cover my cost of living, but exceed the hosting costs, I might be able to reimburse contributors for their project-related expenses (such as high-speed cards, nonfree documentation, or equipment needed for reverse engineering, maybe a camera or two... depending on the budget).

Of course, in the past, there were voices completely against money (very understandable), so if there are any concerns with my proposal, I won't move forward unless consensus is reached. I haven't sorted out the details yet - last year I got the green light from Trammell and g3gg0, which is why they are listed on our Collective page linked earlier.

Just received a firmware dump from this model.

ROM dumper (requires an SD card formatted as FAT32):
Quote from: a1ex on January 16, 2019, 09:06:18 AM

  Magic Lantern Rescue
- Model ID: 0x422 4000D
- Camera model: Canon EOS 4000D / Rebel T100
- Firmware version: 1.0.0 / 1.9.2 1B(13)
- IMG naming: 100CANON/IMG_0213.JPG
- Boot flags: FIR=0 BOOT=0 RAM=-1 UPD=-1
- card_bootflags 106744
- boot_read/write_sector 106f38 107030
- 101DE4 Card init => 2
- Dumping ROM0... 100%
- MD5: (yours will be different)
- Dumping ROM1... 100%
- MD5: (yours will be different)
- No serial flash.
- Saving RESCUE.LOG ...

To emulate (Canon GUI working out of the box):
- pretend it's a 1300D
- apply the following ROM patch:

dd if=ROM1.BIN of=BOOT.BIN bs=64k skip=1 count=1    # extract the second 64 KiB block of ROM1
dd if=BOOT.BIN of=ROM1.BIN bs=64k seek=511          # write it back at offset 511 * 64 KiB (0x1FF0000)

- throw away ROM0 (it's not connected)
- change flash model ID to 0x003825C2 (1300D has 0x003925C2)
- 0xFE1171B4 DebugMsg
- 0x3888 task_create

- commit the emulation sources (my job)
- start porting ML (your job; follow the 1300D thread)

Have fun!

Recently, this blog post came to my attention:

It describes a method for measuring the rolling shutter of any camera, by filming a flickering light, as long as you know its frequency.

Examples of flickering lights:

- a plain old incandescent bulb in PAL land will flicker at 100 Hz (caveat: mains frequency is not exactly stable, but still reasonably good)
- some laptop monitors will start to flicker when you reduce their brightness; this frequency might be a little more stable
- many LED lights also flicker; some at hundreds of Hz (useful for us), others at a few kHz (not useful for us)
- if you've got an Arduino and an LED, you can program it to flicker at an arbitrary frequency -> pwm.ino (default: 500 Hz)
- or, just move the camera around, looking for something that flickers

Unknown frequency?

What if you've got a flickering light, but you don't know its frequency? You can measure it with any ML-enabled camera, as we already know the sensor readout timings:
- open the FPS override submenu, without actually enabling it (i.e. select FPS override in ML menu, leave it OFF and press Q)
- look for Main Clock => camera-specific constant (5D2 and 5D3 use 24 MHz, 700D/650D/M/100D use 32 MHz and so on)
- look for Timer A => this gives line readout speed. Timer A / Main Clock = line readout time. Example: 5D2 25p => 600 / 24 MHz = 25 microseconds per line.
- look for Timer B => this gives frame rate: FPS = Main Clock / Timer A / Timer B.
- write down these values; the Octave script below will do the math for you.

Octave script

I've prepared a small Octave script to perform this measurement: rolling.m

- Octave 4.x with the following packages: image and signal
- to analyze DNG files, you will also need read_raw.m and dcraw

You can either run the measurements on your own (caveat: the script may require some fiddling), or you can upload test images for me to analyze.

Sample test images

All converted from silent picture DNGs:
5D3-500hz-25p.jpg (using pwm.ino at 500 Hz)
5D3-500hz-24p.jpg (same light source)
5D2-100hz-25p-weak.jpg (mains frequency, very weak light, but still usable)

Sample output

octave rolling.m 5D3-500hz-25p.jpg 24e6 480
Using blue channel.
Pattern repeats every 100 lines (method: pwelch).
Pattern repeats every 100 lines (method: overlap).
Pattern repeats every 100 lines (method: zerocross).
Method: pwelch => 100.00 lines
Line readout clock: 50.00 kHz, i.e. 20.00 μs/line (known).
Light source frequency: 500.01 Hz (measured).

octave rolling.m 5D3-500hz-24p.jpg 24e6 440
Using blue channel.
Pattern repeats every 109 lines (method: pwelch).
Pattern repeats every 109 lines (method: overlap).
Pattern repeats every 109 lines (method: zerocross).
Method: pwelch => 108.98 lines
Line readout clock: 54.55 kHz, i.e. 18.33 μs/line (known).
Light source frequency: 500.52 Hz (measured).

octave rolling.m 5D3-500hz-25p.jpg 500
Using blue channel.
Pattern repeats every 100 lines (method: pwelch).
Pattern repeats every 100 lines (method: overlap).
Pattern repeats every 100 lines (method: zerocross).
Method: pwelch => 100.00 lines
Light source frequency: 500.00 Hz (known).
Line readout clock: 49998.86 Hz, i.e. 20.00 μs/line (measured).
Rolling shutter: 25.80 ms for 1290 lines.

octave rolling.m 5D3-500hz-24p.jpg 500
Using blue channel.
Pattern repeats every 109 lines (method: pwelch).
Pattern repeats every 109 lines (method: overlap).
Pattern repeats every 109 lines (method: zerocross).
Method: pwelch => 108.98 lines
Light source frequency: 500.00 Hz (known).
Line readout clock: 54488.46 Hz, i.e. 18.35 μs/line (measured).
Rolling shutter: 23.67 ms for 1290 lines.

octave rolling.m 5D2-100Hz-25p-weak.jpg 24e6 600
Using red channel.
Pattern repeats every 401 lines (method: pwelch).
Pattern repeats every 20 lines (method: overlap).
Pattern repeats every 21 lines (method: zerocross).
Method: pwelch => 400.68 lines
Line readout clock: 40.00 kHz, i.e. 25.00 μs/line (known).
Light source frequency: 99.83 Hz (measured).

# From the blog post: "So, the scan time is a bit over 60 milliseconds [...]"
mogrify -resize "8256x5504" Z702693.jpg
octave rolling.m -v Z702693.jpg 120
Using red channel.
Vignette fix...
Pattern repeats every 736 lines (method: pwelch).
Pattern repeats every 20 lines (method: overlap).
Pattern repeats every 733 lines (method: zerocross).
Method: pwelch => 736.36 lines
Light source frequency: 120.00 Hz (known).
Line readout clock: 88363.15 Hz, i.e. 11.32 μs/line (measured).
Rolling shutter: 62.29 ms for 5504 lines.


How accurate is this measurement? That depends on:
- how stable your test frequency is (mains frequency is probably +/- 1%, maybe more);
- how accurate our stripe size measurement is (let's say +/- 1 pixel, so it depends on stripe size).

A few tests with 5D3 at 1080p25 with the same 500Hz test light, input files 1 2 3 4:

for f in *.DNG; do octave rolling.m $f 500 | grep "measured"; done
Line readout clock: 50123.14 Hz, i.e. 19.95 μs/line (measured).
Line readout clock: 49898.92 Hz, i.e. 20.04 μs/line (measured).
Line readout clock: 49932.19 Hz, i.e. 20.03 μs/line (measured).
Line readout clock: 50137.52 Hz, i.e. 19.95 μs/line (measured).

Not that bad.

Wanted: test images

I'm looking for some test images from recent models not yet running ML, i.e. all DIGIC 6 and newer (including, but not limited to, 80D, 750D/760D, 7D2, 5D4, 5DS, 200D, 800D, 77D, 6D2, M50, EOS R), cross-checked with images from a camera already running ML.

Test conditions:
- blank wall under flickering light, without stray objects (e.g. please don't include the light bulb)
- focus doesn't matter (the script will average every row of the image)
- lens doesn't matter at all (you can perform the experiment without a lens if you want)
- choose any shutter speed that makes the flicker obvious (usually faster shutter speeds are preferred)
- ISO and aperture are not important; just make sure the image is reasonably clean

Test set should include:
- For the camera already running ML:
    - two simple DNG silent pictures from movie mode, one at 1080p24 and another at 1080p25
    - if you don't want to install ML on the test camera, a frame extracted from H.264 video will also work (in this case I'll have to guess the captured resolution)
    - I'll use these to measure (or double-check) the frequency of the light source.
- For the camera not (yet?) running ML:
    - a video frame (extracted from video) at each resolution x frame rate setting from Canon menu
    - if the camera has an option to take completely silent pictures in LiveView, please include one of these as well (full-res JPG)

I'll use these tests to estimate the sensor readout speed and to verify some hypotheses about LiveView configuration for 80D, 5D4 and other recent models I've looked into.

This method can be used with images from non-Canon cameras, too; feel free to submit them if you are curious, just be aware this won't magically bring ML to your camera :)
Reverse Engineering / Low-level image capture
June 24, 2018, 04:46:30 PM
Goal: capture raw images (both photo and LiveView) without executing any of the image processing paths. Just get the raw data.

Why? To understand how it works and to have fewer variables for experimenting with crop modes, high frame rates, readout settings etc.

Idea: log all MMIO activity and replay only what's done from the Evf task and associated interrupts.

Current status: able to get periodic HEAD timer interrupts!

Log: 5D3-mv1080p25.log

Rough overview:
- image capture is controlled from the main CPU (maybe with the help of Eeko; I hope it's not the case)
- all the interactions between the CPU and its peripherals are done via MMIO and interrupts (lowest level)
- high-level interactions are done via ADTG, CMOS and ENGIO registers; on top of ENGIO we've got EDMAC, image processing modules etc
- Canon's image capture code is too complex to understand what it does, but we can trace its actions (messages, functions called, MMIO activity)
- stuff is happening in Canon's EvfState (look for state transitions in the log file)

Step by step:

1) evfInit: runs at startup, no interesting MMIO activity

2) evfActive: this starts LiveView, creates resource locks 0x50000 (STARTUP), 0x40000 (HEAD) and 0x250000 (CARTRIDGE) and powers on the image capture device:

- before using a hardware device in an embedded system, we usually have to enable some clocks; best guess:

    MEM(0xC0400008) |= 0x400000;
    MEM(0xC0202000) = 0xD3;
    MEM(0xC0243800) = 0x40;

- then we may have to power it on (SDRV_PowerOnDevice, InitializePcfgPort):

    EngDrvOut(0xC0F01008, 0x1);
    register_interrupt("IMGPOWDET", 0x52, imgpowdet_cbr, 0);
    MEM(0xC0400028) = 0x100;

    /* InitializePcfgPort */
    EngDrvOut(0xC0F18000, 0x2);
    EngDrvOut(0xC0F1800C, 0x7C7F00);
    EngDrvOut(0xC0F01010, 0x200000);

    /* probably some GPIO */
    MEM(0xC0220020) = 0x46;

    /* IMGPOWDET interrupt triggers shortly after this */

    /* these seem to be related, not sure what they do */
    MEM(0xC0220024) = 0x46;

At this point you'll get an IMGPOWDET interrupt, showing that some image capture stuff was successfully powered on.

3) evfStart: bunch of initializations, including FPS timers, raw capture resolution, ADTG, CMOS; enables HEAD1 timer

4) when HEAD1 fires -> evfPrepareChangeVdInterrupt[FrameNo:0]; runs RamClear (guess: zeroing out the image buffer); enables HEAD3

5) when HEAD3 fires -> evfChangeHdInterrupt: stops RamClear, SetLvTgNextState(0x2) (lots of ADTG and CMOS regs)

6) when HEAD1 fires again -> evfChangeVdInterrupt: re-programs HEAD3

7) when HEAD3 fires again -> evfChangeHdInterrupt: SetLvTgNextState(0x4) -> PowerSaveTiming, SetReadOutTiming, SetLiveViewHeadFor1stSR, enables HEAD2, SensorDriveModeChangeCompleteCBR

8) evfModeChangeComplete (happens right after the above)

9) when HEAD2 fires -> evfPrepareCaptureVdInterrupt[FrameNo:1], LVx1_SetPreproPath, first raw frame should be available?

To be continued. Unable to get an image yet.
Reverse Engineering / Front AF LED (PROP_LED_LIGHT)
April 14, 2018, 08:25:27 PM
Background: the front LED was something we had no idea how to activate (other than some unreliable hack based on red eye reduction settings).

One hint: this LED might be triggered by an RC-6 remote. I don't have one to try - maybe I should get one, but I don't really have any other use for it.

Anyway - while looking for a possible LED address in 6D2, I've noticed an interesting piece of code referencing these strings:

fLedOn_Bv %x %x
AFAE LED %d %d %d %d
[LED] OFF -> ON %x
[LED] ON -> OFF %x

That hints the 6D2 might be turning on the front LED to assist autofocus, and that this LED might be controllable from software.

Going further, the low-level routine sends an MPU message (in other words, it might be asking the MPU - a secondary CPU - to turn on the AF LED). A closer look reveals similar strings in the 5D3 (though not as clear). On this camera, the low-level LED routine changes property 0x80050035 (size=2; the arguments appear to be the previous and requested LED states). There are some more interesting strings:


The front LED even has a guide number!

Where are these initialized? Breakpoint on SetLedLightInfo in QEMU:

[        AeWb:ff23c230 ] (98:01) [AEWB] aewbProperty ID=0x80030042(0x3)
    0xFF23C1E8(6b2df4, 6b85ac, 0, ff23c1e8)                                      at [AeWb:1796c:185c40] (pc:sp)
     0xFF23DBA0(6b2ea8 &"AEWB_DataStocker", 3, 0, 0, 0)                             at [AeWb:ff23c2d0:185be0] (pc:sp)

0x80030042 is PROP_LED_LIGHT. Value 3 means LedLightMode 3, with all other fields 0. Things start making sense.

Manually changing property 0x80050035 doesn't seem to work. When does Canon firmware turn on the front LED?! (other than with the RC-6 remote)

Some more: 0x80050035 appears to be 09 22 on the MPU side. 6D2 sends message 09 20 (SpecificToPartner). Changing property 0x80050035 to 0x101 sends the following message to MPU:

CA19D>    PropMgr:ff2e9f18:01:03: ###RequestPropertyLVLEDLightRequestResultCBR 9 32 1 1
CA1D5>    PropMgr:000b0d48:00:00: *** mpu_send(08 06 09 20 01 01 00), from ff122df8

From what I could tell, that's exactly what 6D2 does. Yet, the LED doesn't turn on...

If you have played with the broken camera from the homepage, you already knew this was coming :)

That was one of my oldest pet peeves since the 5D Mark III was announced. I had several unsuccessful attempts to understand how it works, but as I didn't really know what I was doing, it was a complete mystery for me. Until some days ago - I've applied this knowledge and got a successful proof of concept. I was like - whoa, it was THAT easy?!


There are four finger sensors on the rear scrollwheel - touch them lightly to get an event similar to a button press. During this process, the camera does move slightly, but in any case, the movement is a LOT less than with a full button press. Therefore, it's desirable (although far from perfect) while shooting video, but also during other situations where camera movement should be avoided - such as extreme macro without a sturdy fixture. Or, even during a timelapse, when you want to make sure you've enabled some setting in menu and you don't want to risk having to align everything in post.

It's probably a very underrated feature, since the only videos I could find on this were in Korean. However, I find it very useful - and also fun to explore.

With Canon's implementation, this feature only works while recording video (H.264), and... you have to press Q during video recording (!) in order to activate it. Why? Beats me...

Under the hood

I could not trigger events with another object (metallic or not) and I could not trigger them without moving the camera slightly. However, after registering the "press" event, you can sometimes move your finger back by 1-2 mm and the camera would consider the "button" is still "pressed".

This feature is controlled by the MPU - it sends events similar to the direction keys (actually coded much like the joystick events: regular up/down/left/right = 0x0B followed by 02/09/06/05, silent control = 0x28 followed by the same direction code). The analog side is not under our control; it's currently beyond our understanding, since we don't even know the processor architecture of the MPU on DIGIC 5.

Unfortunately, there's no SET event - just 4 direction keys and one (common) unpress event. Pressing two "buttons" at the same time does not give any event (so we can't, for example, use 2-finger taps to assign various functions). However, we can use gestures - slide the finger from top to right, for example, and you get two "press" events and one "unpress" at the end. That way, you can detect:

- 8 "small" gestures (45 degrees): top->right, right->bottom, bottom->right etc.
- larger gestures would work as well: top->right->bottom (180 degrees) or any other combinations.


I've mentioned before that, with Canon's implementation, this kind of control was only available while recording (and only after pressing Q). Turns out, the controls weren't tied to any of these. They are controlled by property 0x80030047 PROP_SILENT_CONTROL_STATUS (MPU message 03 46). Control values: 0 = enable, 2 = disable. Status values: 1 = enabled and 3 = disabled. There's also 0x8000004B PROP_SILENT_CONTROL_SETTING (the setting from Canon menu).


Unfortunately, this feature is guarded very well by the dragons in the MPU:

- it only works in GUI modes 41 (Q dialog), 84 (sound level adjustment) and a few others
- it does not work during regular standby or recording (so it's hard to use it to open ML menu or to change exposure parameters)
- it disables the rear scrollwheel (could not find a way around it)
- it disables the top scrollwheel in GUI mode 41, but works in 84
- GUI mode 84 has other issues, e.g. it doesn't come back cleanly to standby...
- for some reason it does not send unpress events for long touches while recording (what the...)

Proof of concept code

It's not without side effects, sorry.

Button assignments

I'm not sure what the best way is to assign these gestures to functions, because (1) there are just 4 direction events and (2) they are very easy to trigger by mistake. I was tempted to start enabling them using a gesture (or even using gestures alone for all the actions).

First PoC (0352ba0):
- direction keys for navigation
- works nice, but how do you change values?!

Second PoC (cb9c6ce):
- hold direction pads for at least 0.3 seconds: navigation
- top->right or right->top: Q in ML menu, MENU key in Canon menu (this "gesture" is closest to the Q button)
- right->down or down->right: SET (opposite corner)

Proposals welcome.

Side note

While looking into this, I've found a solution for the menu timeout issue on 6D/M/100D/70D (fix available in the lua_fix experimental build). Nobody bothered to report back, besides dfort.


Does any newer Canon camera still have this feature? (5D IV maybe?)

Found this after looking into an issue reported by Walter. Couldn't solve it, but discovered something a little more interesting: how to re-program the bokeh 8)

Say goodbye to busy backgrounds!

Left: with Canon firmware.
Right: with Magic Lantern's new "Crème de la crème à la Edgar" feature.
Camera: 5D Mark III, using the hardware mod from the homepage.
Lens: EF-S 24mm f/2.8 STM.


Download: after Arikhan uploads the NX1 raw hack.

Happy Easter egg hunting!
Forum and Website / Menu screenshots on the homepage
March 19, 2018, 12:25:53 AM
Quote from: dmilligan on April 11, 2015, 06:37:29 PM
[...] photos on the homepage do not necessarily reflect the latest development.

Solved. Sorry it took so long - have fun browsing the menus on the home page :)

Now the screenshots can be auto-updated every time a new build arrives.

Under the hood:
- a script that navigates the entire ML menu and saves screenshots (it runs on Jenkins, here's a GIF)
  - the following modes are simulated: photo, photo LiveView, movie
  - for each top-level ML menu option: you can toggle the option (SET), navigate the submenu (Q) or do both
  - nothing can be changed in submenus (yet)
  - anything you have changed is discarded as soon as you go away from that menu item (to keep the number of screenshots manageable)
  - extras: Canon menu, Q menus (just navigation, without changing any option)
  - result: some pre-rendered screenshots (about 3000 images at the time of writing)
- some hardcoded navigation logic (that runs on the server)
  - simulation state is completely given by the key sequence you see in the URL
  - screenshots are served as static images like this (will be cached by your browser)
  - each screenshot is about 10 KB (LiveView screenshots are larger)
  - URL grows with each click (but you can start over from the power button)
- front-end (interpreted by the browser)
  - button overlays (the red circles) are SVG elements on top of the camera image
  - works with or without JS (looks nicer with JS, e.g. flips the movie button, LED activity, nicer transitions)
  - LED shows network activity (querying the next state, loading a screenshot)
  - keyboard works too if JS is enabled (same buttons as in QEMU); ESC to disable keybindings, ENTER to re-enable
  - browser back button works too
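
In case you are wondering how a key sequence can map to a static image: since the simulation state is completely determined by the button presses, every unique sequence can resolve to one pre-rendered screenshot. A hypothetical sketch (the naming/hashing scheme here is made up for illustration; the real server logic may differ):

```python
# Hypothetical sketch: map a button-press sequence to a stable screenshot
# file name, so the same URL always resolves to the same cached image.
# The "screenshots/" path and sha1 scheme are invented for this example.
import hashlib

def screenshot_name(key_sequence):
    """Map a list of button presses to a deterministic file name."""
    state = ",".join(key_sequence)            # e.g. "MENU,RIGHT,SET"
    digest = hashlib.sha1(state.encode()).hexdigest()[:8]
    return "screenshots/%s.png" % digest

# the same key sequence always gives the same static image,
# so the browser can cache it
print(screenshot_name(["MENU", "RIGHT", "SET"]))
```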
Reverse Engineering / Interrupt IDs
January 23, 2018, 07:16:12 PM
After writing these notes (in particular, the section about interrupts), I've noticed we didn't document what all these interrupts are used for. This info is interesting for emulation and for understanding how Canon code works; these interrupts are not used directly in ML code.

So, here's my first attempt to list all the interrupts we know about. Sources of info:

- startup-logs or emulation logs with register_interrupt calls enabled (some have names in Canon code):

grep --text -nro " register_interrupt(.*)" startup-logs/ tests/*/gdb.log | grep -o register_interrupt.*

- interrupts declared in QEMU, model_list.c

cat qemu-2.5.0/hw/eos/model_list.c | grep -o "\..*interrupt.*=.*,"

- interrupts scattered in QEMU source: eos_trigger_int (either hardcoded IDs or arrays)

FILES=$(cat qemu-2.5.0/hw/eos/Makefile.objs | grep -E "eos/\w+.o" | grep -oE "\w+.o" | cut -d . -f 1 | sed -e 's/$/.c/' | sed -e 's!^!qemu-2.5.0/hw/eos/!')
cat $FILES | grep "eos_trigger_int\|^\w.*(" | grep -B 1 eos_trigger_int
cat $FILES | grep -zoE " [a-zA-Z_]*interrupt[a-zA-Z]*\[[^[]*] = {[^{]*}" | tr "\\n" " " | tr "\\0" "\\n"

- Omar interrupts

Results (machine output, take with a grain of salt):

0x03: WdtInt
0x09: dryos_timer
0x0A: dryos_timer
0x0D: Omar
0x0E: UTimerDriver
0x10: OC4_14, hptimer
0x18: hptimer
0x19: OCH_SPx
0x1A: OCH_SPx, hptimer
0x1B: OCHxEPx, dryos_timer
0x1C: OCH_SPx, Omar, hptimer
0x1D: OCHxEPx
0x1E: OCH_SPx, UTimerDriver, hptimer
0x1F: OCHxEPx
0x20: ICAPCHx
0x21: ICAPCHx
0x22: ICAPCHx
0x23: ICAPCHx
0x24: ICAPCHx
0x25: ICAPCHx
0x26: ICAPCHx
0x27: ICAPCHx
0x28: OC4_14, hptimer
0x29: OCHxEPx, sd_dma
0x2A: MREQ_ISR, mpu_mreq
0x2C: DmaAD
0x2D: DmaDA, Omar
0x2E: UTimerDriver, uart_rx
0x2F: BLTDMA, BLTDMAC0, BltDmac, dma
0x30: CFDMADriver, cf_dma
0x32: SDDMADriver, SdDmaInt, sd_dma
0x35: SlowMossy
0x36: SIO3_ISR, mpu_sio3
0x37: INTC_SIO4
0x38: uart_rx
0x39: OCH_SPx, uart_rx
0x3A: uart_tx
0x3C: Omar
0x3E: UTimerDriver
0x41: WRDMAC1
0x42: ASIFAdc
0x43: ASIFDac
0x49: OCHxEPx
0x4A: CFDriver, MREQ2_ICU, cf_driver, sd_driver
0x4B: SDDriver, SdConInt, sd_driver
0x4D: Omar
0x4E: UTimerDriver
0x52: IMGPOWDET, MREQ_ISR, mpu_mreq
0x58: EDmacWR0, WEDmac0, edmac
0x59: EDmacWR1, OCH_SPx, WEDmac1, edmac
0x5A: EDmacWR2, LENSIF_SEL, WEDmac2, edmac
0x5B: EDmacWR3, WEDmac3, edmac
0x5C: EDmacWR4, Omar, WEDmac4, edmac
0x5D: EDmacRD0, REDmac0
0x5E: EDmacRD1, REDmac1, UTimerDriver, edmac
0x5F: EDmacRD2, REDmac2, edmac
0x60: CompleteReadOperation
0x61: AfComplete
0x62: AfOverRun
0x63: Obinteg
0x64: JP51_INT_R, JpCore, jpcore
0x65: ADKIZ, ADMERG, IntDefectCorrect, prepro_execute
0x66: Integral, WB Integ, WbInteg
0x67: Block, WbBlock
0x68: EngInt PBVD, Engine PB VD, PB_VD, display
0x69: EngInt PBERROR, EngInt PBVD, OCHxEPx, PB_ERR, Pb error
0x6A: HEAD1, Head1, head
0x6B: HEAD2, Head2, head
0x6C: HEADERROR, HeadError
0x6D: EDmacWR5, Omar, WEDmac5, edmac
0x6E: EDmacRD3, REDmac3, UTimerDriver, edmac
0x70: HarbInt
0x74: BLTDMA, BLTDMAC1, BltDmac, dma
0x75: BltDmac, dma
0x76: BltDmac, dma
0x79: OCH_SPx
0x7A: XINT_7
0x7C: Omar
0x7E: UTimerDriver
0x82: CFDriver, cf_driver
0x83: WEDmac8, edmac
0x89: OCHxEPx
0x8A: INT_LM, WEDmac9, edmac
0x8B: REDmac6
0x90: WEDmac6
0x91: REDmac4
0x92: REDmac5, REDmac7, edmac
0x93: CompleteOperation
0x95: edmac
0x96: REDmac10, edmac
0x97: REDmac11, edmac
0x98: CAMIF_0
0x99: OCH_SPx
0x9A: CompleteOperation
0x9C: Omar, SEQ
0x9E: REDmac13, edmac
0x9F: edmac
0xA0: BltDmac, EekoBltDmac, dma
0xA1: BltDmac, EekoBltDmac, dma
0xA3: Jp57, JpCore2
0xA5: RDDMAC15, edmac
0xA8: BltDmac, CAMIF_1, dma
0xA9: BltDmac, OCHxEPx, dma
0xAA: CompleteOperation
0xB1: SDDriver, sd_driver
0xB2: OCH_SPx
0xB3: OCH_SPx
0xB8: CFDMADriver, SDDMADriver, sd_dma
0xB9: OCH_SPx
0xBC: Omar
0xBE: SdDmaInt0, sd_dma
0xC0: WEDmac6, edmac
0xC1: REDmac4, edmac
0xC8: REDmac5, edmac
0xC9: Fencing_A, OCHxEPx
0xCA: INT_LM, WEDmac10, edmac
0xCB: WEDmac11, edmac
0xCD: Omar
0xCE: SerialFlash
0xD0: Fencing_B
0xD1: Fencing_C
0xD2: WEDmac12, edmac
0xD3: WEDmac13, edmac
0xD9: HEAD3, Head3, ICAPCHx, head
0xDA: WEDmac14
0xDB: WRDMAC15, edmac
0xDC: Omar
0xDE: SerialFlash
0xE0: HEAD4, head
0xE1: SsgStopIrq
0xE2: REDmac8, edmac
0xE3: CFDMADriver, cf_dma
0xE4: GaUSB20Hal
0xEE: SdConInt0, sd_driver
0xF9: ICAPCHx, WEDmac7
0xFC: OCH_SPx, Omar
0xFE: SerialFlash, dryos_timer, serial_flash
0x102: RDDMAC13
0x109: ICAPCHx
0x10C: BltDmac
0x10E: SerialFlash
0x111: Eeko WakeUp
0x119: ICAPCHx
0x129: ICAPCHx
0x12A: mpu_mreq
0x139: ICAPCHx
0x13E: xdmac
0x140: ICOCCHx
0x141: ICAPCHx
0x142: ICAPCHx
0x147: SIO3_ISR, mpu_sio3
0x148: ICAPCHx
0x149: ICAPCHx
0x14A: ICAPCHx
0x14E: xdmac
0x150: ICAPCHx
0x151: ICAPCHx
0x152: ICAPCHx
0x158: OCH_SPx
0x159: ICAPCHx, OCH_SPx
0x15A: OCH_SPx
0x15D: uart_rx
0x15E: xdmac
0x162: SerialFlash
0x169: ICAPCHx
0x16D: uart_tx
0x16E: xdmac
0x171: SDDMADriver, sd_dma
0x172: SDDriver, sd_driver
0x179: ICAPCHx
0x17B: SerialFlash, serial_flash
0x189: ICAPCHx
0x18B: WdtInt


TODO:
- group by camera generation, DIGIC version etc
- other sources of info (such as strings present in the interrupt handling function, or other notes about them)
- brute-force interrupts (trigger manually) and see what the firmware is trying to do
- auto-build the above list on Jenkins (so it will be always up to date, at least with QEMU sources)
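
As a starting point for the "group by" item: the table format above is easy to parse. A small sketch (sample entries copied from the table; the grouping scheme is just one possibility):

```python
# Parse "0xNN: name1, name2, ..." lines into a dict, then group the
# interrupt IDs by handler name. Sample lines are copied from the table above.
sample = """\
0x03: WdtInt
0x09: dryos_timer
0x0A: dryos_timer
0x2F: BLTDMA, BLTDMAC0, BltDmac, dma
""".splitlines()

interrupts = {}
for line in sample:
    addr, names = line.split(":", 1)
    interrupts[int(addr, 16)] = [n.strip() for n in names.split(",")]

# which interrupt IDs does each handler name appear on?
by_name = {}
for irq, names in interrupts.items():
    for name in names:
        by_name.setdefault(name, []).append(irq)

print(by_name["dryos_timer"])   # -> [9, 10]
```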
Following this request, I've decided to revive the old filepref module. Renamed it to

- custom image file prefix (IMG_1234.JPG -> ABCD1234.JPG; from the old filepref module)
- change image file number to any value (IMG_1234.JPG -> IMG_5678.JPG; experimental, restart required)

- timestamped file names (original request). Please don't expect it anytime soon - I don't know how to change the file number (last 4 characters) without restart. Maybe you can figure it out?
- date-stamped file names? (MMDD1234). This one might be easier; still need to find out how to reset the counter.
- continuous numbering? (12349999 -> 12350000, ABCD9999 -> ABCE0000). This one should be easy.
- customize folder number? (didn't try, but noticed the property in QEMU).
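
The continuous numbering logic, at least, is easy to sketch on paper (this only shows the naming rule; actually applying it in the camera is a separate problem):

```python
# Sketch of the "continuous numbering" idea: file names are a 4-character
# prefix plus a 4-digit counter; when the counter wraps past 9999, carry
# into the last prefix character (12349999 -> 12350000, ABCD9999 -> ABCE0000).
# Edge cases ('9' or 'Z' as the last prefix character) are left out.
def next_file_number(name):
    prefix, number = name[:4], int(name[4:])
    if number < 9999:
        return "%s%04d" % (prefix, number + 1)
    # counter wrapped: carry into the last prefix character
    carried = chr(ord(prefix[3]) + 1)   # '4' -> '5', 'D' -> 'E'
    return prefix[:3] + carried + "0000"

print(next_file_number("12349999"))   # -> 12350000
print(next_file_number("ABCD9999"))   # -> ABCE0000
```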

Known/possible issues:
- on 5D3, Canon file naming options must be set to default.
- might conflict with Dual ISO custom file naming (not tested).
- only tested on 5D3 and 60D.

- Image file prefix is also available to Lua (lua_fix builds)

Binary: (only the first feature works; the second one requires a custom ML build)
Reverse Engineering / TFT SIO communication
November 26, 2017, 03:51:30 PM
Some notes after looking into this.

QEMU logs: zip

How I've got them:

make -C ../magic-lantern/60D_install_qemu
./ 60D,firmware="boot=1" -d debugmsg,io
# same for 600D, 650D, 700D, 70D

In ML menu, selected Display -> Advanced -> Orientation -> Normal/Mirror/Reverse, then copied the console output. Had to silence a few things in QEMU to get clean logs. The important lines are those like this:

[  DisplayMgr:ff0611b4 ] (82:02) SIO [3]:0xf01d

60D, 600D:

[ GuiMainTask:ff325714 ] (04:03) -->Mirror start
[  DisplayMgr:ff0611b4 ] (82:02) SIO [0]:0x1000
[  DisplayMgr:ff0611b4 ] (82:02) SIO [1]:0xbe01
[  DisplayMgr:ff0611b4 ] (82:02) SIO [2]:0xe401
[  DisplayMgr:ff0611b4 ] (82:02) SIO [3]:0xf01d
[ GuiMainTask:ff325774 ] (04:03) -->Normal start
[  DisplayMgr:ff0611b4 ] (82:02) SIO [0]:0x1001
[  DisplayMgr:ff0611b4 ] (82:02) SIO [1]:0xbe01
[  DisplayMgr:ff0611b4 ] (82:02) SIO [2]:0xe401
[  DisplayMgr:ff0611b4 ] (82:02) SIO [3]:0xf01d
[ GuiMainTask:ff325744 ] (04:03) -->Reverse start
[  DisplayMgr:ff0611b4 ] (82:02) SIO [0]:0x1000
[  DisplayMgr:ff0611b4 ] (82:02) SIO [1]:0xbe01
[  DisplayMgr:ff0611b4 ] (82:02) SIO [2]:0xe401
[  DisplayMgr:ff0611b4 ] (82:02) SIO [3]:0xf09d

700D, 650D:

cat 700D-*.log | grep -E "DisplayMgr.*SIO|-->"
[ GuiMainTask:ff4d91bc ] (04:03) -->Mirror start
[  DisplayMgr:ff128980 ] (82:01) SIO [0]:0x36
[  DisplayMgr:ff128980 ] (82:01) SIO [1]:0x140
[ GuiMainTask:ff4d921c ] (04:03) -->Normal start
[  DisplayMgr:ff128980 ] (82:01) SIO [0]:0x36
[  DisplayMgr:ff128980 ] (82:01) SIO [1]:0x100
[ GuiMainTask:ff4d91ec ] (04:03) -->Reverse start
[  DisplayMgr:ff128980 ] (82:01) SIO [0]:0x36
[  DisplayMgr:ff128980 ] (82:01) SIO [1]:0x1c0

70D (EOS M matches this):

cat 70D-*.log | grep -E "DisplayMgr.*SIO|-->"
[ GuiMainTask:ff504660 ] (04:03) -->Mirror start
[  DisplayMgr:ff134c18 ] (82:02) SIO [0]:0x602
[ GuiMainTask:ff5046c0 ] (04:03) -->Normal start
[  DisplayMgr:ff134c18 ] (82:02) SIO [0]:0x600
[ GuiMainTask:ff504690 ] (04:03) -->Reverse start
[  DisplayMgr:ff134c18 ] (82:02) SIO [0]:0x606

Experimental code (don't click me):

static void run_test()
{
    #ifdef CONFIG_5D3_113
    void (*lcd_sio_init)() = (void *) 0xFF12D284;
    void (*lcd_sio_write)(uint32_t * data, int size) = (void *) 0xFF12D1E0;
    void (*lcd_sio_finish)(void * sio_obj) = (void *) 0xFF13BDC8;
    void ** p_lcd_sio_obj = (void **) 0x246F0;
    // 650D 104: FF127E88, FF127D88, FF13B868, 23C48.
    // 700D 115: FF128A28, FF128928, FF13C420, 23C58.

    printf("LCD sio start\n");
    lcd_sio_init();
    lcd_sio_write((uint32_t[]) { 0x36, 0x140 }, 2);
    lcd_sio_finish(*p_lcd_sio_obj);
    printf("LCD sio finish\n");
    #endif
}

5D3: the above code turns off the screen, but leaves the backlight on.
650D: wip
700D: ?

SIO initialization sequences are different (likely different TFT controllers). 650D and 700D are identical.

5D3 has an interesting SIO sequence when display brightness is set to Auto.

11949> DisplayMgr:ff12d238:82:02: SIO [0]:0x34
11981> DisplayMgr:ff12d238:82:02: SIO [1]:0x1700
119B5> DisplayMgr:ff12d238:82:02: SIO [2]:0x1808
119E9> DisplayMgr:ff12d238:82:02: SIO [3]:0x1960
11A1C> DisplayMgr:ff12d238:82:02: SIO [4]:0x35

Hypothesis: high byte is TFT register address, low byte is value (similar to ADTG, CMOS, audio).

5D3: register 0x19 appears to be gamma correction (0-63). The remaining two bits cause some flicker in saturated blue (?!)

Register 0 is set to 0x34 on TftDeepStanby; 0x35 brings back the image.

    for (int i = 0; i < 64; i++)
        lcd_sio_write((uint32_t[]) { 0x34, 0x1900 | (i + (rand() & 3) * 64), 0x35 }, 3);

The search space appears small (256 registers, 256 possible values), so let's brute-force it:

    for (int reg = 0; reg < 0x100; reg++)
    {
        for (int val = 0; val < 0x100; val++)
        {
            bmp_printf(FONT_LARGE, 50, 50, "%02x: %02x", reg, val);
            lcd_sio_write((uint32_t[]) { 0x34, (reg << 8) | val, 0x35 }, 3);
        }

        /* restore the display back to working condition */
    }

Documenting other registers (either on 5D3 or on other models) is welcome. Other than trial and error, I don't have a better way to analyze them.

Other registers present (found with { 0x34, rand() & 0xFFFF, 0x35 }, but not written down):
- color adjustments (temperature?)
- mirroring, flipping
- half resolution
- scaling, translation (both H and V)

So far, the image gets back to normal when switching the display mode (such as going into Canon menu).
Just playing with this dataset and

Quote from: Levas on November 28, 2016, 01:46:10 PM
Do you mean you want to try some super resolution algorithms ?
I have about 45 frames of this castle, before I start panning to the right...
Uploading frame 0 to 45 right now, takes about half an hour.
Same link as before.

- before: dcraw M27-1337-frame_000002.dng
- after: averaged with frames 1 and 3, warped with optical flow to match frame 2

To reproduce the above result, get the files below, install the dependencies (follow comments and error messages), then type:

make -j2 M27-1337_frame_000002-a.jpg

or "make -j8" to render the entire sequence on a quad-core processor.

Makefile (use Firefox for copying the text; Google Chrome and Safari will not work!)

# experiment: attempt to reduce aliasing on hand-held footage using optical flow
# requires: dcraw, ImageMagick (convert) and the pyflow library

# replace with path to pyflow repository
FLOW=python ~/src/pyflow/

# default target: render all frames as jpg
all: $(patsubst %.dng,%-a.jpg,$(wildcard M27-1337_frame_*.dng))

# render DNGs with dcraw
%.ppm: %.dng
	dcraw $<

# helper to specify dependencies on previous or next image
# assumes the file name pattern is: prefix_000123 (underscore followed by 6 digits)
# fixme: easier way to... increment a number in Makefile?!
inc = $(shell stem="$1"; echo $${stem%_*}_$$(printf "%06d" $$((10\#$${stem//*_/}+$2))) )

# enable secondary expansion (needed below)
.SECONDEXPANSION:

# next or previous frames
%-n.png: %.ppm $$(call inc,%,1).ppm
	$(FLOW) $^ $@

%-p.png: %.ppm $$(call inc,%,-1).ppm
	$(FLOW) $^ $@

# average
%-a.png: %.ppm %-n.png %-p.png
	convert -average $^ $@

# fallback rules: first / last file will only have "next" / "previous" neighbours
# FIXME: these rules may be chosen incorrectly instead of the above in some edge cases; if in doubt, delete them and see if it helps
%-a.png: %.ppm %-n.png
	convert -average $^ $@

%-a.png: %.ppm %-p.png
	convert -average $^ $@

# convert to jpg
%.jpg: %.ppm
	convert $< $@
%.jpg: %.png
	convert $< $@

# 100% crops
%-crop.jpg: %.jpg
	convert $< -crop 400x300+900+650 $@

# careful if you have other files in this directory ;)
clean:
	rm -f *.ppm *.jpg *.png *.npy

# do not delete intermediate files
.SECONDARY:

# example:
# make -j8
# make -j2 M27-1337_frame_000002-a.jpg

# Modified the demo from
# -> just save the warped image and the computed flow; filenames from command line

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# from __future__ import unicode_literals
import numpy as np
from PIL import Image
import time
import pyflow
import sys

if len(sys.argv) != 4:
    print("usage: %s input1.jpg input2.jpg output.npy" % sys.argv[0])
    raise SystemExit

print("%s %s -> %s" % (sys.argv[1], sys.argv[2], sys.argv[3]))

im1 = np.array(Image.open(sys.argv[1]))
im2 = np.array(Image.open(sys.argv[2]))
im1 = im1.astype(float) / 255.
im2 = im2.astype(float) / 255.

# Flow Options:
alpha = 0.012
ratio = 0.75
minWidth = 20
nOuterFPIterations = 7
nInnerFPIterations = 1
nSORIterations = 30
colType = 0  # 0 or default:RGB, 1:GRAY (but pass gray image with shape (h,w,1))

s = time.time()
u, v, im2W = pyflow.coarse2fine_flow(
    im1, im2, alpha, ratio, minWidth, nOuterFPIterations, nInnerFPIterations,
    nSORIterations, colType)
e = time.time()
print('Time Taken: %.2f seconds for image of size (%d, %d, %d)' % (
    e - s, im1.shape[0], im1.shape[1], im1.shape[2]))

flow = np.concatenate((u[..., None], v[..., None]), axis=2)
np.save(sys.argv[3] + ".npy", flow)

import cv2
cv2.imwrite(sys.argv[3], im2W[:, :, ::-1] * 255)

Exercise for the reader: use more frames to compute the correction.

Have fun.
General Development / Full-screen histogram WIP
October 30, 2017, 10:26:45 PM
Something like this?

Topic split from here.
Another pipe dream came true :) - this time, a dream of mine.

Have you noticed a bunch of screenshots on the nightly builds page?

Were you wondering what's up with them?

These screenshots are created on the build server, by emulating the very builds available for download, unmodified, in QEMU.

In other words, most of the nightly builds are no longer 100% untested when they appear on the download page :)

This is not an overnight development - it's built upon all these years of fiddling with QEMU. A short while ago I couldn't give a good answer regarding the usefulness of the emulator - now you can see it live.

Right now there are only a few tests, with OCR-based menu navigation (using tesseract):

1) navigate to Debug -> Free Memory and take a screenshot from there
2) load the Lua module and run the Hello World script
3) load the file_man module and browse the filesystem
4) play the first 3 levels of the Sokoban game (lua_fix only; example for 1200D)

TODO:
- add more tests (easy, but time-consuming)
- emulate more camera components (e.g. image playback to be able to test ML overlays)
- check code coverage
- diff the screenshots
- nicer reports

For now, have fun watching the testing script playing Sokoban in QEMU :)

Emulation log

If you are wondering what's the point of testing this game: it covers many backend items, such as menu navigation, module loading, script config files, making sure keys are not missed randomly during script execution, checking whether the camera has enough memory to run scripts - most of these are real bugs found on some camera models from the current nightly builds.

At least, these tests will catch the long-standing issue of some camera models running out of memory, thus not being able to boot. Not very funny for a build considered somewhat stable...

And the emulation is still pretty limited, so I'm just adding tests for what works :)

A while ago I've got the suggestion to use openQA, but I'm still wrapping my head around it. If you can show how it could save us from reinventing the wheel, I'm all ears.
Sneak preview of what I'm working on:

at ../../src/stdio.c:44 (streq), task module_task
lv:0 mode:3

module_task stack: ad340 [69c60-1dd3b0]
0x0006DC34 @ 7162c:1dd400
0x0007CF7C @ 6dd3c:1dd3f0
0x00069C00 @ 7cf9c:1dd3e0
0x000AD644 @ 69c5c:1dd3b0

What's the meaning of these codes?

eu-addr2line -s -S --pretty-print -e magiclantern 0x0006DC34 0x0007CF7C 0x00069C00 0x000AD644
entry_guess_icon_type at menu.c:694
streq at stdio.c:43
ml_assert_handler at boot-hack.c:596
backtrace_getstr at backtrace.c:859

eu-addr2line -s -S --pretty-print -e magiclantern 7162c 6dd3c 7cf9c 69c5c
menu_add.part.25+0x100 at menu.c:1212
entry_guess_icon_type+0x108 at menu.c:711
streq+0x20 at stdio.c:44
ml_assert_handler+0x5c at boot-hack.c:605

Putting all together:

menu_add (menu.c:1212) called entry_guess_icon_type (located at menu.c:694)
entry_guess_icon_type (menu.c:711) called streq (located at stdio.c:43)
streq (stdio.c:44) called ml_assert_handler (located at boot-hack.c:596) - that's the ASSERT macro
ml_assert_handler (boot-hack.c:605) called backtrace_getstr (located at backtrace.c:859)

Heh, that backtrace went a little bit too far :)

Note: the above line numbers are valid for this changeset.

Works for Canon code too (but it's unable to figure out indirect calls):

at RscMgr.c:2513, task InnerDevelopMgr
lv:0 mode:3

InnerDevelopMgr stack: ad360 [697d8-19e498]
0xUNKNOWN  @ de48:19e568
0xUNKNOWN  @ 17bbc:19e540
0x000178B4 @ ff139c38:19e528
0xUNKNOWN  @ 178e4:19e518
0xUNKNOWN  @ 1796c:19e4f8
0xFF0F2F14 @ ff301928:19e4e0
0x00001900 @ ff0f2f80:19e4d0
0x000AD664 @ 697d4:19e498

Will post more details after committing the source.

In the mean time, I'd appreciate a small script (easy coding task) that would take a crash log as input (as in the above examples) and create a human-readable output from it (as in the "putting all together" example). To get the debugging info required for name translation, you'll need this changeset.
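
Until someone writes the full version, here's a rough sketch of the parsing half in Python. It extracts the two address columns from the stack trace lines and prints the eu-addr2line commands shown above; running eu-addr2line and merging its output into the "putting all together" form is the remaining (easy) part. Lines with 0xUNKNOWN (indirect calls) are simply skipped by this pattern.

```python
# Sketch: parse the stack trace lines from a crash log
# ("<called address> @ <call site>:<stack pointer>") and print the
# eu-addr2line commands needed to translate them.
import re

crash_log = """\
module_task stack: ad340 [69c60-1dd3b0]
0x0006DC34 @ 7162c:1dd400
0x0007CF7C @ 6dd3c:1dd3f0
0x00069C00 @ 7cf9c:1dd3e0
0x000AD644 @ 69c5c:1dd3b0
"""

# 0xUNKNOWN entries won't match this pattern and are dropped
entries = re.findall(r"0x([0-9A-Fa-f]+) @ ([0-9a-f]+):[0-9a-f]+", crash_log)
callees = [callee for callee, caller in entries]
callers = [caller for callee, caller in entries]

print("eu-addr2line -s -S --pretty-print -e magiclantern "
      + " ".join("0x" + a for a in callees))
print("eu-addr2line -s -S --pretty-print -e magiclantern "
      + " ".join(callers))
```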
Some early notes (5D3 1.2.3).

PROP_REBOOT (software reboot):

    int reboot = 0;
    prop_request_change(PROP_REBOOT, &reboot, 4);

0x80010001 PROP_TERMINATE_SHUT_REQ (0=request, 3=execute, 4=cancel)

08F1E>    PropMgr:000aecf0:00:00: *** mpu_send(06 04 04 07 00), from ff12298c
09101> **INT-36h*:000aed58:00:00: *** mpu_recv(06 05 02 0b 00 00), from ff2e87f8
09388>    PropMgr:ff0cdc60:8c:03: terminateChangeCBR : SHUTDOWN (0)
093AA>    PropMgr:ff0cde2c:8c:16: SHUTDOWN_REQUEST

Opening battery door:
0x80010002 PROP_ABORT

786D6> **INT-36h*:000aed88:00:00: *** mpu_recv(06 05 06 26 01 00), from ff2e87f8
78821> **INT-36h*:000aed88:00:00: *** mpu_recv(06 05 06 13 01 00), from ff2e87f8
7915B> **INT-36h*:000aed88:00:00: *** mpu_recv(06 05 04 0d 00 00), from ff2e87f8
86E81> **INT-36h*:000aed88:00:00: *** mpu_recv(06 04 02 0c 01), from ff2e87f8
8A249>    PropMgr:ff0f8f74:00:03: [SEQ] CreateSequencer (Terminate, Num = 2)
8A2CB>    PropMgr:000aeb2c:00:00: *** task_create("Terminate", 0x11, 0x1000, 0xff0f8e40, &"Sequencer"), from ff0f8ffc
8A1F8>  Terminate:000aeb0c:00:00: *** terminateAbort(0x200000, 0x0, 0x0, 0x200000), from ff0f8edc
8A5D7>  Terminate:000aeb0c:00:00: *** terminateAbort(0x10, 0x0, 0x0, 0x10), from ff0f8edc

Saving settings to ROM at shutdown:

8A16A>    PropMgr:ff1282a4:02:03: Compare FROMAddress (0) 0x40710e00 0xff060000 Size 2424

      RAM_DebugMsg(140, 22, "Write to FROM");

When opening the battery door, I've identified prop_erase/prop_write calls to 0x3000000 (0xff21c8bc, triggered from PROP_ABORT), 0x5000000 and 0x2000000 (0xff0ce424, Terminate task).

On normal shutdown (power button or card door), terminateShutdown is used instead of terminateAbort.

Canon settings are organized like this (see PROPAD_CreateFROMPropertyHandle):

name   ROM addr   N * sector_size?  block_size?   prop_class
TUNE   0xF8B20000   23 * 0x20000      0x2E0000    0x1000000
TUNE2  0xF0020000   42 * 0x10000      0x2A0000    0x1000000
FIX    0xF8E60000    4 * 0x20000       0x80000    0
RING   0xF8F40000    2 * 0x20000        0x1000    0x2000000
RASEN  0xF8EE0000    3 * 0x20000       0x20000    0x4000000, 0x5000000, 0xE000000
LENS   0xF8E00000    3 * 0x20000       0x20000    0xB000000
CUSTOM 0xF8060000    2 * 0x20000        0x1000    0x3000000

The prop_class field is the "category" of properties stored in each block. Examples:
- RING: 0x02040002 PROP_LANGUAGE, 0x02040003 PROP_VIDEO_SYSTEM, 0x02040005 PROP_DATE_FORMAT
- TUNE: 0x1010022/25...37 vertical stripe correction parameters
- TUNE2: 0x10500d1...d4 (see above)
- RASEN: unknown (0x5010002 contains '', 0x5010003 contains 'Wft-canon')
- CUSTOM: some of them look like picture style parameters (probably settings for C modes)
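
In other words, the table can be read as a lookup from the top byte of a property ID to the ROM block(s) where it is stored. A quick sketch (block names and classes copied from the table above):

```python
# Map each ROM block to the prop_class value(s) from the table above,
# then look up which block(s) a given property ID belongs to.
blocks = {
    "TUNE":   [0x1000000],
    "TUNE2":  [0x1000000],
    "FIX":    [0x0000000],
    "RING":   [0x2000000],
    "RASEN":  [0x4000000, 0x5000000, 0xE000000],
    "LENS":   [0xB000000],
    "CUSTOM": [0x3000000],
}

def blocks_for(prop_id):
    prop_class = prop_id & 0xFF000000   # "category" = top byte
    return [name for name, classes in blocks.items() if prop_class in classes]

print(blocks_for(0x02040002))   # PROP_LANGUAGE -> ['RING']
print(blocks_for(0x05010003))   # 'Wft-canon'   -> ['RASEN']
```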

See also
As you probably have guessed from the latest developments (QEMU, EDMAC graphs, JPCORE, EEKO), our understanding of how LiveView works has improved considerably. Finally, all my fiddling with QEMU, at first sight with little or no purpose for everyday users, starts paying off.

Today, Magic Lantern proudly announces new ground-breaking features that were previously thought impossible or very hard to achieve.

We proudly present....

4K RAW Video Recording!


Twitter announcement

On the 5D Mark III, you now have the following new resolutions:

* 1920x960 @ 50p (both 1:1 crop and full-frame - 3x3 pixel binning) - continuous*)
* 1920x800 @ 60p (same as above)  - continuous*)
* 1920x1080 @ 45p and 48p (3x3 binning)  - continuous at 45p*)
* 1920x1920 @ 24p (1:1 square crop) - continuous*)
* 3072x1920 @ 24p (1:1 crop)
* 3840x1536 @ 24p (1:1 crop) (corrupted frames at 1600)
* 4096x2560 @ 12.5p (1:1 crop) - continuous*) at 8 FPS
* 4096x1440 @ 25p (1:1 crop)
* Full-resolution LiveView: 5796x3870 at 7.4 fps (128ms rolling shutter) - continuous*) at 5 FPS!
* Full-width LiveView - decrease vertical resolution in the crop_rec submenu, all the way to 5796x400 @ 48 fps :)

The last feature complements the well-known full-resolution silent pictures - the new implementation will be usable at fast shutter speeds, without the exposure gradient - but with rolling shutter (just like regular LiveView frames).

*) Continuous recording for the above resolutions can be achieved as long as you can get a LJ92 compression ratio (compressed / 14-bit uncompressed) of about 50-55%, with preview set to Frozen LV (previously known as Hacked Preview) for an additional speed boost. Otherwise, you'll have to reduce the resolution or the frame rate.

The following table shows how compression rate changes with ISO and bit depth; please check the figures for your particular scene in the raw video submenu, as they can vary a lot, depending on the scene content.

Quote from: a1ex on April 15, 2017, 01:12:36 PM
Bits per pixel      14  12  11  10   9   8
ISO  100 1/100     61% 53% 50% 48% 46% 43%
ISO  200 1/200     62% 54% 51% 49% 47% 44%
ISO  400 1/400    63% 54% 51% 49% 47% 45%
ISO  800 1/800     65% 55% 52% 50% 48% 46%
ISO 1600 1/1600    67% 56% 53% 50% 48% 46%
ISO 3200 1/3200    70% 57% 53% 50% 49% 47%
ISO 6400 1/6250    76% 60% 55% 52% 50% 48%
ISO 12800 1/12500  79% 63% 57% 53% 50% 49%

Credits: Greg (full-width LiveView), g3gg0 (video timer, DIGIC registers documentation and lots of other low-level insights).

Complete list of new video modes:

                                /*   24p   25p   30p   50p   60p */
    [CROP_PRESET_3X_TALL]       = { 1920, 1728, 1536,  960,  800 }, /* 1920 */
    [CROP_PRESET_3x3_1X]        = { 1290, 1290, 1290,  960,  800 }, /* 1920 */
    [CROP_PRESET_3x3_1X_48p]    = { 1290, 1290, 1290, 1080, 1080 }, /* 1920; 1080p45/48 <- 50/60p in menu */
    [CROP_PRESET_3K]            = { 1920, 1728, 1504,  760,  680 }, /* 3072 */
    [CROP_PRESET_UHD]           = { 1536, 1472, 1120,  640,  540 }, /* 3840 */
    [CROP_PRESET_4K_HFPS]       = { 2560, 2560, 2500, 1440, 1200 }, /* 4096 half-FPS */
    [CROP_PRESET_FULLRES_LV]    = { 3870, 3870, 3870, 3870, 3870 }, /* 5796 */

What else could you wish for?


Where's the catch?

This is only a very rough proof of concept. It has not been battle-tested and has many quirks. Some of them may be easy to fix, others not so. In particular:

* It feels quite buggy. I'm still hunting the issues one by one, but it's hard, as Canon's LiveView implementation is very complex, and our understanding of how it works is still very limited.
* Write speeds are high. For example, 10-bit 4096x2500 at 15 fps requires 180 MB/s. 1080p45 should be a little more manageable at 111 MB/s.
* Canon preview is broken in most modes; you need to use the grayscale preview in the raw recording module.
* High-resolution modes (in particular, full-res LiveView) may cause trouble with memory management. This is very tricky to solve, as we only get 3 full-resolution buffers in LiveView, with restrictions on the order in which they must be freed, and lots of other quirks.
* Since these settings were pushed to the limit, the risk of corrupted frames is high. If it happens, decrease the vertical resolution a bit (from the crop_rec submenu).
* When refreshing LiveView settings, the camera might lock up (no idea why). Pressing MENU twice appears to fix it.
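
The write speed figures above are just arithmetic (width x height x bits-per-pixel x fps, before compression), so you can double-check them for any mode:

```python
# Raw data rate before compression, in MiB/s.
def rate_mb_s(width, height, bpp, fps):
    bytes_per_frame = width * height * bpp / 8
    return bytes_per_frame * fps / 2**20

print(round(rate_mb_s(4096, 2500, 10, 15)))   # -> 183 (quoted as ~180 MB/s)
print(round(rate_mb_s(1920, 1080, 10, 45)))   # -> 111 (1080p45)
```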

May I fine-tune the new modes?

Yes! I've included some of the knobs on the user interface. Normally you shouldn't need to touch these buttons, but if you do, you might be able to squeeze a few more pixels.

Does it work with FPS override?

Sort of. It's not reliable at this point, so it's best not to try yet.


During my tests, I didn't manage to get a sensor temperature higher than 60 degrees. Your mileage may vary.


This mod changes some low-level sensor parameters that are not well understood. They were all figured out by trial and error, and there are no guarantees about the safety of these changes.

As usual, if it breaks, it's your fault, sorry.

Will it work on other camera models?

I hope so; however, this is an area where I hope to get contributions from others (yes, from you). If these new features don't motivate you to look into it, I wonder what else will.

I'll explain how all this works in the coming days or weeks.

Is it difficult to port to other camera models?

So far, the 3x3 720p mode from crop_rec was ported to EOS M (rbrune), 700D (dfort) and 100D (nikfreak). So it shouldn't be that hard...

Will you port this to my camera model, please?

No, sorry. I have better things to do - such as, preparing the April 1st prank for next year :)

Wait a minute, didn't you say you are primarily a still photo user? Why are you even doing this?

If you look closely, the usefulness for video is fairly limited, as the write speeds (and therefore the recording times) are not practical.

But the full-resolution LiveView is - in my opinion - very useful for still photo users. Although the current implementation is not very polished (it's just a proof of concept), I hope you'll like the idea of a 7.4 FPS burst mode, 100% silent, without shutter actuations.

Right now, you can take the mlv_lite module with pre-recording and half-shutter trigger: at 10 bits per pixel, you get 5 frames pre-recorded, and saved to card as soon as you touch the half-shutter button. Or, you can capture one frame for each half-shutter press, with negative shutter lag! (since the captured frame will always be pre-recorded).

And if a burst at 7.4 fps is not enough, you may also look at the 4K modes (12-15 fps).

(I know, I know, GH4 already does this, at much higher frame rates...)

The help menu for full-res LiveView says 5796x3870, but MLV Lite only records 5784x3856. What's going on?

The raw recording modules have a couple of alignment constraints: e.g. cropping can only start from a multiple of 8 pixels, and the size of the cropped area (the part that goes into the MLV file) must be a multiple of 16 bytes (that is, W*bpp/8 + H mod 16 must be 0).

To capture the full resolution, you may use the silent picture module. However, this module is not the best when it comes to memory management and buffering. Currently, you'll get an impressive buffer of 2 frames in burst mode :)

But hey - it outputs lossless DNG!

What about that lossless compression routine?

It's included, although I didn't manage to test it much. There is a lot of room for improvement, but for a proof of concept, it seems to work.

update: also got lossless compression at reduced bit depths (8...12-bit).

P.S. The initial announcement was disguised as an April Fools joke, just like the original crop_rec.

Twitter announcement

From original April Fools post:

With our latest achievements in wizardry with ARM programming and DIGIC reverse engineering, we can speak of a new era of raw video recording.

On models like the 5D Mark III, the next upcoming releases will feature an improved version of our crop_rec module that delivers the following new resolutions:

* 1920x960 @ 50p (both 1:1 crop and full-frame - 3x3 pixel binning)
* 1920x800 @ 60p (same as above)
* 1920x1080 @ 45p and 48p (3x3 binning)
* 1920x1920 @ 24p (1:1 square crop)
* 3072x1920 @ 24p (1:1 crop)
* 3840x1600 @ 24p (1:1 crop)
* 4096x2560 @ 12.5p (1:1 crop)
* Full-resolution LiveView: 5796x3870 at 7.4 fps (128ms rolling shutter).

The last feature complements the well-known full-resolution silent pictures - the new implementation will be usable at fast shutter speeds, without the exposure gradient - but with rolling shutter (just like regular LiveView frames).

Please understand that providing the source code for those highly DIGIC optimized routines is a bit troublesome and will need some extra legal care. After this step is taken and as soon we are finished with ensuring the product quality you are used from Magic Lantern, we will upload the code to our repository.

Consider this being a huge leap towards our next mind boggling goal:

8K RAW Video Recording!

Sample DNG from 5D Mark III, to show that our proof of concept is working:


Stay tuned for more information!

So far, if you wanted to write your own module, the best sources of documentation were (and probably still are) reading the source code, the forum, the old wiki, and experimenting. As a template for new modules, you probably took one of the existing modules and removed the extra code.

This is one tiny step to improve upon that: I'd like to write a series of guides on how to write your own modules and how to use various APIs provided by Magic Lantern (some of them tightly related to APIs reverse engineered from Canon firmware, such as properties or file I/O, others less so, such as ML menu).

Will start with the simplest possible module:

Hello, World!

Let's start from scratch:

hg clone -u unified
cd magic-lantern/modules/
mkdir hello
cd hello
touch hello.c

Now edit hello.c in your favorite text editor:

/* A very simple module
 * (example for module authors)
 */
#include <dryos.h>
#include <module.h>
#include <menu.h>
#include <config.h>
#include <console.h>

/* Config variables. They are used for persistent variables (usually settings).
 * In modules, these variables also have to be declared as MODULE_CONFIG.
 */
static CONFIG_INT("hello.counter", hello_counter, 0);

/* This function runs as a new DryOS task, in parallel with everything else.
* Tasks started in this way have priority 0x1A (see run_in_separate_task in menu.c).
* They can be interrupted by other tasks with higher priorities (lower values)
* at any time, or by tasks with equal or lower priorities while this task is waiting
* (msleep, take_semaphore, msg_queue_receive etc).
* Tasks with equal priorities will never interrupt each other outside the
* "waiting" calls (cooperative multitasking).
* Additionally, for tasks started in this way, ML menu will be closed
* and Canon's powersave will be disabled while this task is running.
* Both are done for convenience.
static void hello_task()
    /* Open the console. */
    /* Also wait for background tasks to settle after closing ML menu */

    /* Plain printf goes to console. */
    /* There's very limited stdio support available. */
    printf("Hello, World!\n");
    printf("You have run this demo %d times.\n", ++hello_counter);
    printf("Press the shutter halfway to exit.\n");

    /* note: half-shutter is one of the few keys that can be checked from a regular task */
    /* to hook other keys, you need to use a keypress hook - TBD in hello2 */
    while (!get_halfshutter_pressed())
        /* while waiting for something, we must be nice to other tasks as well and allow them to run */
        /* (this type of waiting is not very power-efficient nor time-accurate, but is simple and works well enough in many cases */

    /* Finished. */

static struct menu_entry hello_menu[] =
        .name       = "Hello, World!",
        .select     = run_in_separate_task,
        .priv       = hello_task,
        .help       = "Prints 'Hello, World!' on the console.",

/* This function is called when the module loads. */
/* All the module init functions are called sequentially,
* in alphabetical order. */
static unsigned int hello_init()
    menu_add("Debug", hello_menu, COUNT(hello_menu));
    return 0;

/* Note: module unloading is not yet supported;
* this function is provided for future use.
static unsigned int hello_deinit()
    return 0;

/* All modules have some metadata, specifying init/deinit functions,
* config variables, event hooks, property handlers etc.


We still need a Makefile; let's copy it from another module:

cp ../ettr/Makefile .
sed -i "s/ettr/hello/" Makefile

Let's compile it:

make
The build process created a file named README.rst. Update it and recompile.

make clean; make

Now you are ready to try your module in your camera. Just copy the .mo file to ML/MODULES on your card.

If your card is already configured for the build system, all you have to do is:

make install

Otherwise, try:

make install CF_CARD=/path/to/your/card

or, if you have such a device:

make install WIFI_SD=y

That's it for today.

To decide what to cover in future episodes, I'm looking for feedback from anyone who tried (or wanted) to write a ML module, whether you were successful or not.

Some ideas:
- printing on the screen (bmp_printf, NotifyBox)
- keypress handlers
- more complex menus
- properties (Canon settings)
- file I/O
- status indicators (lvinfo)
- animations (e.g. games)
- capturing images
- GUI modes (menu, play, LiveView, various dialogs)
- semaphores, message queues
- DryOS internals (memory allocation, task creation etc)
- custom hooks in Canon code
- your ideas?

Of course, the advanced topics are not for the second or third tutorial.
Quote from: dfort on December 30, 2016, 04:34:31 AM
I was experimenting with shooting raw video while simultaneously recording H.264 [...]

[...]it is too much of a hack[...]

Here's an attempt to make it a bit less of a hack:
Currently, focus peaking gives you the option to use two image buffers: the LiveView one (720x480 when used on internal LCD) and the so-called HD one (usually having higher resolution). Of course, the peaking results with the two options are slightly different.

To simplify the code, I'd like to use only the LiveView buffer, like most other overlays.

Is there any reason to use the high-res buffer? In other words, did any of you get better results by using it?
General Development / Thread safety
February 05, 2017, 02:12:43 AM
While refactoring the menu code, I've noticed it became increasingly complex, so evaluating whether it's thread-safe was no longer an easy task (especially after not touching some parts of the code for a long time). The same is true for all other ML code. Not being an expert in multi-threaded software, I started to look for tools that would at least point out some obvious mistakes.

I came across this page, which seems promising, but looks C++ only. This paper appears to be from the same authors (presentation here), and C is mentioned too, so adapting the example is probably doable.

Still, annotation could use some help from a computer. So I found pycparser and wrote a script that recognizes common idioms from ML code (such as TASK_CREATE, PROP_HANDLER, menu definitions) and annotates each function with a comment telling what tasks call this function.

Therefore, if a function is called from more than one task, it must be thread-safe. The script only highlights those functions that are called from more than one task (that is, those that may require attention).

Still, I have a gut feeling that I'm reinventing the wheel. If you know a better way to do this, please chime in.


Note: in DryOS, tasks == threads.
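As a rough illustration of what that annotation script does, here's a regex-based toy version (the real one uses pycparser for a proper parse; the C snippet, macros and function names below are made up for the example):

```python
import re
from collections import defaultdict

# Toy C source: two task entry points, both calling the same helper.
SRC = """
TASK_CREATE("menu_task", menu_task, 0, 0x1a, 0x2000);
TASK_CREATE("shoot_task", shoot_task, 0, 0x1a, 0x2000);
static void menu_task() { redraw(); }
static void shoot_task() { redraw(); msleep(100); }
"""

def tasks_per_function(src):
    # which function is the entry point of which task?
    entries = {m.group(2): m.group(1)
               for m in re.finditer(r'TASK_CREATE\("(\w+)",\s*(\w+)', src)}
    # one level of call-graph: function body -> called identifiers
    called_from = defaultdict(set)
    for m in re.finditer(r'static void (\w+)\(\)\s*\{([^}]*)\}', src):
        name, body = m.group(1), m.group(2)
        task = entries.get(name, name)
        for call in re.findall(r'(\w+)\s*\(', body):
            called_from[call].add(task)
    return called_from

calls = tasks_per_function(SRC)
# functions reachable from more than one task need a thread-safety review
shared = sorted(f for f, t in calls.items() if len(t) > 1)
print(shared)   # ['redraw']
```

The real script follows the call graph recursively and also handles PROP_HANDLER, menu definitions and friends, but the output is the same idea: a list of functions reachable from more than one task.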
General Development / Experiment - Dynamic My Menu
January 31, 2017, 09:51:00 PM
Today I was a bit tired of debugging low-level stuff like Lua tasks or camera-specific quirks, but still wanted to write something cool. So here's something I wanted for a long time. The feedback back then wasn't exactly positive, so it never got implemented, but I was still kinda missing it.

Turns out, it wasn't very hard to implement, so there you have it.

What is it?

You already know the Modified menu (where it shows all settings changed from the default value), and My Menu (where you can select your favorite items manually). This experiment attempts to build some sort of "My Menu" dynamically, based on usage counters.

How does it work?

After a short while of navigating ML menu as you usually do, your most recently used items and also your frequently used items should appear there. As long as you don't have any items defined for My Menu, it will be built dynamically. The new menu will be named "Recent" and will keep the same icon as My Menu.

Every time you click on some menu item, the usage counter for that item is incremented. All the other items will have a "forgetting factor" applied, so the most recently used items will rise to the top of the list fairly quickly.

Clicking the same item over and over will only be counted once (so scrolling through a long list of values won't give extra priority to menu items). Submenu navigation doesn't count; only changing a value or running an action are counted.

Time is discrete (clicks-based). It doesn't care if you use the camera 10 hours a day or a couple of minutes every now and then.

To have both good responsiveness to recent changes and the ability to learn your habits over a longer time, I've tried two usage counters: one for short-term and another for long-term memory. If, let's say, during some day you need to keep toggling a small set of options, it should learn that quickly. But if you no longer need those options after that special day, those menu items will be forgotten quickly, and the ones you use daily (stored in the "long term memory") should be back soon.

So, the only difference between the "long term" and the "short term" counters is the forgetting factor: 0.999 vs 0.9. In other words, the "long term" counters have more inertia.

When deciding whether a menu item is displayed, the max value between the two counters is used, resulting in a list of the "top 11 most recently or frequently used menus". The small gray bars in the menu are the usage counters (debug info).
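The counter logic described above can be sketched like this (a minimal model; it ignores the same-item deduplication and the persistence details, and uses the parameter values from the text):

```python
LONG_TERM_DECAY  = 0.999   # forgetting factors from the text
SHORT_TERM_DECAY = 0.9

class UsageCounters:
    def __init__(self):
        self.long_term = {}    # menu item -> usage counter
        self.short_term = {}

    def click(self, item):
        # every click decays all counters, then bumps the clicked item
        for counters, decay in ((self.long_term, LONG_TERM_DECAY),
                                (self.short_term, SHORT_TERM_DECAY)):
            for k in counters:
                counters[k] *= decay
            counters[item] = counters.get(item, 0) + 1

    def top(self, n=11):
        # rank by the max of the two counters
        items = set(self.long_term) | set(self.short_term)
        score = lambda k: max(self.long_term.get(k, 0),
                              self.short_term.get(k, 0))
        return sorted(items, key=score, reverse=True)[:n]

menus = UsageCounters()
for _ in range(50): menus.click("Focus Peaking")     # daily habit
for _ in range(5):  menus.click("Intervalometer")    # today's special task
print(menus.top(2))
```

The long-term counter keeps "Focus Peaking" on top, while "Intervalometer" appears right away thanks to the fast-reacting short-term counter, and fades again once you stop using it.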

I have no idea how well this works in practice - it's something I came up with a few hours ago, and the tuning parameters are pretty much arbitrary.

Source code committed, and if there is interest, I can prepare an experimental build as well.
General Development / Touch-friendly ML menu
January 06, 2017, 07:02:44 PM
Some experiments I did last summer on a 700D (which I no longer have).

I remember it worked to some extent, but had some quirks. Don't remember the exact details, but I hope it could be useful (or at least fun to tinker with).
General Chat / Script for undeleting CR2 files
January 01, 2017, 09:17:31 PM
Looks like my 5D3 decided to reuse the file counters on two different cards. When sorting some photos, one CR2 just got overwritten by another image with the same name (by mistake).

How to undelete it?

Testdisk's undelete tool didn't help (the file wasn't deleted, but overwritten). PhotoRec would have probably worked, given enough time, extra HDD space and patience to sort through the output files (not practical). I found a guide using debugfs, which didn't seem to work (too much low-level stuff I wasn't familiar with), and this article seemed promising. I knew a pretty tight time interval for the missing file (a couple of seconds, from previous and next file in the set), so I wrote a quick Python script to scan the raw filesystem for CR2 files with the EXIF date/time in a given range.

It worked for me.

It's all hardcoded for my system, but should be easy to adjust for other use cases.

# CR2 recovery script
# Scans the entire partition for CR2 files between two given timestamps,
# assuming they are stored in contiguous sectors on the filesystem.
# Hardcoded for 5D Mark III.

import os, sys, re
from datetime import datetime

d0 = datetime.strptime("2016:06:10 17:31:36", '%Y:%m:%d %H:%M:%S')
d1 = datetime.strptime("2016:06:10 17:31:42", '%Y:%m:%d %H:%M:%S')

f = open('/dev/sda3', 'r')

nmax = 600*1024
for k in xrange(nmax):
    p = k * 100.0 / nmax
    f.seek(1024*1024*k)
    block = f.read(1024*1024)
    if "EOS 5D Mark III" in block:
        i = block.index("EOS 5D Mark III")
        print k, hex(i), p
        b = block[i : i+0x100]
        date_str = b[42:61]
        try: date = datetime.strptime(date_str, '%Y:%m:%d %H:%M:%S')
        except: continue
        if date >= d0 and date <= d1:
            print date
            out = open("%X.CR2" % k, "w")
            f.seek(1024*1024*k + i - 0x100)
            out.write(f.read(50*1024*1024))  # grab a generous chunk; trim the CR2 afterwards
            out.close()
Reverse Engineering / ProcessTwoInTwoOutLosslessPath
December 18, 2016, 09:06:41 PM
Managed to call ProcessTwoInTwoOutLosslessPath, which appears to perform the compression for RAW, MRAW and SRAW formats. The output looks like some sort of lossless JPEG, but we don't know how to decode it yet (this should help).

Proof of concept code (photo mode only for now):
Reverse Engineering / lv_set_raw / lv_select_raw
December 11, 2016, 10:16:59 AM
These are used for selecting the LiveView raw stream (aka "raw type", see PREFERRED_RAW_TYPE in raw.c).

There is a function that gives some more information about these modes: lv_select_raw in 70D, 80D, 750D/760D, 5D4 and 7D2. The debug strings also reference the PACK32 module (which is something that can write a raw image to memory), so probably this setting connects the input of PACK32 to various image processing modules from Canon's pipeline.

Some related pages: Register_Map, EekoAddRawPath, raw_twk, 12-bit raw video, mv1080 on EOSM...

The names appear to match between DIGIC 5 and 6 cameras, so here's a summary of the LV raw modes:

      5D4           80D              760D 7D2M 70D 700D     100D 5D3
0x00: DSUNPACK                                                     
0x01: UNPACK24                                                     
0x02: ADUNPACK                                                     
0x03: DARKSUB       <-               <-   <-   <-                   
0x04: SHADING       <-               <-   <-   <-  SHADE    <-   <-
0x05: ADDSUB        TWOLINEADDSUB    <-   <-   <-                   
0x06: DEFC          <-               <-   <-   <-                   
0x07: DFMKII        DEFMARK          <-   <-   <-  <-       <-     
0x08: HIVSHD        <-               <-   <-   <-  <-       <-   <-
0x09: SMI           <-               <-   <-   <-                   
0x0a: PEPPER_CFIL   <-               <-   <-   <-                   
0x0b: ORBIT         <-               <-   <-   <-  <-       <-   <-
0x0c: TASSEN        <-               <-   <-   <-                   
0x0d: PREWIN1       PEPPER_WIN       <-   <-   <-                   
0x0e: RSHD          <-               <-   <-   <-  <-       <-   <-
0x0f: BEATON        <-               <-   <-   <-                   
0x10: HEAD          <-               <-   <-   <-  CCD      <-   <-
0x11: AFY           <-               <-   <-   <-                   
0x12: DEFOE         <-               <-   <-   <-  DEFCORRE <-   <-
0x13: ORBBEN        <-               <-   <-   <-                   
0x14: PEPPER_DOUBLE                                                 
0x15: JUSMI         <-               <-   <-   <-                   
0x16: SUSIE         <-               <-   <-   <-                   
0x17: KIDS          <-               <-   <-   <-                   
0x18: CHOFF         <-               <-   <-   <-                   
0x19: CHGAIN        <-               <-   <-   <-                   
0x1a: CAMPOFF       <-               <-   <-   <-                   
0x1b: CAMPGAIN      <-               <-   <-   <-                   
0x1c: DEGEEN1       <-               <-   <-   <-  DEGEEN   <-     
0x1d: DEGEEN2       <-               <-   <-   <-                   
0x1e: YOSSIE        <-               <-   <-   <-                   
0x1f: FURICORE      <-               <-   <-   <-                   
0x20: EXPUNPACK                                                     
0x21: SUBUNPACK                                                     
0x22: PREFIFO       INVALID,PRE_FIFO <-   <-   <-                   
0x23: SAFARI_IN     <-               <-   <-   <-                   
0x24: DPCM_DEC      <-               <-   <-   <-                   
0x25: MIRACLE                                                       
0x26: FRISK                                                         
0x27: CLEUR         <-                                             
0x28: OTHERS                                                       
0x29: SHREK         SHREK_IN         <-   <-   <-                   
0x2a: DITHER                                                       
0x2b: DFMKII2       DEFMARKII2       <-   <-                       
0x2c: PREWIN2       PEPPER_WIN2      <-   <-                       
0x2d: CDM           <-               <-   <-                       
0x2e: LTKIDS_IN     <-               <-   <-                       
0x2f: PREWIN3       PEPPER_WIN3      <-   <-                       
0x30: SIMPPY                                                       
0x31: PEPPER_DIV_A                                                 
0x32: PEPPER_DIV_B                                                 
0x33: SUBSB_OUT                                                     
0x34: SIBORE_IN                                                     
0x35: PEPPER_DIV                                                   

DIGIC 4 has a different mapping. 60D:

RSHD     => 0x0B
SHADE    => 0x01
HIVSHD   => 0x07
ORBIT    => 0x09
DEFCORRE => 0x04
CCD      => 0x05 (currently used)
DEFMARK  => 0x06

There are more valid raw types than the ones named in the above tables. For example, on 5D3 (trial and error):

0x00 => valid image stream in some unknown format
0x01 => bad
0x02 => scaled by digital ISO (DEFCORRE?)
0x03 => bad
0x04 => SHADE (bad pixels, scaled by digital ISO)
0x05 => bad
0x06 => bad
0x07 => DEFMARK (bad pixels)
0x08 => HIVSHD (bad pixels, appears to fix some vertical stripes)
0x09 => bad
0x0A => bad
0x0B => bad
0x0C => bad
0x0D => bad
0x0E => RSHD (bad pixels, scaled by digital ISO)
0x0F => bad

0x10 => CCD (clean image, some vertical stripes in certain cases)
0x11 => bad
0x12 => DEFCORRE (scaled by digital ISO)
0x13 => bad
0x14 => valid image stream in some unknown format (different from 0)
0x15 => bad
0x16 => bad
0x17 => bad pixels
0x18 => bad
0x19 => bad
0x1A => bad
0x1B => bad
0x1C => bad pixels
0x1D => bad
0x1E => bad pixels
0x1F => bad

0x20 => valid image stream in some compressed format?
0x21 => bad
0x22 => clean image
0x23 => bad
0x24 => bad
0x25 => bad
0x26 => bad
0x27 => bad pixels
0x28 => valid image stream in some compressed format?
0x29 => bad
0x2A => some strange column artifacts
0x2B => bad
0x2C => bad
0x2D => bad
0x2E => some strange posterization
0x2F => bad

0x30 => valid image stream in some compressed format?
0x31 => bad
0x32 => clean image
0x33 => bad
0x34 => valid image stream in some unknown format
0x35 => bad
0x36 => bad
0x37 => bad pixels
0x38 => valid image stream with some missing columns?!
0x39 => same
0x3A => clean image
0x3B => bad
0x3C => bad pixels, strange column artifacts (like 0x2A, but with bad pixels)
0x3D => bad
0x3E => posterization (same as 46)
0x3F => bad

0x40 - 0x7F => same as 0x00 - 0x3F (checked most good modes and some bad modes)

On 5D3 and 60D, the raw type "CCD" is the one we are using for raw video.

On EOS M, the only valid raw types appear to be 7, 11, 48, 50, 75, 80, 87 according to dfort.

It would be nice if somebody had the patience to try all the raw types on the 70D, as it's currently the only camera that runs ML and has lv_select_raw.
Reverse Engineering / EDMAC internals
November 26, 2016, 01:28:55 PM
Until now, we didn't know much about how to configure the EDMAC. Recently we did some experiments that cleared up a large part of the mystery.

Will start with size parameters. They are labeled xa, xb, xn, ya, yb, yn, off1a, off1b, off2a, off2b, off3 (from debug strings). Their meaning was largely unknown, and so far we only used the following configuration:

xb = width
yb = height-1
off1b = padding after each line

Let's start with the simplest configuration (memcpy):

xb = size in bytes. 

Unfortunately, it doesn't work - the image height must be at least 2.

Simplest WxH

How Canon code sets it up:

  CalculateEDmacOffset(edmac_info, 720*480, 480):
     xb=0x1e0, yb=0x2cf

Transfer model (what the EDMAC does, in a compact notation):

xb * (yb+1)        (a line of xb bytes, repeated yb+1 times)

WxH + padding (skip after each line)

(xb, skip off1b) * (yb+1)

Note: skipping does not change the contents of the memory,
so the above is pretty much the same as:

(xb, skip off1b) * yb
followed by xb (without skip)

xa, xb, xn (usual raw buffer configuration)

xa = xb = width
xn = height-1

To see what xa and xn do, let's look at some more examples (how Canon code configures them):

  edmac_setup_size(ch, 0x1000000):
    xn=0x1000, xa=0x1000

  edmac_setup_size(6, 76800):
    xa=0x1000, xb=0xC00, xn=0x12

  CalculateEDmacOffset(edmac_info, 0x100000, 0x20):
    xa=0x20, xb=0x20, yb=0xfff, xn=0x7

  CalculateEDmacOffset(edmac_info, 1920*1080, 240):
    xa=0xf0, xb=0xf0, yb=0xb3f, xn=0x2

The above can be explained by a transfer model like this:

(xa * xn + xb) * (yb+1)

Adding ya, yn (to xa, xb, xn, yb)

Some experiments (trial and error, 5D3):

  xa = 3276, xb = 1638, xn = 1055                  => 3276*1055 + 1638 transferred
  xa = 3276, xb = 32,   xn = 1055                  => 3276*1055 + 32
  xa = 3276, xb = 0,    xn = 1055                  => 3276*1056 - 20 (?!)
  xa = 3276, xb = 3276, xn = 95, yb = 10           => 3276*96*11
  xa = 3276, xb = 3276, xn = 95, yb = 7,  yn = 3   => 3276*96*11
  xa = 3276, xb = 3276, xn = 10, yb = 62, yn = 33  => 3276*11*96
  xa = 3276, xb = 3276, xn = 10, yb=3, yn=5, ya=2  => 3276*11*19
  xa = 3276, xb = 3276, xn = 10, yb=5, yn=3, ya=6  => 3276*11*27
  xa = 3276, xb = 3276, xn = 10, yb=5, yn=3, ya=7  => 3276*11*30
  xa = 3276, xb = 3276, xn = 10, yb=7, yn=8, ya=9  => 3276*11*88
  xa = 3276, xb = 3276, xn = 10, yb=8, yn=3, ya=28 => 3276*11*96

(xa * xn + xb) REP (yn REP ya + yb)

Here, a REP b means 'perform a, repeat b times' => a * (b+1).

So far, so good, the above model appears to explain the behavior
when there are no offsets, and looks pretty simple.
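To double-check, the offset-free model can be written down in executable form and tested against the measurements above (recall a REP b = a * (b+1)):

```python
def edmac_total(xa, xb, xn, ya=0, yb=0, yn=0):
    # (xa * xn + xb) REP (yn REP ya + yb), with a REP b = a * (b+1)
    rows = yn * (ya + 1) + yb
    return (xa * xn + xb) * (rows + 1)

# measurements from the experiments above (the xb=0 quirk is excluded)
assert edmac_total(3276, 1638, 1055)                  == 3276*1055 + 1638
assert edmac_total(3276,   32, 1055)                  == 3276*1055 + 32
assert edmac_total(3276, 3276, 95, yb=10)             == 3276*96*11
assert edmac_total(3276, 3276, 95, yb=7,  yn=3)       == 3276*96*11
assert edmac_total(3276, 3276, 10, yb=62, yn=33)      == 3276*11*96
assert edmac_total(3276, 3276, 10, yb=3, yn=5, ya=2)  == 3276*11*19
assert edmac_total(3276, 3276, 10, yb=5, yn=3, ya=6)  == 3276*11*27
assert edmac_total(3276, 3276, 10, yb=5, yn=3, ya=7)  == 3276*11*30
assert edmac_total(3276, 3276, 10, yb=7, yn=8, ya=9)  == 3276*11*88
assert edmac_total(3276, 3276, 10, yb=8, yn=3, ya=28) == 3276*11*96
print("all transfer sizes match")
```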

There is a quirk: if xb = 0, the behavior looks strange.
Let's ignore it for now.

Adding off1b (to xa, xb, xn, ya, yb, yn)

What do we do about the offset off1b?


xa = 3276, xb = 3276, xn = 10, yb=95, off1b=100
=> copied 3276*10*96 + 3276, skipped 100,
   (CP 3276, SK 100) repeated 94 times (95 runs).

It copies a large block, then it starts skipping after each line.
Let's decompose our model and reorder the terms.
Then, let's skip off1b after each xb.

(xa * xn)        REP (yn REP ya + yb)
(xb, skip off1b) REP (yn REP ya + yb)

Let's check a more complex scenario:

xa = 3276, xb = 3276, xn = 10, yb=8, yn=3, ya=28, off1b=100
=> (CP 3276*10*29 + 3276,   SK 100), (CP 3276, SK 100) * 27,
   (CP 3276*10*29 + 3276*2, SK 100), (CP 3276, SK 100) * 27,
   (CP 3276*10*29 + 3276*2, SK 100), (CP 3276, SK 100) * 27,
   (CP 3276*10*9  + 3276*2, SK 100), (CP 3276, SK 100) * 8.

There's some big operation that appears repeated 3 times (yn),
although the copied block sizes are a little inconsistent (first is smaller).

After that, (xa * xn) is executed 9 times (yb+1).
At the end, (xb, skip off1b) is executed 9 times (also yb+1).

In the big operation, the 29 is clearly ya+1.

What if off1b is skipped after all xb iterations, but not the last one?
This could explain why we have an extra 3276 (the *2) on the last 3 log lines.

Regroup the terms like this:

  => ((CP 3276*10*29), (CP 3276, SK 100) * 28, CP 3276) * 3,
      (CP 3276*10*9 ), (CP 3276, SK 100) * 9.

Our model starts to look like this:

   (xa * xn)   (ya+1)
   (xb, skip off1b) *  ya
    xb without skip
  * yn

followed by:

   (xa * xn)   (yb+1)
   (xb, skip off1b) * (yb+1)

So far so good, it's a bit more complex,
but explains all the above observations.
Of course, the last line may be as well:

  (xb, skip off1b) * yb, xb without skip

Adding off1a

Let's try another offset: off1a = 44.
The log from this experiment is pretty long, so I'll simplify it by regrouping the terms.

xa = 3276, xb = 3276, xn = 10, yb=8, yn=3, ya=28, off1a=44, off1b=100
=> (
     ((CP 3276, SK 44)  * 28, CP 3276) * 10,
     ((CP 3276, SK 100) * 28, CP 3276),
   ) * 3,
     ((CP 3276, SK 44)  * 8, CP 3276) * 10,
     ((CP 3276, SK 100) * 8, CP 3276)

This gives good hints about what is happening when:

   ((xa, skip off1a) * ya, xa) * xn
    (xb, skip off1b) * ya, xb
) * yn,

   ((xa, skip off1a) * yb, xa) * xn
    (xb, skip off1b) * yb, xb

Adding the remaining offsets (all parameters are now used)

Let's add off2a, off2b and off3. They are pretty obvious now, so I'll skip the log file (which looks quite intimidating anyway).

   ((xa, skip off1a) * ya, xa, skip off2a) * xn
    (xb, skip off1b) * ya, xb,
     skip off3
) * yn,

   ((xa, skip off1a) * yb, xa, skip off2b) * xn
    (xb, skip off1b) * yb, xb

So, there is a pattern: perform N iterations with some settings, then perform the last iteration with slightly different parameters. The pattern repeats at all iteration levels (somewhat like fractals).

Just by looking at the memory contents, we can't tell what skip value is used for the very last iteration. However, by reading the memory address register (0x08) directly from hardware (not from the shadow memory), we can get the end address (after the EDMAC transfer has finished). For a write transfer, this includes the transferred data and also the skip offsets. Now it's straightforward to notice that the last offset is off3, so our final model for the EDMAC becomes:

EDMAC transfer model

   ((xa, skip off1a) * ya, xa, skip off2a) * xn
    (xb, skip off1b) * ya, xb, skip off3
) * yn,

   ((xa, skip off1a) * yb, xa, skip off2b) * xn
    (xb, skip off1b) * yb, xb, skip off3

The offset labels now start to make sense :)

C code (used in qemu):

for (int jn = 0; jn <= yn; jn++)
{
    int y     = (jn < yn) ? ya    : yb;
    int off2  = (jn < yn) ? off2a : off2b;
    for (int in = 0; in <= xn; in++)
    {
        int x     = (in < xn) ? xa    : xb;
        int off1  = (in < xn) ? off1a : off1b;
        int off23 = (in < xn) ? off2  : off3;
        for (int j = 0; j <= y; j++)
        {
            int off = (j < y) ? off1 : off23;
            cpu_physical_memory_write(dst, src, x);
            src += x;
            dst += x + off;
        }
    }
}

The above model is for write operations. For read, the skip offsets are applied to the source buffer - that's the only difference.

Offsets can be positive or negative. In particular, off1a and off1b only use 17 bits (digic 3 and 4) or 19 bits (digic 5), so we have to extend the sign.
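A small helper for that sign extension (17 bits on DIGIC 3/4, 19 bits on DIGIC 5, as stated above; the example values are made up):

```python
def sext(value, bits):
    # interpret the low `bits` bits of `value` as a signed integer
    mask = (1 << bits) - 1
    value &= mask
    if value & (1 << (bits - 1)):
        value -= (1 << bits)
    return value

print(sext(0x1FF9C, 17))   # -100 (a negative line skip, DIGIC 3/4)
print(sext(0x7FF9C, 19))   # -100 (same skip, DIGIC 5 encoding)
print(sext(0x00064, 17))   # 100
```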

The above model explained all the combinations that are not edge cases (such as yb=0 or odd values). Here are the tests I've ran: 5D3 vs QEMU.

For more details, please have a look at the "edmac" and "qemu" branches.

To be continued.
Jenkins is overloading the server too much for my taste lately, so I'm considering rewriting the nightly builds page as static HTML, without any JavaScript. Another reason for the rewrite: the builds page is impossible to load on slow network connections.

Any volunteers to help me with this task? I'm going to use a Python script similar to this one, and here's what I came up with so far (edited manually, based on the previous template):

Here's a proof of concept Python code to retrieve Jenkins build data, using JenkinsAPI:

from jenkinsapi.jenkins import Jenkins
J = Jenkins('')
B = J['500D.111'].get_last_good_build()
artifact = list(B.get_artifacts())[0]
print artifact.url

Feedback is also welcome (while I'm at it). There's some extra functionality I'd like to add, too.
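The static page itself could then be rendered with plain string templating, no JavaScript needed - a minimal sketch with made-up build entries (the real template, camera list and layout are to be decided):

```python
# build data as retrieved via JenkinsAPI (hypothetical camera/date/URL values)
builds = [
    {"camera": "500D.111", "date": "2016-10-01", "url": "magiclantern-500D.zip"},
    {"camera": "60D.111",  "date": "2016-10-01", "url": "magiclantern-60D.zip"},
]

ROW = '<tr><td>{camera}</td><td>{date}</td><td><a href="{url}">download</a></td></tr>'

html = "<table>\n" + "\n".join(ROW.format(**b) for b in builds) + "\n</table>"
print(html)
```

The output is a plain HTML fragment that can be written to disk by the nightly job, so the web server only has to serve static files.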
Modules Development / Burst mode tweaks (
September 15, 2016, 08:16:34 PM
A while ago I had a fairly strange problem: I was taking pictures with a manual 200mm lens, and had trouble keeping the subject in the frame. Why? Because, during a burst sequence, the display is turned off. I was focusing manually from LiveView, so couldn't look through the viewfinder.

So, here's a module that implements this tweak: during a burst sequence, it shows a live preview of the captured images. RAW only.

Also included a tweak that limits the number of pictures in a burst sequence (for example, if you want to take 2 pictures on a single shutter press). I'm not sure where this could be useful, but was simple enough to write.

I wrote this about one or two months ago, but didn't get the opportunity to battle-test it yet.


If it works fine on most models and people find it useful, I'll include it in the nightly.
Topic split from

Old discussion about vertical stripes:

With crop_rec, these stripes also appear in H.264. They can be fixed in post, but it would be best if we could avoid them in the first place.

Vertical stripe fix for H.264:


Note: the stripes we are talking about in this thread are visible in highlights (e.g. sky at low ISO).

Quote from: dmilligan on August 17, 2016, 12:52:45 AM
Is the vertical banding present in the highlights or shadows?

If the banding is in the highlights, darkframe subtraction won't help. Banding in the highlights is a sign of a multiplicative defect (the column gains are off slightly, which is fixed by multiplying pixels values by some correction factor).

If the banding is in the shadows, the "vertical stripe fix" won't help. Banding in the shadows is a sign of an additive defect (there is some linear offset, which is fixed by simply subtracting some correction value from the pixel values).
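To see why the fix type must match the defect type, here's a toy numeric example (made-up pixel values; a hypothetical 1% column gain error):

```python
# one image row at three brightness levels; odd columns read out
# through a channel whose gain is 1% too high (multiplicative defect)
row = [1000, 1010, 2000, 2020, 4000, 4040]

# multiplicative fix: divide the affected columns by the gain error
# => stripes disappear at every brightness level
gain_fixed = [p / 1.01 if i % 2 else p for i, p in enumerate(row)]

# additive "fix" tuned for the darkest pair (subtract 10): only works
# at that brightness; the highlights stay striped
offset_fixed = [p - 10 if i % 2 else p for i, p in enumerate(row)]

print(gain_fixed)
print(offset_fixed)   # [1000, 1000, 2000, 2010, 4000, 4030] - still striped in highlights
```

Note how the multiplicative error grows with brightness (+10, +20, +40), which is exactly why it shows up in the highlights and why a constant subtraction can't remove it.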

Original message:
I'd say let's review the vertical stripe fix on 1.1.3, and if it's everything alright, I'll include it in the main builds (and fix the 1.2.3 crop build) soon.
Since I'm a bit stuck with DIGIC 6, I took the cpuinfo module from CHDK and integrated it with the portable display code. This should give detailed info about the hardware (CPU, caches, memory configuration and so on).

Besides the DIGIC 6 cameras, I'm also interested in the results from recent ports (70D, 100D, 1200D); tests from other cameras are also welcome, but they are mostly for fun.

Source code: the recovery branch

Binaries (last update Jan14, 2019):
- AUTOEXEC.BIN - portable, for all cameras with boot flag already enabled
- FIR files: TODO.

This code is pretty verbose - it will show a few pages of low-level info.

You will need to take screenshots to be able to read all that stuff.

As I'm on a very slow network connection, please do not upload large screenshots. If possible, it would be best if you could write down the info as plain text. If not, please try to keep the image size small (under 50K each).

edit: updated to save this info to a log file.
Reverse Engineering / MPU communication
July 22, 2016, 11:26:59 AM
There was some progress understanding the communication between the main CPU and the MPU (a secondary CPU that controls buttons, lens communication, shutter actuation, viewfinder and others), so I think it's time to open a new thread.


* QEMU docs: MPU Communication
* Code to dump MPU firmware: modules/mpu_dump
* NikonHacker emulator for TX19A:
* Communication protocol emulated in QEMU: qemu/eos/mpu.c
* How to log the MPU messages: [1] [2] [3] (you can use this build)
* Early discussion regarding button interrupt:
* Button codes in QEMU:
* First trick implemented using a MPU message:

Quote from: Greg on July 21, 2016, 03:22:31 PM
500D, LV
mpu_send(06 04 09 00 00)
mpu_recv(3c 3a 09 00 3c 3c e0 00 3f 80 00 00 38 12 c0 00 b9 cb c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 08 08 11 10 50 49 02 59 88 88 00 32 00 00 00 00 00 01 00 00 00)

0x32 - focal length
0x10 - aperture

Now we can read lens_info in Photo mode.
Just call mpu_send(06 04 09 00 00). CPU receives data and automatically overwrite property lens_info.

Topic split from MLV Lite thread.

I'm trying to squeeze the last bit of performance out of MLV Lite, to make it as fast as (or maybe faster than) the original raw_rec. You can help by testing different builds and checking which one is faster.

Original message:

I made a small experiment with mlv_lite.

In dmilligan's implementation, frames are 4K larger than with the original raw_rec (because of the MLV headers stored for each frame, and also because of alignment issues). In my version, frame sizes are identical to the original raw_rec.

What I'd like to know: is there any noticeable speed difference between the two versions? How does it compare to the old raw_rec?

My build:
to be compared with the one from first post, and to the old raw_rec.

Caveat: my version does not handle 4GB limit well; I'll fix it only if the speed difference is noticeable.
I've just got this card hoping it will reduce wear and tear on my cards and card readers. As you can imagine, constantly swapping the card between camera and PC is not pleasant for any of the devices involved (card, card reader and card slot from the camera - they all suffer).

Short review: (my model is W-03 8GB)

- SLOW (1 minute for downloading an 18MB file...)
- quite hard to set up (it took me about 3 hours from unpacking to being able to copy files on it)
- some things just don't seem to work (such as internet passthrough, or manually enabling/disabling wifi)
- formatting the card removes wifi functionality (!)
+ it has a nice logo with Toshiba printed on it :D
+ it has documentation, developer forum, Lua scripting, all sorts of bells and whistles (too bad the basics aren't working well...)

Side note: a while ago g3gg0 got a Transcend wifi card, and he mentioned it's very slow, so I hoped this one would be better. Looks like it isn't.

How to set up on Linux

So, to save you from hours of fiddling, here's a short guide on how to get it working on Linux, to the point of being able to run "make install" - that is, to upload ML on it without any cables:

- make a backup copy of the files from the card, in particular:
     - DCIM/100__TSB/FA000001.JPG
     - SD_WLAN/*
     - GUPIXINF/*/*
- format card (from camera or from any other device)
- put the files you just backed up, back on the card (to restore wifi functionality)

Network setup (assuming you have a wireless router):
- put this in the config file:

APPSSID=<your network name>
APPNETWORKKEY=<your network password>

- power-cycle the card, so it will connect to your router, just like any other device on the network
- configure your router so it always assigns the same IP address to the card
- check the new ip (ping the card)

Uploading ML:

See below.

Old instructions:
- mount the card as WebDAV in your favorite file browser (on my system: dav://
- find out where that location is mounted, e.g. by opening some file in a text editor and checking its path
- on my system, that path is: /run/user/1000/gvfs/dav:host=,ssl=false
- put this in Makefile.user (in the directory containing ML source code)

UMOUNT=gvfs-mount -u dav:// && echo unmounted

- make sure the camera is not writing to card (important!)
- run "make install" from the platform/camera directory.
- restart the camera by opening the battery door (important! this ensures the camera will no longer write to the card)
- make the card bootable if it isn't already (for example, copy ML-SETUP.FIR manually on the card and run Firmware Update)
- reboot the camera to start the latest ML you just uploaded.

"Unbricking" the card - if you have formatted it by mistake without a backup

- download Toshiba FlashAir utility from here
- install it (installation works under Wine, the utility doesn't)
- find the files you should have copied before formatting, here: c:/Program Files (x86)/TOSHIBA/FlashAirTool/default/W-02
- these files appear to work with W-03 as well; just copy them to the card, then reconfigure it

(note: I do have a Windows XP box, but couldn't manage to install that utility on it; I didn't try too hard - gave up after 15 minutes or so)


- make qinstall (to only upload autoexec and maybe the modules you are working on - did I mention it's SLOW?)
- [DONE] restore Toshiba files after format (so you don't lose wifi capabilities when formatting the card from the camera)
During the last few weeks I have finally managed to sit down and implement the 3x crop mode discovered by Greg a while ago, and summarized here. This feature could be very useful for wildlife, astro, or just for bragging on the forums about how cool your camera is :)

How does it work?

It modifies Canon's 1080p and 720p video modes by altering the sensor area that is going to be captured. Resolution and nearly all other parameters are left unchanged.

That means:
- it works with both H.264 and RAW
- works at all usual frame rates: 24/25/30/50/60 fps (with some quirks at high FPS)
- preview, sound, overlays, HDMI out... most of the stuff should just work as expected.


update: here's one from kgv5

Not yet, but I have a feeling DeafEyeJedi is already on it :) scroll down :)


On 5D3 (other cameras may be different, we'll see):

- framing almost centered (only roughly checked by zooming on a test subject on the camera screen)
- 720p aspect ratio:
   - at 720p (50/60fps), we are sampling the sensor at 1:1 crop, but Canon uses a 5x3 pixel binning
   - that means, H.264 video will be squashed - resize the video in post to 1280x432 or 1920x648
   - however, raw video will have 1:1 pixel ratio (not squashed, just very wide - up to 1920x632)
- there is a small black border at the top of the frame, if you record at max resolution in RAW
- it may have side effects such as sensor overheating, camera exploding or displaying BSODs.

As usual - if it breaks, you get to keep both pieces.


Current implementation only works on 5D3, and I've tested it only on 1.1.3. The module is not yet compatible with current nightlies, so you need a full package (not just the module).

As you can see if you scroll down, it is possible to port this on many other cameras. It's just not very straightforward. But, on the bright side, Maqs is already eager to port it to 6D, and I'm sure others will follow.

Note that 600D and 70D already have this feature from Canon, and 650D, 700D and EOS M already have it in ML with a little hack. All other cameras could already use the crop mode when recording RAW from the 5x zoom view, but with some quirks (mainly bad preview and off-center image). So, this is nothing really new - maybe just a little more usable.

Why is a separate build needed? Because this module uses an experimental patching library, which seemed to work fine while I wrote the code - but as soon as I took it outside (about one month ago), it crashed almost every time I used ETTR + Dual ISO. I've fixed the bug since then, but you can imagine you don't want this level of "stability" in the nightly builds.

However, this library paves the way to implementing the long-awaited ISO tweaks (with real ISOs lower than 100, including a small dynamic range boost). I've also used this library as a backend for low-level tweaks such as choosing FAT32 or exFAT when formatting a card from the camera. So, let's test it and iron out all the quirks!


- source code
- 5D3 1.1.3: (build log)
- 5D3 1.2.3: (build log) (confirmed by Hans_Punk)
- other cameras: hopefully coming soon


- port it to other cameras
- merge into nightly builds


- grab and from the ISO research thread, then:
- try to understand what those registers do, and which ones need to be changed to achieve various effects
- check black bars with raw_diag, option OB zones (trigger with long half-shutter press in LV)
- optional: check DR, SNR curve, full well and read noise with raw_diag, option "SNR curve (2 shots)", trigger with "Dummy Bracket"
- take your time to read and experiment; it's very time-consuming, but once you get the hang of it, be careful - it's addictive.

Porting checklist

- clean image (without weird artifacts)
- clean turning on and off, in all the supported video modes
- clean switch to/from other modes (5x/10x zoom, other video modes, photo mode - these should not be affected)
- black bars should be larger than or equal to the values assumed in raw.c (check with raw_diag OB zones)
- centered image: put the focus box in the center of the image and zoom in; the subject should not move
- menu: if there is any mode where the patch is not working, it should print a warning


Greg - original findings on 500D
Levas - for finding the equivalent registers for 6D
mothaibaphoto - for finding the 5D3 register values for 30/50/60 fps
Maqs - for the lightweight code hooks used in the backend
g3gg0 - for laying out the foundation about ADTG registers, ENGIO registers and other low-level stuff that tends to be forgotten once it's up and running.
General Development / Portable ROM dumper
January 25, 2016, 09:29:53 AM
This is a small program that saves the contents of your camera's ROM on the card.
It won't modify your camera - at least, not intentionally :)

This is *NOT* a Magic Lantern build

Latest download: autoexec.bin (2020Aug17, updated for EOS R/RP/90D/250D - only after enabling bootflag via UART)

DIGIC 4+:  1300D  2000D  4000D
DIGIC 6:  5D4  750D  760D  80D
DIGIC 7:  200D  6D2  77D  800D
DIGIC 8:  EOSR EOSRP 250D 90D M6 II G7X III M50  SX70  SX740
Master/Slave:  5DS  5DSR  7D2 7D
Oldies:  1000D  30D  400D  40D  450D  5D

- green = confirmed working (either the last version, or a slightly older one)
- blue = not tested, but likely to work (based on other similar models, or on previous tests)
- purple = not tested, there may be surprises, but fixable (based on previous tests)
- orange = not tested, but unlikely to work (based on previous failures)
- red = not working, no idea how to fix

Supported cameras:
- tested in QEMU: 5D, 5D2, 5D3, 5D4, 6D, 6D2, 7D, 7D2, 40D, 50D, 60D, 70D, 77D, 80D, 400D, 450D, 500D, 550D, 600D, 650D, 700D, 750D, 760D, 800D, 100D, 200D, 1000D, 1100D, 1200D, 1300D, EOSM, EOSM2, M50, SX70.
- other models from the same generation may work, too (see the FIR list for models not yet running ML).

Not supported:
- cameras running PowerShot firmware (including EOS M3, M5, M6, M10, M100)

Requirements:
- a memory card formatted as FAT12/16/32 (exFAT won't work)
- for autoexec.bin: boot flag enabled in the camera (e.g. ML already installed) + bootable card (EOSCard, MacBoot, QEMU image)
- FIR versions do not require any boot flags (just place on the card and run Firmware Update)
- check MD5 checksums after dumping (important!)

Source code:
- recovery branch
- compiled from platform/portable.000 with CONFIG_BOOT_FULLFAT=y, CONFIG_BOOT_DUMPER=y and CONFIG_BOOT_SROM_DUMPER=y

Old limitations (for 2018 and earlier dumpers only):


- a very small SD card or filesystem (important!)
- no important files on the card (these routines are buggy and may destroy the filesystem!!!)
- boot flag enabled in the camera (e.g. ML already installed) + bootable card (EOSCard, MacBoot, QEMU image)
- alternative: FIR version does not require any boot flags
- check MD5 checksums after dumping (important!)

Formatting a larger card at a much lower capacity (e.g. 256MB) does the trick. For example, you can write the SD image that comes with QEMU to your SD or CF card (follow this guide). This image contains the portable display test and is bootable (so, you can test it in the camera straight away).

Original post:

Lately I've got a few softbricked cameras to diagnose, and struggled a bit with the ROM dumper from bootloader: it wasn't quite reliable. A while ago, g3gg0 reimplemented it with low-level routines (which worked on his camera, but not on mine). Today I looked again at the old approach, and it looks like the file I/O routines from bootloader had to be called from RAM, not from ROM.

So, I've updated the code and need some testing. I've emulated this in QEMU, but the results may be different on real hardware.

What you have to do:

- download autoexec.bin
- place it on a card without any important data on it (it might corrupt the filesystem if anything goes wrong)
- the display looks roughly like this:
- after it's finished, look on the card, and you will find 4 files: ROM[01].BIN and ROM[01].MD5.
- you don't have to upload them, just check the MD5 checksum:
  - Windows: you may use
  - Mac, Linux: md5sum -c *.MD5
- repeat the test on the same card (let it overwrite the files), then on a card with different size (and maybe different filesystem).

Some cameras have only ROM1 connected, so dumping ROM0 will give just random noise. In this case, the ROM0 checksum may not match, but that's OK.

The ROM dumper should recognize all ML-enabled cameras, except for 5D2, 50D and 500D. These old models do not appear to have file writing routines in the bootloader (or, at least I could not find them). The QEMU simulation works even on exotic models like 1200D or EOS M2.

So, you don't have to upload any files or screenshots. Simply verify the MD5 checksums on your PC (if in doubt, paste the md5sum output).

That's it, thanks for testing.
Original discussion:

I wanted to split the topic, but that would make the original discussion harder to follow, so I'm just copying the relevant parts here.

Quote from: Audionut on June 07, 2014, 03:09:59 PM
Finally finished stuffing around, and here is a good bunch of results.  Enjoy!

From the above data, I'll try to guess the pixel binning factors from LiveView (and I'll ask SpcCb to double-check what follows):

My quick test, at ISO 6400:

         gain       read noise     ratio (compared to 5x)
720p:    1.43       14.79          14.74
1080p:   0.88       14.75          9.07
5x:      0.097      23.64          1

Numbers from Audionut:

         gain       read noise     ratio (compared to 5x)

ISO 100:
720p:    73.48      6.93           11.9        (note: it's very hard to tell how much is read noise
1080p:   53.78      6.54           8.7          and how much is Poisson noise from a nearly straight line)
5x:       6.15      5.98           1
photo:    5.11      6.77           0.83

ISO 200:
720p:    44.87      7.22           14.4
1080p:   27.50      6.76           8.84
5x:       3.11      6.26           1
photo:    2.58      7.08           0.83

ISO 400:
720p:    22.50      7.34           14.6
1080p:   13.94      6.90           9.05
5x:       1.54      6.70           1
photo:    1.27      7.61           0.82

ISO 800:
720p:    11.40      7.77           14.6
1080p:    7.07      7.32           9.06
5x:       0.78      7.32           1
photo:    0.66      8.60           0.85

ISO 1600:
720p:     5.80      8.78           14.7
1080p:    3.54      8.34           8.98
5x:       0.394     9.94           1
photo:    0.324    11.10           0.82

ISO 3200:
720p:     2.91      10.82          14.9
1080p:    1.81      10.45          9.23
5x:       0.196     14.75          1
photo:    0.166     16.28          0.85

ISO 6400:
720p:     1.41      14.81          14.7
1080p:    0.87      14.67          9.06
5x:       0.096     23.90          1
photo:    0.082     30.09          0.85

ISO 12800:
720p:     0.71      29.69          14.2
1080p:    0.44      29.44          8.8
5x:       0.050     58.40          1

Raw buffer sizes (active area):
- photo mode: 5796x3870
- 1080p: 1932x1290
- 720p: 1932x672 stretched (covers roughly 16:9 in LiveView)

Ratio between photo mode and 5x zoom: 0.83. If the 5x zoom captures a little more highlight detail, it's OK. The difference may also be because LiveView uses an electronic shutter, while photo mode uses a mechanical shutter. So, I'll use the 5x zoom as reference for the other LiveView modes.

From the above data, I now have very strong reasons to believe that 5D3 does a 3x3 binning in 1080p, and a 5x3 binning in 720p (5 lines, 3 columns).

(if you shoot 720p on 5D3, the desqueezing factor - to correct the aspect ratio of your footage - is therefore exactly 5/3 ≈ 1.67x)

A possible 3x3 binning (and easy to implement in hardware) would be to average each sensel and its 8 neighbours of the same color (considering the two greens as separate colors, as in the well-known four-color demosaicing algorithms). This binning scheme can be easily extended to 720p (5x3), but might cause some interesting artifacts on resolution charts.
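For illustration, this first (simple) scheme can be simulated in a few lines - a sketch assuming an RGGB mosaic and edge padding, not a claim about how Canon's hardware actually implements it:

```python
import numpy as np

def bin_3x3_same_color(raw):
    """Average each sensel with its 8 same-color neighbours, then decimate by 3.
    On a Bayer mosaic, same-color neighbours sit 2 sensels apart, so the 3x3
    same-color window spans 5x5 sensels; decimating by 3 preserves the CFA
    pattern, because 3*i has the same parity as i."""
    h, w = raw.shape
    p = np.pad(raw.astype(np.float64), 2, mode='edge')
    acc = np.zeros((h, w))
    for dy in (-2, 0, 2):
        for dx in (-2, 0, 2):
            acc += p[2 + dy : 2 + dy + h, 2 + dx : 2 + dx + w]
    return acc[::3, ::3] / 9
```

On a uniform RGGB test mosaic, each output sensel keeps its own color's value; the 5x3 (720p) case would just extend the vertical offsets to ±2, ±4.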


A more complex 3x3 binning (very unlikely to be implemented in hardware, since it requires complex logic and knowledge about each pixel's color) could be:

(I'm showing it just for completeness, but I think the first pattern is much more likely to be used).

If anybody could shoot some resolution charts in LiveView (silent pictures in 5x, 1080p and 720p, without touching the camera - I need better than pixel-perfect alignment), I can verify whether these patterns are indeed the correct ones. If you don't use a remote release, you can run this test with the "Silent zoom bracket" option from the latest raw_diag, to avoid camera movement.

Side note: the registers that control the downsizing factors are:
- Horizontal: CMOS[2], which also controls the horizontal offset; you can select full-res (1:1) or downsized by 3
- Vertical: ADTG 0x800C (2 for 1080p, 4 for 720p and 0 for zoom, so it should be the downsizing factor minus 1; other values are valid too)
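As a quick sanity check of that mapping (a sketch - the register semantics are inferred from the observations above, not documented by Canon):

```python
# ADTG 0x800C values observed above, per video mode (inferred, not official):
# the register appears to hold the vertical downsizing factor minus 1.
ADTG_800C = {"zoom": 0, "1080p": 2, "720p": 4}

def vertical_binning_factor(mode):
    """Vertical binning factor implied by the ADTG 0x800C value."""
    return ADTG_800C[mode] + 1
```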

Other cameras: I don't have much data, but from what I have, the binning factor seems to be 3. For example, the data from 50D (dsManning) looks like this:

         gain       read noise     ratio (compared to photo)

ISO 100:
1080p:    7.67      5.34           3.4
photo:    2.26      6.15           1

ISO 200:
1080p:    4.20      5.48           3.85
photo:    1.09      6.52           1

ISO 400:
1080p:    2.04      5.89           3.4
photo:    0.60      7.97           1

ISO 800:
1080p:    1.04      7.30           3.4
photo:    0.31     10.94           1

ISO 1600:
1080p:    0.53     10.32           3.5
photo:    0.15     16.12           1

ISO 3200:
1080p:    0.53     10.45          nonsense :)
photo:    0.08     38.06          1

and from 500D (Greg):

         gain       read noise     ratio (compared to photo)
ISO 100:
photo LV: 7.38      6.34           3.3
photo:    2.23      6.82           1

From the resolution charts (the first one I could find was this), most cameras (except 5D3) show artifacts as if they were skipping lines, but not skipping columns.

Therefore, I believe the binning pattern looks like this:

but I'm waiting for your raw_diag tests to confirm (or reject) this theory.

Edit: confirmed on EOS M and 5D Mark II. From visual inspection, this method appears to be used on most other Canons.

An interesting conclusion is that the 5D3 does not throw away any pixels in LiveView. Then you may wonder: why is binning a full-res CR2 by 3x3 in post cleaner? Simple: binning in software averages out all noise sources, while binning in the analog domain (like the 5D3 does) only averages out the noise introduced before binning (here, the shot noise and maybe a small part of other noise types), but cannot average out the noise introduced after binning (here, the read noise, which is quite high on Canon sensors).

Therefore, at high ISO (where the shot noise is dominant), the per-pixel SNR on 5D3 1080p is improved by up to*) log2(sqrt(9)) = 1.58 EV, compared to per-pixel SNR in crop mode. On the other cameras (3x1 binning), per-pixel SNR is improved by up to log2(sqrt(3)) = 0.79 EV.

So, the noise improvement from the better binning method is up to 0.8 EV at 1080p (ranging from 0 in deep shadows to 0.8 in highlights). That's right - throwing away 2/3 of your pixels will worsen the SNR by only 0.8 stops (maybe not even that).

*) If the binning were done in software, you could simply drop the "up to" - the quoted numbers would be the real improvement throughout the entire picture :)
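The argument above can be written down as a tiny noise model - a sketch with the usual shot + read noise assumptions, in electron units (the function names are mine):

```python
from math import log2, sqrt

def snr(signal_e, read_noise_e):
    """Per-pixel SNR with a shot + read noise model (electron units)."""
    return signal_e / sqrt(signal_e + read_noise_e ** 2)

def analog_binning_gain_ev(signal_e, read_noise_e, n):
    """SNR gain (EV) from averaging n same-color pixels in the analog domain:
    shot noise is averaged before the read noise is added, so the read noise
    term is NOT reduced by the binning."""
    snr_binned = signal_e / sqrt(signal_e / n + read_noise_e ** 2)
    return log2(snr_binned / snr(signal_e, read_noise_e))
```

Near saturation this approaches log2(sqrt(9)) ≈ 1.58 EV for 3x3 binning and log2(sqrt(3)) ≈ 0.79 EV for 3x1, while in read-noise-limited shadows it approaches 0 - the "up to" figures quoted above.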
This one is real :P
(and the Linux port is real as well)

I just discovered the 8086tiny emulator - plain C source code, minimal dependencies, so I managed to compile it as a ML module, and now I'm running FreeDOS on the camera :)

- (to be copied on the card, under ML/MODULES)
- bios (to be copied on card root)
- fd.img (to be copied on card root)
- IME modules from g3gg0, to be able to type commands at the DOS prompt

Source code:

- FreeDOS will start on top of DryOS, at camera startup
- press SET to start typing commands in the IME editor
- the only commands I've tested were "dir" and "bogomi16", on 60D.
We, the Magic Lantern Team, are very proud to present you a new milestone in DSLR customization!


(edit: after playing along and making it look like an April Fools' joke, we can assure you: this is not a fake!)

Starting from our recent discovery about display access from bootloader, we thought, hey, we could now have full control of the resources from this embedded computer. At this stage, we knew what kind of ARM processor we have (ARM 946E-S), how much RAM we have (256MB/512MB depending on the model), how to print things on the display (portable code), how to handle timers and interrupts, how to do low-level SD card access on select models (600D and 5D3), and had a rough idea where to start looking for button events.

So, why not try running a different operating system?

We took the latest Linux kernel (3.19) and made the first steps toward porting it. As we have nearly zero experience with kernel development, we didn't get too far, but we can present a proof of concept implementation that boots the Linux kernel 3.19 on Canon EOS DSLR cameras!
- it is portable, the same binary runs on all ML-enabled cameras (confirmed for 60D, 600D, 7D, 5D2 and 5D3)
- allocates all available RAM
- prints debug messages on the camera screen
- sets up timer interrupts for scheduling
- mounts an 8 MiB ext2fs initial ramdisk
- starts /bin/init from the initrd
- this init process is a self-contained, libc-less hello world
- next step: build userspace binaries (GUI, etc)

Demo video:

Download: autoexec.bin

Source code (WIP):

We hope this proof of concept will encourage you to tinker more with your new embedded computer. Maybe you want to run Angry Birds on it, or maybe Gimp? :)


About one month ago, g3gg0 found a way to access the LCD display from bootloader context, without calling anything from the main firmware. This makes a very powerful tool for diagnosing bricked cameras, and also a playground for low-level reverse engineering.

The only camera-specific bits for printing stuff on the LCD are:
- we have to call a Canon routine that initializes the display (which is in bootloader, not in main firmware): we named it "fromutil_disp_init".
- for the YUV layer, newer cameras use YUV422, while older cameras (only checked 5D2) use YUV411. This difference is not essential (you can print on the BMP layer only).

Today I wrote an autodetection routine that finds the display init routine from ROM strings, and the result is a portable "hello world" binary. That means, it should print something on any ML-enabled camera (and maybe even on cameras without ML). Same binary for all cameras, of course.

I've tested the code on 5D3 and 60D, and I'm looking for confirmation on the other models.

If you are already running ML, just download this autoexec.bin, run it, take a picture of your camera screen (sorry, no screenshots yet) and upload it here.

If you have a Canon DSLR without a ML port available, we need to sign this binary (create a FIR). Just mention your camera model and I'll create one for you. Don't expect this to speed up the porting process for your camera. But I hope this proof of concept will convince you to start tinkering with your new little computer :)
Looks like firmware 1.1.3 is here to stay, so I've built the installer.

1) Format the card from the camera.
2) Make sure you are running Canon firmware 1.1.3.
3) Copy ML files on the card and run Firmware Update.

1) Run Firmware Update from your ML card.
2) Follow the instructions.

Background info
Canon firmware upgrades are usually minor (like fixing a typo in the Ukrainian language), so updating ML to the latest firmware is normally pretty easy. Not so with 5D3 1.2.3.

Most important changes in 1.2.3 since 1.1.3 (source):
- clean HDMI out
- dual monitor support
- AF at f/8 with teleconverters
- fixed an AFMA bug
- a bunch of minor fixes

Unfortunately, in order to implement the dual monitor feature, Canon made some major changes to the display code (some low-level registers changed) and to the LiveView implementation (which is now quad-buffered, while 1.1.3 is triple-buffered, just like all other ML-enabled cameras). From ML's point of view, this resulted in the following differences:

- 1.1.3 is slightly faster when recording RAW/MLV (not much, only a few MB/s)
- fast zebras are not working on 1.2.3 or later, but they are OK on 1.1.3
- no full-screen magic zoom on 1.2.3 or later
- no brightness/contrast/saturation adjustments on 1.2.3 or later
- "DIGIC peaking" is a little more limited on 1.2.3 or later (only the basic mode is working, not the ones with fancy backgrounds)
- motion detection in "frame difference" mode does not work on 1.2.3 or later
- I'm not sure if corrupted frames are still an issue (if you experience them, try downgrading to 1.1.3)

Other than that, the two ML versions are pretty much identical.

Bottom line:
- if you need any of Canon's updates from firmware 1.2.3 or later, use 1.2.3 (and prepare for upgrading to 1.3.3)
- otherwise, consider downgrading to 1.1.3 (I did).

Note: the upgrade from 1.2.3 to 1.3.3 was a minor one, and porting ML is straightforward (chris_overseas already did it, I only need to sit down and try it).

I know, maintaining two firmware versions for a single camera is a hassle, but if we want to squeeze the last drops of performance from this camera, we have no choice. And, as you've guessed, fixing some of the above issues is quite hard.

Please discuss raw recording issues in the Raw Video section of the forum, not here.
Reverse Engineering / Images from ROM dumps
October 22, 2014, 10:22:28 AM
Some things found in 60D ROM:

0xf8a8af60-0xf8a9af80, 128x128x4:

0xf8fc0094-0xf8fdaee4, 360x306, already found by Pelican a long time ago:

0xf8ea0000-0xf8ea4000, 84x195 (approx), ROM from @bpress:

0xf8ea0000-0xf8ebfc00, 84x1548 (approx), ROM from me (this section is much larger):

(note: I've called FA_SetDefaultSetting and some other factory functions on my camera, might be related)
Reverse Engineering / EekoAddRawPath
September 25, 2014, 04:11:05 PM
Starting from the latest discovery from g3gg0, I've looked into some routines that appeared to do raw additions and subtractions (EekoAddRawPath). They seem to work, you can also do some scaling, compute min/max, and they are fairly fast (about 60 ms for a full-size raw buffer, enough for stacking some photos in the camera).

Problem: these functions are only available on DIGIC 5 cameras. However, g3gg0 found out they use the TWOADD module (present on all cameras), so there might be some hope.

5D3 stubs:

/* 1.2.3 */
void (*EekoAddRawPath)(void *a, void *b, void *out, int op, int off_a, int gain_a, int black_a, int gain_b, int black_b, int div8, int out_off, void (*cbr)(void*), void* cbr_arg) = 0xFF32C538;
void (*EekoAddRawPath_cleanup_engio)() = 0xFF5127F0;
void (*EekoAddRawPath_cleanup_reslock)() = 0xFF512698;

/* 1.1.3 */
FF327A54 (called after "%s Addsub Count:%d")
FF507CE8 (called after "stsCompleteMultiExp", after BEQ)
FF507B90 (called next)

Basic call to add two images:

    EekoAddRawPath(image_a, image_b, image_out, 0, 0, 4096, 2048, 4096, 2048, 0, 0, (void(*)(void*))give_semaphore, eeko_sem);
    take_semaphore(eeko_sem, 5000);


To figure out the operations, I've created two gradients (raw image buffers with fake data), horizontal and vertical, with values from 0 to 16383, and used them as operands for the Eeko routine. The third image is the result.

0: a + b

1: a - b

2: max(a, b)

3: min(a, b)

Octave code of what it does (incomplete, doesn't model all overflows):

function out = eeko(a, b, op, off_a, gain_a, black_a, gain_b, black_b, div8, out_off)

    % valid range for the parameters
    op = bitand(op, 3);
    gain_a = bitand(gain_a, 8191);
    gain_b = bitand(gain_b, 8191);

    % offset image A
    a = a + off_a;
    a = max(a, -2048);

    if gain_a ~= 4096,
        % scale image A, relative to black level
        a = round(max((a - black_a) * gain_a / 4096, -2048) + black_a);

        % note: without the "if", clamping to 16383 here will fail some tests
        a = min(a, 16383);
    end

    if black_a > 4096
        % some strange overflow
        a = min(a + 8192, 16383);
    end

    if gain_b ~= 4096,
        % scale image B, relative to black level
        b = round(max((b - black_b) * gain_b / 4096, -2048) + black_b);

        % note: without the "if", clamping to 16383 here will fail some tests
        b = min(b, 16383);
    end

    % perform some operation between A and B
    switch op
        case 0
            out = a + b;
        case 1
            out = a - b;
        case 2
            out = max(a, b);
        case 3
            out = min(a, b);
    end

    out = coerce(out, 0, 16383);

    if div8
        % darken image by 3 stops (not adjustable)
        % optionally offset by an image; for some reason, the offset is multiplied by 7/8 (why?)
        out = round((out + out_off*7) / 8);
    end

    out = coerce(out, 0, 16383);
end

function y = coerce(x, lo, hi)
    y = min(max(x, lo), hi);
end

- image_out buffer can be reused (you may use either image_a or image_b)
- to change resolution, one has to call some lower-level routines, with uglier syntax

To figure out the meaning of the parameters, I've started from the basic call and changed the values (one or two at a time). Then, by trial and error, I've adjusted the octave code (eeko.m) until it matched the images saved from camera.

Test data, camera code and octave scripts can be found here:
General Development / Dynamic range and Equivalence
September 23, 2014, 04:47:30 PM
Topic split from CMOS/ADTG/Digic register investigation on ISO.

These days I've fiddled with these noise models quite a bit, and I have some interesting results.

First, I've imported all the data from and created some plots from there. If you remember, my noise model only uses the FWC and the read noise. Here are some plots (take them with a grain of salt):

(I can pick any combination of cameras from there, so if you are interested in some particular comparisons, just ask).

Typo spotted: the 6D resolution is not 3840x5760, but 3670x5496. If you know how to contact these guys, please let me know.

Note the dynamic range I've used is not log2(full_well) - log2(dark_noise) - this formula is just an approximation, and a very rough one at high ISO - it's overestimated by as much as 0.5 stops on 60D at ISO 12800, but only by 0.05 stops at ISO 100 (just do the math from my SNR plots below). Instead, I've used the DR as defined in ISO 15739 - quote from :

the ratio of the maximum luminance that receives a unique coded representation (the "saturation" luminance) to the lowest luminance for which the signal to noise ratio (SNR) is at least 1.0

So, from the SNR curve, I pick the point where SNR is 0 EV (that means 1.0) and I measure the dynamic range from here until the white level, like this:
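With a simple shot + read noise model (electron units), the SNR = 1.0 point has a closed form, so this flavour of DR can be sketched as follows (illustrative numbers, not measured data):

```python
from math import log2, sqrt

def dynamic_range_ev(full_well_e, read_noise_e):
    """DR in EV, measured from the signal where SNR drops to 1.0 up to the
    saturation level, assuming a shot + read noise model in electron units."""
    # solve S / sqrt(S + r^2) = 1  =>  S^2 - S - r^2 = 0  (positive root)
    s1 = (1 + sqrt(1 + 4 * read_noise_e ** 2)) / 2
    return log2(full_well_e / s1)
```

For example, with a (made-up) full well of 67000 e- and 6 e- of read noise, this gives about 13.3 EV - slightly less than the log2(full_well) - log2(read_noise) approximation.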

Then, I've tried to compare sensorgen data with the measurements obtained with this method. I took the 5D3 and 6D data from Audionut and Levas, and also measured the 60D and the 5D3 myself.

SNR curves for 60D:
SNR curves for 5D3:

Comparison (my method vs sensorgen data):

Notice the outlier on sensorgen's data at 60D ISO 3200 (so, take their numbers - and also mine - with a grain of salt).

If you look closely at the SNR plots, you can see I've tried to compute some confidence intervals. I know almost nothing about this subject (statistics are not exactly my cup of tea), so I simply changed the model parameters (read noise and full well) until the curve fitting error (sum of absolute deviations) increased by 20% (an arbitrary threshold that appears to give reasonable results). It's good for identifying bad fits, but it probably has no statistical interpretation (like some probability of the true values being in this interval or whatever). This stuff is way beyond my math skills.
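Here's roughly what that parameter scan could look like, shown for the read noise only (a sketch with made-up data; as said, the 20% threshold is arbitrary and carries no statistical guarantees):

```python
import numpy as np

def snr_model_ev(signal_e, read_noise_e):
    """Modelled per-pixel SNR in EV (shot + read noise, electron units)."""
    return np.log2(signal_e / np.sqrt(signal_e + read_noise_e ** 2))

def read_noise_interval(signal_e, snr_meas_ev, r_best, tol=1.2):
    """Scan the read noise around the best fit and keep the values for which
    the sum of absolute deviations stays within 20% of the best-fit error."""
    def err(r):
        return np.sum(np.abs(snr_model_ev(signal_e, r) - snr_meas_ev))
    rs = r_best * np.logspace(-0.5, 0.5, 1001)   # scan roughly 1/3x .. 3x
    ok = rs[np.array([err(r) for r in rs]) <= tol * err(r_best)]
    return ok.min(), ok.max()
```

The same scan can be repeated for the full well; widening the tolerance widens the reported interval, which is exactly the "20% worse fit" behaviour described above.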

Example of bad fit (the datasheet parameters for this test case are FWC=13500 and noise=13 electrons):

Just for fun, I've also measured the Nokia Lumia 1020 (test data from g3gg0). You can see the higher uncertainty for dynamic range, because the test data did not include really deep shadows. For some reason, the high-ISO results seem bogus - I haven't figured out why.

SNR curves for Lumia 1020:

So, at this point I think we have a pretty good method for evaluating the ISO tweaks (both measuring the DR improvements, and telling where these improvements are - in highlights or shadows). That means, I will be able to tell the exact performance of the tweaks, without the fear of over-promising :P

There's still something not considered in my tests - the response curve (I've assumed it's linear).

For the octave scripts used for these plots, just ask.

@Levas: did you keep the CR2 files for the dynamic range tests? I was curious to check the uncertainty levels.