Show Posts - a1ex


Topics - a1ex

Modules Development / Writing modules tutorial #1: Hello, World!
« on: March 16, 2017, 10:24:14 PM »
So far, if you wanted to write your own module, the best sources of documentation were (and probably still are) reading the source code, the forum, the old wiki, and experimenting. As a template for new modules, you probably took one of the existing modules and removed the extra code.

This is one tiny step to improve upon that: I'd like to write a series of guides on how to write your own modules and how to use various APIs provided by Magic Lantern (some of them tightly related to APIs reverse engineered from Canon firmware, such as properties or file I/O, others less so, such as ML menu).

I'll start with the simplest possible module:

Hello, World!

Let's start from scratch:
Code: [Select]
hg clone -u unified
cd magic-lantern/modules/
mkdir hello
cd hello
touch hello.c

Now edit hello.c in your favorite text editor:
Code: [Select]
/* A very simple module
 * (example for module authors)
 */
#include <dryos.h>
#include <module.h>
#include <menu.h>
#include <config.h>
#include <console.h>

/* Config variables. They are used for persistent variables (usually settings).
 * In modules, these variables also have to be declared as MODULE_CONFIG.
 */
static CONFIG_INT("hello.counter", hello_counter, 0);

/* This function runs as a new DryOS task, in parallel with everything else.
 * Tasks started in this way have priority 0x1A (see run_in_separate_task in menu.c).
 * They can be interrupted by other tasks with higher priorities (lower values)
 * at any time, or by tasks with equal or lower priorities while this task is waiting
 * (msleep, take_semaphore, msg_queue_receive etc).
 * Tasks with equal priorities will never interrupt each other outside the
 * "waiting" calls (cooperative multitasking).
 * Additionally, for tasks started in this way, ML menu will be closed
 * and Canon's powersave will be disabled while this task is running.
 * Both are done for convenience.
 */
static void hello_task()
{
    /* Open the console. */
    /* Also wait for background tasks to settle after closing ML menu */
    msleep(2000);
    console_show();

    /* Plain printf goes to console. */
    /* There's very limited stdio support available. */
    printf("Hello, World!\n");
    printf("You have run this demo %d times.\n", ++hello_counter);
    printf("Press the shutter halfway to exit.\n");

    /* note: half-shutter is one of the few keys that can be checked from a regular task */
    /* to hook other keys, you need to use a keypress hook - see hello2 */
    while (!get_halfshutter_pressed())
    {
        /* while waiting for something, we must be nice to other tasks as well and allow them to run */
        /* (this type of waiting is not very power-efficient nor time-accurate, but is simple and works well enough in many cases) */
        msleep(100);
    }

    /* Finished. */
    console_hide();
}

static struct menu_entry hello_menu[] =
{
    {
        .name       = "Hello, World!",
        .select     = run_in_separate_task,
        .priv       = hello_task,
        .help       = "Prints 'Hello, World!' on the console.",
    },
};

/* This function is called when the module loads. */
/* All the module init functions are called sequentially,
 * in alphabetical order. */
static unsigned int hello_init()
{
    menu_add("Debug", hello_menu, COUNT(hello_menu));
    return 0;
}

/* Note: module unloading is not yet supported;
 * this function is provided for future use.
 */
static unsigned int hello_deinit()
{
    return 0;
}

/* All modules have some metadata, specifying init/deinit functions,
 * config variables, event hooks, property handlers etc.
 */
MODULE_INFO_START()
    MODULE_INIT(hello_init)
    MODULE_DEINIT(hello_deinit)
MODULE_INFO_END()

MODULE_CONFIGS_START()
    MODULE_CONFIG(hello_counter)
MODULE_CONFIGS_END()


We still need a Makefile; let's copy it from another module:
Code: [Select]
cp ../ettr/Makefile .
sed -i "s/ettr/hello/" Makefile

Let's compile it:
Code: [Select]
make
The build process created a file named README.rst. Update it and recompile.

Code: [Select]
make clean; make

Now you are ready to try your module in your camera. Just copy the .mo file to ML/MODULES on your card.

If your card is already configured for the build system, all you have to do is:
Code: [Select]
make install

Otherwise, try:
Code: [Select]
make install CF_CARD=/path/to/your/card

or, if you have such a device:
Code: [Select]
make install WIFI_SD=y

That's it for today.

To decide what to cover in future episodes, I'm looking for feedback from anyone who tried (or wanted) to write a ML module, whether you were successful or not.

Some ideas:
- printing on the screen (bmp_printf, NotifyBox)
- keypress handlers
- more complex menus
- properties (Canon settings)
- file I/O
- status indicators (lvinfo)
- animations (e.g. games)
- capturing images
- GUI modes (menu, play, LiveView, various dialogs)
- semaphores, message queues
- DryOS internals (memory allocation, task creation etc)
- custom hooks in Canon code
- your ideas?

Of course, the advanced topics are not for the second or third tutorial.

General Development Discussion / Recording RAW and H.264 at the same time
« on: February 11, 2017, 02:34:52 PM »
I was experimenting with shooting raw video while simultaneously recording H.264 [...]

[...]it is too much of a hack[...]

Here's an attempt to make it a bit less of a hack:

Currently, focus peaking gives you the option to use two image buffers: the LiveView one (720x480 when used on internal LCD) and the so-called HD one (usually having higher resolution). Of course, the peaking results with the two options are slightly different.

To simplify the code, I'd like to use only the LiveView buffer, like most other overlays.

Is there any reason to use the high-res buffer? In other words, did any of you get better results by using it?

General Development Discussion / Thread safety
« on: February 05, 2017, 02:12:43 AM »
While refactoring the menu code, I've noticed it became increasingly complex, so evaluating whether it's thread-safe was no longer an easy task (especially after not touching some parts of the code for a long time). The same is true for all other ML code. Not being an expert in multi-threaded software, I started to look for tools that would at least point out some obvious mistakes.

I came across this page, which seems promising, but looks C++ only. This paper appears to be from the same authors (presentation here), and C is mentioned too, so adapting the example is probably doable.

Still, annotation could use some help from a computer. So I found pycparser and wrote a script that recognizes common idioms from ML code (such as TASK_CREATE, PROP_HANDLER, menu definitions) and annotates each function with a comment telling what tasks call this function.

Therefore, if a function is called from more than one task, it must be thread-safe. The script only highlights those functions that are called from more than one task (that is, those that may require attention).

Still, I have a gut feeling that I'm reinventing the wheel. If you know a better way to do this, please chime in.


Note: in DryOS, tasks == threads.
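The real script is built on pycparser, but the core idea fits in a few lines. Here's a simplified, regex-based sketch of it (toy input, not actual ML code; function names are mine): find the task entry points declared via TASK_CREATE, collect the functions each task calls, and flag anything reachable from more than one task.

```python
import re

# Toy C source: two tasks, one shared helper.
SRC = """
TASK_CREATE("menu_task", menu_task, 0, 0x1a, 0x2000);
TASK_CREATE("shoot_task", shoot_task, 0, 0x1a, 0x2000);
void menu_task() { redraw(); beep(); }
void shoot_task() { redraw(); }
"""

def task_entry_points(src):
    # functions registered as DryOS tasks via TASK_CREATE
    return re.findall(r'TASK_CREATE\("\w+",\s*(\w+)', src)

def calls_in(func, src):
    # direct calls made from a (single-line, no-argument) function body
    m = re.search(r'void %s\(\) \{(.*?)\}' % func, src, re.S)
    return re.findall(r'(\w+)\(\)', m.group(1)) if m else []

def callers_by_function(src):
    callers = {}
    for task in task_entry_points(src):
        for callee in calls_in(task, src):
            callers.setdefault(callee, set()).add(task)
    return callers

# functions called from more than one task => must be thread-safe
shared = {f for f, t in callers_by_function(SRC).items() if len(t) > 1}
print(shared)
```

This only follows one level of calls; the pycparser version walks the full AST and the whole call graph, which is what makes it actually useful on real code.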

General Development Discussion / Experiment - Dynamic My Menu
« on: January 31, 2017, 09:51:00 PM »
Today I was a bit tired of debugging low-level stuff like Lua tasks or camera-specific quirks, but still wanted to write something cool. So here's something I wanted for a long time. The feedback back then wasn't exactly positive, so it never got implemented, but I was still kinda missing it.

Turns out, it wasn't very hard to implement, so there you have it.

What is it?

You already know the Modified menu (where it shows all settings changed from the default value), and My Menu (where you can select your favorite items manually). This experiment attempts to build some sort of "My Menu" dynamically, based on usage counters.

How does it work?

After a short while of navigating ML menu as you usually do, your most recently used items and also your frequently used items should appear there. As long as you don't have any items defined for My Menu, it will be built dynamically. The new menu will be named "Recent" and will keep the same icon as My Menu.

Every time you click on some menu item, the usage counter for that item is incremented. All the other items will have a "forgetting factor" applied, so the most recently used items will rise to the top of the list fairly quickly.

Clicking the same item over and over will only be counted once (so scrolling through a long list of values won't give extra priority to menu items). Submenu navigation doesn't count; only changing a value or running an action are counted.

Time is discrete (clicks-based). It doesn't care if you use the camera 10 hours a day or a couple of minutes every now and then.

To have both good responsiveness to recent changes, but also learn your habits over a longer time, I've tried two usage counters: one for short term and another for long term memory. If, let's say during some day, you need to keep toggling a small set of options, it should learn that quickly. But if you no longer need those options after that special day, those menu items will be forgotten quickly, and the ones you use daily (stored in the "long term memory") should be back soon.

So, the only difference between the "long term" and the "short term" counters is the forgetting factor: 0.999 vs 0.9. In other words, the "long term" counters have more inertia.

When deciding whether a menu item is displayed or not, the max value between the two is used, resulting in a list of "top 11 most recently or frequently used menus". The small gray bars from the menu are the usage counters (debug info).
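As a rough sketch of these mechanics (class and item names are mine; the forgetting factors 0.999 / 0.9 are the ones quoted above):

```python
# Toy simulation of the two usage counters with forgetting factors.
class UsageCounters:
    def __init__(self):
        self.long_term = {}    # forgetting factor 0.999 (more inertia)
        self.short_term = {}   # forgetting factor 0.9 (fast adaptation)

    def click(self, item):
        # every click decays all counters, then increments the clicked item
        for counters, ff in ((self.long_term, 0.999), (self.short_term, 0.9)):
            for k in counters:
                counters[k] *= ff
            counters[item] = counters.get(item, 0) + 1

    def score(self, item):
        # the max of the two counters decides what gets displayed
        return max(self.long_term.get(item, 0), self.short_term.get(item, 0))

u = UsageCounters()
for _ in range(50):
    u.click("Focus Peaking")   # a long-standing habit
for _ in range(5):
    u.click("Dual ISO")        # a recent burst of use

# the habit still ranks first, but the recent item catches up quickly
ranked = sorted(["Focus Peaking", "Dual ISO"], key=u.score, reverse=True)
print(ranked)
```

The decay-on-click scheme is what makes time discrete: the counters only move when you actually use the menu, regardless of wall-clock time.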

I have no idea how well this works in practice - it's something I came up with a few hours ago, and the tuning parameters are pretty much arbitrary.

Source code committed, and if there is interest, I can prepare an experimental build as well.

General Development Discussion / Touch-friendly ML menu
« on: January 06, 2017, 07:02:44 PM »
Some experiments I did last summer on a 700D (which I no longer have).

I remember it worked to some extent, but had some quirks. Don't remember the exact details, but I hope it could be useful (or at least fun to tinker with).

General Chat / Script for undeleting CR2 files
« on: January 01, 2017, 09:17:31 PM »
Looks like my 5D3 decided to reuse the file counters on two different cards. When sorting some photos, one CR2 just got overwritten by another image with the same name.

How to undelete it?

Testdisk's undelete tool didn't help (the file wasn't deleted, but overwritten). PhotoRec would have probably worked, given enough time, extra HDD space and patience to sort through the output files (not practical). I found a guide using debugfs, which didn't seem to work (too much low-level stuff I wasn't familiar with), and this article seemed promising. I knew a pretty tight time interval for the missing file (a couple of seconds, from previous and next file in the set), so I wrote a quick Python script to scan the raw filesystem for CR2 files with the EXIF date/time in a given range.

It worked for me.

It's all hardcoded for my system, but should be easy to adjust for other use cases.

Code: [Select]
# CR2 recovery script
# Scans the entire partition for CR2 files between two given timestamps,
# assuming they are stored in contiguous sectors on the filesystem.
# Hardcoded for 5D Mark III.

import os, sys, re
from datetime import datetime

d0 = datetime.strptime("2016:06:10 17:31:36", '%Y:%m:%d %H:%M:%S')
d1 = datetime.strptime("2016:06:10 17:31:42", '%Y:%m:%d %H:%M:%S')

f = open('/dev/sda3', 'r')

nmax = 600*1024
for k in xrange(nmax):
    p = k * 100.0 / nmax
    f.seek(1024*1024*k)
    block = f.read(1024*1024)
    if "EOS 5D Mark III" in block:
        i = block.index("EOS 5D Mark III")
        print k, hex(i), p
        b = block[i : i+0x100]
        date_str = b[42:61]
        try: date = datetime.strptime(date_str, '%Y:%m:%d %H:%M:%S')
        except: continue
        if date >= d0 and date <= d1:
            print date
            out = open("%X.CR2" % k, "w")
            f.seek(1024*1024*k + i - 0x100)
            out.write(f.read(30*1024*1024))  # dump size is a guess; large enough to cover one CR2
            out.close()

Reverse Engineering / ProcessTwoInTwoOutLosslessPath
« on: December 18, 2016, 09:06:41 PM »
Managed to call ProcessTwoInTwoOutLosslessPath, which appears to perform the compression for RAW, MRAW and SRAW formats. The output looks like some sort of lossless JPEG, but we don't know how to decode it yet (this should help).

Proof of concept code (photo mode only for now):

Reverse Engineering / lv_set_raw / lv_select_raw
« on: December 11, 2016, 10:16:59 AM »
These are used for selecting the LiveView raw stream (aka "raw type", see PREFERRED_RAW_TYPE in raw.c).

There is a function that gives some more information about these modes: lv_select_raw in 70D, 80D, 750D/760D, 5D4 and 7D2. The debug strings also reference the PACK32 module (which is something that can write a raw image to memory), so probably this setting connects the input of PACK32 to various image processing modules from Canon's pipeline.

Some related pages: Register_Map, EekoAddRawPath, raw_twk, 12-bit raw video, mv1080 on EOSM...

The names appear to match between DIGIC 5 and 6 cameras, so here's a summary of the LV raw modes:

Code: [Select]
      5D4           80D              760D 7D2M 70D 700D     100D 5D3
0x00: DSUNPACK                                                     
0x01: UNPACK24                                                     
0x02: ADUNPACK                                                     
0x03: DARKSUB       <-               <-   <-   <-                   
0x04: SHADING       <-               <-   <-   <-  SHADE    <-   <-
0x05: ADDSUB        TWOLINEADDSUB    <-   <-   <-                   
0x06: DEFC          <-               <-   <-   <-                   
0x07: DFMKII        DEFMARK          <-   <-   <-  <-       <-     
0x08: HIVSHD        <-               <-   <-   <-  <-       <-   <-
0x09: SMI           <-               <-   <-   <-                   
0x0a: PEPPER_CFIL   <-               <-   <-   <-                   
0x0b: ORBIT         <-               <-   <-   <-  <-       <-   <-
0x0c: TASSEN        <-               <-   <-   <-                   
0x0d: PREWIN1       PEPPER_WIN       <-   <-   <-                   
0x0e: RSHD          <-               <-   <-   <-  <-       <-   <-
0x0f: BEATON        <-               <-   <-   <-                   
0x10: HEAD          <-               <-   <-   <-  CCD      <-   <-
0x11: AFY           <-               <-   <-   <-                   
0x12: DEFOE         <-               <-   <-   <-  DEFCORRE <-   <-
0x13: ORBBEN        <-               <-   <-   <-                   
0x14: PEPPER_DOUBLE                                                 
0x15: JUSMI         <-               <-   <-   <-                   
0x16: SUSIE         <-               <-   <-   <-                   
0x17: KIDS          <-               <-   <-   <-                   
0x18: CHOFF         <-               <-   <-   <-                   
0x19: CHGAIN        <-               <-   <-   <-                   
0x1a: CAMPOFF       <-               <-   <-   <-                   
0x1b: CAMPGAIN      <-               <-   <-   <-                   
0x1c: DEGEEN1       <-               <-   <-   <-  DEGEEN   <-     
0x1d: DEGEEN2       <-               <-   <-   <-                   
0x1e: YOSSIE        <-               <-   <-   <-                   
0x1f: FURICORE      <-               <-   <-   <-                   
0x20: EXPUNPACK                                                     
0x21: SUBUNPACK                                                     
0x22: PREFIFO       INVALID,PRE_FIFO <-   <-   <-                   
0x23: SAFARI_IN     <-               <-   <-   <-                   
0x24: DPCM_DEC      <-               <-   <-   <-                   
0x25: MIRACLE                                                       
0x26: FRISK                                                         
0x27: CLEUR         <-                                             
0x28: OTHERS                                                       
0x29: SHREK         SHREK_IN         <-   <-   <-                   
0x2a: DITHER                                                       
0x2b: DFMKII2       DEFMARKII2       <-   <-                       
0x2c: PREWIN2       PEPPER_WIN2      <-   <-                       
0x2d: CDM           <-               <-   <-                       
0x2e: LTKIDS_IN     <-               <-   <-                       
0x2f: PREWIN3       PEPPER_WIN3      <-   <-                       
0x30: SIMPPY                                                       
0x31: PEPPER_DIV_A                                                 
0x32: PEPPER_DIV_B                                                 
0x33: SUBSB_OUT                                                     
0x34: SIBORE_IN                                                     
0x35: PEPPER_DIV                                                   

DIGIC 4 has a different mapping. 60D:
Code: [Select]
RSHD     => 0x0B
SHADE    => 0x01
HIVSHD   => 0x07
ORBIT    => 0x09
DEFCORRE => 0x04
CCD      => 0x05 (currently used)
DEFMARK  => 0x06

There are more valid raw types than the ones named in the above tables. For example, on 5D3 (trial and error):
Code: [Select]
0x00 => valid image stream in some unknown format
0x01 => bad
0x02 => scaled by digital ISO (DEFCORRE?)
0x03 => bad
0x04 => SHADE (bad pixels, scaled by digital ISO)
0x05 => bad
0x06 => bad
0x07 => DEFMARK (bad pixels)
0x08 => HIVSHD (bad pixels, appears to fix some vertical stripes)
0x09 => bad
0x0A => bad
0x0B => bad
0x0C => bad
0x0D => bad
0x0E => RSHD (bad pixels, scaled by digital ISO)
0x0F => bad

0x10 => CCD (clean image, some vertical stripes in certain cases)
0x11 => bad
0x12 => DEFCORRE (scaled by digital ISO)
0x13 => bad
0x14 => valid image stream in some unknown format (different from 0)
0x15 => bad
0x16 => bad
0x17 => bad pixels
0x18 => bad
0x19 => bad
0x1A => bad
0x1B => bad
0x1C => bad pixels
0x1D => bad
0x1E => bad pixels
0x1F => bad

0x20 => valid image stream in some compressed format?
0x21 => bad
0x22 => clean image
0x23 => bad
0x24 => bad
0x25 => bad
0x26 => bad
0x27 => bad pixels
0x28 => valid image stream in some compressed format?
0x29 => bad
0x2A => some strange column artifacts
0x2B => bad
0x2C => bad
0x2D => bad
0x2E => some strange posterization
0x2F => bad

0x30 => valid image stream in some compressed format?
0x31 => bad
0x32 => clean image
0x33 => bad
0x34 => valid image stream in some unknown format
0x35 => bad
0x36 => bad
0x37 => bad pixels
0x38 => valid image stream with some missing columns?!
0x39 => same
0x3A => clean image
0x3B => bad
0x3C => bad pixels, strange column artifacts (like 0x2A, but with bad pixels)
0x3D => bad
0x3E => posterization (same as 46)
0x3F => bad

0x40 - 0x7F => same as 0x00 - 0x3F (checked most good modes and some bad modes)

On 5D3 and 60D, the raw type "CCD" is the one we are using for raw video.

On EOS M, the only valid raw types appear to be 7, 11, 48, 50, 75, 80, 87 according to dfort.

Would be nice if somebody has the patience to try all the raw types on the 70D, as it's the only camera that runs ML now and has lv_select_raw.

Reverse Engineering / EDMAC internals
« on: November 26, 2016, 01:28:55 PM »
Until now, we didn't know much about how to configure the EDMAC. Recently we did some experiments that cleared up a large part of the mystery.

I'll start with the size parameters. They are labeled xa, xb, xn, ya, yb, yn, off1a, off1b, off2a, off2b, off3 (from debug strings). Their meaning was largely unknown, and so far we only used the following configuration:

Code: [Select]
xb = width
yb = height-1
off1b = padding after each line

Let's start with the simplest configuration (memcpy):

Code: [Select]
xb = size in bytes. 

Unfortunately, it doesn't work - the image height must be at least 2.

Simplest WxH

How Canon code sets it up:
Code: [Select]
  CalculateEDmacOffset(edmac_info, 720*480, 480):
     xb=0x1e0, yb=0x2cf

Transfer model (what the EDMAC does, in a compact notation):
Code: [Select]
xb * (yb+1)        (xb repeated yb times)

WxH + padding (skip after each line)

Code: [Select]
(xb, skip off1b) * (yb+1)

Note: skipping does not change the contents of the memory,
so the above is pretty much the same as:
Code: [Select]
(xb, skip off1b) * yb
followed by xb (without skip)

xa, xb, xn (usual raw buffer configuration)

Code: [Select]
xa = xb = width
xn = height-1

To see what xa and xn do, let's look at some more examples (how Canon code configures them):
Code: [Select]
  edmac_setup_size(ch, 0x1000000):
    xn=0x1000, xa=0x1000

  edmac_setup_size(6, 76800):
    xa=0x1000, xb=0xC00, xn=0x12

  CalculateEDmacOffset(edmac_info, 0x100000, 0x20):
    xa=0x20, xb=0x20, yb=0xfff, xn=0x7

  CalculateEDmacOffset(edmac_info, 1920*1080, 240):
    xa=0xf0, xb=0xf0, yb=0xb3f, xn=0x2

The above can be explained by a transfer model like this:
Code: [Select]
(xa * xn + xb) * (yb+1)

Adding ya, yn (to xa, xb, xn, yb)

Some experiments (trial and error, 5D3):
Code: [Select]
  xa = 3276, xb = 1638, xn = 1055                  => 3276*1055 + 1638 transferred
  xa = 3276, xb = 32,   xn = 1055                  => 3276*1055 + 32
  xa = 3276, xb = 0,    xn = 1055                  => 3276*1056 - 20 (?!)
  xa = 3276, xb = 3276, xn = 95, yb = 10           => 3276*96*11
  xa = 3276, xb = 3276, xn = 95, yb = 7,  yn = 3   => 3276*96*11
  xa = 3276, xb = 3276, xn = 10, yb = 62, yn = 33  => 3276*11*96
  xa = 3276, xb = 3276, xn = 10, yb=3, yn=5, ya=2  => 3276*11*19
  xa = 3276, xb = 3276, xn = 10, yb=5, yn=3, ya=6  => 3276*11*27
  xa = 3276, xb = 3276, xn = 10, yb=5, yn=3, ya=7  => 3276*11*30
  xa = 3276, xb = 3276, xn = 10, yb=7, yn=8, ya=9  => 3276*11*88
  xa = 3276, xb = 3276, xn = 10, yb=8, yn=3, ya=28 => 3276*11*96

Code: [Select]
(xa * xn + xb) REP (yn REP ya + yb)

Here, a REP b means 'perform a, repeat b times' => a * (b+1).

So far, so good, the above model appears to explain the behavior
when there are no offsets, and looks pretty simple.
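To double-check, this model is easy to express in code (a small helper of mine, not from the ML source), and it reproduces every experiment in the table above:

```python
def rep(a, b):
    # a REP b == 'perform a, repeat b times' == a * (b + 1)
    return a * (b + 1)

def edmac_total(xa, xb, xn, ya=0, yb=0, yn=0):
    # total bytes transferred: (xa * xn + xb) REP (yn REP ya + yb)
    return rep(xa * xn + xb, rep(yn, ya) + yb)

# e.g. the last experiment:
# xa = 3276, xb = 3276, xn = 10, yb = 8, yn = 3, ya = 28 => 3276*11*96
print(edmac_total(3276, 3276, 10, ya=28, yb=8, yn=3) == 3276*11*96)
```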

There is a quirk: if xb = 0, the behavior looks strange.
Let's ignore it for now.

Adding off1b (to xa, xb, xn, ya, yb, yn)

What do we do about the offset off1b?

Code: [Select]
xa = 3276, xb = 3276, xn = 10, yb=95, off1b=100
=> copied 3276*10*96 + 3276, skipped 100,
   (CP 3276, SK 100) repeated 94 times (95 runs).

It copies a large block, then it starts skipping after each line.
Let's decompose our model and reorder the terms.
Then, let's skip off1b after each xb.

Code: [Select]
(xa * xn)        REP (yn REP ya + yb)
(xb, skip off1b) REP (yn REP ya + yb)

Let's check a more complex scenario:
Code: [Select]
xa = 3276, xb = 3276, xn = 10, yb=8, yn=3, ya=28, off1b=100
=> (CP 3276*10*29 + 3276,   SK 100), (CP 3276, SK 100) * 27,
   (CP 3276*10*29 + 3276*2, SK 100), (CP 3276, SK 100) * 27,
   (CP 3276*10*29 + 3276*2, SK 100), (CP 3276, SK 100) * 27,
   (CP 3276*10*9  + 3276*2, SK 100), (CP 3276, SK 100) * 8.

There's some big operation that appears to be repeated 3 times (yn),
although the copied block sizes are a little inconsistent (the first one is smaller).

After that, (xa * xn) is executed 9 times (yb+1).
At the end, (xb, skip off1b) is executed 9 times (also yb+1).

In the big operation, the 29 is clearly ya+1.

What if off1b is skipped after all xb iterations, but not the last one?
This could explain why we have an extra 3276 (the *2) on the last 3 log lines.

Regroup the terms like this:
Code: [Select]
  => ((CP 3276*10*29), (CP 3276, SK 100) * 28, CP 3276) * 3,
      (CP 3276*10*9 ), (CP 3276, SK 100) * 9.

Our model starts to look like this:
Code: [Select]
(
   (xa * xn) * (ya+1)
   (xb, skip off1b) *  ya
    xb without skip
) * yn

followed by:

   (xa * xn) * (yb+1)
   (xb, skip off1b) * (yb+1)

So far so good, it's a bit more complex,
but explains all the above observations.
Of course, the last line may be as well:
Code: [Select]
  (xb, skip off1b) * yb, xb without skip

Adding off1a

Let's try another offset: off1a = 44.
The log from this experiment is pretty long, so I'll simplify it by regrouping the terms.

Code: [Select]
xa = 3276, xb = 3276, xn = 10, yb=8, yn=3, ya=28, off1a=44, off1b=100
=> (
     ((CP 3276, SK 44)  * 28, CP 3276) * 10,
     ((CP 3276, SK 100) * 28, CP 3276),
   ) * 3,
     ((CP 3276, SK 44)  * 8, CP 3276) * 10,
     ((CP 3276, SK 100) * 8, CP 3276)

This gives good hints about what is happening when:
Code: [Select]
(
   ((xa, skip off1a) * ya, xa) * xn
    (xb, skip off1b) * ya, xb
) * yn,

   ((xa, skip off1a) * yb, xa) * xn
    (xb, skip off1b) * yb, xb

Adding the remaining offsets (all parameters are now used)

Let's add off2a, off2b and off3. They are pretty obvious now, so I'll skip the log file (which looks quite intimidating anyway).

Code: [Select]
(
   ((xa, skip off1a) * ya, xa, skip off2a) * xn
    (xb, skip off1b) * ya, xb,
     skip off3
) * yn,

   ((xa, skip off1a) * yb, xa, skip off2b) * xn
    (xb, skip off1b) * yb, xb

So, there is a pattern: perform N iterations with some settings, then perform the last iteration with slightly different parameters. The pattern repeats at all iteration levels (somewhat like fractals).

Just by looking at the memory contents, we can't tell what skip value is used for the very last iteration. However, by reading the memory address register (0x08) directly from hardware (not from the shadow memory), we can get the end address (after the EDMAC transfer has finished). For a write transfer, this includes the transferred data and also the skip offsets. Now it's straightforward to notice that the last offset is off3, so our final model for EDMAC becomes:

EDMAC transfer model

Code: [Select]
(
   ((xa, skip off1a) * ya, xa, skip off2a) * xn
    (xb, skip off1b) * ya, xb, skip off3
) * yn,

   ((xa, skip off1a) * yb, xa, skip off2b) * xn
    (xb, skip off1b) * yb, xb, skip off3

The offset labels now start to make sense :)

C code (used in qemu):
Code: [Select]
for (int jn = 0; jn <= yn; jn++)
{
    int y     = (jn < yn) ? ya    : yb;
    int off2  = (jn < yn) ? off2a : off2b;
    for (int in = 0; in <= xn; in++)
    {
        int x     = (in < xn) ? xa    : xb;
        int off1  = (in < xn) ? off1a : off1b;
        int off23 = (in < xn) ? off2  : off3;
        for (int j = 0; j <= y; j++)
        {
            int off = (j < y) ? off1 : off23;
            cpu_physical_memory_write(dst, src, x);
            src += x;
            dst += x + off;
        }
    }
}
The above model is for write operations. For read, the skip offsets are applied to the source buffer - that's the only difference.

Offsets can be positive or negative. In particular, off1a and off1b only use 17 bits (digic 3 and 4) or 19 bits (digic 5), so we have to extend the sign.

The above model explained all the combinations that are not edge cases (such as yb=0 or odd values). Here are the tests I've run: 5D3 vs QEMU.
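For a quick cross-check without a camera, the final model is easy to port to Python and compare against the logged experiments (function name is mine; it mirrors the C model, write direction, counting copied bytes and the end offset separately):

```python
def edmac_model(xa, xb, xn, ya, yb, yn,
                off1a=0, off1b=0, off2a=0, off2b=0, off3=0):
    """Walk the EDMAC transfer model; return (bytes copied, end offset)."""
    copied = dst = 0
    for jn in range(yn + 1):
        y    = ya    if jn < yn else yb
        off2 = off2a if jn < yn else off2b
        for i in range(xn + 1):
            x     = xa    if i < xn else xb
            off1  = off1a if i < xn else off1b
            off23 = off2  if i < xn else off3
            for j in range(y + 1):
                off = off1 if j < y else off23
                copied += x          # data actually written
                dst += x + off       # address advances by data + skip
    return copied, dst

# the off1b experiment: 3276*96*11 bytes copied, 95 skips of 100
copied, end = edmac_model(3276, 3276, 10, ya=0, yb=95, yn=0, off1b=100)
print(copied, end - copied)
```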

For more details, please have a look at the "edmac" and "qemu" branches.

To be continued.

Jenkins is overloading the server too much for my taste lately, so I'm considering rewriting the nightly builds page as static HTML, without any JavaScript. Another reason for the rewrite: the builds page is impossible to load on slow network connections.

Any volunteers to help me with this task? I'm going to use a Python script similar to this one, and here's what I came up with so far (edited manually, based on the previous template):

Here's a proof of concept Python code to retrieve Jenkins build data, using JenkinsAPI:
Code: [Select]
from jenkinsapi.jenkins import Jenkins
J = Jenkins('')
B = J['500D.111'].get_last_good_build()
artifact = list(B.get_artifacts())[0]
print artifact.url
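As for the static page itself, rendering it offline could be as simple as string templating over the collected build data. A rough sketch (field names and values here are made up, not actual Jenkins output):

```python
# Render a static HTML table from build data gathered offline.
builds = [
    {"model": "500D.111", "date": "2016-10-02", "url": "magiclantern-500D.zip"},
    {"model": "550D.109", "date": "2016-10-02", "url": "magiclantern-550D.zip"},
]

rows = "\n".join(
    '<tr><td>%(model)s</td><td>%(date)s</td>'
    '<td><a href="%(url)s">download</a></td></tr>' % b
    for b in builds
)
page = "<table>\n%s\n</table>" % rows
print(page)
```

No JavaScript involved: the script runs server-side on a schedule, so the page loads as plain HTML even on slow connections.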

Feedback is also welcome (while I'm at it). There's some extra functionality I'd like to add, too.

Modules Development / Burst mode tweaks (
« on: September 15, 2016, 08:16:34 PM »
A while ago I had a fairly strange problem: I was taking pictures with a manual 200mm lens, and had trouble keeping the subject in the frame. Why? Because, during a burst sequence, the display is turned off. I was focusing manually from LiveView, so couldn't look through the viewfinder.

So, here's a module that implements this tweak: during a burst sequence, it shows a live preview of the captured images. RAW only.

Also included a tweak that limits the number of pictures in a burst sequence (for example, if you want to take 2 pictures on a single shutter press). I'm not sure where this could be useful, but was simple enough to write.

I wrote this about one or two months ago, but didn't get the opportunity to battle-test it yet.


If it works fine on most models and people find it useful, I'll include it in the nightly.

Topic split from

Old discussion about vertical stripes:

With crop_rec, these stripes also appear in H.264. They can be fixed in post, but it would be best if we could avoid them in the first place.

Vertical stripe fix for H.264:


Note: the stripes we are talking about in this thread are visible in highlights (e.g. sky at low ISO).

Is the vertical banding present in the highlights or shadows?

If the banding is in the highlights, darkframe subtraction won't help. Banding in the highlights is a sign of a multiplicative defect (the column gains are off slightly, which is fixed by multiplying pixels values by some correction factor).

If the banding is in the shadows, the "vertical stripe fix" won't help. Banding in the shadows is a sign of an additive defect (there is some linear offset, which is fixed by simply subtracting some correction value from the pixel values).
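To make the additive vs. multiplicative distinction concrete, here's a toy numeric example (the gains and offsets are made up): one column has a small gain error, another a small offset, and each correction only fixes its own defect.

```python
# Two rows: shadows (~100) and highlights (~2000), three columns.
clean = [[100, 100, 100], [2000, 2000, 2000]]

gain   = [1.0, 1.02, 1.0]   # column 1: multiplicative defect (visible in highlights)
offset = [0, 0, 5]          # column 2: additive defect (visible in shadows)

noisy = [[int(v * gain[c]) + offset[c] for c, v in enumerate(row)] for row in clean]

# multiplicative fix: divide by the estimated column gains
mul_fixed = [[round(v / gain[c]) for c, v in enumerate(row)] for row in noisy]
# additive fix: subtract the estimated column offsets
add_fixed = [[v - offset[c] for c, v in enumerate(row)] for row in noisy]
```

In the noisy image, the gain error adds 40 units in the highlights but only 2 in the shadows, while the offset error is the same 5 units everywhere (so it's only noticeable in the shadows). Dividing out the gains leaves the offset untouched, and subtracting the offsets leaves the gain error untouched.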

Original message:
I'd say let's review the vertical stripe fix on 1.1.3, and if it's everything alright, I'll include it in the main builds (and fix the 1.2.3 crop build) soon.

Since I'm a bit stuck with DIGIC 6, I took the cpuinfo module from CHDK and integrated it with the portable display code. This should give detailed info about the hardware (CPU, caches, memory configuration and so on).

Besides the DIGIC 6 cameras, I'm also interested in the results from recent ports (70D, 100D, 1200D); tests from other cameras are also welcome, but they are mostly for fun.

Source code: the recovery branch

autoexec.bin - for all ML-enabled cameras
CPUI1300.FIR (1300D)

This code is pretty verbose - it will show a few pages of low-level info. You will need to take screenshots to be able to read all that stuff.

As I'm on a very slow network connection, please do not upload large screenshots. If possible, it would be best if you could write down the info as plain text. If not, please try to keep the image size small (under 50K each).

Reverse Engineering / MPU communication
« on: July 22, 2016, 11:26:59 AM »
There has been some progress in understanding the communication between the main CPU and the MPU (a secondary CPU that controls buttons, lens communication, shutter actuation, viewfinder and others), so I think it's time to open a new thread.


* Code to dump MPU firmware: modules/mpu_dump
* NikonHacker emulator for TX19A:
* Communication protocol emulated in QEMU: qemu/eos/mpu.c
* How to log the MPU messages: [1] [2] [3]
* Early discussion regarding button interrupt:
* Button codes in QEMU:
* First trick implemented using a MPU message:

500D, LV
Code: [Select]
mpu_send(06 04 09 00 00)
mpu_recv(3c 3a 09 00 3c 3c e0 00 3f 80 00 00 38 12 c0 00 b9 cb c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 08 08 11 10 50 49 02 59 88 88 00 32 00 00 00 00 00 01 00 00 00)

0x32 - focal length
0x10 - aperture

Now we can read lens_info in Photo mode.
Just call mpu_send(06 04 09 00 00); the CPU receives the data and automatically overwrites the lens_info property.
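If you want to script around these dumps, parsing them is trivial (helper name is mine). Note the 0x32 byte (50 decimal, i.e. the focal length noted above) near the end of the reply:

```python
def parse_mpu(msg):
    """Turn a space-separated hex dump into a list of byte values."""
    return [int(b, 16) for b in msg.split()]

recv = parse_mpu(
    "3c 3a 09 00 3c 3c e0 00 3f 80 00 00 38 12 c0 00 b9 cb c0 00 "
    "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 08 08 "
    "11 10 50 49 02 59 88 88 00 32 00 00 00 00 00 01 00 00 00"
)
# position of the focal length byte in this particular dump
print(recv.index(0x32))
```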

Topic split from MLV Lite thread.

I'm trying to squeeze the last bit of performance out of MLV Lite, to make it as fast as (or maybe faster than) the original raw_rec. You can help by testing different builds and checking which one is faster.

Original message:

I made a small experiment with mlv_lite.

In dmilligan's implementation, frames are 4K larger than with the original raw_rec (because of the MLV headers stored for each frame, and also because of alignment issues). In my version, frame sizes are identical to the original raw_rec.
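As a rough illustration of where such per-frame overhead comes from (all numbers made up, not measured from mlv_lite): each frame carries a header, and the total is then rounded up to an aligned size.

```python
def align_up(size, alignment):
    # round size up to the next multiple of alignment (a power of 2)
    return (size + alignment - 1) & ~(alignment - 1)

# hypothetical numbers, just to show how header + padding add up:
raw_frame  = 1928 * 1080 * 14 // 8     # 14 bpp raw payload
mlv_header = 64                        # hypothetical per-frame header size
padded     = align_up(raw_frame + mlv_header, 4096)
overhead   = padded - raw_frame        # extra bytes per frame vs. the bare payload
print(overhead)
```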

What I'd like to know: is there any noticeable speed difference between the two versions? How does it compare to the old raw_rec?

My build:
to be compared with the one from first post, and to the old raw_rec.

Caveat: my version does not handle 4GB limit well; I'll fix it only if the speed difference is noticeable.

I've just got this card hoping it will reduce wear and tear on my cards and card readers. As you can imagine, constantly swapping the card between camera and PC is not pleasant for any of the devices involved (the card, the card reader and the camera's card slot all suffer).

Short review: (my model is W-03 8GB)

- SLOW (1 minute for downloading an 18MB file...)
- quite hard to set up (it took me about 3 hours from unpacking to being able to copy files on it)
- some things just don't seem to work (such as internet passthrough, or manually enabling/disabling wifi)
- formatting the card removes wifi functionality (!)
+ it has a nice logo with Toshiba printed on it :D
+ it has documentation, developer forum, Lua scripting, all sorts of bells and whistles (too bad the basics aren't working well...)

Side note: a while ago g3gg0 got a Transcend wifi card, and he mentioned it's very slow, so I hoped this one would be better. Looks like it isn't.

How to set up on Linux

So, to save you from hours of fiddling, here's a short guide on how to get it working on Linux, to the point of being able to run "make install" - that is, to upload ML on it without any cables:

- make a backup copy of the files from the card, in particular:
     - DCIM/100__TSB/FA000001.JPG
     - SD_WLAN/*
     - GUPIXINF/*/*
- format card (from camera or from any other device)
- put the files you just backed up, back on the card (to restore wifi functionality)

Network setup (assuming you have a wireless router):
- put this in the config file:
Code: [Select]
APPSSID=<your network name>
APPNETWORKKEY=<your network password>
- power-cycle the card, so it will connect to your router, just like any other device on the network
- configure your router so it always assigns the same IP address to the card
- check the new ip (ping the card)

Uploading ML:
- mount the card as WebDAV in your favorite file browser (on my system: dav://...)
- find out where that location is mounted, e.g. by opening some file in a text editor and checking its path
- on my system, that path is: /run/user/1000/gvfs/dav:host=,ssl=false
- put this in Makefile.user (in the directory containing ML source code)
Code: [Select]
UMOUNT=gvfs-mount -u dav:// && echo unmounted
- make sure the camera is not writing to card (important!)
- run "make install" from the platform/camera directory.
- restart the camera by opening the battery door (important! this ensures the camera will no longer write to the card)
- make the card bootable if it isn't already (for example, copy ML-SETUP.FIR manually on the card and run Firmware Update)
- reboot the camera to start the latest ML you just uploaded.

"Unbricking" the card - if you have formatted it by mistake without a backup

- download Toshiba FlashAir utility from here
- install it (installation works under Wine, the utility doesn't)
- find the files you should have copied before formatting, here: c:/Program Files (x86)/TOSHIBA/FlashAirTool/default/W-02
- these files appear to work with W-03 as well; just copy them to the card, then reconfigure it

(note: I do have a Windows XP box, but couldn't manage to install that utility on it; didn't try too hard, gave up after 15 minutes or so)


- make qinstall (to only upload autoexec and maybe the modules you are working on - did I mention it's SLOW?)
- [DONE] restore Toshiba files after format (so you don't lose wifi capabilities when formatting the card from the camera)

During the last few weeks I have finally managed to sit down and implement the 3x crop mode discovered by Greg a while ago, and summarized here. This feature could be very useful for wildlife, astro, or just for bragging on the forums about how cool your camera is :)

How does it work?

It modifies Canon's 1080p and 720p video modes by altering the sensor area that is going to be captured. Resolution and nearly all other parameters are left unchanged.

That means:
- it works with both H.264 and RAW
- works at all usual frame rates: 24/25/30/50/60 fps (with some quirks at high FPS)
- preview, sound, overlays, HDMI out... most of the stuff should just work as expected.


update: here's one from kgv5

Not yet, but I have a feeling DeafEyeJedi is already on it :) scroll down :)


On 5D3 (other cameras may be different, we'll see):

- framing almost centered (only roughly checked by zooming on a test subject on the camera screen)
- 720p aspect ratio:
   - at 720p (50/60fps), we are sampling the sensor at 1:1 crop, but Canon uses a 5x3 pixel binning
   - that means H.264 video will be squashed - resize the video in post to 1280x432 or 1920x648
   - however, raw video will have 1:1 pixel ratio (not squashed, just very wide - up to 1920x632)
- there is a small black border at the top of the frame, if you record at max resolution in RAW
- it may have side effects such as sensor overheating, camera exploding or displaying BSODs.

As usual - if it breaks, you get to keep both pieces.


The current implementation only works on 5D3, and I've tested it only on 1.1.3. The module is not yet compatible with current nightlies, so you need a full package (not just the module).

As you can see if you scroll down, it is possible to port this on many other cameras. It's just not very straightforward. But, on the bright side, Maqs is already eager to port it to 6D, and I'm sure others will follow.

Note that 600D and 70D already have this feature from Canon, and 650D, 700D and EOS M already have it in ML with a little hack. All other cameras could already use the crop mode when recording RAW from the 5x zoom view, but with some quirks (mainly bad preview and off-center image). So, this is nothing really new - maybe just a little more usable.

Why is a separate build needed? Because this module uses an experimental patching library, which seemed to work fine while I was writing the code, but as soon as I took it outside (about one month ago), it crashed almost every time I used ETTR + Dual ISO. I've fixed the bug since then, but you can imagine you don't want this level of "stability" in the nightly builds.

However, this library paves the way to implementing the long-awaited ISO tweaks (with real ISOs lower than 100, including a small dynamic range boost). I've also used this library as a backend for low-level tweaks such as choosing FAT32 or exFAT when formatting a card from the camera. So, let's test it and iron out all the quirks!


- source code
- 5D3 1.1.3: (build log)
- 5D3 1.2.3: (build log) (confirmed by Hans_Punk)
- other cameras: hopefully coming soon


- port it to other cameras
- merge into nightly builds


- grab and from the ISO research thread, then:
- try to understand what those registers do, and which ones need to be changed to achieve various effects
- check black bars with raw_diag, option OB zones (trigger with long half-shutter press in LV)
- optional: check DR, SNR curve, full well and read noise with raw_diag, option "SNR curve (2 shots)", trigger with "Dummy Bracket"
- take your time to read and experiment; it's very time-consuming, but once you get the hang of it, be careful - it's addictive.

Porting checklist

- clean image (without weird artifacts)
- clean turning on and off, in all the supported video modes
- clean switch to/from other modes (5x/10x zoom, other video modes, photo mode - these should not be affected)
- black bars should be larger than or equal to the values assumed in raw.c (check with raw_diag OB zones)
- centered image: put the focus box in the center of the image and zoom in; the subject should not move
- menu: if there is any mode where the patch is not working, it should print a warning


Greg - original findings on 500D
Levas - for finding the equivalent registers for 6D
mothaibaphoto - for finding the 5D3 register values for 30/50/60 fps
Maqs - for the lightweight code hooks used in the backend
g3gg0 - for laying out the foundation about ADTG registers, ENGIO registers and other low-level stuff that tends to be forgotten once it's up and running.

General Development Discussion / Portable ROM dumper
« on: January 25, 2016, 09:29:53 AM »
Lately I've got a few softbricked cameras to diagnose, and struggled a bit with the ROM dumper from bootloader: it wasn't quite reliable. A while ago, g3gg0 reimplemented it with low-level routines (which worked on his camera, but not on mine). Today I looked again at the old approach, and it looks like the file I/O routines from bootloader had to be called from RAM, not from ROM.

So, I've updated the code and it needs some testing. I've emulated this in QEMU, but the results may be different on real hardware.

What you have to do:

- download autoexec.bin
- place it on a card without any important data on it (it might corrupt the filesystem if anything goes wrong)
- the display looks roughly like this:
- after it's finished, look on the card, and you will find 4 files: ROM[01].BIN and ROM[01].MD5.
- you don't have to upload them, just check the MD5 checksum:
  - Windows: you may use
  - Mac, Linux: md5sum -c *.MD5
- repeat the test on the same card (let it overwrite the files), then on a card of a different size (and maybe a different filesystem).

Some cameras have only ROM1 connected, so dumping ROM0 will give just random noise. In this case, the ROM0 checksum may not match, but that's OK.

The ROM dumper should recognize all ML-enabled cameras, except for 5D2, 50D and 500D. These old models do not appear to have file writing routines in the bootloader (or, at least I could not find them). The QEMU simulation works even on exotic models like 1200D or EOS M2.

So, you don't have to upload any files or screenshots. Simply verify the MD5 checksums on your PC (if in doubt, paste the md5sum output).

That's it, thanks for testing.

Reverse Engineering / Pixel binning patterns in LiveView
« on: January 21, 2016, 08:52:22 AM »
Original discussion:

I wanted to split the topic, but that would make the original discussion harder to follow, so I'm just copying the relevant parts here.

Finally finished stuffing around, and here is a good bunch of results.  Enjoy!

From the above data, I'll try to guess the pixel binning factors from LiveView (and I'll ask SpcCb to double-check what follows):

My quick test, at ISO 6400:
Code: [Select]
         gain       read noise     ratio (compared to 5x)
720p:    1.43       14.79          14.74
1080p:   0.88       14.75          9.07
5x:      0.097      23.64          1

Numbers from Audionut:
Code: [Select]
         gain       read noise     ratio (compared to 5x)

ISO 100:
720p:    73.48      6.93           11.9        (note: it's very hard to tell how much is read noise
1080p:   53.78      6.54           8.7          and how much is Poisson noise from a nearly straight line)
5x:       6.15      5.98           1
photo:    5.11      6.77           0.83

ISO 200:
720p:    44.87      7.22           14.4
1080p:   27.50      6.76           8.84
5x:       3.11      6.26           1
photo:    2.58      7.08           0.83

ISO 400:
720p:    22.50      7.34           14.6
1080p:   13.94      6.90           9.05
5x:       1.54      6.70           1
photo:    1.27      7.61           0.82

ISO 800:
720p:    11.40      7.77           14.6
1080p:    7.07      7.32           9.06
5x:       0.78      7.32           1
photo:    0.66      8.60           0.85

ISO 1600:
720p:     5.80      8.78           14.7
1080p:    3.54      8.34           8.98
5x:       0.394     9.94           1
photo:    0.324    11.10           0.82

ISO 3200:
720p:     2.91      10.82          14.9
1080p:    1.81      10.45          9.23
5x:       0.196     14.75          1
photo:    0.166     16.28          0.85

ISO 6400:
720p:     1.41      14.81          14.7
1080p:    0.87      14.67          9.06
5x:       0.096     23.90          1
photo:    0.082     30.09          0.85

ISO 12800:
720p:     0.71      29.69          14.2
1080p:    0.44      29.44          8.8
5x:       0.050     58.40          1

Raw buffer sizes (active area):
- photo mode: 5796x3870
- 1080p: 1932x1290
- 720p: 1932x672 stretched (covers roughly 16:9 in LiveView)

Ratio between photo mode and 5x zoom: 0.83 (so if the 5x zoom captures a little more highlight detail, that's OK). The difference may also be because LiveView uses an electronic shutter, while photo mode uses a mechanical shutter. So, I'll use the 5x zoom as the reference for the other LiveView modes.

From the above data, I now have very strong reasons to believe that 5D3 does a 3x3 binning in 1080p, and a 5x3 binning in 720p (5 lines, 3 columns).

(if you shoot 720p on 5D3, the desqueezing factor - to correct the aspect ratio of your footage - is therefore exactly 5/3 = 1.67x)

A possible 3x3 binning (and easy to implement in hardware) would be to average each sensel and its 8 neighbours of the same color (considering the two greens as separate colors, as in the well-known four-color demosaicing algorithms). This binning scheme can be easily extended to 720p (5x3), but might cause some interesting artifacts on resolution charts.


A more complex 3x3 binning (very unlikely to be implemented in hardware, since it requires complex logic and knowledge about each pixel's color) could be:

(I'm showing it just for completeness, but I think the first pattern is much more likely to be used).

If anybody could shoot some resolution charts in LiveView (silent pictures in 5x, 1080p and 720p, without touching the camera - I need pixel-perfect alignment or better), I can verify whether these patterns are indeed the correct ones. If you don't use a remote release, you can run this test with the "Silent zoom bracket" option from the latest raw_diag to avoid camera movement.

Side note: the registers that control the downsizing factors are:
- Horizontal: CMOS[2], which also controls the horizontal offset; you can select full-res (1:1) or downsized by 3
- Vertical: ADTG 0x800C (2 for 1080p, 4 for 720p and 0 for zoom, so it should be the downsizing factor minus 1; other values are valid too)

Other cameras: I don't have much data, but from what I have, the binning factor seems to be 3. For example, the data from 50D (dsManning) looks like this:
Code: [Select]
         gain       read noise     ratio (compared to photo)

ISO 100:
1080p:    7.67      5.34           3.4
photo:    2.26      6.15           1

ISO 200:
1080p:    4.20      5.48           3.85
photo:    1.09      6.52           1

ISO 400:
1080p:    2.04      5.89           3.4
photo:    0.60      7.97           1

ISO 800:
1080p:    1.04      7.30           3.4
photo:    0.31     10.94           1

ISO 1600:
1080p:    0.53     10.32           3.5
photo:    0.15     16.12           1

ISO 3200:
1080p:    0.53     10.45          nonsense :)
photo:    0.08     38.06          1

and from 500D (Greg):

Code: [Select]
         gain       read noise     ratio (compared to photo)
ISO 100:
photo LV: 7.38      6.34           3.3
photo:    2.23      6.82           1

From the resolution charts (the first one I could find was this), most cameras (except 5D3) show artifacts as if they were skipping lines, but not skipping columns.

Therefore, I believe the binning pattern looks like this:

but I'm waiting for your raw_diag tests to confirm (or reject) this theory.

An interesting conclusion is that the 5D3 does not throw away any pixels in LiveView. Then you may wonder: why is binning a full-res CR2 by 3x3 in post cleaner? Simple: binning in software averages out all noise sources, while binning in the analog domain (as the 5D3 does) only averages out the noise introduced before binning (here, the shot noise and perhaps a small part of the other noise types), but cannot average out the noise introduced after binning (here, the read noise, which is quite high on Canon sensors).

Therefore, at high ISO (where the shot noise is dominant), the per-pixel SNR on 5D3 1080p is improved by up to*) log2(sqrt(9)) = 1.58 EV, compared to per-pixel SNR in crop mode. On the other cameras (3x1 binning), per-pixel SNR is improved by up to log2(sqrt(3)) = 0.79 EV.

So, the noise improvement from the better binning method is up to 0.8 EV at 1080p (ranging from 0 in deep shadows to 0.8 in highlights). That's right - throwing away 2/3 of your pixels will worsen the SNR by only 0.8 stops (maybe not even that).

*) If the binning were done in software, you could simply drop the "up to" - the quoted numbers would be the real improvement throughout the entire picture :)

This one is real :P
(and the Linux port is real as well)

I just discovered the 8086tiny emulator - plain C source code, minimal dependencies - so I managed to compile it as an ML module, and now I'm running FreeDOS on the camera :)

- (to be copied on the card, under ML/MODULES)
- bios (to be copied on card root)
- fd.img (to be copied on card root)
- IME modules from g3gg0, to be able to type commands at the DOS prompt

Source code:

- FreeDOS will start on top of DryOS, at camera startup
- press SET to start typing commands in the IME editor
- the only commands I've tested were "dir" and "bogomi16", on 60D.

General Development Discussion / Linux on your Canon DSLR? Why not?
« on: April 01, 2015, 08:00:24 AM »
We, the Magic Lantern Team, are very proud to present a new milestone in DSLR customization!


(edit: after playing along for a while, making it look like an April Fools' joke, we can assure you: this is not a fake!)

Starting from our recent discovery about display access from bootloader, we thought, hey, we could now have full control of the resources from this embedded computer. At this stage, we knew what kind of ARM processor we have (ARM 946E-S), how much RAM we have (256MB/512MB depending on the model), how to print things on the display (portable code), how to handle timers and interrupts, how to do low-level SD card access on select models (600D and 5D3), and had a rough idea where to start looking for button events.

So, why not try running a different operating system?

We took the latest Linux kernel (3.19) and did the first steps to port it. As we have nearly zero experience with kernel development, we didn't get too far, but we can present a proof of concept implementation that... runs the Linux kernel 3.19 on Canon EOS DSLR cameras!
- it is portable, the same binary runs on all ML-enabled cameras (confirmed for 60D, 600D, 7D, 5D2 and 5D3)
- allocates all available RAM
- prints debug messages on the camera screen
- sets up timer interrupts for scheduling
- mounts an 8 MiB ext2fs initial ramdisk
- starts /bin/init from the initrd
- this init process is a self-contained, libc-less hello world
- next step: build userspace binaries (GUI, etc)

Demo video:

Download: autoexec.bin

Source code (WIP):

We hope this proof of concept will encourage you to tinker more with your new embedded computer. Maybe you want to run Angry Birds on it, or maybe Gimp? :)


About one month ago, g3gg0 found a way to access the LCD display from bootloader context, without calling anything from the main firmware. This makes a very powerful tool for diagnosing bricked cameras, and also a playground for low-level reverse engineering.

The only camera-specific bits for printing stuff on the LCD are:
- we have to call a Canon routine that initializes the display (which is in bootloader, not in main firmware): we named it "fromutil_disp_init".
- for the YUV layer, newer cameras use YUV422, while older cameras (only checked 5D2) use YUV411. This difference is not essential (you can print on the BMP layer only).

Today I wrote an autodetection routine that finds the display init routine from ROM strings, and the result is a portable "hello world" binary. That means, it should print something on any ML-enabled camera (and maybe even on cameras without ML). Same binary for all cameras, of course.

I've tested the code on 5D3 and 60D, and I'm looking for confirmation on the other models.

If you are already running ML, just download this autoexec.bin, run it, take a picture of your camera screen (sorry, no screenshots yet) and upload it here.

If you have a Canon DSLR without an ML port available, we need to sign this binary (create a FIR). Just mention your camera model and I'll create one for you. Don't expect this to speed up the porting process for your camera. But I hope this proof of concept will convince you to start tinkering with your new little computer :)

Pages: [1] 2 3 ... 6