http://www.magiclantern.fm/forum/index.php?topic=19933 -> commands to translate the numbers from the stack trace into source code lines. These addresses are from ML code; if I compiled my own binary from the same source, I would probably get different addresses, unless I had the same compiler version and the same modifications to the source code, if any. Still, let me try it:
hg clone https://bitbucket.org/daniel_fort/magic-lantern/
cd magic-lantern
hg up 69d91c7c4317 -C # changeset from the stack trace log
# all options and source code modifications are visible in autoexec.bin, but as I don't have it, I'm just guessing
echo "CONFIG_QEMU=y" > Makefile.user
sed -i 's!#define CONFIG_HELLO_WORLD!//#define CONFIG_HELLO_WORLD!' src/config-defines.h
cd platform/EOSM2.103
make clean; make
eu-addr2line -s -S --pretty-print -e magiclantern 0x44c8dc 0x477e28 0x44ca64
my_big_init_task+0x58 at boot-hack.c:307
stateobj_start_spy.constprop.0+0x2c at state-object.c:251
ml_assert_handler+0x60 at boot-hack.c:539
eu-addr2line -s -S --pretty-print -e magiclantern 0x0044CA04 0x0044C468
ml_assert_handler at boot-hack.c:530
backtrace_getstr at backtrace.c:877
Looks OK!
The stack trace with -d callstack is more complete (it finds the 0xUNKNOWNs and also contains function arguments); you can get it with a breakpoint on ml_assert_handler and calling print_current_location_with_callstack from there.
Even better - we are debugging code compiled by us, which - on the qemu branch - also has debug info for gdb,
just like a regular PC program. This info is not copied to the card - it's kept in the "magiclantern" file, which is actually an ELF. With this info, gdb can print a backtrace as well, and it's probably better than ours, as long as the error is in our code. There's no debug info in Canon firmware, other than their (very helpful) debug messages.
. ./export_ml_syms.sh EOSM2.103
./run_canon_fw.sh EOSM2,firmware="boot=1" -d debugmsg,callstack -s -S & arm-none-eabi-gdb -x EOSM2/debugmsg.gdb
...
CTRL-C before the error
(gdb) symbol-file ../magic-lantern/platform/EOSM2.103/magiclantern
(gdb) b ml_assert_handler
(gdb) continue
...
Breakpoint 4, ml_assert_handler (...)
(gdb) bt
#0 ml_assert_handler (msg=msg@entry=0x4a5ad8 "streq(stateobj->type, \"StateObject\")", file=file@entry=0x4a5afd "../../src/state-object.c", line=line@entry=0xfb, func=func@entry=0x49ca48 <__func__.7031> "stateobj_start_spy") at ../../src/boot-hack.c:530
#1 0x00477e2c in stateobj_start_spy (stateobj=0xe51f3e14, spy=0x477cd0 <stateobj_lv_spy>) at ../../src/state-object.c:251
#2 0x0044c8e0 in call_init_funcs () at ../../src/boot-hack.c:307
#3 my_big_init_task () at ../../src/boot-hack.c:448
#4 0x0000ca18 in ?? ()
(gdb) print_current_location_with_callstack
Current stack: [1f39c8-1ef9c8] sp=1f39a8 at [ml_init:44ca04:477e2c] (ml_assert_handler)
0x44C884 my_big_init_task(0, 44c884 my_big_init_task, 19980218, 19980218) at [ml_init:ca14:1f39c0] (pc:sp)
0x477E80 state_init(32, 3, 49d60d "Calling init_func %s (%x)", 477e80 state_init)
at [ml_init:44c8dc:1f39b0] (my_big_init_task) (pc:sp)
0x44CA04 ml_assert_handler(4a5ad8 "streq(stateobj->type, "StateObject")", 4a5afd "../../src/state-object.c", fb, 49ca48 "stateobj_start_spy")
at [ml_init:477e28:1f39a8] (stateobj_start_spy.constprop.0) (pc:sp)
With debug info available, gdb gives a pretty good backtrace. We don't have this luxury when debugging Canon code though, and that's the reason I wrote the callstack analysis. The callstack trace does not require any debug info, but requires the code to be instrumented (therefore it's slower than normal execution). The backtrace from the assert log does not require instrumentation, but also gives a lot less info.
With my WIP version of QEMU (option to identify tail function calls), the stack trace would be:
[...] -d debugmsg,callstack,tail [...]
[...]
(gdb) print_current_location_with_callstack
Current stack: [1f39c8-1ef9c8] sp=1f39a8 at [ml_init:44ca04:477e2c] (ml_assert_handler)
0x44C884 my_big_init_task(0, 44c884 my_big_init_task, 19980218, 19980218) at [ml_init:ca14:1f39c0] (pc:sp)
0x477E80 state_init(32, 3, 49d60d "Calling init_func %s (%x)", 477e80 state_init)
at [ml_init:44c8dc:1f39b0] (my_big_init_task) (pc:sp)
0x477DFC stateobj_start_spy.constprop.0(e51f3e14, 4bd9a4, 0, 477e80 state_init)
at [ml_init:477e9c:1f39b0] (state_init) (pc:sp)
0x44CA04 ml_assert_handler(4a5ad8 "streq(stateobj->type, "StateObject")", 4a5afd "../../src/state-object.c", fb, 49ca48 "stateobj_start_spy")
at [ml_init:477e28:1f39a8] (stateobj_start_spy.constprop.0) (pc:sp)
Note: call_init_funcs() is not listed in my stack trace because the compiler inlined it. Still, gdb did a very good job identifying it. In my trace, it only says state_init was called from 44c8dc which maps to boot-hack.c:307.
Why did my stack trace list 0x4bd9a4 as the second argument of stateobj_start_spy?! In gdb's backtrace, it's 0x477cd0 (which is correct).
Answer: the compiler hardcoded this one, so the compiled function (in assembly) only receives one argument:
(gdb) disas state_init
0x00477e80 <+0>: push {r3, lr}
0x00477e84 <+4>: mov r3, #589824 ; 0x90000
0x00477e88 <+8>: ldr r0, [r3, #1456] ; 0x5b0
0x00477e8c <+12>: bl 0x477dfc <stateobj_start_spy>
0x00477e90 <+16>: mov r3, #262144 ; 0x40000
0x00477e94 <+20>: ldr r0, [r3, #1252] ; 0x4e4
0x00477e98 <+24>: pop {r3, lr}
0x00477e9c <+28>: b 0x477dfc <stateobj_start_spy>
(gdb) disas stateobj_start_spy
...
0x00477e58 <+92>: ldr r3, [pc, #28] ; 0x477e7c <stateobj_start_spy+128>
0x00477e5c <+96>: str r3, [r4, #12]
0x00477e60 <+100>: mov r0, #0
0x00477e64 <+104>: pop {r4, pc}
(gdb) x 0x477e7c
0x477e7c <stateobj_start_spy+128>: 0x00477cd0
Anyway. The error from state objects appears to be a check that's working very well (which means the state object definitions on working ports can be trusted to be correct).
The error from read_entire_file is unrelated. Exercise: find out where it comes from (using the same technique).