Topics - names_are_hard

#1
Camera-specific Development / 200D shoots raw video
February 09, 2024, 04:08:05 AM
After several years of learning ML code, trying to understand fragmentary docs, and in some cases writing the docs myself, the 200D has raw video:



Bugs / quirks / limitations:
- only 14 bit is tested
- lossless modes are listed but definitely not supported; these should be hidden (this is a problem with mlv_lite code)
- mlv_lite has several hard-coded defaults that don't make sense for 200D (I think we need to make defaults detect cam model and choose behaviour intelligently)
- it records at 19.051 fps.  I have a few guesses as to why and will fix this
- 1280 * 720 is continuous, higher resolutions are not (200D SD interface is limited to about 40MB/s)
- MLV metadata is wrong in some places (a bunch of fields are filled in as 0 or MAX_INT; it's a division-by-zero bug in ML code)

Code is available: https://github.com/reticulatedpines/magiclantern_simplified/compare/dev...200d_raw_draft
Those are the changes compared to the main branch; they're fairly small!  This is a good place to look if you're interested in porting raw video to other Digic 7 models.

No direct links to builds yet, but if you ask in discord and have a good reason for using something that certainly hasn't been tested well, I might hand one over.  Otherwise you can wait till I've cleaned it up.

More cameras are in the works...
#2
General Development / Dual-core cams direct RPC
November 05, 2023, 06:21:21 PM
Digic 7, 8 and X are dual-core parts.  This is distinct from "Dual Digic" - that means a cam with two separate Digic chips, in different "sockets".  Dual-core is two cores on the same die.

We found task_create_ex() a long time ago; it allows one core to create a task that starts on the other.  This task goes into normal task scheduling, which has pre-emption and priorities, so when it runs, or how often, is not strongly controlled by the code creating the task.

Recently I found a method for direct RPC.  A core can send a function address to another core, which will immediately switch to that code, regardless of current task.  In fact, this works before the task system has been initialised, which is valuable.

All addresses are from 200D 1.0.1 unless otherwise stated.  These are fairly easy to find in other cams: there are lots of RPC strings, including references to create_named_semaphore().  There's an associated spinlock global that helps, too.

Top-level DryOS function looks like this, 0x34b5a:

int _request_RPC(func *param_1, void *func_param)


The first param is a function pointer that takes void * and returns void.  At least on 200D, the param must fit in a single register (no large structs), because a blx r1 happens at one point.

_request_RPC() disables interrupts then calls an inner function that does the work.  The target func address is written into a global - there's a single address for this, so there can only be one RPC func in progress at a time.  On the Canon side a global spinlock is used to ensure no conflicts; on the ML side we use a semaphore.

The caller then wakes up all CPUs and sends an interrupt: send_software_interrupt(0xc, cpu_mask);
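To make that flow concrete, here's a rough C sketch of the sequence just described (everything except send_software_interrupt() is a placeholder name I've made up, and the real disassembly may order things differently):

extern void acquire_inter_cpu_spinlock(void);     // placeholder for Canon's spinlock take
extern void release_inter_cpu_spinlock(void);     // placeholder for Canon's spinlock release
extern void wake_up_cpus(unsigned int cpu_mask);  // placeholder
extern void send_software_interrupt(unsigned int sgi, unsigned int cpu_mask);

static void (*volatile RPC_pending_func)(void *); // the single global slot for the target func
static void *volatile RPC_pending_arg;            // assumed: the arg travels the same way

static int request_RPC_inner_sketch(void (*f)(void *), void *arg, unsigned int cpu_mask)
{
    acquire_inter_cpu_spinlock();            // Canon side: one RPC in flight at a time
    RPC_pending_func = f;
    RPC_pending_arg = arg;
    wake_up_cpus(cpu_mask);
    send_software_interrupt(0xc, cpu_mask);  // SGI 0xc lands in check_for_RPC() below
    release_inter_cpu_spinlock();
    return 0;
}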

During early init, both CPUs have registered a handler for SGI 0xc, 0x349d6:


void register_SGI_handler_0xc(void)

{
  register_GIC_handler(0x1cc, check_for_RPC, 0);
  return;
}

The handler looks like this, 0x349ba:

void check_for_RPC(void)
{
  uint cpu_id = get_current_cpu();
  if ((1 << (cpu_id & 0xff) & ~*(uint *)(inter_cpu_spinlock - 4)) == 0) {
    call_RPC_func();
    return;
  }
  return;
}


It looks to me like this is generic code that can cope with up to 8 cores, allowing any core to trigger a function on a masked set of cores (including itself if desired).  Presumably this is library code, and it looks a bit redundant when MAX_CPUS is set to 2.  It seems clear that SGI 0xc is reserved for inter-core comms.

The reason this was relevant to me is that I've been working on MMU remapping code.  For this, you want each CPU to use the remapped addresses as early as possible - otherwise it will call code that uses the old content, a particular problem if that code initialises an external device.  Prior to finding this RPC code, I had no generic way to get cpu1 to see patched memory before it initialised its tasks.  Now, for all the D78X cams I've checked, it's possible to get cpu1 to take patched mem before it starts init1_task...  which means all tasks on both cpus will see our updated memory content.

As a side benefit this means we can intercept init1_task in the same way we do for init_task on cpu0.

Using the DryOS functions directly is a little tricky.  When a cpu wakes up and calls the target RPC function, it's in a loop, and will keep calling it until the function pointer gets set to NULL!  In practice, this means the target function is responsible for clearing the global, or the cpu will loop forever, constantly calling it.  You must also ensure the passed-in params remain valid until the RPC call completes - this is easy to forget since the call happens on cpu1; you could easily use local vars for storage, but they're on the cpu0 stack, and your function may end before cpu1 reads from that stack...

Consequently, in ML code, I wrap this in request_RPC(), which tries to make things safe, in dryos_rpc.c:


struct RPC_args
{
    void (*RPC_func)(void *);
    void *RPC_arg; // argument to be passed to above func
};

int request_RPC(struct RPC_args *args)
{
    extern int _request_RPC(void (*f)(void *), void *o);

    // we can only have one request in flight, cpu0 takes sem
    int sem_res = take_semaphore(RPC_sem, 0);
    if (sem_res != 0)
        return -1;

    // storage for RPC params must remain valid until cpu1 completes request,
    // don't trust the caller to do this.
    static struct RPC_args RPC_args = {0};
    RPC_args.RPC_func = args->RPC_func;
    RPC_args.RPC_arg = args->RPC_arg;

    int status = _request_RPC(do_RPC, (void *)&RPC_args);
    return status;
}

void do_RPC(void *args)
{
    extern int clear_RPC_request(void);
    struct RPC_args *a = (struct RPC_args *)args;
    a->RPC_func(a->RPC_arg);
    clear_RPC_request();
    a->RPC_func = 0;
    a->RPC_arg = 0;
    // request complete, cpu1 releases sem
    give_semaphore(RPC_sem);
}
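
For illustration, here's roughly how a caller might use this wrapper (the target function is made up for the example; request_RPC() copies the args into its own static storage before returning, so a local struct is fine on the calling side):

static void hello_from_other_core(void *arg)
{
    // hypothetical example target - do_RPC() clears the RPC global and
    // releases RPC_sem for us, so this only has to do its own work
    (void)arg;  // unused in this example
    DryosDebugMsg(0, 15, "Hello from cpu%d", get_current_cpu());
}

static void rpc_example(void)
{
    struct RPC_args args = {
        .RPC_func = hello_from_other_core,
        .RPC_arg = 0,
    };

    if (request_RPC(&args) != 0)
    {
        // RPC_sem wasn't available: another request was already in flight
    }
}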

#3
This post contains some lies.  It was a joke, a jape, a ruse.  It's mostly true!  Explanation below.

We've known this for a while: unusually, the 200D has an FPGA.  These are "Field Programmable" - they're designed to be configured at runtime, so you can make them do different things.

Turns out, this one is quite powerful, and after a lot of work I managed to modify the bitstream to run machine-learning algorithms.

I thought real-time object detection would make a nice demo.  Here I'm running YOLOv3, a one-shot, deep neural network algorithm:
https://pjreddie.com/darknet/yolo/

What do you think?


https://i.imgur.com/guA8Tm8.mp4



Apologies for the shaky cam on this one, I've got a second cam on a gorilla pod tucked under my arm.  Quite hard to film yourself filming:

https://i.imgur.com/D4MYBfE.mp4


This creates so many possibilities now that we can run modern AI on Canon cams!  Could be quite useful for Exif tagging images.
#4
I am currently the most active dev for modern Digic cams (i.e., badly supported new cams).  Kitor also seems to be back now, which is greatly appreciated!

I've been quite quiet lately, so I thought a big summary post to catch up would be useful.  Also, I'd like opinions from other active devs about future direction re code management.

If you don't want to read about technical stuff around ML code, you may want to skip this one :)
The notes below refer to my repo, here: https://github.com/reticulatedpines/magiclantern_simplified
Note that this is "mine" in the sense that I made it, not that I own the code, and not that I'm the only dev: anyone is welcome to contribute!  Kitor and coon42 are the biggest non-me contributors, I think, but we've had useful work from quite a few more.

A lot of the work I've been doing recently is on non-visible stuff (oh, and I did a 7D2 port): Qemu, automating ML testing, improving code quality.  This is boring for users but valuable for devs.  The main reason I am cautious here is that while the original intention of this repo was adding support for modern cams, it's turned into the most active, modern ML repo.  The original regression testing system isn't available to me, so I'm writing another one (I can re-use parts of the old one).  When that is workable, we will have higher confidence that changes for new cams don't break old ones.  At that point I'd like to get user testing for old cams, to ensure feature parity with the official repo(s).


Repo summary:

Quite a while back, I ported ML code so it expects to use Git, not hg.  I've done similar modernisation work in other areas.  It builds with a modern toolchain.  Greatly reduced compiler warnings (some of these fixed years-old bugs).  Some bugfixes that affect all cams, new and old.  A reliable fix for ISOless err on a few models.

This repo supports Digic 6, 7, 8 and X cams, 18 models so far.  It builds and *should* work on all old cams, too.  These are not well tested; more on this later.  Much work remains, but it's easy to get started if your cam is on this list; you have a working ML GUI, just very few features.  Pick a feature and solve it, one at a time!
See kitor's post about overall ML project status: https://www.magiclantern.fm/forum/index.php?topic=26852

A lot of work has been done to make adding support for a new cam easier.  When I have a new cam, I can usually add support to the point of reaching the ML GUI in a few days.  This will take longer for someone unfamiliar with ML, but it's not that hard.

Merged multiple branches (lua_fix, unified, qemu, digic6-dumper) - no need to frequently switch branches.

The big missing piece here is that we don't have crop_rec_4k integrated.  I would like to merge crop_rec code but I require it to not cause problems on any supported cams.  I don't want to maintain multiple long-lived branches.  This might mean fixing Digic 4 bugs, or disabling crop_rec features on Digic 4 cams so no bugs can be triggered.  Bilal may be working on this problem (I don't have the experience with crop_rec features, or cams to test it on).

Added MMU based memory patching support for cams which have MMU (D7 and up).  Based on srsa's code (thanks!) but extended.  Unpatching not yet supported.
This allows patching very large amounts of ROM code, much more than is possible on old cams.  In theory this means we can do a lot of very cool stuff.  There is effectively no limit on how many patches we can apply on modern cams.

As part of the above, simplified code for patch manager.  In some ways this makes the UI worse, but it removed a few hundred lines of code, and makes it easier to work with MMU related patching for modern cams.  Future work will introduce patchsets, which will allow unpatching on MMU and return non-MMU patching UI to the old look.

Wifi improvements: 200D can use wifi to send whatever we want, wherever we want, and get data back.  This code should be easy to port to other cams.  You can tether to your phone, and we could make it upload photos as you take them.  Many other possibilities.

Module system improvements:
- build process improved; much faster.
- crash bug removed from module build / load process (modules could be loaded at addresses too far from ML code to be called, this would crash).
- fixed old system including modules in zip that a cam couldn't run (bad dependency checking).
- module compatibility extended to Digic 6, 7, 8 and X.

Qemu-eos moved to separate repo and updated from 2.5.0 to 4.2.1:
- much easier to build.
- much easier to modify (no awkward patch file workflow, it's just a normal repo!).
- improved ARM emulation.
- better / faster SD emulation.
- some features of qemu-eos broke in the update, most have been fixed (90% complete?).

In theory, it works as before on old cams.  In practice, I don't know, and this will want testing.  I think this should wait until after I have a working Qemu regression testing system, but that should be soon(ish).  Also, when that is available, I'd like to update to Qemu 6 - 4 is no longer supported.  Internally, 4 is much closer to 6 than 2 was to 4: this update should be much less painful.
https://github.com/reticulatedpines/qemu-eos/tree/qemu-eos-v4.2.1


Future repo discussion questions:

Should we try to have a single official repo again?  We currently have lots of forks that different people maintain, for different purposes.  This is confusing for new users, and annoying for devs trying to support them.  Plus, the longer this continues, the harder it becomes to get all the good features in one place.

I feel the old official repo has significant problems.  One major one is that nobody is maintaining it.  We could try to get access again, but it's Mercurial and very few people understand that system - I wouldn't be able to maintain it even if I had access.  I also think that the approach of using long-lived branches and cross-merging code between them is confusing, expensive in terms of effort, and liable to introduce problems due to complex merges.

I think we should move to Git.  All devs know git; it's the de facto standard for source control.  If you don't know any VCS, git is the easiest to find tutorials for.  We have a hard time getting devs, and using an obscure system is off-putting.

Merging together the different repos will of course involve work and need co-operation.  I think it's worth it, but it's up to anyone that has their own fork as to whether they want to do this.  If you have ideas about how I can make this easier or more attractive, I'd like to hear them (hopefully when I have a testing system, that's a good thing you can take advantage of!).

Any official repo should be controlled by multiple trusted parties.  Git and github support this, so this is a question about defining and documenting how we want management of the repo to work.  To get things started, I'd suggest we want at least three active users with repo control, and potentially a larger group of people with commit rights / PR review rights.
#5
Don't get too excited - very few features work.  But, it's ML, and it's on 200D.

Edit 2022-10-31: bugfix for the ROM dump function in the Debug menu; offsets needed updating.  Build zip link updated.

Build: https://github.com/reticulatedpines/magiclantern_simplified/releases/download/release_200D_2022-10-31/magiclantern-Nightly.2022Oct31.200D101.zip
Firmware version: 1.0.1
Bootflag enabler: https://a1ex.magiclantern.fm/bleeding-edge/200D/BOOT200D.FIR
Repo: https://github.com/reticulatedpines/magiclantern_simplified

What works:
- ML menus
- 30 min LV timer disable; AKA webcam mode (NOT normal video recording, a 30 min limit remains here)
- Shutter count
- Screenshots
- ML overlays in LV
- various debugging features (crash logs, task mon, etc)

What doesn't work:
- everything else

To exit sub-menus, use Av, not Q.

I would describe the current status as a framework for porting ML to new cams.  A lot of the work has been on internals to support the differences between old and new generations, as well as changes to the repo and build system to make it easier to use on more modern systems.  It's much easier now for new devs to join in and work on things without too much pain.

ML boot process, inputs and GUI work on a wider range of cams in test, including: 750D, 850D, M50, RP, R.  These are either not stable enough yet, too early to release a build for, or there's nobody with time and access to the cam available to support them.

It is possible to use ML APIs to patch arbitrary RAM and ROM locations on Digic 7 and up.  This means all features that classic ML supports can be ported - if the hardware supports it.  This still leaves many unknowns, but does mean if you want to do dev work, you have a lot of power to investigate capabilities.

For cool features, the main thing we need is devs with the time and ability to reverse engineer camera and OS internals, especially the DMA controller.  This is how raw video works: instructing the DMA controller to connect devices together in a way the Canon GUI doesn't expose.  New cams do this differently than old cams, and so far this area isn't well understood.

Newer cams are very powerful, they just need work to free that power!

Large pieces that were required to get to this point:
- boot code for each new digic generation (A1ex, me)
- handling the new display / GPU (A1ex, kitor, me)
- fixing lens info for overlays (kitor)
- fixing task handling (turtius, me)
- MMU patching (srsa, me)
- module support (me)

Special thanks to Kitor for code reviews, design discussions and git help!

Special thanks to coon42 for PCB design for UART connector:
https://github.com/coon42/magic-lantern-dev-kit/tree/master/cable/gerber

Special thanks to Walter for many boring 200D tests on physical cam,
and answering thousands of ML questions in Discord.

https://i.imgur.com/yvZX1V1.mp4
#6
General Development / ML code reviews
July 21, 2022, 03:42:42 PM
Hello!  It's hard to find people with the ML knowledge to do code reviews, so perhaps devs on here would like to arrange something?  We could swap reviews for changes.

I have two recent changes that I'd really appreciate a review for:
- module changes that make cache hack usage compatible with Digic 678X (and fix a bug in the build system that affects all ML repos): https://github.com/reticulatedpines/magiclantern_simplified/compare/dev...d45_cache_fix
- working but minimal framework for MMU memory patches on Digic 678X cams: https://github.com/reticulatedpines/magiclantern_simplified/compare/dev...mmu_investigation

The MMU commits are kind of ugly and need cleaning up before merging, I'm not asking for a full review there but some validation that the general approach is sound.  Testing on cams isn't required, although the D45 branch should make builds that work on older cams (this has had some testing: thanks Walter!).

I'll review your weird hacks in exchange!
#7
Camera-specific Development / Canon 850D / T8i
November 26, 2021, 04:28:37 PM
I was taken by a fey mood, and have produced this:



This was my first time porting to Digic 8, and it's a bit different from the existing Digic 8 cams I've looked at.  There's a very important change to the early boot process that was hard to pin down: initialising the second core has changed.  Everything else seems normal enough so far; Kitor helped with the Ximr stuff, and it seems to be closest to RP.

Dumped the roms with Basic.  It works well enough in Qemu to test the very early code, although Qemu will need updating due to the boot process change: MMIO addresses are used differently, and Qemu assumes all D8 cams will be the same.  I haven't tried UART yet, but the connector is under the thumb grip, as is common on modern cams.

I don't recommend anyone try to run this yet, but the source may be useful for people looking at other D8 cams: https://github.com/reticulatedpines/magiclantern_simplified/commits/850d_initial_stubs

Buttons aren't mapped yet, graphics don't display how I'd expect, etc, lots of other bugs I'm sure.  I'll try to improve it to the same state as 200D.
#8
Some cams are dual-core.  I've worked out how to start tasks on the second core.  Cores can communicate using at least semaphores, so we can do cooperative multi-tasking.  This should be quite useful for splitting work, especially anything that can be easily pipelined or batched.  E.g., expensive operations on a sequence of frames or images - these can alternate between cores or be picked up from a queue by whichever core becomes ready first.  For CPU bound tasks this may be a large improvement.

I seem to recall CHDK / srsa already knew this, from some other thread, but I couldn't find it and there were no details.

On 200D 1.0.1, df008f0a is task_create_ex(), which is like task_create() but with an extra arg that selects which CPU to use.  I've tested 0 and 1; I believe -1 means "run on either/all CPU(s)", but this is untested.

Small code example:


static struct semaphore *mp_sem; // for multi-process cooperation

static void cpu0_test()
{
    while(1)
    {
        DryosDebugMsg(0, 15, "Hello from CPU0");
        take_semaphore(mp_sem, 0);
        msleep(1000);
        give_semaphore(mp_sem);
    }
}

static void cpu1_test()
{
    while(1)
    {
        DryosDebugMsg(0, 15, "Hello from CPU1");
        take_semaphore(mp_sem, 0);
        msleep(1000);
        give_semaphore(mp_sem);
    }
}

static void run_test()
{
    mp_sem = create_named_semaphore("mp_sem", 1);
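    // task_create_ex(name, priority, stack size, func, arg, cpu);
    // the last arg selects which CPU runs the task (0 or 1 here)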
    task_create_ex("cpu1_test", 0x1e, 0, cpu1_test, 0, 1);
    task_create_ex("cpu0_test", 0x1e, 0, cpu0_test, 0, 0);
    return;
}


This leads to the CPUs taking turns to print, once per second.
#9
General Development / Testers wanted: Qemu 4.2.1
November 04, 2020, 01:32:37 AM
Hello!  I have tidied up my work (with some help from a1ex on the multicore stuff :) ) porting Qemu from 2.5.0 (current ML uses this) to Qemu 4.2.1.  It works at least okay - it can get to the GUI on some cams known to be well supported in Qemu.  It emulates Digic 7 cams better than before.

I would like some testers who can compare results between 2.5.0 and 4.2.1 with different roms.  Note that I'm expecting 4.2.1 to generally be worse at this time.  I want to know how so I can improve it.  Modern Qemu is much easier to build on modern systems and emulates Arm better.  With your help, we can make it work as well as 2.5.0.

To do this testing you will need to be happy getting Qemu 2.5.0 from the ML "qemu" branch working, as well as getting my port working:
https://github.com/reticulatedpines/magiclantern_simplified/commit/5c05e1e073842b50c4bb0d6d666c3766bb74db24

Then, please try to find differences in behaviour between the two versions.  One significant difference is that 4.2.1 will crash if the cam tries to access unmapped memory - this seems common on the roms I've tested.  I've fixed this for, I think, 200D, 50D and 80D.  If it's not yet fixed in my port, this should assert with the address that caused problems, and it's an easy fix.  This is a good crash - emulation will be better in 4.2.1 once we find these crashing regions and I update the memory maps.

Different -d flags to Qemu will exercise different code and can cause different behaviours.  Try these if you have the time.  Use a bad -d option, e.g. "-d broken", and Qemu will give you a list.  Lines with EOS are worth testing (you will need to run Qemu in different ways to get these to work; see previous posts about ML Qemu).  I am most interested in cases where there are repeatable differences in behaviour between 2.5 and 4.2 - I would like logs from both runs so I can find and fix the problems.

One last thing...  good luck getting both built on the same system.  Qemu 2.5.0 is hard to build on modern systems.  Qemu 4.2.1 is hard to build on old systems.  This is a large part of the reason I want to do the upgrade!
#10
Just found out the Magiclantern build process has some dependency on Python 2.  I don't know exactly what it is, but on my system, where Python 3 is the default, "make zip" fails.  Plain "make" succeeds.  I think it's something to do with the nasty way module_strings.h is generated.

Anyway - should somebody with some Python experience want to port the build process to v3, it would be much appreciated.  Python 2 will be unsupported in a month.  It would be a good way to contribute for someone without ARM / assembly / C experience who can build ML by following the instructions.

Mostly I am making this topic so people realise ML is very soon to be dependent on completely unsupported software:
https://pythonclock.org/?1
#11
I'm trying to patch in some jump hooks for debugging.  I'm finding it hard to work out efficient ARM assembly for this (I'm an ARM noob).  In x86 I'd JMP 0x12345678 and it would be 5 bytes with no register side effects.  In ARM I can't set a dword constant in one go.  I'm also in Thumb mode.  The best I have so far is this, which kind of sucks:

        PUSH {R3, R4}
        MOV R4, 0x1234
        MOV R3, 0x5678
        LSL R4, R4, #16
        ADD R4, R3
        BX R4

That's 18 bytes, which feels bad to me.  Some functions I'm interested in are the same size!  Any better way to jump to an arbitrary offset?  Maybe I'd win by swapping out of Thumb first?

Alternatively, any ideas on how to accomplish the same idea efficiently in ARM would be appreciated - patch in a transfer to my own code to do arbitrary stuff, then cleanup register & stack changes and transfer back.
#12
I thought I would try to make it easier for people to build Magiclantern.  This is a work in progress and only brave people should help me test - but I do want testers!  This should work on Linux, Mac or Windows 10, but I have only tested on Linux.

The idea is we'd only need one set of instructions for building on any OS, and as a bonus everyone would be building with the same build tools, which is nice for debugging problems.  I hope the instructions can be much simpler than the current process.  There are some downsides but I think they're manageable.

To help, you will need git, and be happy to run command-line stuff.  Do this:
git clone https://bitbucket.org/stephen-e/ml_docker_builder
Then follow the instructions in the README.txt.

Please use README.txt (I want feedback on those instructions so that I can improve them).  However, so that people can see what I'm trying to do, the process is like this:

<install docker>
<become root or admin>
<copy-paste the following lines...>
docker build -t ml_build .
docker rm ml_build_output
docker create --name ml_build_output ml_build https://bitbucket.org/hudson/magic-lantern/branch/unified 5D3.113
docker start --attach ml_build_output
docker cp ml_build_output:/home/ml_builder/ml_build/autoexec.bin .

You should now have autoexec.bin.  You can change the repo or camera version to get different autoexec.bin.  It works with both Mercurial and Git repos (but this is pretty crude, I'm sure there are cases I haven't considered).

I *think* that is a fairly easy way to get started building ML?  If it isn't helpful, please let me know.  If there are obvious things that should be added, also let me know. I guess I want some ability to make the zipfile?  I don't know what most people use to create the files they need.

I also don't have a cam that works with ML, so I can't test whether this currently produces a good build.  I know ML can build with broken output under some compiler versions, etc - at the moment I just want to know if autoexec.bin builds for different people on different OSes.  If you want to try the autoexec.bin, that's up to you!  I give no guarantees!
#13
I think this affects ML.  Maybe not - they talk about Bitbucket Cloud, so perhaps we're on a service where they're not retiring Mercurial?

https://bitbucket.org/blog/sunsetting-mercurial-support-in-bitbucket
"Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020"

I don't like their decision to delete repos.  Putting them in a read-only mode would have been a lot kinder.

There are tools to migrate to Git (so I guess you keep your history?) but I know from personal experience that building ML has dependencies on having hg installed on your system.  It wasn't hard to remove these.

Long term this is probably good for ML - almost no-one uses Mercurial and needing to learn it must put some people off ML.  Short term it's annoying to migrate!
#14
I have an ML build problem in a port in progress.  My simple brain can write 10-line makefiles.  I have added #include vram.h (and bmp.h) to disp_direct.c, and my build fails with:

arm-none-eabi-ld: disp_direct.o: in function `disp_set_pixel':
disp_direct.c:(.text+0x158): undefined reference to `bmp_vram_info'

I believe the cause is that the linker isn't including vram.o when linking disp_direct.o.  How do I add this dependency?  I've tried src/Makefile.src, with several variants along the lines of:
disp_direct.o: $(PLATFORM_DIR)/vram.o

but no luck.  Anyone got any ideas?  There are a lot of possible makefiles to add things to, and I'm not even sure of the right way to add this.
#15
General Development / when to use task_create?
July 07, 2019, 06:09:44 PM
I'm doing some work on logging in my 200D port and I'm confused by the advantages of task_create.  Can someone explain the difference between these two examples?

    msleep(1000);
    do_stuff();

and

    msleep(1000);
    task_create("do_stuff", 0x1e, 0x1000, do_stuff, 0 );   

Is the benefit in the second case related to blocking, because tasks go in a queue?  Is that all there is?
#16
I think for 200D that the signature for LoadCalendarFromRTC() has changed. In older cams it looks to take a single argument, a pointer to struct tm. For 200D I see it as taking 5 arguments, with the 5th being the pointer to the struct.

I have two questions: am I right about the sig change? I think LoadCalendarFromRTC() is at 0xe05cd1fe in 200D. A useful comparison point is 0xe00742fc on 200D, which == 0xff885058 on 50D - both call LoadCalendarFromRTC().

Second, and more important: how should I generally handle this problem? Are there existing examples of function signatures differing across camera models that I could copy?
- I could maybe write a 200D-specific wrapper function that takes one arg and supplies the extra ones to the real call (see the sketch after this list).  I believe it's possible to have platform/XXX functions override src/ functions?  I haven't tried this yet.
- I could use lots of #ifdef CONFIG_200D whenever LoadCalendarFromRTC() is called.  This seems very ugly.
- I could have an #ifdef CONFIG_200D macro that mangles calls of LoadCalendarFromRTC() to have 5 params and guesses values for the other 4.  I hate this idea.
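
To illustrate the first option, here's a minimal sketch of what such a wrapper could look like (the zeroes for the extra parameters are pure guesses, and the _5arg stub name is invented for this example):

struct tm;

// hypothetical stub pointing at the real 5-argument 200D function
extern void _LoadCalendarFromRTC_5arg(int a, int b, int c, int d, struct tm *t);

// 200D-only wrapper keeping the classic one-argument signature
void LoadCalendarFromRTC(struct tm *t)
{
    _LoadCalendarFromRTC_5arg(0, 0, 0, 0, t);  // extra args are guesses
}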
#17
Reverse Engineering / Ghidra scripts
April 07, 2019, 03:17:37 AM
Ghidra is a free tool similar to IDA Pro.  https://ghidra-sre.org/
You can extend it with scripts, in Java or Python.  I thought we could make some useful ones and collect them here.  I'm going to assume everyone wanting to run scripts has already got Ghidra working and has loaded the rom dumps and extra memory regions (e.g., parts of the rom that get copied to different locations at runtime).

Here's my first useful script, StubNamer.py - you give it a stubs.S file and it names and disassembles the stubs in your listing:
https://drive.google.com/open?id=17QJSAd-72z_Kp_GgoS6Qn1HdOsQVc832
In Linux, copy it to /home/<your_user>/ghidra_scripts/, then it will be visible under Magiclantern when you open "Display Script Manager" (the white-triangle-in-green-circle icon in the button bar).

Limitations:
- it doesn't define a function at the address, because not all stub addresses are at function starts, so I didn't want to force this.  Often Ghidra will work out it's a function due to xrefs etc, but sometimes it doesn't.  Could be made better by inspecting the disassembly, detecting common function starts, and only then defining a function?
- the NSTUB address extraction only handles the simplest case.  If it's a computed address, it will fail (and report this in the Ghidra console so you can define it manually).