Show Posts



Topics - names_are_hard

1
Camera-specific Development / Canon 850D / T8i
« on: November 26, 2021, 04:28:37 PM »
I was taken by a fey mood, and have produced this:



This was my first time porting to Digic 8, and it's a bit different from the existing Digic 8 cams I've looked at.  There's a very important change to the early boot process that was hard to pin down: initialising the second core has changed.  Everything else seems normal enough so far.  Kitor helped with the Ximr stuff; it seems to be closest to RP.

Dumped the roms with Basic.  It works well enough in Qemu to test the very early code, although Qemu will need updating due to the boot process change: MMIO addresses are used differently, and Qemu assumes all D8 cams will be the same.  I haven't tried UART yet, but the connector is under the thumb grip, as is common on modern cams.

I don't recommend anyone try to run this yet, but the source may be useful for people looking at other D8 cams: https://github.com/reticulatedpines/magiclantern_simplified/commits/850d_initial_stubs

Buttons aren't mapped yet, graphics don't display how I'd expect, and I'm sure there are lots of other bugs.  I'll try to improve it to the same state as 200D.

2
Reverse Engineering / Running tasks on CPU1 (second CPU)
« on: June 18, 2021, 05:33:01 PM »
Some cams are dual-core.  I've worked out how to start tasks on the second core.  Cores can communicate using at least semaphores, so we can do cooperative multi-tasking.  This should be quite useful for splitting work, especially anything that can be easily pipelined or batched, e.g. expensive operations on a sequence of frames or images: these can alternate between cores, or be picked up from a queue by whichever core becomes ready first.  For CPU-bound tasks this may be a large improvement.

I seem to recall CHDK / srsa already knew this, from some other thread, but I couldn't find it and there were no details.

On 200D, 1.0.1, df008f0a is task_create_ex(), which is like task_create() but with an extra arg that selects which CPU to use.  I've tested 0 and 1; I believe -1 means "run on either/all CPU(s)", but this is untested.

Small code example:

static struct semaphore *mp_sem; // for multi-process cooperation

static void cpu0_test()
{
    while(1)
    {
        DryosDebugMsg(0, 15, "Hello from CPU0");
        take_semaphore(mp_sem, 0);
        msleep(1000);
        give_semaphore(mp_sem);
    }
}

static void cpu1_test()
{
    while(1)
    {
        DryosDebugMsg(0, 15, "Hello from CPU1");
        take_semaphore(mp_sem, 0);
        msleep(1000);
        give_semaphore(mp_sem);
    }
}

static void run_test()
{
    mp_sem = create_named_semaphore("mp_sem", 1);
    task_create_ex("cpu1_test", 0x1e, 0, cpu1_test, 0, 1);
    task_create_ex("cpu0_test", 0x1e, 0, cpu0_test, 0, 0);
    return;
}

This leads to the CPUs taking turns to print, once per second.

3
General Development / Testers wanted: Qemu 4.2.1
« on: November 04, 2020, 01:32:37 AM »
Hello!  I have tidied up my work (with some help from a1ex on the multicore stuff :) ) porting Qemu from 2.5.0 (which current ML uses) to Qemu 4.2.1.  It works at least okay: it can get to the GUI on some cams known to be well supported in Qemu, and it emulates Digic 7 cams better than before.

I would like some testers who can compare results between 2.5.0 and 4.2.1 with different roms.  Note that I'm expecting 4.2.1 to generally be worse at this time; I want to know how, so I can improve it.  Modern Qemu is much easier to build on modern systems and emulates Arm better.  With your help, we can make it work as well as 2.5.0.

To do this testing, you will need to be happy getting Qemu 2.5.0 from the ML "qemu" branch working, as well as getting my port working:
https://github.com/reticulatedpines/magiclantern_simplified/commit/5c05e1e073842b50c4bb0d6d666c3766bb74db24

Then, please try to find differences in behaviour between the two versions.  One significant difference is that 4.2.1 will crash if the cam tries to access unmapped memory - this seems common on the roms I've tested.  I've fixed this for, I think, 200D, 50D and 80D.  If it's not yet fixed in my port, it should assert with the address that caused problems, which is an easy fix.  This is a good crash: emulation will be better in 4.2.1 once we find these crashing regions and I update the memory maps.

Different -d flags to Qemu will exercise different code and can cause different behaviours.  Try these if you have the time.  Use a bad -d option, e.g. "-d broken", and Qemu will give you a list.  Lines with EOS are worth testing (you will need to run Qemu in different ways to get these to work; see previous posts about ML Qemu).  I am most interested in cases where there are repeatable differences in behaviour between 2.5 and 4.2 - I would like logs from both runs so I can find and fix the problems.

One last thing...  good luck getting both built on the same system.  Qemu 2.5.0 is hard to build on modern systems.  Qemu 4.2.1 is hard to build on old systems.  This is a large part of the reason I want to do the upgrade!

4
Just found out the Magiclantern build process has some dependency on Python 2.  I don't know exactly what it is, but on my system, where Python 3 is the default, "make zip" fails while plain "make" succeeds.  I think it's something to do with the nasty way module_strings.h is generated.

Anyway - should somebody with some Python experience want to port the build process to v3, it would be much appreciated.  Python 2 will be unsupported in a month.  It would be a good way to contribute for someone without ARM / assembly / C experience who can still build ML by following the instructions.

Mostly I am making this topic so people realise ML is very soon to be dependent on completely unsupported software:
https://pythonclock.org/?1

5
General Development / ARM assembly, efficient jump hook help?
« on: December 08, 2019, 04:38:41 AM »
I'm trying to patch in some jump hooks for debugging, and I'm finding it hard to work out efficient ARM assembly for this (I'm an ARM noob).  In x86 I'd JMP 0x12345678 and it would be 5 bytes with no register side effects.  In ARM I can't set a 32-bit constant in one go, and I'm also in Thumb mode.  The best I have so far is this, which kind of sucks:

        PUSH {R3, R4}
        MOV R4, #0x1234
        MOV R3, #0x5678
        LSL R4, R4, #16
        ADD R4, R3
        BX R4

That's 18 bytes, which feels bad to me.  Some functions I'm interested in are the same size!  Is there a better way to jump to an arbitrary offset?  Maybe I'd win by swapping out of Thumb first?

Alternatively, any ideas on how to accomplish the same idea efficiently in ARM would be appreciated - patch in a transfer to my own code to do arbitrary stuff, then cleanup register & stack changes and transfer back.
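One alternative I've seen for this situation is a PC-relative literal load straight into pc.  This is just a sketch, untested on camera: the target address 0x12345679 is a placeholder (the real target with bit 0 set to stay in Thumb), and the alignment caveat matters because Thumb code can sit at 2-byte boundaries:

```asm
        @ Thumb-2, 8 bytes total, no register side effects.
        @ The ldr.w must sit at a 4-byte-aligned address (pad with a
        @ nop first if not), because the literal is fetched from
        @ Align(PC, 4).
        LDR.W   PC, [PC]          @ load the next word directly into pc
        .word   0x12345679        @ target | 1, i.e. stay in Thumb state
```

Loading into pc interworks, so the same sequence could also switch to ARM state by leaving bit 0 of the target clear.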

6
I thought I would try to make it easier for people to build Magiclantern.  This is a work in progress and only brave people should help me test - but I do want testers!  This should work on Linux, Mac or Windows 10, though I have only tested on Linux.

The idea is we'd only need one set of instructions for building on any OS, and as a bonus everyone would be building with the same build tools, which is nice for debugging problems.  I hope the instructions can be much simpler than the current process.  There are some downsides but I think they're manageable.

To help, you will need git, and be happy to run command-line stuff.  Do this:
git clone https://bitbucket.org/stephen-e/ml_docker_builder
Then follow the instructions in the README.txt.

Please use README.txt (I want feedback on those instructions so that I can improve them).  However, so that people can see what I'm trying to do, the process is like this:

<install docker>
<become root or admin>
<copy-paste the following lines...>
docker build -t ml_build .
docker rm ml_build_output
docker create --name ml_build_output ml_build https://bitbucket.org/hudson/magic-lantern/branch/unified 5D3.113
docker start --attach ml_build_output
docker cp ml_build_output:/home/ml_builder/ml_build/autoexec.bin .

You should now have autoexec.bin.  You can change the repo or camera version to get a different autoexec.bin.  It works with both Mercurial and Git repos (but this is pretty crude; I'm sure there are cases I haven't considered).

I *think* that is a fairly easy way to get started building ML?  If it isn't helpful, please let me know.  If there are obvious things that should be added, also let me know. I guess I want some ability to make the zipfile?  I don't know what most people use to create the files they need.

I also don't have a cam that works with ML, so I can't test whether this produces a good build.  I know ML can build with broken output under some compiler versions, etc.  At the moment I just want to know if autoexec.bin builds for different people on different OSes.  If you want to try the autoexec.bin, that's up to you!  I give no guarantees!

7
General Development / Bitbucket set to remove Mercurial support
« on: August 20, 2019, 04:48:31 PM »
I think this affects ML.  Maybe not; they talk about Bitbucket Cloud, so perhaps we're on a service where they're not retiring Mercurial?

https://bitbucket.org/blog/sunsetting-mercurial-support-in-bitbucket
"Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020"

I don't like their decision to delete repos.  Putting them in a read-only mode would have been a lot kinder.

There are tools to migrate to Git (so I guess you keep your history?) but I know from personal experience that building ML has dependencies on having hg installed on your system.  It wasn't hard to remove these.

Long term this is probably good for ML - almost no-one uses Mercurial and needing to learn it must put some people off ML.  Short term it's annoying to migrate!

8
General Development / How to add .o dependency for disp_direct.c?
« on: July 11, 2019, 04:06:29 AM »
I have an ML build problem in a port in progress.  My simple brain can write 10-line makefiles.  I have added #include "vram.h" (and "bmp.h") to disp_direct.c, and my build gives this error:

arm-none-eabi-ld: disp_direct.o: in function `disp_set_pixel':
disp_direct.c:(.text+0x158): undefined reference to `bmp_vram_info'

I believe the cause is that the linker isn't linking vram.o into the final binary.  How do I add this dependency?  I've tried several variants in src/Makefile.src along the lines of:
disp_direct.o: $(PLATFORM_DIR)/vram.o

but no luck.  Anyone got any ideas?  There are a lot of possible makefiles to add things to, and I'm not even sure of the right way to add this.
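One thought on the symptom (hedged, since I haven't checked ML's exact makefile layout): a prerequisite line only affects build *order*, and an "undefined reference" usually means the object never made it into the list handed to the linker at all.  A sketch, where ML_OBJS is a hypothetical stand-in for whatever variable the real makefiles actually use:

```make
# A dependency line only orders the build - it does not add the
# object to the link step:
disp_direct.o: vram.o

# The object also has to appear in the list passed to the linker,
# e.g. (ML_OBJS is a made-up name; the real variable may differ):
ML_OBJS += vram.o
```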

9
General Development / when to use task_create?
« on: July 07, 2019, 06:09:44 PM »
I'm doing some work on logging in my 200D port and I'm confused by the advantages of task_create().  Can someone explain the difference between these two examples?

    msleep(1000);
    do_stuff();

and

    msleep(1000);
    task_create("do_stuff", 0x1e, 0x1000, do_stuff, 0 );   

Is the benefit of the second case related to blocking, because tasks go in a queue?  Is that all there is?
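My current understanding, sketched with POSIX threads as a host-side analogy (pthreads are not the DryOS API, and do_stuff / stuff_done are made-up names): the first form runs do_stuff() on the *calling* task and blocks it until the work finishes; the second creates a separate task, so the caller continues immediately.

```c
#include <pthread.h>

/* Host-side analogy only: pthreads stand in for DryOS tasks,
   do_stuff() for whatever work the example does. */

static int stuff_done;                 /* lets us observe completion */

static void do_stuff(void)
{
    stuff_done++;
}

/* Variant 1: direct call.  The calling task runs do_stuff and is
   blocked until it returns - nothing else it needs to do can proceed. */
static void variant_direct(void)
{
    do_stuff();                        /* returns only when work is done */
}

static void *task_entry(void *arg)
{
    (void)arg;
    do_stuff();
    return NULL;
}

/* Variant 2: task_create-style.  A new task runs do_stuff; the caller
   returns immediately and can keep servicing its own work. */
static pthread_t variant_spawned(void)
{
    pthread_t t;
    pthread_create(&t, NULL, task_entry, NULL);
    return t;                          /* caller continues right away */
}
```

So the benefit isn't only queuing: the caller stays responsive while the work happens elsewhere, and on dual-core cams the new task may even land on another CPU.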

10
I think the signature for LoadCalendarFromRTC() has changed on 200D. In older cams it looks to take a single argument, a pointer to struct tm. On 200D I see it taking 5 arguments, with the 5th being the pointer to the struct.

I have two questions: am I right about the sig change? I think LoadCalendarFromRTC() is at 0xe05cd1fe on 200D. A useful comparison point is 0xe00742fc on 200D, which corresponds to 0xff885058 on 50D - both call LoadCalendarFromRTC().

Second, and more important; how should I generally handle this problem? Are there existing examples of function signatures differing across camera models that I could copy?
 - I could maybe write a 200D specific wrapper function that takes one arg and supplies the extra ones to the real call.  I believe it's possible to have platform/XXX functions override src/ functions?  I haven't tried this yet.
 - I could use lots of #ifdef CONFIG_200D whenever LoadCalendarFromRTC() is called.  This seems very ugly.
 - I could have an #ifdef CONFIG_200D macro that mangles calls of LoadCalendarFromRTC() to have 5 params and guesses values for the other 4.  I hate this idea.
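A sketch of the first option, the platform wrapper.  Hedged heavily: the 5-arg signature and the meaning of the extra args are guesses, firmware_LoadCalendarFromRTC is a stand-in name for the real firmware stub, and the zeros are placeholder defaults.

```c
#include <string.h>
#include <time.h>

/* Stand-in for the 5-arg firmware call seen on 200D (hypothetical
   signature; the extra args' meanings are unknown, so this stub just
   records them). */
static int last_extra[4];

static int firmware_LoadCalendarFromRTC(int a, int b, int c, int d,
                                        struct tm *tm)
{
    last_extra[0] = a; last_extra[1] = b;
    last_extra[2] = c; last_extra[3] = d;
    memset(tm, 0, sizeof *tm);         /* pretend the RTC was read */
    return 0;
}

/* Platform wrapper: keeps the old one-arg interface for generic src/
   callers, supplying guessed defaults for the extra args. */
static int LoadCalendarFromRTC(struct tm *tm)
{
    return firmware_LoadCalendarFromRTC(0, 0, 0, 0, tm);
}
```

Generic code keeps calling the one-arg form; only platform/200D carries the ugliness.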

11
Reverse Engineering / Ghidra scripts
« on: April 07, 2019, 03:17:37 AM »
Ghidra is a free tool similar to IDA Pro.  https://ghidra-sre.org/
You can extend it with scripts, in Java or Python.  I thought we could make some useful ones and collect them here.  I'm going to assume everyone wanting to run scripts has already got Ghidra working and loaded the rom dumps and extra memory regions (e.g., parts of the rom that get copied to different locations at runtime).

Here's my first useful script, StubNamer.py - you give it a stubs.S file and it names and disassembles the stubs in your listing:
https://drive.google.com/open?id=17QJSAd-72z_Kp_GgoS6Qn1HdOsQVc832
In Linux, copy to /home/<your_user>/ghidra_scripts/, then it will be visible under Magiclantern when you open "Display Script Manager" (white triangle in green circle icon in button bar).

Limitations:
 - it doesn't define a function at the address, because not all stub addresses are at function starts, so I didn't want to force this.  Often Ghidra will work out that it's a function due to xrefs etc., but sometimes it doesn't.  Could it be made better by inspecting the disassembly, detecting common function starts, and only then defining a function?
 - the NSTUB address extraction only handles the simplest case.  If it's a computed address, it will fail (and report this in the Ghidra console so you can define it manually)
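For reference, the "simplest case" in that second limitation is a literal hex address in the stub line.  Here's a hedged sketch of that extraction, written host-side in C rather than as Ghidra script code: parse_nstub is my name, and the format assumed is the plain `NSTUB(0xADDR, name)` form from stubs.S.

```c
#include <stdio.h>

/* Parse the simplest NSTUB form: NSTUB(0xE05CD1FE, some_name)
   Computed addresses (e.g. "NSTUB(ROMBASEADDR + 0x40, ...)") won't
   match and should be reported for manual handling, as the script
   does.  The name buffer must hold at least 64 bytes. */
static int parse_nstub(const char *line, unsigned int *addr, char *name)
{
    return sscanf(line, " NSTUB ( 0x%x , %63[^ )] )", addr, name) == 2;
}
```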
