New memory backend

Started by a1ex, September 17, 2013, 05:51:03 PM

1%

Looks like the startup lockup is gone!

a1ex

Cool, now try loading as many modules as you can.

1%

Seems to be able to load all of them... the big test will be enabling GDB again and trying that monster module.

a1ex

Just loaded the big ADTG on 60D, along with 2 pages of other modules. 763K module code, 846 total, peak 1.7M, malloc 82k/381k, AllocMem 763k/1.0M, and... shoot_malloc: 0 used.

1%

What I'm wondering is why "get max region" shows lower than the total... is that just contiguous memory vs. all available? Or is it from the remapping of AllocMem?

a1ex

Yep, max contiguous region. If you try to allocate more than that... err70.
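
To illustrate the difference: free memory gets fragmented, so the sum of all free blocks (the "total") can be much larger than the largest single block (the "max region"), and one allocation can never span two blocks. A minimal sketch with made-up free-list structures (not the actual allocator's data structures):

#include <stdio.h>
#include <stddef.h>

/* Hypothetical free-list entry -- not the real allocator's structures. */
struct free_block {
    size_t size;
    struct free_block *next;
};

/* Sum of all free blocks: what the "total" number reports. */
static size_t total_free(struct free_block *list)
{
    size_t total = 0;
    for (struct free_block *b = list; b; b = b->next)
        total += b->size;
    return total;
}

/* Largest single free block: what "get max region" reports.
 * A request bigger than this fails (-> err70), even when
 * total_free() suggests there would be enough memory. */
static size_t max_region(struct free_block *list)
{
    size_t max = 0;
    for (struct free_block *b = list; b; b = b->next)
        if (b->size > max)
            max = b->size;
    return max;
}

int main(void)
{
    /* Three fragmented free blocks: 512K + 256K + 256K = 1M total,
     * but the largest allocation that can succeed is 512K. */
    struct free_block c = { 256 * 1024, NULL };
    struct free_block b = { 256 * 1024, &c };
    struct free_block a = { 512 * 1024, &b };

    printf("total: %zu, max region: %zu\n", total_free(&a), max_region(&a));
    return 0;
}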

Thejungle

Sorry for posting here (as I'm not a dev), but maybe there's a chance to use this memory to hold data that slower cameras (like the 600D) can't write out fast enough? :) This could be a stupid idea, just saying. Sorry!

dmilligan

ML already uses as much memory as is available for this purpose (ever notice all the 'buffer' stuff? A buffer is simply memory being used for exactly this purpose). This thread is about the nitty-gritty of how we allocate and manage the various types of memory available for this and various other purposes.
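
To give a feel for what "managing the various types of memory" means, here is a rough sketch of the idea behind a unified allocator front-end: route small requests to the general-purpose heap and fall back to the bigger pools for large ones. The names, thresholds and pool stand-ins below are assumptions for illustration, not ML's real memory backend API:

#include <stdlib.h>

/* Illustrative backend descriptor -- names and size limits are made up. */
struct mem_backend {
    const char *name;
    size_t preferred_max;          /* route requests up to this size here */
    void *(*alloc)(size_t size);
};

/* Stand-ins for camera-specific pools; here they just wrap malloc. */
static void *small_heap_alloc(size_t n) { return malloc(n); }
static void *large_pool_alloc(size_t n) { return malloc(n); }
static void *shoot_pool_alloc(size_t n) { return malloc(n); }

static struct mem_backend backends[] = {
    { "malloc",       32 * 1024,        small_heap_alloc },
    { "AllocMem",     1024 * 1024,      large_pool_alloc },
    { "shoot_malloc", 32 * 1024 * 1024, shoot_pool_alloc },
};

/* Unified entry point: pick the first pool whose preferred size covers
 * the request; fall through to the bigger pools if it fails. */
void *backend_alloc(size_t size, const char **used)
{
    for (size_t i = 0; i < sizeof(backends) / sizeof(backends[0]); i++) {
        if (size > backends[i].preferred_max)
            continue;
        void *p = backends[i].alloc(size);
        if (p) {
            if (used)
                *used = backends[i].name;
            return p;
        }
    }
    return NULL; /* no pool could satisfy the request */
}

A matching free path would need to remember which pool each pointer came from, and per-pool accounting like this is what produces breakdowns such as the malloc / AllocMem / shoot_malloc figures quoted above.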

a1ex

Some backend updates. I started fixing things to let the 1100D and EOSM run the memory benchmarks, and found a bunch of other issues in the process. Result: I finally managed to run the old-style Lua (which does over 5000 malloc calls just to load the default set of scripts) on the 1100D (the camera with the least memory)!

Details and torture tests: https://bitbucket.org/hudson/magic-lantern/pull-requests/906/memory-backend-improvements/diff
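
For a rough idea of what 5000+ malloc calls look like to the allocator, here is a small torture-test sketch (illustrative only, not the actual ML benchmark from the pull request): many small, variable-sized allocations interleaved with frees, which is roughly the pattern that loading a set of scripts produces.

#include <stdlib.h>
#include <stdio.h>

/* Illustrative malloc torture test -- not the actual ML memory benchmark.
 * Interleaves many small allocations with frees to stress fragmentation. */
#define SLOTS      256
#define ITERATIONS 5000

int main(void)
{
    void *slot[SLOTS] = { 0 };
    size_t ok = 0, failed = 0;

    for (int i = 0; i < ITERATIONS; i++) {
        int k = rand() % SLOTS;

        /* Free whatever was in this slot, simulating script unload/reload. */
        free(slot[k]);

        /* Small, variable-sized request, similar in spirit to the many
         * tiny allocations made while parsing scripts. */
        size_t size = 16 + (size_t)(rand() % 2048);
        slot[k] = malloc(size);
        if (slot[k]) ok++;
        else failed++;
    }

    for (int k = 0; k < SLOTS; k++)
        free(slot[k]);

    printf("allocations: %zu ok, %zu failed\n", ok, failed);
    return 0;
}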