Topics - dpjpandone

#1
Quote from: theBilalFakhouri on October 01, 2022, 04:52:55 PM

This part of code:
    /* be gentle with the CPU, save it for recording (especially if the buffer is almost full) */
    msleep(
        (need_for_speed)
            ? ((queued_frames > valid_slot_count / 2) ? 1000 : 500)
            : 50
    );


Controls when it's okay to refresh the Framing preview depending on the RAW recording state; this part of the code adds delays while recording so the Framing preview refreshes more slowly.
If you remove this part of the code you will have a semi real-time preview in B&W during RAW recording; of course, this will push CPU usage to 100% all the time and there is a high chance you will get corrupted frames.

But we can probably fine-tune the delay (reduce it) by playing with the 1000 and 500 values (decreasing them).
1000 and 500 are in milliseconds.

I am playing with this now. I think the only way to test for stability is to record at a resolution higher than what records continuously, because the slot error happens when the buffer is full. Any thoughts?
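For reference, this is the kind of change I am experimenting with (just a sketch with the delays halved; these values are untested and may well bring back corrupted frames under load):

    /* experimental: halve the preview throttling delays (was 1000/500) */
    msleep(
        (need_for_speed)
            ? ((queued_frames > valid_slot_count / 2) ? 500 : 250)
            : 50
    );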

also:

I noticed Danne modified these values in Raw.c:


     /* scale useful range (black...white) to 0...1023 or less */
+    /* changing from 1024 to 700 for speed reasons */
     int black = raw_info.black_level;
     int white = raw_info.white_level;
     int div = 0;
-    while (((white-black) >> div) >= 1024)
+    while (((white-black) >> div) >= 700)
     {
         div++;
     }

-    uint8_t gamma[1024];
+    uint8_t gamma[700];
     
-    for (int i = 0; i < 1024; i++)
+    for (int i = 0; i < 700; i++)


Does this reduce CPU overhead?
#2
Quote from: names_are_hard on January 31, 2023, 12:03:37 AM
I can give some advice.

Firstly, modules are built outside of the context of any individual cam.  This is because modules are supposed to be portable: people can copy them from one card to another in a different cam and they're supposed to work.  TCC code on the cam handles loading modules and looking up symbols by name.  This is the way that modules find stubs.  If a symbol doesn't exist, the user will see an error and the module won't load (but ML won't crash, it's more like a warning than a real error).

If you define something in features.h for a cam, I think it won't be visible to the module during the build.  Even if it is, it won't work properly if you copy the module and use it with a different cam, so this isn't a correct approach.

I was suggesting that *things which are functions* could be moved to stubs.  I haven't looked at the variables like more_hacks_are_supported, so I don't know what these do or what the best way to handle them is.

You should be able to test your ideas in QEMU. First get the existing code working and check that it emulates this far (it should, I think). Then make changes and see if you broke anything.

Quote from: names_are_hard on January 31, 2023, 04:18:17 AM
I suppose it depends on what you mean by "doing stubs wrong" :)  In the module code, where it's just an address, I haven't checked how it's used.  In a stub context, it will be treated as a function pointer according to the definition of NSTUB (or ARM32_FN / THUMB_FN if you use those).  Is it appropriate to treat these addresses in that way?  That depends on how the module uses them.  In the exact text you gave, you put them in as comments, which will do nothing (and the symbol won't get exported, so the modules won't find it).

Probably, but that's a bigger question than you might realise.  I think the intention when ML added lots of supported models was that they'd all get all the features eventually.  Later on (I guess), as you've found, FEATURE and CONFIG limit or specify features, but that doesn't work well with modules.  Modules are *supposed* to be cam independent, but actually this was never true; there were some edge cases that became apparent when we worked more on modern cams and found that some structs and other internals had changed enough that some assumptions about how modules worked failed.

So, what's an elegant way to let modules behave the right way on a range of cams?  My personal feeling is modules could, instead of doing this:


if (is_camera("5D3", "1.2.3"))
{
    screen_x = 0x0101; screen_y = 0x0202;
}
else if { // many other cameras omitted }


Do something like this:

    get_screen_x_y(screen) // screen is a struct with .x and .y members


That is: cleanly separate modules from ML code, so they can only work via exported functions.  Modules ask the cam what the right values are for that model.  No hard-coding values inside modules.

So in your specific example, create a stub for *all* cams maybe called is_more_hacks_supported() (this is a bad name, but you get the idea).  This would probably default to returning false, and be overridden by cams that did have support.  Then the module code is greatly simplified - no per cam checks, but something like this:


    more_hacks_supported = is_more_hacks_supported();


Is this the best way to go?  That's harder to say.  It would increase the size of the ML binary, while reducing the size of modules.  Size constraints are a real concern on some cams.

Quote from: names_are_hard on January 31, 2023, 04:18:17 AM
you put them in as comments

I didn't want those to compile yet.

Quote from: names_are_hard on January 31, 2023, 04:18:17 AM


So in your specific example, create a stub for *all* cams maybe called is_more_hacks_supported() (this is a bad name, but you get the idea).  This would probably default to returning false, and be overridden by cams that did have support.  Then the module code is greatly simplified - no per cam checks, but something like this:


    more_hacks_supported = is_more_hacks_supported();




Is this the correct syntax to do such a thing?

NSTUB(1 - is_more_hacks_supported)

And for cams that do not support it, does the stub need to be 0? Or do you start by saying:

more_hacks_supported = 0; //defaults to 0
more_hacks_supported = is_more_hacks_supported(); //returns value set in stub


If there is no stub in another cam's Stubs.s does it return 0 automatically?
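Or is something like this closer to what you mean? Just a sketch: is_more_hacks_supported() is an illustrative name, not an existing ML function, and I'm glossing over how the symbol actually gets exported so TCC can resolve it for modules.

/* in ML core: a weak default that every cam gets */
int __attribute__((weak)) is_more_hacks_supported(void)
{
    return 0;   /* default: not supported */
}

/* in a platform-specific file, for a cam that does support it */
int is_more_hacks_supported(void)
{
    return 1;
}

/* in the module: no per-cam checks at all */
more_hacks_supported = is_more_hacks_supported();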
#3
Scripting Corner / Preferred Lua Debugger for VSC?
January 25, 2023, 09:57:05 PM
When I attempted to debug my first Lua script in Visual Studio Code, I was presented with several options from the extension marketplace: Lua, Lua Debug, Lua Debugger, Local Lua Debugger, etc.

Anyone have a favorite?
#4
From the Lua doc:

KEY
Key Codes
Fields:
HALFSHUTTER
UNPRESS_HALFSHUTTER
FULLSHUTTER
UNPRESS_FULLSHUTTER
WHEEL_UP
WHEEL_DOWN
WHEEL_LEFT
WHEEL_RIGHT
SET
UNPRESS_SET
JOY_CENTER
UP
UP_RIGHT
UP_LEFT
RIGHT
LEFT
DOWN_RIGHT
DOWN_LEFT
DOWN
UNPRESS_UDLR
ZOOMIN
MENU
INFO
PLAY
TRASH
RATE
REC
LV
Q
PICSTYLE
FLASH_MOVIE
UNPRESS_FLASH_MOVIE
DP
UNPRES_DP
TOUCH_1_FINGER
UNTOUCH_1_FINGER
TOUCH_2_FINGER
UNTOUCH_2_FINGER


Like the subject line says, is "DP" short for the depth-of-field preview button? Also, is there a way to intercept the AF point button for use with Lua scripts? I think this button would be ideal since it is the only button (to my knowledge) that isn't currently utilized by some other feature of ML. How about the "silent shooting touchpad"? I'm still learning this body, but apparently the 5D3's SET dial has a four-way capacitive touch pad built in!

I'm surprised this was not already implemented (perhaps the author did not own a body with these buttons?)

also, what is "flash movie" ? is this the button that pops up the built in flash on small bodies?

#5
General Development / Card spanning code
January 25, 2023, 01:30:51 AM
I'm trying to track down the repository that has card spanning ported to MLV Lite. I realize this exists in Danne's super-bleeding-edge builds, but there's so much going on that it's hard for me to follow. For the most part I'm happy with the MLV Lite handling in the latest 2018 crop-rec iteration that Alex left off on. I would just like to see an example of card spanning ported to MLV Lite in its simplest form. Can anyone please point me in the right direction?
#6
Share Your Videos / "Prime time of your life" (70d MLV)
January 08, 2023, 04:36:18 AM
Not mine,

but very good, had to share:

#7
Raw Video / High ISO = Low noise + Shadow detail
December 31, 2022, 08:37:36 PM
I'm doing research on this topic. This first post is a temporary placeholder where I will link to relevant threads and articles on the subject. Once I have gathered as much data as possible, I will replace it with a properly formatted essay.

https://www.magiclantern.fm/forum/index.php?topic=24379.0
#8
Hey guys,

Anyone have a solution for a live preview that simulates post-processed exposure while exposing to the right? For example: I'm exposing to the right for the cleanest shadows, so the preview on the monitor or the LCD looks overexposed; could I load a custom picture style that pulls down exposure so the preview more closely approximates what the final image will look like after post-processing? I have downstream hardware (monitors and SDI converters) that lets me load custom LUTs. Does anyone use this technique for monitoring? It's hard to explain to the director or the client why I'm using ETTR; they generally don't understand and just want to see a properly exposed Rec.709 image. Can anyone recommend some LUTs or picture styles that would help in this case?

The other thing is that since all the footage appears 1.2 stops underexposed when CDNG is imported into Resolve, I think it would be useful, even when not using ETTR, to make a picture style that pulls the preview down at least one stop.
#9
General Development / .hg not found
December 24, 2022, 07:32:23 PM
Please forgive my ignorance; I had never encountered this error until I tried to build from the repository hosted on Heptapod:

dpjpandone@MSI:/mnt/d/DEVELOPMENT/MLdev5D3/platform/5D3.123$ make
Using /usr/bin/arm-none-eabi-gcc (from PATH).
[ VERSION  ]   ../../platform/5D3.123/version.bin
abort: there is no Mercurial repository here (.hg not found)
abort: no repository found in '/mnt/d/DEVELOPMENT/MLdev5D3/platform/5D3.123' (.hg not found)!
make: *** [../../src/Makefile.src:360: ../../platform/5D3.123/version.bin] Error 255


I have tried the following:

- copying my makefile.user from the folder that I can already build from (danne's eosm experiments)
  - (error persists)

I then copied the entire .hg folder from Danne's repository and it works; I mean, I can now build from the source downloaded from Heptapod. Is this safe to do? Why is there no .hg folder on Heptapod? Is there a way to disable the check since I'm just working locally?

I'm sure it's something small I'm missing. Thanks in advance.
#10
Can we do this? I no longer have the laptop I was using for development a few years ago, but I DO have the test builds that I published at the time. Is there a way to extract the source from one of those test builds?
#11
Hey guys, can someone point me toward the line(s) in the source where the position of overlays (pertaining to HDMI output) is coded? I want to work on getting proper alignment of focus peaking and zebras on an external monitor
#12
Hey guys, I'm back!

Way back in 2016 we used Adobe Camera Raw via After Effects and VisionLog to develop our MLV files. Nowadays I am exporting CDNGs from MLV App and dragging the folders directly into Resolve.

Can someone advise on the correct "Camera Raw" settings in DaVinci?

It seems that if "Cinema DNG Default" is selected in the (decode) drop-down it defaults to using rec.709 for the color space and gamma

If I choose "clip" in the decode field I have the options to choose between Rec.709, P3 D60, or Blackmagic Design in the color space.

If I choose blackmagic d.esign the only gamma available is BMD Film. If I choose Rec.709 the available gamma options are: 2.4, 2.6, Rec.709, SRBG, and Linear

What settings are you guys using?

Also, why does the camera metadata always have the tint set to 7.42 regardless of camera settings?



#13
It is my opinion that the current Magic Zoom overlay sizes could be improved. First of all, does anyone ever use it with the size set to "small"? I mean, it's so small it's pretty much useless (for me, at least).

I propose the following values:

   switch(zoom_overlay_size)
    {
        case 0:
            W = os.x_ex / 3;
            H = os.y_ex * 2/5;
            break;
        case 1:
            W = os.x_ex / 2;
            H = os.y_ex / 2;
            break;
        case 2:
            W = os.x_ex * 3/5;
            H = os.y_ex * 3/5;
            break;


BTW, I have tested on EOSM (which supposedly does not support full-screen Magic Zoom) and full screen works perfectly in crop mode, but flickers a little in full-frame mode. What do you think?
#14
Magic Zoom flickers so badly in crop mode (EOS M) that it's impossible to use.

One-Percent got it to sync well enough that it's useful, but never pushed those changes to main. I believe I have found them here:

#ifdef SSS_STATE
static int stateobj_sss_spy(struct state_object * self, int x, int input, int z, int t)
{
    int old_state = self->current_state;
    int ans = StateTransition(self, x, input, z, t);

    #if defined(CONFIG_5D3) || defined(CONFIG_6D)
    if (old_state == 6 && input == 6) // before sssCompletePrepareLcdJpeg
    {
        raw_buffer_intercept_from_stateobj();
        module_exec_cbr(CBR_HOOK_BEFORE_JPG_PREVIEW);
    }
    #endif
       
    #if defined(CONFIG_5D3) || defined(CONFIG_6D)
    if (old_state == 6 && input == 6) // after sssCompletePrepareLcdJpeg
    {
        module_exec_cbr(CBR_HOOK_AFTER_JPG_PREVIEW);
    }
    #endif

    #if defined(CONFIG_EOSM) || defined(CONFIG_650D) || defined(CONFIG_700D) || defined(CONIFG_100D) //TODO: Check 700D and 100D
    //~ int new_state = self->current_state; // 10 - 11 - 2
    if (old_state == 10 && input == 11) // delayCompleteRawtoSraw
        raw_buffer_intercept_from_stateobj();
    #endif

*/
//~ Not Enough memory to undo?
/*
#ifdef CONFIG_EOSM
    int new_state = self->current_state;
    if (old_state == 10 && input == 11) // delayCompleteRawtoSraw
    {
        raw_buffer_intercept_from_stateobj();
        module_exec_cbr(CBR_HOOK_BEFORE_JPG_PREVIEW);
    }

    if (old_state == 10 && input == 10) // input 8 CompleteRawtoSraw input 10: CompleteRawtoLCDjpeg input 6: may work too
    {
        module_exec_cbr(CBR_HOOK_AFTER_JPG_PREVIEW);
    }
#endif     
    return ans;
}
#endif


Also this from v_sync_lite:

#if defined(CONFIG_EOSM) || defined(CONFIG_600D)
    RegisterVDInterruptHigherPriCBR(lv_vsync_signal, 0);
#endif


I'm afraid I'm in over my head on this one. Can someone please help me with this? Which pieces are relevant to fixing magic zoom?
#15
I am testing various FPS settings on the EOSM, and I need to enable GOP and flush rate to find the upper limits of overcranking. I have successfully recorded up to 45 fps in 1080p crop mode (H.264), but to do this, GOP must be set to 1 and flush set to 4. (This was on an earlier build that had video hacks enabled.)

I have #define FEATURE_VIDEO_HACKS in my features file, but the video hacks don't show up in the menu. What else do I need to change to get this back?

#16
The EOSM does not have a Q button, so the Q action is normally done with a one-finger touch on the screen. This is a major problem because the touchscreen is disabled when an external monitor is connected. I have fixed this by adding the following to menu.c:

#if defined(CONFIG_EOSM) // No Q button on EOSM, use PLAY if ML menu is open
    if (button_code == BGMT_PLAY && gui_menu_shown()) button_code = BGMT_Q;
#endif



It works perfectly like this. Now you can open the ML menu with a long press of TRASH, and you can enter the "Q" menus with the PLAY button. Finally! I don't have to unplug my monitor just to change ML settings.

I realize that PLAY normally increments values in the ML menu. The loss of that functionality is not important to me, since you can already increment values with left/right. However, I would like to submit this fix in a pull request, so I think that to do it in an acceptable way it needs to say:

if (button_code == BGMT_PLAY && gui_menu_shown() && (the menu has a submenu || we are in a submenu))

This way, PLAY only acts as Q if there is a submenu, or to exit a submenu; otherwise PLAY increments values as normal.

How do I write this? Alex?
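Something along these lines is what I'm picturing; menu_has_submenu() and in_submenu are made-up names standing in for whatever menu.c actually uses to track submenu state:

#if defined(CONFIG_EOSM) /* no Q button: PLAY acts as Q only when a submenu is involved */
    if (button_code == BGMT_PLAY && gui_menu_shown()
        && (menu_has_submenu() || in_submenu))   /* hypothetical names */
    {
        button_code = BGMT_Q;
    }
#endif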
#18
Jon@Jon-PC /cygdrive/c/magic-lantern/platform/eosm.202
$ make zip
../../Makefile.inc:79: remove /cygdrive/c/magic-lantern/platform/eosm.202/zip
[ RM dir ]  /cygdrive/c/magic-lantern/platform/eosm.202/zip/
mkdir -p /cygdrive/c/magic-lantern/platform/eosm.202/zip
[ VERSION  ]   ../../platform/EOSM.202/version.c
abort: there is no Mercurial repository here (.hg not found)
[ CC       ]   version.o
[ CC       ]   fps-engio.o
../../src/fps-engio.c:237:6: error: #error fixme: FPS_TIMER_B_MIN and FPS_TIMER_B_MIN are plain wrong
     #error fixme: FPS_TIMER_B_MIN and FPS_TIMER_B_MIN are plain wrong
      ^
../../src/fps-engio.c:295:12: warning: 'fps_timer_b_method' defined but not used [-Wunused-variable]
static int fps_timer_b_method = 0;
            ^
../../Makefile.filerules:23: recipe for target 'fps-engio.o' failed
make: *** [fps-engio.o] Error 1


I know I have to fix fps-engio.c, but what's up with ".hg not found"? Never saw that before...
#19
Share Your Videos / Homeland Security - Shot in ML Raw
December 13, 2014, 12:04:59 AM
Hey guys, here is a trailer for a short film I worked on last summer, shot in ML Raw on 7D and 5D2.

#20
I seem to get more consistent write speeds when I use the SD card formatter from sdcard.org. I found out about it via this article: http://www.slrlounge.com/really-format-sd-cards-optimal-performance/

When shooting raw on my EOSM I get a solid 37.7 MB/s write speed after using the formatter. Just wanted to share this tool, which has helped me.
#21
I would really like to see the GOP/flush rate hacks enabled in the nightlies for EOSM. Can someone tell me what is wrong with these video hacks in their current state and what I must focus on to make them acceptable for nightlies?
#22
I need help testing DMA flags on DIGIC 4 cameras. "Enhanced" shows improvement on 5D3, but it seems that the "Original" flags work better on older cameras (confirmed on 7D, and likely to show improvement on 50D, 5D2, etc.).

from edmac-memcpy.c (starting on line 122)

/* see wiki, register map, EDMAC what the flags mean. they are for setting up copy block size */
    #if defined(CONFIG_7D)
    uint32_t dmaFlags = 0x20001000; //Original are faster on 7D and possibly other DIGIC IV cameras
    #else   
    uint32_t dmaFlags = 0x40001000; //Enhanced
    #endif


If you have a DIGIC 4 camera and can compile, please add your camera to the if statement and report your results in this thread or in pull request #589.
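For example, to test on 5D2 or 50D, the check might be extended like this (untested on my side, just the shape of the change):

    #if defined(CONFIG_7D) || defined(CONFIG_5D2) || defined(CONFIG_50D)
    uint32_t dmaFlags = 0x20001000; //Original, reported faster on DIGIC 4
    #else
    uint32_t dmaFlags = 0x40001000; //Enhanced
    #endif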

Thanks!


#23
Hey guys,

I am doing all my post-production on an ASUS-motherboard-based Windows PC I built last year, and I'm looking to purchase a graphics card to replace the onboard graphics (which does not support Resolve).

I use Adobe AE and Premiere and I would like to start using Resolve, so I think I want something that has CUDA and OpenCL and supports the Mercury Playback Engine, etc.

I would like to keep the price around $200.00

Can someone recommend a card that meets these criteria?
#24
I have created this issue: https://bitbucket.org/hudson/magic-lantern/issue/2110/mlv_play-7d-5d2-garbage-borders-oer-hdmi

to coincide with this post:

from the issue:

When viewing MLV_PLAY on an external HDMI monitor, the top and bottom borders of the display (which should be masked in black per the image size) show garbage. You can see an example of this in the following images:

![mlv play psychadelic borders 1.jpg](https://bitbucket.org/repo/8b7b/images/555390794-mlv%20play%20psychadelic%20borders%201.jpg)![mlv play psychadelic borders 2.jpg](https://bitbucket.org/repo/8b7b/images/2571322861-mlv%20play%20psychadelic%20borders%202.jpg)

I will be reading through the mlv_play.c code this afternoon; if someone has any hints about the relevant lines I should focus on, please post them here.

Thanks!
#25
General Development / how to "safely" remap buttons
August 29, 2014, 05:35:16 AM
I often use my 7D with a Letus MCS camera cage, which is great, but it's very difficult to press the trash button to access ML menus. I never use the RAW/JPEG button on the 7D (I prefer to do file conversions on a computer instead), and it's in a comfortable-to-reach place, so I decided to lie in the gui.h file and tell ML that RAW/JPEG is actually trash.
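In practice the "lie" is just a define in platform/7D.203/gui.h along these lines (the hex values below are placeholders, not the real 7D event codes):

/* map the TRASH event to the RAW/JPEG button's code, so the ML menu opens from it */
//#define BGMT_TRASH    0x3B   /* original trash button code (placeholder) */
#define BGMT_TRASH      0x2E   /* RAW/JPEG button code (placeholder) */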

Does anyone see a problem with this? I get the desired result, but I'd really like to know the correct way to do this.

I found this thread, which was related, but not very helpful: http://www.magiclantern.fm/forum/index.php?topic=7816.msg70766#msg70766

#26
Hey guys,

I want to solve issue #2065:  https://bitbucket.org/hudson/magic-lantern/issue/2065/movie-restart-7d-stuck-in-loop-cannot-stop

I have been looking at the following lines for movtweaks.c:

#ifdef FEATURE_MOVIE_RESTART
        static int recording_prev = 0;
       
        #if defined(CONFIG_5D2) || defined(CONFIG_50D) || defined(CONFIG_7D)
        if(!RECORDING_H264 && recording_prev && !movie_was_stopped_by_set) // see also gui.c
        #else
        if(!RECORDING_H264 && recording_prev && wait_for_lv_err_msg(0))
        #endif
        {
            if (movie_restart)
            {
                msleep(500);
                movie_start();
            }
        }
        recording_prev = RECORDING_H264;

        if(!RECORDING_H264)
        {
            movie_was_stopped_by_set = 0;
        }
    #endif


I'm trying to understand how it works, and why it is broken on 7D. One thing that stands out to me right away is that the 7D does not use the SET button to start/stop recording (unlike 5D2 and 50D, it has a dedicated start/stop button).
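If movie_was_stopped_by_set never gets set on the 7D (because recording is stopped with the dedicated button instead of SET), then !movie_was_stopped_by_set is always true and the restart could keep re-triggering. Purely a guess on my part, but one direction would be to drop 7D from that branch and let it use the generic check:

        #if defined(CONFIG_5D2) || defined(CONFIG_50D)   /* 7D removed: it has a dedicated start/stop button */
        if(!RECORDING_H264 && recording_prev && !movie_was_stopped_by_set) // see also gui.c
        #else
        if(!RECORDING_H264 && recording_prev && wait_for_lv_err_msg(0))
        #endif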

any additional information you can provide would be greatly appreciated. I tried to compare to a build where it is working properly, but there are so many changes by now that it's hard for me to track it down. If you can just point me to the parts that are relevant, that would be awesome...

Thanks!
#27
When the module is loaded, audio should be enabled by default, as it is with H.264 video. Too many times I am testing a new build and I forget to enable audio, since I never had to do it before with my cam. I got great footage for the movie today, but I forgot to enable sound; now I have no reference track and I have to sync the external sound up by eye... I know, my fault, but shouldn't the raw video behavior be the same as H.264 video, just for sanity?
#28
I wanted to start a thread where I could keep some notes about some of the workarounds I've been using for reliable raw capture while an external monitor is connected.

The following tips apply to 7D, and possibly others, please feel free to add your own camera specific tips below:

1. For the best experience (at the cost of monitor resolution) I advise using the "force-VGA" option in the "advanced" menu. This limits the HDMI output to 480p, and all of the cropmarks, global draw overlays, and even playback via MLV_PLAY work perfectly. This mode does not seem to tax the CPU any more than using the camera's LCD screen, as it's the same resolution.

2. If you need to use 1080i because you are pulling focus, or for some other reason where monitor resolution is critical, you should disable as many of the overlays as possible for the best performance.


3. For the BEST performance I recommend you set global draw to "Don't Allow" in the MLV menu. This will clear all the overlays when you hit record and show the full native 3:2 image from your sensor on the HDMI monitor. If your monitor supports custom cropmarks or framing guides, you can use those instead. I recently bought a Marshall VLCD-56MD that has custom scaling and custom cropmarks; they are very nice features to use with ML Raw. My Zacuto EVF also does this, as do most of the offerings from SmallHD. If your monitor doesn't have these features, you can always use anti-glare screen protector film to mask off the active recording area on your monitor. This way you can still see through it to check your settings.

4. Do lots of testing of your settings, and check the recorded result on your computer before you go shoot with these settings. Make sure you're not recording a lot of pink frames or a torn image. Something I've recently discovered is that when I'm using HDMI and I set the Canon Q menu to 1080 24p, almost every other frame is torn and the resulting footage is unusable; but if I set the Canon Q menu to 1080 30p and use FPS override to set my frame rate to 24 (or 23.976), the recorded image is no longer torn, though I still get some pink frames. The only settings I have found to work reliably with no tearing or pink frames on an HDMI monitor at 1080i are to set the region to PAL so that 1920 25p becomes available in the Canon Q menu, then use exact-framerate FPS override to get 23.976. There is no tearing, and I get very few pink frames (if any).


Hopefully, we can add tips and tricks for other cameras below and have a nice thread that can save new users some headaches, and possibly even provide some useful information that may lead to new developments to improve external monitor usage.
#29
I know this can be done from the Canon menu, but I think it would be useful if this setting could be linked to raw recording, so that if you're shooting raw, the ISO wheel only selects full stops, but when you switch back to H.264 it automatically enables 1/2 or 1/3 stops again.
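Roughly what I'm imagining, as a sketch only: it assumes Canon's PROP_ISO codes step by 8 per full stop (0x48 = ISO 100, 0x50 = ISO 200, and so on), and I have not tried it on a camera.

/* sketch: when raw video is enabled, snap the current ISO to the nearest full stop */
static void snap_iso_to_full_stop(void)
{
    int code = lens_info.raw_iso;                    /* current Canon ISO code */
    if (code == 0) return;                           /* Auto ISO: leave it alone */
    int full = 0x48 + ((code - 0x48 + 4) / 8) * 8;   /* round to the nearest full stop */
    if (full != code)
        prop_request_change(PROP_ISO, &full, 4);
}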
#30
I take back my earlier statement about switching to 480p during record, as I have had some success recording with a monitor using certain builds. Here is what I think now:

Even if the alignment is fixed for MLV_PLAY HDMI output, the higher-resolution preview results in a much slower playback framerate, so I think the HDMI output should automatically drop to 480p for better performance. I have to set the option manually each time before I use mlv_play with a monitor, and it slows me down quite a bit.

An alternative could be to have the "force VGA" setting stored with display presets... or even assign an unused button to this setting.
#31
I accidentally downloaded the alpha build for 7D.203 (over one year old) when I used the EOSCard app yesterday. The audio controls on this build are working well for me.

Can we comment this line back in (from 7D internals):

/** We can't control audio settings from ML (yet) **/
//~ #define CONFIG_AUDIO_CONTROLS
?

I remember reading about a memory leak with WAV recording, but it looks like it's disabled in 7D features, as are the wind filter and headphone monitoring. With the specific problems undefined in features, is it safe to turn audio controls back on from internals?


#32
Image buffer support has been added to QEMU.

If you are experiencing problems with external monitors, this is important to you! Here is how you can help:

We must dump the image buffers in various display modes on as many cameras as possible.

To dump the image buffer:

1. Press the trash can button to enter the Magic Lantern menu.
2. From the "DEBUG" menu, select "Dump image buffers". You will now see a countdown: "will dump VRAM in 5 seconds".
3. Quickly enter the display mode you wish to dump (for example, if you are trying to dump 5x zoom mode, you must quickly press 5x zoom before the 5-second counter runs out).
4. Wait for "Dumping VRAMS.............DONE!".

Now the dump is stored on your memory card.
You must repeat this process for about 75 possible display states

A logical order for this task:

1. Dump live view in 1080p video mode (standby).
2. Dump live view in 1080p video mode (recording).
3. Dump live view with 5x zoom enabled (standby).
4. Dump live view with 10x zoom enabled (standby).
5. Dump live view in 1080p 3x crop (600D) (standby).
6. Dump live view in 1080p 3x crop (600D) (recording).
7. Dump live view in 720p video mode (standby).
8. Dump live view in 720p video mode (recording).
9. Dump live view in 480p video mode (standby).
10. Dump live view in 480p video mode (recording).
11. Dump live view in 480p crop (550D) (standby).
12. Dump live view in 480p crop (550D) (recording).
13. Switch back to 1080p (via the Canon Q menu), press dump, and quickly press play (to review a video).
14. Enable raw video recording (via the ML menu), press dump, and quickly press play (to review an MLV video).
15. Disable raw video and switch to photo mode (from the mode dial or dedicated switch).
16. Dump live view in photo mode.
17. Press dump, then quickly press play (to review a photo).
18. For this last one, image review must be enabled and set to at least 5 seconds (Canon menu). After you press "dump VRAM" you must quickly take a picture, and the dump must be captured during image review (the automatic review of a photo after it is taken).

***************************************************************************
19. Plug in an HDMI monitor and repeat steps 1-18 (1080i output).
20. Now select "Force HDMI-VGA" in the ML menu and repeat steps 1-18 (480p output).
21. Disconnect the HDMI monitor, connect an SD monitor (Canon RCA cable), and repeat steps 1-18.
22. From the Canon menu, set your video region to the opposite of what it is currently set to (if it says NTSC, switch to PAL, and vice versa) and repeat steps 1-18.

Don't forget to set your video mode back to your country's settings when you are done.

Not all cameras support all the listed modes; for example, the 5D2 has only two (1080p and 480p), while the 600D has more. Try to dump every mode that your camera supports. The goal is to collect a dump of every possible display mode on every supported camera.

After you have collected all these dumps, compress the highest level folder into a zip archive to preserve the file/folder structure and upload it here.

If we can collect dumps for most of the cameras it will be possible to emulate these various display modes for automated testing on the nightly build server. This will allow a developer to fix current issues with external displays, and ensure that future features work with external monitors.

Thanks for your help with this task!
#33
Hey guys,

I have been hypothesizing about ways to reduce overheating since summer is here and I am shooting outside on a lot of hot days.

I was thinking about the camera's internal voltage regulation. I'm sure that it has several DC-DC converters that regulate the incoming 8.4v (from battery) to the needed 5v, 3.3v for the various components inside. I was wondering if feeding the camera an externally regulated 7.2v might reduce some of the heat generated by the internal DC-DC converters.  What do you think? I'm not so sure using a battery grip makes as much of a difference as using an external powering solution that regulates the voltage.

I wrote an article about some other techniques I have used to reduce heat here: http://www.dpvisualmedia.com/home/home-3/canon-dslr-overheating-workarounds-482

and I also wrote an article about building a shoulder mount that uses externally regulated Anton Bauer batteries to power the camera here:
http://www.dpvisualmedia.com/home/home-3/setting-up-a-properly-balanced-shoulder-mount-144

I'm not trying to spam the forum with my blog posts, I'm truly looking for feedback about reducing heat with externally regulated power. I just thought I would share the info since it's relevant. Just to show that I'm not biased, here is a great  article not written by me that pertains to this subject:

http://www.diyphotography.net/how-to-make-a-dslr-battery-run-4-times-longer/



#34
General Help Q&A / Wireless Audio Sync Module
May 18, 2014, 04:57:36 AM
Hey guys,

I have been experimenting with an Arduino Mini with a built-in wireless radio, like this: http://www.dpvisualmedia.com/home/home-3/anarduino-low-cost-arduino-compatibles-from-usa-292

One of the projects that has been very useful: I made a wireless sync system for all our DSLRs and external audio. Using the "start record from shutter half press" feature of ML, I press a button on my base unit and all cameras plus audio start simultaneously. There is a few frames of variance (delay) because the delay between triggering through the shutter port and recording actually starting varies from take to take. What I would like to inspire is a module that triggers shutter half-press when the first frame actually starts recording, so that a camera can be used as master and there will be no variance between the master camera and external sound. I am currently working on encasing the wireless modules inside one of the battery slots in a battery grip, so the whole thing is self-contained and the Arduino can draw power from the camera battery or AC adapter.

The biggest advantage would be for 50D users (I am one): since there is no reference sound for PluralEyes, it is of the utmost importance that there is a predictable amount of delay (which I can compensate for on my DR-40 audio recorder). This will also be useful for the other models. I plan to have some PCBs fabbed that fit inside a battery grip since I need several; I could offer a kit if anyone else is interested in building the wireless sync system I am using.

So the way it should work is like this: you press SET to start recording; once the first frame actually starts recording, the camera pulls the half-shutter pin low; the Arduino sees this event through digitalRead and then transmits to the Arduino connected to the external sound device, which triggers sound recording. This way, there is a predictable amount of delay (we can calculate the transmission time and program it as a pre-delay value on our external sound recorder).

You can put as many transceivers on the network as you like, to trigger all the other cameras via shutter. The range of the RFM69HW is well over 300 meters. It's such a nice system to use for live events where you are using a lot of static cameras. I am also working on wireless video RX/TX (sadly only composite), but I'm testing some low-cost transmitters because ideally I'd like to have a wireless video feed from each camera and trigger each camera from buttons on a small 7" monitor. That way I can enjoy most of the concert and have visual feedback of battery and memory status, so I know when to make my rounds.

What do you guys think? Can you help me get as far as a module that pulls halfshutter pin low on record? I already have everything after that working. I plan on getting into development on ML too, but I thought this would be really easy to implement for someone who is already experienced, and I could focus my efforts on the external hardware for now.
#35
Hey ML forum,

I have been a fan of your work for so long. I have always been very satisfied with the stable build on my 550D and 600D. I recently started experimenting with RAW on the nightly builds, which has led to me getting a 50D. I have some experience coding on Arduino. You can read about my wireless follow focus and other video-related projects here: http://www.dpvisualmedia.com/home/home-3/diy-wireless-remote-follow-focus-v2-0-focoloco-327

So I would like to try to contribute to the development efforts. My 50D will arrive tomorrow, I'll be setting up the toolchain and reading a lot today. Thanks for letting me be a part of this community!