Topics - a.sintes

#1
General Development / Focus sequencing (5D3)
October 25, 2023, 05:26:31 PM
UPDATED 2024-01-23

Hello,

Just released a new focus sequencing feature that lets us create and manage a list of focus points captured via autofocus, each with a specific transition duration, then easily replay the sequence step by step while recording a video, using a single button push.
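
For readers curious about what such a sequence boils down to, here is a minimal conceptual sketch in C; the type and field names are illustrative assumptions, not the focus_seq module's actual definitions:

```c
/* Conceptual sketch only: a focus sequence as an ordered list of steps.
 * Names, capacity and fields are illustrative assumptions. */
#include <stddef.h>

typedef struct
{
    int focus_position;   /* lens focus position captured via autofocus */
    int transition_ms;    /* duration of the transition towards this point */
} focus_step_t;

typedef struct
{
    focus_step_t steps[32];   /* arbitrary capacity for the example */
    size_t       count;       /* number of recorded focus points */
    size_t       current;     /* step replayed on the next button push */
} focus_sequence_t;

/* advance to the next focus point, wrapping at the end of the list */
static const focus_step_t * focus_sequence_next(focus_sequence_t * seq)
{
    if (seq->count == 0) return NULL;
    const focus_step_t * step = &seq->steps[seq->current];
    seq->current = (seq->current + 1) % seq->count;
    return step;
}
```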

Demonstration video:


Documentation:
The complete documentation (explaining both why lens calibration was a challenge and how to use the feature) is available online via this markdown link.

Download link:
5D3 users: 1.1.3 & 1.2.3 pre-builds are available in this GitHub repository, which is basically an up-to-date fork of Danne's repository, including all the modules as well as the ultrafast framed preview feature and the SD/CF dual-slot free space display.

FAQ:
- does this replace the previous cinematographer module?
Yes, it's a complete replacement, with a much more user-friendly GUI, a lens calibration process and, above all, autofocus-based sequence creation, which was not possible before.

- why does it run on the 5D3 only, since it's mainly an ML module? Is a port to other cameras possible?
Basically, because a change in the lens.c ML code was required first to get a reliable lens_focus function, which the focus_seq module relies on (explained in the documentation)...
This is why it's currently embedded in the only Magic Lantern GitHub repository that provides this function; since the change doesn't impact the legacy code yet, a port to other repositories for other cameras remains highly possible (pending).

Greetings:
Thanks a lot to @names_are_hard for the code review, support & advice, much appreciated!


23-Jan-2024 update:
Following a user comment (@bigbe3) on YouTube, I just fixed the module to deal properly with lenses that don't report the focus distance and/or that get stuck at the initial calibration step due to missing forward lens limit detection, typically the Canon EF 50mm f/1.8 II: the download link above has been updated accordingly.
#2
[UPDATED 2023-10-23]
Hello,

Following a request on our Discord channel, I've implemented a very small feature to replace the legacy "free space" display in the ML top bar (top-right corner) with a new one including:
- a proper refresh (e.g.: after recording a video) of the top bar (previously only set once at camera startup, or after taking a picture)
- an indication of the card concerned by the display (CF or SD prefix)
- a potential display of the two card slots in case of SD+CF usage (e.g.: 5D3)

Note:
- to reduce the size a little, the unit was removed (it is implicitly GB)
- in case of dual-slot usage with sound recording activated, the temperature indication may disappear to leave some space

2023-10-23 update:
The refresh of the values now works properly (no need to reboot the camera): the current ML code (fio-ml.c) relies on property callbacks that are only triggered by specific camera events (e.g. taking a picture, removing a file using Canon menus...), so to avoid taking a picture (shutter wear) I finally compute the remaining free space the "hard way", using a custom computation:
- the first time, ask ML for the remaining free space on the card slots (using get_free_space_32k), then compute the cumulated size of the files within the DCIM/ folder; this gives a close approximation of each card's volume size
- each time the UI needs to be refreshed (which happens quite rarely, but typically after recording something due to the UI reconfiguration), compute the current cumulated size of the files inside the DCIM/ folder, then use a simple subtraction to deduce the remaining free space
For those worrying about the file size cumulation routine, it takes less than about 10 ms (depending on the number of pictures/movies present on the cards), which is completely sustainable as the top-bar refresh only happens occasionally.
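
For reference, here is a minimal standalone C sketch of this computation (standard C, not the actual fio-ml.c code; the helper names, the single-level DCIM/ walk and the calling convention are simplifying assumptions):

```c
#include <stdint.h>
#include <stdio.h>
#include <dirent.h>
#include <sys/stat.h>

/* sum the size of the regular files found in dcim_path
 * (the real code would also recurse into subfolders like 100EOS5D) */
static uint64_t dcim_cumulated_size(const char *dcim_path)
{
    uint64_t total = 0;
    DIR *dir = opendir(dcim_path);
    if (!dir) return 0;

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)
    {
        char path[512];
        struct stat st;
        snprintf(path, sizeof(path), "%s/%s", dcim_path, entry->d_name);
        if (stat(path, &st) == 0 && S_ISREG(st.st_mode))
            total += (uint64_t)st.st_size;
    }
    closedir(dir);
    return total;
}

static uint64_t volume_size = 0;   /* deduced once, at the first refresh */

/* reported_free_once: the free space initially reported by ML
 * (e.g. via get_free_space_32k), only used on the first call */
uint64_t estimate_free_space(const char *dcim_path, uint64_t reported_free_once)
{
    uint64_t used = dcim_cumulated_size(dcim_path);
    if (volume_size == 0)
        volume_size = reported_free_once + used;   /* first call: calibrate */
    return volume_size - used;                     /* later calls: subtract */
}
```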

It looks like this:


And it was pushed to my GitHub repository, which is basically an up-to-date fork of Danne's BitBucket one, also including the ultrafast framed preview feature.

You can download and install 5D3 (1.1.3 & 1.2.3) builds (including modules etc.) using these packages.

Hope it may help someone else!

Note: it may also solve this old ML thread.

Thanks to @WalterSchulz and @names_are_hard
#3
General Development / Ultrafast framed preview (5D3)
August 23, 2023, 05:10:58 PM
UPDATED on 2023-09-28 to summarize the whole discussion thread :-)


After some time spent tweaking Danne's crop_rec codebase for the 5D3, I finally got some interesting results around the framed preview, with visible performance increases, leading to an ultrafast framed preview feature.


Latest feature overview video:


warning: the "Framed preview" menus seen in this video are not the latest ones, please read below for the updated version.


Technical details:

The purpose of this ultrafast feature is to reduce as much as possible the computation time required to perform the framed RAW preview rendering in LiveView in order to save CPU time to do other tasks (e.g.: recording RAW data).

This is achieved by precomputing everything possible, notably the RAW & LV buffer offsets and the RGB gamma transformations, so the drawing itself consists of the lightest possible loop doing linear accesses to data through simple pointer dereferencing.
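
To illustrate the idea, here is a simplified sketch (not the module's actual code; the struct layout, LUT size and bit depths are assumptions) showing how the per-frame work reduces to one linear loop over precomputed tables:

```c
#include <stdint.h>
#include <math.h>

/* everything below is precomputed once per resolution change */
typedef struct
{
    uint32_t *raw_offsets;     /* source offsets into the RAW buffer */
    uint32_t *lv_offsets;      /* destination offsets into the LV buffer */
    uint8_t   gamma_lut[1024]; /* 10-bit RAW value -> 8-bit display value */
    int       count;           /* number of preview pixels */
} preview_cache_t;

static void build_gamma_lut(preview_cache_t *c)
{
    for (int i = 0; i < 1024; i++)
        c->gamma_lut[i] = (uint8_t)(255.0 * pow(i / 1023.0, 1.0 / 2.2));
}

/* per-frame drawing: the lightest possible loop, pure pointer dereferencing */
static void draw_preview(const preview_cache_t *c,
                         const uint16_t *raw, uint8_t *lv)
{
    for (int i = 0; i < c->count; i++)
        lv[c->lv_offsets[i]] = c->gamma_lut[raw[c->raw_offsets[i]] >> 4]; /* 14-bit -> 10-bit LUT index */
}
```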

The latest version of this feature only requires a 675KB memory allocation to work, even when dealing with half resolution, which should be sustainable on most cameras.

By doing this for both colored and grayscale previews, we instantly get a smoother preview (both while previewing & during recording), allowing us to potentially reduce the sleep times that are defined to leave enough headroom for the CPU to record the RAW data.

The cache precomputation is managed by simply computing a determinant value that changes whenever a new RAW video resolution is selected, so it's quite transparent code-wise.
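
As a rough illustration of this determinant trick (the function name and the hash-like combination below are my own assumptions, not the module's code):

```c
#include <stdint.h>

static uint32_t cache_determinant = 0;

/* rebuild the precomputed tables only when the geometry actually changed */
static void ensure_preview_cache(int raw_w, int raw_h, int lv_w, int lv_h,
                                 void (*rebuild_cache)(void))
{
    uint32_t d = (uint32_t)raw_w * 0x10001u ^ (uint32_t)raw_h * 0x101u
               ^ (uint32_t)lv_w  * 0x11u    ^ (uint32_t)lv_h;
    if (d != cache_determinant)
    {
        rebuild_cache();          /* e.g. recompute offsets and gamma LUT */
        cache_determinant = d;
    }
}
```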


Ultrafast feature menu current organisation (following @Grognard's recommendations):

Framed preview
    Engine: legacy | ultrafast
    Comportment
        Idle
            Style: colored | grayscaled
            Resolution: half | quarter
        Recording
            Style: colored | grayscaled
            Resolution: half | quarter
    Timing: legacy | tempered | agressive
    Statistics: off | on

about Comportment:
We can now choose both style & resolution based on the raw_recording_state of the camera (idle or recording), which is more natural for the user (adaptive raw preview).
That said, we can still call the raw preview routine by forcing RAW_PREVIEW_COLOR_HALFRES or RAW_PREVIEW_GRAY_ULTRA_FAST (legacy comportment), meaning we can again switch between half-resolution colored and quarter-resolution grayscale during mlv_play replay (this was broken before), with either the legacy or the ultrafast framed preview engine.
The adaptive mode also helps avoid the colored/grayscale switch during recording when dealing with "LV freeze" framing, which is more comfortable.
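
A minimal sketch of what this adaptive selection amounts to (the enums, settings struct and function names here are illustrative assumptions, not the module's actual code):

```c
/* pick the preview style & resolution from the idle vs. recording settings;
 * a caller can bypass this by forcing a legacy mode explicitly */
typedef enum { STYLE_COLORED, STYLE_GRAYSCALED } preview_style_t;
typedef enum { RES_HALF, RES_QUARTER } preview_res_t;

typedef struct
{
    preview_style_t style;
    preview_res_t   resolution;
} preview_settings_t;

static preview_settings_t idle_settings      = { STYLE_COLORED,    RES_HALF    };
static preview_settings_t recording_settings = { STYLE_GRAYSCALED, RES_QUARTER };

/* is_recording would come from the camera's raw recording state */
static preview_settings_t select_preview_settings(int is_recording)
{
    return is_recording ? recording_settings : idle_settings;
}
```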

about Timing:
- legacy relies on the current sleep statements as in Danne's repository
- tempered only tries to speed things up when idling (or replaying via mlv_play), which is a good compromise (faster before / safe during recording)
- agressive also tries to reduce the sleep values when recording to speed up the display a bit (needs more testing: it may lead to unexpected recording stops depending on write buffer saturation)


Performance increases:

We can find below the current benchmark statistics dumped on my 5D3, the values being averaged over 1000 display loops:

style      engine     horiz. res.  timing     draw (ms)  gain        fps      gain
color      legacy     half         legacy       180.211  ref.         4.451   ref.
color      ultrafast  half         legacy        78.703  x2.29        8.200   x1.84
color      ultrafast  half         ultrafast     78.230  x2.30        9.360   x2.10
color      ultrafast  quarter      legacy        51.162  x3.52 (*1)  10.429   x2.34 (*1)
color      ultrafast  quarter      ultrafast     53.157  x3.39 (*1)  12.602   x2.83 (*1)

style      engine     horiz. res.  timing     draw (ms)  gain        fps      gain
grayscale  legacy     quarter      legacy        46.452  ref.        10.840   ref.
grayscale  ultrafast  half         legacy        33.810  x1.37 (*2)  12.639   x1.17 (*2)
grayscale  ultrafast  half         ultrafast     31.939  x1.45 (*2)  17.840   x1.65 (*2)
grayscale  ultrafast  quarter      legacy        22.598  x2.06       15.184   x1.40
grayscale  ultrafast  quarter      ultrafast     18.137  x2.56       23.475   x2.17

(*1): no legacy reference
(*2): no legacy reference, but we get a performance gain even when compared to legacy quarter resolution!

As we can see, the look-up table technique makes the preview drawing routine roughly 2 to 2.5 times faster than before, which translates into a display frame rate about twice as high overall: the good news is that we now get a very usable quarter-resolution colored preview (~13 fps) and, above all, an almost "realtime" (24 fps) grayscale preview, even while recording.


Source code and Pull Request:

The (5D3) source code is currently available in my GitHub repository, which is basically a fork of Danne's BitBucket one in which I keep experimenting (currently working on lens.c modifications and a reliable focus sequencing module).

Developers, please help the ML community by reviewing the ultrafast framed preview Pull Request opened on the "magiclantern_simplified" repository, so it may benefit non-5D3 users.


Download links:
5D3 (1.1.3 & 1.2.3 firmwares) download links are now available directly here.

These packages are a complete replacement for Danne's, the code being up to date with his latest changes (February) and including all the ML modules plus the Cinematographer-mode one: a fresh install is therefore recommended to use it properly.


Final words:

Thanks a lot to names_are_hard, WalterSchulz, Danne and all the testers out there!
#4
Raw Video / HDMI monitoring on 5D3 in crop mode
August 18, 2023, 09:56:53 AM
Hi!

I really (really) enjoy shooting videos with my 5D3 using Danne's latest ML build, and to be honest the only BIG feature currently missing for me to be completely happy is the incompatibility between crop modes (e.g. the 3.5K preset) and HDMI output, which leaves me unable to use an external monitor with a properly framed preview.

The question is: is there something we can do about it?
I really want to understand more about how it works and eventually help to fix it if possible...

My current understanding is that the ML developers' general approach to previewing video in LiveView and/or outputting over HDMI has always been to first try the hard(ware) way, tweaking camera registers, which is very understandable as it may result in glorious real-time displays that can be very close to a proper framed preview.

Anyway, sometimes it's just not possible due to hardware limitations, which is why the current "framed preview" process on the 5D3 falls back to a 100% software approach: capturing the RAW buffer data window and displaying it by drawing pixels into the LiveView (feeding the YUV422 buffer).

Of course this is not ideal given the low CPU horsepower, so the framed preview is quite slow and of debatable quality (potentially switching to half horizontal resolution, in grayscale) in order to leave enough resources for the RAW data writing itself, but at least it's always properly framed and usable enough (and I'm now convinced we can speed up this LV buffer feeding process a bit using look-up tables and pointer dereferencing; working on it).

My final question is then: would it be possible to implement HDMI output using the same software-only principle as the framed preview?
Something like: when an HDMI connection is detected, set the HDMI output to a fixed monitor-standard resolution (e.g. 1280x720, YUV422), then use a software callback to create a properly framed image at this resolution from the RAW data buffer and stream it through HDMI (potentially disabling the LV display to increase performance).
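
To make the proposal more concrete, here is a heavily simplified sketch of such a callback (the function name, buffer layouts and the grayscale-only, nearest-neighbour sampling are assumptions; no gamma, debayering or actual HDMI plumbing is shown):

```c
#include <stdint.h>

#define OUT_W 1280
#define OUT_H 720

/* build a fixed-resolution 1280x720 UYVY frame from a 14-bit RAW buffer,
 * grayscale only, by nearest-neighbour sampling of the active area */
void raw_to_uyvy_720p(const uint16_t *raw, int raw_w, int raw_h,
                      int skip_x, int skip_y,       /* active area offsets */
                      uint8_t *uyvy /* OUT_W * OUT_H * 2 bytes */)
{
    for (int y = 0; y < OUT_H; y++)
    {
        int ry = skip_y + y * (raw_h - skip_y) / OUT_H;
        for (int x = 0; x < OUT_W; x += 2)          /* UYVY: 2 pixels per 4 bytes */
        {
            int rx0 = skip_x + x * (raw_w - skip_x) / OUT_W;
            int rx1 = skip_x + (x + 1) * (raw_w - skip_x) / OUT_W;
            uint8_t y0 = raw[ry * raw_w + rx0] >> 6; /* 14-bit -> 8-bit, no gamma */
            uint8_t y1 = raw[ry * raw_w + rx1] >> 6;
            uint8_t *p = &uyvy[(y * OUT_W + x) * 2];
            p[0] = 128;   /* U (neutral: grayscale) */
            p[1] = y0;    /* Y0 */
            p[2] = 128;   /* V (neutral: grayscale) */
            p[3] = y1;    /* Y1 */
        }
    }
}
```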

Does this approach sound possible, or am I missing something?

I'm not at all comfortable with low-level register stuff, but I can help with the pure software programming part (YUV framed image creation).

Thanks in advance for your insights.
#5
Hello dear ML enthusiasts,

Just released this music video for a dark folk band (with Leo Margarit from Pain Of Salvation on the drums), shot entirely with a 5D3 & Magic Lantern.


camera: Canon EOS 5D mark III
lens: Canon EF 24mm f/2.8 IS USM
cards: 64GB CF (Komputerbay Professional 1066x UDMA 7) + 64GB SD (SanDisk Extreme PRO SDXC 200MB/s, UHS-I, class 10, U3, V30)

firmware:
Canon firmware 1.1.3
Magic Lantern Danne's crop_rec_4k_mlv_snd_isogain_1x3_presets build (2023-02-03)
cinematographer-mode module to deal with focus sequences

Magic Lantern configuration:
3.5K 1:1 centered x5 (x1.7 crop factor) preset
3072x1728 resolution (16:9)
14-bits lossless data format
fast grayscale framed preview
240MHz SD overclock
card spanning

camera settings:
23.976 fps
100-200 ranged ISO
f/5.6 aperture
1/48 (180°) shutter speed

lighting setup:
2x Aputure Amaran 200x (key lights)
6x construction lights (back lights)
Neewer RGB LED (eye light)

post-production:
Magic Lantern Video File System
Adobe After Effects (DNG to Apple ProRes conversion via Camera Raw: white balance, exposure, highlights & shadows, micro-contrasts, dehaze, color mixing, color noise removal, masked sharpening...) applying the ML-Log profile
Topaz Video AI (4K upscaling via Gaia engine, 2x slow-motion via Apollo engine)
BlackMagic DaVinci Resolve Studio (initial review, editing, crop, stabilization, wires removal, clip & timeline color-grading...) with Neat video (noise removal) and Dehancer (film print emulation) plugins
Apple ProRes export via Voukoder
#6
Modules Development / focus sequencing
March 16, 2023, 10:48:36 PM
Hello!

Having fun creating my very first Magic Lantern module on my 5D3, trying to extend the built-in rack focus feature so we can deal with a more complex sequence of focus points that can easily be edited dynamically and also includes variable transition speeds.
We can then replay the sequence during video recording by simply pushing a single button to switch from one focus point to the next one in the list.

Detailed purpose, source code, pre-built .mo and documentation can be found here.

Enjoy, and thank you all for the amazing work around ML, it's simply stunning!