Danne's crop_rec_4k experiments for EOS M

Started by Danne, December 03, 2018, 06:10:17 PM

Danne

Thanks for sharing findings.
You can enable auto focus in the hack section under raw video. Check under advanced and enable AF on. Not very reliable.

omygolly

Just wanted to pop in and say a big thanks to @Danne (and everyone else who contributed) for the latest builds. The added functionality for the customized buttons is brilliant! I LOVE being able to toggle false colors, a huge help for me as I've been struggling with getting correct exposure. Now with the functions sorted, I'm eagerly watching all the @gabriielangel posts and testing out optimal performance for each mode (w/wo HDMI). You guys are great.

masc

Dots can be removed using focus pixel maps. Interpolation method 3 looks nice for this example:
5D3.113 | EOSM.202

gabriielangel

Quote from: masc on May 04, 2022, 09:26:43 PM
Dots can be removed using focus pixel maps. Interpolation method 3 looks nice for this example:

Yes, I just tried with the other clips and it gives a massive improvement, Thanks!

Danne

Chroma smooth 3x3 is not that bad either.

gabriielangel

Quote from: Danne on May 05, 2022, 09:17:34 PM
Chroma smooth 3x3 is not that bad either.

Actually, this helps a lot on the hair images I was referring to. It gives the shine back. Thanks!

Your suggestions prompted me to dig deeper into MLV App (I'm still new to the whole ML pipeline).

A question: I saw in MLV App that there is an option to stack TIFFs to get a clean image (Averaged Frame).
Would it be in the realm of possibilities to, instead of scanning the whole .mlv file and averaging it into a single photo, output a sequence of TIFFs where each frame in the .mlv file is averaged with its nearby frames? (And just clean the shadows like magic)

For example, given frames 1,2,3,4,5,6,7,8,9...
Frame 1= avg(1,2,3,4,5,6,7,8)
Frame 2= avg(2,3,4,5,6,7,8,9)

Or maybe
Frame 4=avg(1,2,3,4,5,6,7,8)
Frame 5=avg(2,3,4,5,6,7,8,9)
etc.

I "Think" that 8 frames plus slow camera movements would work, maybe?

Danne

Possible. Check the scripts included in MLV App. You could tweak those directly without recompiling.

gabriielangel

Quote from: Danne on May 06, 2022, 07:08:30 AM
Possible. Check the scripts included in MLV App. You could tweak those directly without recompiling.

Are you referring to TIF_CLEAN.command and enfuse_average.command?

If so, yes this is within my reach.

Danne


gabriielangel

Frame Stacking Noise Reduction Used on Video

Note: Google Drive previews the videos in the links with YouTube-like compression.
To get the full ProRes 422 quality (necessary to evaluate the noise processing), you can download the files.


I have been running a lot of tests lately to devise an optimal pipeline for a project. Noise being something to always take into consideration, I was wondering if Frame Stacking, used regularly when taking stills, could be applied to video. There is a built-in script in MLV App based on this idea. I think the original intent was to use the method mainly with frame burst sequences, but I decided to give it a try. Although the results are far from perfect, I thought it was still worth posting, as the method can be used creatively if you take some precautions. Also, maybe someone reading this will have a stroke of genius and figure out how to make it more "General-Purpose".

The script is called TIF_CLEAN.command and can be found in MLV App's export settings:

[Image: 00-MLVApp-Export-Settings]

The process will export your .mlv file as a TIFF sequence which is then post-processed by the script. A ProRes .mov file will also be created from the TIFFs.
On first run, Hugin, the program this script is built around, will be installed along with its dependencies. The download and install process should take less than 20 minutes. Unfortunately, in my case, partly because I am using an older OSX, I ran into permission problems and a lot had to be installed manually. There are plenty of cues appearing in the terminal window for you to figure it out, IF you have done those kinds of installs before... So it took me about an hour to get it running. If you happen to be terminal or command-line averse, save this for a day when you have plenty of spare time.

Some examples (video file links after the images; all shot with the EOS M, 23.976 fps, 1/48 shutter).

In order to highlight the strengths and weaknesses of the process, I went a little outside of what would be done normally. I exposed at least 2 stops below optimal and then added 2 stops in MLV App to increase the noise.
Being able to record underexposed (knowing you can deal with some of the noise later) has its advantages, as it allows you to record higher resolutions in 14-bit almost continuously (more detail in the shadows), keep highlights fully intact, etc. Also, when shooting in dimly lit environments such as churches and concert halls, a tiny inconspicuous EOS M which can be fitted to a small gimbal can be preferable to a heavier full-frame camera.

Example 1: Slow Pan, 1 Axis, Tripod, EF-M 32mm f/1.4 @ f/2.2, ISO 100

[Image: 01-M06-1204-Original]
Areas of interest marked with the green arrows. We have shadows, OK-ish exposed, and underexposed.

[Image: 02-M06-1204-Original-Plus2-Stops]
With 2 stops added (the noise is clearly visible to the right of the image).

[Image: 03-M06-1204-Processed]
After processing. The noise is completely gone on the recycling bin, and a little movement is left in the lifted shadows to the right of the image.

Original Video: https://drive.google.com/file/d/1lsmxm3iyz4rUvq5QkqQcVY2XobWURuGb/view?usp=sharing

Processed Video: https://drive.google.com/file/d/1n6rY61uKT1Fc-6CLzxsnWyZxfR1ho9J-/view?usp=sharing


Example 2: Slow Pan, 2 Axes, Tripod, EF-M 32mm f/1.4 @ f/2.2, ISO 100

[Image: 04-M07-1426-Original]
Original.

[Image: 05-M07-1426-Original-Plus2-Stops]
2 Stops Added.

[Image: 06-M07-1426-Processed]
Processed. See how the details in the hair and on the shirt are preserved, and the noise in the leaves is almost gone.

[Image: 07-M07-1426-Processed-And-Smooth-Alias-3-pass]
Smooth Aliasing 3 pass added on top for good measure. This adds a motion blur to the footage, which could be helpful in some cases, such as the next examples.

Original Video: https://drive.google.com/file/d/1AqdLu6tQyxHspNqyFfKuEKqDb7xg8uGN/view?usp=sharing

Processed Video: https://drive.google.com/file/d/1wVQbrAL9cuOreMxAcWJaWgW4b0gLdVHY/view?usp=sharing

Processed with Smooth Aliasing 3 pass added: https://drive.google.com/file/d/1D1X3WZ4V_TZUJ7CfXBX4YswOojvmTbWT/view?usp=sharing


Example 3: Fast Motion, All Axes, Handheld, EF-M 15-45mm @ 15mm f/3.5, ISO 800

[Image: 08-M07-2344-Original]
After a few exchanges with Danne, it became obvious that this method is severely challenged when it has to align images with movement on several axes at the same time. So I shot this one handheld, but with a stabilized lens. One thing to note is that in its original form, the script fails silently when images cannot be aligned properly (you end up with a video with a frozen frame where the error occurred; the error tolerance is 3 pixels). To give it a larger margin, I edited the script so that it has a 100-pixel tolerance, allowing plenty of movement. The following options need to be added to align_image_stack (-t 100 -g 20). You end up with: align_image_stack --use-given-order -t 100 -g 20 -a ...
The -g option gives a 20x20 grid (as opposed to the default 5x5) so that, hopefully, the movement does not affect the whole image (more apparent in example 4).
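
In case it helps anyone reproduce this, here is a rough sketch of the modified alignment call (assuming Hugin's align_image_stack is on the PATH; the frame names are made up, and the actual TIF_CLEAN.command script builds its own file list):

```python
# Rough sketch of the modified alignment call discussed above, assuming Hugin's
# align_image_stack is on the PATH. The frame names are made up; the actual
# TIF_CLEAN.command script assembles its own list of 5 frames per output image.
import subprocess

window = [f"frame_{n:06d}.tif" for n in range(1, 6)]  # current frame +/- 2 neighbours

subprocess.run(
    [
        "align_image_stack",
        "--use-given-order",   # keep the frames in temporal order
        "-t", "100",           # allow up to 100 px of control-point error
        "-g", "20",            # 20x20 control-point grid instead of the default 5x5
        "-a", "aligned_",      # prefix for the aligned output TIFFs
        *window,
    ],
    check=True,
)
```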

[Image: 09-M07-2344-Switch-t-100-g-20-Plus2-Stops]
Processed. As you can see, when an object is moving fast enough to cross the 5-frame boundary within a sampling period, we end up with ghosting (the script stacks a total of 5 frames: the current one, 2 before and 2 after).

[Image: 08b-M07-2344-Processed-Plus-Smooth-Aliasing-1-Pass]
Processed with Smooth Aliasing 1 pass added. This adds some motion blur, but not enough to hide the ghosting.

[Image: 10-M07-2348-Switch-t-6-g-10]
Here, I filmed the same sequence but without a car in the middle of the frame. You can still see some ghosting on the distant cars (in the video), but the bulk of the image isn't too affected.

Original Video: https://drive.google.com/file/d/12ScYGKewG3GkjNaP_xzQ2JEJ9X0FLyqT/view?usp=sharing

Processed Video: https://drive.google.com/file/d/1n3_ORRINhAWGQ8vbx9NbhSWLzjD7BsK2/view?usp=sharing

Processed with Smooth Aliasing 1 pass added: https://drive.google.com/file/d/1VL225cvRZLfh22tLGQMQSy5XWofCMVeK/view?usp=sharing

Sequence without Car: https://drive.google.com/file/d/17LyqUbVZk0YpyDo7yq76W3TnQhq8KnHc/view?usp=sharing


Example 4: Fast Motion, All Axes, Handheld, EF-M 15-45mm @ 15mm f/3.5, ISO 100

[Image: 11-M09-1650-Original]
Original. To find out if night time had anything to do with it, I reproduced example 3 during the day.

[Image: 12-M09-1650-Switch-t-100-g-1-Plus2-Stops]
Processed with options -t 100 -g 1
The ghosting is still there, and when the car passes by, the whole image shakes as if there were an earthquake!

[Image: 13-M09-1650-Switch-t-100-g-35-Plus2-Stops]
Processed with options -t 100 -g 35
This processes using a 35x35 grid (as opposed to the 1x1 of the previous example). The image is a lot more stable, but still not enough. A finer grid would be required, and the ghosting still needs to be addressed.

Original Video: https://drive.google.com/file/d/1Er3r13Vu9n-d1SNKwcvYNESK1MCuno1h/view?usp=sharing

Processed Video 1: https://drive.google.com/file/d/1qW6M0MEFI8TZE2eq9O2sdP9N4mcJ92ZR/view?usp=sharing

Processed Video 2: https://drive.google.com/file/d/1qW6M0MEFI8TZE2eq9O2sdP9N4mcJ92ZR/view?usp=sharing


Example 5: No Motion, Tripod, EF-M 32mm f/1.4 @ f/2.0, ISO 3200

[Image: 14-M09-2246-Original]
Original. Here, the only light source is the moon, at ISO 3200. This example would benefit from more frames in the stack, and maybe from Dual ISO.

[Image: 15-M09-2246-Processed]
Processed. The noise has been reduced substantially. There is still some movement in the noise, but it shows how powerful this stacking method can be under ideal conditions.

Original Video: https://drive.google.com/file/d/1FrkHYPpgIQbZ_HL4DY1mPzYivtqZQ6af/view?usp=sharing

Processed Video: https://drive.google.com/file/d/17lLcWzV-Lu-xLSV4MkfN8oyWMZOj_EzK/view?usp=sharing

Conclusion

The way it is now, if planned accordingly, this method could allow filming still subjects such as flowers and wildlife (as long as it isn't too windy, since too much movement creates artifacts), statues, landscapes, architecture, etc. Using a tripod (or maybe a gimbal) is mandatory.

This is beyond my current technical abilities, but maybe someone with more skills can find a clever way to use the deghosting_mask function present in Hugin. Using a much finer grid would also help a lot (I could not go beyond 35x35 and get a complete video sequence). If there was a way for the system to detect motion and avoid blending frames there, it would also help.

Also, this method isn't for everyone. With a six-core 3.33 GHz CPU, it takes 6 minutes per second of footage to process at 2.5k resolution.

I have seen similar results (minus the major artifacts) with Topaz Video Enhance AI at 4-6 minutes per frame when using the CPU.
As some of you know, when using temporal noise reduction such as Neat Video or the NR present in DaVinci Resolve Studio, some details are lost. This is why Frame Stacking is interesting, as fine details are preserved a lot more.

So, this is what has been tested so far; anyone reading this with the same idea will have a few examples to start with :)


References:

https://wiki.panotools.org/index.php?title=Align_image_stack&oldid=15916
https://manpages.debian.org/testing/hugin-tools/deghosting_mask.1.en.html
http://enblend.sourceforge.net/enfuse.doc/enfuse_4.0.0.pdf

Example of deghosting_mask applied to a still panorama: https://www.miltonpanoramic.co.uk/deghosting.php

.mlv file for Example 4: https://bit.ly/3yvi1yf

far.in.out

Hi. Thanks for the updated FW!
What is the difference between the Apr 30 and May 04 builds?
EOS M (was 600D > 50D)

Danne


gabriielangel

Resolution, Crop Modes and FOV

I ran a few tests to decide which crop mode / resolution I would use for an upcoming project, so I'm sharing my results here in case they're of use to anyone wondering...

I could not find a proper, affordable test target, so I decided to create one myself. The chart is 23x13 inches (60.96 x 33.02 cm) at 600 DPI. It can be cropped, but should not be resized, otherwise you'll lose some details. I had it printed at 600 DPI, inkjet, on glossy paper at the local print shop. Total $25, and it gets the job done.
You could also print portions of it on a regular sheet of paper. Just set your printer to 600 DPI.

Link to the chart: https://bit.ly/3lL7jft

[Image: Gabriiels-Test-Chart-V1-Preview]

The Videos:

In the first video, I recorded the chart with different lenses at 2.5k, 2.8k and 5k frtp. I then recorded various related real-life scenes to validate what I observed when reviewing the chart recordings.

In the second video, I recorded a football field right from the center at 2.5k, 2.8k and 5k frtp, at different focal lengths, to have a reference for the relationship between Focal Length, Crop Mode and FOV.

Link to the Resolution Video: https://bit.ly/3ajK6hL
Link to the FOV Video: https://bit.ly/38LQ2jn
Link to the MLV files for the resolution tests with the EF-M 32mm lens: https://bit.ly/3LVaGea
(I did not include every lens tested because of space restrictions.)
You have to download the videos in order to view them at full ProRes 422 quality.

What to pay attention to when looking at the chart:

[Image: Gabriiels-Test-Chart-V1-Instructions2]

A: When looking at those patterns, you can see the size of details at which the camera starts creating moiré and other artifacts.

B: The various wheels help expose any aliasing.

C: The text at various sizes, along with the various patterns, helps with the evaluation of sharpness.

D: The checkerboard patterns also help in observing moiré and artifacts created when recording repetitive patterns of different sizes. The sharp contrast between blocks will also expose aberrations or high-contrast artifacts, if present.

The Resolution Test

In order to facilitate comparison, I resized the 2.8k and 5k recordings to 2520x1054 (2.5k) in the video. Downscaling the pixel-binned 5k has a serious advantage: a significantly sharper image. When using DaVinci Resolve or Topaz Video Enhance AI to upscale, sharpness and most details are preserved, so you will end up with a sharper image than if you left the 5k image as-is.
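
(For anyone wanting to reproduce the resize step, here is a minimal sketch using ffmpeg; this is just one possible way to do it, and the file names and ProRes profile are placeholders, not necessarily the exact settings used for the linked videos.)

```python
# Minimal sketch of the resize-for-comparison step, assuming ffmpeg is
# installed. Input/output names and the ProRes profile are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "clip_5k.mov",
        "-vf", "scale=2520:1054:flags=lanczos",   # downscale to the 2.5k frame size
        "-c:v", "prores_ks", "-profile:v", "2",   # ProRes 422
        "clip_2520x1054.mov",
    ],
    check=True,
)
```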

Also to facilitate comparison, I moved the camera closer to or further away from the target to get the same chart framing, regardless of the recording resolution / crop mode.

As expected, the 2.5k and 2.8k recordings are quite clean, with moiré present only at the smallest detail levels. There are focus dots "trapped" between the narrowest-spaced lines. Changing the interpolation method in "Fix Focus Dots" in MLV App helps, but will not get rid of all of them. Interpolation method 3 gave the best results.

At 5k, some aliasing is present when recording steep oblique lines, but luckily, the artifacts are not as obvious (if at all) when recording real-life targets (See second portion of the Video).

The FOV Test

[Image: FOV-Test-Example]

For this test, the camera was placed right at the center of the football field. Only the focal length was changed on the EF-M 15-45mm lens. As you can see (in the video), crop modes have a serious impact on the FOV (field of view). Also, by looking at the net in the goal, you can evaluate when the focal length / detail size is large enough to avoid moiré when filming objects with repetitive details (nets, fences, hair, fur, screens, rooftop tiles, etc.).

Why is 5k frtp so important?

There is very little difference between 2.8k and 2.5k. You can, for the most part, compensate by moving a few steps closer or away from the subject. The price to pay for the extra pixels in 2.8k is a less fluid recording experience and shorter recording times.

5k frtp gives not only a wider field of view, but also a significantly different perspective when compared to a similar 2.8k or 2.5k framing. frtp allows real-time preview (as opposed to the framing preview of 2.8k and 2.5k), and downscaling 5k to 2.5k for editing makes the noise finer, less obtrusive and quite pleasing.
The mode has issues with steep, contrasty oblique lines, but the moiré has a different quality, far less obvious than in the other modes.

As a rule of thumb, avoiding fine details in the frame, getting as close as possible to the subject and avoiding extra-wide lenses will let you get around most problems. If shooting a scene with a lot of steep obliques, using 2.8k or 2.5k would be a good idea. It would also be good to favor lower-contrast, less sharp lenses (but keep in mind that vintage sharpness still has clarity, which is definitely not the case with the softness of the EF-M 15-45mm at some focal lengths and apertures, as you can see in the video).
I do not own the proper adapters to fit older glass on my EOS M, but it would be interesting to compare different lenses to the well-known EF-M lenses in such a controlled fashion, for reference.

And the Focus Dots?

When using manual focus, is there a way to prevent the camera from generating those completely? It would help a lot when shooting product shots or things like fences and lace.


Reference for Beginners:
https://martinbaileyphotography.com/2017/04/10/the-effect-of-subject-distance-and-focal-length-on-perspective-podcast-568/



tupp

Quote from: gabriielangel on May 10, 2022, 08:52:32 PM
Frame Stacking Noise Reduction Used on Video
[snip]
I have been running a lot of tests lately to devise an optimal pipeline for a project. Noise being something to always take into consideration, I was wondering if Frame Stacking, used regularly when taking stills, could be applied to video. There is a built-in script in MLV App based on this idea.
[snip]
After processing. The noise is completely gone on the Recycling bin, and a little movement is left in the shadows which were lifted to the right of the image.

Isn't that basically what the ML "HDR Video" feature does?  It "stacks" two consecutive video frames, each with differing gain (iso), to yield higher capture dynamic range.  Hence, lower noise from combining those two consecutive frames.

Here is an old ML HDR Video tutorial that shows the DR increase along with the motion smearing.

gabriielangel

Quote from: tupp on May 27, 2022, 08:01:04 AM
Isn't that basically what the ML "HDR Video" feature does?  It "stacks" two consecutive video frames, each with differing gain (iso), to yield higher capture dynamic range.  Hence, lower noise from combining those two consecutive frames.

Here is an old ML HDR Video tutorial that shows the DR increase along with the motion smearing.

Dual ISO is not the same as Frame Stacking (or Averaging)

For example, given this frame, shot at ISO 800: (Click on image to see full size)
M22-2300-Before" border="0

Which looks like this after averaging 75 frames (in MLV App export settings, you select TIFF and then change "Sequence" to "Average Frame"):
[Image: M22-2300-After-Stacking-75-Frames]

Look at the details on the brick wall. Everything is almost intact after stacking, whereas those details were mostly hidden by the snowstorm before...

Using dual ISO on this sequence would avoid the lights being clipped, but it would not help reduce noise, because there is very little light hitting the scene as a whole. You would end up with either a lot of noise in the shadows, or a very underexposed image overall.
Dual ISO is used to preserve highlights. It lowers the sensitivity to properly expose the highlights and then uses a normal sensitivity to properly expose the rest. When you have plenty of light available to begin with, you can expose everything properly using Dual ISO (But there are often artifacts at the transition points).

Frame Stacking increases the Signal-to-Noise ratio. The noise is random and is completely different across frames.
On a still shot, the "Signal" (the subject) doesn't move or at least varies at a much lower pace than the noise itself.
Therefore, when you average, the "Signal" average is higher than the "Noise" average, so you end up with an image having a lot more "Signal" and a lot less "Noise", as you can see in the example above.
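
A quick toy example of the math (not MLV App code, just made-up numbers): averaging N frames of independent random noise lowers the noise standard deviation by roughly a factor of sqrt(N), so 75 frames give close to a 9x reduction.

```python
# Toy numerical example (not MLV App code): the random noise in an average of
# N independent frames drops by roughly a factor of sqrt(N), so stacking
# 75 frames cuts the noise standard deviation by about 8.7x.
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0                                          # a constant "subject"
frames = signal + rng.normal(0.0, 10.0, (75, 100_000))  # 75 noisy copies of it

print(frames[0].std())            # ~10   : noise level of a single frame
print(frames.mean(axis=0).std())  # ~1.15 : noise after averaging 75 frames (10 / sqrt(75))
```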

tupp

Quote from: gabriielangel on May 27, 2022, 02:06:08 PM
Dual ISO is not the same as Frame Stacking
Agreed, but "HDR Video" IS the same as frame stacking.

"HDR Video" is a different ML feature from "Dual ISO."

Did you watch the video that I linked above?  Go to 07:05, and you will see the playback of alternating frames with differing gain (iso).  ML HDR Video creates those alternating frames so that they can be stacked.


Quote from: gabriielangel on May 27, 2022, 02:06:08 PMUsing dual ISO on this sequence would avoid the lights being clipped, but it would not help reduce noise, because there is very little light hitting the scene as a whole. You would end up with either a lot of noise in the shadows, or a very underexposed image overall.
I do not agree here.  Although I have never tried Dual ISO, I have seen the work of others, and Dual ISO and HDR Video (frame stacking) give similar noise reduction and dynamic range results.

It is conceivable that Dual ISO reduces detail, because it necessarily halves the vertical resolution.

Nevertheless, my post above referred to the "HDR Video" feature of ML -- it didn't refer to "Dual ISO."


Quote from: gabriielangel on May 27, 2022, 02:06:08 PM
Dual ISO is used to preserve highlights. It lowers the sensitivity to properly expose the highlights and then uses a normal sensitivity to properly expose the rest. When you have plenty of light available to begin with, you can expose everything properly using Dual ISO (But there are often artifacts at the transition points).
Artifacts and noise are two different things.

The primary drawbacks that Dual ISO suffers are aliasing/moire and the halving of the vertical resolution.  The main problems with HDR Video are motion smearing and the halving of the frame rate.

It is my understanding that Dual ISO and HDR Video are based on the same principle.  Dual ISO "stacks" "interlaced" frames while HDR Video stacks consecutive frames.


Quote from: gabriielangel on May 27, 2022, 02:06:08 PM
Frame Stacking increases the Signal-to-Noise ratio.
So does the ML Dual ISO feature, as it increases dynamic range (very similar to signal-to-noise ratio), thus reducing the relative noise.

Again, what I was referring to in the above post is the old ML "HDR Video" feature, not "Dual ISO."


Quote from: gabriielangel on May 27, 2022, 02:06:08 PM
The noise is random and is completely different across frames.
On a still shot, the "Signal" (the subject) doesn't move or at least varies at a much lower pace than the noise itself.
Therefore, when you average, the "Signal" average is higher than the "Noise" average, so you end up with an image having a lot more "Signal" and a lot less "Noise", as you can see in the example above.
Yes.  The same dynamic range increase happens with Dual ISO and with HDR Video.

By the way, the two "interlaced" frames which are "stacked" in Dual ISO have different noise patterns (barring fixed-pattern noise).

Again, I was referring in the above post to the ML feature of "HDR Video."  I was not referring to "Dual ISO."

clubsoda

In the newest build I am not able to change ISO with the customized buttons when set to INFO switch (I want to change aperture or ISO quickly). It opens a Canon menu instead. Every other option is disabled and only INFO switch is enabled under gain. Is this a known bug or am I missing something? Thank you :) The newest build is fire though! Thanks Danne!

Danne

Clean install, does the ISO button work then? If so, what causes the conflict? Try adding one thing at a time.

gabriielangel

I can confirm the behaviour here (May 04, April 30, April 29 builds).

If you set gain to INFO_switch or SET_switch (and disable the selected button in the other menus), then toggle until aperture is selected (press INFO or SET a few times, according to the button selected in gain):
When aperture is toggled, up / down varies the aperture.
When ISO is toggled, up goes to the single shooting / continuous shooting menu, and down does nothing.

The feature was working properly up to the 2022_Apr24 build.

Danne

Thanks for the reports. Narrowed down the issue. Added a fresh build to my first post.

clubsoda

Thanks Danne! New build fixed it :) Awesome!

Grognard

EOS M vs 5D Mark III

https://www.youtube.com/watch?v=VV7a5Ffz5s8

The 5D III is far better, but 1080p is bad compared to anamorphic or crop mode, even on the EOS M.

huyct666

Great build! Is there a way to get rid of Canon's black letterbox while recording? I'm an avid fan of the 3:2 aspect ratio; I shoot everything in 3:2 and would love the live view to be 3:2.

thanks


anto

I think the 5D3 shots are more contrasty, which makes them seem sharper than they are.

gabriielangel

Hello everyone, I just purchased a used EOS M and the Menu button doesn't work.
I updated the firmware to version 2.0.2 using the EOS Utility app, but the Menu button is still dead.
Has anyone else had this problem before?