Frame Stacking Noise Reduction used on Video
Note: Google makes a preview of the videos in the links with YouTube-like compression.
To get the full ProRes 422 quality (necessary to evaluate the noise processing), you can download the files.
I have been running a lot of tests lately to devise an optimal pipeline for a project. Noise being something to always take into consideration, I was wondering if Frame Stacking, used regularly when taking stills, could be applied to video. There is a built-in script in MLV App based on this idea. I think the original intent was to use the method mainly with frame burst sequences, but I decided to give it a try. Although the results are far from perfect, I thought it was still worth posting, as it can still be used creatively if you take some precautions. Also, maybe someone reading this will have a stroke of genius and figure out how to make it more "general-purpose".
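For context on why stacking helps at all: averaging N aligned frames should reduce uncorrelated noise by roughly a factor of the square root of N, so a 5-frame stack (the window this script uses) cuts the noise amplitude by about sqrt(5) ≈ 2.2, without the detail loss you get from spatial filtering. That is a rough rule of thumb, not a measurement from these tests.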
The script is called TIF_CLEAN.command and can be found in MLVApp's export settings:

The process exports your .mlv file as a .tiff sequence, which is then post-processed by the script. A ProRes .mov file is also created from the TIFFs.
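For reference, the final encode is conceptually similar to feeding the processed TIFFs to ffmpeg yourself. This is only an illustration; the frame naming pattern and the ProRes profile are my assumptions, not necessarily what the script actually uses:
# Assumes frames named frame_000000.tiff, frame_000001.tiff, ... at 23.976 fps
ffmpeg -framerate 24000/1001 -i frame_%06d.tiff -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le output.mov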
On first run, Hugin, the program this script is built around, will be installed along with its dependencies. The download and install process should take less than 20 minutes. Unfortunately, in my case, partly because I am using an older version of OSX, I ran into permission problems and a lot had to be installed manually. There are plenty of cues appearing in the terminal window for you to figure it out, IF you have done those kinds of installs before... So it took me about an hour to get it running. If you happen to be terminal- or command-line-averse, save this for a day when you have plenty of spare time.
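If the automated install fails, a manual route via Homebrew may work; this is only a guess at what it could look like, and both the cask name and the path to the bundled tools are assumptions that can differ between Hugin versions and OS versions:
brew install --cask hugin
# align_image_stack and enfuse normally ship inside the app bundle on macOS, e.g.:
/Applications/Hugin/Hugin.app/Contents/MacOS/align_image_stack --help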
Some examples (video file links after the images; all shot with the EOS-M, 23.976 fps, 1/48th shutter).
In order to highlight the strengths and weaknesses of the process, I went a little outside of what would normally be done: I exposed at least 2 stops below optimal and then added 2 stops back in MLV App, which makes the noise much more visible.
Being able to record underexposed (and knowing you can deal with some of the noise later) has its advantages: it allows you to record higher resolutions in 14-bit almost continuously, keep more detail in the shadows, keep highlights fully intact, etc. Also, when shooting in dimly lit environments such as churches and concert halls, a tiny, inconspicuous EOS-M that fits on a small gimbal can be preferable to a heavier full-frame camera.
Example 1: Slow Pan, 1 axis Pan, Tripod EF-M 32mm f1.4 @ f2.2, ISO 100

Areas of interest are marked with the green arrows. We have shadows, an OK-ish exposure, and underexposure.

With 2 Stops added (The noise is clearly visible to the right of the image.)

After processing. The noise is completely gone on the recycling bin, and only a little movement is left in the lifted shadows on the right of the image.
Original Video:
https://drive.google.com/file/d/1lsmxm3iyz4rUvq5QkqQcVY2XobWURuGb/view?usp=sharing
Processed Video:
https://drive.google.com/file/d/1n6rY61uKT1Fc-6CLzxsnWyZxfR1ho9J-/view?usp=sharing
Example 2: Slow Pan, 2 axis Pan, Tripod EF-M 32mm f1.4 @ f2.2, ISO 100

Original.

2 Stops Added.

Processed. See how the details in the hair and on the shirt are preserved, and the noise in the leaves is almost gone.

Smooth Aliasing 3 pass added on top for good measure. This adds a motion blur to the footage, which could be helpful in some cases, such as the next examples.
Original Video:
https://drive.google.com/file/d/1AqdLu6tQyxHspNqyFfKuEKqDb7xg8uGN/view?usp=sharing
Processed Video:
https://drive.google.com/file/d/1wVQbrAL9cuOreMxAcWJaWgW4b0gLdVHY/view?usp=sharing
Processed with Smooth Aliasing 3 pass added:
https://drive.google.com/file/d/1D1X3WZ4V_TZUJ7CfXBX4YswOojvmTbWT/view?usp=sharing
Example 3: Fast Motion, All Axes, Handheld EF-M 15-45mm @ 15mm f3.5, ISO 800

After a few exchanges with Danne, it became obvious that this method struggles when it has to align images with movement on several axes at the same time. So I shot this one handheld, but with a stabilized lens. One thing to note is that in its original form, the script fails silently when images cannot be aligned properly: you end up with a video containing a frozen frame where the error occurred (the alignment error tolerance is 3 pixels). In order to give it a larger margin, I edited the script so that it has a 100-pixel tolerance, to allow plenty of movement. The following options need to be added to align_image_stack (-t 100 -g 20). You end up with:
align_image_stack --use-given-order -t 100 -g 20 -a ...
The -g option uses a 20x20 grid of control points (as opposed to the default 5x5) so that, hopefully, localized movement does not throw off the alignment of the whole image (more apparent in Example 4).
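For anyone who wants to experiment outside MLV App, here is a rough illustration of what a sliding-window stack looks like using align_image_stack and enfuse. It is only a sketch of the idea, not the actual TIF_CLEAN.command code; the frame naming and the use of enfuse's default blending are my assumptions:
#!/bin/bash
# For each frame, align it with its 2 neighbours on each side, then fuse the
# 5 aligned frames into one output frame. Assumes frames named frame_000000.tiff, ...
FRAMES=(frame_*.tiff)
N=${#FRAMES[@]}
for ((i=2; i<N-2; i++)); do
    align_image_stack --use-given-order -t 100 -g 20 -a aligned_ \
        "${FRAMES[i]}" "${FRAMES[i-2]}" "${FRAMES[i-1]}" "${FRAMES[i+1]}" "${FRAMES[i+2]}"
    # Blend the aligned frames; with identical exposures this behaves roughly like an average.
    enfuse -o "stacked_$(printf '%06d' "$i").tif" aligned_*.tif
    rm -f aligned_*.tif
done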

Processed. As you can see, when an object moves fast enough to cross the 5-frame boundary within a sampling period, we end up with ghosting (the script stacks a total of 5 frames: the current one, 2 before and 2 after).

Processed with Smooth Aliasing 1 pass added. This adds some motion blur, but not enough to hide the ghosting.

Here, I filmed the same sequence but without a car in the middle of the frame. You can still see some ghosting on the distant cars (in the video), but the bulk of the image isn't too affected.
Original Video:
https://drive.google.com/file/d/12ScYGKewG3GkjNaP_xzQ2JEJ9X0FLyqT/view?usp=sharing
Processed Video:
https://drive.google.com/file/d/1n3_ORRINhAWGQ8vbx9NbhSWLzjD7BsK2/view?usp=sharing
Processed with Smooth Aliasing 1 pass added:
https://drive.google.com/file/d/1VL225cvRZLfh22tLGQMQSy5XWofCMVeK/view?usp=sharing
Sequence without Car:
https://drive.google.com/file/d/17LyqUbVZk0YpyDo7yq76W3TnQhq8KnHc/view?usp=sharing
Example 4: Fast Motion, All Axes, Handheld EF-M 15-45mm @ 15mm f3.5, ISO 100

Original. To find out if night time had anything to do with it, I reproduced example 3 during the day.

Processed with options -t 100 -g 1. The ghosting is still there, and when the car passes by, the whole image shakes as if there were an earthquake!

Processed with options -t 100 -g 35. This processes using a 35x35 grid (as opposed to the 1x1 of the previous example). The image is a lot more stable, but still not stable enough. A finer grid would be required, and the ghosting still needs to be addressed.
Original Video:
https://drive.google.com/file/d/1Er3r13Vu9n-d1SNKwcvYNESK1MCuno1h/view?usp=sharing
Processed Video 1:
https://drive.google.com/file/d/1qW6M0MEFI8TZE2eq9O2sdP9N4mcJ92ZR/view?usp=sharing
Processed Video 2:
https://drive.google.com/file/d/1qW6M0MEFI8TZE2eq9O2sdP9N4mcJ92ZR/view?usp=sharing
Example 5: No Motion, Tripod EF-M 32mm f1.4 @ f2.0, ISO 3200

Original. Here the only light source is the moon, at ISO 3200. This example would benefit from more frames in the stack, and maybe from Dual ISO.

Processed. The noise has been reduced substantially. There is still some movement in the noise, but it shows how powerful this stacking method can be under ideal conditions.
Original Video:
https://drive.google.com/file/d/1FrkHYPpgIQbZ_HL4DY1mPzYivtqZQ6af/view?usp=sharing
Processed Video:
https://drive.google.com/file/d/17lLcWzV-Lu-xLSV4MkfN8oyWMZOj_EzK/view?usp=sharing
Conclusion
The way it is now, if planned accordingly, this method could allow filming still subjects such as flowers and wildlife (as long as it isn't too windy, as too much movement creates artifacts), statues, landscapes, architecture, etc. Using a tripod (or maybe a gimbal) is mandatory.
This is beyond my current technical abilities, but maybe someone with more skills can find a clever way to use the deghosting_mask tool that ships with Hugin; a rough starting point is sketched below. Using a much finer grid would also help a lot (I could not go beyond 35x35 and still get a complete video sequence). If there were a way for the system to detect motion and avoid blending frames there, that would also help.
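Purely as a guess at how deghosting could slot into the pipeline (I have not tested this; the mask file naming and the idea of copying the masks into the alpha channel with ImageMagick before enfuse are assumptions):
# After align_image_stack has produced aligned_0000.tif ... aligned_0004.tif:
deghosting_mask -o mask_ aligned_*.tif
# deghosting_mask writes one grayscale mask per input frame (check the actual file names it produces).
for i in 0 1 2 3 4; do
    convert "aligned_000${i}.tif" "mask_${i}.tif" -alpha off -compose CopyOpacity -composite "masked_000${i}.tif"
done
# enfuse should then ignore the masked-out (transparent) areas when blending.
enfuse -o stacked.tif masked_*.tif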
Also, this method isn't for everyone: with a six-core 3.33 GHz machine, it takes 6 minutes per second of footage to process at 2.5K resolution.
I have seen similar results (minus the major artifacts) with Topaz Video Enhance AI at 4-6 minutes per frame when using the CPU.
As some of you know, when using temporal noise reduction such as Neat Video or the NR built into DaVinci Resolve Studio, some details are lost. This is why Frame Stacking is interesting: fine details are preserved a lot more.
So, this is what has been tested so far; anyone reading this with the same idea will have a few examples to start with.

References:
https://wiki.panotools.org/index.php?title=Align_image_stack&oldid=15916
https://manpages.debian.org/testing/hugin-tools/deghosting_mask.1.en.html
http://enblend.sourceforge.net/enfuse.doc/enfuse_4.0.0.pdf
Example of deghosting_mask applied on a still panorama:
https://www.miltonpanoramic.co.uk/deghosting.php
.mlv file for Example 4:
https://bit.ly/3yvi1yf