An 11-minute video to explain a feature that could have been explained in 30 seconds...
But yeah, I get your point, garry. I just don't see how ML can get much better; it's kind of hardware-limited at this point.
While we're at it, I've been testing frame interpolation with RIFE, and I immediately thought about how it could benefit low-framerate/high-resolution MLVs. For example: record 4K at 12fps, then post-process with RIFE to get 24fps.
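Just to illustrate the idea (this isn't the actual RIFE CLI; I'm assuming a hypothetical interpolate_middle() wrapper around the model, and that the frames have already been pulled out of the MLV), doubling 12fps footage to 24fps is basically inserting one synthesized frame between every pair of originals:

```python
def interpolate_middle(frame_a, frame_b):
    """Hypothetical wrapper: run the RIFE model to synthesize the frame
    halfway (t=0.5) between frame_a and frame_b."""
    raise NotImplementedError("plug in your RIFE inference code here")

def double_framerate(frames_12fps):
    """Turn a 12fps frame sequence into 24fps by inserting one
    interpolated frame between every consecutive pair of originals."""
    out = []
    for a, b in zip(frames_12fps, frames_12fps[1:]):
        out.append(a)
        out.append(interpolate_middle(a, b))
    out.append(frames_12fps[-1])  # last original frame has no successor
    return out
```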
Another idea: people are using Canon cameras to build datasets for training machine learning models. Magic Lantern could help annotate those datasets by adding metadata selected on the fly. For example, you could input 10 keywords; after each photograph is taken, ML would ask which keyword to apply and write it to the metadata automatically, so it can later be parsed on the computer.
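On the computer side, collecting those annotations would be straightforward. A rough sketch, assuming ML wrote the keyword into a standard tag like IPTC Keywords and that exiftool is installed:

```python
import json
import subprocess
from pathlib import Path

def read_keywords(folder):
    """Collect filename -> keywords for every CR2 in a folder,
    assuming the camera wrote them into the IPTC Keywords tag."""
    dataset = {}
    for img in Path(folder).glob("*.CR2"):
        # exiftool -j prints the requested tags as JSON
        out = subprocess.run(
            ["exiftool", "-j", "-Keywords", str(img)],
            capture_output=True, text=True, check=True,
        )
        tags = json.loads(out.stdout)[0]
        kw = tags.get("Keywords", [])
        # exiftool returns a string for one keyword, a list for several
        dataset[img.name] = kw if isinstance(kw, list) else [kw]
    return dataset
```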
I've also been testing video super-resolution, but there isn't much of a future there for amateurs, because networks like EDVR/TecoGAN require too much VRAM. Only ESRGAN is feasible at the moment, but it has no temporal consistency, so it's not really usable for video (although the results on photographs are amazing). There's also denoising, where DPIR works very well.
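To make the temporal-consistency point concrete: ESRGAN upscales every frame independently, so a naive video pipeline looks like the sketch below (esrgan_upscale() is a hypothetical stand-in for whatever single-image implementation you use). Because nothing links frame N to frame N+1, fine detail gets hallucinated slightly differently on each frame, which shows up as flicker.

```python
import cv2

def esrgan_upscale(frame):
    """Hypothetical stand-in for a single-image ESRGAN inference call."""
    raise NotImplementedError("plug in your ESRGAN inference code here")

def upscale_video(in_path, out_path, scale=4):
    """Naive per-frame super-resolution: each frame is upscaled on its own,
    with no information shared between neighbouring frames (hence the flicker)."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * scale
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * scale
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    ok, frame = cap.read()
    while ok:
        writer.write(esrgan_upscale(frame))
        ok, frame = cap.read()
    cap.release()
    writer.release()
```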
So, indeed, machine learning software can really help work around the hardware limitations of (old) Canon cameras.