Magic Lantern Forum

Developing Magic Lantern => General Development => Topic started by: bpv5P on April 03, 2017, 05:27:32 AM

Title: Hypothesis about HDR
Post by: bpv5P on April 03, 2017, 05:27:32 AM
Just a hypothesis: now that some cameras can do 60 fps MLV, would it be possible to interleave different exposures between frames, kind of like the old HDR video feature?
This could be an interesting project, since the resulting footage could be processed with the ZeroNoise technique (using software like HDRMerge - very slow, but still) to get 24-bit footage with better dynamic range and less noise... the footage would have a great SNR.

Now, the problem I see: in the old HDR feature, if I remember correctly, ML used different ISO settings to get the different exposures. That wouldn't work with ZeroNoise, since it relies on something like noise averaging[1] to cancel out the noise, from what I understand.
So, it would be necessary to change the shutter speed or the lens aperture (for non-manual lenses - if that's even possible in ML). The first would generate ghost images on fast-moving objects, and the latter would make the feature useless for everyone using manual lenses.
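The noise-averaging idea from [1] can be illustrated with a few lines of numpy (just a sketch of the statistics, not ML code; the signal level and noise sigma are made up):

```python
import numpy as np

# Stacking N frames of the same scene reduces random noise by ~sqrt(N).
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)  # ideal, noise-free signal
frames = [scene + rng.normal(0, 8.0, scene.shape) for _ in range(16)]

single_noise = np.std(frames[0] - scene)         # roughly the per-frame sigma, ~8
stacked = np.mean(frames, axis=0)                # average of 16 frames
stacked_noise = np.std(stacked - scene)          # ~8 / sqrt(16), i.e. ~2

print(round(single_noise, 1), round(stacked_noise, 1))
```

This only works if the frames differ by random noise alone, which is why a fixed-pattern difference between the exposures (like an ISO-dependent noise pattern) would break it.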

Anyway, it could be an interesting idea for some developer. Just wanted to share.


[1] http://www.cambridgeincolour.com/tutorials/image-averaging-noise.htm
Title: Re: Hypothesis about HDR
Post by: a1ex on April 04, 2017, 09:58:44 PM
You gave me an idea: it *might* be possible to alternate a short exposure with a 360-degree one, therefore ending up with two adjacent exposures (in time). Something like this:


|------frame 1-----||-----frame 2------||-----frame 3------||-----frame 4------|
[...blanking....][s][looooooooooooooong][...blanking....][s][looooooooooooooong]...
|------------HDR frame 1---------------||-------------HDR frame 2--------------|


where "blanking" means the sensor is not capturing light, and "s" is a short exposure.

This would definitely result in ghosting on moving subjects, but at least the two "motion blurs" should be adjacent (with little or no gap between them).

That's just theory based on my current understanding of LiveView; I haven't tried it.
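In numpy terms, merging the short and long sub-exposures into one HDR frame might look something like this (just a sketch in the spirit of ZeroNoise: keep the long exposure where it isn't clipped, fall back to the scaled short exposure in the highlights; the EV gap and clip point are made-up numbers):

```python
import numpy as np

STOPS = 3     # assumed EV gap between the short and the 360-degree exposure
WHITE = 4095  # assumed 12-bit clipping point

rng = np.random.default_rng(1)
radiance = rng.uniform(0, 20000, (32, 32))          # "true" scene radiance
long_exp = np.clip(radiance, 0, WHITE)              # clips in the highlights
short_exp = np.clip(radiance / 2**STOPS, 0, WHITE)  # darker, rarely clips

# Use the long exposure where valid; recover clipped areas from the short one.
clipped = long_exp >= WHITE
hdr = np.where(clipped, short_exp * 2**STOPS, long_exp)
```

With noise-free inputs like these, the merged frame recovers the scene radiance exactly; with real sensor data, the ghosting between the two sub-exposures would show up precisely at those clipped-highlight boundaries.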
Title: Re: Hypothesis about HDR
Post by: DeafEyeJedi on April 04, 2017, 10:16:05 PM
Holy moly this is tempting...
Title: Re: Hypothesis about HDR
Post by: bpv5P on April 05, 2017, 02:04:44 AM
Seems an interesting idea, a1ex. I don't have the advanced programming skills to do it, but I can compile ML and run all the necessary tests on the 600D if you're willing to put some effort into it.
I think the ADTG research is pretty interesting, but the DR and SNR enhancement would be minimal. With this ZeroNoise approach the enhancement would be quite clear, I think. It may not be something you'd use for everyday work, but it's certainly a great feature for people doing low-budget film.
Thanks for all the work.
Title: Re: Hypothesis about HDR
Post by: bpv5P on April 05, 2017, 02:35:18 AM
Actually, the 600D can't do 48 fps at a usable resolution, but I could try to get a MkIII just to test it.
Title: Re: Hypothesis about HDR
Post by: otherman on April 05, 2017, 09:25:07 AM
Dual ISO does the same thing without ghosting, am I correct?
24-bit HDR raw video? Is there anything on the market capable of this?
It seems a gimmicky thing, but... why not!
Port it to the EOS M, please  :P
Title: Re: Hypothesis about HDR
Post by: bpv5P on April 05, 2017, 11:09:44 AM
Dual ISO is not the same: the image has half the resolution, it can generate FPN, it can make moiré even worse, and it needs interpolation to work at all.

The proposed solution would keep the full resolution, and could be processed with ZeroNoise, which would result in better dynamic range and less noise (greater SNR).

You could compare still images: take a picture using Dual ISO, then take two more pictures of the same scene with different exposures.
Use cr2hdr to process the Dual ISO file, then use HDRMerge[1] to process the other two pictures. Open them in a raw processor and see the results for yourself: less noise, full resolution and better DR.

[1] https://jcelaya.github.io/hdrmerge/
Title: Re: Hypothesis about HDR
Post by: stevefal on May 19, 2017, 09:54:20 PM
Quote from: a1ex on April 04, 2017, 09:58:44 PM
You gave me an idea
It's cool that motion blurs would be adjacent, but that's at the expense of desirable motion blur in that exposure in the first place. If the short exposure is, say, 90 degrees for a 2-stop recovery, that motion blur will be 45 degrees on output, or 1/4 the blur length. That means that any object seen from that exposure will have more of the strobe effect we usually don't want. That artifact might be more noticeable since those areas will have more defined edges.

I think the OP's idea, assuming alternating ISOs, is pretty interesting in that if each exposure is 360 degrees, the resulting HDR frames will have desirable 180 degree blur, with half the spatial error due to temporal offset between exposures - 1/60 vs 1/30s. And the long blur will help hide it.
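The shutter-angle arithmetic above can be sanity-checked with a few lines (plain math, not ML code):

```python
def shutter_angle(exposure_s: float, fps: float) -> float:
    """Shutter angle in degrees for a given exposure time and frame rate."""
    return 360.0 * exposure_s * fps

capture_fps, output_fps = 48.0, 24.0

# A 90-degree exposure captured at 48 fps...
t_short = (90.0 / 360.0) / capture_fps
print(shutter_angle(t_short, output_fps))  # about 45 degrees on 24p output

# ...while a full 360-degree exposure at 48 fps...
t_full = 1.0 / capture_fps
print(shutter_angle(t_full, output_fps))   # about 180 degrees on 24p output
```

So alternating two 360-degree exposures at double the output frame rate is the only combination that lands on the "cinematic" 180 degrees after merging.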
Title: Re: Hypothesis about HDR
Post by: a1ex on May 19, 2017, 10:01:05 PM
What about reconstructing the motion blur with temporal interpolation?

https://github.com/dthpham/butterflow
https://unix.stackexchange.com/questions/178503/ffmpeg-interpolate-frames-or-add-motion-blur
Title: Re: Hypothesis about HDR
Post by: bpv5P on May 19, 2017, 10:20:46 PM
Quote from: a1ex on May 19, 2017, 10:01:05 PM
What about reconstructing the motion blur with temporal interpolation?

https://github.com/dthpham/butterflow
https://unix.stackexchange.com/questions/178503/ffmpeg-interpolate-frames-or-add-motion-blur

Wow, this "butterflow" seems great, thanks for sharing. I thought only AviSynth scripts were doing this in the open-source realm...
All these ideas seem interesting, but we have to take the processing time into account; let's not be too idealistic.
Another idea I had was to use something from the ADTG research and find ISO values that match the noise pattern, so there would be no need to change the shutter speed... if that's even possible.
Title: Re: Hypothesis about HDR
Post by: Danne on May 19, 2017, 10:46:31 PM
This looks promising. Thanks for sharing the butterflow link.
Check some examples here.
https://github.com/dthpham/butterflow/blob/master/docs/Demonstrations.md
Title: Re: Hypothesis about HDR
Post by: stevefal on May 19, 2017, 10:54:14 PM
Yes, optical flow can be used to simulate motion. Of course the simulation is based on a simplistic assumption of motion, not what really happened in the scene. For example, a light moving between two frames could actually be two alternating lights. The practical impact of such a model when applied to a subset of pixels is hard to predict, and image-dependent. Optical flow is also compute intensive.

Heh, if optical flow were perfect, it could be used to generate all the missing exposures in the original ISO flip-flop model, and eliminate temporal artifacts in the first place.

At 48/60fps, the ISO flip-flop method has its drawbacks, but the artifacts are less mysterious. Another neat benefit is that you essentially get two videos for one, both 24/30p and 180 degrees. Merging is not required, but could be used only when recovery is needed.

Or... I think the knock-out merge algorithm would use a motion vector threshold to forfeit DR when it would produce a (significant) artifact. That way static scenes are 100% HDR, and less so as motion gets crazy.
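Something like this, in numpy terms (a sketch only: the motion field would come from an optical-flow step that isn't shown, and the threshold value is arbitrary):

```python
import numpy as np

MOTION_THRESHOLD = 2.0  # pixels/frame; arbitrary cutoff for "too much motion"

rng = np.random.default_rng(2)
base = rng.uniform(0, 1, (16, 16))    # 360-degree base exposure (normalized)
recov = rng.uniform(0, 1, (16, 16))   # short recovery exposure (normalized)
motion = rng.uniform(0, 5, (16, 16))  # per-pixel motion magnitude (stand-in)

# Recovery weight: full merge in static areas, none past the threshold.
w = np.clip(1.0 - motion / MOTION_THRESHOLD, 0.0, 1.0)
merged = w * 0.5 * (base + recov) + (1.0 - w) * base
```

The ramp in `w` avoids a hard visible seam between "merged" and "knocked-out" regions; fast-moving pixels simply keep the base exposure and its single-frame DR.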

To me, all this new stuff is more interesting wrt supersampling than it is to slo-mo and rez-mania. This video drives it home: https://www.youtube.com/watch?v=t7N1BOqmVOw. Look at those 1080p shots from the Alexa.
Title: Re: Hypothesis about HDR
Post by: Danne on May 19, 2017, 10:58:14 PM
Alternating ISO can also be handled directly, to address the usual halving of fps when transcoding/merging frames. Here is an example of 50 fps coming out at 50 fps, exported on the fly with the ffmpeg tblend filter (averaging consecutive frames). This workflow can be handled exactly the same with enfuse etc.: just process/merge frames 1+2, 2+3, 3+4 and so on.
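A numpy sketch of that pairwise scheme (each output frame averages two consecutive input frames, so an n-frame clip yields n-1 frames and the frame rate is essentially preserved; the 5-frame "clip" here is fake data):

```python
import numpy as np

def pairwise_average(frames):
    """Merge frames 1+2, 2+3, 3+4, ... like tblend's average mode."""
    return [0.5 * (a + b) for a, b in zip(frames, frames[1:])]

frames = [np.full((4, 4), float(i)) for i in range(1, 6)]  # fake 5-frame clip
blended = pairwise_average(frames)
print(len(blended), blended[0][0, 0])  # 4 frames; first = (1+2)/2 = 1.5
```

With alternating ISOs, every output frame then contains one low-ISO and one high-ISO sample, which is what makes the 50-in/50-out merge possible.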


Title: Re: Hypothesis about HDR
Post by: stevefal on May 20, 2017, 03:02:03 AM
Simple averaging is not enough for quality HDR though, right?

Reading a bit, I see that "ghost removal" is a feature of some HDR stacking algorithms, including Photoshop's. Lots of discussion, but I'm not sure if there's open source code for it.

https://www.google.com/search?q=opencv+hdr+ghost+removal
http://docs.opencv.org/3.1.0/d2/df0/tutorial_py_hdr.html
Title: Re: Hypothesis about HDR
Post by: Danne on May 20, 2017, 06:07:14 PM
Aligning is more useful than ghost removal, imo. Check out the Hugin project.
Also check out the averaging quality when working with the tblend filter. It has always given me better results than enfuse, which needs its settings tweaked for contrast and other tone-mapping parameters.
Title: Re: Hypothesis about HDR
Post by: stevefal on May 20, 2017, 10:36:05 PM
For video, aligning would not be desired whenever unaligned frames are due to intended camera movement, e.g. panning.

I don't know why you'd want to average clipped highlights or noisy shadows into your final image. Maybe I don't understand the math, but I assume you only want to include pixels representing detail.
Title: Re: Hypothesis about HDR
Post by: Danne on May 20, 2017, 10:38:58 PM
I want speed and a good looking image. If you can come up with something better I'd be more than happy to switch to that workflow.
Title: Re: Hypothesis about HDR
Post by: stevefal on May 20, 2017, 11:05:52 PM
Quote from: stevefal on May 20, 2017, 10:36:05 PM
For video, aligning would not be desired whenever unaligned frames are due to intended camera movement, e.g. panning.
No, brain fart. You're right: in this context, alignment and ghost removal are both desirable when used to fix the ISO flip-flop motion artifacts.

However, since alignment would leave missing pixels on the trailing boundary of the image, that would have to be accounted for.
Title: Re: Hypothesis about HDR
Post by: Danne on May 20, 2017, 11:20:04 PM
Yes, smooth motion is problematic. HDR is hard work, but sometimes nice to have; 50 or 60 fps works okay, otherwise not so good. That butterflow code looks interesting. Worth a try.
Title: Re: Hypothesis about HDR
Post by: bpv5P on May 25, 2017, 07:24:35 PM
How would alignment solve the problem if it's a motion-blur issue?
For example, if you shoot one frame at 180 degrees and the other at 15 degrees, one will have big motion blur and the other almost none. How would alignment solve this? It probably would not. We need interpolation to solve this, and butterflow seems nice...
Title: Re: Hypothesis about HDR
Post by: Danne on May 25, 2017, 07:29:31 PM
Also check out Pixel Motion and Frame Mix in After Effects, and the recently added minterpolate filter in ffmpeg.
Tried butterflow a little. It exports to mp4. Gotta do some more testing.
Frame Mix could be something good. Tried it on short clips and it blends frames, creating a nice motion blur.
My thinking now is to record at 60 fps to reduce ghosting, and in post reduce the frame rate to 24 while adding some nice motion-blur algorithm.
Title: Re: Hypothesis about HDR
Post by: 50mm1200s on March 18, 2018, 07:48:33 PM
Bumping. Nice idea.