Hypothesis about HDR

Started by bpv5P, April 03, 2017, 05:27:32 AM

bpv5P

Just as a hypothesis: now that some cameras can do 60fps in MLV, would it be possible to interleave exposures between frames, kinda like the old HDR video feature?
This could be an interesting project, since the resulting footage could be processed using the ZeroNoise technique (with software like HDRMerge, which is very slow, but still) to get 24-bit footage with better dynamic range and less noise... the footage would have a great SNR.

Now, the problem I see is this: in the old HDR feature, if I remember correctly, ML used the ISO setting to get the different exposures. That wouldn't be possible with ZeroNoise, since it uses something like noise averaging[1] to deal with the noise pattern, from what I understand.
So it would be necessary to change the shutter speed or the lens aperture instead (the latter only for non-manual lenses, if that's even possible in ML). The first would generate ghost images on fast-moving subjects, and the latter would make the feature useless for everyone using manual lenses.

Anyway, it could be an interesting idea for some developer. Just wanted to share.


[1] http://www.cambridgeincolour.com/tutorials/image-averaging-noise.htm
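
As a rough illustration of the averaging idea from [1] (just a numpy sketch with made-up numbers, not ML code): averaging N frames of the same static scene cuts the random noise by about sqrt(N), which is where the SNR gain would come from.

import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, (480, 640))          # hypothetical "true" image

# 16 noisy captures of the same static scene
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(16)]
avg = np.mean(frames, axis=0)

print(np.std(frames[0] - scene))   # ~0.050  (single-frame noise)
print(np.std(avg - scene))         # ~0.0125 (sqrt(16) = 4x less noise)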

a1ex

You gave me an idea: it *might* be possible to alternate a short exposure with a 360-degree one, therefore ending up with two adjacent exposures (in time). Something like this:


|------frame 1-----||-----frame 2------||-----frame 3------||-----frame 4------|
[...blanking....][s][looooooooooooooong][...blanking....][s][looooooooooooooong]...
|------------HDR frame 1---------------||-------------HDR frame 2--------------|


where "blanking" means the sensor is not capturing light, and "s" is a short exposure.

This would definitely result in ghosting on moving subjects, but at least the two "motion blurs" should be adjacent (with little or no gap between them).

That's just theory based on my current understanding of LiveView; I haven't tried it.
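
A minimal sketch of how the two adjacent exposures might be merged in post (function names and the clipping threshold are made up, and this assumes linear data with a known exposure ratio):

import numpy as np

def merge_pair(short_exp, long_exp, ratio, clip=0.95):
    """Merge one short/long pair into a single HDR frame.

    short_exp, long_exp: linear frames as float arrays in [0, 1]
    ratio: exposure ratio long/short (e.g. 8.0 for three stops)
    Where the long exposure clips, fall back to the scaled short one.
    """
    hdr = long_exp.copy()
    mask = long_exp >= clip                 # clipped highlights
    hdr[mask] = short_exp[mask] * ratio     # bring the short frame to the same scale
    return hdr

def merge_stream(frames, ratio):
    # frames alternate short, long, short, long, ... as in the diagram above
    return [merge_pair(frames[i], frames[i + 1], ratio)
            for i in range(0, len(frames) - 1, 2)]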

DeafEyeJedi

Holy moly this is tempting...
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

bpv5P

Seems an interesting idea, a1ex. I don't have the advanced programming skills to do it, but I can compile ML and run all the necessary tests on a 600D if you're willing to put some effort into it.
I think the ADTG research is pretty interesting, but the DR and SNR enhancement would be minimal. With this ZeroNoise approach the improvement would be pretty clear, I think. It may not be something you would use for everyday work, but it's certainly a great feature for people making low-budget films.
Thanks for all the work.

bpv5P

Actually, the 600D can't do 48fps at a usable resolution, but I could try to get a Mk III just to test it.

otherman

Dual ISO does the same thing without ghosting, am I correct?
24-bit HDR raw video? Is there anything on the market capable of this?
It seems a gymnastics thing, but... why not!
Port it to the EOS M please  :P

bpv5P

Dual ISO is not the same. The image has half the vertical resolution, it can generate FPN, it can make moiré even worse, and it needs interpolation to work at all.

The proposed solution would keep the full resolution, and the footage could be processed with ZeroNoise, which would result in better dynamic range and less noise (greater SNR).

You could compare still images: take a picture using dual ISO, then take two more pictures of the same scene with different exposures.
Use cr2hdr to process the dual ISO file, then use HDRMerge[1] to process the other two pictures. Open them in some raw processor and see the results for yourself: less noise, full resolution and better DR.

[1] https://jcelaya.github.io/hdrmerge/

stevefal

Quote from: a1ex on April 04, 2017, 09:58:44 PM
You gave me an idea
It's cool that motion blurs would be adjacent, but that's at the expense of desirable motion blur in that exposure in the first place. If the short exposure is, say, 90 degrees for a 2-stop recovery, that motion blur will be 45 degrees on output, or 1/4 the blur length. That means that any object seen from that exposure will have more of the strobe effect we usually don't want. That artifact might be more noticeable since those areas will have more defined edges.

I think the OP's idea, assuming alternating ISOs, is pretty interesting in that if each exposure is 360 degrees, the resulting HDR frames will have desirable 180 degree blur, with half the spatial error due to temporal offset between exposures - 1/60 vs 1/30s. And the long blur will help hide it.
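
Putting numbers on that (a trivial sketch; capture at 60 fps, pairs merged to 30p output):

def output_angle(capture_angle_deg, capture_fps, output_fps):
    # exposure time = (angle / 360) / fps, re-expressed as an angle at the output rate
    exposure = (capture_angle_deg / 360.0) / capture_fps
    return exposure * output_fps * 360.0

print(output_angle(90, 60, 30))    # 45.0  -- the 90-degree short exposure
print(output_angle(360, 60, 30))   # 180.0 -- a 360-degree exposure gives the normal look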
Steve Falcon


bpv5P

Quote from: a1ex on May 19, 2017, 10:01:05 PM
What about reconstructing the motion blur with temporal interpolation?

https://github.com/dthpham/butterflow
https://unix.stackexchange.com/questions/178503/ffmpeg-interpolate-frames-or-add-motion-blur

Wow, this "butterflow" seems great, thanks for sharing. I thought only AviSynth scripts were doing this in the open-source realm...
All these ideas seem interesting, but we have to take the processing time into account; let's not be too idealistic.
Another idea I had was to use something from the ADTG research and find ISO values that match the noise pattern, so there would be no need to change the shutter speed... if that's even possible.

Danne

This looks promising. Thanks for sharing the butterflow link.
Check some examples here.
https://github.com/dthpham/butterflow/blob/master/docs/Demonstrations.md

stevefal

Yes, optical flow can be used to simulate motion. Of course the simulation is based on a simplistic assumption of motion, not what really happened in the scene. For example, a light moving between two frames could actually be two alternating lights. The practical impact of such a model when applied to a subset of pixels is hard to predict, and image-dependent. Optical flow is also compute intensive.

Heh, if optical flow was perfect, it could be used to generate all the missing exposures in the original ISO flip-flop model, and eliminate temporal artifacts in the first place.

At 48/60fps, the ISO flip-flop method has its drawbacks, but the artifacts are less mysterious. Another neat benefit is that you essentially get two videos for one, both 24/30p and 180 degrees. Merging is not required, but could be used only when recovery is needed.

Or... I think a knock-out merge algorithm could use a motion-vector threshold to forfeit DR when it would produce a (significant) artifact. That way static scenes are 100% HDR, and less so as motion gets crazy.
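
A sketch of what that per-pixel knock-out could look like (names are hypothetical; the motion map could come from frame differences or optical-flow magnitude, and the threshold would need tuning):

import numpy as np

def knockout_merge(base, hdr, motion, threshold=0.02):
    """Blend per pixel between the plain frame and the HDR merge.

    base:   the normally exposed frame, float in [0, 1]
    hdr:    the merged HDR result on the same scale
    motion: per-pixel motion estimate (e.g. abs(frame_t - frame_t_minus_1))
    Static pixels get the full HDR merge; fast-moving ones keep base.
    """
    w = np.clip(1.0 - motion / threshold, 0.0, 1.0)   # 1 = static, 0 = moving
    return hdr * w + base * (1.0 - w)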

To me, all this new stuff is more interesting wrt supersampling than it is to slo-mo and rez-mania. This video drives it home: https://www.youtube.com/watch?v=t7N1BOqmVOw. Look at those 1080p shots from the Alexa.
Steve Falcon

Danne

Alternating ISO can also be handled directly, to address the usual halving of fps when transcoding/merging frames. Here is an example of 50fps coming out at 50fps, exported on the fly with the ffmpeg tblend filter (averaging consecutive frames). This workflow can be handled exactly the same with enfuse etc. Just process/merge frames 1+2, 2+3, 3+4 and so on, something like the sketch below.
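
Roughly like this, I assume (file names and codec are placeholders; tblend blends each frame with the previous one, so a 50 fps input stays 50 fps):

import subprocess

subprocess.run([
    "ffmpeg", "-i", "input_50fps.mov",
    "-vf", "tblend=all_mode=average",   # frame N averaged with frame N-1
    "-c:v", "prores_ks",                # placeholder intermediate codec
    "output_50fps.mov",
], check=True)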



stevefal

Simple averaging is not enough for quality HDR though, right?

Reading a bit, I see that "ghost removal" is a feature of some HDR stacking algorithms, including Photoshop's. Lots of discussion, but I'm not sure if there's open-source code for it.

https://www.google.com/search?q=opencv+hdr+ghost+removal
http://docs.opencv.org/3.1.0/d2/df0/tutorial_py_hdr.html
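
From that OpenCV tutorial, a minimal sketch for one exposure pair (file names are made up; AlignMTB handles alignment and MergeMertens does exposure fusion without needing exposure times):

import cv2
import numpy as np

stack = [cv2.imread("frame_short.png"), cv2.imread("frame_long.png")]

cv2.createAlignMTB().process(stack, stack)        # align the stack in place

fused = cv2.createMergeMertens().process(stack)   # float32 result, roughly [0, 1]
cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype("uint8"))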
Steve Falcon

Danne

Aligning is more useful than ghost removal imo. Check out the Hugin project.
Also check out the averaging quality when working with the tblend filter. It's always given me better results than enfuse, which needs its contrast and other tone-mapping settings tweaked to look good.

stevefal

For video, aligning would not be desired whenever unaligned frames are due to intended camera movement, e.g. panning.

I don't know why you'd want to average clipped highlights or noisy shadows into your final image. Maybe I don't understand the math, but I assume you only want to include pixels representing detail.
Steve Falcon

Danne

I want speed and a good looking image. If you can come up with something better I'd be more than happy to switch to that workflow.

stevefal

Quote from: stevefal on May 20, 2017, 10:36:05 PM
For video, aligning would not be desired whenever unaligned frames are due to intended camera movement, e.g. panning.
No, brain fart. You're right; in this context, alignment and ghost removal are both desirable for solving the ISO flip-flop motion artifacts.

However, since alignment would leave missing pixels at the trailing edge of the image, that would have to be accounted for.
Steve Falcon

Danne

Yes, smooth motion is problematic. It's hard work with HDR, but sometimes nice to have; 50 or 60 fps works OK, otherwise not so good. That butterflow code looks interesting. Worth a try.

bpv5P

How would alignment solve the problem if it's a motion blur issue?
For example, if you shoot one frame at 180 degrees and the other at 15 degrees, one will have a lot of motion blur and the other almost none. How would alignment solve that? It probably would not. We need interpolation to solve this, and butterflow seems nice...

Danne

Also check out Pixel Motion and Frame Mix in After Effects, and the recent minterpolate filter in ffmpeg.
Tried butterflow a little. It exports to mp4. Gotta do some more testing.
Frame Mix could be something good. I tried short clips and it blends frames, creating a nice motion blur.
My thinking now is to record at 60fps to reduce ghosting, then in post reduce the frame rate to 24 while adding motion blur with some nice algorithm, something like the sketch below.
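
An untested sketch of that (needs a recent ffmpeg; file names and parameters are made up): interpolate 60 fps up to 240 fps with motion compensation, average each run of 10 frames, then keep every 10th one, which lands on 24 fps with a long synthetic blur.

import subprocess

subprocess.run([
    "ffmpeg", "-i", "input_60fps.mov",
    "-vf", "minterpolate=fps=240:mi_mode=mci,"   # motion-compensated up-conversion
           "tmix=frames=10,"                     # average each run of 10 frames
           "framestep=10",                       # keep every 10th frame -> 24 fps
    "output_24fps.mov",
], check=True)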
