HDR Video Workflow (Interframe script) questions

Started by Rush, January 04, 2013, 12:02:45 AM


Rush

Hello!
First of all - thanks for the Interframe script! (v.051) It works great and the results are superb! I was really impressed by how easy it is to use.

I have some questions:

1) Why is the luma range compressed from 0-255 to 16-235? Is that OK? I think the full range can store more image information than 'gray blacks and gray whites'.

2) Should I replace the bundled "enfuse" with the newer one from the tool's official site?

3) How can I make this script work without manually renaming the source video to "RAW.MOV", and how can I do batch conversions?

4) How can I make it multithreaded? (It uses only one thread and runs slowly.)

p.s. Who made this script? He deserves many thanks!
Greetings from Russia!

Yoshiyuki Blade

1) The range compression happens because it's the standard for almost all the video you see. Upon playback with a video player, you shouldn't see gray blacks and gray whites; it should scale to the full range automatically and look as intended. As a matter of fact, if you play the original MOV files, the shadows and highlights will actually be clipped since the player assumes TV range. That little script compresses the range for you. Yeah, it sucks that we aren't utilizing the full range all the time, but that's just how things are atm, along with having other junk like interlacing and chroma subsampling. :) The latter is understandable since it saves a little bit of bandwidth, but video game footage tends to suffer most because of it.
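The mapping being described is a simple linear rescale. A minimal Python sketch (my own illustration, not code from the script) of the full-range to TV-range luma conversion and its inverse, which a player effectively applies on playback:

```python
def full_to_tv(y):
    """Map a full-range luma value (0-255) to TV/limited range (16-235)."""
    return 16 + round(y * 219 / 255)

def tv_to_full(y):
    """Inverse mapping, roughly what a video player does on playback."""
    return round((y - 16) * 255 / 219)

print(full_to_tv(0), full_to_tv(255))   # blacks land on 16, whites on 235
```

This also shows why playing the original full-range MOV in a player that assumes TV range clips the extremes: anything below 16 or above 235 is treated as out of range.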

I'm going to investigate this a little later, but I'm curious as to why the script does this operation *before* it enfuses everything. Unless Interframe only operates in the TV range, this operation should be saved for last.

edit: Interframe seems to operate just fine without having to "pre-crush" the levels. I think it could use an update:

1) Decode the video
2) Separate the odd and even frames
3) Double the FPS with Interframe
4) Convert to RGB with rec.601 coefficients (or rec.709, depending on which camera is used I guess)
5) Extract the frames as image files (by default it's jpg, but you can change it to tif or something for a lossless workflow)
6) Enfuse frames
7) Return the image files as frames
8) Convert back to YV12 with rec.709 coefficients

I think this would make better use of the available data.
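On step 4: the matrix choice only changes the luma weighting of the RGB channels (the "pc." variants in AviSynth additionally keep the full 0-255 range). A small Python illustration of the rec.601 vs rec.709 coefficients, just to show why picking the wrong one shifts brightness and tint:

```python
# Standard luma coefficients (assumed from the BT.601/BT.709 specs,
# not taken from the script itself).
REC601 = (0.299, 0.587, 0.114)
REC709 = (0.2126, 0.7152, 0.0722)

def luma(r, g, b, coeffs):
    """Weighted luma of an RGB triple under a given matrix."""
    kr, kg, kb = coeffs
    return kr * r + kg * g + kb * b

# Pure green contributes noticeably more luma under rec.709:
print(luma(0, 255, 0, REC601))
print(luma(0, 255, 0, REC709))
```

So decoding with one matrix and re-encoding with the other (steps 4 and 8) must be deliberate, or colors will drift.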

Rush

Thank you very much for your response!
Can you post these tweaks here - what should I change?
Greetings from Russia!

Yoshiyuki Blade

I made these changes to hdr_split.avs, but I'm not sure how it all ties into the entire workflow. I don't use the entire automated process and usually stop after the "C" frames are produced, so use it with caution.

SetMemoryMax(1024)
Import(ScriptDir()+"..\Avisynth-plugins\InterFrame.avsi")
LoadPlugin(ScriptDir()+"..\Avisynth-plugins\ffms2.dll")
Import(ScriptDir()+"..\Avisynth-plugins\FFMS2.avsi")
LoadPlugin(ScriptDir()+"..\Avisynth-plugins\RemoveGrainSSE3.dll")
LoadPlugin(ScriptDir()+"..\Avisynth-plugins\mvtools2.dll")

A = FFVideoSource("..\RAW.MOV")
A = selecteven(A)             # select even or odd frames and interpolate them
A = assumefps(A, 12000, 1001)
A = InterFrame(A, NewNum=24000, NewDen=1001, GPU=false, FlowPath=ScriptDir()+"..\Avisynth-plugins\")
A = trim(A, 1, 0)             # shift A by one frame to keep it in sync with B
A = ConvertToRGB(A, matrix="pc.601")
A = ImageWriter(A, "..\frames\A", type = "jpg")

B = FFVideoSource("..\RAW.MOV")
B = selectodd(B)              # select even or odd frames and interpolate them
B = assumefps(B, 12000, 1001)
B = InterFrame(B, NewNum=24000, NewDen=1001, GPU=false, FlowPath=ScriptDir()+"..\Avisynth-plugins\")
B = ConvertToRGB(B, matrix="pc.601")
B = ImageWriter(B, "..\frames\B", type = "jpg")

return Interleave(A,B)


Looking at this had me thinking of a few things, like outputting to 16 bits and doing manual adjustments in Photoshop from there. The final result is usually pretty flat, so further adjustments would just destroy what few bits there are in 8-bit. That's a little more advanced though. It also had me wondering if there are other ways to simulate HDR, like capturing at 30 fps, merging to 15 fps, then interpolating to 24 fps. Interframe has also been updated quite frequently, so I wonder if it does things better or not. This will be fun to experiment with.
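For anyone curious what the merging step actually does conceptually: enfuse follows the Mertens-style exposure fusion idea, weighting each pixel by how well exposed it is. A toy Python sketch of just the "well-exposedness" weight for two grayscale exposures (my simplification; real enfuse also uses contrast and saturation weights and blends multi-scale):

```python
import math

def well_exposedness(v, sigma=0.2):
    """Weight pixel values near mid-gray highest (Mertens et al. style)."""
    return math.exp(-((v / 255 - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(dark, bright):
    """Per-pixel exposure-weighted average of two exposures of one frame."""
    out = []
    for d, b in zip(dark, bright):
        wd, wb = well_exposedness(d), well_exposedness(b)
        out.append((wd * d + wb * b) / (wd + wb))
    return out

# A blown-out highlight (255) paired with a darker twin (180):
# the result is pulled strongly toward the better-exposed value.
print(fuse([255], [180]))
```

This is why the flickering odd/even exposures recover highlight and shadow detail once fused: each pixel leans on whichever exposure captured it best.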


ilguercio

Is there any new method of converting flickering video into an HDR sequence?
What is the best option so far?