HDR Video workflow error using Wine on Mac

Started by helloiamrory, January 12, 2013, 09:19:12 AM


helloiamrory

I am using the user-friendly Interframe workflow found here:

http://wiki.magiclantern.fm/userguide#hdr_video

I was advised to use Wine in another topic on here by one of the forum admins.

I find that using Wine does not work. I get error messages, as follows:

VirtualDub Error
AVI Import Filter error: (Unknown)(80040154)

and once I hit OK for that, I then get

VirtualDub Error:

Cannot open file "C000000.jpg": File not found.

And then nothing happens

cheers


deleted.account

The AviSynth in the zip is version 2.6.0.

Try this download for the 2.5.8 version:

http://sourceforge.net/projects/avisynth2/files/AviSynth%202.5/AviSynth%202.5.8/Avisynth_258.exe/download

The AviSynth SourceForge site is a freaking mess: 'Download Latest Version 110525' is version 2.6, not 2.5.

I think that's the problem you're having; you need AviSynth 2.5.

Also, the hdr-split script still has a totally unnecessary ConvertToRGB followed by a convert back to YV12: two color space conversions at 8bit integer that are pointless. The original MOVs are already YV12; that's what FFVideoSource hands to the script anyway.
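Schematically, the redundant part looks like this (just a sketch; the plugin load and filename are placeholders):

LoadPlugin("ffms2.dll")        # FFMS2, assumed installed / autoloaded
FFVideoSource("MVI_0001.MOV")  # hands the script YV12 directly
ConvertToRGB()                 # pointless 8bit conversion no.1
ConvertToYV12()                # pointless 8bit conversion no.2
# the clip was YV12 to begin with, so both conversions can simply be deleted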

Yoshiyuki Blade

Quote from: y3llow on January 14, 2013, 11:00:05 PM
Also, the hdr-split script still has a totally unnecessary ConvertToRGB followed by a convert back to YV12: two color space conversions at 8bit integer that are pointless. The original MOVs are already YV12; that's what FFVideoSource hands to the script anyway.

I think the author's intent with that was to "crush" the levels to TV range, which is standard for YV12. By default, the video recorded by Canon cameras is full range, but in the YV12 space. But yeah, I mentioned in another thread that it's unnecessary; the Interframe filter will work fine without crushing the levels first, so that step can be saved for last.

deleted.account

There's no reason to do a color space conversion and lose quality just to scale luma; it can be done in YV12 with a PC-to-TV type levels adjustment.

Regarding Canon MOVs: yes, they are full range, but JFIF, so chroma has been normalised over the full range as well, in camera, before it's encoded to h264.

It's not necessary to rescale luma at all for the purposes of merging.

I'd suggest doing all the Interframe stuff, then using ConvertToRGB(matrix="PC.601") for the T2i etc. and PC.709 for the 5D etc.; best to use MediaInfo to establish which. That gives YCC levels in RGB.
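For example (a sketch only; the filename and InterFrame settings are placeholders, FFMS2 and InterFrame assumed installed):

FFVideoSource("MVI_0001.MOV")   # YV12 straight from the decoder, levels untouched
InterFrame()                    # interpolate first, still in YV12
ConvertToRGB(matrix="PC.601")   # T2i etc.; use matrix="PC.709" for 5D-class bodies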

Rescaling luma at 8bit integer, especially if done poorly, will introduce quantization error in the RGB; evidence of that would be a combed, spiky histogram when viewing the output image frame.


Yoshiyuki Blade

I think we are in full agreement that converting to RGB and back to YV12, before Interframe is called, is unnecessary. However, the levels do have to be crushed eventually when converting back to YV12, preferably at the very end of the chain (like in the hdr_join script).

Quote from: y3llow on January 15, 2013, 09:44:50 AM
Rescaling luma at 8bit integer, especially if done poorly, will introduce quantization error in the RGB; evidence of that would be a combed, spiky histogram when viewing the output image frame.
That's normal, yeah? Since almost all standard video under the sun is TV range, it must be scaled back to full range upon playback, which introduces those combs on the histogram. I can open up a random Blu-ray stream, take a screenshot and paste it into Photoshop, and there will be little spikes across the histogram.

Quote from: y3llow on January 15, 2013, 09:44:50 AM
It's not necessary to rescale luma at all for the purposes of merging.

I'd suggest doing all the Interframe stuff, then using ConvertToRGB(matrix="PC.601") for the T2i etc. and PC.709 for the 5D etc.; best to use MediaInfo to establish which. That gives YCC levels in RGB.
Yeah, in this thread, I removed the little back-and-forth conversion and it works just fine.

Quote from: y3llow on January 15, 2013, 09:44:50 AM
There's no reason to do a color space conversion and lose quality just to scale luma; it can be done in YV12 with a PC-to-TV type levels adjustment.
PC-to-TV adjustments do virtually the same thing and crush the levels too, at least in AviSynth. The filter ColorYUV(levels="pc->tv") crushes the levels to TV range without dealing with an RGB conversion, but it's not aware of what matrix is used. All in all though, as we established, this step is totally unnecessary anyway.

deleted.account

Quote from: Yoshiyuki Blade on January 15, 2013, 02:58:08 PM
That's normal, yeah? Since almost all standard video under the sun is TV range. I can open up a random Blu-ray stream, take a screenshot and paste it into Photoshop.

I'm not talking about Blu-ray or DVD or other cameras; I'm talking about JFIF as used by a Canon T2i, for example. Check out Poynton's comments about JFIF, maybe.

Quote
it must be scaled back to full range upon playback

Yes, absolutely, and that's what the fullrange h264 VUI option flag set 'on' in Canon MOVs is for: to force a rescale of levels at playback or transcode, as long as the decompressing codec honours the flag, like QT, ffmpeg etc. But that's playback, not an intermediate step to RGB for processing to then go back to 4:2:0.

Quote
However, the levels do have to be crushed eventually when converting back to YV12, preferably at the very end of the chain (like in the hdr_join script).

Well, I'd disagree; would you mind explaining why, in the case of this merge script and this source, you think they must?

Perhaps try it: take a Canon MOV that used an ITU BT601 color matrix and do a PC.601 conversion to RGB; then try a ColorYUV(levels="pc->tv") first, followed by a Rec601 conversion to RGB, both in AviSynth, and look at the histograms. :-) Which do you prefer?
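Something like this makes the A/B easy (a sketch; the filename is a placeholder and FFMS2 is assumed):

src = FFVideoSource("MVI_0001.MOV")                              # full range YV12 off the card
a = src.ConvertToRGB(matrix="PC.601")                            # full range straight to RGB
b = src.ColorYUV(levels="pc->tv").ConvertToRGB(matrix="Rec601")  # crush first, then Rec601
StackHorizontal(a, b)  # save a frame of each side and compare histograms in an image editor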

Yoshiyuki Blade

Quote from: y3llow on January 15, 2013, 03:20:31 PM
Well, I'd disagree; would you mind explaining why, in the case of this merge script and this source, you think they must?

Perhaps try it: take a Canon MOV that used an ITU BT601 color matrix and do a PC.601 conversion to RGB; then try a ColorYUV(levels="pc->tv") first, followed by a Rec601 conversion to RGB, both in AviSynth, and look at the histograms. :-) Which do you prefer?

OK, so the first example,

ConvertToRGB24(matrix="PC.601")

will keep the full detail and will look similar to a RAW still image converted in DPP with the same settings. However, having full range video in the YV12 space is a special condition; IMO, this should only be an intermediate step before reaching the final output.

The second example,

ColorYUV(levels="pc->tv")
ConvertToRGB24() # by default it assumes Rec.601, so it doesn't have to be specified

crushes the levels to TV range first, then scales them back to 0-255 when converted to RGB. Even though the histogram is bumpy now, this is pretty much what I'd expect from typical video.

I prefer keeping things in full range (option #1) simply because more quality is preserved. However, by convention, the YV12 color space assumes the TV range, so most players decode video under that assumption. It's possible to keep the full range in YV12 and set some flags for it, but like you said, video players have to honor it, and generally they don't. If you upload an unedited MOV file to YouTube, it'll just clip off that extra luma range. I don't think full range is supported in the Blu-ray specs either. For the final output, converting to the standard TV range is the safest bet to get the video looking as intended for most people.

I think the general process should flow like this (a rough sketch of the tail end follows the list):

1. Decode the video and do Interframe interpolation
2. ConvertToRGB24(matrix="PC.601")
3. Extract frames, enfuse, re-import to video
4. ConvertToYV12(matrix="Rec709")
5. Do any other misc filtering in the YV12 space
6. Encode
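As a sketch of steps 3-5 (the file pattern, frame count and fps are placeholders):

ImageSource("fused%06d.png", start=0, end=999, fps=24)  # re-import the enfused frames (RGB)
ConvertToYV12(matrix="Rec709")                          # crushes to TV range for standard HD
# ...any misc YV12 filtering here, then hand the result to the encoder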

Converting back to YV12 will crush the levels, but it will fit the standards of HD video. A properly functioning video player should then decode the video with its intended look. If you do this instead:

ConvertToYV12(matrix="PC.709")

The full range is kept, but most video players will simply clip the extra range upon playback, which causes crushed shadows and highlights. In fact, if you open a .MOV file straight from the camera, that's what it will look like, because it's stored pretty much the exact same way (but with Rec.601 coefficients). Specific video player configurations would be required to benefit from using full range.

deleted.account

Everything you've said is based on final delivery and media player handling, which I agree with, but you neglect the edit and grade steps before the final encode.

Only at the last step, as you have previously suggested, does any luma rescaling 'actually' need doing, when you step outside the color management of a decent modern NLE and rely on a media player's handling. Inside the NLE, if I want to 'simulate' a Rec709 output, for example, I can drop a view LUT on, or an ICC profile which affects the display, or a levels adjustment, or whatever other way the color management is handled. It is not necessary to actually scale luma until encode.

Quote
In fact, if you open a .MOV file straight from the camera, that's what it will look like, because it's stored pretty much the exact same way (but with Rec.601 coefficients). Specific video player configurations would be required to benefit from using full range.

As mentioned, the h264 has a full range flag set 'on', and any decent media player will honour that flag and scale luma, but that's academic, because at final delivery we scale luma and encode into 16-235 luma, 16-240 chroma to meet standards, whether that be ITU BT709 for HD or whatever.

Media Player Classic, QT (latest version), VLC (with HW accel off) and ffmpeg-based players all respect the h264 fullrange flag; so does a PS3, and so do Premiere CS5 onwards and FCPX.

It's up to the viewer to ensure their media player works correctly; I use a test h264 file to do this when I'm unsure of a player's handling, but again, academic, as mentioned above.

Quote
If you do this instead:

ConvertToYV12(matrix="PC.709")

The full range is kept, but most video players will simply clip the extra range upon playback

Again you talk of playback. Using PC.709 for the final encode, unless going to h264 and flagging it fullrange, is a misuse of the PC function; it's in AviSynth to give us YCC levels in RGB and back again for intermediate RGB processing, not for final encode.
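i.e. something like this (a sketch, assuming a full range source):

ConvertToRGB(matrix="PC.601")   # YCC levels carried 1:1 into RGB, no rescale
# ...intermediate RGB processing here...
ConvertToYV12(matrix="PC.601")  # and back again, levels still untouched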

This is what I'm getting at: the final output from the HDR merge process is not assumed to be final delivery; it's an intermediate for editing and grading, and there's absolutely nothing wrong with converting from a full range luma source to RGB via a PC matrix. All we get is a higher gamma output, i.e. brighter.

There's also nothing wrong with encoding out to a full range luma YCC intermediate and dropping it into a decent modern NLE (maybe the 32bit processing option is necessary in some NLEs), because although the preview will appear 'crushed', as you say, in reality it's just the display; the YCC levels and all the info are still there, even if the NLE has converted to RGB internally. If we do a levels or luma curves adjustment in the NLE, the so-called clipped shadow and highlight detail will slide into view, not be lost irretrievably.

Also, running at a slightly increased gamma into the NLE is not a problem, because many of us use a desaturated, low-contrast Neutral Picture Style or LOG-type style to capture as much info as possible without baking too much in, so the idea that we have to maintain some sort of black level is not an issue.

Here's a link to an Interframe HDR merge script I put together in December the year before last, which does no conversion to RGB, merges in 16bit linear light YCC, and outputs either 10bit lossless h264, 16bit TIFs or 16bit linear EXRs via AVS2yuv or yuvpipe, Dither Tools and ImageMagick.

It doesn't work correctly now and needs updating, as things have moved on (including the output of 10bit lossless h264), but I have found a way to batch process large video files, as they use a lot of memory. All the same:

From two exposures, I don't know whether all the enfuse RGB stuff is really necessary.

http://blendervse.wordpress.com/2011/12/24/canon-magic-lantern-hdr-feature-to-10bit-lossless-h264/

A post about luma levels at encode time:

https://blendervse.wordpress.com/2012/12/23/is-it-just-video-compression-that-kills-detail/

Dither Tools to 10 or 16bit:

https://blendervse.wordpress.com/2011/09/16/8bit-video-to-16bit-scene-referred-linear-exrs/

Again, the script needs updating.



Yoshiyuki Blade

Right, I've been saying that levels should be crushed in terms of the "final delivery" as a standard practice:

Quote
However, the levels do have to be crushed eventually when converting back to YV12, preferably at the very end of the chain (like in the hdr_join script).

By "end of chain," that's basically what I meant. With an automated process like the HDR script, the levels should be at TV range at the very end of the chain as opposed to the very beginning (the way it is currently).

I agree with everything you say too. Everything up to the final delivery should be left alone, to have more information to process. However, the automated HDR script doesn't really leave room for that. If it were up to me (and this is what I usually do), I stop at the point where enfuse merges all the frames, then I do various other adjustments in Photoshop before loading it back to video and doing further adjustments. It depends on the program used, really. Most AviSynth filters work in YV12, so you'll have to make adjustments before you convert to RGB, or after you enfuse and convert to YV12 again.

I think we've just been misunderstanding each other, but I've been speaking in the context of the HDR workflow script that's currently available. A more advanced workflow is beyond the scope of what this script offers, as you know.

deleted.account

I think the misunderstanding is the assumption that the output of the hdr split script is for delivery, when in fact it's a preprocessing step, so it's not the end of the filter chain but actually the start in most cases. The Interframe script doesn't do any filtering really; it interpolates frames, does a couple of color space conversions and screws with levels. :-)

Put aside the 'hdr' aspect of the MOVs and we're left with a memory card of video files. The first step is to put them into an NLE for editing, grading and titles, then final output; and we assume there will be many files to batch preprocess for editing with the 'hdr' feature, just like ordinary MOVs off the card.

I did agree with you early on that levels need scaling, but asked why, with regard to the HDR merge, the levels scale must be done; and I pointed out that doing it in AviSynth at 8bit int was detrimental to the source, and that a PC matrix outputs YCC levels in RGB and gives a better histogram.

The 'best' place to do the levels adjustment is in an NLE at 32bit float precision at final encode.

That's all I was trying to illustrate, but we seemed to get bogged down in playback handling, which, for the purposes of the HDR merge script as a preprocessing stage, seemed to be missing the point. :-)