Messages - deleted.account

#101
Quote from: 3pointedit on May 22, 2013, 09:45:35 AM
y3llow, why would you need to do temporal NR on a RAW sequence? There is no inter-frame referral of data?

By "no inter-frame referral of data' you mean because it's not "video"? Every frame refers to each other as it's in motion, we view it in motion? :-)

By temporal NR I mean that the NR tool does motion analysis, creates motion vectors, and the noise reduction algorithms check frames backwards and forwards: maybe just a couple each way, maybe 10 or 50 each way, depending on processing power.

The algorithms attempt to establish what, over time, is likely to be detail versus noise, high frequency jitter and shimmer, and set about reducing the unwanted part based on user-set parameters and the movement, or lack of movement, in the noise.

Noise reduction on a single image is a subjective, visual process: we adjust the sliders and decide on the parameters until we're happy with it, at the point where a balance is struck between noise and detail.

But at 24fps we're going to need something a bit more than profiling the camera and creating a dark frame.

I think noise reduction is best done in a number of small increments targeting specific problems, whether that's fixed pattern noise, shimmer or flicker, and then adding controlled levels of noise back in to avoid banding.

Really don't see how anything other than a temporal approach would be beneficial.
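As a concrete sketch, a motion-compensated temporal denoise in Avisynth using the MVTools2 plugin might look like the following; the two-frame radius and the thSAD value are my own illustrative choices, not a recommendation:

source = FFmpegSource2("clip.MOV", threads=1)
super = source.MSuper(pel=2)
# analyse motion one and two frames backwards and forwards
bv1 = super.MAnalyse(isb=true, delta=1)
fv1 = super.MAnalyse(isb=false, delta=1)
bv2 = super.MAnalyse(isb=true, delta=2)
fv2 = super.MAnalyse(isb=false, delta=2)
# average along the motion vectors; raise thSAD for stronger denoising
source.MDegrain2(super, bv1, fv1, bv2, fv2, thSAD=300)

A wider radius (MDegrain3 and beyond) trades processing time for smoother results, exactly the trade-off described above.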
#102
Quote from: Audionut on May 22, 2013, 08:43:27 AM
You're talking about software denoising right?  That pattern noise is the noise floor of the camera (a hardware problem).

Pattern noise removal would be dark frame subtraction?

Any noise reduction for raw video would be best done temporally, possibly with motion compensation. So: basic spatial and/or dark frame subtraction at the raw stage, then temporal noise reduction, luma sharpening etc. either at the image sequence stage or during the video edit / grading stage, if the chosen raw wrangling apps don't support temporal noise reduction.
#104
Quote from: squig on May 15, 2013, 06:12:30 PM
macgregor wrote "since there's no metadata on the raw files, there are no color profiles assigned to the images. Camera raw I suspect is using the standard adobe color profile, which in my opinion sucks. Canon profiles are much better. Adobe over saturates blues and skintones are less nice. So I wonder if the ML guys could apply the canon profiles to the dng so we could fix this. We could even use VSCO film raw picture profiles, for some extra fun."

There's no color profile assigned because it's raw; color primaries, and therefore gamut, aren't even defined. When the raw is 'developed' in whatever app is used, you define the color space. dcraw for example lets you develop as XYZ, ProPhoto, Adobe RGB or sRGB and lets you choose any gamma to be applied. So use a color managed app, define your working color space, try different working spaces, import the DNGs, compare histograms and channel clipping, and go from there.

I believe for example Apple products choose ProPhoto by default if no color space is chosen.

Another thing dcraw lets you do is set the sensor saturation point, which varies with camera model and manufacturer, to avoid the magenta tint.
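As a sketch of such a development on the command line (the flag values are illustrative, and the -S value in particular must match your camera's actual saturation point):

dcraw -v -W -6 -T -o 4 -g 1 1 -S 15600 frame_000001.dng

Here -o 4 develops to ProPhoto (-o 1 is sRGB, -o 5 XYZ), -g 1 1 applies linear gamma, -6 -T writes a 16-bit TIFF, -W disables the automatic brightening and -S sets the saturation point.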
#105
Quote from: mindogas on May 07, 2013, 10:31:46 PM
Your post should start from the last line, I think. The thing is that the question was about the 422toimage software, and for it the answer is clear - renaming the file extension messes things up.

Of course you're correct, as the author of the application. It 'messes things up' due to the way 422toimage handles the source; my apologies.
#106
Quote from: mindogas on May 07, 2013, 07:16:33 PM
Bad answer. The *.YUV extension is for YUV422 videos and *.422 for still pictures. *.YUV files contain some extra information about their frames, so this extra data is like trash for .422 conversion. Converting *.YUV files is quite easy, but you must know that this is an experimental feature. The ML and 422ToImage sources change every day, so you should use the latest versions when experimenting with YUV videos.

Is it really a bad answer? I rename the .422 files to .yuv, import into Avisynth, rescale chroma (the source YUV is JFIF, chroma over the full 8-bit range) and encode to h264 without creating any intermediate image sequences at all. :-)

We already know the image sizes for .422 files and can specify them in Avisynth, so no big deal; see the sketch at the end of this post.

It's raw 8-bit YUV, whether we call it image or video, and whether we name it .422 or .yuv.

It's down to how the host application handles it?
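A minimal sketch of that route, assuming the RawSource plugin and a 1056x704 UYVY frame (the frame size depends on camera and LV mode, so adjust; the filename is illustrative):

# load the renamed .422 as raw UYVY video
RawSource("silent.yuv", 1056, 704, "UYVY")
# squeeze the JFIF full-range luma into 16-235 before a standard-range encode
ColorYUV(levels="pc->tv")
ConvertToYV12()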
#107
Shoot Preparation / Re: Picture Style settings
May 07, 2013, 09:39:07 AM
Personally I use waveform, zebras set at Y 255 and histogram.

Zebras tell us what's happening with luma but not whether we're clipping color in the R, G or B channels when converted back to RGB for display and CC.

The histogram suggests when we're clipping those RGB channels: it is possible to have luma under Y 255, so no zebras, and still clip a color channel in RGB.
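For example, with BT601 coefficients Y' = 0.299 R' + 0.587 G' + 0.114 B', a fully saturated blue of (R,G,B) = (0,0,255) only produces Y' of about 29, so the blue channel is hard clipped while luma sits nowhere near 255 and the zebras stay silent.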

The waveform I use as much for seeing where dark shadows are sitting in the 8-bit levels range, in scenes of such high contrast that even while clipping the highs the shadows still crush.

#108
Shoot Preparation / Re: Picture Style settings
May 07, 2013, 12:35:40 AM
I think the general premise is ETTR, expose to the right, so you expose for the highlights. Personally I'd not like the idea of a PS lifting highlights; better to compress them slightly and roll them off.

Lifting shadows is there to reduce loss in compression, which otherwise throws too much of the data in the low levels away.
#109
Modules Development / Re: 14bit RAW DNG silent pics!
April 29, 2013, 07:19:05 PM
Magenta cast due to choosing too high a sensor saturation level in dcraw.

This link suggests leaving dcraw to set the black level but specifying the -S saturation point to suit the camera's performance:

http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm

Also, I'm finding that just interpreting with rec709 / sRGB primaries in the linear domain gives more 'accurate' color than some of the default settings in ufraw, darktable etc., which appear much warmer.
#110
Yeah, I was seeing yellow-tinged highlights with Cinestyle, but concluded it was a white balance issue, specifically from using a combination of ML features, custom auto WB and swapping picture profiles between recording and non-recording.

For example, it's recommended to set exposure using a more neutral PS and swap to Cinestyle to record, and ML lets us do that simply. However, if the WB is first auto-adjusted based on the neutral PS before shooting, then when swapping to Cinestyle to shoot, the WB setting is not necessarily accurate any more, and that's where I think the yellow highlights come from.

Since I now set WB in the recording PS rather than the viewing PS, no more yellow highlights; unless I'm imagining it, as I've not done a definitive test.
#111
And now there's Vision Color's Cinetech, less teal & orange than VisionTech / VisionColor.

Personal preference is VisionTech and Cinestyle. Not yet tried Cinetech.
#112
I think the misunderstanding is the assumption that the output of the HDR split script is for delivery, when in fact it's a preprocessing step; it's not the end of the filter chain but actually the start in most cases. The interframe script doesn't do any filtering really: it interpolates frames, does a couple of color space conversions and screws with levels. :-)

Put aside the 'hdr' aspect of the MOVs and we're left with a memory card of video files. The first step is to put them into an NLE for editing, grading and titles, then final output, and we assume there will be many files to batch preprocess for editing with the 'hdr' feature, just like ordinary MOVs off the card.

I did agree with you early on that levels need scaling, but asked why, with regard to the HDR merge, the levels scale must be done there, given that doing it in Avisynth at 8-bit int is detrimental to the source and that the PC matrix outputs YCC levels in RGB and gives a better histogram.

The 'best' place to do the levels adjustment is in an NLE at 32bit float precision at final encode.

That's all I was trying to illustrate, but we seemed to get bogged down in playback handling, which, for the purposes of the HDR merge script as a preprocessing stage, seemed to be missing the point. :-)
#113
Everything you've said is based on final delivery and media player handling, which I agree with, but you neglect the edit and grade steps before final encode.

Only at the last step, as you have previously suggested, does any luma rescaling 'actually' need doing, when you step outside the color management of a decent modern NLE and rely on a media player's handling. Inside the NLE, if I want to 'simulate' a rec709 output for example, I can drop a view LUT on, or an ICC profile which affects the display, or a levels adjustment, or whatever other way the color management is handled. It is not necessary to actually scale luma until encode.

Quote: In fact, if you open a .MOV file straight from the camera, that's what it will look like because it's stored pretty much the exact same way (but with rec.601 coefficients). Specific video player configurations would be required to benefit from using full range.

As mentioned, the h264 has a full range flag set 'on' and any decent media player will honour that flag and scale luma, but that's academic, because at final delivery we scale luma into 16-235 (16-240 chroma) and encode to meet standards, whether that be ITU BT709 for HD or whatever.

Media Player Classic, QT (latest version), VLC (with HW accel off) and ffmpeg-based players all respect the h264 fullrange flag; so does a PS3, so does Premiere CS5 onwards, and FCPX.

It's up to the viewer to ensure their media player works correctly; I use a test h264 file to do this when I'm unsure of a player's handling, but again that's academic, as mentioned above.

Quote: If you do this instead:

converttoyv12(matrix="pc.709")

The full range is kept, but most video players will simply clip the extra range upon playback

Again you talk of playback. Using pc.709 for final encode, unless going to h264 and flagging it fullrange, is a misuse of the PC function; it's in Avisynth to give us YCC levels in RGB and back again for intermediate RGB processing, not for final encode.

This is what I'm getting at: the final output from the HDR merge process is not assumed to be final delivery, it's an intermediate for editing and grading, and there's absolutely nothing wrong with converting from a full range luma source to RGB via the PC matrix. All we get is a higher gamma output, i.e. brighter.
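For instance, an intermediate RGB pass that leaves levels alone at both ends might look like this (the middle step is a placeholder for whatever RGB-only processing you need):

# YCC levels carried into RGB, no levels squeeze
ConvertToRGB(matrix="PC.709")
# ... RGB-only processing here ...
# and back out, levels untouched
ConvertToYV12(matrix="PC.709")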

There's also nothing wrong with encoding out to a full range luma YCC intermediate and dropping it into a decent modern NLE (a 32-bit processing option may be necessary in some NLEs). Although the preview will appear 'crushed', as you say, in reality it's just the display; the YCC levels and all the info are still there, even if the NLE has converted to RGB internally. If we do a levels or luma curves adjustment in the NLE, the so-called clipped shadow and highlight detail will slide into view, not be lost irretrievably.

Also, running at a slightly increased gamma into the NLE is not a problem, because many of us use a desaturated, low contrast Neutral Picture Style or LOG-type style to capture as much info as possible without baking too much in, so the idea that we have to maintain some sort of black level is not an issue.

Here's a link to an interframe HDR merge script I put together in December the year before last. It does no conversion to RGB, merges in 16-bit linear light YCC and outputs either 10-bit lossless h264, 16-bit TIFFs or 16-bit linear EXRs via AVS2YUV or a YUV pipe, Dither Tools and ImageMagick.

It doesn't work correctly now and needs updating, as things have moved on, including the output of 10-bit lossless h264, but I have found a way to batch process large video files (they use a lot of memory). But all the same:

From two exposures, I don't know whether all the enfuse RGB stuff is really necessary.

http://blendervse.wordpress.com/2011/12/24/canon-magic-lantern-hdr-feature-to-10bit-lossless-h264/

A post about luma levels at encode time:

https://blendervse.wordpress.com/2012/12/23/is-it-just-video-compression-that-kills-detail/

Dither Tools to 10 or 16bit:

https://blendervse.wordpress.com/2011/09/16/8bit-video-to-16bit-scene-referred-linear-exrs/

Again, the script needs updating.


#114
Quote from: Yoshiyuki Blade on January 15, 2013, 02:58:08 PM
That's normal, yeah? Since almost all standard video under the sun is TV range. I can open up a random blu-ray stream, take a screenshot and paste it into photoshop.

I'm not talking about Blu-ray or DVD or other cameras, I'm talking about the JFIF used by a Canon T2i for example. Check out Poynton's comments about JFIF maybe.

Quote: they must be scaled back to full range upon playback

Yes, absolutely, and that's what the fullrange h264 VUI option flag set 'on' in Canon MOVs is for: to force a rescale of levels at playback or transcode, as long as the decompressing codec honours the flag, like QT, ffmpeg etc. But that's playback, not an intermediate step to RGB for processing to then go back to 4:2:0.

Quote: However, the levels do have to be crushed eventually when converting back to YV12, preferably at the very end of the chain (like in the hdr_join script).

Well, I'd disagree. Would you mind explaining why, in the case of this merge script and this source, you think they must?

Perhaps try it: take a Canon MOV that used an ITU BT601 color matrix and do a PC.601 conversion to RGB; then try a ColorYUV(levels="pc->tv") first followed by a Rec601 conversion to RGB, both in Avisynth, and look at the histograms. :-) Which do you prefer?
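A sketch of that comparison, side by side (the filename is illustrative; a T2i source, hence the 601 matrices):

v = FFmpegSource2("canon_t2i.MOV", threads=1)
# route A: full-range luma mapped straight into RGB via the PC matrix
a = v.ConvertToRGB(matrix="PC.601")
# route B: squeeze levels to 16-235 first, then the standard Rec601 matrix
b = v.ColorYUV(levels="pc->tv").ConvertToRGB(matrix="Rec601")
StackHorizontal(a, b)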
#115
There's no reason to do a color space conversion and lose quality just to scale luma; it can be done in YV12 with a PC to TV type levels adjustment.

Regarding Canon MOVs, yes they are full range, but JFIF, so chroma has been normalised over the full range as well, in camera, before it's encoded to h264.

It's not necessary to rescale luma at all for the purposes of merging.

I'd suggest doing all the interframe stuff then using ConvertToRGB(matrix="PC.601") for T2is etc. or PC.709 for 5Ds etc.; best use MediaInfo to establish which. That gives YCC levels in RGB.

Rescaling luma at 8-bit int, if done poorly, will introduce quantization error in the RGB; evidence of that would be a combed, spiky histogram when viewing the image frame output.

#116
The Avisynth in the zip is version 2.6.0.

Try this download for the 2.5.8 version:

http://sourceforge.net/projects/avisynth2/files/AviSynth%202.5/AviSynth%202.5.8/Avisynth_258.exe/download

The Avisynth SourceForge site is a freaking mess: 'Download Latest Version 110525' is version 2.6, not 2.5.

I think that's the problem you're having: you need Avisynth 2.5.

Also, the hdr-split script still has a totally unnecessary ConvertToRGB then convert back to YV12: two color space conversions at 8-bit integer that are pointless. The original MOVs are already YV12; that's what FFVideoSource hands to the script anyway.
#117
@ivanatora, the noise in your shadows that you now see is, I'd hazard a guess, because you've been used to looking at your source files in a media player that doesn't respect the full range flag in the h264 source, so shadows have been crushed and highlights blown. Now that the full range luma levels have been converted into 16-235 (ffmpeg does respect the full range flag, so it squeezes full levels into 16-235), detail you'd not been seeing is now visible.

You'll notice on the ffmpeg CLI that it complains of an incompatible pixel format, yuvj420p -> yuv420p, with many transcodes from Canon h264.

If you play your original MOVs with ffplay on the CLI you may see what you are now seeing in the transcode.

#118
It could well be you are running the Avisynth 2.6 alpha, and that may be the problem. The link provided above to AVISynth_110525.exe is the 2.6 alpha; SourceForge's Avisynth setup is well stuffed up, so it's easy to download the 2.6 alpha rather than 2.5.

Try this link:

http://sourceforge.net/projects/avisynth2/files/AviSynth%202.5/AviSynth%202.5.8/

Then, before trying to use the HDR script, do a simple script to check all is working; make sure you have ffmpegsource2 in the plugins folder:

http://code.google.com/p/ffmpegsource/downloads/detail?name=ffms-2.17.7z

Then in a text file create test.avs, with this one line:

ffmpegsource2("myvid.mov", threads=1)

Drag it into VirtualDub and see if you get a preview.

Haven't got the link to the HDR script you're using, and it's been a long time since I used the feature, but here's my attempt, using a different Avisynth route, on my blog:

http://blendervse.wordpress.com/2011/12/24/canon-magic-lantern-hdr-feature-to-10bit-lossless-h264/
#119
Reverse Engineering / Re: (M)JPEG encoder
November 03, 2012, 11:56:38 PM
Quote from: 1% on November 03, 2012, 06:34:08 PM
Looks pretty similar... just one is scaled down and one is scaled up. Same YUV source.

Do you see the fine horizontal lines across the LV JPG, especially visible across smoother surfaces? I see these on all LV silent pics (whilst not recording) I've captured.

#120
So you can differentiate between what was shot in sRGB and what was shot in AdobeRGB color spaces, and therefore know which ICC profile to apply in your image editor in order to view them correctly on an sRGB monitor. If you say you get IMG in AdobeRGB mode I think you are mistaken. Perhaps double check.
#121
Nice selection of Meyer Optik Görlitz zebras including the 30mm Lydith, the 50mm and 80mm Orestors, the 135mm bokeh monster and the 400mm, amongst a few others. The majority are presets, so smooth aperture rings for riding aperture when necessary. Love them. T2i.
#122
General Chat / Re: Magic Lantern for 700D?
October 17, 2012, 03:11:29 PM
This camera works great with a Zacuto Z-Finder. The Canon APS-C T2i, T3i, T4i range is further relegated into consumerland, RIP, to make room for a lower cost full frame range like the 6D.
#123
Hi skydragon. Media Player Classic offers color management and control over levels. VLC will do the job too; just test with the fullrange samples and tweak settings. Alternatively, ffmpeg's ffplay will also do the job.

http://ffmpeg.zeranoe.com/builds/

A static build will provide ffplay in the bin folder.

Good luck.

**EDIT**

If using Avisynth, make sure to download ffmpegsource2 for your import plugin and add it to a script like this:

ffmpegsource2("mymov.MOV", threads=1)

http://code.google.com/p/ffmpegsource/
#124
Quote from: skydragon on October 13, 2012, 02:40:19 AM
Update;

Ok...I think I'm getting somewhere now, in terms of understanding the 'problem'.

I've just realised that if I try to play back a .MOV video file from my Canon 600D (T3i) on my PC using Windows Media Player or VLC player, it plays the video levels back incorrectly, in terms of the blacks getting darkened. I presume this is down to the 0-255 range of the video being expanded by the player and the blacks (and whites?) getting clipped?

It's taking the levels as if they were 16 - 235 and mapping them to 0 - 255 RGB without first squeezing the full range YCC levels into 16 - 235, so anything outside 16 - 235 gets clipped.
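In numbers, the standard conversion is R'G'B' = (Y' - 16) x 255/219, so a full range file's Y' = 0 lands at about -19 and Y' = 255 at about 278, both clipped; squeezing first, Y'new = 16 + Y' x 219/255, keeps everything inside 0 - 255.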

Quote: If I open and play back a .MOV video file from my 600D on my PC using Apple Quicktime player, it then plays back correctly.

I don't normally use QT Player, so hadn't seen this. I'd suggest not using QT or QT Player for anything, really. :-) QT / QT Player converts the 4:2:0 YCC to 4:2:2 and auto-adjusts levels; results will vary, not reliable.

Quote: (As an aside if I deselect 'Use hardware YUV-RGB conversion' in VLC Player's preferences, it then also plays back the .MOV video correctly...any ideas why?)

You're telling VLC to get the video card / driver to do the YCC to RGB conversion; whether that gets done correctly, and whether the right color matrix is used (i.e. BT601 for the T2i, BT709 for the 5D Mk III), is anyone's guess, depending on the video card and driver version.

Quote: So...now I know that I've been viewing 'darkened' video clips all the time outside of my editor, back to Sony Vegas...

If I use a 'Computer RGB to Studio RGB levels' FX applied to the Canon .MOV clips in the Vegas Studio timeline, then the resulting H264 .mp4 output render is 100% ok and then also plays back ok in all the players on my PC

Perhaps double check with the fullrangetest.zip

Quote: As a 2nd test, if I use Adobe Premiere Pro CS6 and put the Canon .MOV files straight on the Premiere timeline with no FX and render them straight out to H264 .mp4, then the resulting render is 100% ok and then also plays back ok in all the players on my PC

Premiere CS5, 5.5 & 6 use the MainConcept h264 codec and they respect the Canon h264 stream's 'fullrange' flag, so the full range levels of the Canon h264 get squeezed into 16 - 235 YCC before conversion to RGB, which has to be done at some point before encoding out. The encoded-out h264 is no longer full range levels, it's 16 - 235, different from the incoming Canon h264.

The correct levels range for YCC is 16 - 235 (240 for chroma), so that the 'correct' YCC to RGB conversion is more likely to be done by the multitude of media player / codec handling out there, with no reliance on the fullrange flag, which is only available in h264 anyway.

Quote: I presume that both of the above H264 output renders result in a sRGB .mp4 file which the players are happy with?

Phew...surely this shouldn't be this difficult ??!!??

No, sRGB is an RGB color model and has a 2.2 gamma curve applied, so it's display referred. :-) The h264 is YCbCr color model and has a rec709 gamma curve applied, which differs from sRGB, and rec709 is scene referred. This really only matters when linearizing the source in a 32-bit float linear RGB workspace, ensuring the correct reverse gamma function is applied.
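For reference, the sRGB encode is V = 1.055 L^(1/2.4) - 0.055 above a small linear toe (roughly a 2.2 power overall), while the rec709 camera curve is V = 1.099 L^0.45 - 0.099 for L >= 0.018, with a linear segment below; linearize the source with the wrong inverse and shadows and highlights come out subtly skewed.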

Quote: The core problem was the fact that my previewing of my camera's .MOV files on my PC desktop using WMP and VLC, resulted in me believing that contrast/levels were different to what they really were...and then doubting the NLE software...

Example of what I mean in terms of video at https://vimeo.com/51237605

Yep, a common problem, and why it is not a good idea to rely on a media player for any kind of analysis or screen grabs: so much interpretation of the YCC source happens before we actually see a result on our RGB monitors. A tool like Avisynth will prove much better for that.

Also a decent color managed and calibrated screen combined with a tested and color managed media player helps.

**EDIT**

Posted whilst you were replying, you're welcome.
#125
Quote from: skydragon on October 11, 2012, 09:40:05 PM
Some (many?) Sony Vegas users have probably already figured this out...

But I've just wasted a few days trying to figure out why video from my Canon 600D (T3i) DSLR didn't seem to colour correct or export from Sony Vegas Studio 12 well - the resulting (rendered-out as H264 .mp4) finished video when played back on my PC or uploaded to Vimeo had crushed blacks/detail and blown highlights.

Why...read on below (this contains some generalisations, but you'll get my point)

A Canon EOS DSLR's video files are Computer RGB (cRGB 0-255). I use a Canon 600D but I believe the same applies to the 7D, 550D, etc.

h264 is YCbCr (YCC for short); the camera doesn't create RGB, they're two different color models. The files aren't cRGB, whatever that is; 'Computer RGB' is Sony speak I guess.

Quote: When you view the .MOV clips straight from the EOS camera on a PC using windows media player etc all is well. Windows media player etc handles playback of cRGB 0-255 video correctly.

What WMP does depends on the underlying DirectShow codec decompressing the h264, and that can vary. But generally the media player handles the YCbCr to RGB conversion, possibly in combination with the graphics card (hardware assisted), and as a result levels handling can vary.

The Canon MOVs are encoded into h264 with full 8-bit levels. The feed to the camera's h264 encoder is raw JFIF YCC, that is, chroma and luma normalised over the full 8-bit range; it isn't RGB.

The MOV files are flagged 'fullrange' by the encoder to signal to the decompressing codec / media player that it should squeeze the h264 full range levels into 16 - 235 YCC before converting to RGB. This is because full range 8-bit YCC doesn't fit in 0 - 255 RGB, in 8-bit speak.

The problem is that older NLEs / codecs / media players may ignore the full range flag, or simply handle the source as 16 - 235 and not squeeze the levels before conversion to RGB: they chop off levels below 16 and above 235, expand the remaining levels over 0 - 255, and the contrast looks wrong, is wrong. So trusting a media player to give the right results, or to extract screen grabs, is not a good idea unless it's color managed like Media Player Classic, or tested first. VLC is a prime candidate.

Here's a test that contains a fullrange flagged file and non fullrange flagged file:

http://www.yellowspace.webspace.virginmedia.com/fullrangetest.zip

VLC, depending on settings, will not necessarily show the output correctly: if you don't see the 16 & 235 text then VLC isn't set up right. If the 16 - 235 text does show, you're good to go. Same for any media player.

Also, a media player's RGB conversion might not even use the right luma coefficients, i.e. BT601 or BT709, so pinks can go to orange. That's the typical result of transcoding BT601-flagged Canon MOVs to any other YCC codec like DNxHD, which are all assumed BT709 at HD resolution, resulting in orangey skin tones for example. Ok in camera, but not in the mangled transcode and playback. MediaInfo will show what luma coefficients are used or assumed. Earlier Canons like the T2i are BT601; the 5D Mk III is BT709 anyway. Best check before conversion to RGB.

Quote: If you then load the .MOV clip into the timeline of Sony Vegas whilst using a standard default 8-bit Vegas project (I don't think this applies to 32 bit projects? But these aren't the standard and I don't use them due to PC speed) the cRGB video clip is automatically converted by Sony Vegas from cRGB 0-255 into sRGB 16-235 (Studio RGB). There is no warning message or notification, this sRGB fact isn't made clear anywhere as far as I know.

The difference between 8-bit & 32-bit projects is to do with the YCC to RGB conversion that any NLE does these days, and precision.

First, the YCC to RGB conversion. As said, full range 8-bit YCC doesn't fit in 8-bit RGB; in fact 16 - 235 YCC doesn't fit completely into 8-bit RGB either, probably only some 35% is transferable, and the rest can generate invalid RGB values in 8-bit RGB and appear as gamut clipped white artifacts. But NLEs work in RGB, so what to do?

Live with it, or work in a 32-bit RGB workspace, which allows the full YCC range to be transferred and color processed in RGB. The idea at 8-bit is that 16 - 235 YCC maps to 0 - 255 RGB, the whole 0 - 1 thing in 32-bit speak; anything outside of 16 - 235 can create negative values, i.e. below 0, and values above 1. These can be held and used at 32-bit without gamut clipping, allowing us to grade the output into the 0 - 1 RGB range for encoding back out to 16 - 235 YCC.
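In numbers: the normalised value is (Y' - 16) / 219, so a full range source's Y' = 0 maps to about -0.073 and Y' = 255 to about 1.091; float headroom holds those values, 8-bit integer RGB clips them.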

Display, of course, is still 8-bit and should be based on a 16 - 235 YCC to 0 - 255 RGB conversion, not 0 - 255 YCC to 0 - 255 RGB, which Vegas appears to do, at least in an 8-bit project. At 32-bit, though, that's only the display, not the processing, so there is far, far less chance of gamut clipping in 32-bit, giving us room to grade without loss from a dodgy YCC to RGB conversion.

YCC encoded output from the NLE should always have 16 - 235 levels, unless it's h264 and we reflag the output 'fullrange'; that's only possible with h264, which has a VUI options extension to the specification (Annex E).

For 8-bit RGB image sequence output, as long as the correct initial luma squeeze is done in the YCC to RGB conversion and a BT601 color matrix is used, the levels / contrast etc. will be ok in RGB for Canon MOVs. Everything falls in 0 - 255 RGB (the 0 - 1 range).

For cameras that shoot 16 - 255 YCC, i.e. outside of 16 - 235, like the Sony NEX-5n, FS100 etc. in an MTS container, which I don't think are flagged full range, it can be even more important to use those sources in a 32-bit workspace.

For RGB image sequences converted at 32-bit precision in the NLE, where YCC levels go beyond 16 - 235, or for Canon MOVs when no levels squeeze has been done, an image format like EXR is required, one that can store RGB levels < 0 and > 1.

The precision part is that the 256 levels of the 8-bit YCC range are mapped onto the vastly finer scale of a 32-bit float workspace, which allows greater precision in color processing.

Quote: So what you may ask...

Well the Sony Vegas preview monitor and the playback monitor both work as standard in cRGB colour space.

There's no such thing as cRGB; it's sRGB in the monitor, and the sRGB / BT709 primaries are what define the gamut. What you seem to be implying is that Vegas displays the YCC (rec709) to sRGB conversion assuming full 8-bit levels rather than the 16 - 235 limited range?

Quote: So you are now viewing a sRGB video in a cRGB viewer. The end result is a dull, washed out look to someone viewing it. As a user unaware of why this is being caused, you then alter the levels and colour to correct the washed out look (wrongly as there is actually nothing 'wrong' with the video clip itself, just your viewing of it in cRGB space) and you end up applying completely wrong levels/correction as a result. Due to Sony Vegas Studio not having any meters or histograms, it makes the whole situation even more confusing.

The underlying problem is the decompression of the original h264 source and the handling of that by any video card intervention Vegas may use. If the fullrange flag is respected, the YCC will be squeezed into 16 - 235 YCC, then converted to 8-bit RGB (16 - 235 to 0 - 255 RGB) for playback, and that will be a correct preview. Going back to the full range luma test files.

Quote: When you then render out the Vegas timeline as a H264 video, the resulting render has the blacks/detail all wrong (amongst other things).

The workaround answer is to apply a 'Studio RGB to Computer RGB' level correction (there is a preset) on the main Vegas preview window. Carry out all your edits, levels and corrections in the knowledge that your preview window is now visually 'correct', and when you have completely finished, remove the 'Studio RGB to Computer RGB' level correction preset just before you render out the timeline (i.e. you must remove the 'Studio RGB to Computer RGB' level correction preset you previously applied to the preview monitor).

Hopefully this info will save someone else having the same hassle.

Apologies to anyone who thinks the above is obvious and to all those out there using 'proper' software with meters etc ;-)

I've heard this too, but had to respond as all the Sony speak about cRGB etc is just plain bad.