Messages - dyfid

#26
Quote from: Audionut on August 22, 2014, 01:47:08 AM
The DNG output from cr2hdr is 16bits.

So is the output from cr2hdr no longer raw then? Is it demosaiced and scaled to 16-bit, rather than the dual ISO simply holding two exposures so the raw channel levels sit higher? BMD raw channels, for example, are up in the 50,000 range.

No need to point me to a thread to explain it, as I'm starting to try dual ISO and am capable of my own research. :)
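
A quick way to check for myself is just to read the levels back out of the DNG. A minimal sketch, assuming rawpy (LibRaw bindings) is installed; the filename is only a placeholder:

```python
# Minimal sketch: inspect the levels cr2hdr wrote into a converted DNG.
# Assumes rawpy is installed; the filename is a placeholder.
import rawpy
import numpy as np

with rawpy.imread("converted_dualiso.dng") as raw:
    data = np.asarray(raw.raw_image_visible)
    print("black level per channel:", raw.black_level_per_channel)
    print("white level:            ", raw.white_level)
    print("actual data range:       %d .. %d" % (data.min(), data.max()))
    # Still a Bayer mosaic (one value per photosite), just rescaled,
    # so the data remains raw in the undemosaiced sense.
    print("CFA pattern indices:\n", raw.raw_pattern)
```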
#27
Raw Video / Re: RAW - COLOR NOISE - Canon 50D
August 22, 2014, 12:51:10 AM
You see it clear as day when it manifests itself, on specular highlights such as water surfaces:

https://www.dropbox.com/s/y6z7acar25s86co/000001.dng?dl=0

As mentioned earlier in the thread, the DCB algorithm in something like UFRaw or RawTherapee vastly reduces it to the point it's not visible, whereas a demosaic algorithm like AHD fails to deal with it. Resolve's demosaic of CinemaDNG looks like some variant of AHD.

To denoise this well you'd need GPU assistance and noise sampling from a number of neighbouring frames in both forward and backward directions. Running the problematic sequences through DCB in RawTherapee, on the other hand, solves it, for me anyway.
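
For anyone wanting to compare demosaic algorithms on that DNG outside RawTherapee, a rough sketch using rawpy (which wraps LibRaw; AHD and DCB are built in, AMaZE only if LibRaw was built with the GPL demosaic pack). The output filenames are just placeholders:

```python
# Rough sketch: develop the same DNG with AHD and DCB so the coloured
# speckle on specular highlights can be compared side by side.
# Assumes rawpy and imageio are installed.
import rawpy
import imageio

for algo in (rawpy.DemosaicAlgorithm.AHD, rawpy.DemosaicAlgorithm.DCB):
    with rawpy.imread("000001.dng") as raw:
        rgb = raw.postprocess(
            demosaic_algorithm=algo,
            dcb_iterations=2,      # used by DCB, ignored by AHD
            dcb_enhance=True,
            use_camera_wb=True,
            output_bps=16,
        )
    imageio.imwrite("demosaic_%s.tiff" % algo.name.lower(), rgb)
```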
#28
The saturation and posterization are from underexposure. Did you use the ML YUV or the raw histogram and waveform tools? The YUV ones will give incorrect exposure readings for raw, resulting in underexposure.

As for the QT gamma bug: which codec did you export to for Premiere, and on Mac or PC? Premiere doesn't use Apple QuickTime but MainConcept.

Reduced posterization in the final output would suggest gamma possibly being scaled back and forth in the transfer from one app to another.

Posterization can also appear worse on a display with a badly set up gamma response. If your display and your different apps are colour managed and you've set that up correctly, it will help you identify problems in transferring from one app to another with an intermediate format and the way it's interpreted.
#29
Quote from: Thomas Worth on August 20, 2014, 12:02:13 AM
Can you cite a specific example where Resolve can't debayer Canon/ML footage properly? I'd like to see this if it is indeed a problem. I haven't had a problem.

https://www.dropbox.com/s/y6z7acar25s86co/000001.dng?dl=0

Specular highlights on water, as mentioned in an earlier post. The AMaZE and DCB algorithms have no problem with this. It looks like Resolve 11 uses an AHD variant.

Another example.

http://www.magiclantern.fm/forum/index.php?topic=13012.0
#30
Although for some shots a different demosaic algorithm is more suitable than Resolve 11's. http://www.magiclantern.fm/forum/index.php?topic=13012.0

ayshih's description in the link nails it.

I find the DCB algorithm in RawTherapee (Mac / Windows / Linux, and free) vastly reduces the crazy coloured strands that line-skipped ML raw produces on things like specular highlights on water, which Resolve can't deal with. On the whole, though, I find Resolve's demosaic fast and of good quality.

One advantage of raw image processing tools like ACR, RawTherapee, UFRaw etc. is the raw-data and L*a*b-based tools that Resolve currently lacks. There's only so much that can be done with RGB adjustments and LUTs. But horses for courses: it depends where you want to invest / spend / waste time and what sort of image you want to craft, or not.
#31
Quote from: fisawa on August 17, 2014, 05:56:48 PM
Well, for me, Resolve end to end it's not an option now, and to make decent proxies for editing in FCP/PP, using the Rec709 the colour rendition was terrible. My workflow was just a way to try to match the image to the ACR default processing, which is very nice by itself and good for proxies.

And yes, the color match tool is no substitute, but I'm just an assistant editor making proxies, not a CC  ;)

I guessed that and understand; however, all my comments relate to ML raw in Resolve end to end, no proxies or log intermediates. But I'd query whether the Rec709 colour rendition was terrible because of underexposure, which is why I asked whether the ML raw exposure aids were used or the YUV ones. I've personally not had a bad experience with Resolve's raw-to-Rec709 output defaults when using the raw exposure tools. Never mind though.
#32
Not unless you intend to move the sliders per frame. :o

Seriously, I haven't noticed it so far, but it's best to test yourself.
#33
As explained, the DNxHD format restricts resolutions and frame rates.

You could encode to DNxHD at decent quality as an intermediate; using an intermediate format is advisable anyway. Then encode to h264 for upload with x264 via a tool like HandBrake, and use HandBrake's crop tool to remove the letterboxing as part of that encode.
#34
Share Your Videos / Re: One day in Paris
August 17, 2014, 11:30:53 PM
The heavy contrast loses a lot of the detail, which is a pity.
#35
Looks like you totally overexposed the shot, or whatever you're viewing the DNG in is making a mess of it. I've seen similar patterns in Canon h264 in overexposed areas. Can you post the DNG?
#36
Your test doesn't really tell us much; my comments relate to Resolve end to end, rather than exporting log as an intermediate, which would be the preferred route if not working in Resolve 11 end to end.

But with access to the raw data and control at raw level under the Rec709 curve, on 16-bit data at 32-bit precision, I personally don't see any point in log unless it's for some 3D LUT ***kery.

Did you use the raw exposure helpers or the ML YUV/JPEG-based tools? The latter will give you about 2 stops of underexposure, I believe.

Were you also viewing through a decent Rec709 gamma curve like BT1886, or through an sRGB display curve? The latter could make it appear as if the shadows were crushed.

Either way, those are just defaults; you have the ability to go back to the raw data and adjust below the curve, within the limits of the captured raw data, assuming decent control of exposure.

The colour match tool in Resolve is a helper for matching different sources shot under the same lighting conditions, not a one-button substitute for eyes on the scopes and some basic primary corrections on the default raw interpretation.
#37
Raw Video / Re: RAW - COLOR NOISE - Canon 50D
August 16, 2014, 05:43:41 PM
Resolve's demosaic is not always acceptable; however, in many cases I find its output very similar to AMaZE. It does fail on some ML raw shots, though. Which demosaic option did you use in UFRaw?

I find DCB with a couple of iterations helps a lot in these instances.
#38
Quote from: mannfilm on August 14, 2014, 11:37:43 PM
Is it correct that all video to be played on a HDTV should be encoded into the Rec 709

Not any video, but any HD video from 720p resolution upwards. If you were viewing DVD, PAL or NTSC, those are different specifications: different resolutions, primaries, transfer and colour matrix. And it's not the TV display that's decompressing the video and generating the signal.

Quote: or Rec 709 (16-235) color space, otherwise the blacks and whites get clipped, and the colors weird?

It's not so important whether you encode with undershoot and overshoot in the digital video realm; what is important is where your reference black and white sit, at YUV 16 and 235.

Quote: And for general public we should use Rec 709 (16-235) because of all the 8 bit consumer HDTV's out there???

HDTVs generally expect to receive a levels range of 16-235, whether YUV or "RGB Low" / "RGB Normal". What's important is that the viewer takes care to ensure their TV and media player settings match, levels-wise. If the wrong levels range is fed to a TV there will be problems with contrast and gamma.
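
As a rough illustration of what a mismatch does (a numpy sketch, not any particular player's maths): converting a full-range image into 16-235 once is fine if the display expects it, but doing it twice, or feeding the wrong range to the display, lifts blacks and dulls whites:

```python
# Sketch of 8-bit full-range <-> video-range (16-235) scaling, to show
# why a player/TV levels mismatch crushes or washes out the image.
import numpy as np

def full_to_video(x):
    """0-255 full range -> 16-235 video (limited) range."""
    return np.clip(np.round(16 + x * (219.0 / 255.0)), 0, 255).astype(np.uint8)

def video_to_full(x):
    """16-235 video range -> 0-255 full range (out-of-range values clip)."""
    return np.clip(np.round((x.astype(float) - 16) * (255.0 / 219.0)), 0, 255).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8)
once = full_to_video(ramp)       # correct single conversion: 16 .. 235
twice = full_to_video(once)      # accidental double conversion: ~30 .. 218
print(once.min(), once.max())    # 16 235
print(twice.min(), twice.max())  # lifted blacks, dull whites
```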

Checking levels ranges can be done by feeding the TV a few simple test images. That's not calibration; that would be the next step, and a LUT box the step after that, unless you're using a PC dedicated to colour-managed media playback.

Support for display refresh rates suitable for correct video cadence is also important; a computer monitor capable of only a 60Hz refresh rate and the sRGB colour space is crippled when it comes to handling 23.976 fps Rec709, for example.

Quote: So when do we start working in Rec 709 or Rec 709 (16-235)?

The important bit is monitoring and previewing on a calibrated display, to the specifications that display supports, whether that be Rec709 HD, PAL or NTSC; either LUT'd so you can swap between the specifications, or with a media player capable of doing the transforms, which is generally what happens, to varying quality.

If you're only doing web or TV then Rec709 is the specification to monitor with. Gamma is debatable, but BT1886 is suggested for LCD/LED/plasma as it exists to relate more closely to the old CRT gamma response that many of these video specifications were targeted at.
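
For reference, BT1886 is just a gamma 2.4 power law with the display's black level folded in; a small sketch of the EOTF (the 100 and 0.1 cd/m² figures below are example white and black luminances, not measurements of any particular display):

```python
# Sketch of the BT.1886 EOTF: signal (0..1) -> display luminance (cd/m^2).
# lw/lb are example white/black luminances, not measured values.
import numpy as np

def bt1886_eotf(v, lw=100.0, lb=0.1, gamma=2.4):
    a = (lw ** (1 / gamma) - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (lw ** (1 / gamma) - lb ** (1 / gamma))
    return a * np.maximum(v + b, 0.0) ** gamma

signal = np.linspace(0.0, 1.0, 5)
print(np.round(bt1886_eotf(signal), 3))   # 0.1 ... 100.0 for this example display
```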

Quote: I've seen some advocate going RAW to Prores 444XQ and later down-convert to Rec 709???  Yet others (broadcast?) seem to start in REC 709 and stay there. Which is better?

Whatever you do in the grading process, you're generally monitoring it in Rec709, even for cinema, unless you grade using a projector that can do P3. Where you go from raw, and what codec you put it in, is anyone's choice.

btw sRGB monitoring is generally not advised.

Quote: In grading, is there trick or something for encoding into Rec 709 or Rec 709 (16-235)?

Thanks in advance

What you see on your calibrated reference display is, or should be, what gets encoded; what matters is that the correct specification is chosen for encoding.

For example, just because a video is encoded to h264 doesn't mean it's Rec709 primaries, transfer and colour matrix. When the stream isn't flagged, what determines the assumed specification is generally the lines of resolution.
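
Roughly, the rule of thumb many players fall back on for untagged streams looks like this toy sketch (the thresholds are the common convention, not a quote from any one specification):

```python
# Toy sketch of the "untagged stream" heuristic: guess primaries/matrix
# from the frame size. The thresholds are a common convention only.
def guess_spec(width, height):
    if height >= 720 or width >= 1280:
        return "BT.709 (HD) primaries / matrix"
    if height == 576:
        return "BT.601-625 (PAL) primaries / matrix"
    return "BT.601-525 (NTSC) primaries / matrix"

for size in ((1920, 1080), (1280, 720), (720, 576), (720, 480)):
    print(size, "->", guess_spec(*size))
```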

Also, there appears to be an issue with QT h264 encoding on Macs through Resolve, and probably any QT-based Mac app, with regard to sticking to the specification and not introducing tweaks to suit the OS-level colour management, resulting in playback on non-QT players appearing incorrect.
#39
Quote from: baldavenger on August 14, 2014, 03:24:06 AM
Thank you dyfid for that extensive and extremely informative response.  I learned more useful information in that entry pertaining to my most pressing requirements than I have in the last month of online research.  On many things you have put my mind at rest, and I have to believe that most people on this forum would find your clarifications invaluable.

It's only my opinion on what I've concluded myself, nothing more; it's up to individuals to make their own choices and conclusions through testing.

I'd actually prefer to be challenged and corrected, to be honest.

Quote: On the subject of ACES, why exactly do you believe it is of no use to ML Canon raw?

You'd require proper IDTs for each camera you wished to use, and that really requires camera manufacturer input. Without that, ACES for ML raw is flawed. There's also the question of whether the cameras' limited capabilities in gamut and DR make it worthwhile, benefit-wise, over a non-ACES workflow. The only benefit might be when dropping ML raw into an ACES workflow alongside more capable and suitable cameras with manufacturers' IDTs, and even then it's not one size fits all.
#40
Quote from: EVZML on August 13, 2014, 10:55:27 AM

@dyfid: "QT h264 on the mac is encoded incorrectly. On Windows QT h264 is encoded correctly."
Really? WOW! Maybe this is the problem. Would it look correctly on Mac if I would upload the video on youtube/vimeo?


I'm on a Hackintosh, use Resolve on Mac and Windows, and have been checking the h264 output from Mac Resolve 11 and Windows Resolve 11. The Mac output is always restricted range regardless of whether data or video levels are chosen in Resolve for encoding, but more importantly, no matter what combination of settings is used, Mac h264 doesn't convert back to the same levels and gamma as the input the way the Windows h264 does.

I can only assume Mac h264, even from Resolve, is meant to be decompressed via QT on a Mac, rather than respecting a standard across all platforms regardless of OS.

But really the best advice is to encode to a decent intermediate format (on Windows I'd suggest DNxHD) and then encode that through something like HandBrake using x264, a proper h264 encoder. :) The only problem is that DNxHD is restrictive regarding frame rates and resolutions, so for some of the more exotic aspect ratios of ML raw you'll end up encoding with a letterbox to DNxHD to get the typical 1920x1080 frame size. No problem though: if you want to do your final encode and upload non-letterboxed, say 1920x816, it's easy to add a crop operation in HandBrake or similar when encoding to h264.

Quote: @dyfid: I don't really understand all of that. I will do some google search in the next days to understand what you mean :)

Nor me, funny that. :)
#41
Quote from: goran on August 13, 2014, 08:29:16 PM
The point is to get faster, closer to the look you're after and that's what LUTs are ultimately for.

"The look you're after ", hmm. Look LUT's are for one of two things, instant gratification with least amount of effort or for the technicalities of onset, dailies and all that world.

I think grading is about crafting an image that evokes emotion and feeling, not applying a look from a catalogue of looks.

Quote: They're a fast way to get 70-80% of they way for most people.

Look LUTs -> instant gratification :)

Quote: Also for creating dailies there's no substitute for LUTs.

Technicalities of production. :)

All in jest, we do what we want and what we like, nothing else.
#42
Quote from: baldavenger on August 13, 2014, 08:06:40 PM
I'm coming round to dyfid's way of thinking on this.  My initial concern was the apparent limitations of the Rec709 colour space

"Apparent limitations" is a good description, but as you infer, the concern is unfounded for raw.

The Rec709 gamut is not an issue; it's generally the gamut the majority will view the final video in, whether on an sRGB monitor of questionable ability, a Rec709 LCD or LED TV, or even a Rec709 home cinema projector.

The Rec709 gamma curve would be a limiting factor if encoded into lossy compression, such as the 8-bit h264 profile Canons use.

But in the case of raw there is, even in Resolve 11, far more flexibility to revisit the raw data while previewing through the Rec709 gamma transform, which is where the final video is heading anyway, whether it's transformed in a controlled way via an output LUT or just injected with contrast and saturation to suit.

It's totally different to dealing with Rec709 h264, as we all know, where you can't keep going back to the raw data to adjust highlights, shadows, lift, gain, white balance etc., because the camera has done all that for us, prior to transforming to the Rec709 colour space and whatever gamma is chosen.

The Rec709 raw transform in Resolve is 16-bit, not 8-bit, so that's 65,536 levels to play within at 32-bit precision, and it's considered that even with a typical 2.2-2.4 gamma encoding on 16-bit data, 16 f-stops of DR can be comfortably distributed, albeit not linearly. Choose linear for gamma with Rec709 in Resolve and it's even more comfortable, but it's not good to grade in linear.

Absolutely no point, imho, in using log unless you're going to an intermediate for edit and grade outside of Resolve. That's what log is for: to efficiently maximise data in a minimal bit depth, 8- and 10-bit being the most common. But again, not for raw data.
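
To put rough numbers on both of those claims (a sketch, nothing to do with Resolve's internals): count how many code values each stop below clipping gets under a plain gamma 2.4 encode at 16-bit versus 8-bit. At 16-bit even the deep shadows keep hundreds of codes, which is the "comfortable" part; at 8-bit they collapse to a handful, which is the gap log curves were designed to fill:

```python
# Sketch: code values available per photographic stop under a pure
# gamma encode, at different bit depths.
def codes_per_stop(bits, encode_exponent=1 / 2.4, stops=16):
    peak = 2 ** bits - 1
    counts = []
    for s in range(stops):
        hi = (2.0 ** -s) ** encode_exponent * peak        # top of this stop
        lo = (2.0 ** -(s + 1)) ** encode_exponent * peak  # bottom of this stop
        counts.append(int(round(hi - lo)))
    return counts

print("16-bit gamma 2.4:", codes_per_stop(16))
print(" 8-bit gamma 2.4:", codes_per_stop(8))
```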

Where does log play a part in a typical raw development application like Lightroom, UFRaw, RawTherapee or Darktable? Nowhere to be seen, because it's pointless there.

In fact I'd suggest the future for raw development in Resolve, maybe in the next release, is to expand the L*a*b toolset. In 11, BM introduced the L*a*b colour space for the first time; that's a pointer to where they're heading. You can only do so much with typical RGB tools and 1D LUTs (lift, gamma, gain and curves).

Quote: and like most I'm reluctant to compromise any image quality therefore the expanded gamut BMD Film appeared to be a better choice, without necessarily knowing exactly what any of that in fact means.

However, considering the initial access to RAW data and 32bit floating point (so no clipping) then perhaps that's unnecessary?

Expanding the gamut via BMD Film is one thing, but applying it to a camera with different sensor capabilities is another. It gives a look, but whether it's detrimental is another question. To me, BMD Film is being promoted by the LUT creators to provide a flat, log appearance as a starting point for their heavy LUTs, in order to maximise the instant gratification and minimise the "it all looks too contrasty and saturated", rather than working with primary grading on the raw to get it where you want and then applying the look LUT last.

The raw process as I understand it is: raw -> WB -> demosaic -> adjust exposure etc. -> scale into output bit depth -> intermediate colour space (XYZ) -> transform to output colour space (Rec709) -> apply gamma curve or linear -> output.

Every time we adjust WB the raw is demosaic'd again and the cycle continues.
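
Roughly the same order can be seen in how a generic raw developer is driven; a sketch using rawpy over LibRaw, where each parameter lines up with one of the stages above. This is not a description of Resolve's internals, and the filename is a placeholder:

```python
# Sketch: the stages above mapped onto LibRaw (via rawpy) parameters.
# Not Resolve's pipeline, just the same general order in another tool.
import rawpy

with rawpy.imread("clip_000000.dng") as raw:
    rgb = raw.postprocess(
        use_camera_wb=True,                              # WB (applied before demosaic)
        demosaic_algorithm=rawpy.DemosaicAlgorithm.DCB,  # demosaic
        exp_shift=1.0,                                   # exposure adjust (1.0 = no change)
        no_auto_bright=True,
        output_bps=16,                                   # scale into output bit depth
        output_color=rawpy.ColorSpace.sRGB,              # output primaries (sRGB/Rec709 share primaries)
        gamma=(2.222, 4.5),                              # Rec709-style curve, or (1, 1) for linear
    )
print(rgb.shape, rgb.dtype)   # (height, width, 3) uint16
```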

Quote: 3D LUTS are destructive so surely they should be avoided unless entirely necessary?  I know Peter Doyle refuses to use them so that's worth considering.

There's someone to aspire to for anyone who describes LUT ***kery as "grading": a guy who uses L*a*b and custom tools and has a real passion for the art.

Quote: I'd love a definitive explanation on all this as I have a project coming up and I've yet to decide my final workflow approach.

The definitive explanation has got to be to test. A 3D LUT is nothing magical: it's input values mapped to output values with a heap of interpolation in between, based on a specific profile of a camera created under set shooting conditions, not one size fits all.
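
That really is all a 3D LUT is; here's a toy sketch of applying one with trilinear interpolation. The cube below is just an identity lattice built in numpy, so the output equals the input, but a real look LUT is the same structure with different numbers in it:

```python
# Toy sketch: a 3D LUT is a lattice of output colours indexed by input
# colour, with interpolation in between the lattice points.
import numpy as np

N = 17                                      # a common cube size (17x17x17)
grid = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
lut = np.stack([r, g, b], axis=-1)          # identity cube: output == input

def apply_lut(rgb, lut):
    """Trilinear interpolation of one RGB triple (components in 0..1)."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0, 1) * (n - 1)
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    f = pos - i0
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                idx = (i1[0] if dr else i0[0],
                       i1[1] if dg else i0[1],
                       i1[2] if db else i0[2])
                out += w * lut[idx]
    return out

print(apply_lut((0.25, 0.5, 0.75), lut))    # ~[0.25 0.5 0.75]
```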

Quote: I'd really like to know the best and cleanest way to emulate film stocks.

Mmm, mentions Peter Doyle and yet the goal is to emulate film stocks. :) Not something he'd do. :) Why emulate film stocks? The goal, surely, is to create imagery that provokes a feeling, a memory. From what I've heard and read, he draws inspiration from everywhere other than a freaking film stock. :) Joking.

Quote: Also, what do people make of the ACES workflow, and is it worth the hassle for the benefits, if indeed there are any?

Nope, not for ML Canon raw.
#43
Quote from: EVZML on August 13, 2014, 05:34:50 PM
Hey. Thanks for the fast reply!
I recorded some test footage, in RAW and in H.264, so you can see what I mean.

@dyfid: Yes, clean template. Maybe you're right, but I'm always like learning by doing  ;)  :)


Yes me too, best way.

I've not used all these LUTs, but if the h264-to-whatever LUT looks right to you, like the bottom image, then Rec709 for colour space and gamma would seem to be the choice; that's what your h264 is using, Rec709 primaries and the Rec709 transfer curve.

To me, using the BMD colour space and gamma for ML raw -> final output in Resolve is pointless; there's no point using a log-ish transform for raw because you have full control over shadows, highlights, lift and contrast within the raw tab.

Every intermediate, this 3D LUT to log to that 3D LUT, is ***kery to me when working with raw, introducing interpolated error. If you start with a flat or log source out of the camera then fair enough, work in log, but with raw what is the point?

Try Rec709 colour space and gamma in your raw tab, apply your M31 or whatever look LUT to your output, and tweak the raw settings mentioned above, which will affect the raw data under your M31 LUT, until you like what you see. That's closer to your previous h264 -> M31 workflow.



#44
Are you starting from a clean template that you've set up for raw, with no hidden track or node LUTs applied?

Have you satisfied yourself that your primary corrections on the raw files give you a good starting point to do your LUT ***kery afterwards, using the scopes as a guide?

Then apply your input LUT and check the scopes and preview, then apply your output LUT and look at the scopes again.

Maybe step away from following a crib-sheet approach and, hope you don't mind me saying, learn how to use Resolve a bit better; relying on these look LUTs is so limiting and doesn't help when problems arise.
#45
Quote from: YourMom on August 13, 2014, 12:28:16 AM
I may be wrong, but I'm pretty sure the t3i only writes up to a speed of about 24 mb/s

Correct, but card speed specs are usually quoted for read, not sustained write, and sustained speed is key; that's why I said 95MB/s. As a 550D owner I find 45MB/s cards inconsistent, particularly with the SRM module's doubled record times.
#46
Quote from: YourMom on August 13, 2014, 12:24:10 AM
Sorry, I'm here to ask a different question.  I can't seem to get any raw video shots without noise on the 600D, daylight, lowlight or otherwise.  I'm trying to use neat video only as a last resort, because it tends to soften the image, I'm aware that I can sharpen it, but it just isn't the same. I understand that I am unable to get perfect results with the 600D, but I don't think I'm getting optimal results. Can somebody please recommend the most ideal settings as far as aperture, shutter speed and ISO?  I'm kind of new to messing with raw, but once I got a taste, I just can't go back to h264.  Please help! Any suggestions for daylight and lowlight would be greatly appreciated.

Thanks in advance!

Best to just start a new thread rather than bury this here? You'll get more responses.
#47
Thanks for the reply. Comparing raw -> BMD Film with raw -> Rec709 in Resolve, I see no real reason so far to use BMD, but it's personal choice, I understand that.

Do you apply a BMD-to-Rec709 3D LUT for output, or just grade the BMD output until it looks the way you want and forgo the final output transform from BMD -> Rec709?

Going to the final colour space and adding gamma is the very last operation on raw, so Resolve 11 gives control over shadows, highlights, saturation etc. before the colour space and gamma are applied. It's 16-bit data, so even with Rec709 gamma applied there are enough levels available to support about 16 f-stops comfortably, more than enough to play within without needing the BMD Film transform.

It depends which way you go about it, I guess: process raw -> BMD and grade without a Rec709 output 3D LUT, giving all that room in shadows and highlights since it's log but having to inject contrast and saturation; or work with a Rec709 output LUT on top and pull the shadows and highlights into play. Nothing is lost either way.

The BMD Film gamma and colour space are specific to their cameras, in the same way that Canon raw would need its own proper transform. I can't help thinking that part of the BMD Film and colour space transforms and 3D LUTs is there to massage specific BMD sensor colour biases, which you can bet are not the same colour biases as in Canon raw, and so could be more detrimental to Canon raw than helpful.

I thought one of the goals of DNG and CinemaDNG was to provide all the data required for transforming without the application needing to know the specifics of a certain camera, as long as those specifics are open and transparent. I assume the other camera raw options in Resolve, such as BMD's, exist because manufacturers don't want to provide that information.

But for Canon raw, has no one profiled the cameras the way FilmConvert have, in order to add the correct info into the DNGs for Resolve and other apps to interpret? None of the .MLV and .RAW to DNG wranglers have this functionality, beyond a basic matrix and black level?
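
For what it's worth, the baseline metadata a DNG does carry is easy to inspect; a sketch with rawpy (the filename is a placeholder) printing the matrix, black level and white balance a generic developer falls back on:

```python
# Sketch: the baseline metadata a DNG/CinemaDNG frame carries that a
# generic raw developer can use without knowing the camera model.
# Assumes rawpy; the filename is a placeholder.
import rawpy

with rawpy.imread("frame_000000.dng") as raw:
    print("colour matrix (as LibRaw exposes it):\n", raw.rgb_xyz_matrix)
    print("black level per channel:", raw.black_level_per_channel)
    print("white level:", raw.white_level)
    print("camera WB multipliers:", raw.camera_whitebalance)
    print("daylight WB multipliers:", raw.daylight_whitebalance)
```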
#48
They profiled the individual cameras with colour charts under controlled conditions, giving them the input values, so they know how to transform those through the FilmConvert process to arrive at all the emulated film stocks, just as they did with the h264 camera profiles.

They too suggest BMD Film as the colour space for raw, with highlight recovery (HR) on for all clips regardless. The only time HR is needed is if the raw data gets clipped, and indiscriminate use of HR can be detrimental. ML provides a raw histogram with clipping alerts to avoid the problem in the first place, and minor clipping of highlights is just that.

Why is BMD Film being recommended everywhere for raw? Is Canon raw data sufficient to spread over a wider gamut? Are the red and blue photosites on Canon sensors capable of capturing well beyond 709, or is it more about adopting the BMD gamma curve for raw, with the wider-gamut colour space unfortunately coming along beyond what is required?

For end-to-end raw in Resolve, what is the point of BMD Film? 11 offers much more raw control compared to 10. Or does BMD Film just help with all the 3D LUT ***kery masquerading as grading that's going on? Going to a log curve of some sort for grading outside of Resolve, fair enough.
#49
You need a faster card, something like a 95MB/s SanDisk, 16GB or bigger. Format it to exFAT. The SRM builds will give you double the record times, for example 1280x544 for about 15-20 secs at 23.976.
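
To put numbers on why sustained write speed matters (a back-of-envelope sketch; 14-bit packed Bayer data, before any container overhead):

```python
# Back-of-envelope sketch: continuous data rate of 14-bit raw video,
# which is why sustained write speed matters more than the rated read
# speed printed on the card.
def raw_rate_mb_s(width, height, fps, bits_per_sample=14):
    bytes_per_frame = width * height * bits_per_sample / 8.0
    return bytes_per_frame * fps / 1e6     # decimal MB/s, like card labels

print(round(raw_rate_mb_s(1280, 544, 23.976), 1), "MB/s")   # ~29.2 MB/s
print(round(raw_rate_mb_s(1920, 1080, 23.976), 1), "MB/s")  # ~87.0 MB/s
```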

Search the forums for your specific camera, though, and give more useful info, or you'll get arsy, obnoxious answers or just be ignored. Help yourself and search first.
#50
Dedicated video playback via the Mini Monitor isn't restricted to Resolve, btw.

http://www.blackmagicdesign.com/products/decklink/techspecs/

You'll see that the device is also supported in AE CC, FCPX, PS CS, Premiere CS, Nuke, Avid Media Composer, Sony Vegas Pro etc., so it's even more useful whether you're on Resolve, unsure about moving to Resolve, or have no intention of moving.

10-bit is the least important factor; I don't think many will see that as a reason to purchase.