DaVinci Resolve and ML Raw

Started by baldavenger, September 01, 2015, 11:41:51 PM


baldavenger

I looked into the effects of extending the White Level value in a DNG's metadata. With 5D MkIII footage it is 15000, which is mapped to the value 1.0 in Resolve. The reasons for the White Level being lower than the full 14-bit limit (16383) are well documented - Peak Saturation, non-linearity past that point, magenta cast - but when Highlight Recovery is enabled these extra values come into play anyway.

I ran some tests with DNGs whose White Level I altered using ExifTool. There were three varieties: Normal 15000, Full 16383, and Bright 10000.

In Adobe Camera Raw they appeared as expected, with Full 16383 slightly darker than Normal 15000 (because the same raw values are now mapped slightly lower to accommodate the additional 1383 values) and Bright 10000 much brighter, since the lower White Level is mapped to peak white, raising everything else accordingly. Roughly speaking (ignoring black level subtraction), a raw value of 15000 lands at about 15000/16383 ≈ 0.92 with the Full setting, at exactly 1.0 with the Normal setting, and well past clipping at 15000/10000 = 1.5 with the Bright setting.

Using the Exposure and Highlight controls it was very easy to bring back the highlight values that got clipped or heavily compressed, so in this case the White Level is really just a convenient guideline for Adobe Camera Raw to map Peak White, and no actual raw data is compromised by altering the metadata.

In Resolve it is a somewhat different matter. When the three DNGs are initially brought in, the appearance is more or less the same as in ACR, but using controls like Exposure and Gain to bring back highlight detail reveals that this information has been clipped, with Bright 10000 clearly showing the missing information. Only by enabling Highlight Recovery is that signal information brought back into play, and only through whichever algorithms Resolve uses to rebuild signals.

Now, even with Highlight Recovery enabled, Resolve tries to build more highlight detail on Full 16383, even though there is no reserve signal to call upon. I prefer to use the signal splitting techniques mentioned earlier in the thread to do highlight rebuilding, but that can be combined with Highlight Recovery to increase dynamic range if done judiciously. The less of a signal Highlight Recovery has to anchor to initially, the more control you'll have overall, which is why I now recommend (for those using Resolve and willing to put in the extra work in order to achieve superior results) that you change the White Level of your DNGs to its maximum: 16383 for regular and 65535 for Dual-ISO. This process also involves using the Exposure control to scale the signal so that a white diffusing reflector sits at a value of 1.0.

This is a deliberate attempt to take the DNG process away from its originally intended display referred/stills approach, and adapt it towards a more scene referred/film approach like that of Cineon and LogC.

With the sterling aid of the two Daniels (@Danne and @dfort) I've been able to change the White Level of both DNGs and MLVs with relative ease.

$ exiftool -IFD0:WhiteLevel=16383 * -overwrite_original -r

is a quick and dirty way to change the White Level of every DNG in the folder (and its subfolders).
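To confirm the change took, the tag can be read back with the same tool (a quick check, not part of the original recipe; the filename is just a placeholder for any of the converted DNGs):

$ exiftool -IFD0:WhiteLevel MyClip_000001.dng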

echo "0000068: FF 3F" | xxd -r - Input.MLV

Converts the White Level in the header of an MLV. Very handy if you're using MLVFS in combination with Resolve.
@dfort figured out the code and the HEX values needed.

15000 = 98 3A
16383 = FF 3F
10000 = 10 27
60000 = 60 EA
65535 = FF FF
30000 = 30 75
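Each byte pair is simply the White Level written as a 16-bit little-endian hex value, so other levels can be generated on the fly. A rough sketch (assuming the same header offset as in the example above):

# compute the little-endian byte pair for any White Level and patch the MLV
WL=16383
printf '0000068: %02X %02X\n' $((WL & 0xFF)) $((WL >> 8)) | xxd -r - Input.MLV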

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

Bonus Tip relating to the first example from Reply #98

Create a scrolling Waveform window by just adding a Node and adjusting Gain.



In Node 05 you simply adjust the Gain control in LGG to create the scrolling effect. Gain has values from 0.0 to 16.0, so that covers quite a stretch of signal. To disable the process either replace ResolveConstant_-1.cube with ResolveConstant_Zero.cube, or change the Layer Node option from Add back to Normal.

Even better is if you have a control surface and use the Gain dial to scroll. The displayed Gain value will also identify which part of the signal is currently being viewed in the waveform 'window'.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

DeafEyeJedi

Hey, I have a question regarding the dark frame averaging workflow. I've been doing a lot of rough testing with MLP and was curious whether it matters how long I record the MLV (3 secs vs 10 secs) used to make the average that gets applied to the footage before the render begins, for optimum results?

I also find the improvements more noticeable if I work with the DNGs directly rather than with the ProRes 4444 spat out by ffmpeg. So instead I take the DNGs into AE and export as ProRes 4444 XQ for best quality as a flat log, unless I do all the dirty work (CC, LUTs, etc.), in which case it can be exported as H.264 for online viewing purposes.
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

baldavenger

With regard to recording time, it is subject to the law of diminishing returns, so anything more than a few seconds makes little or no difference.

As for your other enquiry, what are the exact workflows that you are using?
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

DeafEyeJedi

Exactly what I thought. I figured 3 seconds would be plenty for the dark frame average to be made. Thanks for the clarification!

I'm currently testing out 'Scenario #1' for dark frame average processing within MLP, which seems fairly straightforward and works wonders!

Re: workflows ... I find it more effective to use the DNGs (with the dark frame average already applied) and export them as H.264 or ProRes. I also played with Cinelog-C together with these DNGs, which was rather interesting to experiment with on my end.

Admittedly, I am in the middle of the mess I made on MLP's thread last night, so please excuse me for a moment; I'll post my better test result comparisons and share them with you shortly.
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109


baldavenger

So here's a basic workflow that involves some of the methods that have featured so far in this thread.

ML Raw -> RCM -> LogC AWG -> Linear -> Signal Split -> Cineon -> Print LUT (optional) -> Grain Overlay -> Display Gamma (sRGB/Rec.709)

An alternative workflow involves a Film Matrix conversion.

I recommend setting the MLV White Level to its max value (though this is optional) and raising Exposure by up to 1 stop in the Camera Raw controls to compensate for the DNG's display-referred scaling.




Here's the workflow that involves Film Matrix:




If you want to replace the 1D LUTs with a hand-made powergrade, then simply take the reverse LUT (i.e. the sRGB to Cineon LUT if trying to replace the Cineon to sRGB LUT) and apply it to the Grey Ramp and manipulate the controls until it's straight again (as done previously in the thread).

Included in the ML Resolve Workflow Pack are a set of 1D LUTs, a pair of 3x3 Matrices, a pack of 3D Print LUTs, Signal Splitters, and a few other useful elements along with the two main powergrades that feature above. Import the powergrades by right-clicking in the Gallery tab and choosing Import.


https://www.dropbox.com/s/w46zz4g2swtjgbe/ML%20Resolve%20Workflow.zip?dl=0


Here's the link to the Grain samples that featured previously in the thread:


https://www.dropbox.com/s/lpo6lnes2uzq13h/Grain%20Scans%20HD%20Flip%20Flopped.zip?dl=0


Have fun grading and feel free to post results.

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

Big Resolve update released today (12.5). Lots of new and improved features that will be familiar to anyone who has been following this thread.

Download the free version here (and also download the free version of Fusion as they can now both be linked):

https://www.blackmagicdesign.com/uk/support

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

Quick PFE LUT tip.

I tried improving on the Print LUTs that were previously posted by reducing the photometric highlight compression and the slight shadow lifting that was still prevalent, but it proved persistently problematic. Instead I settled for a more practical compromise, which basically involves using an alpha channel on the node that is receiving the LUT. As the signal going into the node is Cineon log, it is better to pull the key in a previous node and pipe it downstream.

Original sRGB image:



With LUT applied but key disabled:



With LUT and key applied:




Now, Resolve's Luminance Keyer isn't the greatest (something I'm looking into improving on with an OFX plugin), but the system still works and produces better results than the LUT on its own. I reckon I've gone as far as you can go with regard to cleaning up the 'muckiness' of Print LUTs. Always a battle, but sometimes it's worth it.
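For anyone curious what the key itself amounts to, here's a minimal DCTL sketch of a soft highlight key pulled from the Cineon log signal and output as a viewable matte. This is not Resolve's Luminance Keyer or the planned OFX plugin, and the low/high thresholds are purely illustrative:

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float low  = 0.7f;   // Cineon log level where the key starts to fade in (illustrative)
    const float high = 0.9f;   // level at which the key is fully on (illustrative)

    const float luma = (p_R + p_G + p_B) / 3.0f;

    // 0 below 'low', 1 above 'high', linear ramp in between
    float alpha = (luma - low) / (high - low);
    alpha = alpha < 0.0f ? 0.0f : (alpha > 1.0f ? 1.0f : alpha);

    // output the matte itself so it can be inspected or repurposed
    return make_float3(alpha, alpha, alpha);
}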

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

All the previously posted formulas can be applied via DCTL, and without a great deal of effort. Simply replace r, g, and b with p_R, p_G, and p_B, and add f to the end of every numerical value.


For example, the original AWG to Rec709 matrix conversion


(r * 1.617523) + (g * -0.537287) + (b * -0.080237)
(r * -0.070573) + (g * 1.334613) + (b * -0.26404)
(r * -0.021102) + (g * -0.226954) + (b * 1.248056)


can be applied as follows:


__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float r = (p_R * 1.617523f) + (p_G * -0.537287f) + (p_B * -0.080237f);
    const float g = (p_R * -0.070573f) + (p_G * 1.334613f) + (p_B * -0.26404f);
    const float b = (p_R * -0.021102f) + (p_G * -0.226954f) + (p_B * 1.248056f);

    return make_float3(r, g, b);
}



Likewise the LogC to Linear transform


r > 0.1496582 ? (pow(10.0, (r - 0.385537) / 0.2471896) - 0.052272) / 5.555556 : (r - 0.092809) / 5.367655
g > 0.1496582 ? (pow(10.0, (g - 0.385537) / 0.2471896) - 0.052272) / 5.555556 : (g - 0.092809) / 5.367655
b > 0.1496582 ? (pow(10.0, (b - 0.385537) / 0.2471896) - 0.052272) / 5.555556 : (b - 0.092809) / 5.367655


becomes:


__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float r = p_R > 0.1496582f ? (_powf(10.0f, (p_R - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (p_R - 0.092809f) / 5.367655f;
    const float g = p_G > 0.1496582f ? (_powf(10.0f, (p_G - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (p_G - 0.092809f) / 5.367655f;
    const float b = p_B > 0.1496582f ? (_powf(10.0f, (p_B - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (p_B - 0.092809f) / 5.367655f;

    return make_float3(r, g, b);
}



Of course, these particular transforms via DCTL are no longer strictly necessary because of the Color Space Transform OFX plugin, but they at least show that options beyond what is already available in Resolve are easily accessible. The real challenge now is to build custom OFX plugins that in effect remove all limitations on what you can do in Resolve. Learning C++ is proving to be a slow process, but seeing as Blackmagic Design have been good enough to provide the access and resources to really elevate Resolve, it would be a shame not to at least have a go at making it work.

Note: the DCTL option is only available in the Studio version of Resolve, but the formulas can still be applied via previous methods such as LUTs or the ChannelMath plugin.

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

Here are some more potentially useful DCTLs

Add Arri Film Matrix

// Arri Film Matrix addition (to be applied to LogC AWG)


__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float r = (p_R * 1.271103f) + (p_G * -0.284279f) + (p_B * 0.013176f);
    const float g = (p_R * -0.127165f) + (p_G * 1.436429f) + (p_B * -0.309264f);
    const float b = (p_R * -0.129927f) + (p_G * -0.510286f) + (p_B * 1.640214f);

    return make_float3(r, g, b);
}



Preparing Arri Linear for conversion to Cineon

// Arri Linear to Cineon Linear Prep

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float r = p_R > 1.0f ? _powf(p_R, 0.6496465358f) : (p_R < 0.0f ? _powf(_fabs(p_R), 1.2756188918f) * -1.0f : p_R);

    const float g = p_G > 1.0f ? _powf(p_G, 0.6496465358f) : (p_G < 0.0f ? _powf(_fabs(p_G), 1.2756188918f) * -1.0f : p_G);

    const float b = p_B > 1.0f ? _powf(p_B, 0.6496465358f) : (p_B < 0.0f ? _powf(_fabs(p_B), 1.2756188918f) * -1.0f : p_B);

    return make_float3(r, g, b);

}



Full LogC to Cineon transform with roll-off

// LogC to Cineon (with highlight roll-off to preserve dynamic range)

__DEVICE__ float logc_to_cineon(float v)
{
    // LogC to linear
    const float lin = v > 0.1496582f ? (_powf(10.0f, (v - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (v - 0.092809f) / 5.367655f;

    // highlight roll-off above 1.0 and mirrored handling below 0.0 (Cineon linear prep)
    const float prep = lin > 1.0f ? _powf(lin, 0.6496465358f) : (lin < 0.0f ? _powf(_fabs(lin), 1.2756188918f) * -1.0f : lin);

    // linear to Cineon log
    return ((_log10f(prep * (1.0f - 0.0108f) + 0.0108f) * 300.0f) + 685.0f) / 1023.0f;
}

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    return make_float3(logc_to_cineon(p_R), logc_to_cineon(p_G), logc_to_cineon(p_B));
}



Perhaps the best way to utilise the Arri Film Matrix with Canon footage is to adjust the saturation response so that it more resembles that of an actual Alexa. The Luma Vs Saturation control can do this, as can applying an LC709A 3D LUT (gamut only). One possible route might be ML Raw -> LogC AWG -> LogC LC709A -> LogC AWG (LogC Rec709 to LogC AWG transform) -> Arri Film Matrix -> LogC to Cineon -> Print LUT (a DCTL sketch covering the matrix and Cineon steps follows below).

The LC709A LUT can be generated here:

https://cameramanben.github.io/LUTCalc/
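For reference, here's a minimal DCTL sketch of the Arri Film Matrix and LogC to Cineon steps from that route rolled into a single node, using the same coefficients posted above (the LC709A gamut step would still be handled by the LUT beforehand):

__DEVICE__ float logc_to_cineon(float v)
{
    // LogC to linear, roll-off prep, then linear to Cineon (same constants as the transform above)
    const float lin  = v > 0.1496582f ? (_powf(10.0f, (v - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (v - 0.092809f) / 5.367655f;
    const float prep = lin > 1.0f ? _powf(lin, 0.6496465358f) : (lin < 0.0f ? _powf(_fabs(lin), 1.2756188918f) * -1.0f : lin);
    return ((_log10f(prep * (1.0f - 0.0108f) + 0.0108f) * 300.0f) + 685.0f) / 1023.0f;
}

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    // Arri Film Matrix (applied to LogC AWG)
    const float mr = (p_R * 1.271103f) + (p_G * -0.284279f) + (p_B * 0.013176f);
    const float mg = (p_R * -0.127165f) + (p_G * 1.436429f) + (p_B * -0.309264f);
    const float mb = (p_R * -0.129927f) + (p_G * -0.510286f) + (p_B * 1.640214f);

    return make_float3(logc_to_cineon(mr), logc_to_cineon(mg), logc_to_cineon(mb));
}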

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

Danne

Looks like you are getting into programming @baldavenger. When will you put this in dcraw  ;D :P

DeafEyeJedi

Seriously @baldavenger, you rock. Hopefully it'll eventually be implemented into dcraw, which would be a dream come true for all of us one day.

Thanks and keep them coming!
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

baldavenger

Here are some more film emulation type LUTs I put together recently. I was aiming for a more film-scan/Alexa type response, and the LUTs are smaller (17^3) than usual. They expect a Cineon image with Rec709 primaries, and work well in Resolve as well as Photoshop (use the Cinelog V3 profile in Camera Raw if you have it).

There are 6 in total: 2 with PFE output, 2 with film-scan + PFE output, and 2 with film-scan log output (1 for PFE, 1 for Rec709/sRGB).


Original Chart




Cineon Rec709 to Kodak




Cineon Rec709 to Fuji




Cineon Rec709 to Film-Scan + Kodak




Cineon Rec709 to Film-Scan + Fuji




Cineon Rec709 to Film-Scan (for PFE)




Cineon Rec709 to Film-Scan (for Rec709/sRGB)





https://drive.google.com/open?id=0B0WuqGJ11_EgU1plSkktaGpPNmc


Nice and uncomplicated (for a change). Have fun and post some feedback if something springs to mind.

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

I've been working on some OFX plugins for Resolve 12.5, and they can be found (along with some LUTs and DCTLs) on my GitHub page:

https://github.com/baldavenger?tab=repositories

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

DeafEyeJedi

Thanks for sharing the OFX plugins @baldavenger! I'll definitely take a dive with them and report back when I can.
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

baldavenger

I did some spring cleaning on the OFX plugins. I removed as much unnecessary code as I deemed practical, plus I employed a better naming structure (as well as a wee icon for the top left corner). Hopefully it'll be easier now for those wanting to compile for Win or Linux.

https://github.com/baldavenger?tab=repositories


I took a break from coding endeavours for a month or so, but I hope to add some new ones soon (plus do some more fine-tuning on the ones already there).



EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

kfprod

Hi,

There's an amazing amount of knowledge here, but it's very hard to get an overview of the work you've done.

I work a lot with mixed footage from ML, Red, and Alexa, and I'm looking for a simple way to get my Magic Lantern footage into the Alexa LogC colour space (as it is now the industry standard).

The best way I've found of doing this so far is with the Cinelog DCP transformation LUTs from http://www.cinelogdcp.com/ but the transformations that happen there do seem a bit odd, and I feel there must be an easier (better?) way...

Also, your OpenFX plugins look amazing, but I don't really know where to start. Is there a site where you show how to use them and what they're for?

Best regards,
Karl

66dellwood

This is amazing work! Great job and thanks very much.

1. I need a LUT recommendation for Resolve, for CDNG footage shot on a 5D2. I set my raw settings to BMD Film and use Hunter's LUT for quick basic color correction (not for style). Hunter's LUT does a nice job but is getting old and displays some artifacts. Other LUTs are not working well; most make white balance correction very difficult.

2. Speaking of white balance, how does the Balance/white balance plugin work? WB is often my most time-consuming fix for basic color correction. I wish it were as quick and simple as the WB fix for raw in Lightroom.

kfprod

66dellwood: The EOSHD LUT is much better than the Hunter LUT: http://www.eoshd.com/2013/11/introducing-eoshd-film-lut-5d-mark-iii-raw-resolve-10/

I'm also interested in the white balance plugin, it's so weird that Resolve still doesn't have a color picker!?

Danne

Quote from: kfprod
it's so weird that Resolve still doesn't have a color picker!?
+1

66dellwood

I have been using the EOSHD LUT for a while. On much of my footage I have the hardest time neutralizing the white balance with the EOSHD LUT. No such problem with the Hunter LUT, but it has other issues. The EOSHD LUT is probably optimized for the 5D3; perhaps 5D2 footage is creating some color balance issues with it.

baldavenger

The OFX plugins have been refined and improved, and Win-compatible versions are also available.

Mac versions

https://github.com/baldavenger/Plugin-Downloads

Win versions

https://github.com/baldavenger/Win-OFX

Both versions work fine with CUDA, but there's no OpenCL support on Win yet (and only limited functionality on Mac).


One possible use of the plugins is as an alternative to the dark frame process for removing magenta cast in high-ISO image shadows. Play around with the ChannelBoxV2 plugin to see what results you might get (a rough sketch of the general idea follows below).
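As an illustration of the general idea only (not the ChannelBoxV2 algorithm itself), here's a minimal DCTL sketch that desaturates just the deepest shadows so a magenta tint collapses towards neutral; the threshold and strength values are purely illustrative:

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float threshold = 0.1f;   // shadows below this luma get treated (illustrative)
    const float strength  = 0.5f;   // 0 = no change, 1 = fully neutral shadows (illustrative)

    const float luma = 0.2126f * p_R + 0.7152f * p_G + 0.0722f * p_B;

    // mask: 1 at pure black, fading to 0 at the threshold
    const float mask = luma >= threshold ? 0.0f : (threshold - luma) / threshold;

    // blend each channel towards luma (i.e. towards neutral) inside the mask
    const float r = p_R + (luma - p_R) * mask * strength;
    const float g = p_G + (luma - p_G) * mask * strength;
    const float b = p_B + (luma - p_B) * mask * strength;

    return make_float3(r, g, b);
}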

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

If you're working in ACES in Resolve, then this might be worth a look.






There's more info, and download links, here:

https://acescentral.com/t/davinci-resolve-dctl-and-ofx-plugins/1385/89?u=paul_dore
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

yokashin

Quote from: kfprod on November 23, 2016, 12:32:08 PM
I'm also interested in the white balance plugin, it's so weird that Resolve still doesn't have a color picker!?

70D.112 [main cam] | M.202 | S110 [CHDK]