DaVinci Resolve and ML Raw

Started by baldavenger, September 01, 2015, 11:41:51 PM


baldavenger

Back in the 7th post I gave an example of how to reverse engineer the LogC to Rec709 tone curve in Resolve.  However, reverse engineering the tone curve of a LUT is not so straightforward, especially if it's a log to lin/Rec709 transfer.  There is a way around it, though.

Come out of DaVinci YRGB Color Managed, and select the grey ramp.  There should be a straight diagonal line in the waveform scope, as is to be expected.  Use two nodes in the node graph, and in the second node place the LogC to Rec709 1D LUT.  The waveform will now resemble this:



Trying to reverse engineer the LUT in a node after it would not work, as LUTs clip information above 1.0 (in most cases) and that information cannot be retrieved.  So instead we do the reversal before.  However, the resulting curve in the node represents a Rec709 to LogC transfer, which is the reverse of what we're after.  The trick is to bring all the values in the custom curves panel from 100 to 0.  Going from 100 to 50 cancels the effect of the curve, and from 50 to 0 reverses it.  So we get our LogC to Rec709 after all.
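If you want to sanity-check the idea outside Resolve, here's a minimal numpy sketch: inverting a monotonic 1D tone curve is just swapping its input and output axes, which is effectively what dragging the custom-curve values from 100 to 0 does.  The curve data below is a toy placeholder, not a measured LogC transfer.

```python
import numpy as np

def invert_1d_curve(y, n_out=1024):
    """Numerically invert a monotonic 1D tone curve sampled on [0, 1]."""
    x = np.linspace(0.0, 1.0, len(y))
    xi = np.linspace(0.0, 1.0, n_out)
    # Swapping the roles of input and output gives the inverse curve,
    # re-sampled onto a regular grid.
    return np.interp(xi, y, x)

# Toy stand-in for the Rec709-to-LogC shape built in the node (NOT real LogC data).
x = np.linspace(0.0, 1.0, 1024)
rec709_to_logc = np.interp(x, [0.0, 0.18, 1.0], [0.09, 0.40, 1.0])
logc_to_rec709 = invert_1d_curve(rec709_to_logc)   # the curve we were actually after
```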





I included the Lum Mix figure in the bottom left, not because it's necessary for this process, but because it is involved when attempting to reverse engineer colour information.


EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

Here's another bonus node tree I put together not that long ago.  Nothing particularly special, but it might be of use in some situations.





Ignore the splitter combiner in the middle for the time being, and pay attention to nodes 02 and 04, which are the two components of a layer node.  Node 02 has saturation down to 0, so it's just a b/w image, and in node 04, in the Primary Bars section, the Y bar of Gain is turned all the way down (leaving just colour information).  In the actual Layer node, Add is selected as the composite mode.  This process separates the chroma and luma into different streams before uniting them again.  There are many handy uses for this.  One such use is adding a look LUT to a node before node 04, to see the colour without disturbing the gamma.  You can apply sharpening to node 02 and denoise/blur to node 04, but you can also do that in a single node anyway by selecting LAB space and deactivating the channels as required.
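As a rough numpy illustration of what that layer-node split is doing (assuming a simple Rec.709 luma weighting; Resolve's internal maths may differ):

```python
import numpy as np

def split_luma_chroma(rgb):
    """Split an RGB image (float, HxWx3) into luma-only and chroma-only streams."""
    w = np.array([0.2126, 0.7152, 0.0722])           # Rec.709 luma weights (assumption)
    luma = (rgb * w).sum(axis=-1, keepdims=True)     # node 02: saturation to 0
    luma_only = np.repeat(luma, 3, axis=-1)
    chroma_only = rgb - luma_only                    # node 04: luma gain to 0
    return luma_only, chroma_only

rgb = np.random.rand(4, 4, 3).astype(np.float32)
luma_only, chroma_only = split_luma_chroma(rgb)
recombined = luma_only + chroma_only                 # layer node set to Add
assert np.allclose(recombined, rgb, atol=1e-6)       # the split and recombine is lossless
```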

The splitter combiner was added because someone requested the ability to adjust the luma of individual channels without affecting the overall colour, and this set-up did the trick.

Try it out if you feel so inclined, or not as the case may be.  It's a free world, or at least it is in this thread where the software, LUTs, and suggestions cost nothing.  Keep the spirit of Magic Lantern alive and try to be positive and supportive of one another.  We're all trying to find ways to make more beautiful images, so if we work together and help each other out then surely we will all benefit in the long run and hopefully become better artists.  I like to think so anyway.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

andy kh

5D Mark III - 70D

baldavenger

We'll have a quick look at using curves to reverse engineer colour.  This is arguably the most complex element of colour grading, and serious pros take detailed measures (combined with years of experience) to achieve noteworthy results, often utilising a number of specialist charts, lit at a range of different exposures, on calibrated monitors and external scopes.  But we all have to start somewhere, and the following examples might give you an idea of where to begin.


This first image features an untouched set of curves and a waveform scope representing the luma of the three channels.  As you can see just from the scopes the image is very saturated, with a dominant green element.




Notice how it says 100.00 in the Lum Mix box in the lower left hand corner.  That means the luma is fully incorporated into the curves, so if an adjustment is made to one of the colour curves, the other two react as a kind of counterbalance.  Turning the value down to 0.00 removes that interaction, so now you can alter the individual curves independently.





In the image above, the curves have been adjusted to try to remove colour and saturation.  Not the greatest effort by any means, but you can see that by bringing the colour elements together the colour is ultimately removed.  To reverse the procedure, drag the Edit values all the way from 100 to 0, and the effect is reversed.  The main image will now be very saturated (as you have in effect doubled the colour), but save the node on its own as a still and you can apply and adjust the colour on another image (preferably one that has been neutralised first).

There are other ways to remove colour from an image and then add it to another, such as via the RGB Mixer, though that would take a very experienced individual to make it work.  There's also a mathematical approach, whereby an image with colour is divided by a desaturated version of itself, and the result is multiplied by a desaturated version of the image you wish to add the colour to.  Anyway, there may be something in amongst all that of possible interest or even use to you.  Proceed with caution  :)
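Here's a hedged numpy sketch of that divide/multiply idea; the luma weights and the epsilon guard are assumptions for illustration only:

```python
import numpy as np

def desaturate(rgb):
    w = np.array([0.2126, 0.7152, 0.0722])                # assumed Rec.709 weights
    y = (rgb * w).sum(axis=-1, keepdims=True)
    return np.repeat(y, 3, axis=-1)

def transfer_colour(source_rgb, target_rgb, eps=1e-6):
    """Move the colour of source_rgb onto target_rgb."""
    ratio = source_rgb / (desaturate(source_rgb) + eps)   # colour divided by its desaturated self
    return desaturate(target_rgb) * ratio                  # multiplied onto the neutral target

source = np.random.rand(4, 4, 3).astype(np.float32)
target = np.random.rand(4, 4, 3).astype(np.float32)
graded = transfer_colour(source, target)
```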


EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

Ok, this one should prove useful for practically anyone who uses DaVinci, or any other colour grading application out there.  I hope those of you who have availed of the ML Fudak LUTs are finding them useful.  This is how I made them.

"Give a man a fish, and you feed him for a day; show him how to catch fish, and you feed him for a lifetime."



Open up LUTCalc.  There's actually an app you can install, instead of using the website.  Especially handy when there isn't internet access for whatever reason.

https://cameramanben.github.io/LUTCalc/

Click the preview button to reveal the picture above, and above it click the buttons to activate the Waveform and Vectorscope.  Input the values as shown on the left hand side.  It's pretty much the same set-up as when making the optimum ML LogC Alexa to Rec709 Rec709 LUT.  You can adjust the different parameters and see an immediate effect on the image and scopes.

There's an option to import your own image, so we'll try that.  In Resolve, select an image from a clip in the timeline, grab a still, then right click the still in the gallery and choose export image.  I would choose TIFF for maximum accuracy, but JPEG or PNG will do also.  You can export the image in Rec709 or LogC; it doesn't matter.  I chose an image that was in LogC, and when importing the image you're asked to pick the Image Gamma, Colorspace, and Legal range.  For a LogC Alexa image I chose the above options.





Pretty straightforward.  The image does not appear in LogC, as it has been converted to Rec709.  The process is the same here as it is for non-RAW footage in Resolve Colour Management: it converts the image from the input colorspace to the recorded colorspace and on to the output colorspace.  If I had put in the wrong information when importing the preview image, then it would not correspond correctly to the chosen output setting, just like in RCM.

We're now going to import a look LUT so that we can combine (and optionally modify) its colour information with our preferred Gamma.  I find the best LUTs to use for this are standard Rec709 ones, as Log LUTs seem to lack a bit of punch no matter how I adjust the settings.  I'm still learning though, so maybe there's something I'm not quite doing right.  Anyway, click on the white bar and the loading options appear.





Input the appropriate settings to correspond with the LUT.  As this one was a Rec709 Gamma and Gamut LUT I picked those settings, and they do indeed work best.  Click the analyse button, and a second or two later it will be processed and you will have the option to select the LUT's Gamma and Gamut separately.  Select the new option in the Output Gamut drop-down, and also for the Highlight Gamut.  The preview image will update and you should see the new colour combined with the original Rec709 Gamma.  You can then make whatever modifications you like with the available tools (I tend to use the main saturation and targeted saturation controls, as above), with almost instant updates in the preview image and scopes.  Very handy.  When finished, simply generate the new LUT as usual.  Those of you who find yourselves jumping to and from Resolve to test new LUTs all the time will find this approach especially practical.  I certainly do.

So yeah, that's how I made them.  Now you can too.  Big Up once again to Mr Ben Turley for producing such an amazingly useful piece of software.  Well done sir.


EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

Here's a clip featuring some basic approaches that have been explored in previous posts.  Both shots featured had only a white balance adjustment performed; no other camera raw tweaks or colour grading were applied.  The first section features 3 states:

1. The image in regular Davinci YRGB and Rec709
2. The image in Davinci YRGB Color Management and Arri LogC colorspace
3. The image after having a powergrade curve adjustment and RGB Mixer Gamut input to convert from LogC Alexa to Rec709 Gamma and Gamut.  No LUT.

The second section features LogC footage with a powergrade curve adjustment, and a Gamut only Print Emulation Look LUT, in 5 varieties.

Though far from perfect, they represent a much more solid starting point for further grading, thanks to customisation and the more active involvement of powergrades.






EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

"Pursuing a film aesthetic through a digital medium."

The final missing piece in the roll your own jigsaw is the Look LUT to powergrade conversion.  As stated before on this thread, this is by no means an easy task, but after running a few tests I came upon something not a million miles away.  The key here is to consider why the LUT does what it does, and what it's trying to achieve.  In the case of the Print Emulation LUTs you might think the answer is obvious, but then why is there still such mystery surrounding them and how they work?  A better understanding might come from paying more attention to the actual nature of celluloid film, and its relation to light levels and saturation.

"...the most saturated colors that film can reproduce are dark cyans, magentas and yellows, each produced by a maximum density of its respective image dye, but resulting in low luminance levels. In contrast, the most saturated colors on a digital display are the bright red, green and blue primaries, each produced by the maximum output of its additive color channel, and therefore resulting in the maximum luminance for that channel."

— Kennel, Glenn. Color and Mastering for Digital Cinema. Focal Press, 2007. Page 22, "Unstable Primaries."

Glenn is now the CEO at ARRI, so you can see how this logic is influencing their cameras.  A very clever chap called Art Adams looked at the luminance and saturation relationship with regard to a comparison between ARRI and Sony, and his findings are relevant to Canon too.



http://www.dvinfo.net/article/post/making-the-sony-f55-look-filmic-with-resolve-9.html



He devised a Luma to Saturation curve that helps mimic the Alexa response.  Worth having a look at, even just for curiosity's sake.




So with these things in mind, let's have a look at two images.  One regular Rec709, and the other with a 2383 Print LUT applied.





Notice not just the hue changes, but also the luma values.  Saturation is invariably maintained, but at the expense of brightness, as is the case with actual print film.  It helps to understand the reasoning behind this, so as to, in effect, better emulate an emulation.
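A very loose numpy sketch of that "saturation held, brightness traded" behaviour, with a made-up key shape and strength, purely to illustrate the idea:

```python
import numpy as np

def darken_saturated(rgb, strength=0.25):
    """Pull brightness down where saturation is high (crude print-film-like behaviour)."""
    mx = rgb.max(axis=-1, keepdims=True)
    mn = rgb.min(axis=-1, keepdims=True)
    sat = (mx - mn) / (mx + 1e-6)        # rough HSV-style saturation key
    return rgb * (1.0 - strength * sat)  # scale brightness down on saturated pixels

rgb = np.random.rand(4, 4, 3).astype(np.float32)
darker = darken_saturated(rgb)
```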

The main crux of the reverse engineering can be done in two nodes.  The first node uses the custom curves (with Luma Mix down to 0) to neutralise the main body of the LUT as follows:






It's not exactly pretty, but it's a start, and in the second node the Hue vs Hue and Hue vs Sat curves can be used to tweak the overall effect.  In an optional third node a high saturation only key can be pulled to use the Hue vs Luma curve to darken very saturated parts of the image.  I've found this technique to be quite effective.  I recommend testing on an image such as this one so as better to monitor subtle changes (also via the scopes):



Once satisfied with the results, simply select the nodes, convert them to a compound node, then save it as a powergrade.  You should now have the option to use powergrades and/or LUTs however you please.

Although I refer to the ARRI Alexa a lot, don't be mistaken into believing that I am in any way proposing that a Magic Lantern camera (even the 5D Mark III) can somehow compete with the industry standard digital film camera of choice.  All I'm suggesting is that by paying attention to how they operate and the logic behind their colour science we can improve our end results, and if some of us move on to better things then at least the transition won't be so daunting.



This brings to an end the main body of what I set out to do, so from here on in feel free to post any questions, or observations, criticisms, etc.  If you have any tips of your own and feel inclined to share then by all means please do.  Let's keep the thread running, and let's try to keep a positive and supportive tone throughout.

Thank you.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

So I was watching episode 3 of Fear The Walking Dead, and I couldn't help noticing how Arri Alexa it looked.  I mean, proper greeny yellowy tinged Film Matrix look.  Very distinctive, and its absence was noticeable when they used a different camera for the crane shots and other shots where they deemed a smaller and more practical camera was needed.  I decided to review the ML Fudak LUTs, and put together a Version 2 pack.  These ones are just for footage already converted to Rec709 Gamma and Gamut, with the option of placing a node with the Film Matrix values before it.  They will appear less punchy (saturated) than the previous version, but simply adding saturation on the same node as the LUT will bring it back to preferable levels, and no funny errors should occur in the highlights.  Ideally you would reverse engineer them into powergrades as per previous instructions, but as LUTs go they are pretty safe, provided you put them in the last node.  If the Film Matrix workflow results in too much saturation then simply select the node and, in the Node Key panel, reduce the Key Output until you are satisfied with the results.

Here's the link:

https://www.dropbox.com/s/3isuydphexyc77u/ML%20Fudak%20Version%202.zip?dl=0


Here are some examples of a regular Rec709 powergrade, a LUT with saturation boost, and the Film Matrix plus LUT.  All images had white balance set in camera raw, but no other colour grading was performed.


























So yeah, have a go with those and see how you get on.  I believe that they are a far more viable option with regards to a controlled workflow than the previous bunch.  Feel free to post your results and observations.  It would be interesting to hear how people are getting on with them, and the workflow in general.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

DanHaag

Thanks so far, your amazing work helps me a lot.  I don't really use your whole workflow or the LUTs right now, as I'm going for different looks with my current project.  I already use a whole lot of the information and individual techniques shown in this thread, though, so it's incredibly useful regardless of whether you work with LUTs or not.  8)

BTW: I love the idea of trying to come up with individual LUTs and Power Grades to mimic certain films' or series' looks with ML raw footage! Very interesting, I hope there's more to come and others join the party!

baldavenger

Most professional colourists try to avoid using LUTs due to the obvious restrictions they place on the grading process.  There is, however, one particular occasion when a LUT has to be used, and that is during a conventional DIT process where the footage is graded under a PFE before being sent to a lab to actually be printed to film.  After the original film negative is processed it is scanned to DPX files in Cineon log, and that is what is graded by the colourist, with the PFE at the end.  Once complete, the PFE is removed and the footage is exported to the lab, and the final print should match the image in Resolve.

So basically those Kodak and Fuji print LUTs that are already in Resolve (along with various other proper print LUTs like the Juan Malera collection) are designed to be used on scanned film frames that have been converted to Cineon log DPX files.  Cineon log is only a transfer curve, so the process doesn't convert the colour information into another Gamut, i.e. if you were to apply a Cineon to Rec709 LUT to a scanned file then you'd get the original image back (more or less).

Let's have a look at some images:







As you can see, the first image is an actual scanned film frame in a Cineon log DPX, the second image has had a PFE applied, and the third is just a regular Alexa Rec709 frame.  We'll try to convert the Rec709 frame so that it matches the film scan.

Although there is the choice to debayer straight into Cineon colorspace in Resolve, in order for the correct process to be applied the footage has to first be in Alexa Wide Gamut, so we stay with the usual Arri LogC colorspace option.  Converting LogC to Cineon is easy.  Use the grey ramp from before, select Cineon as Input Colorspace, Arri LogC as Timeline Colorspace, and Bypass as Output Colorspace.  Then use a simple curve adjustment to straighten the line in the waveform, grab a still, and that's your LogC to Cineon transform.
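If you'd like to sanity-check the empirically derived curve against published maths, here's a rough offline sketch that decodes LogC to scene linear and re-encodes it as Cineon.  The constants are the commonly published Alexa LogC (EI 800) and Kodak Cineon (95/685) figures, so treat them as reference values rather than Resolve's exact internals.

```python
import numpy as np

def logc_to_linear(t):
    """Alexa LogC (EI 800) to scene linear, commonly published V3 constants."""
    cut, a, b, c, d, e, f = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809
    return np.where(t > e * cut + f,
                    (10.0 ** ((t - d) / c) - b) / a,
                    (t - f) / e)

def linear_to_cineon(x):
    """Scene linear to Cineon log (95 black / 685 white, 0.6 gamma convention)."""
    x = np.maximum(x, 0.0)                              # clamp negatives before the log
    off = 10.0 ** ((95 - 685) * 0.002 / 0.6)            # black offset, roughly 0.0108
    return (685.0 + 300.0 * np.log10(x * (1.0 - off) + off)) / 1023.0

ramp = np.linspace(0.0, 1.0, 11)                        # the grey ramp from the post
print(linear_to_cineon(logc_to_linear(ramp)))           # the shape the node curve has to reproduce
```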





Arri's colour science comes from their experience and knowledge of scanning film.  LogC is an adaptation of Cineon, but designed to contain more latitude, and their Film Matrix is specifically designed to convert LogC Alexa Wide Gamut so that the footage matches the colour of film scans.  In theory, once the Film Matrix has been applied to LogC footage, it can be graded alongside actual film scans with a PFE output LUT.  Evidence suggests that this system does indeed work.









The first image above shows the node graph.  With the original image in Arri LogC Colorspace, the first node converts LogC to Cineon, the second applies the Film Matrix, the third is an optional darkening of the more saturated colours (pull a saturation-only qualifier and use Offset to darken), and the last is a PFE, in this case the Kodak Rec709 2383 D65 LUT in the film looks folder in Resolve.

The second image has had the LogC to Cineon conversion performed and the Film Matrix applied.  The third image has had the saturation darkening applied (digital images tend to have bright saturation, so this is an adjustment in line with more conventional film response).  The final image has had the PFE applied in the last node, and the results aren't too shabby.  I've tried this method with various images, and the various official PFEs (the Juan Malera ones are pretty good too) all work in a pleasantly filmic manner.

I've only just come up with these findings today, so if they seem a tad sketchy then please forgive my clumsiness.  It's pretty exciting stuff nonetheless.  There's a lot of bluff out there with regards to film emulation, and expensively marketed packages that promise the moon on a stick, but a sneak peek into how the actual pros go about their business will often reveal far more practical information.  So yeah, have a go yourselves, test it out, give us some feedback, get involved  :)
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

I've looked back over what I posted in this thread, and there are some useful things, but overall it's still a bit messy.  One thing that was pointed out on more than one occasion was when I said ML Raw footage was always darker than how it was shot.  That's wrong, and I apologise for that and also for not accounting for it sooner.  The idea stemmed from my previous (pre Resolve 12) workflow, which involved conversion LUTs, and I always had to boost exposure.  This apparently was common practice, and is referenced in these posts:

http://www.magiclantern.fm/forum/index.php?topic=10151.msg151811#msg151811

http://www.magiclantern.fm/forum/index.php?topic=10151.msg138372;topicseen#msg138372

http://www.magiclantern.fm/forum/index.php?topic=10151.msg122090;topicseen#msg122090

However, it was an issue with converting from BMD Film to Cineon Alexa Wide Gamut by a LUT, and not directly because of ML Raw.  Apologies once again.

There are other inaccuracies in the thread, such as the suggestions in the opening posts with regards to what LUTs to use, as the release of Beta 4 rendered them no longer optimum, but I did address that in later posts.  I am very much learning as I go along, so please bear with me.

I will, however, stick to my assertion that the RGB Mixer is a 3x3 Matrix, albeit one with limited input controls (something I'm hoping Blackmagic Design will eventually address).  Here's a description of the Matrix node in Nuke, which effectively describes what the RGB Mixer does:



The free version of Nuke can reveal a good number of useful conversion matrices.  A handy discovery.



These matrices are dependent on the footage being in Linear space, but that's how RCM works anyway.  1D shaper LUTs (or ideally a mathematical equation inputted into a node curve) could be used to get to and from Linear space.

I'm still trying to envisage a LUT free workflow, and I believe it is feasibly within grasp.  Perhaps with better integration with Fusion this will be possible.

Also, in case anyone was actually countenancing the thought that I might at any stage try selling a product based on LUTs (or whatever), then rest assured that this will not be the case.  Not now.  Not ever.  This thread is very much all about the free.

I'll be quiet with regards to big entries on this thread for the next fortnight or so, as I need to finish off my showreel, but please feel free to post any questions or contributions in the way of tips and advice.  Thank you kindly.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

I posted this on another site just now, so it's basically a copy and paste job, but useful stuff nonetheless.



Before I present it I'd like to have a quick look at what a Matrix actually does.



So it is just like the RGB Mixer (with preserve luminance unticked). If the first line of the 3x3 Matrix was (x1 y1 z1), the second line (x2 y2 z2), and the bottom line (x3 y3 z3), then to get the new RGB values (written here as R', G', B') from a Matrix transform you would perform the following:

R' = (R*x1) + (G*y1) + (B*z1)

G' = (R*x2) + (G*y2) + (B*z2)

B' = (R*x3) + (G*y3) + (B*z3)
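For anyone who wants to check the arithmetic outside Resolve, here's a tiny numpy sketch of that transform (the matrix values are arbitrary placeholders, not a real gamut conversion):

```python
import numpy as np

M = np.array([[0.80, 0.10, 0.10],   # x1 y1 z1
              [0.05, 0.90, 0.05],   # x2 y2 z2
              [0.00, 0.20, 0.80]])  # x3 y3 z3

rgb = np.array([0.4, 0.5, 0.2])
new_rgb = M @ rgb                   # R' = 0.8*R + 0.1*G + 0.1*B, and so on for G' and B'
print(new_rgb)
```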

So because it is presently not possible to input specific values into the RGB Mixer, in order to replicate a transform we will need to separate the channels, multiply them by the appropriate Matrix values, and add them back together in the appropriate stream.

Starting with the final (compound node) stage, and working backwards from there we get these:













By limiting node functions to simple addition, subtraction, and multiplication (and Channel Swap in the RGB Mixer), and removing the need to engage with Resolve's GUI controls, a more mathematically precise approach can be realised.

It can be a hog on resources, but the compound node can be cached. The Matrix values are entered in the Key Output section of the nodes labelled above. This limits the multiplication factor to between 0 and 1.0, but if a value greater than 1 is required then simply multiply the node by a factor and divide the Matrix value by that same factor, and the net result will comply with requirements. You can type a very specific value into the Key Output, but it will round it to 3 decimal places. Whether that's an actual rounding, or the input is preserved and only the display is rounded for appearance's sake, I do not know. Perhaps someone at Blackmagic Design can clarify.
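A tiny worked example of that scaling trick, with made-up numbers:

```python
desired_coefficient = 1.48                            # hypothetical matrix value above 1.0
node_multiplier = 2.0                                 # extra gain applied to the node
key_output = desired_coefficient / node_multiplier    # 0.74, now within the 0..1 Key Output range
assert abs(node_multiplier * key_output - desired_coefficient) < 1e-9
```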

To put things into perspective, the ability to perform specific Matrix transformations within Resolve means it has just as many options as Baselight with regards to Colorspaces. Resolve Colour Management is basically a fixed set of transforms, involving a conversion to Scene Linear, a 3x3 Matrix conversion for the Primaries, and a conversion from Scene Linear to whichever tone curve is stipulated. There was a recent case where someone wanted to incorporate RedLog DragonColor2 footage into RCM, but that wasn't an option. DragonColor2 to XYZ Matrix values are available, so using the system above the footage can be converted to whichever Colorspace you prefer (via two Matrix conversions, but these can be concatenated into one Matrix, as sketched below).

http://colour-science.org/api/0.3.5/html/colour.models.dataset.red.html
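Here's a minimal sketch of that concatenation, using placeholder matrices rather than real DragonColor2 values:

```python
import numpy as np

src_to_xyz = np.array([[0.70, 0.20, 0.10],     # placeholder camera-gamut-to-XYZ matrix
                       [0.25, 0.70, 0.05],
                       [0.02, 0.08, 0.90]])
xyz_to_dst = np.array([[ 1.20, -0.15, -0.05],  # placeholder XYZ-to-working-gamut matrix
                       [-0.10,  1.10,  0.00],
                       [ 0.00, -0.05,  1.05]])

combined = xyz_to_dst @ src_to_xyz             # two conversions concatenated into one matrix
rgb = np.array([0.2, 0.4, 0.6])
assert np.allclose(combined @ rgb, xyz_to_dst @ (src_to_xyz @ rgb))
```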

There is a great source of Matrices to be found in the free version of Nuke, covering a number of Primaries and Illuminants.







Here's a link to a folder that contains the following:

3X3 Matrix Template Compound.drx
Add Alexa FM.drx
Alexa Wide Gamut(Tone Mapped) to Rec709.drx
Remove Alexa FM.drx

LogC to Linear 1D Shaper LUT
Linear to LogC 1D Shaper LUT
LogC to Rec709 1D ShaperLUT
LogC Alexa Wide Gamut to Rec709 Rec709 3D LUT


https://www.sendspace.com/file/89ktr7


The LUTs were generated with Ben Turley's excellent LUTCalc:

http://cameramanben.github.io/LUTCalc/LUTCalc/index.html

You can download the app for Mac as well.


If someone had a word with Blackmagic Design and persuaded them to allow for more accessible RGB Mixer and Custom Curves Inputs, then all the node havoc above would no longer be necessary. Even just an option in Preferences to activate custom Input would do the trick. It would make a world of difference, and wouldn't require any extra fancy coding.

Please feel free to test and give feedback. Thank you.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

This is an update on the previous post, again mostly taken from stuff I posted on another site.

It would appear there was a problem with some of the LUTs I had posted, in particular the ones concerning Log to Lin and Lin to Log operations.  It was pointed out that a good deal of information was being clipped following the Lin to Log LUT.  This makes sense, as the LUT specified Min 0 and Max 1 input values.  Performing a LogC to Linear transformation means expanding the compressed transfer curve to its full extent, and this actually ranges from just below zero to over 55 (Cineon goes up to 13.5).  That's a lot of additional stops that would be clipped unless the LUT specified a higher Max Input Value.
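As a quick sanity check of that figure, here is the commonly published LogC (EI 800) decode evaluated at code value 1.0; treat the constants as reference values rather than Resolve's exact internals:

```python
def logc_to_linear(t):
    """Commonly published Alexa LogC (EI 800) decode."""
    cut, a, b, c, d, e, f = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809
    if t > e * cut + f:
        return (10.0 ** ((t - d) / c) - b) / a
    return (t - f) / e

# LogC code value 1.0 sits at roughly 55 in scene linear, so a Lin-to-LogC LUT
# whose input domain stops at 1.0 clips everything above that.
print(logc_to_linear(1.0))   # ~55
```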



Thankfully a solution was found in LUTCalc by entering the appropriate values when generating the Lin to LogC 1D LUT.  I have included the new LUTs in the download below.

The last attempt at emulating a 3x3 Matrix in Resolve was highly impractical (though still moderately accurate).  I devised a newer, more realistic model that, although it may seem a bit weird at first glance, actually works quite well.  I like to think so anyway :)

A lot fewer nodes this time.  This is the template:



I used the RGB Mixer to swap channels in 3 nodes, so that they contain only Red, Green, or Blue.



I then imported 3 EXR (16-bit half float) files as External Mattes and multiplied the specific nodes by the appropriate EXR.  The EXRs are constants generated in a VFX application, and allow for plus or minus values in 3 channels to 6 decimal places, which is the exact level of accuracy stipulated in the Alexa White Papers.  The three nodes are then added together and the result is a perfectly accurate Matrix transformation.  Each EXR has values in the RGB channels that match those in the columns of the Matrix, so the Red node is multiplied by an EXR comprising the values taken from the 1st column, the Green node by an EXR comprising the values from the 2nd column, and the Blue node by an EXR comprising the values from the 3rd column.
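For anyone who wants to verify the logic, here's a small numpy check (with a placeholder matrix) that the channel-isolate, column-multiply, and add recipe really does reproduce a full matrix transform:

```python
import numpy as np

M = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.00, 0.15, 0.85]])              # placeholder matrix

rgb = np.random.rand(4, 4, 3)

# RGB Mixer channel swaps: each node carries a single source channel in all three channels.
r_only = np.repeat(rgb[..., 0:1], 3, axis=-1)
g_only = np.repeat(rgb[..., 1:2], 3, axis=-1)
b_only = np.repeat(rgb[..., 2:3], 3, axis=-1)

# Each node multiplied by an EXR constant holding one *column* of the matrix, then summed.
out = r_only * M[:, 0] + g_only * M[:, 1] + b_only * M[:, 2]

assert np.allclose(out, rgb @ M.T)              # identical to a normal matrix transform
```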

In the download you'll find a PowerGrade for the above set-up, along with a few folders containing EXRs.  I recommend you put the main Matrix EXR folder on your main drive and select it as a favourite in the Media page.  That way you can easily access the folders within it and import the EXRs into your media pool as External Mattes.  When you right click on the Mattes in the PowerGrade you'll then have the option to allocate the appropriate one.


https://www.sendspace.com/file/d2lwva


Have a go and see what you think.  Feel free to post results and observations.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

Back in Reply no.15 on the first page I showed how to create a grain node.  I came across some scanned grain footage that used to be freely available from a VFX compositing application's website, but is now no longer listed.  I converted it into ProRes 422 HQ files, which can be imported as External Mattes.

Here's a list of what's in the download package:




https://mega.nz/#!kQJmHC7I!khxcVhnPxU4RFYdqwmTn9_ge-9e6AZzZpIWmeEK8oSI


I do like a bit of grain, so I do.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

baldavenger

EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

budafilms

Incredible work.
I don't have the skills to apply all this information.

I used to work with Resolve, starting by putting BMD Film on the DNG files. After that I correct the exposure, bringing the blacks and whites to 0 and 1000 in the scopes. And before the colour, the white balance.

But if I understood correctly, there are a lot of better workflows here.

Can someone explain an easy way to apply baldavenger's work? Or is it not ready for a guide?

Thanks!

hjfilmspeed

Oh my word, this is such a great thread, I can't wait to read all of this. Amazing work, y'all. I didn't know you could overlay grain in Resolve. Do you think cine grain will also work this way in Resolve?

I also agree: even though it is raw footage, there are proper ways to get a more accurate starting point and better color, as mentioned in the workflows in this thread. This is essential to a raw workflow.
I was doing this:
Default color options set in the Resolve menus. Then I choose BMD color and gamma in the clip raw tab. Then I made a LUT which combines BMD Film to Linear and Linear to Cineon (both LUTs come stock in Resolve). I would then take my new combo LUT and add it as an Input LUT in the color menu of Resolve.
This gives you a good flat Cineon log and it grades okay, but I will say I am having some issues with skin tones etc.

@baldavenger Is there any way I could help you develop this color workflow? Or at least help simplify it? Maybe make a tutorial?

baldavenger

Having a look back at the grain post, it would appear that the luminance mask method isn't as effective as first thought. It would be better to add a corrector node, link its input to the source, and its output to the key input of the node that features the image combined with grain.

However, I started looking into how to better integrate the grain into the image so that there is a more integral relationship, and it's not just layered on top. I read about the concept of adding grain in Log mode, so I looked into it and, through a series of improvised attempts, came upon something that works OK and doesn't involve loads of nodes.

Basically, you add scaled down grain to an image in log space, then convert the image to whichever display referred space it's destined for. The grain has its contrast restored, but more importantly it is distributed in a filmic manner i.e. more intense in the mids to highs, and rolled off in the shadows and upper highlights. All very organic. The contrast control in Resolve is not linear when increasing contrast (values over 1), but is linear when reducing contrast (values between 1 and 0), so that's ideal for scaling down the grain in the grain plate (pivot at 0.5). The same contrast control can be used to increase or reduce grain levels (non-destructively), while the filmic distribution is maintained throughout.

When transforming the image and grain combo from Log to Lin (Gamma 2.2, or whatever target gamma you prefer), make sure it doesn't involve a colour space transform too, as that will adversely affect the hue and saturation of the grain, i.e. only use a Log to Lin 1D LUT.
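Here's a loose numpy sketch of the idea, purely to illustrate the maths; the grain plate, the LogC stand-in, and the 2.2 display transfer are all placeholders rather than anything Resolve actually uses:

```python
import numpy as np

def scale_grain(grain, contrast=0.3, pivot=0.5):
    """Sub-1.0 contrast around a 0.5 pivot behaves linearly, ideal for shrinking the grain plate."""
    return (grain - pivot) * contrast + pivot

def log_to_display(x):
    """Placeholder 1D log-to-display transfer (no gamut change), NOT Resolve's actual curve."""
    return np.clip(x, 0.0, 1.0) ** 2.2

log_image = np.random.rand(4, 4, 3) * 0.8 + 0.1     # stand-in for a LogC plate
grain = np.random.rand(4, 4, 3)                     # stand-in for a scanned grain plate

combined = log_image + (scale_grain(grain) - 0.5)   # add scaled grain around mid-grey, in log
display = log_to_display(combined)                  # the curve's varying slope redistributes the grain
```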




I'm balls deep in calibration research at the moment, so unfortunately I haven't the time right now to make a tutorial, but by all means if anyone wishes to add to the thread in any way then please do. There's plenty of resources online for people to learn Resolve. Add links if you like, plus any interpretations you might have.
EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

hjfilmspeed

^^^^ This is brilliant! The grain would already be there if you were grading actual film, so this makes way more sense. This is awesome.

hjfilmspeed

I was able to get this grain method to work with cine grain. Overlaying grain on your log footage, then grading, is the way to go. Thank you for this tip.

QuickHitRecord

I'm just looking for a good starting point for the grade that conforms to a common LOG color space and I think I'm getting better results with the Resolve 12 workflow in baldavenger's first post. There seems to be more color information in the skintones to work with than previously available options.

Oddly, I've never gotten FilmConvert's Log C settings to give me anything usable in that regard. The gamma always seems way off and correcting it introduces an unusable amount of noise. FilmConvert kind of presents itself as a library of LUTs, but maybe it's not meant to be used that way. But then why include LOG profiles at all? Anyone have experience with this?

Take a look:


ARRI Log C (Davinci YRGB Color Managed)


ARRI Log C (Davinci YRGB Color Managed) to Rec 709 (LutCalc settings from post #1)


ARRI Log C (Davinci YRGB Color Managed) to Rec 709 (BMD-supplied LUT)


ARRI Log C (Davinci YRGB Color Managed) to FilmConvert (Log C to KD 5207 Vis3)
5DmIII | January 27 2017 Nightly Build (Firmware: 1.23) | KomputerBay 256GB CF Cards (1066x & 1200x)

baldavenger

I did a revision of the grain package I posted before. This time it's a direct conversion from the source, plus by flip-flopping the plates there are now 4 original grain images for every 1 source image. They work great with the composite technique I posted recently.

I also managed to build 14-bit 1D shaper LUTs. The VFX LUTs in Resolve are 12-bit (4096 entries) and normally that's plenty to prevent quantisation errors, but conversions involving Cineon log, and in particular LogC, require a bit more precision in certain circumstances. I'll post something about Resolve LUTs (1D and 3D) soon, covering how to read them and how they work. It's technical stuff, but worth knowing nonetheless. A rough sketch of generating such a shaper LUT follows the links below.

https://www.dropbox.com/s/lpo6lnes2uzq13h/Grain%20Scans%20HD%20Flip%20Flopped.zip?dl=0

https://www.dropbox.com/s/a85ijc0bj46gh5o/14bit%20LUTs.zip?dl=0
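For the curious, here's a rough sketch of how a high-resolution 1D shaper LUT like that can be written out as a .cube text file. The LogC constants are the commonly published EI 800 figures and the header keywords follow the Resolve-style .cube layout; compare against a LUT exported from Resolve before trusting it.

```python
import numpy as np

def logc_to_linear(t):
    """Commonly published Alexa LogC (EI 800) decode."""
    cut, a, b, c, d, e, f = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809
    return np.where(t > e * cut + f, (10.0 ** ((t - d) / c) - b) / a, (t - f) / e)

size = 16384                                        # 2**14 entries, i.e. a 14-bit shaper
x = np.linspace(0.0, 1.0, size)
y = logc_to_linear(x)

with open("LogC_to_Linear_14bit.cube", "w") as fh:  # hypothetical file name
    fh.write("LUT_1D_SIZE %d\n" % size)
    fh.write("LUT_1D_INPUT_RANGE 0.0 1.0\n")        # header keyword as seen in Resolve-style .cube files
    for v in y:
        fh.write("%.10f %.10f %.10f\n" % (v, v, v))
```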



EOS 5D Mark III | EOS 600D | Canon 24-105mm f4L | Canon 70-200mm f2.8L IS II | Canon 50mm f1.4 | Samyang 14mm T3.1 | Opteka 8mm f3.5

DeafEyeJedi

Excellent progress so far as usual, @baldavenger!

Just because I've been silent (and quite busy) doesn't mean I could refrain from reading your posts in this incredible thread!

Especially since I can now run DR12 without any problems on my new 2012 MBP with a decent graphics card and plenty of space.

Keep 'em coming, and once my work schedule has settled down I will finally get in touch with you about this DR12 workflow.

Thanks again, B!
5D3.113 | 5D3.123 | EOSM.203 | 7D.203 | 70D.112 | 100D.101 | EOSM2.* | 50D.109

hjfilmspeed

Downloading the LUTs! I will try them ASAP!