Messages - baldavenger

#1
Quote from: Kharak on July 15, 2019, 11:26:46 AM
Very nice tools baldavenger.

I've been using your Frequency separator to deal with moire, very easy to remove the color cast of moire with YUV or LAB, but do you know a way to deal with the spatial frequency of moire? Maybe some way to trick Resolve into Baselight's texture management?

I feel like there could be something better than masking and blurring it away, as it just looks blurred. Any ideas?

It's a tricky one, because the spatial frequency you speak of, i.e. the high-frequency pattern, is often burnt into the image by the time you get to grade it. You can be quite aggressive with the colour component of an image without fundamentally altering the perceptual structure, but even the slightest adjustment to the luma component stands out a mile.

The Frequency Separation OFX Plugin only has the option of sharpening the high frequency data. It's literally applying gain to the isolated image data, and because the values lie either side of 0 this is, in effect, a pure linear contrast. If the parameter (which returns a value between 0 and 3) also had the option of returning a negative value, then it would allow for reducing the sharpening (contrast) of the high frequency data, thereby blending it back into the low frequency domain. However, it may not produce the effect you're looking for and, to be honest, it's never going to be a simple task. If it's a hardcoded part of the luma signal then it's a VFX removal job (and all that entails).
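The "gain either side of 0" behaviour is easy to see in a toy sketch. This is not the plugin's actual code (the moving-average blur and its radius are stand-ins for whatever filter the plugin uses internally), but it shows why gain on the high-frequency layer acts as a pure linear contrast, and why a gain below 1 (or zero) blends the pattern back into the low-frequency domain:

```python
import numpy as np

def split_frequencies(img, radius=2):
    """Split a 1D signal into low and high frequency components using
    a simple moving-average blur (a stand-in for the plugin's filter)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    low = np.convolve(img, kernel, mode="same")
    high = img - low          # high-frequency values lie either side of 0
    return low, high

def recombine(low, high, gain=1.0):
    """gain > 1 sharpens, gain in (0, 1) softens, and gain = 0
    collapses the result back to the low-frequency component."""
    return low + gain * high

# An alternating "moire-like" pattern
signal = np.array([0.2, 0.8, 0.2, 0.8, 0.2, 0.8, 0.2])
low, high = split_frequencies(signal)

flattened = recombine(low, high, gain=0.0)  # pattern removed entirely
original = recombine(low, high, gain=1.0)   # reconstructs the input exactly
```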
#2
Demonstration of some of the LMTs in ACES TOOLS, as well as an approach to grading CinemaDNGs in ACES






https://github.com/baldavenger/DCTLs/tree/master/ACES%20TOOLS


https://github.com/baldavenger/BaldavengerPlugins/blob/master/Frequency.ofx.bundle.zip
#3
Quote: What do you think about the 2048 black level on the 5D3?

I wouldn't mess with the black level value. It will screw up the white balance, and more than likely you'll just be introducing more noise into the shadows. However, you can experiment with the raw white and black levels in MLV App to see for yourself the effect.
#4
This video covers some of the LMT PFE OFX DCTL code.







https://github.com/baldavenger/DCTLs/blob/master/ACES%20TOOLS/LMT_PFE_OFX.dctl
#5
I put together a small set of LMTs for editing photos in Resolve

The ACES TOOLS set consists of the following:

IDT_INV_REC709
IDT_INV_SRGB
LMT_DNG_OFX
LMT_COLOR_BALANCE_OFX
LMT_HSL_OFX
LMT_HUE_VS_LUMA_OFX
LMT_PFE_OFX
LMT_PFE_OFX_NEUTRAL



LMT_DNG_OFX helps with images that are a little compressed in the shadows.










LMT_COLOR_BALANCE_OFX, LMT_HSL_OFX, and LMT_HUE_VS_LUMA_OFX emulate some of the controls familiar to users of Photoshop and Camera Raw. They can be used separately, or in conjunction with other LMTs (including LMT PFE)


https://github.com/baldavenger/DCTLs/tree/master/ACES%20TOOLS

The tools work with either CUDA or METAL in Resolve 15 and upwards.
#6
If you're working in ACES in Resolve, then this might be worth a look.






There's more info, and download links, here:

https://acescentral.com/t/davinci-resolve-dctl-and-ofx-plugins/1385/89?u=paul_dore
#7
The OFX plugins have been refined and improved, and Win compatible versions are also available.

Mac versions

https://github.com/baldavenger/Plugin-Downloads

Win versions

https://github.com/baldavenger/Win-OFX

Both versions work fine with CUDA, but there's no OpenCL on Win yet (and only limited functionality on Mac).


One possible use of the plugins is as an alternative to the Dark Frame process of removing magenta cast in high ISO image shadows. Play around with the ChannelBoxV2 plugin to see what results you might get.

#8
I did some spring cleaning on the OFX plugins. I removed as much unnecessary code as seemed prudent, plus I employed a better naming structure (as well as a wee icon for the top left corner). Hopefully it'll now be easier for those wanting to compile for Win or Linux.

https://github.com/baldavenger?tab=repositories


I took a break from coding endeavours for a month or so, but I hope to add some new ones soon (plus do some more fine-tuning on the ones already there).



#9
Quote from: a1ex on September 01, 2016, 08:03:36 AM
This chart appears to show an ISO-less sensor (not sure how to read it, but the increments at both top and bottom ends match the ISO increments, +/- 0.1 stops). Usually, with such a sensor you can just shoot at the lowest ISO and increase the brightness in post as you need - there is little reason to use a higher ISO, as the shadow details will not get significantly better (they will get a little better, but not much).

On high-end cameras, I remember reading ISO is just metadata; if this is the case, it really doesn't matter which one you use. Or maybe it affects the exposure indicators and that's it (just a guess, as I have zero experience with these cameras).

Canon sensor is different - if you look at a dynamic range chart, you will see that, until about ISO 400, the DR does not drop much. Remember ISO is defined from the clipping point. Therefore, increasing ISO helps capturing more shadow detail. However, at higher ISOs (around 1600), Canon sensor becomes nearly ISO-less - there's little to gain by pushing ISO beyond that.

So, on an ISO-less sensor, you can just lower the ISO to protect your highlights without worrying much about the shadows, while on Canon sensor (especially on full-frame), you need to find a balance between how many highlights you clip (when increasing the ISO) and how many shadow details you capture.

Side note: with dual ISO, Canon sensor behaves a lot like an ISO-less sensor (so, when the light changes a lot, I just set it to 100/1600 and forget about it).

I believe the link between high ISO and extra highlight latitude is based on the choice of exposure for a given scene. It's certainly not intuitive (especially from a stills perspective), but there is a logic to it. In the previously shown chart for the Alexa, the premise is that the scene exposure level is the same for every ISO rating, and every change in ISO is offset by changing the amount of light that hits the sensor (via aperture, shutter speed, shutter angle, ND filters, external light levels, etc.). Therefore, to hit the correct exposure level at a low ISO you need a lot of light entering the camera, which is great for shadows but also means the sensor hits saturation sooner, so highlight latitude is reduced. On the other hand, exposing at a high ISO means less actual light entering the camera, so less latitude in the shadows but more in the highlights. This also assumes a Raw workflow, where the ISO is indeed just metadata that can be adhered to, altered, or removed in post. 800 ISO is the base for the Alexa for practical reasons.
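The trade-off can be put into toy numbers. Everything below is illustrative (the base ISO and the mid-grey mapping are made-up figures, not real Alexa sensor data), but it shows why each ISO doubling, offset by halving the light reaching the sensor, buys one extra stop of highlight headroom:

```python
import math

# Illustrative only: relative quantities, not real sensor values.
SENSOR_SATURATION = 1.0   # normalised clipping point of the sensor

def highlight_headroom_stops(iso, base_iso=800, mid_grey_at_base=0.045):
    """At a fixed scene exposure, doubling the ISO is offset by halving
    the light reaching the sensor, so mid grey lands one stop lower on
    the sensor and there is one extra stop before saturation."""
    mid_grey = mid_grey_at_base * (base_iso / iso)
    return math.log2(SENSOR_SATURATION / mid_grey)

for iso in (200, 400, 800, 1600, 3200):
    print(iso, round(highlight_headroom_stops(iso), 2))
```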

Dual-ISO is indeed quite similar. Shooting 100/1600 and exposing for 1600 (to gain an extra few stops of highlight latitude in post) is much like a DP choosing an ISO of 1600 (with a 1 stop ND filter attached to the lens) in order to gain an extra stop of highlight latitude.
#10
I've been working on some OFX plugins for Resolve 12.5, and they can be found (along with some LUTs and DCTLs) on my GitHub page:

https://github.com/baldavenger?tab=repositories

#11
Here are some more film emulation type LUTs I put together recently. I was aiming for a more film-scan/Alexa type response, and the LUTs are smaller (17^3) than usual. They expect a Cineon image with Rec709 primaries, and work well in Resolve as well as Photoshop (use the Cinelog V3 profile in Camera Raw if you have it).

There are 6 in total: 2 PFE output, 2 film-scan + PFE output, and 2 film-scan log output (1 for PFE, 1 for Rec709/sRGB).


Original Chart




Cineon Rec709 to Kodak




Cineon Rec709 to Fuji




Cineon Rec709 to Film-Scan + Kodak




Cineon Rec709 to Film-Scan + Fuji




Cineon Rec709 to Film-Scan (for PFE)




Cineon Rec709 to Film-Scan (for Rec709/sRGB)





https://drive.google.com/open?id=0B0WuqGJ11_EgU1plSkktaGpPNmc


Nice and uncomplicated (for a change). Have fun and post some feedback if something springs to mind.

#12
Here are some more potentially useful DCTLs

Add Arri Film Matrix

// Arri Film Matrix addition (to be applied to LogC AWG)


__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float r = (p_R * 1.271103f) + (p_G * -0.284279f) + (p_B * 0.013176f);
    const float g = (p_R * -0.127165f) + (p_G * 1.436429f) + (p_B * -0.309264f);
    const float b = (p_R * -0.129927f) + (p_G * -0.510286f) + (p_B * 1.640214f);

    return make_float3(r, g, b);
}



Preparing Arri Linear for conversion to Cineon

// Arri Linear to Cineon Linear Prep

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float r = p_R > 1.0f ? _powf(p_R, 0.6496465358f) : (p_R < 0.0f ? _powf(_fabs(p_R), 1.2756188918f) * -1.0f : p_R);

    const float g = p_G > 1.0f ? _powf(p_G, 0.6496465358f) : (p_G < 0.0f ? _powf(_fabs(p_G), 1.2756188918f) * -1.0f : p_G);

    const float b = p_B > 1.0f ? _powf(p_B, 0.6496465358f) : (p_B < 0.0f ? _powf(_fabs(p_B), 1.2756188918f) * -1.0f : p_B);

    return make_float3(r, g, b);

}



Full LogC to Cineon transform with roll-off

// LogC to Cineon (with highlight roll-off to preserve dynamic range)

__DEVICE__ float logc_to_cineon(float val)
{
    // LogC to linear
    float lin = val > 0.1496582f ? (_powf(10.0f, (val - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (val - 0.092809f) / 5.367655f;

    // Prep for Cineon (same as the Arri Linear to Cineon Linear Prep above)
    lin = lin > 1.0f ? _powf(lin, 0.6496465358f) : (lin < 0.0f ? _powf(_fabs(lin), 1.2756188918f) * -1.0f : lin);

    // Linear to Cineon
    return ((_log10f(lin * (1.0f - 0.0108f) + 0.0108f) * 300.0f) + 685.0f) / 1023.0f;
}

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    return make_float3(logc_to_cineon(p_R), logc_to_cineon(p_G), logc_to_cineon(p_B));
}



Perhaps the best way to utilise Arri Film Matrix with Canon footage is to adjust the saturation response so that it more resembles that of an actual Alexa. The Luma Vs Saturation control can do this, as can applying an LC709A 3D LUT (gamut only). One possible route might be ML Raw -> LogC AWG -> LogC LC709A -> LogC AWG (LogC Rec709 to LogC AWG transform) -> Arri Film Matrix -> LogC to Cineon -> Print LUT

The LC709A LUT can be generated here:

https://cameramanben.github.io/LUTCalc/

#13
All the previously posted formulas can be applied via DCTL, and without a great deal of effort. Simply replace r, g, and b with p_R, p_G, and p_B, and add f to the end of every numerical value.


For example, the original AWG to Rec709 matrix conversion


(r * 1.617523) + (g * -0.537287) + (b * -0.080237)
(r * -0.070573) + (g * 1.334613) + (b * -0.26404)
(r * -0.021102) + (g * -0.226954) + (b * 1.248056)


can be applied as follows:


__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float r = (p_R * 1.617523f) + (p_G * -0.537287f) + (p_B * -0.080237f);
    const float g = (p_R * -0.070573f) + (p_G * 1.334613f) + (p_B * -0.26404f);
    const float b = (p_R * -0.021102f) + (p_G * -0.226954f) + (p_B * 1.248056f);

    return make_float3(r, g, b);
}



Likewise the LogC to Linear transform


r > 0.1496582 ? (pow(10.0, (r - 0.385537) / 0.2471896) - 0.052272) / 5.555556 : (r - 0.092809) / 5.367655
g > 0.1496582 ? (pow(10.0, (g - 0.385537) / 0.2471896) - 0.052272) / 5.555556 : (g - 0.092809) / 5.367655
b > 0.1496582 ? (pow(10.0, (b - 0.385537) / 0.2471896) - 0.052272) / 5.555556 : (b - 0.092809) / 5.367655


becomes:


__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    const float r = p_R > 0.1496582f ? (_powf(10.0f, (p_R - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (p_R - 0.092809f) / 5.367655f;
    const float g = p_G > 0.1496582f ? (_powf(10.0f, (p_G - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (p_G - 0.092809f) / 5.367655f;
    const float b = p_B > 0.1496582f ? (_powf(10.0f, (p_B - 0.385537f) / 0.2471896f) - 0.052272f) / 5.555556f : (p_B - 0.092809f) / 5.367655f;

    return make_float3(r, g, b);
}



Now, of course, these particular DCTL transforms are no longer strictly necessary thanks to the Color Space Transform OFX plugin, but they at least show that options beyond what is already available in Resolve are easily accessible. The real challenge now is to build custom OFX plugins that, in effect, remove all limitations on what you can do in Resolve. Learning C++ is proving to be a slow process, but seeing as Blackmagic Design have been good enough to provide the access and resources to really elevate Resolve, it would be a shame not to at least have a go at making it work.

Note: the DCTL option is only available in the Studio version of Resolve, but the formulas can still be applied via previous methods such as LUTs or the ChannelMath plugin.

#14
Quick PFE LUT tip.

I tried improving on the Print LUTs that were previously posted by reducing the photometric highlight compression and slight shadow lifting that was still prevalent, but it proved persistently problematic. Instead I settled for a more practical compromise, which basically involves using an alpha channel on the node that is receiving the LUT. As the signal going into the node is Cineon log, it is better to pull the key in a previous node and pipe it downstream.
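As a rough sketch of the idea (this is a plain keyed mix in NumPy, not Resolve's actual keyer or alpha pipeline, and the key and grade below are made up for illustration):

```python
import numpy as np

def luminance_key_blend(original, graded, key):
    """Blend the LUT-graded image back towards the original wherever the
    key is low. key is a per-pixel matte in [0, 1], pulled upstream of
    the log conversion and piped downstream to the LUT node."""
    key = np.clip(key, 0.0, 1.0)[..., None]   # broadcast over RGB
    return key * graded + (1.0 - key) * original

rng = np.random.default_rng(0)
original = rng.random((4, 4, 3))
graded = original ** 2.2        # stand-in for "LUT applied"

# Simple inverted-luma key: protect the highlights from the LUT
luma = original @ np.array([0.2126, 0.7152, 0.0722])
result = luminance_key_blend(original, graded, 1.0 - luma)
```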

Original sRGB image:



With LUT applied but key disabled:



With LUT and key applied:




Now, Resolve's Luminance Keyer isn't the greatest (and something I'm looking into improving on with an OFX plugin), but the system still works and produces better results than with just the LUT on its own. I reckon I've gone as far as you can go with regards to cleaning up the 'muckiness' of Print LUTs. Always a battle, but sometimes it's worth it.

#15
@Andy600

Thank you for the detailed explanation. I am indeed interested in matching 5D3 linear to inverse LogC, or more specifically converting the DNGs into true scene linear. Resolve still processes DNGs as display referred, and takes its cue from the black level and white level in the metadata. The white level can be changed (and therefore the part of the signal mapped to 1.0), but it also clips at that point unless Highlight Recovery is enabled.

If there were a standard (i.e. linear scale) way of transforming the Raw information into true scene linear, then it would be much easier to make it work with other camera sources. As it stands, applying lin2log conversions (such as Cineon or LogC) is a bit pointless in that the signal has already been squeezed by the DNG interpretation. Anyway, it would be handy to put these ideas to the test.
#16
@beauchampy

Would it be possible for you to get hold of a few seconds of LogC footage from the Arri, with some charts filmed, and the same charts shot on your 5D Mk III?

If the scene is metered and suitably recorded, then by comparing both sets of footage (after converting to linear light in Resolve) perhaps a clear difference can be pinpointed. I suspect the 5D DNGs will appear darker, as Resolve maps peak white (15000 in the case of the 5D) to 1.0, whereas linearised LogC maps diffuse white (at 90% scene reflectance) to 1.0, hence the latter might appear brighter. It might be possible to match the linearised signals with just the Exposure control, and then applying a linear to LogC transform with the new Resolve controls should produce a closer approximation.

No worries if it's not an option, but it would be cool if you could.
#17
Sounds great :) Looking forward to the new releases.
#18
@Andy600

Now that Resolve includes its own OpenFX API, are you still going ahead with your planned release? I for one think it's still a worthwhile venture, and I'd be happy to purchase (and promote) it once it's released. I gather you were pretty close anyway, so now with the easier to implement CUDA and OpenCL based GPU acceleration it should be a more viable prospect?

I know neither C nor C++ (yet) so it might be a while before I'm able to build my own custom plugins, and it would be great if Cinelog could provide a working solution in the meantime.
#19
Big Resolve update released today (12.5). Lots of new and improved features that will be familiar to anyone who has been following this thread.

Download the free version here (and also download the free version of Fusion as they can now both be linked):

https://www.blackmagicdesign.com/uk/support

#20
So here's a basic workflow that involves some of the methods that have featured so far in this thread.

ML Raw -> RCM -> LogC AWG -> Linear -> Signal Split -> Cineon -> Print LUT (optional) -> Grain Overlay -> Display Gamma (sRGB/Rec.709)

An alternative workflow involves a Film Matrix conversion.

I recommend setting the MLV White Level to its max value (though this is optional) and raising Exposure by up to 1 stop in the Camera Raw controls to compensate for DNG display referred scaling.




Here's the workflow that involves Film Matrix:




If you want to replace the 1D LUTs with a hand-made powergrade, then simply take the reverse LUT (i.e. the sRGB to Cineon LUT if trying to replace the Cineon to sRGB LUT), apply it to the Grey Ramp, and manipulate the controls until it's straight again (as done previously in the thread).
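The "straight ramp" check can be put into numbers. Using the same Cineon constants as the DCTLs earlier in the thread, a transform followed by its reverse leaves a grey ramp straight, which is exactly the property being exploited when matching a powergrade against the reverse LUT:

```python
import numpy as np

def lin_to_cineon(x):
    """Linear to Cineon, using the same constants as the DCTLs earlier
    in the thread (0.0108 black offset, 685/300 white point and slope)."""
    return (np.log10(x * (1 - 0.0108) + 0.0108) * 300 + 685) / 1023

def cineon_to_lin(x):
    """The reverse transform."""
    return (10 ** ((x * 1023 - 685) / 300) - 0.0108) / (1 - 0.0108)

ramp = np.linspace(0.0, 1.0, 11)
# A transform followed by its reverse leaves the ramp straight, so a
# powergrade that straightens the reverse-LUT'd ramp matches the LUT.
round_trip = cineon_to_lin(lin_to_cineon(ramp))
```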

Included in the ML Resolve Workflow Pack are a set of 1D LUTs, a pair of 3x3 Matrices, a pack of 3D Print LUTs, Signal Splitters, and a few other useful elements along with the two main powergrades that feature above. Import the powergrades by right-clicking in the Gallery tab and choosing Import.


https://www.dropbox.com/s/w46zz4g2swtjgbe/ML%20Resolve%20Workflow.zip?dl=0


Here's the link to the Grain samples that featured previously in the thread:


https://www.dropbox.com/s/lpo6lnes2uzq13h/Grain%20Scans%20HD%20Flip%20Flopped.zip?dl=0


Have fun grading and feel free to post results.

#21
Here's a useful post on another forum:

https://forum.blackmagicdesign.com/viewtopic.php?f=2&t=45985#p266945

Interesting and relevant thread too.
#22
With regard to recording time, it is subject to the law of diminishing returns, so anything more than a few seconds makes little or no difference.

As for your other enquiry, what are the exact workflows that you are using?
#23
A safe and reliable option is the Cinelog package. If you're not particularly sure about transfer functions and colour science, then it's best to avoid building your own process using Resolve Colour Management, at least until you're adept enough to know what's going on with your signal and primaries at each important stage of the workflow.
#24
Bonus Tip relating to the first example from Reply #98

Create a scrolling Waveform window by just adding a Node and adjusting Gain.



In Node 05 you simply adjust the Gain control in LGG to create the scrolling effect. Gain has values from 0.0 to 16.0, so that covers quite a stretch of signal. To disable the process either replace ResolveConstant_-1.cube with ResolveConstant_Zero.cube, or change the Layer Node option from Add back to Normal.

Even better is if you have a control surface and use the Gain dial to scroll. The displayed Gain value will also identify which part of the signal is currently being viewed in the waveform 'window'.
#25
I looked into the effects of extending the White Level value in the DNGs' metadata. With 5D MkIII footage it is 15000, which is mapped to the value 1.0 in Resolve. The reasons for the White Level being lower than the full 14-bit limit (16383) are well documented - peak saturation, non-linearity past that point, magenta cast - but when Highlight Recovery is enabled these extra values come into play anyway.

I ran some tests with DNGs whose White Level I altered using ExifTool. There were three varieties: Normal 15000, Full 16383, and Bright 10000.

In Adobe Camera Raw they appeared as expected, with Full 16383 slightly darker than Normal 15000 (due to the values being mapped lower to accommodate the additional 1383 values) and Bright 10000 much brighter in keeping with the fact that the lower value is mapped to peak white thereby raising all values accordingly.

Using the Exposure and Highlight controls it was very easy to bring back the highlight values that got clipped or heavily compressed, so in this case the White Level is really just a convenient guideline for Adobe Camera Raw to map peak white, and no actual raw data is compromised by altering the metadata.
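The mapping behind those three variants is simple arithmetic. This is a deliberately simplified sketch (real debayering, white balance, and highlight handling are ignored) of how the BlackLevel/WhiteLevel tags direct the raw-to-normalised mapping:

```python
def normalise(raw, black_level=2048, white_level=15000):
    """Map a raw 5D3 sensor value to the 0.0-1.0 range, the way a DNG's
    BlackLevel/WhiteLevel tags direct it (simplified: debayering and
    highlight recovery are ignored here)."""
    return (raw - black_level) / (white_level - black_level)

raw_value = 9000
print(normalise(raw_value, white_level=15000))  # Normal
print(normalise(raw_value, white_level=16383))  # Full: slightly darker
print(normalise(raw_value, white_level=10000))  # Bright: much brighter
```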

In Resolve it is a somewhat different matter. When the three DNGs are initially brought in the appearance is more or less the same as in ACR, but then using controls like Exposure and Gain to bring back highlight detail it is revealed that this information has been clipped, with Bright 10000 clearly showing the missing information. Only by enabling Highlight Recovery is that signal information brought back into play, and only through whichever algorithms Resolve uses to rebuild signals.

Now, even with Highlight Recovery enabled, on Full 16383 Resolve tries to build more highlight detail, even though there is no reserve signal to call upon. I prefer to use the signal splitting techniques mentioned earlier in the thread to do highlight rebuilding, but that can be combined with Highlight Recovery to increase dynamic range if done judiciously. The less of a signal Highlight Recovery has to anchor to initially, the more control you'll have overall, which is why I now recommend (for those using Resolve and willing to put in the extra work in order to achieve superior results) that you change the White Level of your DNGs to the max, which is 16383 for regular and 65535 for Dual-ISO. This process will involve using the Exposure control to scale the signal such that the exposure value of a white diffusing reflector is 1.0.

This is a deliberate attempt to take the DNG process away from its originally intended display referred/stills approach, and adapt it towards a more scene referred/film approach like that of Cineon and LogC.

With the sterling aid of the two Daniels (@Danne and @dfort) I've been able to convert with relative ease the White Level of both DNGs and MLVs.

$ exiftool -IFD0:WhiteLevel=16383 * -overwrite_original -r

is a quick and dirty way to convert every DNG in the folder (and its subfolders).

echo "0000068: FF 3F" | xxd -r - Input.MLV

Converts the White Level in the header of an MLV. Very handy if you're using MLVFS in combination with Resolve.
@dfort figured out the code and the HEX values needed.

15000 = 98 3A
16383 = FF 3F
10000 = 10 27
60000 = 60 EA
65535 = FF FF
30000 = 30 75
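The hex pairs above are just the 16-bit White Level value with its bytes in little-endian order (least significant byte first), which a short script can confirm:

```python
def white_level_bytes(value):
    """Return the two hex bytes, least significant first, that encode a
    16-bit White Level value - the byte order used in the MLV header."""
    lo, hi = value & 0xFF, (value >> 8) & 0xFF
    return f"{lo:02X} {hi:02X}"

for level in (15000, 16383, 10000, 60000, 65535, 30000):
    print(level, "=", white_level_bytes(level))
```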