My ACES/HDR (HDR10+) Workflow: MLV App - DaVinci Resolve

Started by Bender@arsch, January 25, 2020, 08:29:46 PM



Bender@arsch

UPDATE 03.09.2022

- New start: I completed my HDR workflow
- I added a downloadable ACES-HDR-to-SDR LUT for YouTube upload
-----------------------------------------------------------------


This is the new version of my ACES and HDR (HDR10+) workflow.
I have changed my workflow again and again, but this is the newest version.


Overview:

Introduction
   1. Short Tutorial
   2. What you need
   3. Helpful links
   4. About this workflow

Part One: MLV App - cDNG Workflow
   1. MLV App

Part Two: SDR/HDR Color Grading in DaVinci Resolve
   1. Comparison between SDR and HDR
   2. Mastering in...
   3. DaVinci Resolve
   4. HDR Color Grading in DaVinci Resolve on an SDR Display
   5. Create your own LUT for HDR-to-SDR conversion

Part Three: YouTube Metadata Tool
   1. HDR-to-SDR conversion and the YouTube metadata tool



Introduction

1. Short Tutorial
Magic Lantern Raw (.MLV) -> MLV App (export as cDNG) -> DaVinci Resolve (ACES HDR grading and export) -> YouTube metadata tool (attach my own ACES-HDR-to-SDR LUT) -> finished


Here are some video examples,
filmed with my Canon EOS 5D Mark II using the experimental builds by Reddeercity,
and with a 5D Mark III using Danne's build.
---

Botanic Garden II | 5D Mark III Magic Lantern Raw - 3.3K HDR


Music Box | 5D Mark III Magic Lantern Raw - 3.3K HDR


In my Dreams | 5D Mark III Magic Lantern Raw - 3.5K HDR


Breathe | 5D Mark III Magic Lantern Raw - 3.5K HDR


Espresso | 5D Mark III Magic Lantern Raw - 5.7K HDR


My last Tour... | 5D Mark III Magic Lantern Raw - 3.5K HDR


Around the Elbe | 5D Mark III Magic Lantern Raw - 3.5K HDR


Handheld again | 5D Mark III Magic Lantern Raw - 3.5K HDR


Light | 5D Mark III Magic Lantern Raw - 3.5K HDR


A little walk in 4K  | 5D Mark III Magic Lantern Raw - HDR


5D Mark II

Macro | 5D Mark II Magic Lantern Raw - HDR 60FPS 5.6K




2. What you need:
- MLV App -> https://mlv.app/
- DaVinci Resolve (Studio) -> https://www.blackmagicdesign.com/de/products/davinciresolve/
- YouTube metadata tool -> https://github.com/youtubehdr/hdr_metadata


3. Helpful links:
- What is ACES? https://www.oscars.org/science-technology/sci-tech-projects/aces
- 5D Mark II ML experimental builds by Reddeercity: https://www.magiclantern.fm/forum/index.php?topic=19336.0
- 5D Mark III ML experimental builds by Danne: https://www.magiclantern.fm/forum/index.php?topic=23041.0
- You can download my ACES-HDR-to-SDR LUT for YouTube

4. About this workflow
I searched a long time for the best-quality workflow for Magic Lantern raw data, and I changed everything again and again. There are many ways to grade this data, but for grading in HDR there is only one way that I know of. With Adobe applications there is no way to finish an HDR movie, and unfortunately the same goes for MLV App. The only application that can handle it is DaVinci Resolve Studio. However, until now it was not possible to use Rec.2020 with CinemaDNG; it was always limited to Rec.709.
Until now... In version 17 and up, and I can't say why, it works!
I was also lucky enough to test the new Canon EOS R6 and compare it with the Canon 5D Mark III: I compared my old workflow, and plain cDNG import, against the files from the R6. The result: the cDNG version looks really, really similar to the R6 version, and also better (more natural) than my old version. So I can throw the Camera Raw - After Effects - Premiere Pro part of my workflow away completely. It is also surprising how fast the cDNG format works in DaVinci Resolve: it plays at full speed (GPU speed) on my old computer (3.5K at 23.976 fps).
I also compared Resolve's own "DaVinci Wide Gamut" with ACES 1.3. Result: DaVinci Wide Gamut looks really nice at first sight but has some huge, ugly problems in the highlights (especially with clipping) that I can't fix.

In short: my new way is simply to export CinemaDNG from MLV App and grade in DaVinci Resolve Studio.
The next big change is how I grade my footage. Earlier I tried to replicate the YouTube HDR-to-SDR conversion in DaVinci Resolve and graded my footage before this self-replicated node. That never worked 100%, and from time to time you get a bad surprise.
Now I approach it from the other side: I grade my footage in ACES 1.3 SDR Rec.709, but then export it in ACES 1.3 HDR Rec.2020 at 1000 nits. Before I upload, I use the HDR metadata tool to attach my own ACES-HDR-to-SDR LUT. Now the Rec.709 YouTube version looks 100% identical to my DaVinci Resolve ACES Rec.709 version. Really easy now! ;)




Part One: MLV App - cDNG Workflow

1. MLV App
Go to the output settings, choose CinemaDNG (lossless -> no quality loss, but a smaller file size) and export.




Part Two: SDR/HDR Color Grading in DaVinci Resolve

1. Comparison between SDR and HDR
HDR is not just a stretched SDR version: it extends the peak luminance far beyond the range that SDR can represent correctly. SDR (like Rec.709) is limited to about 6 f-stops, while HDR reaches up to 17.6 f-stops (Wikipedia).
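As a sanity check on those numbers: dynamic range in f-stops is just the base-2 log of the peak-to-black luminance ratio. Here is a minimal Python sketch; the reference levels (SDR: 100 nits peak over an effective 1.5625-nit black, PQ HDR: 10,000 nits over 0.05 nits) are my assumptions, chosen to reproduce the quoted figures:

import math

def stops(peak_nits, black_nits):
    # One f-stop = one doubling of luminance.
    return math.log2(peak_nits / black_nits)

print(stops(100, 1.5625))  # SDR Rec.709: 6.0 stops
print(stops(10000, 0.05))  # PQ (ST 2084) HDR: ~17.6 stops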

And here is the problem with Adobe applications: if you upload your HDR version to YouTube, YouTube tries to squeeze (in its HDR-to-SDR conversion) all of the dynamic range into a much smaller range, so it will look flat. If you upload your metadata (MaxFALL and MaxCLL) it will look much better. After Effects and Premiere Pro have no way to analyze or save/deliver this HDR metadata, so we need DaVinci Resolve Studio.

2. Mastering in...
We can choose between 1000, 2000 and 4000 nits; 1000 nits needs at least 10 bit, and 2000 (I think) and 4000 nits need at least 12 bit (Wikipedia). But there is a problem: in my case, I think the 5D Mark II/III doesn't have enough dynamic range for 2000 or 4000 nits (and I can't test it). But I want the correct ACES-to-HDR transformation, so I choose Rec.2020 ST 2084 at 1000 nits for my grade.
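For reference, ST 2084 (PQ) is the transfer function that maps absolute luminance (up to 10,000 nits) to a 0..1 signal. A minimal sketch of the encode side in Python, with the constants as I read them from the standard (double-check before relying on this):

def pq_encode(nits):
    # SMPTE ST 2084 inverse EOTF: absolute luminance in nits -> signal 0..1.
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    y = nits / 10000.0       # PQ is defined up to 10,000 nits
    return ((c1 + c2 * y**m1) / (1 + c3 * y**m1)) ** m2

print(pq_encode(1000))  # ~0.75
print(pq_encode(100))   # ~0.51

A 1000-nit master therefore only occupies about three quarters of the PQ signal range; the rest is headroom for brighter masters.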



3. DaVinci Resolve

1. Open a new project.
2. Open the Project Settings: in Color Management, set Color science to ACEScct and the ACES Output Device Transform to Rec.709, and finally enable HDR10+ (the ACES Input Device Transform is not important).



4. HDR Color Grading in DaVinci Resolve on an SDR Display

Here is my basic node tree for color correction.
-> I recommend using the HDR color wheels only for the basic grade, because of clipping and better saturation handling. Changing contrast makes no difference to saturation, so you need to change saturation separately; the control reacts much more strongly, so hold the "Alt" key on your keyboard for a precise adjustment.


1. Node: my settings for noise reduction (ISO 100)
-> I use noise reduction on the shadows only, which is possible with the Qualifier tool.


2. Node: my primaries
-> just exposure (HDR color wheels - Global), contrast (HDR color wheels - Cont and Pivot), and white balance (Camera Raw settings)

3. Node(s): HDR tool
-> fine-tuning with the HDR wheels, from Blacks to Specular

Last node: color
-> just use "Sat" (saturation) on the Global HDR color wheel

1. Problem: if you export HDR footage as-is, you will see that the HDR and SDR YouTube versions look flat and dark, because you first need to analyze your metadata: MaxFALL = Maximum Frame-Average Light Level and MaxCLL = Maximum Content Light Level.
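In case those two values are unclear, here is a toy Python sketch of how they are defined. It assumes each frame has already been reduced to a per-pixel light level in nits (strictly, the standard takes the maximum of the R, G and B components of each pixel):

import numpy as np

def max_cll_fall(frames_nits):
    # MaxCLL  = brightest single pixel anywhere in the programme.
    # MaxFALL = highest frame-average light level of any frame.
    max_cll = 0.0
    max_fall = 0.0
    for frame in frames_nits:
        max_cll = max(max_cll, float(frame.max()))
        max_fall = max(max_fall, float(frame.mean()))
    return max_cll, max_fall

# Two hypothetical 2x2 frames:
f1 = np.array([[80.0, 120.0], [90.0, 600.0]])
f2 = np.array([[50.0, 70.0], [60.0, 65.0]])
print(max_cll_fall([f1, f2]))  # (600.0, 222.5)

Resolve's analysis step does this work for you, shot by shot.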

2. After color grading, change your ACES Output Device Transform to Rec.2020 ST 2084 (1000 nits). Then, on the Color page, click Color / HDR10+ / Analyze all shots and wait. -> This analyzes MaxCLL and MaxFALL for every frame of every shot.

But after analyzing the metadata and uploading to YouTube, YouTube will still create its own HDR-to-SDR version, which is what you see on an SDR display. If you compare your Rec.709 version with the YouTube SDR version, you will see that YouTube's HDR-to-SDR version also looks flat and darker (though better than before). We can fix this: I created my own ACES-HDR-to-SDR LUT for YouTube.

Attention:
After analyzing, your image looks darker because of the HDR10+ preview ("Enable Tone Mapping Preview"). If you simply uncheck this feature, you can grade correctly.

3. Rendering: depending on your DSL (upload) speed you can choose between codecs like DNxHR and H.265, or whatever you want. The important thing is to tick "Embed HDR10 Metadata" (so the metadata is saved in the video file). For H.265 you need to set the Encoding Profile to Main10. The rest you can set as you like. Click Add to Render Queue and start the render.

Now you need a LUT for the YouTube HDR-to-SDR conversion. You can download and use my ACES-HDR-to-SDR LUT for YouTube, or build your own:


5. Create your own LUT for HDR-to-SDR conversion


In DaVinci Resolve:
1. Open a new project.
2. Open the Project Settings: in Color Management, set Color science to DaVinci YRGB Color Managed, Color processing mode to HDR DaVinci Wide Gamut Intermediate, Output color space to Rec.709 Gamma 2.4, and enable Dolby Vision.
3. Import the file "trim_lut0.dpx" into the timeline (it is a test pattern for analyzing your grade). It lives in "C:\ProgramData\Blackmagic Design\DaVinci Resolve\Support"; you can also find it by opening Project Settings > Color Management > Lookup Tables > Open LUT Folder and going up one folder.
4. Now you can set up your own HDR-to-SDR grade. In my case I just open the "ACES Transform" effect and set Rec.2020 1000 nits to Rec.709. If you want to create your own conversion, you can import your exported HDR movie, right-click the clip and set its input color space to "Rec.2020 1000 nits". Make your grade, then copy and paste it onto the trim_lut0.dpx clip.
5. Go to the Color page, click on the clip and open the Dolby Vision panel (on the left, next to Curves). Click Analyze Selected and wait. Now right-click the trim_lut0.dpx clip, choose Generate LUT, and pick between a 17, 33 and 65 Point Cube (see the sketch below for what the generated .cube file contains).
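For anyone curious what the "17/33/65 Point Cube" options actually produce: a .cube file is just a small text header plus one RGB triple per grid point. Here is a sketch that writes an identity (do-nothing) 3D LUT in that format; the output file name is made up:

def write_identity_cube(path, size=17):
    # size = grid points per axis (17, 33 or 65 in Resolve's dialog).
    with open(path, "w") as f:
        f.write("LUT_3D_SIZE %d\n" % size)
        n = size - 1
        # .cube convention: red varies fastest, then green, then blue.
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write("%.6f %.6f %.6f\n" % (r / n, g / n, b / n))

write_identity_cube("identity_17.cube")  # 17**3 = 4913 entries

The LUT Resolve generates fills the same grid with your HDR-to-SDR grade instead of these identity values.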

You can download my ACES-HDR-to-SDR LUT for YouTube.




Part Three: YouTube Metadata Tool

1. HDR-to-SDR conversion and the YouTube metadata tool
Link: https://github.com/youtubehdr/hdr_metadata

For Windows:
Download the tool and save it in a folder.

You can simply write your command in a text editor.
Here is the default command that I use:

Quote: "Path to the Tool...\mkvmerge.exe" -o "path where you want to save\---.mov" --colour-matrix 0:9 --colour-range 0:1 --colour-transfer-characteristics 0:16 --colour-primaries 0:9 --attachment-mime-type application/x-cube --attach-file "Path of your LUT\---.cube" "path where your video is\---.mov"

Replace the placeholder paths in quotes. (If you ticked "Embed HDR10 Metadata" you don't need to set MaxFALL and MaxCLL here.) For reference, the colour flags are standard code points: matrix 9 = BT.2020, range 1 = broadcast (limited), transfer 16 = PQ (ST 2084), primaries 9 = BT.2020.

Copy this text into the command-line tool, press Enter, and you're finished. ;)
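If you attach a LUT to every upload, a small wrapper script saves retyping the flags. A sketch, assuming mkvmerge is on your PATH; the three file names at the bottom are hypothetical:

import subprocess

def attach_hdr_lut(src, dst, lut):
    # Remux with mkvmerge, tagging Rec.2020/PQ and attaching the HDR-to-SDR LUT.
    subprocess.run([
        "mkvmerge", "-o", dst,
        "--colour-matrix", "0:9",                     # BT.2020
        "--colour-range", "0:1",                      # broadcast (limited) range
        "--colour-transfer-characteristics", "0:16",  # PQ / SMPTE ST 2084
        "--colour-primaries", "0:9",                  # BT.2020
        "--attachment-mime-type", "application/x-cube",
        "--attach-file", lut,
        src,
    ], check=True)

attach_hdr_lut("graded_hdr.mov", "upload_me.mov", "aces_hdr_to_sdr.cube")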

Now you can upload the new file with your own HDR-to-SDR LUT attached -> YouTube will do the rest.



I hope you enjoy this tutorial. For technical questions about ACES and HDR, ask Andy600.  ;D

IDA_ML

Thanks a lot for sharing this very informative tutorial with us, Bender@arsch. I have never used ACES before, and this method seems to provide the best results in terms of image quality. Your video examples definitely confirm this. The only issue I noticed concerns the green tones: they seem too saturated to look natural.

I do have a question: have you directly compared video processing results coming out of MLVApp with those you get with your ACES workflow? In my experience, MLVApp is also capable of providing very high-quality results and very natural-looking videos, and is very user friendly too.

Bender@arsch

@IDA_ML

here are a few possible causes of the color problem:

1. My version in AE is not really the correct way. The correct way would be, in Layer 3, Output Space: ACES - ACES - ACEScg. BUT it doesn't work -> green/magenta problems in highlights and shadows. So I fix this with my version, which probably nobody can explain^^.

2. HDR grading with DaVinci Resolve. I don't know why, but after grading, the "automatic" YouTube HDR-to-SDR version is oversaturated. The YouTube HDR-to-SDR problem is a thing in itself: in a better world I would upload an HDR AND an SDR version, BUT that doesn't work. There is supposed to be a way, after HDR grading, to create a LUT for the HDR-to-SDR conversion, attach it with the YouTube metadata tool, and let YouTube do the rest -> I don't know how it works. I tested it many times -> no difference.
My approach to HDR grading is: I try to make the automatic YouTube HDR-to-SDR version similar to my SDR version -> among other things, I also reduced the color saturation. But maybe I must search for a better balance.

-> my SDR version does not look so oversaturated

I have compared MLV App's best result with my version many times, and I never managed to get the same result.
I like MLV App, and it gets better day by day. The day will come when MLV App makes this detour unnecessary.

Luther

Nice tutorial, thanks. Some notes:
Quote: Cinelog-C
There's no need to use that; ACES already has a log encoding built in. There's a comparison here between Log-C and ACES.
Quote: ACEScg
ACEScg is meant to be used in computer graphics (particularly with green/blue screens). Normally you should use ACEScct instead.
Quote: Output Space: Input - Canon - Linear - Canon DCI-P3 Daylight
Quote: Output Space: Input - Generic - sRGB - Texture
It's not good practice to convert color spaces multiple times; ACES was actually created to avoid this.
Also, the "best practice" now is to use Rec.2020 instead of Rec.709/P3.

Bender@arsch

@Luther

Quote: There's no need to use that; ACES already has a log encoding built in. There's a comparison here between Log-C and ACES.

Yes, you are right. But I need Cinelog-C: I use it as a bridge to something ACES knows (Arri Log-C in this example). A better way would be RAW to ACES directly, but I don't have an IDT for that. Without an IDT, ACES does not know what to do with this data.

Quote: ACEScg is meant to be used in computer graphics (particularly with green/blue screens). Normally you should use ACEScct instead.

This is because After Effects is a little bit... limited. It only offers "ACEScg ACES Working Space AMPAS S-2014-004" and "ACES Academy Color Encoding Specification SMPTE ST" as ACES working spaces. The second is similar; if I change ACEScg to ACES2065-1 it comes out the same. ACEScct works incorrectly at this point -> don't ask me why^^.

Quote: (...) the "best practice" now is to use Rec.2020 (...)

The next point is that I can't control what AE does with this after export, because it transforms to what it wants, not what I want. HDR grading is not possible with Adobe -> no HDR metadata export. And I have no HDR/10-bit/Rec.2020 grading monitor; I can't grade something I can't see.

Quote: It's not good practice to convert color spaces multiple times; ACES was actually created to avoid this. (...)

Same problem: I have no control over this. AE works differently; it is not logical. Changing "Linear - Canon DCI-P3 Daylight" to another linear space, or to ACEScg in this case, only slightly changes the colors, nothing more -> not logical. Why can't AE show me the correct preview before I export? If I open the export in Media Encoder or Premiere, it works. I cannot fully explain why this is so; it is a mystery to me. Maybe an AE guru with a lot of knowledge about ACES and OpenColorIO will read this and can help.
But what I understand is:
- Layer 1 only brings your footage into AE's specific working space. Usually I would right-click the footage and so on, and click Preserve RGB, but in this case that is not possible.
- Layer 2: convert Cinelog to something ACES knows.
- Layer 3: bring this into ACES.
- Layer 4: grading/correction or other effects (ACES).
- Layer 5: ODT/back to what I want... I mean, what AE wants.
- Layer 6: because AE is limited, export in what it wants. Another way: export an EXR file without Layer 6 using ProEXR.

allemyr

Color looks really good!

If you like, can you explain more about this IDT? I have Cinelog myself but didn't know there was an IDT for different cameras. When I try the .dcp file in AE it looks very flat. Does the IDT convert to Cinelog?

Bender@arsch

@allemyr

Quote: When I try the .dcp file in AE it looks very flat

This is correct; it is a log encoding (like S-Log3...). Follow my description.
-->
From the Cinelog website:
"(...) Cinelog-C Digital Camera Profiles convert your raw footage and stills to Cinelog-C colorspace using a high precision 12bit tone curve and Cinelog's custom matrix mapping in Adobe® Camera Raw® without arbitrary corrections, contrast or look-up tables - giving you the cleanest color conversion without compromising your image or dynamic range.(...)"

Quote: Does the IDT convert to Cinelog?

IDT means "Input Device Transform": a transform from camera data (raw, S-Log, Canon Log...) to ACES. I use Cinelog to transform the raw data into something that ACES knows, because there is no official IDT from Canon for EOS models.

Andy600

Let's clear a few things up.

You don't need an IDT for raw files in Resolve's ACES environment.

Resolve will automatically debayer raw files into ACES colorspace when ACES color science is selected. DNG files should contain enough information to display the color correctly. If the color is wrong it means the embedded color data is incorrect or incomplete for the scene or even the camera itself.


Tip: always, ALWAYS set your camera white balance and exposure correctly for each scene and lighting change and, if you can, include a proper grey or white reference target at the beginning of each shot. It will save you a ton of effort and guessing later. Taking a still before pressing record will also provide a general color reference and tell you if the app producing the DNG files from your MLV has any issues.


ACES IDTs (Input Device Transforms) are for intermediate camera colorspaces (Log-C, S-Log, C-Log etc) and ADX film scans only. Inverse ODTs (Output Device Transform) can be used as IDTs for monitor and workspace colorspaces if required.


I'll say again - DNG files, .cr2 files and most, if not all, raw file formats do not require any IDT for ACES. As long as the sensor has been accurately calibrated to map to XYZ it can be transformed to ACES AP0. The more calibration information included in the raw file, the better the color rendering and reproduction of captured light.


Digital Camera Profiles or .dcp (not to be confused with Digital Cinema Package) are a form of IDT but they do not target ACES, they typically produce a final look (Adobe Color, Standard, Portrait etc). Cinelog-C profiles target an intermediate log colorspace which is essentially a type of compression.

Cinelog-C in ProRes/DNxHD/HR etc can be used in Resolve's ACES environment with the Cinelog IDT (DCTL) or with an additional transform targeting a different, natively supported colorspace such as Log-C. You can also export ACES files (float EXR) by transforming Cinelog-C to ACES AP0 in After Effects and exporting to EXR. ACES data (AP0), being linear, should only be stored as float or half float in EXR containers to avoid any clipping.

After Effects does not write ACES EXR tags which may be an issue in some ACES apps but most assume EXR files to be AP0 by default.

Multiple transforms between exclusive, incompatible colorspaces are not a good idea. You will always introduce error and likely some clipping, hue rotation or other issues depending on the method used. Using OCIO as you have described is 'creative' but technically incorrect. I'm not saying the look is wrong because that is subjective and your taste.


Incidentally, doing this is adding an unnecessary step:

Quote:
Adjustment layer 6 - OpenColorIO
Config: ACES 1.0.3
Input Space: Utility - Rec.709 - Display
Output Space: ACES - ACES - ACEScg
Adjustment layer 5 - OpenColorIO
Config: ACES 1.0.3
Input Space: ACES - ACES - ACEScg
Output Space: Output - Rec.709

The transform to ACEScg, being the output in the first layer and the input in the second, is a null operation. You can simply go from Rec.709 display (in) to Rec.709 (out).
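For what it's worth, you can check this kind of collapse outside AE with the OpenColorIO Python bindings. A sketch, assuming you have the ACES 1.0.3 OCIO config on disk (the path is hypothetical; the colorspace names are the ones that config defines):

import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("aces_1.0.3/config.ocio")  # hypothetical path

# Layer 6 then layer 5: Rec.709 display -> ACEScg, then ACEScg -> Rec.709 output.
to_cg  = config.getProcessor("Utility - Rec.709 - Display", "ACES - ACEScg")
to_709 = config.getProcessor("ACES - ACEScg", "Output - Rec.709")

# Equivalent single transform - the ACEScg leg cancels out:
direct = config.getProcessor("Utility - Rec.709 - Display", "Output - Rec.709")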



ACES in After Effects is limited but can work provided you set it up properly and there are several ways depending on the source material.

Raw ingest and transcoding using Cinelog-C profiles requires additional steps to set up a linear After Effects workspace in order to export ACES EXR files. A different workspace setup is required for grading the ACES EXR files, although some Adobe color tools will not work as intended.

Resolve is much better for ACES so the choice is to do everything in Resolve (from raw files with or without transcoding) or transcode either intermediate log ProRes (small) or ACES EXR (massive) files from After Effects. You can also work in a pseudo-ACES environment directly in After Effects using the ACES OCIO configuration for colorspace management (relative to the assigned ICC workspace colorspace) but you then get into a minefield and it's very easy to lose track of things without a constant A/B against a dedicated ACES environment like Resolve. This method is similar to how Nuke works with OCIO but with the advantage of Adobe Camera Raw.

A good check to see if your ACES environment is setup and working correctly is to use these materials provided by ARRI:

https://www.arri.com/resource/blob/67438/a87188ffbb425d3f42d013793f767b93/2018-acesreferenceimages-data.zip



They also have some very useful and detailed write-ups with example configurations (Nuke) on ACES workflows:

https://www.arri.com/en/learn-help/learn-help-camera-system/camera-workflow/image-science/aces



Resolve and Adobe Camera Raw will interpret your DNG files slightly differently because they are built on similar but different architecture. They use different white balancing and highlight recovery methods. The difference in demosaicing quality is debatable. Also, if the camera data in a DNG is incorrect or incomplete a Camera Profile or ACR itself can override it with correct data. Resolve can only use whatever data is embedded in the DNG to reproduce color correctly. This goes for all Resolve environments, not just ACES.


Quote"best practice" now is to use Rec.2020 instead of Rec.709/P3


Best practice is to target whatever colorspace the intended playback device displays and only ever grade for the device colorspace(s) you can physically view i.e. your monitor(s).

Theoretically you should be able to just swap out the ODT and get the same result across multiple devices but that's never the case. You should never blindly assume a project graded under the rec709 ODT will translate when switched to a PQ ODT. In practice it always requires a trim pass on a calibrated HDR reference monitor to adjust levels but going to HDR also opens up other possibilities to display richer color and enhance details not visible in smaller display spaces. This invariably leads to an alternative, enhanced and different grade.

Lastly: do you really need to use ACES? I like ACES a lot and I can see the appeal for multi-cam, multi-format shows, CGI and collaborative, cross-platform workflows, but IMO it's overkill for most things, especially if you don't fully understand how it works and what it's for.





Colorist working with Davinci Resolve, Baselight, Nuke, After Effects & Premier Pro. Occasional Sunday afternoon DOP. Developer of Cinelog-C Colorspace Management and LUTs - www.cinelogdcp.com

allemyr

Hi Andy!

A good read. Thank you for your answer in the thread. And thanks also to you, Bender@arsch, for making the thread.

I think this is a very interesting topic about IDTs.

I've learned a lot about raw files. But I still change my workflow for ML raw now and then. It isn't done yet.

Luther

Quote from: Andy600 on February 01, 2020, 01:32:54 PM
You don't need an IDT for raw files in Resolve's ACES environment.
I don't think this is accurate. The IDT contains spectral data, while the DNG contains only a simple matrix. While it's true that you can use the ACES color space with any image, ACES is more than just a color space; it's a color management system. We have previously discussed this here.
Quote
Best practice is to target whatever colorspace the intended playback device displays and only ever grade for the device colorspace(s) you can physically view i.e. your monitor(s).
True, but you can easily convert Rec.2020 to any other display space without losing information. The same cannot be said for Rec.709 or P3.
Quote
Lastly. Do you really need to use ACES? I like ACES a lot and I can see the appeal for multi-cam, multi format shows, CGI and collaborative,cross platform workflows but IMO it's overkill for most things, especially if you don't fully understand how it works and what it's for.
If you're already using Alexa WideGamut, I agree. But going from Rec.709 processing to ACES is a huge jump in color quality.

ilia3101

Quote from: Andy600 on February 01, 2020, 01:32:54 PM
Resolve and Adobe Camera Raw will interpret your DNG files slightly differently because they are built on similar but different architecture. They use different white balancing and highlight recovery methods. The difference in demosaicing quality is debatable. Also, if the camera data in a DNG is incorrect or incomplete a Camera Profile or ACR itself can override it with correct data. Resolve can only use whatever data is embedded in the DNG to reproduce color correctly. This goes for all Resolve environments, not just ACES.

What are the basic differences between the two?

Andy600

Quote from: Luther on February 02, 2020, 11:58:57 AM
I don't think this is accurate. The IDT contains spectral data, while the DNG will contain only a simple matrix. While it's true that you can use ACES color space in any image, ACES is more than just the color space, it's a color management system.

I know what ACES is and does. I consult at several animation studios; some use ACES pipelines ;)

In an ideal world we would measure each individual camera's spectral sensitivity and derive a polynomial solution but that is impracticable and doesn't take into account lenses, filters and a multitude of other influences.

Even when spectral data is available, it is typical to use it only to derive a matrix solution from the camera to the AP0 primaries. The Academy's own rawtoaces software even does this.
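To make that concrete: the matrix part of such an IDT is just a 3x3 multiply on linear camera RGB. An illustrative Python sketch; the matrix values are invented for the example, real ones come from calibration:

import numpy as np

# Made-up camera-RGB -> ACES AP0 matrix; each row sums to 1 so neutrals stay neutral.
M = np.array([[0.75, 0.15, 0.10],
              [0.05, 0.85, 0.10],
              [0.02, 0.08, 0.90]])

def camera_to_ap0(rgb_linear):
    # Apply the matrix part of an IDT to a linear camera RGB triple.
    return M @ np.asarray(rgb_linear, dtype=float)

print(camera_to_ap0([0.18, 0.18, 0.18]))  # mid grey in, mid grey out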


Quote: True, but you can easily convert Rec.2020 to any other display space without losing information. The same cannot be said for Rec.709 or P3.

True, Rec.2020 encompasses the Rec.709 and P3 gamuts. Grading in Rec.2020 is definitely a good idea IF you have a capable, calibrated monitor, which, I think, the OP doesn't!?

As with my previous reply, going from Rec.2020 to Rec.709 will also (usually) require a trim pass. It's not as simple as switching the ODT.

Quote
If you're already using Alexa WideGamut, I agree. But going from Rec.709 processing to ACES is a huge jump in color quality.

How so? If we're talking about raw and you process it as float data, the output device (not accounting for the capture quality) will dictate the 'color quality', whatever that is?

Andy600

Quote from: ilia3101 on February 02, 2020, 01:45:08 PM
What are the basic differences between the two?


Basically this..

Resolve = Libraw = DCraw = limited DNG SDK support

Adobe = Full SDK implementation.


As I've said many times here, everything for color processing DNG is in the DNG SDK ;)

Just because some of the specification is implemented in an app (e.g. Resolve) doesn't mean it is interpreted in exactly the same way as Adobe's interpretation.

Danne

Quote from: Andy600 on February 02, 2020, 03:13:22 PM
Basically this..

Resolve = Libraw = DCraw = limited DNG SDK support
Interesting. Any more on this? Links, sources?

Andy600

Resolve installs libraw.dll for DNG support.

Resolve doesn't support camera profiles, forward matrices and several other things that are part of the specification. BMD implemented the CinemaDNG subset for the obvious reason that it's for moving images, and their own original cameras supported CDNG. There would also be a processing cost if they supported everything that is included in the SDK.



ilia3101

Quote from: Andy600 on February 02, 2020, 03:13:22 PM
Resolve = Libraw = DCraw = limited DNG SDK support

Adobe = Full SDK implementation.

I see.

I think there needs to be more alternative raw processing outside of the "Adobe DNG SDK" world.


Quote from: Andy600 on February 02, 2020, 02:18:08 PM
In an ideal world we would measure each individual camera's spectral sensitivity and derive a polynomial solution but that is impracticable and doesn't take into account lenses, filters and a multitude of other influences.

This is interesting. Does anyone, or any software, do this?

Is a polynomial even the best option? What about a 2D lookup table in xy space, applied after a matrix, which would give a new xy value and a Y multiplier for every input xy value? The table would be calculated from spectral data. (It doesn't have to be xy; maybe something more uniform, but it has to be linear.)

It would be like an extra level of correction after a matrix, and could give an accurate spectral locus and correct small shifts. What do you think?

Andy600

Quote from: ilia3101 on February 02, 2020, 04:15:35 PM
I see.

I think there needs to be more alternative raw processing outside of the "Adobe DNG SDK" world.

Of course.

I don't think this is really about Adobe vs the world in terms of color processing. They invented the DNG spec but there have been huge developments in raw capture and processing since.



Quote: This is interesting. Does anyone, or any software, do this?


In terms of ACES, IDTs are the responsibility of the camera manufacturer. Some provide IDTs (ARRI, Sony, Panasonic etc.) as a transfer function and matrix solution. Aside from the occasional university lab producing spectral data for one or two cameras, there is only one company that seems to be dedicated to producing independent IDTs:

http://ismini.tvlogic.tv/en/wlp/cameraprofileidt.html

TVLogic obtained the business from Wowow (Japan), who purchased it from FujiFilm, so there is some pedigree behind it, but it's only relevant to their own hardware. I use them occasionally but have never had great results with the 3rd-party IDTs they provide, and I opt for the generic colorspace IDT that the camera manufacturer provides.

Quote: Is a polynomial even the best option? What about a 2D lookup table in xy space, applied after a matrix, which would give a new xy value and a Y multiplier for every input xy value? The table would be calculated from spectral data. (It doesn't have to be xy; maybe something more uniform, but it has to be linear.)

It would be like an extra level of correction after a matrix, and could give an accurate spectral locus and correct small shifts. What do you think?

Probably not.

I can't be 100% sure it was LUT-based, but I believe there were some efforts in this direction a while ago that never got off the ground. I suspect the costs vs. benefits of measuring and then correcting for a specific camera's spectral deficiencies are more suited to university research projects, astronomy etc. A monochromator alone costs in the tens of thousands.

I used a cheaper system, CamSpecs Express (still hideously expensive), a few years ago, helping to develop color calibration for clients with self-built cameras. The device measured spectral response through a selection of band-pass and color filters to produce a matrix solution to XYZ. That seems to be another acceptable way of creating an IDT, but again it's only a simple linear solution to a complex non-linear problem.

A more practical solution for color calibration is what we already have: a ColorChecker. But this is limited to reflectance.

If sensor manufacturers would only provide the spectral data we'd be in a much better place, but I can't see the likes of Canon, Sony, Fuji, Nikon etc. sharing this information. On the rare occasions where I have dived into IDT creation with spectral data, the end result (albeit only matrix-based) was different but no better than an official IDT. It just had 'slightly different' color issues.

ilia3101

Quote from: Andy600 on February 02, 2020, 05:18:11 PM
Of course.

I don't think this is really about Adobe vs the world in terms of color processing. They invented the DNG spec but there have been huge developments in raw capture and processing since.

I just get that impression from being on this forum, because everything is about DNG tags :'(



Quote from: Andy600 on February 02, 2020, 05:18:11 PM
In terms of ACES, IDT's are the responsibility of the camera manufacturer. Some provide IDTs (ARRI, Sony, Panasonic etc) as a transfer function and matrix solution. Aside from the occasional university lab producing spectral data for one or two cameras there is only one company who seem to be dedicated to producing independent IDTs:

http://ismini.tvlogic.tv/en/wlp/cameraprofileidt.html

TVLogic obtained the business from Wowow (Japan) who purchased it from FujiFilm so there is some pedigree behind it but it's only relevant to their own hardware. I use them occasionally but have never had great results with the 3rd party IDTs they provide and opt for the generic colorspace IDT that the camera manufacturer provides

Interesting company. Seems like they do smart things...

Quote: TVLogic implemented our own method to reduce this error by dividing the color spaces into a couple of regions and calculating the optimal matrix for each region.



Quote from: Andy600 on February 02, 2020, 05:18:11 PM
I can't be 100% sure it was LUT-based, but I believe there were some efforts in this direction a while ago that never got off the ground. I suspect the costs vs. benefits of measuring and then correcting for a specific camera's spectral deficiencies are more suited to university research projects, astronomy etc. A monochromator alone costs in the tens of thousands.

I used a cheaper system, CamSpecs Express (still hideously expensive), a few years ago, helping to develop color calibration for clients with self-built cameras. The device measured spectral response through a selection of band-pass and color filters to produce a matrix solution to XYZ. That seems to be another acceptable way of creating an IDT, but again it's only a simple linear solution to a complex non-linear problem.

Hm, I still want to try out some "university research project" style methods. I guess not with my own spectral data; I don't have access to any of those instruments.


Quote from: Andy600 on February 02, 2020, 05:18:11 PM
If sensor manufacturers would only provide the spectral data we'd be in a much better place, but I can't see the likes of Canon, Sony, Fuji, Nikon etc. sharing this information. On the rare occasions where I have dived into IDT creation with spectral data, the end result (albeit only matrix-based) was different but no better than an official IDT. It just had 'slightly different' color issues.

What is stopping them from releasing this data, though? I must be missing something obvious :/ I don't really understand how it would harm their business.

Spectral response should become the standard way of describing a camera's colour in metadata, so software can use whatever conversion method it wants... I don't see what camera companies have to lose. It would be so great.

Andy600

Quote from: ilia3101 on February 02, 2020, 07:26:18 PM

Hm, I still want to try out some "university research project" style methods

You should. I would recommend starting here:


https://acescentral.com/t/rawtoaces-calling-all-developer-types/1048/63

https://github.com/ampas/rawtoaces

Quote: I guess not with my own spectral data; I don't have access to any of those instruments.

Most don't have the equipment. It's very specialist.

You can find spectral measurements for several cameras online. There are measurements for the 5D Mk II and Mk III for sure, but I don't have the link. It's probably linked in the rawtoaces thread (link above).


Quote: What is stopping them from releasing this data, though? I must be missing something obvious :/ I don't really understand how it would harm their business.

Spectral response should become the standard way of describing a camera's colour in metadata, so software can use whatever conversion method it wants... I don't see what camera companies have to lose. It would be so great.

Totally agree but..


Competition. Sensor technology is a hot topic, and these big corporations don't want to give away anything that might give the competition insight or an advantage that could eat into an already diminishing market share.

The first BMD cameras used off-the-shelf sensors and there is spectral data available for some of those. You could also look at the likes of ON Semiconductor https://www.onsemi.com/support who provide development tools for their sensors. You might find something useful there.

Danne

The way I see it, ACES is a high-end color gamut that is becoming standard for many high-end movie corporations. So after white balance and exposure are corrected, ACES is in other words to be treated like any other colorspace conversion, only with a much wider gamut? So no magic here really, and without proper monitors you'd better just stick with sRGB/Rec.709 :P.

ilia3101

Quote from: Andy600 on February 02, 2020, 09:58:09 PM
Competition. Sensor technology is a hot topic, and these big corporations don't want to give away anything that might give the competition insight or an advantage that could eat into an already diminishing market share.

I see. Canon hiding the secrets behind "cAn0N c0LoUrS".

Quote from: Andy600 on February 02, 2020, 09:58:09 PM
The first BMD cameras used off-the-shelf sensors and there is spectral data available for some of those. You could also look at the likes of ON Semiconductor https://www.onsemi.com/support who provide development tools for their sensors. You might find something useful there.

I thought Blackmagic's first two cameras (BMCC and BMPCC) had awful colours. Am I the only one? I think they used some industrial sensor not intended for cinematography. Maybe lookup-table corrections could really help them out.



Quote from: Andy600 on February 02, 2020, 09:58:09 PM
You should. I would recommend starting here:


https://acescentral.com/t/rawtoaces-calling-all-developer-types/1048/63

https://github.com/ampas/rawtoaces

"rawtoaces-calling-all-developer-types" I like the sound of that :D

I have a private github repo where I have been slowly writing spectral colour code. Some time in the near future I hope to create some IDTs with it, but it's not quite there yet. It's been on my mind for a long time.

I will share any results when I finally have something. Maybe if it's good, rawtoaces could add it.



Quote from: Andy600 on February 02, 2020, 09:58:09 PM
Most don't have the equipment. It's very specialist.

You can find spectral measurements for several cameras online. There are measurements for the 5D Mk II and Mk III for sure, but I don't have the link. It's probably linked in the rawtoaces thread (link above).

I have seen that thread with spectral data for the 5D3 and 5D2, and also some data here: https://github.com/ampas/rawtoaces/tree/master/data/camera

What about projecting a spectrum onto a white surface (with a prism or something) and using that to measure spectral response? I could calibrate it first by comparing against the existing AMPAS spectral data using my 5D2 or 5D3, then measure other cameras by relative difference.

allemyr

Quote from: Danne on February 03, 2020, 12:19:20 AM
The way I see it, ACES is a high-end color gamut that is becoming standard for many high-end movie corporations. So after white balance and exposure are corrected, ACES is in other words to be treated like any other colorspace conversion, only with a much wider gamut? So no magic here really, and without proper monitors you'd better just stick with sRGB/Rec.709 :P.

I think the whole point of ACES is to match color from different cameras without much effort. It's not about a wider gamut. A wide gamut means nothing if there isn't a good workflow for colors. I don't use ACES with ML, though. I am, as always, finding my own way to interpret footage. I can't say I have found a solution for ML RAW footage yet, but I will keep evolving my work with it.

Danne

It's all about gamut. Eye perception. ACES is supposed to do this better than other spaces. Why else would we use it?
https://z-fx.nl/ColorspACES.pdf

Page 37:
Quote: Here I've expanded the ACES graphic to explain a bit more about the way the workflow is applied in practice. We start with the shoot, so we have the scene again, but now shot with different cameras. Or maybe it's multiple scenes, or green screens and plates, you name it. Each camera converts the light in the scene to image data in a Scene Referred color space, a Storage Space. Now ideally all the rushes would get converted to ACES using the IDTs and stored for archiving, but in the real world most likely the data is stored in its original storage space.

So, next up is the editing department. The rushes get converted to ACES as an intermediate step and perhaps a look is applied. The rushes are then rendered to a Display Referred space suitable for offline editing, and the look is saved in a LUT file. What's important is that this look is saved without the RRT/ODT embedded, so anyone else can apply the look as well!

After the edit is done the data that made the cut is conformed. From the conforming station visual effects shots are exported in ACES. The VFX department imports the shots and converts them to ACEScg internally to work on. Because all the data was converted to ACES using the correct IDTs the amount of work lost in trying to match different cameras is minimised. Any looks that were applied are sent to VFX, in the form of the LUT that was saved before. Any CG elements are rendered out and saved in ACES or ACEScg if possible, or otherwise in Linear sRGB, and the CG is composited. As you can see both CG artists and compositors view their work through an RRT/ODT, sRGB (D60 sim.) assuming they are working on sRGB monitors. It is very important the same View LUTs are used, otherwise the CG will look completely different to the compositor than it did to the CG artist. Any previews that are needed in the edit can be rendered out with the look LUT applied. And finally completed shots are rendered out to ACES again and are sent back to the conforming station.

In the meantime the grading department will probably have started grading. It's likely the look that was used for editing will be a starting point for the grade. Graders should be working in a scene referred color space like ACEScc and will be looking through the appropriate RRT/ODT View LUT, like DCI-P3 or, if the projector is on target, just the RRT. Because VFX was using the same RRT with the ODT for their display the difference between what the artist saw and what the grader sees should be minimal. And again, any mismatch between different cameras should lead to a minimum of extra work thanks to the IDTs. When the grading is done the graded master should be rendered out to ACES so it can be used to make DCPs, TV versions and even film prints.

Of course, keeping track of the 'color space journey' requires a lot of attention and it's easy to get sidetracked with all the different departments involved, but I hope you can see why using ACES in your workflow can eliminate a lot of confusion and frustration!

Bender@arsch

UPDATE 15.02.2020 - see the first post

Ok, let's get back to my ACES workflow. I learned a lot about HDR grading and HDR-to-SDR cross conversion, so I revised my workflow and expanded it with HDR grading in DaVinci Resolve, the HDR-to-SDR conversion and the YouTube metadata tool.

Unfortunately I found out that I had made mistakes:
- I used EI 160, not EI 1800, in my videos -> incorrect contrast
- I understood the problem between the applications AE, Premiere and DaVinci. -> I think I need to buy a better display ^^ I have an old sRGB monitor that cannot display Rec.709 (gamma 2.4). So I fixed the After Effects settings: if you set Layer 6 to Input Space: Input - Generic - sRGB - Texture, you don't need to uncheck Display Color Management, and you will now see what you grade. But here is my problem: in AE I now see the correct preview, but in Premiere (the scopes look better) and in DaVinci I see a slightly flatter version; yet if I upload this to YouTube, I see the same image as in AE. I think AE shows me sRGB while Premiere and DaVinci show Rec.709 (gamma 2.2 and 2.4). Last time I set the input to sRGB (instead of Rec.709) in DaVinci to fix this; this time I can set the correct gamma.
- Luther, you're right: Rec.2020 is a much better idea if I want to HDR-grade my footage (bottleneck). After grading in Rec.709 I can simply change it to Rec.2020 and render it out.
- incorrect HDR grading, because I didn't understand how to control HDR to SDR -> but now I know ;)

-> So I re-uploaded all the videos... again... The SDR version looks similar, but the HDR version is now correct.


@Andy600
Thanks for the detailed answer. You have a lot of knowledge about ACES and other stuff.

Quote: The transform to ACEScg, being the output in the first layer and the input in the second, is a null operation. You can simply go from Rec.709 display (in) to Rec.709 (out).

I tried it out -> it doesn't work.

The only thing I don't understand is Layer 3 and its Output Space. Normally the correct choice is ACEScg, but there is a magenta color error in the blue tones -> I don't know why. With Canon Rec.2020 it is the same. -> I compared it with a ColorChecker.


---------------------------------------------
03.03.2020 - small update

I found some problems in the dark areas -> distortion and gaps. => I changed the AE settings, Layer 1 and Layer 6. Now it looks correct and has much better contrast... But I won't update my videos again^^

Bender@arsch

UPDATE 19.03.2020 - see the first post

After many tests and with my newly gained knowledge, I had to change a few things again. The most frustrating thing is HDR: HDR grading on an SDR monitor is very complex. But I found "a way". Now the HDR version looks like I wanted it, and I can leave the HDR-to-SDR conversion to YouTube.

I don't understand why I can't simply grade with my HDR TV on my usual graphics card. And buying a $200 BMD I/O device is a little bit risky, because I don't know whether it works with After Effects. However, I found a way, and the result is really good now.

Side information: I did a lot of testing to find the best codec for HDR. ProRes 4444 XQ is really good, but sometimes there is a failure in the green tones (it shows small black dots -> only a few pixels). This does not happen with 16-bit EXR or TIFF rendering, but TIFF and EXR files are more than 3 times bigger. There is also a big difference when I look at the data in DaVinci Resolve's scopes (fewer colors in the Rec.2020 view in the green area). However, I don't see much difference with my own eyes on an HDR or an SDR monitor, except for the small black dots. Maybe someone understands this bug.

Here is an example (Rec.2020 1000 nits preview on an SDR monitor).

I've already updated my videos again. This time, I hope, for the last time^^

Pls enjoy the last changes and try it out.