Encoding for UHDTV and ACES workflow [?]

Started by bpv5P, October 12, 2017, 12:44:04 AM



bpv5P

Following the discussion and some tips from other users, I have some questions for people smarter than me:

Working with 3D LUTs in ACES is a bit more complex than I thought. The correct practice seems to be to use an LMT (Look Modification Transform). So, is FilmConvert using an LMT to work correctly with ACEScct, or is it just a fancy curve? If it is, in fact, converting colors with 3D LUTs, then FilmConvert is not good practice while in ACES, right?

Although people from the high-end industry don't seem to recommend using LUTs for primary color grading (because they are "ultimate-power-level Baselight users"), I find them very handy to work with, since they save so much time. Some people seem to be trying to use CTL already, but I don't know how good those are.

Also, what's the best way to output these ACES projects for the new 4K TVs these days? Hybrid Log-Gamma is already supported on many TVs, it seems, so is Rec.2020 (ICtCp) already a thing? What codec should be used? HEVC 10-bit 4:4:4? Do YouTube and Vimeo support these?
As for submitting to film festivals, should DCI-P3 and DPX be used?

Anyway... does anyone know what the current best practices are for grading and encoding raw files (CDNG and others)? I'm not taking VFX into consideration here, of course. All this is a bit complex and I'm just a guy with no money trying to do a good technical job.
It seems the industry has always been about companies trying to sell their products as the best thing ever created, while the actual benefits are non-existent. ACES seems to be of real benefit, though, enough to change my workflow entirely if necessary.

reddeercity

I guess I should start with the basics of ACES. You have links that jump around and don't really help; I think they just make it more complex and confusing.

We are only concerned with the very last pipeline:



It's really very simple: Linear Scene -> Camera -> Raw Video/CDNG -> IDT -> ACES -> RRT -> ODT -> Video Monitor, or RDT -> SMPTE Reference Projector -> Theater Reproduction
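If it helps, here is a minimal code sketch of that chain (every function is just a placeholder for the real transform; it only shows the ordering, not an actual implementation):

# Minimal sketch of the viewing chain above. Every function is a placeholder
# for the real ACES transform; only the ordering matters here.
def idt(camera_rgb):
    """Input Device Transform: camera RGB -> scene-linear ACES."""
    ...

def lmt(aces):
    """Look Modification Transform / grade, applied in ACES space."""
    ...

def rrt(aces):
    """Reference Rendering Transform: scene-referred -> display-referred."""
    ...

def odt(display_linear, device):
    """Output Device Transform for the target display (Rec.709, P3, etc.)."""
    ...

def view(camera_rgb, device="Rec709"):
    aces = lmt(idt(camera_rgb))      # all the work happens here, in ACES
    return odt(rrt(aces), device)    # RRT + ODT are only for viewing/output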


Here is the color space of ACES; as you can see, it covers virtually the whole visible spectrum.



Here lies the first problem with ACES and Magic Lantern raw video (MLV): we have no IDT (Input Device Transform). Here is the one for the Canon C100: Canon_EOS_C100_IDT_Ver.1.0.zip
So without an IDT you have no way of getting an accurate color transform into ACES, which will cause major color problems.
Here are the canon-idts (very limited). Read this very good resource; I think it explains everything: acescentral resources

So if you want to be in a UHD color space you should be in Rec.2020, and if you're on a Mac the new ProRes 4444 XQ is designed with this (wide gamut) color space in mind.
Or on Windows, use MLVProducer: in basic CC use the BT.2020 color space and export as TIFF, though it would be better to use DPX. I guess if you ask AWPStar he could probably add DPX.

This may not be what you were looking for; if there were an IDT for our DSLRs I would be using ACES.
And one last thing: if you are exporting to 4K/UHD, how are you monitoring the color grade for the wider color space?
HD/Rec.709 will not cut it, as everything will look like it's clipping. I guess you could use a transform from BT.2020 to Rec.709, but it's a lot of work to get the color space setup right.
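For what it's worth, the BT.2020 to Rec.709 conversion itself is just a 3x3 matrix in linear light; here is a small sketch with the standard published matrix values, mainly to show why out-of-gamut colours end up clipping:

import numpy as np

# Linear-light BT.2020 -> BT.709 primary conversion (standard published matrix).
# Gamma must be removed before the multiply; colours outside the 709 gamut come
# out negative or above 1.0, which is the clipping mentioned above.
M_2020_TO_709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def bt2020_to_bt709(rgb_linear):
    rgb = np.asarray(rgb_linear, dtype=float)
    return np.clip(rgb @ M_2020_TO_709.T, 0.0, 1.0)

print(bt2020_to_bt709([0.0, 1.0, 0.0]))  # a pure BT.2020 green clips hard in 709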
That's all for now.

reddeercity

I may have found a way to get Canon DSLR raw into ACES color space without an IDT: user andyp6 on GitHub seems to have written some code to convert camera raw to ACES with a program called rawtoaces.
Quote
The RAW to ACES Utility, or rawtoaces, is a software package that converts digital camera RAW files to ACES container files containing image data encoded according to the Academy Color Encoding Specification (ACES) as specified in SMPTE 2065-1. This is accomplished through one of two methods.
1. CameraRAW RGB data (generated by libraw) is converted to ACES by calculating an Input Device Transform (IDT) based on the camera's sensitivity and a light source.
2. CameraRAW RGB data (generated by libraw) is converted to ACES by calculating an RGB to XYZ matrix using information included in libraw and metadata found in the RAW file.
The output image complies with the ACES Container specification (SMPTE S2065-4).

I haven't tried this out yet so I'm not sure about the results; there may be a sweet spot to maximize the DR, e.g. 100 ISO, 800 ISO, etc.

reddeercity

Some very interesting options for converting raw to ACES; they look very familiar (mlv_dump) ;)
https://github.com/ampas/rawtoaces#help-message
$ rawtoaces --help
rawtoaces - convert RAW digital camera files to ACES

Usage:
  rawtoaces file ...
  rawtoaces [options] file
  rawtoaces --help
  rawtoaces --version

IDT options:
  --help                  Show this screen
  --version               Show version
  --wb-method [0-4] [str] White balance factor calculation method
                            0=white balance using file metadata
                            1=white balance using user specified illuminant [str]
                            2=Average the whole image for white balance
                            3=Average a grey box for white balance <x y w h>
                            4=Use custom white balance  <r g b g>
                            (default = 0)
  --mat-method [0-2]      IDT matrix calculation method
                            0=Calculate matrix from camera spec sens
                            1=Use file metadata color matrix
                            2=Use adobe coeffs
                            (default = 0)
  --ss-path <path>        Specify the path to camera sensitivity data
                            (default = /usr/local/include/RAWTOACES/data/camera)
  --headroom float        Set highlight headroom factor (default = 6.0)
  --cameras               Show a list of supported cameras/models by LibRaw
  --valid-illums          Show a list of illuminants
  --valid-cameras         Show a list of cameras/models with available
                          spectral sensitivity datasets

Raw conversion options:
  -c float                Set adjust maximum threshold (default = 0.75)
  -C <r b>                Correct chromatic aberration
  -P <file>               Fix the dead pixels listed in this file
  -K <file>               Subtract dark frame (16-bit raw PGM)
  -k <num>                Set the darkness level
  -S <num>                Set the saturation level
  -n <num>                Set threshold for wavelet denoising
  -H [0-9]                Highlight mode (0=clip, 1=unclip, 2=blend, 3+=rebuild) (default = 2)
  -t [0-7]                Flip image (0=none, 3=180, 5=90CCW, 6=90CW)
  -j                      Don't stretch or rotate raw pixels
  -W                      Don't automatically brighten the image
  -b <num>                Adjust brightness (default = 1.0)
  -q [0-3]                Set the interpolation quality
  -h                      Half-size color image (twice as fast as "-q 0")
  -f                      Interpolate RGGB as four colors
  -m <num>                Apply a 3x3 median filter to R-G and B-G
  -s [0..N-1]             Select one raw image from input file
  -G                      Use green_matching() filter
  -B <x y w h>            Use cropbox

Andy600

Quote from: reddeercity on October 20, 2017, 08:12:26 AM
I may have found a way to get Canon DSLR raw into ACES color space without an IDT: user andyp6 on GitHub seems to have written some code to convert camera raw to ACES with a program called rawtoaces.
I haven't tried this out yet so I'm not sure about the results; there may be a sweet spot to maximize the DR, e.g. 100 ISO, 800 ISO, etc.

andyp6 is actually me ;)

I didn't write the code. Miaoqi from the Academy did.

Rawtoaces is basically a way of getting raw images into an ACES container (EXR files with AP0 primaries) using libraw. It's currently command-line only, but after a year of only supporting .cr2 and .nef files it can now use DNG files, though that's not in the main branch yet.

The great thing about rawtoaces is the ability to use a camera's spectral sensitivity and its response to the incoming spectrum (this includes the lens, any filtration and the SPD of the lighting) to derive and apply a Camera RGB to ACES matrix on a shot-by-shot basis. Alternatively you can choose Adobe coefficients or embedded metadata in the same way as you would with DCRaw.
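For anyone curious how that works in principle, here is a rough sketch of the spectral idea behind --mat-method 0 (purely illustrative, not the actual rawtoaces code; all the spectral arrays are assumed to be supplied by you):

import numpy as np

# Rough illustration of deriving a camera RGB -> ACES matrix from spectral data.
# The real tool adds constraints and ships its own datasets; treat this as a toy.
def derive_idt(cam_sens, cmfs, illuminant, reflectances, xyz_to_ap0):
    """cam_sens:     3xN camera spectral sensitivities (R, G, B rows)
       cmfs:         3xN CIE 1931 colour matching functions (x, y, z rows)
       illuminant:   N-sample SPD of the light source
       reflectances: MxN training surface reflectances
       xyz_to_ap0:   3x3 CIE XYZ -> ACES AP0 matrix"""
    radiance = reflectances * illuminant           # M x N stimuli reaching the camera
    cam_rgb  = radiance @ cam_sens.T               # what the camera records
    aces     = (radiance @ cmfs.T) @ xyz_to_ap0.T  # what ACES says those colours are
    cam_rgb  = cam_rgb / (illuminant @ cam_sens.T) # white balance to the illuminant
    # Least-squares 3x3 that best maps white-balanced camera RGB to ACES
    m, *_ = np.linalg.lstsq(cam_rgb, aces, rcond=None)
    return m.T                                     # apply as m @ [R, G, B]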

Spectrally profiling a camera properly is not something for amateurs (you need access to a monochromator in laboratory conditions). The current list of supported cameras is very small. I am adding QE data from 'found' research that will enable (unofficial) support for a few older models (5D Mk II, 50D, 60D), but that will take some time and still needs extensive testing. So far there is not enough variance between the results of using QE data compared to Adobe coefficients (not surprising, because Adobe Labs is a world leader in this) to warrant using rawtoaces over other methods (ACR, Resolve, MLVProducer etc.).

If you do try it you will likely need to use the metadata or Adobe coefficients, which you can already do in most raw apps.

Rawtoaces also doesn't currently support compression, so the output file sizes are huge, but that and other refinements will come. It is also currently Mac only.

I'll add a compiled version to my Github shortly. Sorry, looks like you need to build it yourself. Use this branch for DNG support: https://github.com/miaoqi/rawtoaces/tree/feature/DNG

@reddeercity - You don't need an IDT for raw data because it is assumed to be in XYZ space, which is easily transformed to ACES primaries ;)

@bpv5p - LMTs can include, but are not limited to, conventional luts. They come as CLF (Common LUT Format) and these can contain 1D and 3D luts, 3x3 matrices and offsets, ASC CDL and a few other components, but not expressions. There is nothing wrong with properly built luts, and for film emulation there is currently no other way to encapsulate the color crosstalk of film without sampling it and building a lut from the data.

FilmConvert Pro does indeed use luts and lots of them. I don't know the exact specifics of the app but an educated guess tells me each camera is profiled to the reflectance of a color chart (Colorchecker SG probably) over a range of exposures. The input XYZ values are then mapped to a set of output values derived from densitometry of real film.
Colorist working with Davinci Resolve, Baselight, Nuke, After Effects & Premier Pro. Occasional Sunday afternoon DOP. Developer of Cinelog-C Colorspace Management and LUTs - www.cinelogdcp.com

Danne

Great links shared.
Also, thanks again Andy600 for the insightful information. I was also under the impression that we lacked IDTs, ODTs and whatnot under this tree.

Andy600

IDTs are needed for images that have a defined RGB colorspace, i.e. a set of RGB primaries and a white point (with or without a transfer function).

Raw doesn't have a defined colorspace. The color matrices in a DNG (or metadata in cr2, nef etc.) provide a method to get the debayered pixels into a connecting colorspace (PCS), and that is typically CIE XYZ. Then it's just a simple 3x3 matrix to get to ACES AP0. You also need to chromatically adapt the white point, but that can be concatenated or added into the 3x3.
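To make the "3x3 plus chromatic adaptation" step concrete, here is a small sketch, assuming the source XYZ is relative to D65. The matrix and white-point values are the commonly published ones; double-check them against the ACES documentation before relying on this.

import numpy as np

# XYZ (D65-relative) -> ACES AP0: Bradford chromatic adaptation to the ACES
# white (~D60), then the published XYZ -> AP0 matrix.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

XYZ_TO_AP0 = np.array([[ 1.0498110175,  0.0000000000, -0.0000974845],
                       [-0.4959030231,  1.3733130458,  0.0982400361],
                       [ 0.0000000000,  0.0000000000,  0.9912520182]])

WHITE_D65  = np.array([0.95047, 1.0, 1.08883])  # source white in XYZ
WHITE_ACES = np.array([0.95265, 1.0, 1.00883])  # ACES white point in XYZ (~D60)

def bradford_cat(src_white, dst_white):
    s, d = BRADFORD @ src_white, BRADFORD @ dst_white
    return np.linalg.inv(BRADFORD) @ np.diag(d / s) @ BRADFORD

def xyz_d65_to_ap0(xyz):
    return XYZ_TO_AP0 @ bradford_cat(WHITE_D65, WHITE_ACES) @ np.asarray(xyz)

print(xyz_d65_to_ap0(WHITE_D65))  # a D65 white lands at roughly [1, 1, 1] in AP0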

If you import DNG files into a Resolve project that is set up for ACEScc or ACEScct you effectively have ACES data to work with. This can be exported to half-float EXR files (as used for high-end compositing and VFX work) without an ODT, or graded in ACES colorspace - the grade basically IS, or is part of, the LMT, and there can be multiple LMTs per shot. The LMT/grade always happens in ACES colorspace, which is huge and will not clip color (however a LUT might).
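For reference, the ACEScct working space is just a log encoding of the linear ACES values so the grading tools behave sensibly; here is a sketch of the forward transform, using the constants published in the ACEScct spec (verify before relying on it):

import math

# Scene-linear ACES -> ACEScct code values (forward encoding).
def lin_to_acescct(x):
    if x <= 0.0078125:
        return 10.5402377416545 * x + 0.0729055341958355  # linear toe segment
    return (math.log2(x) + 9.72) / 17.52                   # pure log above the toe

print(lin_to_acescct(0.18))  # 18% grey lands around 0.41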

You always view the image through the RRT (Reference Rendering Transform) and ODT (Output Device Transform). The ODT is relative to the playback device, i.e. for Youtube you would select sRGB, for HDTV Rec.709, DCI-P3 for cinema, but I wouldn't even attempt to use an ODT if you can't monitor on the actual device, because movies nearly always require a trim pass - and then there's device calibration to think about ;)

ACES is still very much a work in progress and not ideal for most work. If you work in a facility where many artists or even different companies all contribute to the same end product then ACES is a no-brainer, but if you're making videos for Youtube and use a lot of luts then it's not really for you.

Danne

Quote
If you import DNG files into a Resolve project that is set up for ACEScc or ACEScct you effectively have ACES data to work with. This can be exported to half-float EXR files (as used for high-end compositing and VFX work) without an ODT, or graded in ACES colorspace - the grade basically IS, or is part of, the LMT, and there can be multiple LMTs per shot. The LMT/grade always happens in ACES colorspace, which is huge and will not clip color (however a LUT might).
Nice.
Quote
ACES is still very much a work in progress and not ideal for most work. If you work in a facility where many artists or even different companies all contribute to the same end product then ACES is a no-brainer, but if you're making videos for Youtube and use a lot of luts then it's not really for you.
Probably won't be in my line of production work, but still an interesting read.

Kharak

Thank you, Andy.

Every post you make, I learn something new.

Though sometimes I need to chew through a bunch of acronyms. You should come with a legend in your signature. Hehe
once you go raw you never go back

Andy600

I try to explain in as simple terms as possible but I do tend to use acronyms a lot ::). Typing SPD is easier than spectral power distribution but I guess neither has much meaning if you are not very familiar with the subject. If I don't explain something clearly or simply enough just ask and I'll try to reword it if I can.

Teamsleepkid

I use the EOS M. I shoot raw. In DaVinci Resolve I set my input transform to Canon 7D because the EOS M isn't in there. I set my output transform to Rec.709. Is this completely wrong? Am I actually using ACES to grade if I do this? It seems like it looks better than the stock DaVinci color-managed space.
EOS M

bpv5P

Thanks to everyone contributing, great information here.
So, to compile the information (according to Andy600):
- An IDT is not necessary for raw data
- rawtoaces is not ready for use yet
Also:
- ProRes 4444 XQ is a good codec for Rec.2020 (although I think DNxHD is also a good option)

Quote
There is nothing wrong with properly built luts, and for film emulation there is currently no other way to encapsulate the color crosstalk of film without sampling it and building a lut from the data.

OK, but the current 3D LUTs in widespread use are not constructed to be ACES-to-ACES, right? So using them together with ACES would be a waste, since the LUTs would be limited to another space(?). Someone from the Academy explains this here (read the part on "matching LUT X"). But, as he stated:
Quote"Because empirical LMTs are derived from output-referred data, the range of output values from such an LMT is limited to the dynamic range and color gamut of the transform used to create the empirical LMT"
And:
Quote
Furthermore, empirical LMTs should certainly not be "baked in" to ACES data because that would destroy potentially useful dynamic range and color information contained in the original ACES-encoded imagery.

So, he continues, the right step while using ACES is not to apply "empirical LUT X" (a normal LUT converted to ACES), but to create an "analytic LMT" that is already made for the ACES range.



I don't have the know-how to do this, but one could probably make a lot of money by converting these Vision3 emulation LUTs into real analytic LMTs. Just a tip.  ;D

Also, note:

Quote
CLF does not support math formulas, so CTL used for even simple shaper functions would need to be sampled to LUTs for implementation in CLF. This is potentially limiting, but extending CLF, or adding support for algorithmic description of LMTs, is under consideration for upcoming ACES enhancements and extensions.
And Nick Shaw (ACES Mentor) agrees:
Quote
This is a very important point. Since in CLF shaper functions currently need to be implemented as 1D LUTs, and operations like hue modifiers as 3D LUTs, that does not fit the goal of making analytic LMTs which are not limited to a particular range.

So the correct approach would be to construct an LMT from scratch, using math formulas inside it as an "emulation" of something like Vision3. That way it would not be limited like the "empirical LUT X".

bpv5P

Also, Andy600, the rawtoaces tool seems to work only on DNG (besides cr2 and nef), so it would need to be automated to work over a folder of DNGs (potentially already processed with raw2dng), right? How would someone produce a sequence from these separate EXRs? I don't know how these work, that's why I'm asking these questions...
Thanks for all the information, btw.

Andy600

Quote from: Teamsleepkid on October 20, 2017, 06:50:38 PM
I use the EOS M. I shoot raw. In DaVinci Resolve I set my input transform to Canon 7D because the EOS M isn't in there. I set my output transform to Rec.709. Is this completely wrong? Am I actually using ACES to grade if I do this? It seems like it looks better than the stock DaVinci color-managed space.

It depends on how you want the color to look, i.e. accurate to scene colorimetry or a 'look'. Using the 7D IDT or any other IDT with your EOS M raw (cr2) data will be incorrect and will cause wrong colors and artifacts. IDTs have no effect on DNG files. If it looks better to you with the 7D IDT than without it, then that's an indication the embedded metadata may be incorrect. Resolve's implementation of raw images in ACES is also a bit 'odd'.

Quote from: bpv5P
- An IDT is not necessary for raw data

Correct.

Quote from: bpv5P
- rawtoaces is not ready for use yet

It depends. You can certainly play with it and it's not limited to single frames. See: https://github.com/miaoqi/rawtoaces/tree/feature/DNG#usage
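If you want to run it over a whole folder of DNGs, a trivial wrapper will do; something like this sketch (hypothetical folder name, flags taken from the --help output quoted earlier, and DNG input only on the feature/DNG branch):

import pathlib
import subprocess

# Hypothetical batch driver for rawtoaces over a folder of DNGs.
dng_dir = pathlib.Path("my_clip_dngs")      # e.g. raw2dng output; assumed name
for dng in sorted(dng_dir.glob("*.dng")):
    subprocess.run(
        ["rawtoaces",
         "--wb-method", "0",                # white balance from file metadata
         "--mat-method", "1",               # use the embedded colour matrix
         str(dng)],
        check=True)
# This should leave one ACES container EXR per frame, which can then be loaded
# as an image sequence.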


Quote from: bpv5P
Also:
- ProRes 4444 XQ is a good codec for Rec.2020 (although I think DNxHD is also a good option)

For archiving yes but not for commercial deliverables. HEVC, ProRes HQ or DNxHD/HR maybe but not XQ.


Quote from: bpv5P
OK, but the current 3D LUTs in widespread use are not constructed to be ACES-to-ACES, right?

Correct. They are mostly built for or are relative to Rec709 display.

Quote from: bpv5P
So using them together with ACES would be a waste, since the LUTs would be limited to another space(?). Someone from the Academy explains this here (read the part on "matching LUT X").
So, he continues, the right step while using ACES is not to apply "empirical LUT X" (a normal LUT converted to ACES), but to create an "analytic LMT" that is already made for the ACES range.

I wouldn't say it's a waste as such but you have to ask yourself why not stick with YRGB workflows if you want to use lots of luts? If you are making deliverables for several outputs then ACES is a useful color management system and luts, although limiting, can still be used.

Empirical LMTs, i.e. PFEs or baked look luts, are perfectly OK to use as long as you are aware of the limitations, one being that the lut is built for a specific output (i.e. Rec709/P3). This will limit the dynamic range if, for instance, you need HDR output. The statement about not baking luts into ACES means simply not baking the look into the EXR files. The look can be baked into the final output.

Quote from: bpv5P
I don't have the know-how to do this, but one could probably make a lot of money by converting these Vision3 emulation LUTs into real analytic LMTs. Just a tip.  ;D


Also, note:
And Nick Shaw (ACES Mentor) agrees:
So the correct approach would be to construct an LMT from scratch, using math formulas as an "emulation" of something like Vision3, so it's not limited like an "empirical LUT X".

In their simplest form, analytical transforms are actually fairly straightforward to code, but emulating film color and density will still depend on luts for a while yet. We already have unlimited coding resources beyond the capabilities of LMTs, and so far no one has cracked the math enough to emulate the full chemical processes. It's fun trying though ;)
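To illustrate what "straightforward to code" means, here is a toy, purely formula-based look: CDL-style slope/offset/power plus a saturation tweak, with made-up numbers (a real film emulation is far more involved):

import numpy as np

# Toy analytic look: CDL-style slope/offset/power followed by a saturation
# adjustment. The numbers are invented purely for illustration.
def cdl(rgb, slope, offset, power):
    out = np.asarray(rgb, dtype=float) * slope + offset
    return np.sign(out) * np.abs(out) ** power   # mirrored power, avoids NaNs on negatives

def saturation(rgb, sat, luma_weights=(0.2126, 0.7152, 0.0722)):
    luma = np.dot(rgb, luma_weights)
    return luma + sat * (np.asarray(rgb) - luma)

def toy_look(rgb):
    return saturation(cdl(rgb, slope=1.05, offset=-0.01, power=1.1), sat=0.9)

print(toy_look([0.18, 0.18, 0.18]))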

I'm not sure how much of CTL Blackmagic Design has included in Resolve's ACES implementation (I haven't really pushed the boundaries yet) but hue modifiers in an LMT are an interesting and versatile aspect.

bpv5P

Quote from: Andy600 on October 21, 2017, 01:16:38 AM
It depends. You can certainly play with it and it's not limited to single frames. See: https://github.com/miaoqi/rawtoaces/tree/feature/DNG#usage

OK, but the document doesn't specify the output format. Is it a debayered OpenEXR?
I don't have a Debian system here to compile it, but I could do it and create a binary for everyone, so we here on the ML forum could play with it.
Also, if it is an OpenEXR, when I load it in some software (Resolve, for example), will the working color space automatically be ACES (it's in the EXR metadata), or do we have to configure it? If not, what's the purpose of this conversion (besides the better accuracy of "camera spectral sensitivities and illuminant spectral power distributions")?

Quote
For archiving yes but not for commercial deliverables. HEVC, ProRes HQ or DNxHD/HR maybe but not XQ.

Thanks. HEVC seems a good option for streaming and the others better for non-streaming.
I'm still waiting for the big players to look into Daala+Opus, though.

Quote
Correct. They are mostly built for or are relative to Rec709 display.

But that's the point. If the final product will be displayed in a Rec.2020 space but the LUT is already limited to Rec.709, using this LUT would be a waste, since the final product would not use the full capability of Rec.2020, wouldn't it?

Quote
I wouldn't say it's a waste as such but you have to ask yourself why not stick with YRGB workflows if you want to use lots of luts? If you are making deliverables for several outputs then ACES is a useful color management system and luts, although limiting, can still be used.

Yes, but I'm talking hypothetically here. I personally don't need all this quality, because most of the work our clients upload goes to streaming websites and the final users watch on cheap smartphones, but I try to check from time to time what the best workflow is today (taking into consideration the relevance to the workflow, not the 32bit 1% red shit we discussed in other threads).

Quote
The statement about not baking luts into ACES means simply not baking the look into the EXR files. The look can be baked into the final output.

I understand, but this is a view from on-set production, which needs a realtime view of the "close to final" product, right? My questions are about the use of LMTs as a tool in the final color grading process, not as a realtime look...


PS: Sorry if some of you don't understand the things I write; English is not my mother tongue...

Andy600

Quote from: bpv5P on October 21, 2017, 02:57:36 AM
OK, but the document doesn't specify the output format. Is it a debayered OpenEXR?

Yes. The ACES interchange format does not hold Bayer data like a DNG does. It's the ACES container format: half-float EXR with ACES AP0 primaries.

Quote from: bpv5P
I don't have a Debian system here to compile it, but I could do it and create a binary for everyone, so we here on the ML forum could play with it.

Great. I'll work out how to package the OSX version.

Quote from: bpv5P
Also, if it is an OpenEXR, when I load it in some software (Resolve, for example), will the working color space automatically be ACES (it's in the EXR metadata), or do we have to configure it? If not, what's the purpose of this conversion (besides the better accuracy of "camera spectral sensitivities and illuminant spectral power distributions")?

The EXR files do contain metadata about primaries, but this does not auto-configure the working space of your NLE or color grading app. You have to put it into ACES yourself. The metadata works to transform the pixels from AP0 primaries to AP1. AP0 is used only for interchange, AP1 for everything else. AP1 primaries are never written, i.e. ACES is stored only with AP0 primaries.
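For completeness, that AP0 -> AP1 step is again just a 3x3 in linear light (the ACEScc/ACEScct log encoding is then applied on top for the working space); a sketch with the commonly published matrix values, to be verified against the official ACES transforms:

import numpy as np

# ACES2065-1 (AP0) -> AP1 primaries, linear light. Both share the ACES white
# point, so no chromatic adaptation is needed here.
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

def ap0_to_ap1(rgb):
    return AP0_TO_AP1 @ np.asarray(rgb, dtype=float)

print(ap0_to_ap1([1.0, 1.0, 1.0]))  # white stays at ~[1, 1, 1]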

The purpose is a robust way to convert raw data to ACES files. Spectrally derived color rendering is much more accurate, but essentially rawtoaces is currently doing what DCRaw, Resolve, After Effects etc. can do, with the addition of using QE sensor data where available.

Quote from: bpv5P
But that's the point. If the final product will be displayed in a Rec.2020 space but the LUT is already limited to Rec.709, using this LUT would be a waste, since the final product would not use the full capability of Rec.2020, wouldn't it?

Not exactly. It depends on the specific lut and how carefully it was made. Well-made luts can be rescaled and transformed to output in another colorspace. I've done just that - converted a Rec709 lut for P3 with no visible loss. In most cases you would probably get better results resampling the PFE as part of an analytical lut, but it really depends on the actual lut.

Quote from: bpv5P
Yes, but I'm talking hypothetically here. I personally don't need all this quality, because most of the work our clients upload goes to streaming websites and the final users watch on cheap smartphones, but I try to check from time to time what the best workflow is today (taking into consideration the relevance to the workflow, not the 32bit 1% red shit we discussed in other threads).

I understand your point and yes, ACES may be overkill for a lot of users, but it's going to be common in coming years. One of the key selling points of ACES is the ability to just switch ODTs; if you have done multi-format work you will know how frustrating and time-consuming regrading or doing trim passes is. There are new ODTs in the works covering smartphones, iPads, new TV technologies (Dolby Pulsar etc) and this will only make life easier. If you have archived your images using ACES or scene-referred log it means you can go back in a few years and easily render new video for the standards of the day.

Quote from: bpv5P
I understand, but this is a view from on-set production, which needs a realtime view of the "close to final" product, right? My questions are about the use of LMTs as a tool in the final color grading process, not as a realtime look...

Looks should never be baked into ACES files, as that defeats much of what ACES is about. LMTs can be applied in real time on set and yes, they are used for dailies and more, but LMTs should be thought of as glorified metadata that accompanies the clean source material (the EXR files). Anyone in the chain (editorial, color, vfx etc) can then make use of this metadata (the LMTs, ASC CDL, etc) in the exact same way as it was originally viewed. You would then bake it into your video, but the original ACES files remain untouched.

Quote from: bpv5P
PS: Sorry if some of you don't understand the things I write; English is not my mother tongue...

Your English is good!  :)

Wayne H

Hi, I used the ACES colour space for the first time and I'm really pleased with the results. There is a colour shift, but in my humble opinion it does feel more cinematic. Here's a sample I uploaded to YouTube in glorious 4K. Crop mode, 11-8bit lossless, 3584x1320, DaVinci Resolve. ENJOY. lol.  https://www.youtube.com/watch?v=GLzfYci7pZ8


timbytheriver

@Andy600

"LMTs should be thought of as glorified metadata that accompanies the clean source material"

Like Cineform Active Metadata – on Academy-Acid?

I really hope they consulted David Newman http://cineform.blogspot.co.uk/ first! :)
5D3 1.1.3
5D2 2.1.2

reddeercity

Quote from: timbytheriver on October 24, 2017, 10:19:56 AM
I really hope they consulted David Newman http://cineform.blogspot.co.uk/ first! :)
Explain please. This has nothing to do with either Magic Lantern or ACES. It's a very bad, shameless plug for GoPro and its workflow (Cineform).
So once again, what's the relevance to this thread?

timbytheriver

@reddeercity If you'd bothered to read my post in any detail you'd have noticed the '@Andy600' preface to my post. I wasn't addressing you in that post.

I have no reason to promote GoPro, nor any professional connection with them. In fact the Cineform I refer to is the Cineform of yesteryear – from way before GoPro's acquisition. If you are familiar with the Cineform wavelet codec's active metadata – how it holds colour-grading information without baking it in – you'll understand the parallel I was drawing with what Andy600 was describing about ACES LMTs.



Andy600

My use of the word "metadata" (a set of instructions) is probably not the best analogy here, because we typically think of metadata as being embedded in the image container. ACES LMTs are not embedded but can accompany the ACES files through the image pipeline. I believe Sony Imageworks use meta identifiers to dynamically link certain ACES clips to specific transforms, CDLs and LMTs, but these are still independent of the actual images.

timbytheriver

@Andy600 Thank you for contextualising the use of the term 'metadata' in your post, Andy!  :)

bpv5P

Sorry for the late reply, Andy600:

Quote from: Andy600 on October 21, 2017, 05:22:50 AM
Spectrally derived color rendering is much more accurate but essentially rawtoaces is currently doing what DCRaw, Resolve, After Effects etc can do but with the addition of using QE sensor data if available.

You said you're trying to add QE data for Canon models using "found" research, since to really measure it you would "need access to a monochromator in laboratory conditions", right?
May I ask how you are finding this research? Some Canon sensors are from Sony (not all of them; maybe the Axiom guys know better), from what I know, so maybe they have specifications on some website, but I think variations could be introduced even between sensors from the same batch in the same factory.
There are also other points; for example, the demosaicing algorithm adds color artifacts (by the way, what I've read about the Foveon X3 made me very hopeful that, in future, the industry will adopt those). And we are not even considering the physical factors, such as the distance people watch these images from, the ambient luminance, etc.

Anyway, if you figure out the QE data of some Canon cameras, will you release it as open source in the future?

Quote
There are new ODTs in the works covering smartphones, iPads, new TV technologies (Dolby Pulsar etc) and this will only make life easier.

Yes, that would be amazing.


To refine my other comment above:
Quote
I'm still waiting for the big players to look into Daala+Opus, though.

I did some research and they are actually looking into it. It's called AOMedia Video 1 or AV1. It already outperforms HEVC and many others. Here is a comparison:
http://wyohknott.github.io/image-formats-comparison/lossy_results.html

And here, using SSIMULACRA and google's butteraugli:
https://encode.ru/threads/2814-Psychovisual-analysis-on-modern-lossy-image-codecs?p=54616&viewfull=1#post54616

The project called Pik is trying to go even further, although the techniques applied in AV1 seem much more interesting to me than building a compression algorithm just to converge on a constructed psychovisual metric (butteraugli).
FLIF seems like a good algorithm for lossless compression. It could be used for a future digital intermediate format, better than the (now open-sourced) Cineform or ProRes (I don't know if the decoding speed is high enough for this, but it would be interesting to try)...

Andy600

Quote from: bpv5P on October 26, 2017, 11:34:38 AM
You said you're trying to add QE data for Canon models using "found" research, since to really measure it you would "need access to a monochromator in laboratory conditions", right?

Yes, and if you have ever seen one of these setups you will know just how complex they are. A CamSpecs system is another 'cheaper' option, but it still costs around 10k. At the lowest end there are cheap kits and online educational resources for spectral profiling, but they're not really useful for precision camera calibration. https://spectralworkbench.org/

Quote from: bpv5P
May I ask how you are finding this research?

Google :)

Here's the biggest data set I have found online: http://www.gujinwei.org/research/camspec/db.html (click on the database link). Remember, this is someone's research so you would require permissions from the author and The University of Tokyo to publish anything that uses the data. It is also incomplete for the purposes of rawtoaces and would require extrapolating between 380-400nm and 720-780nm i.e. not ideal when you are dealing with such precision.

Quote from: bpv5P
Some Canon sensors are from Sony (not all of them...

Canon make all their own DSLR sensors AFAIK.

Quote from: bpv5P
...maybe the Axiom guys know better), from what I know, so maybe they have specifications on some website...

I don't know. Ask them :)

I very much doubt Canon has published QE data for any of their sensors. I have never seen anything official.


Quote from: bpv5P
but I think variations could be introduced even between sensors from the same batch in the same factory.

Yes, and that is a very important point when it comes to sensor calibration. Ideally this should be done individually for each camera, but in practice the tolerances are small enough not to present any significant color issues between sensors from the same camera model. This is how Adobe do it, i.e. they profile a camera and it's this 'ground truth' that is used to derive the non-white-balanced xy coordinates for all cameras with the 'same' sensor.

Any color calibration beyond that should only ever be done relative to one specific sensor (i.e. your camera) and illuminant (the shot lighting) because using the calibration (i.e. a colorchecker, it8 etc) derived for any other camera will likely increase the mean color error in your camera relative to no calibration.


Quote from: bpv5P
There are also other points; for example, the demosaicing algorithm adds color artifacts (by the way, what I've read about the Foveon X3 made me very hopeful that, in future, the industry will adopt those). And we are not even considering the physical factors, such as the distance people watch these images from, the ambient luminance, etc.

As I understand it, demosaicing does add artifacts, but it doesn't significantly affect the collection of QE data because the measurement uses very narrow bandwidths (5-10nm) and isn't sampling individual pixels. Sensor and shot noise is more of a problem, but it can be dealt with through dark frame subtraction.

I agree about Foveon sensors but sadly I can't see it happening. Sensor research looks to be going in other directions: https://www.bloomberg.com/news/articles/2017-10-24/sony-s-big-bet-on-sensors-that-can-see-the-world


Quote from: bpv5P
Anyway, if you figure out the QE data of some Canon cameras, will you release it as open source in the future?

I doubt I will be spectrally profiling any cameras soon. I did a couple using a CamSpecs Express system a few years ago, but it really wasn't worth the hassle and cost compared to using Adobe's coefficients. If I did, it almost certainly would not be open source, because the cost of acquiring and profiling so many cameras (even with a CamSpecs Express, which is quite simple to use) would be in the tens of thousands.

Quote from: bpv5P
To refine my other comment above:
I did some research and they are actually looking into it. It's called AOMedia Video 1 or AV1. It already outperforms HEVC and many others. Here is a comparison:
http://wyohknott.github.io/image-formats-comparison/lossy_results.html

And here, using SSIMULACRA and google's butteraugli:
https://encode.ru/threads/2814-Psychovisual-analysis-on-modern-lossy-image-codecs?p=54616&viewfull=1#post54616

The project called Pik is trying to go even further, although the techniques applied in AV1 seem much more interesting to me than building a compression algorithm just to converge on a constructed psychovisual metric (butteraugli).
FLIF seems like a good algorithm for lossless compression. It could be used for a future digital intermediate format, better than the (now open-sourced) Cineform or ProRes (I don't know if the decoding speed is high enough for this, but it would be interesting to try)...

There's a lot of interesting research and info to digest there. Even though I'm sure there are better options in development, I, for reasons of sanity :), tend to stick to currently supported standards. For each new codec or compression algorithm that gains traction there are a hundred that don't, and it's impossible to know which will succeed. I remember playing with HEVC years ago but nothing, bar the provided decoder, would read it, and it's the same with all these other developments. With that said, it's still something I'll take a look at when I have some free time, so thanks for sharing! :)

bpv5P

Quote from: Andy600 on October 26, 2017, 01:50:03 PM
Here's the biggest data set I have found online: http://www.gujinwei.org/research/camspec/db.html (click on the database link). Remember, this is someone's research so you would require permissions from the author and The University of Tokyo to publish anything that uses the data. It is also incomplete for the purposes of rawtoaces and would require extrapolating between 380-400nm and 720-780nm i.e. not ideal when you are dealing with such precision.

There's some good data right there. Thanks for sharing.

Quote
I don't know. Ask them

Actually it can be that simple. I know of some projects that just asked the company and were given the information. An example is the Debian and OpenBSD wifi drivers; Ralink just gave them the data.
I don't think this will happen with Canon, since we know the kind of business they do, but we could try.

Quote
Ideally this should be done individually for each camera

Yes. I think the manufacturing process itself should allow some kind of "checksum" of each sensor and automatically generate a metadata file that could be made available for download by the camera's serial number :P

Quote
for reasons of sanity :), tend to stick to currently supported standards

haha, good point.