Messages - Andy600

#51
Is what true?  ???

The curve? Yes. Look through the DNG SDK. Technically this tone mapping curve is a 'look' designed for print media and is not applicable to video but we are so used to how Adobe's ACR, Photoshop and Lightroom render raw images that we assume it to be correct.

ARRI, Canon, Sony, Blackmagic etc. all have their own version of tone mapping (think of the various manufacturers' 'Rec.709' looks, none of which match) and the DNG curve is simply another tone mapping, but relative to a specific camera. It's neither right nor wrong.

Any tonemapping you apply (lut, curve, LGG, CDL etc) can live under the display transform either to produce a 'look' or something more scientific such as matching defined AIM values of a color chart.

Resolve using Libraw? Yes. Look in the Resolve program folder and run Process Monitor to see the DLL being called. Libraw can be compiled to include the DNG SDK but Resolve's libraw.dll is too small for that. It also seems to use Libraw for Canon .cr2, which is a bit odd because Canon also has an SDK, but that might be due to implementation/GPU acceleration issues.
#52
Adobe Camera Raw and any app that uses the Adobe DNG SDK will usually render DNGs with a tone curve. The tone curve has the effect of increasing exposure by approximately 1 stop and adds contrast but the actual curve applied is more than just simple gain and is camera-specific.

Resolve's DNG engine (Libraw) does not add a tone curve so what you see in Resolve is just the result of the output transform (Rec709, sRGB etc) and will typically appear darker compared to ACR, Finder etc. Try setting Resolve to YRGB and the raw panel to Rec709 then add ~1 stop of exposure in Resolve.
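
If you prefer to do the compensation numerically rather than with the raw panel, remember that exposure in stops is just a gain on linear data. A minimal sketch (Python/numpy, the array name is hypothetical):

Code:
import numpy as np

def apply_exposure(linear_rgb, stops):
    # A gain on scene-linear data: +1 stop doubles the values, -1 stop halves them.
    return linear_rgb * (2.0 ** stops)

# e.g. brighten a linear float image by ~1 stop to roughly match ACR's default rendering
# img = apply_exposure(img, 1.0)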

Increasing raw exposure in Resolve does not increase noise but you may see it more in Resolve compared to ACR which adds some noise reduction by default.
#53
Quote from: bpv5P on October 26, 2017, 03:26:59 PM
Yes. I think the hardware manufacturing itself should allow some kind of  "checksum" of each sensor and automatically generate a metadata file that can be available for download with the serial number of the camera :P

In an ideal world maybe :D

'Off the shelf' sensors do usually come with detailed technical data but for the sensors in our Canon cameras - no chance! :(
#54
Quote from: bpv5P on October 26, 2017, 11:34:38 AM
You said you're trying to add QE data for Canon models, but using "found research", since that to really construct these you would "need access to a monochromator in laboratory conditions", right?

Yes and if you have ever seen one of these setups you will know just how complex they are. A CamSpecs system is another 'cheaper' option but still costs around 10k. At the lowest end there are cheap kits and online educational resources for spectral profiling but they are not really useful for precision camera calibration. https://spectralworkbench.org/

Quote from: bpv5P
May I ask you how are you finding these research?

Google :)

Here's the biggest data set I have found online: http://www.gujinwei.org/research/camspec/db.html (click on the database link). Remember, this is someone's research so you would require permissions from the author and The University of Tokyo to publish anything that uses the data. It is also incomplete for the purposes of rawtoaces and would require extrapolating between 380-400nm and 720-780nm i.e. not ideal when you are dealing with such precision.
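
To illustrate the extrapolation problem, here is a rough numpy sketch that pads a measured 400-720nm curve out to the full 380-780nm range. The data here is a stand-in (not real measurements), and simply holding the end values constant, as np.interp does, is exactly the kind of guesswork that hurts precision:

Code:
import numpy as np

# Stand-in for measured spectral sensitivities: 400-720nm in 10nm steps, one column per RGB channel
wl = np.arange(400, 730, 10)
sens = np.random.rand(len(wl), 3)

target_wl = np.arange(380, 790, 10)  # the full 380-780nm range mentioned above

# np.interp holds the first/last measured value constant outside the measured range -
# a crude extrapolation that introduces error at both ends of the spectrum.
extended = np.stack([np.interp(target_wl, wl, sens[:, ch]) for ch in range(3)], axis=1)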

Quote from: bpv5P
Some of Canon sensors are from Sony (not all of them...

Canon make all their own DSLR sensors AFAIK.

Quote from: bpv5P
...maybe the Axiom guys know better), from what I know, maybe they have specifications on some website...

I don't know. Ask them :)

I very much doubt Canon has published QE data for any of their sensors. I have never seen anything official.


Quote from: bpv5P
but I think that it would be possible variations to be introduced even in between sensors from the same batch in the same factory.

Yes and that is a very important point when it comes to sensor calibration. Ideally this should be done individually for each camera but in practice, the tolerances are small enough to not present any significant color issues between sensors from the same camera model. This is how Adobe do it i.e. they profile a camera and it's this 'ground truth' that is used to derive the non-white balanced xy coordinates for all cameras with the 'same' sensor.

Any color calibration beyond that should only ever be done relative to one specific sensor (i.e. your camera) and illuminant (the shot lighting) because using the calibration (i.e. a colorchecker, it8 etc) derived for any other camera will likely increase the mean color error in your camera relative to no calibration.


Quote from: bpv5P
There's also other points, for example, the demosaicing algorithm would add color artifacts (by the way, what I've read about Foveon X3 made me very hopeful that, in future, industry will adopt those). And we are not even thinking about the physical factors, such as the distance people are watching from these images, the ambient luminance, etc.

As I understand it, demosaicing does add artifacts but it doesn't significantly affect collection of QE data because the measurement uses very narrow bandwidths (5-10nm) and doesn't sample individual pixels. Sensor and shot noise is more of a problem but it can be dealt with through dark frame subtraction.
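
Dark frame subtraction itself is conceptually simple: average several frames shot with the cap on at the same settings, then subtract that average from the light frames. A minimal sketch (numpy, hypothetical arrays):

Code:
import numpy as np

def subtract_dark(light_frames, dark_frames):
    """Average the dark frames and subtract the result from each light frame."""
    master_dark = np.mean(dark_frames, axis=0)         # averaging suppresses random read noise in the darks
    corrected = light_frames - master_dark[None, ...]  # removes fixed-pattern / thermal signal
    return np.clip(corrected, 0, None)                 # avoid negative values after subtraction

# light_frames and dark_frames: float arrays of shape (n_frames, height, width)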

I agree about Foveon sensors but sadly I can't see it happening. Sensor research looks to be going in other directions: https://www.bloomberg.com/news/articles/2017-10-24/sony-s-big-bet-on-sensors-that-can-see-the-world


Quote from: bpv5P
Anyway, if you figure it out the QE data of some canon cameras, will you release this data as open source in future?

I doubt I will be spectrally profiling any cameras soon. I did a couple using a CamSpecs Express system a few years ago but it really wasn't worth the hassle and costs when compared to using Adobe's coefficients. If I did then it almost certainly would not be open source because the cost of acquiring and profiling so many cameras (even with a CamSpecs express which is quite simple to use) would be in the tens of thousands.



Quote from: bpv5P
To refine my other comment above:
I did some research and they are actually looking into it. It's called AOMedia Video 1 or AV1. It already outperforms HEVC and many others. Here is a comparison:
http://wyohknott.github.io/image-formats-comparison/lossy_results.html

And here, using SSIMULACRA and google's butteraugli:
https://encode.ru/threads/2814-Psychovisual-analysis-on-modern-lossy-image-codecs?p=54616&viewfull=1#post54616

The project called Pik is trying to go even further, although the techniques applied on AV1 seems much more interesting than build a compression algorithm just as a convergence to a constructed psychovisual metric (butteraugli), in my opinion.
The FLIF seems a good algorithm for lossless compression. It could be used for a future digital intermediate format, better than the (now open sourced) Cineform or ProRes (I don't know if the decoding speeds are higher enough for this, but that would be interesting to try)...

There's a lot of interesting research and info to digest there. Even though I'm sure there are better options in development, I tend, for reasons of sanity :), to stick to currently supported standards. For each new codec or compression algorithm that gains traction there are a hundred that don't and it's impossible to know which will succeed. I remember playing with HEVC years ago but nothing, bar the provided decoder, would read it and it's the same with all these other developments. With that said, it's still something I'll take a look at when I have some free time so thanks for sharing! :)
#55
My use of the word Metadata (a set of instructions) is probably not the best analogy here because we typically think of metadata as being embedded in the image container. ACES LMTs are not embedded but can accompany the ACES files through the image pipeline. I believe Sony Imageworks use meta identifiers to dynamically link certain ACES clips to specific transforms, CDL and LMTs but these are still independent of the actual images.
#56
Quote from: bpv5P on October 21, 2017, 02:57:36 AM
Ok, but the document doesn't specify the output format. Is it a debayered OpenEXR?

Yes. The ACES interchange format does not hold bayer information like a DNG. It's the ACES container format: half-float with ACES AP0 primaries.

Quote from: bpv5P
I don't have a debian system here to compile it, but I could do it and create a binary for everyone, so we here from ML forum could play with it..

Great. I'll work out how to package the OSX version.

Quote from: bpv5P
Also, If it is a OpenEXR, when I load it on some software (Resolve, for example), the working color space will automatically be ACES (it's on exr metadata) or we have to configure it? If not, what's the purpose of this conversion (if not for the better acuracy of "camera spectral sensitivities and illuminant spectral power distributions")?

The EXR files do contain metadata about primaries but this does not auto-configure the working space of your NLE or color grading app. You have to put it into ACES yourself. The metadata works to transform the pixels from AP0 primaries to AP1. AP0 is used only for the interchange. AP1 for everything else. AP1 primaries are never written i.e. ACES is stored only with AP0 primaries.
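
For reference, the AP0 to AP1 conversion is just a 3x3 matrix on linear values. A sketch with the coefficients as published by the Academy (quoted from memory - verify against the official transforms before relying on them):

Code:
import numpy as np

# ACES2065-1 (AP0) -> AP1 (ACEScg/ACEScc primaries), linear values, same white point
AP0_TO_AP1 = np.array([
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
])

def ap0_to_ap1(rgb):
    # rgb: (..., 3) array of linear AP0 values
    return rgb @ AP0_TO_AP1.T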

The purpose is a robust way to convert raw data to ACES files. Spectrally derived color rendering is much more accurate but essentially rawtoaces is currently doing what DCRaw, Resolve, After Effects etc can do but with the addition of using QE sensor data if available.

Quote from: bpv5P
But, that's the point. If the final product will be displayed on a Rec.2020 space, but the LUT is already limited to Rec.709, use this LUT is a waste since the final product will not use all the capability of Rec.2020, would it?

Not exactly. It depends on the specific lut and how carefully it was made. Well made luts can be rescaled and transformed to output in another colorspace. I've done just that - converted a Rec709 lut for P3 with no visible loss. In most cases you probably would get better results resampling the PFE as part of an analytical lut but it really depends on the actual lut.
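
Conceptually, rebuilding a display lut for another output space means wrapping it with colorspace conversions and resampling the grid. A rough sketch, where sample_lut, p3_to_rec709 and rec709_to_p3 are hypothetical helper functions rather than a real API:

Code:
import numpy as np

def rebuild_lut_for_p3(sample_lut, p3_to_rec709, rec709_to_p3, size=33):
    """Resample a Rec709-referred look so it can be applied to P3 material.

    sample_lut(rgb) applies the original Rec709 lut (e.g. trilinear lookup);
    p3_to_rec709 / rec709_to_p3 are the wrapping colorspace conversions."""
    grid = np.linspace(0.0, 1.0, size)
    new_lut = np.zeros((size, size, size, 3))
    for i, r in enumerate(grid):
        for j, g in enumerate(grid):
            for k, b in enumerate(grid):
                rgb_709 = p3_to_rec709(np.array([r, g, b]))           # bring the P3 grid point into the lut's space
                new_lut[i, j, k] = rec709_to_p3(sample_lut(rgb_709))  # apply the look, return to P3
    return new_lut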

Quote from: bpv5P
Yes, but I'm talking hypothetically here. I personally don't need all this quality, because most of our work our clients upload on screaming websites and the final users watch on cheap smartphones, but I try to look from time-to-time what is the best workflow today (taking into consideration the relevance to the workflow, not the 32bit 1% red shit we discussed on other threads).

I understand your point and yes, ACES may be overkill for a lot of users but it's going to be common in coming years. One of the key selling points of ACES is the ability to just switch ODTs; if you have done multi-format work you will know how frustrating and time-consuming regrading or doing trim passes can be. There are new ODTs in the works covering smartphones, iPads, new TV technologies (Dolby Pulsar etc) and this will only make life easier. If you have archived your images using ACES or scene-referred log it means you can go back in a few years and easily render new video for the standards of the day.

Quote from: bpv5P
I understand, but this is a view from the set production that needs realtime view of the "close to the final" product, right? My questions are based on the use of LMTs as a tool for final color grading process, not as a realtime look...

Looks should not be baked into ACES files ever as that defeats much of what ACES is about. LMTs can be applied real-time on set and yes, it is used for dailies and more but LMTs should be thought of as glorified metadata that accompanies the clean source material (the EXR files). Anyone in the chain (editorial, color, vfx etc) can then make use of this metadata (the LMTs, ASC CDL, etc) in the exact same way as it was originally viewed. You would then bake it into your video but the original ACES files remain untouched.

Quote from: bpv5P
ps. Sorry if some of you don't understand things I write, english is not my mother language...

Your English is good!  :)
#57
Quote from: Teamsleepkid on October 20, 2017, 06:50:38 PM
i use eos m. i shoot raw. in davinci i set my input transform to canon 7d because eos m isn't in there. i set my output transform to rec709. is this completely wrong? am I actually using aces to grade if i do this? seems like it looks better than just stock davinci color managed space.

It depends on how you want the color to look i.e. accurate to scene colorimetry or a 'look'? Using the 7D IDT or any other IDT with your EOS M raw (cr2) data will be incorrect and will cause wrong colors and artifacts. IDTs will have no effect on DNG files. If it looks better to you with the 7D IDT than without it then it's an indication that the embedded metadata may be incorrect. Resolve's implementation of raw images in ACES is also a bit 'odd'.

Quote from: bpv5P
- IDT is not necessary in raw data

correct

Quote from: bpv5P
- rawtoaces is not ready for use yet

It depends. You can certainly play with it and it's not limited to single frames. See: https://github.com/miaoqi/rawtoaces/tree/feature/DNG#usage


Quote from: bpv5P
Also:
- ProRes 4444XQ is a good codec for Rec.2020 (although I think DNxHD also is a good option)

For archiving yes but not for commercial deliverables. HEVC, ProRes HQ or DNxHD/HR maybe but not XQ.


Quote from: bpv5P
Ok, but these current 3dluts that have widespread use are not constructed to be ACES-to-ACES, right?

Correct. They are mostly built for or are relative to Rec709 display.

Quote from: bpv5P
So, use them together with ACES would be a waste, since the Luts would be limited to other space (?). Someone from academy explain this here (read on "matching LUT X"). But, as he stated:And:
So, he continues, the right step while using ACES is not to apply "empirical LUT X" (a normal LUT converted to ACES), but to create a "Analytic LMT", that is already made for ACES range.




I wouldn't say it's a waste as such but you have to ask yourself why not stick with YRGB workflows if you want to use lots of luts? If you are making deliverables for several outputs then ACES is a useful color management system and luts, although limiting, can still be used.

Empirical LMTs i.e. PFEs or baked look luts are perfectly ok to use as long as you are aware of the limitations, one being that the lut is built for a specific output (i.e. Rec709/P3). This will limit the dynamic range if, for instance, you need HDR output. The statement about not baking luts into ACES means simply not baking the look into the EXR files. The look can be baked into the final output.

Quote from: bpv5P
I have no know-how to do this, but one can probably do a lot of money if he manages to convert these Vision3 emulation LUTs to real Analytic LMTs. Just a tip.  ;D


Also, note:
And Nick Shaw (ACES Mentor) agrees:
So, the correct would be to construct, from the scratch, using LMT math formulas an "emulation" for something like Vision3, so it's not limited as a "Empirical LUT X".

In their simplest form, analytical luts are actually fairly straightforward to code but emulating film color and density will still be dependent on luts for a while yet. We already have unlimited coding resources beyond the capabilities of LMTs and so far no one has cracked the math enough to emulate the full chemical processes. It's fun trying though ;)
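
As an illustration of how simple an analytic component can be, here is a sketch of an ASC CDL-style slope/offset/power adjustment in Python - roughly the kind of thing an LMT can carry, though a real LMT would be expressed in CTL or CLF rather than Python:

Code:
import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """ASC CDL-style adjustment: out = (in * slope + offset) ** power, then a saturation term."""
    rgb = np.asarray(rgb, dtype=np.float64)
    out = np.clip(rgb * slope + offset, 0.0, None) ** power
    luma = out @ np.array([0.2126, 0.7152, 0.0722])   # Rec709 luma weights, per the CDL convention
    return luma[..., None] + saturation * (out - luma[..., None])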

I'm not sure how much of CTL Blackmagic Design has included in Resolve's ACES implementation (I haven't really pushed the boundaries yet) but hue modifiers in an LMT are an interesting and versatile aspect.
#58
I try to explain in as simple terms as possible but I do tend to use acronyms a lot ::). Typing SPD is easier than spectral power distribution but I guess neither has much meaning if you are not very familiar with the subject. If I don't explain something clearly or simply enough just ask and I'll try to reword it if I can.
#59
IDTs are needed for images that have a defined RGB colorspace i.e. a set of RGB primaries and a white point (with or without a transfer function).

Raw doesn't have a defined colorspace. The color matrices in a DNG (or metadata in cr2, nef etc) provide a method to get the debayered pixels into a connecting colorspace (PCS), typically CIE-XYZ. Then it's just a simple 3x3 matrix to get to ACES AP0. You also need to chromatically adapt the white point but that can be concatenated or added into the 3x3.
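
A conceptual sketch of that chain, assuming you already have a CameraRGB-to-XYZ matrix (from the DNG metadata) and a chromatic adaptation matrix; the XYZ-to-AP0 coefficients are quoted from memory from the ACES documentation, so verify them before relying on this:

Code:
import numpy as np

# CIE-XYZ (ACES white point) -> ACES AP0 (from the ACES docs - verify before use)
XYZ_TO_AP0 = np.array([
    [ 1.0498110175,  0.0000000000, -0.0000974845],
    [-0.4959030231,  1.3733130458,  0.0982400361],
    [ 0.0000000000,  0.0000000000,  0.9912520182],
])

def camera_rgb_to_aces(rgb, cam_to_xyz, cat):
    """rgb: debayered, white-balanced camera RGB.
    cam_to_xyz: 3x3 from the DNG color matrices; cat: 3x3 chromatic adaptation to the ACES white point.
    All three matrices can be concatenated into a single 3x3 ahead of time."""
    m = XYZ_TO_AP0 @ cat @ cam_to_xyz
    return rgb @ m.T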

If you import DNG files into a Resolve project that is set up for ACEScc or ACEScct you effectively have ACES data to work with. This can be exported to half-float EXR files (as used for high end compositing and VFX work) without an ODT or graded in ACES colorspace - the grade basically IS or is part of the LMT and there can be multiple LMTs per shot. The LMT/grade always happens in ACES colorspace which is huge and will not clip color (however a LUT might).

You always view the image through the RRT (Reference Rendering Transform) and ODT (Output Device Transform). The ODT is relative to the playback device, i.e. for Youtube you would select sRGB, for HDTV Rec709, for Cinema DCI-P3, but I wouldn't even attempt to use an ODT if you can't monitor on the actual device because movies nearly always require a trim pass - then there's device calibration to think about ;)

ACES is still very much a work-in-progress and not ideal for most work. If you work in a facility where many artists or even different companies all contribute to the same end product then ACES is a no brainer but if you're making videos for Youtube and use a lot of luts then it's not really for you.
#60
Quote from: reddeercity on October 20, 2017, 08:12:26 AM
Will maybe found a way to get Canon DLSR Raw to ACES color space without a IDT , user andyp6 on GitHub seems has written some code to convert Camera Raw to ACES with program called  rawtoaces
I haven't tried this out yet so I'm not sure on the results , there may be a sweet spot to maximize the DR e.g. 100 ISO , 800 ISO etc. ..... .

andyp6 is actually me ;)

I didn't write the code. Miaoqi from the Academy did.

Rawtoaces is basically a way of getting raw images into an ACES container (EXR files with AP0 primaries) using libraw. It's currently only command line but after a year of only supporting .cr2 and .nef files it can now use DNG files though it's not in the main branch yet.

The great thing about rawtoaces is the ability to use a camera's spectral sensitivity and its response to an influx spectrum (this includes the lens, any filtration and the SPD of the lighting) to derive and apply a Camera RGB to ACES matrix on a shot by shot basis. Alternatively you can choose Adobe coefficients or embedded metadata in the same way as you would with DCRaw.

Spectrally profiling a camera properly is not something for amateurs (you need access to a monochromator in laboratory conditions). The current list of supported cameras is very small. I am adding QE data from 'found' research that will enable (unofficial) support for a few older models, 5D MkII, 50D, 60D but that will take some time and still needs extensive testing. So far there is not enough variance between the results of using QE compared to Adobe coefficients (not surprising because Adobe Labs is a world leader in this) to warrant using rawtoaces over other methods (ACR, Resolve, MLVProducer etc etc).

If you do try it you will likely need to use the metadata or Adobe coefficients which you can already do in most raw apps.

Rawtoaces also doesn't currently support compression so the output file sizes are huge but that and other refinements will come and it is currently Mac only.

I'll add a compiled version to my Github shortly. Sorry, looks like you need to build it yourself. Use this branch for DNG support: https://github.com/miaoqi/rawtoaces/tree/feature/DNG

@reddeercity - You don't need an IDT for raw data because it is assumed to be in XYZ space which is easily transformed to ACES primaries ;)

@bpv5p - LMTs can include but are not limited to conventional luts. They come as CLF (Common Lut Format) and these can contain 1D and 3D luts, 3x3 matrices and offsets, ASC CDL and a few other components but not expressions. There is nothing wrong with properly built luts and for film emulation there is currently no other way to encapsulate the color crosstalk of film without sampling it and building a lut from the data.

FilmConvert Pro does indeed use luts and lots of them. I don't know the exact specifics of the app but an educated guess tells me each camera is profiled to the reflectance of a color chart (Colorchecker SG probably) over a range of exposures. The input XYZ values are then mapped to a set of output values derived from densitometry of real film.
#61
Raw Video Postprocessing / Re: MLVProducer
August 16, 2017, 01:21:52 PM
Quote from: Prokopios on August 15, 2017, 07:54:42 PM
One of the best programs I've seen. Great stuff.
      I was wondering if anyone else has this "problem". When you preview the MLV files in the viewer it looks nothing like the DNG or CDNG files that are exported. Any other file that is exported eg. ApProRes, DNxHD, even tiff sequence  is 99% of what you preview in MLVProducer. In the viewer you get a really cinematic picture but the export looks nothing like the picture in the viewer. Here are 3 snapshots if you don't mind taking a look.
I am using the latest version on win 10, Dell Precision  T7610, Intel Xeon CPU 3.4GHz 8 cores.

Because MLV Producer is not color managed and leaves it up to you to apply tone mapping and basic primary color corrections.
#62
Raw Video / Re: best format to publish raw video?
August 16, 2017, 01:05:55 AM
Thanks for your endorsements guys. I appreciate your support for Cinelog but we have pulled this post far away from @favvomannen's original question so I ask everyone to keep the discussion on topic from here onwards.

I will present the MLVP vs Cinelog DCP test results in the Cinelog thread when ready.
#63
Raw Video / Re: best format to publish raw video?
August 13, 2017, 04:04:45 PM
Quote from: bpv5P on August 13, 2017, 01:11:30 AM
Hi Andy600.
First, let me assume my error: in fact, cinema does not use Rec.709 as the working space, but for the broadcast. My fault.

Thanks for acknowledging your error.

Just to be clear, my reply below is not intended to mock you or belittle you in any way. I simply have to correct some of your misunderstandings, as some of your opinions are incorrect but may be perceived by readers as being based in fact.


Quote from: bpv5P
But, here we go:
Maybe high-end cinema ("US-American film industry", as defined by technicolor), yes.

DCI-P3 (by Digital Cinema Initiatives) is the standardized colorspace of digital cinema projectors and was published by SMPTE. If you are delivering media for projection in a cinema (usually as a DCP - Digital Cinema Package in XYZ colorspace) it is transformed to DCI-P3 on projection. This is the universally accepted standard around the world. Technicolor don't really come into this other than their DI dept. adhering to the standard.

You can convert Rec709 to XYZ or DCI-P3 but you will have already clipped colors.

Quote from: bpv5P
ACR does not uses it by default.

I never said it did. ACR is a raw photo developer. I (Cinelog) retask the app to do something it was not specifically designed for.

Quote from: bpv5P
ACR needs you to define it for yourself and most people don't do it.

This is only when using ACR with Photoshop. The colorspace setting has no effect when dynamically linking ACR with After Effects (and then onto PPro if desired) which is one of the many reasons Cinelog exists ;)

Quote from: bpv5P
https://helpx.adobe.com/after-effects/using/color-management.html

Quote from: bpv5P
A quote from the adobe page above:

So, this page shows a least some people work on ProPhoto RGB (although I would also not advocate it), you've said "ProPhoto is not used for broadcast or Cinema".

Adobe state that Linear gamma/ProPhoto is a 'good choice' for digital cinema work and in the absence of anything else I would agree simply because ProPhoto has a significantly larger gamut than Rec709 (i.e. less clipping of colors).

Color clipping also occurs in ProPhoto and this is another reason for Cinelog - because Cinelog-C's scene-referred log transfer and virtual primaries keep the color information in-gamut for the transfer to After Effects and can effectively store 13.5+ f-stops in as little as 10bits. The transfer boundaries are too small for a true linear transfer which is why Adobe state 'linear gamma'.

The term 'Linear' is a much abused term. Linear (to light) and Linear (gamma) are not the same thing.
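
A quick way to see the difference: encoding a scene-linear value with a display transfer function (sRGB here) gives a very different number from the linear-light value, so '1.0 gamma' on display-referred data is not the same as being linear to light. A minimal sketch:

Code:
def srgb_encode(x):
    """Scene-linear -> sRGB-encoded value (the piecewise sRGB transfer function)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * (x ** (1.0 / 2.4)) - 0.055

# 18% grey: linear-light 0.18 encodes to roughly 0.46 in sRGB, i.e. middle grey lands
# near the middle of the display code range rather than at 18% of it.
print(srgb_encode(0.18))  # ~0.461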

Quote from: bpv5P
Now to the main point. The same page above says that you "To preserve over-range values, work in 32-bpc color for its high dynamic range.". I do not agree with that. Adobe is probably talking about VFX, and our point here is digital camera RAW files.

Again you are assuming anything HDR has to be VFX related. This is not true.

Any digital camera from the past 10 years that produces raw images can capture HDR images relative to most current displays (with the exception of maybe a Dolby Pulsar). Your own camera can likely take images of 10+ f-stops and in Rec709 this is HDR because the data cannot be represented linearly within that colorspace.

There are also many transient pixel-level operations in plugins that can produce over-range values - these would be clipped in an 8 or 16bit environment.

Regardless of this misunderstanding. What applies to VFX workflows generally also applies to raw image workflows because of the physical properties of light.

Technically speaking you can work in 8 bits if your deliverable is 8 bits in the same colorspace and any plugins or pixel manipulations in the app are handled in an internal float space, but you will always irretrievably lose information if your source material is 10/12/14bit raw. Obviously, a 14bit raw image will fit in a 16bit workspace but can still clip with some operations so switching to a 32bit float space is desirable to avoid any data loss and important if you are targeting other colorspaces (Adobe even say this).

Quote from: bpv5P
Most people watch in Rec.709. Displays used today are not capable of reproducing wide-gamut.


Just because most viewers may watch in Rec709 or sRGB does not mean that you, the cinematographer, DIT, colorist, editor or software engineer should not aim to maintain data integrity, color fidelity and dynamic range throughout the entire process until output. There are plenty of monitors, TVs and even smart phone screens that now support wider gamuts (including DCI) and uptake is accelerating. HDRTV is currently in a standards battle but it will eventually become the norm and I suspect sooner than expected.

By maintaining scene-referred masters, either raw, .EXR (ACES) or economical log you can future-proof your videos and take advantage of new HDRTV standards and displays.

Quote from: bpv5P
The concept of maintaining DR through bit depth seems totally bullshit for me. If you have configured a working color space, you've already limited that (unless you're working with tangential wide gamut, and that's not the case since you've said DCI-P3). Bit-depth precision may calculate more precisely the colors (although it would not be perceptible), but it cannot preserve more luma DR, it's already limited to 14bit, your greys is already there, you can't create this information from nothing.

What may seem like bullshit to you is the bread and butter of any reputable VFX house and colorists who have a basic understanding of color science.

Defining a gamut does not necessarily restrict dynamic range! Scene-referred colorspaces (ACES 2065-1, Log-C, Cinelog-C, SLog3/S-gamut3 etc) maintain the relationship to linear light (and thus maintain quantifiable linear dynamic range) and have linear RGB values that extend well beyond Rec709, sRGB, ProPhoto, Adobe 1998, DCI-P3 and other display-referred colorspaces.

It's not about preserving more, it's about preserving what is there and, where log colorspaces are concerned, preserving what is there while compressing the signal in a visually lossless manner for lower storage costs - with the convenient benefit of a film-like response to light and simple mathematical inversion to a linear-light representation of the data i.e. identical or close to identical pixel values of the debayered, white-balanced raw image.

A colorspace is the only way our eyes can make any visual sense of the captured data and so the data must be brought into a colorspace (preferably a bigger colorspace than it will ever be displayed in) in the least destructive way then transformed to a display space in order to reproduce the image as faithfully as possible in terms of perceptual color appearance and contrast, relative to the device (and its colorspace) used to display the image. In other words there is no getting away from colorspaces.

Your eyes have a colorspace and the idea is that digitally captured data should be transformed into a colorspace resembling what your eyes see (to the best of your display's capabilities). To be clear, it's not your particular eyes but the eyes of a set of observers who participated when the standard CIE models were derived. There will always be differences between how each person sees the same image or color for a multitude of physiological and technical reasons but we use CIE standards as a foundation in color science (for instance, as a model of how the eye perceives light (CIE-1931) or a connecting space i.e. CIE-XYZ etc). This at least tries to maintain some predictability and uniformity to everything we do in the digital cinema, video and photographic arts.

DR (Dynamic range) is only one part of this process. Obviously you cannot reproduce 15 stops of DR on a display that can at best display only 6-7 stops but you can manipulate the data in such a way as to map the higher dynamic range to something that looks perceptually correct to most viewers - this should be the very last thing you do in the processing pipeline because once that mapping happens there is no way to get the extended data back.
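
As a toy illustration of that mapping step, a simple global operator like Reinhard's x/(1+x) squeezes an unbounded scene-linear range into 0-1 for display. Real display rendering (the RRT/ODT, print emulations etc) is far more sophisticated, but the one-way nature of the step is the same:

Code:
import numpy as np

def reinhard_tonemap(linear_rgb, exposure=1.0):
    """Toy global tone map: compress scene-linear values into the 0-1 display range.
    Meant as the very last step - once quantized to the display's bit depth,
    the extended scene data can't be recovered."""
    x = np.asarray(linear_rgb) * exposure
    return x / (1.0 + x)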

Quote from: bpv5P
tl;dr you're talking about an idealistic POV about workflow, but people watch our content on realistic devices, that are not HDR, not precise and most people will give zero shit about a 0,2% shift in your red spectrum on their 4-inches Iphone screens, because most people can't even perceive this. Even on 4k displays you can feel it. I know because I've tested.
You all want to work with 32-float, wide-gamut, galactic-fucking-computer-expensive-workflow, go ahead, I'm not stopping you from anything. But I'll not follow this.

Related meme


That is quite a statement.

I am talking about a workflow that has been developed through years of research and is in use by anyone who actually cares about their work, regardless of whether it is for a tentpole Hollywood blockbuster or a Youtube video of your cats, seen on a phone screen or an IMAX screen.

That 0.2% 'shift' in the red spectrum you talk of may actually be the logo of a huge corporation in an advertisement across multiple media outlets. You may not see it on that phone but what happens when the ad is viewed on something better, something calibrated like a cinema screen? Also, there are already millions of wide gamut 4k devices in the hands of the public. The shift becomes more and more apparent the better these devices become and somehow these tiny, inconceivable differences start to matter.


There is a great deal of psychology involved with color perception and accurately translating color in a controlled, calibrated environment leads to less error down the line but that is a whole other topic.

Quote from: bpv5P
P.S: Another quote, this time from:
https://forums.creativecow.net/thread/2/1026156

Apart from that article being from 2012 and related to After Effects CS5 I don't understand the relevance of you highlighting it.

RED footage may have been brought into AE in Rec709 but a) the colorspace can be changed (hell, you can even convert it to Cinelog-C if you want ;) ) b) assuming the workspace is 32bit float the RED footage is held in an unclipped float space so no color is lost and c) most RED users and colorists use RED's own tools to produce intermediates anyway.
#64
Raw Video / Re: best format to publish raw video?
August 10, 2017, 01:42:05 PM
Quote from: bpv5P on August 10, 2017, 01:34:15 PM
I'm not interested in your product specifically or doing the "devils advocate" here, I just don't want people to spread false information. Remember that people will read this in future, through findings in search engines.

Of course. Which is why I try to remain factual, stand by what I say and acknowledge if/when I am wrong.

Quote from: bpv5P
But, if you show us the results of your comparison, and it really has noticiable better DR and color than Alexa-Log, I'll for myself advocate for Cinelog-C and say "I was wrong about your product" here.

and that I have no problem with :)

DR is only one part of it. You should also pay attention to color ;)
#65
Raw Video / Re: best format to publish raw video?
August 10, 2017, 01:36:28 PM
Quote from: bpv5P on August 10, 2017, 01:05:26 PM
Cinema works with Rec.709 or new Rec.2020. No one works with ProPhoto RGB.

Oh lord  ::) @bpv5P maybe you should step back and do some research before posting such nonsense.

Cinema does not use and never has used Rec709 or Rec2020 colorspaces. These are ITU HDTV and UHDTV broadcast standards. The standard Cinema colorspace currently in use is DCI-P3. ProPhoto is not used for broadcast or Cinema but photographers work in ProPhoto all the time.

The article may be about VFX but most of it applies to raw workflows because, in both, you are dealing with scene-referred linear imagery. Even when a raw image is developed, viewed and processed in a display space, maintaining a mathematical connection to linear light (a fundamental of physics), especially where color and apparent realism are important, is still very desirable.

Quote from: bpv5P
You're already working with Red.709. Your color is already limited. 32bit will not allow your to have "a lot more colors to choose". Also, your monitor is already limited too, unless you have professional HDR-prepared monitors and, even with that, most people watch videos on normal monitors.
It will also not give you "better highlight or shadow", because your footage is in 14bit, not an 32bit HDR.

This is only relative to images that have been rendered in Rec709. Raw images do not have this limitation and this is why you set your working space bit depth as high as possible to retain all the color and dynamic range of the material. This also applies to properly encoded, perceptually lossless log material.
#66
Raw Video / Re: best format to publish raw video?
August 10, 2017, 01:13:51 PM
I think there may be some confusion here about image bit depths, workspace bit depths and Adobe specific architecture.

While the DNG files are 14/12/10bit, ACR can export in 8 or 16bit and processes in half-float space. If you set ACR to 8bits you will always get a lossy image and likely some posterization. This may only become apparent when you start grading and pushing the image around. Always set ACR to 16bits when dealing with raw images and 16bit TIFFs. The setting is accessible when launching ACR from Photoshop or Bridge and is persistent.

Even though your image may be developed and stored or transferred in 8 or 16bits, setting After Effects workspace to 32bit float is important for the operation of some plugins. It is also essential for VFX and avoids clipping the signal path.

Yes, 32bit containers are used for HDR images but, unless I'm missing something, that is not what is being discussed here. Incidentally a 16bit float .EXR is perfectly capable of holding extreme HDR imagery with lossless precision over 30 stops (i.e. 10^9 - well beyond what any camera is capable of and beyond any HDR images you are likely to make) and much more economically than 32bit files, but all this is overkill unless you work in VFX and use it for interchange when sharing between artists and studios i.e. an ACES pipeline.
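
The 30-stop figure is easy to sanity-check: half-float normal values run from about 2^-14 up to 65504 (just under 2^16), i.e. roughly 30 doublings, not counting denormals. A quick check with numpy:

Code:
import numpy as np

info = np.finfo(np.float16)
stops = np.log2(float(info.max)) - np.log2(float(info.tiny))
print(info.tiny, info.max, round(stops, 1))  # ~6.1e-05  65504.0  ~30.0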

You can also store HDR in lower bit depths through log encoding and done properly you will maintain a relationship with linear light ;)

Quote from: bpv5P on August 10, 2017, 04:00:15 AM
Why would you do that? CDNG is raw, it will take no effect.

Maybe because he wants to develop the image in ACR!?

Quote from: bpv5P
No, he did not. No evidence.

Not in this topic but there is plenty of info in the Cinelog topic. As for 'evidence' see below.

Quote from: bpv5P
Would be really good to see it. Also, public, unaltered DNG's, so we can replicate your test...

Yes, and the source MLVs (I don't think MLVProducer can develop DNG image sequences!?) and of course detailed process information. That's what I mean by doing it properly  ::)


Honestly, with your level of interest I'm beginning to wonder if you are secretly working for us using reverse psychology ??? ;D
#67
Raw Video / Re: best format to publish raw video?
August 09, 2017, 12:00:51 PM
Quote from: bpv5P on August 08, 2017, 11:50:40 PM
Also, no need to buy Cinelog-C. The MLVProducer with Alexa-Log does basically de same, and the author could not provide evidence that his DCP is better than free alexa-log implementation.

@bpv5P - When have I ever said I 'could not' provide evidence?  ::)

As I said in a previous post, I am happy to publish comparisons but these things take time to do properly and I need to slot it in around my other work and commitments. I will publish comparative test results of Log-C produced in MLVProducer vs Log-C derived using Cinelog DCPs asap so you can see for yourself which is 'better'.
#68
Quote from: Danne on July 19, 2017, 02:17:21 AM
You can fit in 4096 points into the dcp profile which in itself is pretty neat. I never seen any dcp profiles from adobe using this. Was it ever intended for this?

No. Check out the DNG SDK for the basics. There are also some undocumented and often illogical things in ACR that dictate what you can and can't do with profiles.

Quote from: Danne on July 19, 2017, 02:17:21 AM
Now how to get hold of that conversion chart.


There is no conversion chart as such, only math. I started out with spreadsheets, then CTL, then custom Python scripts, but there is still a degree of manual intervention required for building and testing each profile.

Quote from: Danne on July 19, 2017, 02:17:21 AM
Would be pretty neat to have lets say cineon dcp in acr  :P

Cinelog-C is Cineon in terms of gamma already but Cineon without the gamut mapping of Cinelog-C can still clip color. I know because I have extensively modeled, tried and tested each and every log curve and full log colorspaces in ACR.
#69
@Danne - Linear in DNGPE only removes the tone curve so it's still producing a display-referred image and can only work display-referred. Try it with a very high DR image and you'll get clipping.

A lot of calculations are needed to get ACR to produce Cinelog-C colorspace for any particular camera (every camera is different) and it simply can't be done with DNGPE because the matrix controls will always snap relative to the internal processing space (Cinelog-C is outside of this) and the curve points are very limited with relatively coarse interpolation so transformed RGB values are not precise enough for an accurate log conversion. The Cinelog curve still uses ACR's interpolation but applied to 4096 control points.

Pulling highlights down with the Parametric curve in DNGPE is really just faking highlight recovery with no real gains, i.e. once you render with those settings you are just getting a flattened image with compromised highlights because you have stretched and/or compressed the spread of code values with no mathematical way to get them back. There will be large differences in the amount of information contained in one stop compared to the next - some stops will have more information and some much less than required - and this ultimately leads to less latitude but more chance of banding and other artifacts.

Generally speaking, a log conversion will spread the code values more efficiently and equally between all stops and, because you know the input and transformed code values, you can put them back with the inverse, anti-log formula - thus scene-linear (relative to light).
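
To illustrate the invertibility point, here is a toy log encode of scene-linear values: every stop gets the same share of code values, and because the formula is known the anti-log recovers the original linear data. Real curves like Cineon or Log-C add a black offset and a toe, but the principle is identical:

Code:
import numpy as np

STOPS = 16            # total exposure range the encoding covers
MIN_LIN = 2.0 ** -8   # linear value mapped to code 0

def lin_to_log(x):
    """Toy log encode: equal code-value spacing per stop of scene-linear exposure."""
    return (np.log2(np.maximum(x, MIN_LIN)) - np.log2(MIN_LIN)) / STOPS

def log_to_lin(y):
    """Exact inverse (anti-log), restoring the scene-linear values."""
    return 2.0 ** (y * STOPS + np.log2(MIN_LIN))

x = np.array([0.01, 0.18, 1.0, 8.0])
assert np.allclose(log_to_lin(lin_to_log(x)), x)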

With Cinelog profiles a known RGB value can be input, transformed with the profile and output in AE then reverted to the input value (relative to the CCT) using the math built into the inverse transforms. The gamut transform is calculated dynamically relative to the user-selected white balance settings, i.e. it is recalculated continually as you move the white balance and tint sliders, making it more accurate for the chosen white balance than a fixed CCT. I also use ARRI's recommended chromatic adaptation (CAT02), not Bradford, for the calculations. Beyond that I can't divulge anything else :)
#70
@bpv5P

I tried previously to answer your questions in an open and thoughtful manner (even offering you test conversions) yet you chose to take a very negative approach based only on your own assumptions with what I can only assume to be a mindset that is prejudiced against anything that is not open source. You also chose to question the integrity of 2 highly regarded members of this community with this: 

Quote from: bpv5P on July 18, 2017, 05:04:39 PM
But... I have to ask: is @Danne and @hyalinejim doing some astroturfing for you?

Certainly not! I have never solicited, used or manipulated responses from users, deceptively or otherwise. @Danne, @hyalinejim and others can speak for themselves on this but no users have ever benefited, in any way, through endorsing what I do. Their comments and views are their own.

Over time, Cinelog has received numerous endorsements privately from users, some of whom are highly regarded industry professionals and known for being highly critical but I chose not to use these comments even though I could for marketing purposes for the reasons I stated in my previous reply to you.

Quote from: bpv5P
I've noticed you're using some selling techniques (especially social proof and bandwagon bias).
There's nothing wrong trying to sell your products, but I don't like astroturfing and discourse manipulation, and as a open source community we should keep these things out of here.

I had to Google the marketing terms you used.

I agree with the last part of your statement (above) however, in the context of your post, there is a very obvious implication of dishonesty directed towards myself, Cinelog and the users you mentioned previously that I strongly deny and take issue with. I am not knowingly using any selling techniques. I do not make spurious claims and I am very open with my answers.

Magic Lantern is an open source project but using it or contributing to these forums has never precluded anyone from discussing or recommending commercial applications or products here except where those products have violated Magic Lantern's licensing or the terms and conditions of this message board.  The fact that the vast majority of users do use commercial software is evident.

Quote from: bpv5P
So, if you don't need to achieve the exact same colors between various cameras and don't need all the other stuff (luts and support), there's no advantages? Yes, it's our entire choise to feel we want/need it, but if there's no advantage, why would anyone waste money on it?

Sorry I don't quite understand your logic here. Why would you buy anything that you don't want? Aside from the things you mention (accuracy, luts, support etc i.e. some of the advantages) you should take another look at my previous reply and the responses of others as to why Cinelog is regarded as it is. In addition to that, it provides an effective scene-referred processing capability in an app (ACR) that is strictly display-referred and bypasses any requirement to use image-adaptive filters (that will cause flicker). That might sound like marketing spiel but it is factual.

Quote from: bpv5P
Since your choise is to keep color conversion linear, your two points of improvement can me the luma curve and color precision. Most people here don't need that precision in color, so on the luma curve there's no better dynamic range preservation compared to alexa-log (the version implemented on MLVProducer, for example)?

What is your assumption about 'dynamic range preservation' and 'alexa-log' based on exactly?

Alexa Log-C is for encoding the 16bit DGA signal from the Alexa's sensor and not an efficient use of the space for transcoding 14bit MLV (and becomes increasingly detrimental to 12 or 10 bit MLV as it spreads code values too far apart and can increase the visibility of banding).

When it comes to transcoding, my choice (dictated by color science and best working practices) is to keep initial color rendering strict and color manipulation to an absolute minimum, deferring color decisions to later in the pipeline. Basically retaining the maximum latitude in a known colorspace.

You again mention 'alexa-log' but what is that exactly? MLVProducer is a great app that can be used for everything (and there are several others too) but there are quality differences and often issues between raw video debayering and encoding with such apps compared to their commercial counterparts - else why would those exist, and why would people in their millions purchase them? You might answer with another one of the marketing terms I looked up, 'herd mentality', but I know quite a few artists, film makers, colorists and developers who might take offence at such a suggestion as they opt to use commercial tools simply because they get the job done without compromises. The free tools on offer often have shortcomings and, as I described in my last reply, the limitations in open source raw libraries can restrict or limit development.

I'm not detracting from any OS app developer because I know they can be as dedicated as commercial devs, and if you support them and their tools are good enough for you, then who is going to argue with that? Certainly not me.

Quote from: bpv5P
Again, don't get me wrong, I'm not trying to prejudice you or anyone possibly working with/for you. It's just that, if your product has no advantage over a free/open implementation, I think no one should buy it. I can think in a recent example like this: corporations were selling certificate authority for many time; "Let's Encrypt" implement a free/open implementation doing the exact (or better) same thing. Now everyone is going to Let's Encrypt. The same should happen with any product that does not do it's job. Contrary to what marketers say, quality is very important. You can do money with basically anything, but not everything keep itself on top of others if it has no advantages over these other alternatives.

I have stated quite clearly what the advantages are and the added value that comes with Cinelog. If you don't value that then simply don't buy it. 

Regardless of your initial statement above, you seem to have a jaded view towards what I offer but Cinelog is not 'Let's Encrypt' and there is no 'fake news' mentality at work here. Regardless of your insinuations I am perfectly happy to respond here to your questions and will always answer as clearly as I can, within the limits of protecting Cinelog IP. However, if you again choose to imply dishonesty or question the integrity of myself or other users without foundation, I will simply ignore you.
#71
Quote from: bpv5P on July 14, 2017, 02:27:57 AM
@Andy600 you have mentioned that Cinelog is based on AlexaLog. What's the difference between them and why should someone use Cinelog instead of free AlexaLog?

@bpv5P

That's a perfectly reasonable question. I could just say that Cinelog-C profiles provide the best raw image conversion ready for transcoding but that is subjective. There are technical and aesthetic reasons why I believe this is so but the full answer would be very long and likely involve a lot of image comparisons across a lot of apps (something I should probably do for the website). I tend to limit my own opinions to technical aspects and let users' real-world use of Cinelog in creating their images do the talking. It's ultimately about getting the best results but admittedly you do need a critical eye to see it sometimes, especially if the shot is average DR and well exposed.

The profiles are just one part of Cinelog-C. There are also our look luts (we are profiling new film stocks atm) and custom OCIO configurations, plus one2one support to factor into things. It's not free but it's entirely your choice if you feel you want/need something or not :)

I have heard it said "I tried xyz app but can't get as good/clean/nice an image as I get with Cinelog". While this is also subjective (and I hope they are not just describing the look of the flat log image lol) there is real math, implemented with a solid understanding of how the host app manages color, that is ultimately producing favorable user responses.

To answer your question better you'll need to be more specific about your 'AlexaLog' and where it comes from, i.e. which app is doing the conversion? Is it just Log-C gamma or the full colorspace? Every MLV app I have tested to date that offers 'AlexaLog', 'Log-C' or other log colorspaces seems to have its own interpretation that either does not use all (or any) of the published math or, if the math was used, is implementing or interpreting it incorrectly, or is otherwise being adversely affected by the libraries used to build the app, restricting/altering the output to something other than what it is labeled as. If you want to provide a sample of your AlexaLog transcode and the original raw file I'll happily convert it to Log-C via Cinelog-C for you to compare.

I have actually worked with and helped several devs and their open source raw apps over the last few years in an effort to bring greater color accuracy, provide math or help with other color science related issues but, as has proven to be the case each time, there has been some fundamental limitation on what can be achieved with the open source raw libraries used.

The Cinelog-C log curve is Cineon, not Log-C (although Log-C itself is based on Cineon - the 'C' stands for Cineon) and there is a good reason for this. Canon and ARRI use slightly different methods to describe and chart relative exposure (I need to dig out the actual math but from memory the base ISO of a Canon DSLR is 400 relative to the Alexa's 800ISO when using ARRI's methods. 100 or 200 depending on the model according to Canon methods) and by substituting Log-C for Cineon you effectively get a close exposure equivalent to what you would get from shooting the same scene on an Alexa or Amira camera at the same EI/Ev. There is a small +offset difference (~2/3 Fstop) but it means you can effectively use Alexa luts and presets just by pulling Cinelog-C exposure a little before the Alexa look/lut. The toe part of the Log-C formula is very specific for optimizing the Alexa's noise profile and not really useful for much else hence why I didn't just go with a Log-C curve (I did experiment with Log-C and many other curves but Cineon was optimal). Cineon log is also ubiquitous in color grading and VFX apps so it's easy to linearize and is also relative to print density i.e. ready-made for film print luts.
#72
Hi @Paul

3rd party profiles like Cinelog-C need to go into a User directory not the main Adobe profiles directory (this avoids non-native profiles being systematically deleted when upgrading Adobe Camera Raw versions).

For Windows the install path is typically

C:\Users\(Your user name)\AppData\Roaming\Adobe\CameraRaw\CameraProfiles

#73
Quote from: jean-paul34 on June 30, 2017, 10:47:10 AM
I am photographer and video maker and I have a problem with the new version (MLV_lite 12bits). Raw magic no longer work.
just a question
Where can I get raw2cdng 1.7.9. for Mac ?

raw2cdng is Windows only and the app you mention that no longer works is not supported here as it violates Magic Lantern GPL. Please look at the sticky topics in the 'Raw Video Postprocessing' section for suggestions.
#74
Quote from: D_Odell on June 28, 2017, 11:11:09 AM
Very useful for making editable footage! Great job!

I wonder if I'm not seeing of if you can implement the following?

  • I can't play the footage, pressing SPACE and nothing happens. Scrolling works though. Any clues?
  • I can't zoom in the image, for example, I record 3,5K sequences with my 5D3. Could you make it possible to zoom in to 100/200/400 %?
  • LUT option? Since I have several LUT with Cinelog color space, an option for applying a Lut prior to export mov would really enhance this app.

Al the best,
David

Edit: Sorry for not knowing about the re-write. Then maybe options for the future updates..

Please be aware that the colorspace options in this app are gamma only. This app currently cannot render images correctly in Cinelog-C colorspace. Your Cinelog-C luts will not work as intended and will have incorrect color.

AFAIK there is no realtime playback capability with this app.
#75
Post-processing Workflow / Re: fastcinemadng
April 27, 2017, 02:58:44 PM
Even though you can read/apply Cinelog profiles in RawTherapee or other apps, they will only produce Cinelog-C colorspace in Adobe Camera Raw because the profiles contain compensation for a limitation that is unique to ACR when it is used in conjunction with After Effects. Using the profiles in any other raw app (i.e. a raw app without that limitation, which is to say any raw app that isn't ACR) will introduce a new issue.

I'm not sure why you would even want to use a fixed colorspace management DCP anyway as your app is built on GPU accelerated shaders!?