Messages - Andy600

#1
Quote from: ilia3101 on February 02, 2020, 07:26:18 PM

Hm, I still want to try out some "university research project" style methods

You should. I would recommend starting here:


https://acescentral.com/t/rawtoaces-calling-all-developer-types/1048/63

https://github.com/ampas/rawtoaces

Quote
I guess not with my own spectral data, don't have access to any of those instruments.

Most don't have the equipment. It's very specialist.

You can find spectral measurements for several cameras online. There are measurements for the 5D MkII and MkIII for sure, but I don't have the link to hand. It's probably in the rawtoaces thread (link above).


Quote
What is stopping them from releasing this data though? I must be missing something obvious :/ I don't really understand how it would harm their business.

Spectral response should become the standard way of describing a camera's colour in metadata, so software can do whatever conversion methods it wants... I don't see what camera companies have to lose. It would be so great.

Totally agree, but...


Competition. Sensor technology is a hot topic, and these big corporations don't want to give away anything that might offer competitors insight or an advantage that could eat into an already diminishing market share.

The first BMD cameras used off-the-shelf sensors and there is spectral data available for some of those. You could also look at the likes of ON Semiconductor https://www.onsemi.com/support who provide development tools for their sensors. You might find something useful there.
#2
Quote from: ilia3101 on February 02, 2020, 04:15:35 PM
I see.

I think there needs to be more alternative raw processing outside of the "Adobe DNG SDK" world.

Of course.

I don't think this is really about Adobe vs the world in terms of color processing. They invented the DNG spec but there have been huge developments in raw capture and processing since.



Quote
This is interesting. Does anyone, or any software, do this?


In terms of ACES, IDTs are the responsibility of the camera manufacturer. Some (ARRI, Sony, Panasonic etc) provide IDTs as a transfer function and matrix solution. Aside from the occasional university lab producing spectral data for one or two cameras, there is only one company that seems dedicated to producing independent IDTs:

http://ismini.tvlogic.tv/en/wlp/cameraprofileidt.html

TVLogic obtained the business from Wowow (Japan), who purchased it from FujiFilm, so there is some pedigree behind it, but it's only relevant to their own hardware. I use them occasionally but have never had great results with the 3rd party IDTs they provide, so I opt for the generic colorspace IDT that the camera manufacturer provides.

Quote
Is a polynomial even the best option? What about a 2D lookup table in xy space, applied after a matrix - which will give a new x,y value and a Y multiplier for all input xy values. This table would be calculated based on spectral data. (It doesn't have to be xy, maybe something more uniform, but it has to be linear)

It would be like an extra level of correction after a matrix, could give accurate spectral locus and correct small shifts. What do you think?

Probably not.

I can't be 100% sure it was LUT-based, but I believe there were some efforts in this direction a while ago that never got off the ground. I suspect the costs vs benefits of measuring and then correcting for a specific camera's spectral deficiency are more suited to university research projects, astronomy etc. A monochromator alone costs tens of thousands.

I used a cheaper system, CamSpecs Express (still hideously expensive), a few years ago when helping develop color calibration for clients with self-built cameras. The device measured spectral response through a selection of band-pass and color filters to produce a matrix solution to XYZ. That seems to be another acceptable way of creating an IDT, but again it's only a simple linear solution to a complex non-linear problem.

A more practical solution for color calibration is what we already have: a ColorChecker, though it is limited to reflectance.

If sensor manufacturers would only provide the spectral data we'd be in a much better place, but I can't see the likes of Canon, Sony, Fuji, Nikon etc sharing this information. On the rare occasions when I have dived into IDT creation with spectral data, the end result (albeit only matrix-based) was different but no better than an official IDT. It just had 'slightly different' color issues.
#3
Resolve installs Libraw.dll for DNG support.

Resolve doesn't support camera profiles, forward matrices and several other things that are part of the specification. BMD implemented the CinemaDNG subset for the obvious reason that it's for moving images and their own original cameras supported CDNG. There would also be a processing hit if they supported everything included in the SDK.


#4
Quote from: ilia3101 on February 02, 2020, 01:45:08 PM
What are the basic differences between the two?


Basically this..

Resolve = Libraw = DCraw = limited DNG SDK support

Adobe = Full SDK implementation.


As I've said many times here, everything needed for color processing DNGs is in the DNG SDK ;)

Just because some of the specification is implemented in an app (e.g. Resolve) doesn't mean it is interpreted in exactly the same way as the Adobe interpretation.
#5
Quote from: Luther on February 02, 2020, 11:58:57 AM
I don't think this is accurate. The IDT contains spectral data, while the DNG will contain only a simple matrix. While it's true that you can use ACES color space in any image, ACES is more than just the color space, it's a color management system.

I know what ACES is and does. I consult at several animation studios, some use ACES pipelines ;)

In an ideal world we would measure each individual camera's spectral sensitivity and derive a polynomial solution but that is impracticable and doesn't take into account lenses, filters and a multitude of other influences.

Even when spectral data is available, it is typical to use it only to derive a matrix solution from the camera to the AP0 primaries. The Academy's own rawtoaces software does exactly this.
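For anyone curious what that fitting actually looks like, here's a minimal sketch of the idea in C. It assumes you have already integrated the spectral measurements into N training samples - cam[i] holding the camera's RGB responses and aces[i] the corresponding 'true' AP0 values (both hypothetical arrays, not rawtoaces' own code) - and solves the per-row least-squares normal equations for the 3x3 matrix.

Code:
#define N 190  /* e.g. one sample per training spectrum */

static void solve3x3(double A[3][3], double b[3], double x[3])
{
    /* Cramer's rule; fine for a well-conditioned 3x3 system */
    double det = A[0][0]*(A[1][1]*A[2][2]-A[1][2]*A[2][1])
               - A[0][1]*(A[1][0]*A[2][2]-A[1][2]*A[2][0])
               + A[0][2]*(A[1][0]*A[2][1]-A[1][1]*A[2][0]);
    for (int c = 0; c < 3; c++) {
        double T[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                T[i][j] = (j == c) ? b[i] : A[i][j];
        x[c] = (T[0][0]*(T[1][1]*T[2][2]-T[1][2]*T[2][1])
              - T[0][1]*(T[1][0]*T[2][2]-T[1][2]*T[2][0])
              + T[0][2]*(T[1][0]*T[2][1]-T[1][1]*T[2][0])) / det;
    }
}

/* Fit M (3x3) minimizing the sum over i of ||M * cam[i] - aces[i]||^2 */
void fit_matrix(double cam[N][3], double aces[N][3], double M[3][3])
{
    for (int row = 0; row < 3; row++) {   /* one output channel at a time */
        double AtA[3][3] = {{0}}, Atb[3] = {0};
        for (int i = 0; i < N; i++)
            for (int j = 0; j < 3; j++) {
                Atb[j] += cam[i][j] * aces[i][row];       /* A^T b */
                for (int k = 0; k < 3; k++)
                    AtA[j][k] += cam[i][j] * cam[i][k];   /* A^T A */
            }
        solve3x3(AtA, Atb, M[row]);
    }
}

rawtoaces adds training-spectra weighting and white point handling on top, but the core is this kind of regression.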


Quote
True, but you can easily convert Rec.2020 to any other display space without losing information. The same cannot be said for Rec.709 or P3.

True, Rec2020 encompasses the Rec709 and P3 gamuts. Grading in Rec2020 is definitely a good idea IF you have a capable, calibrated monitor, which, I think, the OP doesn't!?

As with my previous reply, going from Rec2020 to Rec709 will also (usually) require a trim pass. It's not as simple as switching the ODT.

Quote
If you're already using Alexa WideGamut, I agree. But going from Rec.709 processing to ACES is a huge jump in color quality.

How so? If we're talking about raw and you process it as float data, the output device (not accounting for capture quality) will dictate the 'color quality', whatever that is?
#6
Let's clear a few things up.

You don't need an IDT for raw files in Resolve's ACES environment.

Resolve will automatically debayer raw files into ACES colorspace when ACES color science is selected. DNG files should contain enough information to display the color correctly. If the color is wrong it means the embedded color data is incorrect or incomplete for the scene or even the camera itself.


Tip: always, ALWAYS set your camera white balance and exposure correctly for each scene and lighting change and, if you can, include a proper grey or white reference target at the beginning of each shot. It will save you a ton of effort and guessing later. Taking a still before pressing record will also provide a general color reference and tell you if the app producing the DNG files from your MLV has any issues.


ACES IDTs (Input Device Transforms) are for intermediate camera colorspaces (Log-C, S-Log, C-Log etc) and ADX film scans only. Inverse ODTs (Output Device Transforms) can be used as IDTs for monitor and workspace colorspaces if required.


I'll say again - DNG files, .cr2 files and most, if not all, raw file formats do not require any IDT for ACES. As long as the sensor has been accurately calibrated to map to XYZ it can be transformed to ACES AP0. The more calibration information included in the raw file, the better the color rendering and reproduction of captured light.


DNG Camera Profiles, or .dcp (not to be confused with Digital Cinema Packages), are a form of IDT, but they do not target ACES; they typically produce a final look (Adobe Color, Standard, Portrait etc). Cinelog-C profiles target an intermediate log colorspace, which is essentially a type of compression.

Cinelog-C in ProRes/DNxHD/HR etc can be used in Resolve's ACES environment with the Cinelog IDT (DCTL) or with an additional transform targeting a different, natively supported colorspace such as Log-C. You can also export ACES files (float EXR) by transforming Cinelog-C to ACES AP0 in After Effects and exporting to EXR. ACES data (AP0), being linear, should only be stored as float or half float in EXR containers to avoid any clipping.

After Effects does not write ACES EXR tags which may be an issue in some ACES apps but most assume EXR files to be AP0 by default.

Multiple transforms between exclusive, incompatible colorspaces are not a good idea. You will always introduce error and likely some clipping, hue rotation or other issues depending on the method used. Using OCIO as you have described is 'creative' but technically incorrect. I'm not saying the look is wrong, because that is subjective and a matter of taste.


Incidentally, doing this is adding an unnecessary step:

Quote
Adjustment layer 6 - OpenColorIO
Config. ACES 1.03
Input Space: Utility - Rec.709 - Display
Output Space: ACES - ACES - ACEScg
Adjustment layer 5 - OpenColorIO
Config. ACES 1.03
Input Space: ACES - ACES - ACEScg
Output Space: Output - Rec.709

The transform to ACEScg, being an output in the first layer and an input in the second, is a null operation. You can simply go from Rec709 display (in) to Rec709 (out).



ACES in After Effects is limited but can work provided you set it up properly and there are several ways depending on the source material.

Raw ingest and transcoding using a Cinelog-C profile requires additional steps to set up a linear After Effects workspace in order to export ACES EXR files. A different workspace setup is required for grading the ACES EXR files, although some Adobe color tools will not work as intended.

Resolve is much better for ACES, so the choice is to do everything in Resolve (from raw files, with or without transcoding) or to transcode either intermediate log ProRes (small) or ACES EXR (massive) files from After Effects. You can also work in a pseudo-ACES environment directly in After Effects using the ACES OCIO configuration for colorspace management (relative to the assigned ICC workspace colorspace), but you then get into a minefield and it's very easy to lose track of things without a constant A/B against a dedicated ACES environment like Resolve. This method is similar to how Nuke works with OCIO, but with the advantage of Adobe Camera Raw.

A good check to see if your ACES environment is setup and working correctly is to use these materials provided by ARRI:

https://www.arri.com/resource/blob/67438/a87188ffbb425d3f42d013793f767b93/2018-acesreferenceimages-data.zip



They also have some very useful and detailed write-ups with example configurations (Nuke) on ACES workflows:

https://www.arri.com/en/learn-help/learn-help-camera-system/camera-workflow/image-science/aces



Resolve and Adobe Camera Raw will interpret your DNG files slightly differently because they are built on similar but different architectures. They use different white balancing and highlight recovery methods, and the difference in demosaicing quality is debatable. Also, if the camera data in a DNG is incorrect or incomplete, a Camera Profile or ACR itself can override it with correct data; Resolve can only use whatever data is embedded in the DNG to reproduce color correctly. This goes for all Resolve environments, not just ACES.


Quote"best practice" now is to use Rec.2020 instead of Rec.709/P3


Best practice is to target whatever colorspace the intended playback device displays and only ever grade for the device colorspace(s) you can physically view i.e. your monitor(s).

Theoretically you should be able to just swap out the ODT and get the same result across multiple devices, but that's never the case. You should never blindly assume a project graded under the Rec709 ODT will translate when switched to a PQ ODT. In practice it always requires a trim pass on a calibrated HDR reference monitor to adjust levels, but going to HDR also opens up possibilities to display richer color and enhance details not visible in smaller display spaces. This invariably leads to an alternative, enhanced grade.

Lastly: do you really need to use ACES? I like it a lot and I can see the appeal for multi-cam, multi-format shows, CGI and collaborative, cross-platform workflows, but IMO it's overkill for most things, especially if you don't fully understand how it works and what it's for.





#7
There is no published spec that I know of but it's basically a high precision 1D lut concatenated with a high precision 3D lut. It's quite rare.

This specific lut will only work in BMD Film colorspace and, unlike other shaper+3D luts, it samples an additional transform in unbound linear space but maintains a linear greyscale in the 3D part. It's pretty complex to create; ordinarily this type of colorspace transform would require an additional 1D lut after the linear matrix transform to get from linear to log space. Cinelog (Resolve) does it all in one lut but at the expense of limiting the colorspace (as all luts do), albeit to a space far larger than any real-world colors exist in (similar to ACES). You don't lose anything, but technically speaking it does impose a limit because it's a lut.
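To make the mechanics less abstract, here's a rough sketch in C of how a shaper + 3D lut is generally applied: a 1D lookup per channel (taking linear data into a log-like shaper space) followed by trilinear interpolation into the cube. This is the generic mechanism only; the sizes and tables are placeholders, not Cinelog's actual (unpublished) format.

Code:
#define SH_SIZE 4096
#define CUBE    65

static float shaper[SH_SIZE];            /* 1D shaper, filled by the lut loader (not shown) */
static float cube[CUBE][CUBE][CUBE][3];  /* 3D table, indexed [b][g][r] here by convention */

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

static float apply_1d(float v)
{
    if (v <= 0.0f) return shaper[0];
    if (v >= 1.0f) return shaper[SH_SIZE - 1];
    float p = v * (SH_SIZE - 1);
    int   i = (int)p;
    return lerp(shaper[i], shaper[i + 1], p - i);
}

static void apply_3d(const float in[3], float out[3])
{
    float p[3]; int i[3];
    for (int c = 0; c < 3; c++) {        /* clamp and split into cell index + fraction */
        float v = in[c] < 0 ? 0 : (in[c] > 1 ? 1 : in[c]);
        p[c] = v * (CUBE - 1);
        i[c] = (int)p[c]; if (i[c] >= CUBE - 1) i[c] = CUBE - 2;
        p[c] -= i[c];
    }
    for (int c = 0; c < 3; c++) {        /* weighted sum of the 8 surrounding nodes */
        float v = 0;
        for (int dr = 0; dr < 2; dr++)
        for (int dg = 0; dg < 2; dg++)
        for (int db = 0; db < 2; db++) {
            float w = (dr ? p[0] : 1 - p[0]) * (dg ? p[1] : 1 - p[1]) * (db ? p[2] : 1 - p[2]);
            v += w * cube[i[2] + db][i[1] + dg][i[0] + dr][c];
        }
        out[c] = v;
    }
}

void apply_lut(const float rgb_in[3], float rgb_out[3])
{
    float shaped[3];
    for (int c = 0; c < 3; c++) shaped[c] = apply_1d(rgb_in[c]); /* 1D shaper first */
    apply_3d(shaped, rgb_out);                                   /* then the 3D cube */
}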

As I've mentioned before, it would probably be better to implement a proper CMS (ACES, OCIO etc) into MLVApp, as it would be more efficient and infinitely expandable. Lut color transforms, especially ones like this, are quite heavy in terms of file size and would require several GBs of luts to cover transforms to/from most typical colorspaces.
#8
It's not a normal 3D cube lut. It's a hybrid format specific to Resolve and can't be parsed in other apps. It also doesn't conform to the Adobe .cube lut specification although it is based, in part, on it.
#10
Quote from: zcream on October 05, 2018, 12:46:20 PM
Thanks for the 50d check-in. I don't have access to my 50d ATM, could someone kindly test on the 50d..

Sent from my Lenovo TB-8703F using Tapatalk


https://bitbucket.org/hudson/magic-lantern/commits/db4ee396f4a261d33688b528c39447682f871a07#general-comments
#11
Quote from: RÁTNEEK on September 14, 2018, 02:10:16 PM
I just read that BMD gave out source and all for their new Black Magic Raw codec and is available to developers.

Link: https://www.4kshooters.net/2018/09/14/ibc-2018-blackmagic-design-introduces-a-brand-new-blackmagic-raw-codec/

Do you think it is possible to implement it in your already more than awesome MLV APP?

As I understood one of the main differences is that you get one file instead of separate cDNG frames in folders, which are harder to move around disks and harder to read so you get slower previews because of data bottleneck there.

MASC and other developers of MLV APP thank you for this amazing tool.


BMD Raw is not an interchange or container format.

It is only recordable in the camera (and probably in new external BMD RAW recorders).

As with ARRI Raw, ProRes Raw etc, there is no BMD Raw output codec other than when trimming original BMD Raw files. You can't render to BMD Raw from Resolve.

I suspect this has come about as a way to reduce the licensing costs of incorporating ProRes - they simply get rid of the Apple codecs altogether. Smart move, but it could backfire if other NLE vendors don't support it. I wonder what this will do for official ProRes support in Resolve in the future, as BMD Raw is a direct rival to ProRes Raw.
#12
Quote from: ItsMeLenny on August 10, 2018, 04:13:58 PM
..The problem I'm having now is I can't get it off, or the ring inside it which has all the lens specs on it. When I try to undo that, the front lens element unscrews instead.

Getting traction on lens elements and retaining rings can be a pain. I have used rubber sewing thimbles, or even rubber-coated gloves, with moderate downward pressure. This should give you enough grip to unscrew the ring, unless it is glued.
#13
Quote from: 50mm1200s on July 21, 2018, 03:06:15 AM
Found this neat open source software. Might be useful for everyone working with LUTs:
https://lattice.videovillage.co/


Lattice is great but it's not open source and it's Mac only.

You should check out https://cameramanben.github.io/LUTCalc/. The online version is free but the Chrome and OSX versions are only a couple of dollars. You'll need to understand what you're doing to get the best from it but it has a comprehensive feature set and the source code is available if you wanted to go deeper.
#14
Edit: Sorry, this may be counterintuitive to my earlier reply, but to clarify*:


Quote from: Ilia3101 on April 15, 2018, 01:09:45 PM
@Andy600 Thanks for the explanations. So ColorMatrix2, when inverted, describes a transform from "CameraRGB" (debayered) to XYZ with D50 white point.(?)

Not exactly. A CIE white point isn't yet assigned, but you can theoretically assume it is (or later will be) D65 white because the temperature of the illuminant under which the calibration is made is ~6500K. DNG math works in XYZ space with a CIE D50 white point (x = 0.3457, y = 0.3585, Y = 1.0), so the color matrices need adapting to D50 to make the DNG WB math work. CameraRGB doesn't have a defined white point other than what the calibration illuminant dictates; there is a point in the matrix where R, G and B would all equal 1, so this is used as white.
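For reference, this is roughly what that adaptation step looks like, assuming the commonly used Bradford method (the matrix and white point values are the standard published ones; the DNG SDK's internals may differ in detail):

Code:
static const double B[3][3] = {          /* Bradford cone response matrix */
    { 0.8951,  0.2664, -0.1614},
    {-0.7502,  1.7135,  0.0367},
    { 0.0389, -0.0685,  1.0296}
};
static const double WP_D65[3] = {0.95047, 1.0, 1.08883}; /* XYZ of D65 white */
static const double WP_D50[3] = {0.96422, 1.0, 0.82521}; /* XYZ of D50 white */

static void mul33(const double a[3][3], const double b[3][3], double o[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            o[i][j] = 0;
            for (int k = 0; k < 3; k++) o[i][j] += a[i][k] * b[k][j];
        }
}

static void inv33(const double m[3][3], double o[3][3])
{
    /* adjugate / determinant */
    double det = m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
               - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
               + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
    o[0][0] =  (m[1][1]*m[2][2]-m[1][2]*m[2][1])/det;
    o[0][1] = -(m[0][1]*m[2][2]-m[0][2]*m[2][1])/det;
    o[0][2] =  (m[0][1]*m[1][2]-m[0][2]*m[1][1])/det;
    o[1][0] = -(m[1][0]*m[2][2]-m[1][2]*m[2][0])/det;
    o[1][1] =  (m[0][0]*m[2][2]-m[0][2]*m[2][0])/det;
    o[1][2] = -(m[0][0]*m[1][2]-m[0][2]*m[1][0])/det;
    o[2][0] =  (m[1][0]*m[2][1]-m[1][1]*m[2][0])/det;
    o[2][1] = -(m[0][0]*m[2][1]-m[0][1]*m[2][0])/det;
    o[2][2] =  (m[0][0]*m[1][1]-m[0][1]*m[1][0])/det;
}

/* Build the 3x3 XYZ(D65) -> XYZ(D50) adaptation: B^-1 * diag(d50/d65) * B */
void bradford_d65_to_d50(double adapt[3][3])
{
    double s[3] = {0}, d[3] = {0}, scale[3][3] = {{0}}, binv[3][3], tmp[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            s[i] += B[i][j] * WP_D65[j];  /* source white in cone space */
            d[i] += B[i][j] * WP_D50[j];  /* destination white in cone space */
        }
    for (int i = 0; i < 3; i++) scale[i][i] = d[i] / s[i];
    inv33(B, binv);
    mul33(scale, B, tmp);
    mul33(binv, tmp, adapt);
}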

Quote
So... when that image in XYZ is transformed to sRGB (for viewing) it will be the same as having the white balance slider set to ~5000K in a raw converter and looking at it?

That depends entirely on the raw converter and how it calculates/interprets white balance. CameraRGB is not itself neutral, and you will be viewing it on a monitor most likely using a D65 white point (so the white point should have been adapted for it). There are also white balance multipliers to factor in, which control the slider, so the app's slider should reflect the as-shot color temperature (what the WB is interpreted and displayed as will likely differ depending on which white balancing method the app uses).

Quote
And it makes a lot of sense that the sensor would respond differently at lower colour temperature, never thought about that before. But I'm a little confused as to what the temperature of ColorMatrix1 (the ones we have in ML code) is... where do I find out?

ColorMatrix1 is ~2856K (the approximate temperature of an old-school tungsten filament incandescent light bulb), AKA CIE Standard Illuminant A.

For how DNG works, look in the Adobe DNG SDK :)

(*Might have to clarify some of this further as it's from memory. I need to refer to my notes to be sure :-\)
#15
Quote from: Ilia3101 on April 12, 2018, 08:06:40 PM
@Andy600 I have been wondering for a very long time, is ColorMatrix2 D65 white point? I have assumed this, and it seems to match(??), but not tested it with actual comparison. If you could tell me definitely if it is D65 or D50, that would make me quite satisfied. These are adobe matrices I think (is this right bouncyball?)

No. It's not that simple unfortunately. Technically speaking it's D50, but it would be observed as green on a display because of the Bayer pattern.

The color matrices describe a transform from XYZ (D50) to non-white-balanced camera RGB. The D65 part only references how the color calibration was performed, i.e. D65 is a calibration under a daylight illuminant (~6504K) and Standard A is under a tungsten illuminant (~2856K). The sensor behaves differently depending on the spectral power distribution of the light source, which is why it's a good idea to have two calibrations under different temperatures.
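As a rough illustration of how an app blends the two calibrations, the DNG spec interpolates the matrices linearly in inverse temperature (mired). A minimal sketch:

Code:
/* Blend ColorMatrix1 (StdA, ~2856K) and ColorMatrix2 (D65, ~6504K)
 * by inverse correlated color temperature, per the DNG spec. */
void interp_color_matrix(double kelvin,
                         const double m1[3][3],  /* ColorMatrix1 (StdA) */
                         const double m2[3][3],  /* ColorMatrix2 (D65)  */
                         double out[3][3])
{
    const double t1 = 2856.0, t2 = 6504.0;
    double w;                          /* weight of matrix 1 */
    if (kelvin <= t1)      w = 1.0;    /* clamp below StdA */
    else if (kelvin >= t2) w = 0.0;    /* clamp above D65 */
    else w = (1.0/kelvin - 1.0/t2) / (1.0/t1 - 1.0/t2);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            out[i][j] = w * m1[i][j] + (1.0 - w) * m2[i][j];
}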
#16
Quote from: 50mm1200s on April 12, 2018, 03:24:32 PM
It's the "AlexaV3_K1S1_LogC2Video_Rec709_EE_aftereffects3d". Parameters are Photometric Scaling, LUT Dimension 65^3 mesh and Bits set to default.

Does it include a colorspace conversion?


Quote
If you mind, do you have any resource where we can get this matrice for tungsten?

It's in the camera_matrices.c you posted ;) (the second rows of each set are the Tungsten/StdA matrices)


Quote
Trying to save you some time, here is the camera_matrices.c:

Ok, after a very quick look through, it looks as though the full set of Adobe matrix coefficients are there (in camera_matrices.c) but only the second matrix (D65) is assigned. It also shows a DxO matrix for the 5D2 ??? (is this used for everything?). The Adobe DNG SDK has everything needed for raw color all in one place, so it escapes me why devs continue to cherry-pick non-standard info from the internet. It must be a coding thing!?

Another pet peeve of mine is XYZ colorspace being assumed to have a D65 white point (as with that xyz2rgb matrix). XYZ colorspace, as referred to in Adobe DNGs, ICC profiles and most apps built on ICC, has a D50 white point, and all the math uses D50 with chromatic adaptation where necessary to change the white point. You have to be very careful not to mix up D50 and D65 matrix math or you will get the wrong colors. I'm not saying that's what's happening here, but there is a mixture of methods in use and I would have to pick through the code to see how it's working.

Quote
I just don't know how it assign each matrix. Through MLV metadata? I've found mlv metadata not to be so reliable (in the past... don't know if anything changed in past year).

You could add it manually with Exiftool, override it with a DNG profile in ACR, or ask the dev to enable it. The actual difference to color is usually very small (though it can be more with certain lighting), but the second matrix is preferable for white balancing non-daylight sources (as with your footage, for example).

The Log-C math looks correct, but where and when does it get applied? Is it in float or int, and before or after the colorspace is assigned? This can all make a difference. AlexaLog should match Log-C in other apps (at least the gamma part, because MLVApp is limited to sRGB primaries from what I can tell).
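For comparison, this is ARRI's published Log-C (v3) curve at EI 800, applied in float on linear scene data - whatever 'AlexaLog' does should match this for the gamma part:

Code:
#include <math.h>

/* constants from ARRI's Log-C white paper, EI 800 */
static const float cut = 0.010591f, a = 5.555556f, b = 0.052272f,
                   c = 0.247190f,  d = 0.385537f, e = 5.367655f, f = 0.092809f;

float logc_encode(float x)   /* linear scene value -> Log-C */
{
    return (x > cut) ? c * log10f(a * x + b) + d : e * x + f;
}

float logc_decode(float v)   /* Log-C -> linear scene value */
{
    return (v > e * cut + f) ? (powf(10.0f, (v - d) / c) - b) / a : (v - f) / e;
}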

The white balance multipliers are Canon's and wouldn't be relevant to DNGs if the app utilized the SDK. Adobe's white balancing is, IMO, far superior and, more importantly, neutral, but it's a bit more complicated to implement.
#17
Quote from: 50mm1200s on April 12, 2018, 01:09:31 PM
Yes, I suspected it was more complex than I thought.
MLVApp uses AlexaLog from this paper, it is indeed EI800. The ProRes color output is bt609 from ffmpeg. I think Premiere Pro reads it normally by default. The color matrix MLVApp is using came from ACR (actually you're credited in the source code for helping), so it's probably "precise" enough...

Do you know which ARRI lut you are using exactly? If you're unsure you can send it to me or tell me the parameters you selected in the LUT generator and I'll check. If it's the full transform then there's your main problem. Your footage is in BT.601 or BT.709 (there's no bt609) and you are transforming it as if it were in a much wider gamut. This will not only cause over-saturation but also hue rotation, because the primaries lie on a different axis, and, because the gamut is a lot smaller than Alexa Wide Gamut, you're also likely losing some color information when rendering to ProRes 'AlexaLog' in MLVApp.

The matrices are originally from Adobe. MLVApp looks to be writing a single matrix (D65), which should be OK for most daylight shots, but white balance accuracy would improve a bit if it also included the tungsten matrix.

Quote
Yep. Also, the background and the hair tones are in the same shade of grey, so when I try to get the background less magenta the hair just changes together  :'(
I'm also using Lumetri from Premiere for this and not Resolve. My fault, I can't expect very much from Lumetri. [Edit: I should just buy a 18% grey card already, I know]

You can get decent results with Lumetri. Try using secondaries and, if necessary, multiple instances of Lumetri to isolate and grade problem parts of the image (after your primary grade).

I think the #1 piece of color-related advice I would give is to always shoot a reference/target. A simple grey card is very cheap, and once you have it in shot you have a reference for exposure and white balance. I would say it's essential for any commercial shoot, and most casual shooting really benefits from it too.

Quote
After your last reply I changed it a little, but I was doing it because the skin tones just get too dark after applying (linear) contrast. I can just apply a general gain, but highlights will clip. I'm using a curve like this (you can see I'm quite aggressive in the highlights):



If you have any pro tip for me, I take it :)

Try adjusting the contrast pivot point lower if possible. Alternatively, try increasing overall gain/exposure, then pull down the shadows and roll off the highlights, i.e. a classic s-curve. This should give a more natural look.
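For clarity, 'adjusting the pivot' just means the contrast is applied around a chosen anchor point instead of fixed mid-grey. A minimal sketch (a hypothetical helper, not any particular app's control):

Code:
float pivot_contrast(float x, float contrast, float pivot)
{
    /* linear contrast around a chosen pivot instead of fixed mid-grey */
    return (x - pivot) * contrast + pivot;
}
/* e.g. pivot_contrast(v, 1.2f, 0.18f) pivots on 18% grey rather than
 * ~0.5, so skin tones near the pivot barely move while contrast increases */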

Quote
Thanks. Indeed, the noise reduction (NeatVideo) is way too strong.

Yes. I have a rule with NR: only use it if it's really necessary, and then use as little as possible, limiting it to whichever channel(s) the noise is most apparent in, i.e. R, G, B or luminance. NeatVideo is extremely good but very easy to overcook if used as a broad stroke across everything. I also tend to limit sharpening to luminance or use high-pass filtering on skin, but that's always subjective.

Quote
Thanks a lot for helping Andy, I'm learning very much these days...

You're welcome :) I'll check out MLVApp's color when I get a bit more free time.
#18
I suspect 'AlexaLog' is only the Log-C curve (1D) so if you're using an official Alexa 3D lut that transforms both the gamma and gamut from Log-C to Rec709 you will get these types of color problems.

You need to know the gamut that the image is in before applying a specific technical 3D lut or you are only compounding your problems. If you don't know the gamut it is safer to use only a 1D lut to get from Log-C to Rec709 and then add your own color correction (before the lut).

This also assumes that 'AlexaLog' is actually using Log-C math and that there are no colorspace or levels issues in the app. You also really need to know which Log-C curve is being used, because it changes relative to exposure. The default used in most NLEs and color grading apps is EI800.

Where can I find AlexaLog? I'll check it when I have some time.

Re: WB. Maybe. I didn't do any grading, only set a WB. I'm going only by what I can see on a vectorscope and there is no neutral target in the shot. The model has a pink complexion and the beautician/make-up artist is more of an olive color, so cooling the WB will tend to make the pink hues more blue and less lifelike, especially under mixed lighting. If I were grading this I would certainly be using qualifiers to isolate and treat the different skin tones independently.

Try doing a basic grade without a lut and see if you still get clipping. +3 on the mid-tones is quite extreme and, yes, will likely cause some banding, especially if done after the lut. Why are you pushing the mid-tones so much?

This is purely a subjective observation and you may actually be going for that look but I find the skin smoothing (possibly extreme noise reduction?) in 01.png to be way too much. It completely loses any texture in the skin and looks very unnatural. Try dialing back on the effects and you'll get a much better look ;)

Are you using Lumetri in Premiere or After Effects?
#19
@50mm1200s

The problem is not the white balance unless you used As Shot or Auto.

I dialed in a WB of 3850K (no tint) for a reasonably neutral balance, but you won't get it precise without knowing the lighting or having a gray/white card target in the shot.




On the scopes it looks like you're also using a film lut or film-look preset? That is adding some heavy saturation to the reds and magentas. The lut/look is also clipping highlights (quite badly) in the other shots and there's some unpleasant banding in the highlights. I would suggest trying to grade the look yourself or trying a different lut/look, but I see no significant problems with the DNG.

#20
My 50D is out on loan at the moment but I'll answer a couple of your questions.

1. I have not had this happen personally. Have you tried different cards? Carefully cleaning the contacts etc?

2. I think you can re-assign the record button!?

3. Dual ISO does not work for raw video in the 50D.

4. You can fine-tune shutter settings but I don't get exactly 180 degrees either. It's either slightly more or slightly less (I choose slightly less). I very much doubt you can tell any real difference in cadence and motion blur against something shot with a shutter at exactly 180 degrees (see the sketch at the end of this post). And in some 'pro' cinema cameras, although it may say 180 degrees on the settings screen, they too can be one way or the other depending on the internal clock frequency.

5. It's likely not just in the mid-tones, but that's where you're most likely to see it. It could be CA or white balance. Try other apps for processing first. The white balancing algorithm in some apps can cause magenta contamination (I have seen it happen with raw footage in Resolve), so that's where I would put my money. If it's happening across multiple apps, then try shooting a repeatable test without any tint offset set in camera, try a different lens with a different focal length, try a UV filter; basically try everything you can think of in camera, then try all the apps again.

6. Sounds like a bug.

re: 10bit (and 12bit) it is not supported in the nightly builds (yet) but there are some working 10/12bit builds for the 50D. https://bitbucket.org/daniel_fort/magic-lantern/downloads/.

re: ISO values - no. For that you would need to find them using the ADTG build, then edit and compile your own build. It may be possible with a script but I don't have a clue there.
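Re: point 4 above, shutter angle is simple arithmetic, which is why a degree or two either side of 180 is invisible:

Code:
/* shutter angle from exposure time and frame rate */
double shutter_angle(double exposure_s, double fps)
{
    return 360.0 * exposure_s * fps;
}
/* e.g. shutter_angle(1.0/48.0, 23.976) = ~179.8 degrees,
 * visually indistinguishable from a true 180 */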
#21
Ok, should have said 'container' not metadata  ::).

Agree with the 'Sequence Footage' suggestion. It's set and forget most of the time.

The 'Interpret Footage' function is still used if you have different footage with various frame rates.
#22
Right-click on some imported DNG footage and choose 'Interpret Footage'. Set the FPS and click OK, then right-click on it again and select 'Remember Interpretation'. Then select your other footage and choose 'Apply Interpretation'.

You need to manually interpret any footage that has unrecognizable or absent metadata, such as most raw and JPEG image sequences.
#23
Hi @saf34,

You shouldn't be bumping up exposure in ACR with Cinelog, as this adversely affects the math. If you need to offset exposure, only do it in AE as described in the user guide.

My first suggestion is to make sure you are only using raw metering when shooting raw. If you are consistently adding 1+ stops in post to your own taste, you could try offsetting metering by -1 stop and then exposing according to what you see on the LV screen; however, this is not best practice for achieving the best SNR and dynamic range for the chosen exposure (i.e. what Cinelog is basically designed to do). If you typically expose only using LV you will often clip highlight or shadow information unnecessarily (except where HDR situations force you to do so). It is far better to retain as much information as possible when shooting and make these kinds of aesthetic decisions in post, rather than clipping when recording and limiting your options later.

The Picture Style through which you are monitoring is a little different to Cinelog Rec709: it has dynamic controls for altering contrast, saturation and color tone, whereas Cinelog Rec709 is a fixed output (with a tone curve derived from a math function) and assumes the input is a properly metered exposure. However, both increase perceptual brightness by approximately the same amount (~1 EV). It's also worth noting that the LV screen is not calibrated to produce a Rec709 image in the same way as a display or monitor.

There is no one definitive Rec709 look, so don't be afraid to experiment. Have you tried a different look, e.g. the Cinelog DSLR looks, which are built to better simulate Picture Styles?
#24
ACES container files (EXR) require a camera-specific IDT because the colorspace and white balance are already defined as (or should be) linear ACES AP0 primaries. DNGs do not have a defined colorspace as such, but the color matrix/matrices and white balance multipliers describe a transform to XYZ space, and from there the ACES app can put the data into ACES AP0 colorspace. Problems arise when the white balance matrices are not correct/accurate or the implementation of color management/ACES in the app is not handled correctly.
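To illustrate that path, a minimal sketch in C: white-balanced CameraRGB goes through the (inverted, adapted) DNG matrix to XYZ, then a standard matrix takes XYZ to AP0. The cam_to_xyz matrix here is a placeholder; the XYZ-to-AP0 values are the published ones for the ACES (~D60) white, so strictly a D50-to-D60 adaptation belongs in between (omitted for brevity):

Code:
/* published XYZ -> ACES2065-1 (AP0) matrix */
static const double XYZ_TO_AP0[3][3] = {
    { 1.0498110175, 0.0000000000, -0.0000974845},
    {-0.4959030231, 1.3733130458,  0.0982400361},
    { 0.0000000000, 0.0000000000,  0.9912520182}
};

void camera_to_aces(const double cam_to_xyz[3][3],  /* inverted, adapted DNG matrix */
                    const double rgb[3],            /* white-balanced CameraRGB */
                    double aces[3])
{
    double xyz[3] = {0, 0, 0};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            xyz[i] += cam_to_xyz[i][j] * rgb[j];    /* CameraRGB -> XYZ */
    for (int i = 0; i < 3; i++) {
        aces[i] = 0;
        for (int j = 0; j < 3; j++)
            aces[i] += XYZ_TO_AP0[i][j] * xyz[j];   /* XYZ -> ACES AP0 */
    }
}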

Re: MLV App. Assuming the colorspace transforms in the app are correct (I haven't tested, so I'm not sure), the corresponding IDT for your chosen output colorspace settings should work in other apps (Resolve, Nuke, Fusion etc). If you are creating intermediate log files you should aim to retain as much color information as possible, so choose ProRes 4444 XQ or 4444.

You should note that PC-based MLV apps are typically built with FFmpeg/LibRaw libraries, so the codecs are not official Apple ProRes and are limited to 10bit. This may be an issue if you create content for TV broadcast.

If you really want to use ACES I would suggest converting your MLVs to DNGs for use directly in DaVinci Resolve. However, initial color accuracy will depend very much on the MLV to DNG conversion and how the converter implements white balancing, color matrices etc, because they are not all the same. Keep your MLVs safe until you are satisfied with the conversion.
#25
Quote from: kyrobb on December 28, 2017, 04:40:48 AM
The 50D build however goes a bit wonky for me. About every 30ish frames I'll get one frame where the bottom 3rd of the frame offsets to the left slightly. This happens in both 10 bit and 12 bit.

What build are you using and which app for MLV>DNG?

Can't reproduce it here.