@Andy600 thanks. I have a basic understanding of what log does (except the C-Log/LogC EI stuff), but thanks for laying it out for me. As for the exposure slider in MLRV, it's a handy tool if exposure wasn't set properly when shooting, but I try to make a point of not using it. I also see the most efficient workflow being dragging a bunch of MLVs onto MLRV to batch them all with the current gamma curve, which means no chance to mess with it per clip!
@baldand: any chance of drag and drop batch functionality coming soon?
Being able to fix WB is also nice. If we ever get quick-look thumbnails, we could identify any clips to leave out of the batch and add to the queue separately with tweaked WB.
My layman's view of the workflow concept is this: Rec.709 H.264 out of the camera suffers from three main issues.
1. Discarded detail and compression artifacts due to H.264
2. Major loss of chroma information due to 4:2:0 subsampling
3. Highlight and shadow information beyond the white and black points is clipped and non-recoverable
The ProRes 4444 codec fixes the first two, and if the 14 bits are scaled (compressed) down to 10 bits (or 12 with XQ), that fixes #3. We lose some bits, but the gamma curve prioritises the most useful parts of the information, so we retain more bits where we need them.
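Rough sketch of that idea in Python, just to show the shape of it. The curve below is made up for illustration (the real VisionLog/C-Log formulas are different), but it shows how a log encode spends more of the 1024 output codes on shadows and midtones than a straight linear scale would:

```python
# Toy illustration of squeezing 14-bit linear sensor values into 10-bit codes.
# NOT the actual VisionLog/C-Log math, just a generic log-shaped curve.
import math

def linear_to_10bit_straight(x14):
    """Naive scale: divide the 14-bit value by 16 (throws away shadow detail)."""
    return x14 >> 4

def linear_to_10bit_log(x14, a=50.0):
    """Hypothetical log encode: more output codes for shadows/midtones,
    fewer for already-bright highlights."""
    x = x14 / 16383.0                      # normalise 14-bit value to 0..1
    y = math.log1p(a * x) / math.log1p(a)  # simple log curve, also 0..1
    return round(y * 1023)

for x14 in (64, 512, 2048, 8192, 16383):
    print(x14, linear_to_10bit_straight(x14), linear_to_10bit_log(x14))
```

A deep shadow value like 64 maps to only 4 with the straight scale but to around 46 with the log curve, so the shadow tones keep many more distinct levels after the squeeze.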
Of course, if we view the entire sensor data compressed like this, it's going to look extremely flat, but we can use a 3D LUT to expand the range back to Rec.709 etc. to view it as it would have looked straight from the camera.
The important part is that this LUT be applied at the end of the processing pipeline, so we still have all that highlight and shadow info to play with before it gets clipped again by the LUT.
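A toy example of why the order matters, with a simple clip standing in for the viewing LUT and a one-stop pull-down standing in for a grade (made-up numbers, just to show the point):

```python
# Why the display LUT should come last in the pipeline.
def to_rec709(x):        # stand-in for the viewing LUT: clips at 1.0
    return min(x, 1.0)

def pull_down(x):        # grading step: pull exposure down one stop
    return x * 0.5

highlight = 1.6          # log-encoded value above the Rec.709 white point

# Grade first, LUT last: the highlight detail survives
print(to_rec709(pull_down(highlight)))   # 0.8

# LUT first, grade after: the detail was already clipped away
print(pull_down(to_rec709(highlight)))   # 0.5
```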
The attractive thing about the VisionLog curve is that the 3D output LUT could be one of their Osiris film LUTs, and we don't need to go through another lossy input LUT step.
Am I on the right track here? The BMD color space fits in there somewhere, but I'm not sure I understand where and why :)