There's nothing to gain creating 16-bit images from an 8-bit source; the extra bits just get padded with zeros. The exception is if something like a denoiser generates interpolated values on the way from 8-bit to 16-bit files, or denoises at 32-bit in memory within the app; then there might be some value in generating and storing 12 MB-per-frame 16-bit image sequences instead of 2 MB 8-bit frames. But the best place to do that is in memory, within a 32-bit float workspace.
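To illustrate the point, here's a minimal sketch (the bit-replication scaling is an assumption, not any particular app's method): promoting an 8-bit code value to 16 bits creates no new tonal information, because there are still only 256 distinct levels.

```python
# Promote an 8-bit value (0-255) to 16 bits by bit replication
# (equivalent to multiplying by 257). No new in-between values appear.
def promote_8_to_16(v8):
    return (v8 << 8) | v8  # 0 -> 0, 255 -> 65535

values_16 = {promote_8_to_16(v) for v in range(256)}
print(len(values_16))  # still only 256 distinct levels out of 65536
```

Unless something like a denoiser synthesizes intermediate values first, the 16-bit file is just the same 256 levels stored in bigger containers.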
I get what you're saying, but I don't agree with your point, sorry. As long as the approach is consistent on both 'streams' it doesn't matter whether you apply the LUT before or after, and as said that's best done in a 32-bit workspace (even a non-linear one, for the difference it'll make). And where else can a LUT be applied to an image sequence other than in a suitable app? Then preferably at higher precision than 8-bit.
Again, all this LUT stuff applies only to Cinestyle; you can just grade it like any other source and don't have to apply the LUT at all. How would anyone treat flaat10, say? Just grade it, and that would be after merging.
There is no 'correct brightness'. Cinestyle, like Neutral, flaat or whatever, is just a gamma-encoded representation of the scene, sent to the camera encoder as 8-bit 0-255 JFIF raw 4:2:2 YCC and twisted into an H.264 file with BT.709 primaries, full-range luma and a BT.601 matrix (T2i, 5D MkII). :-)
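For anyone following along, a sketch of the decode this implies: full-range (0-255) YCbCr through the BT.601 matrix, which is how these cameras' files need to be treated. The coefficients below are the standard BT.601 ones (Kr=0.299, Kb=0.114); real decoders work on whole frames, this is just per-pixel for clarity.

```python
# Full-range BT.601 YCbCr -> RGB for one pixel. Using the BT.709 matrix
# here instead would skew colors, which is part of the 'twisted' problem.
def ycc601_full_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    # Out-of-range results are simply clipped to 8-bit range here.
    return tuple(max(0.0, min(255.0, c)) for c in (r, g, b))

print(ycc601_full_to_rgb(128, 128, 128))  # mid grey -> (128.0, 128.0, 128.0)
```

Get this first conversion wrong (wrong matrix, or treating full range as video range) and everything downstream inherits the error.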
The closest we get to a 'physical / realistic way' is to blend and merge in linear light, after first making an attempt to correctly decompress and, more difficult, correctly linearize that twisted H.264 source. That goes back to my earlier comment about the importance of doing the initial YCC-to-RGB conversion as 'best' we can; for example, I don't like the way AE manages that.
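A minimal sketch of 'merge in linear light': decode each stream to linear, blend, then re-encode. A pure 2.2 power curve stands in for the real transfer function here; correctly linearizing the actual H.264 source is the hard part described above.

```python
# Decode to linear with an assumed pure 2.2 gamma (a stand-in, not the
# camera's real transfer curve), blend, then re-encode.
def to_linear(v, gamma=2.2):
    return (v / 255.0) ** gamma

def to_encoded(lin, gamma=2.2):
    return round((lin ** (1.0 / gamma)) * 255.0)

def blend_linear(a, b, mix=0.5):
    return to_encoded(to_linear(a) * (1 - mix) + to_linear(b) * mix)

# Blending black and white in linear light gives a brighter result (186)
# than naively averaging the encoded code values (128):
print(blend_linear(0, 255))  # -> 186
```

That gap between 186 and 128 is exactly why blending in linear light versus gamma-encoded space produces visibly different merges.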
A consistent approach, applying the LUT or grade before or after but treating the two 'streams' the same, is all that is necessary.
Quote(Because even if you're working in 32 bit linear you can alter the pixels in a non-linear way)
Not if your app linearizes the source correctly and you avoid non-32-bit-aware plugins, filters and effects. But if you mean you will see a different result compared to applying the LUT in a gamma-encoded workflow, sure.
Quote(baking the gamma) and it's still linear workspace and if you interpolate "wrong" values in a non-linear way (for example gamma 2.2) you will get a totally wrong result.
'Wrong' is subjective; the whole process is based on interpretation of the source anyway. It also depends on how the application works: it may just blend in linear light while its color processing is not done in linear light. There are many steps just in getting an RGB image sequence out that can produce 'wrong' results to get picky about, before being concerned about merging the Cinestyle or LUT-applied version.
Quote(That's why I think it may be important to deal with the exact time to apply the LUT to get the correct color values)
Fair enough, but 'correct' is subjective too. Like I said earlier, just test applying the LUT 'before' and 'after' merging and see which looks best. But that has nothing to do with producing 16-bit image sequences, which make no improvement.
Just apply the LUT at 32-bit precision in the NLE or compositor, before or after. Again, this only applies to Cinestyle: it's just an RGB curve to take a LOG-looking image to a more typical Rec.709 image, nothing more. Just a grade, nothing to do with 'physically correct brightness'.
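To show how little there is to it, here's a sketch of applying a 1D curve at float precision. The node values below are made up for illustration, not the real Technicolor Cinestyle-to-Rec.709 curve; a real 1D LUT works the same way, just with more nodes per channel.

```python
# A tiny hypothetical per-channel 1D LUT: 5 nodes over input 0.0-1.0.
# (Illustrative values only, not the actual Cinestyle curve.)
LUT = [0.0, 0.18, 0.45, 0.75, 1.0]

def apply_1d_lut(x, lut=LUT):
    """Piecewise-linear lookup of x in [0, 1] through the curve's nodes."""
    pos = x * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

print(apply_1d_lut(0.5))  # midpoint falls exactly on a node -> 0.45
```

Done in a 32-bit float workspace the interpolation keeps full precision; done at 8-bit it quantizes back to 256 levels, which is the only reason precision comes into it.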