Here is what I would like to be able to do in ML.
I tend to record video with a minimal-contrast picture style and HTP enabled, then enhance contrast by stretching levels in post-processing* on the compressed data. But since the compressed data is only 8 bits per channel, stretching it amplifies quantization (banding), so I've been wondering whether it would be possible to do this in-camera, just after the picture style is applied. If I recall correctly, the bit depth is still 10 bits per channel at that point. I have some programming experience, but my educated guess is that this sort of thing is undoable in ML even if I had all the programming experience in the world?
I know we can write Lua scripts, but my understanding is that they can only operate the camera when it is in an 'idle' state, by which I mean that the camera is not currently exposing a still frame or recording a video clip.
Otherwise, if it is possible within reason, I'd be interested in looking into this...
* (Avisynth scripting)
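
For reference, the post-processing step I mean is roughly the following. This is only a minimal sketch: the file name and the level values are placeholders, and FFVideoSource needs the FFMS2 plugin installed.

# Load the 8-bit H.264 clip from the camera (placeholder file name).
FFVideoSource("MVI_0001.MOV")
ConvertToYV12()
# Stretch the flat, low-contrast range back out to full range.
# The input low/high values here are just examples, not a recommendation.
Levels(32, 1.0, 220, 0, 255, coring=false)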