I'm a programmer in a very different field, and I'm just starting to try to get up to speed with understanding the ML / Digic context, so please excuse any naivety in this question:
One way that ArriRaw, among others, reduces its data requirements is to write the 16-bit linear data from the sensor into an uncompressed 12-bit log-encoded raw stream, which - given the familiar inefficiency of linearly encoding light values - drastically reduces frame sizes.
The current 14-to-12- and 10-bit strategy is an amazingly impressive piece of work, and I can see it took years and a lot of specialist knowledge to achieve. If I understand correctly, it's a lossy bit-chop reduction in representation precision. Obviously a raw-log-encoded method, if it existed, would mean less of a quality drop when going from 14- to 10-bit.
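Just to check I've understood the distinction (this is purely my own illustration, not existing ML code, and the curve shape - a plain log2 with no black/white-level handling - is a made-up assumption):

[code]
#include <math.h>
#include <stdint.h>

/* Illustration only: compare a plain bit-chop with a simple log curve
 * for reducing 14-bit sensor values to 10 bits. */

static uint16_t chop_14_to_10(uint16_t x)
{
    return x >> 4;   /* every output code covers 16 sensor codes, shadows included */
}

static uint16_t log_14_to_10(uint16_t x)
{
    /* map [0, 16383] -> [0, 1023] so that proportionally more of the
     * 1024 output codes are spent on the shadows than on the highlights */
    return (uint16_t)(1023.0 * log2(1.0 + x) / log2(16384.0) + 0.5);
}
[/code]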
So before I 'go off on one' (as my old AI programming tutor used to put it when he saw me disappearing down a research rabbit hole / dead end again):
Is, for example, a 14-bit-to-10-bit log encoding of the raw uncompressed (linear) sensor data - rather than a lossy bit-chop - computationally plausible in this context? Or is it waaay too ambitious for the resources available on-camera during recording?
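For concreteness, the naive version I had in mind is nothing cleverer than a precomputed lookup table applied per pixel - something like the sketch below (again my own illustration with made-up names, not existing ML code; it ignores black/white levels and leaves the actual 10-bit packing to whatever the existing reduced-bit-depth recording path already does):

[code]
#include <math.h>
#include <stdint.h>

#define IN_MAX   16383   /* 14-bit full scale */
#define OUT_MAX  1023    /* 10-bit full scale */

/* 16384-entry table (32 KB), built once before recording starts */
static uint16_t log_lut[IN_MAX + 1];

void build_log_lut(void)
{
    for (int x = 0; x <= IN_MAX; x++)
        log_lut[x] = (uint16_t)(OUT_MAX * log2(1.0 + x) / log2(1.0 + IN_MAX) + 0.5);
}

/* Per-frame work: one table lookup per pixel. */
void log_encode_frame(const uint16_t *in, uint16_t *out, int n_pixels)
{
    for (int i = 0; i < n_pixels; i++)
        out[i] = log_lut[in[i]];
}
[/code]

So I suppose the question really boils down to: is a per-pixel table lookup at raw recording data rates remotely affordable on the camera's CPU, or does even that blow the budget?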
It's caught my imagination, but I realise it might look like a silly idea to anyone who knows enough - so if that's the case, I'll be sensible and take on a far more manageable task from the 'to-do' list as a 'teach myself' project instead...
Thank you!