[IMPOSSIBLE] IDEA about lossy 14EV movie recording even on 21 MB/s bottleneck SD

Started by Scipione205, September 03, 2013, 01:56:49 PM


Scipione205

Hi,
last night I was thinking about how to borrow a 5D Mk III to shoot my action short film, or which profile + workflow would be best for H.264 recording if RAW wasn't possible.

Then I got an idea about recording a lossy 14 stops of dynamic range on my 600D (Rebel T3i).

I was thinking about a sort of motion JPEG, but instead of RGB JPEGs, RGBE JPEGs (Red, Green, Blue and Exponent) or something like that (or even, for every frame, a normal JPEG plus a grayscale JPEG for the exponent). This allows high dynamic range, so it would be possible to store the 14-bit data from the sensor without clipping highlights and shadows (maintaining 14 stops instead of 8).
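For reference, RGBE packing (the scheme used by the Radiance .hdr format) keeps 8-bit mantissas plus one shared exponent byte. A minimal sketch of the packing, plain PC-side C, not camera code:

```c
#include <math.h>
#include <stdint.h>

/* Pack linear RGB (floats) into 4 bytes: 8-bit mantissas plus a shared
 * exponent, as in the Radiance RGBE format. Purely illustrative. */
static void rgb_to_rgbe(float r, float g, float b, uint8_t out[4])
{
    float max = r > g ? (r > b ? r : b) : (g > b ? g : b);
    if (max < 1e-32f) {
        out[0] = out[1] = out[2] = out[3] = 0;
        return;
    }
    int e;
    float scale = frexpf(max, &e) * 256.0f / max;  /* mantissa of max, rescaled */
    out[0] = (uint8_t)(r * scale);
    out[1] = (uint8_t)(g * scale);
    out[2] = (uint8_t)(b * scale);
    out[3] = (uint8_t)(e + 128);                   /* biased shared exponent */
}
```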

Then I thought that JPEG2000 supports 48 bit depth (16 bits per channel).

Another solution would be to store two pictures (24-bit) per frame (one spanning from 1 EV to 8 EV, the second from 9 EV to 14 EV).
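One way to read the two-picture idea (my interpretation only, not a tested pipeline): keep a full-precision "lows" byte that clips above stop 8, plus a coarser "highs" byte covering the whole range, then pick whichever is valid when merging back:

```c
#include <stdint.h>

/* Split one 14-bit linear sensor value (0..16383) into two 8-bit samples:
 * 'lo' keeps full precision for the bottom ~8 stops (clips above 255),
 * 'hi' keeps the upper stops at reduced precision (value >> 6).
 * Illustrative only; real picture styles / JPEG would sit in between. */
static void split14(uint16_t v, uint8_t *lo, uint8_t *hi)
{
    *lo = (v < 256) ? (uint8_t)v : 255;  /* shadows + midtones, clipped highlights */
    *hi = (uint8_t)(v >> 6);             /* whole range, 6 bits of precision lost */
}

static uint16_t merge14(uint8_t lo, uint8_t hi)
{
    return (lo < 255) ? lo : ((uint16_t)hi << 6);  /* prefer the precise sample */
}
```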

This way we would record a sort of lossy RAW. I was thinking about the 600D, which is limited by the SD slot bottleneck to 20 MB/s; at this speed, assuming 23.976 fps, the maximum frame size would be approx 800 KB, and for two pictures per frame it would be max 400 KB per picture.
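The budget above is just this back-of-envelope calculation, rounded down (the 20 MB/s figure is the assumption):

```c
#include <stdio.h>

/* Rough per-frame size budget at the SD-card bottleneck. */
int main(void)
{
    const double sd_mb_per_s = 20.0;   /* assumed sustained write speed */
    const double fps = 23.976;
    double per_frame_kb = sd_mb_per_s * 1024.0 / fps;
    printf("budget per frame: ~%.0f KB\n", per_frame_kb);          /* ~854 KB */
    printf("per picture (2 per frame): ~%.0f KB\n", per_frame_kb / 2); /* ~427 KB */
    return 0;
}
```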

The S2 size (JPEG, 1920x1280, fine compression) is approx 1.3 MB according to the T3i's manual, and about 300-500 KB in my tests. That's too high; maybe cropping to 1920x800 can work.

It's just an idea; I don't know if it would be possible (I don't know how the camera works, I'm only now starting to understand it).

What do the developers think?

I think that if we can enable the T3i to shoot 14 stops of DR in lossy video footage, we will push the envelope on serious cinema filmmaking on cheap machines... this will push the manufacturers to develop even better digital cinema movie cameras at even lower prices.


dlrpgmsvc

[1] We can do ML "burst silent pics", but they are high resolution, raw (not lossy) and very low FPS (not suitable for normal movies).
[2] We can use Canon's normal burst mode with JPG, but at very low FPS (not suitable for normal movies).
[3] We cannot process the raw video data during acquisition: we are only reading the raw frame data from the LiveView buffer.
[4] In-camera post-processing would have to be realtime (if you want to process the raw flux down to lossy as you describe), but no current camera has enough processing power for that.
[5] We cannot process the H.264 lossy video data during acquisition, because H.264 compression is done in-camera inside a "black box" that no one can access directly.

So... good ideas, but not at all applicable here... sorry  :'(

Scipione205

I'm trying to better understand how the camera works. The H.264 encoding is done by a dedicated DSP; what about the JPEG compression?

bjacklee

Quote from: dlrpgmsvc on September 03, 2013, 02:16:06 PM
[5] We cannot process the H.264 lossy video data during acquisition, because H.264 compression is done in-camera inside a "black box" that no one can access directly.

Yup, I read about this before somewhere in this forum, but just to clarify: is it that we cannot access the "black box" via ML because it's impossible to do so, or can it be accessed but needs more analysis? If it's accessible, I'm sure our talented developers will find a way to do it in time... :)

dlrpgmsvc

Quote from: Scipione205 on September 03, 2013, 02:57:28 PM
I'm trying to better understand how the camera works. The H.264 encoding is done by a dedicated DSP; what about the JPEG compression?

Dunno, TBH. But even if we could rewrite or modify it, the procedure would take as long as or longer than the original one, so there is no hope of squeezing out more FPS or images per second. The fastest method to write a burst of frames is DNG silent pics, but they last only a few frames because of the camera's internal buffer capacity. JPEG is compressed and at first glance seems faster to write to the memory card: true, but the processing time to compress it (by a DSP or a dedicated internal firmware routine) nullifies this advantage, and also makes it worse in terms of "lost" time.
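To put rough numbers on why the raw path fills the buffer almost immediately, here is a back-of-envelope calculation; every figure in it is an assumption for illustration, not a measured 600D value:

```c
#include <stdio.h>

/* Back-of-envelope: raw LiveView frame stream vs. the SD write budget.
 * Resolution, frame rate and free-buffer size are assumed values. */
int main(void)
{
    const double w = 1728, h = 972;      /* assumed LiveView raw size */
    const double bits_per_px = 14.0;
    const double fps = 24.0;
    const double sd_mb_per_s = 20.0;     /* assumed card bottleneck */
    const double buffer_mb = 32.0;       /* assumed free memory for frames */

    double frame_mb = w * h * bits_per_px / 8.0 / (1024.0 * 1024.0);
    double need_mb_per_s = frame_mb * fps;
    double backlog = need_mb_per_s - sd_mb_per_s;  /* MB piling up each second */

    printf("raw frame: %.2f MB, stream: %.1f MB/s\n", frame_mb, need_mb_per_s);
    printf("buffer lasts roughly %.1f s before it overflows\n", buffer_mb / backlog);
    return 0;
}
```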

dlrpgmsvc

Quote from: bjacklee on September 03, 2013, 03:14:14 PM
Yup, I read about this before somewhere in this forum, but just to clarify: is it that we cannot access the "black box" via ML because it's impossible to do so, or can it be accessed but needs more analysis? If it's accessible, I'm sure our talented developers will find a way to do it in time... :)

Dunno, TBH. However, I think this procedure is done by an external chip, because the task is very CPU-intensive; from what we know about the power of the cameras' main processor, and from the results we can achieve by reprogramming it, we are sure it cannot cope with this task, so there must be an external chip, very likely if not for sure. So we can reprogram only the main CPU where the main core routines reside: a CPU far slower than this dedicated H.264 chip. The result? We can only pass this external chip a few limited parameters, like limited quality control over the compression. We can also intercept the raw data going into this chip and write it to the memory card (raw video, Magic Lantern's great recent achievement), but on-camera processing of this raw data flux is surely too slow for the main CPU we control, so only about 1 FPS (for example) can be achieved. What's more, there is not much memory left for our own code, so a complex video compression algorithm certainly cannot be implemented. For sure.

1%

What you want to do is forget about H.264 and figure out how to use the JPEG compressor directly, à la EDMAC. It *CAN* make a JPEG of whatever (right now small) size at 20 FPS or greater, and in theory it can process 2 frames at a time too.
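Conceptually the loop would look something like the sketch below; every function name in it is hypothetical and stands in for whatever JPCORE/EDMAC entry points would have to be reverse-engineered first, so treat it as a sketch of the idea, not working ML code:

```c
#include <stdint.h>
#include <stddef.h>

/* HYPOTHETICAL sketch of driving a hardware JPEG encoder per LiveView frame.
 * None of these functions exist under these names; they stand in for the
 * real (still unknown) EDMAC / JPEG-core routines. */
struct frame { void *buf; size_t len; };

extern struct frame *liveview_wait_for_frame(void);         /* hypothetical */
extern size_t hw_jpeg_encode(void *src, void *dst,
                             size_t dst_max, int quality);   /* hypothetical */
extern void card_write_async(const void *buf, size_t len);   /* hypothetical */

void mjpeg_record_loop(void *jpeg_buf, size_t jpeg_buf_size, int quality)
{
    for (;;) {
        struct frame *f = liveview_wait_for_frame();          /* YUV or raw tap */
        size_t n = hw_jpeg_encode(f->buf, jpeg_buf, jpeg_buf_size, quality);
        if (n > 0 && n <= jpeg_buf_size)
            card_write_async(jpeg_buf, n);                    /* must stay under ~20 MB/s */
    }
}
```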


Scipione205

And if 2 frames at a time is too much for the CPU: what if we store a single S2 JPEG per frame for the highlights (from EV 9 to 14, using 6 of the 8 stops available in the JPEG) while saving an H.264 stream (EV 1 to 8, using a linear picture style) with GOP 1? Then a program on the PC would reconstruct a 14-bit-per-channel sequence. Sound disabled, of course.
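On the PC side, once both streams are decoded to 8-bit buffers, the per-pixel merge could look roughly like this (a sketch under my own assumptions: the H.264 frame holds a linear rendering that clips above stop 8, and the JPEG holds the whole range pulled down 6 stops; no calibrated transform):

```c
#include <stdint.h>
#include <stddef.h>

/* Merge two decoded 8-bit planes back into ~14-bit linear data.
 * 'lows'  = H.264 frame (linear picture style, clips above stop 8),
 * 'highs' = S2 JPEG (whole range exposed 6 stops down).
 * The mapping is an assumption for illustration only. */
void merge_planes(const uint8_t *lows, const uint8_t *highs,
                  uint16_t *out, size_t n_pixels)
{
    for (size_t i = 0; i < n_pixels; i++) {
        if (lows[i] < 255)
            out[i] = lows[i];                    /* precise shadows/midtones */
        else
            out[i] = (uint16_t)highs[i] << 6;    /* recovered highlights */
    }
}
```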

I'm just trying to push the envelope by squeezing our brains.

1%, what do you think?