DNG to ProRes questions (422 vs 422HQ vs 4444)

Started by Thrash632, September 19, 2013, 08:33:18 AM



Rewind

A 1280x720 crop of the sensor contains 230,400 red, 230,400 blue, and 460,800 green photosites.
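
For anyone checking the arithmetic, a quick Python sketch of where those counts come from, assuming a standard RGGB Bayer pattern:

width, height = 1280, 720
total = width * height   # 921,600 photosites in the crop
red = total // 4         # 230,400 (one red per 2x2 RGGB block)
blue = total // 4        # 230,400 (one blue per block)
green = total // 2       # 460,800 (two greens per block)
print(red, blue, green)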

maxotics

So, because there are twice as many green values, the final color information is, in a sense, more like 12-bit because the green is oversampled? Is that correct?

I understand that one probably cannot see the difference, and that the values are saved as 14-bit.


Rewind

It is not technically sound to think in terms like 'in a sense'.
The final color information is exactly 14-bit.
How could you get bit-depth loss from four pixels when each of them carries 14 bits of color information?
Luma is a different matter.
The best debayering algorithms give you roughly 20 to 25% less luma resolution than the number of your photosites.

maxotics

Hi Rewind. If there are twice as many green pixels as red and blue, then, setting aside our psychological sensitivity to green, there is less red and blue information. Is that not a technical fact? Or put another way: in a perfect world, each pixel would get its own red, green, and blue value, like on a Foveon sensor. Instead, the greens are used for luma plus chroma, right? I don't think I'm crazy here (and I also own a few Sigma Foveon cameras). The Bayer method of capturing color data has known and obvious problems; there are many "senses" in which the values do not truly contain accurate information. But I don't want to debate you on it :)

My bigger question is this: I think what you said helps me make sense of something you mentioned in another thread. You had told me to use DNGs/TIFFs instead of trying any sort of ffmpeg intermediate rendering. I think I understand now. What you were saying, and correct me if I'm wrong, is that most of the improvement one can get in quality comes from the choice of debayering algorithm (whether chosen by you or by the software). Because most (all?) ffmpeg output throws away the Bayer data, I lose the ability to change the image through a different choice of debayering. Is that true?

Thanks!

NedB

@Thrash632: Would you mind updating the title of this thread to read "DNG to ProRes questions (422 vs 422HQ vs 4444)"? There is, as has been stated, no such thing as ProRes 442 or ProRes 444. I only ask you because it is my understanding that only the author of a thread can rename it. If that isn't true, please just disregard my request. Thanks.
550D - Kit Lens | EF 50mm f/1.8 | Zacuto Z-Finder Pro 2.5x | SanDisk ExtremePro 95MB/s | Tascam DR-100MkII

Rewind

Quote from: maxotics
You had told me to use DNGs/TIFFs instead of trying any sort of ffmpeg intermediate rendering

That was not me; I know almost nothing about ffmpeg.
The truth is that any video or image format (including TIFFs) has been debayered in one way or another.
DNG is a raw wrapper, so it holds all the sensor information, and you can debayer it with whichever method you want.
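
A minimal sketch of what that flexibility looks like in practice, assuming the Python rawpy library (a LibRaw wrapper) and a hypothetical frame name; the same DNG is demosaiced with two different algorithms:

import rawpy

# Hypothetical file name; any Magic Lantern DNG frame would do.
with rawpy.imread('frame_000000.dng') as raw:
    # Demosaic with AHD at 16 bits per channel.
    rgb_ahd = raw.postprocess(
        demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD,
        output_bps=16, use_camera_wb=True)

with rawpy.imread('frame_000000.dng') as raw:
    # Same frame, demosaiced with VNG instead.
    rgb_vng = raw.postprocess(
        demosaic_algorithm=rawpy.DemosaicAlgorithm.VNG,
        output_bps=16, use_camera_wb=True)

# The two results differ at edges and in fine detail because the
# reconstruction method differs; the underlying raw data is identical.
# Once footage is baked into TIFF or a video codec, this choice is gone.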

Thrash632

Quote from: NedB on September 20, 2013, 11:16:19 PM
@Thrash632: Would you mind updating the title of this thread to read "DNG to ProRes questions (422 vs 422HQ vs 4444)"? There is, as has been stated, no such thing as ProRes 442 or ProRes 444. I only ask you because it is my understanding that only the author of a thread can rename it. If that isn't true, please just disregard my request. Thanks.

Done  :)

mkrjf

So, assuming all the readers of this forum know how to use Google and can read...
Just trying to fit in with the 'tone' of this thread ;)

http://documentation.apple.com/en/finalcutpro/professionalformatsandworkflows/index.html#chapter=10%26section=2%26tasks=true
"Apple ProRes 4444
The Apple ProRes 4444 codec offers the utmost possible quality for 4:4:4 sources and for workflows involving alpha channels. It includes the following features:

Full-resolution, mastering-quality 4:4:4:4 RGBA color (an online-quality codec for editing and finishing 4:4:4 material, such as that originating from Sony HDCAM SR or digital cinema cameras such as RED ONE, Thomson Viper FilmStream, and Panavision Genesis cameras). The R, G, and B channels are lightly compressed, with an emphasis on being perceptually indistinguishable from the original material.

Lossless alpha channel with real-time playback

High-quality solution for storing and exchanging motion graphics and composites

For 4:4:4 sources, a data rate that is roughly 50 percent higher than the data rate of Apple ProRes 422 (HQ)

Direct encoding of, and decoding to, RGB pixel formats

Support for any resolution, including SD, HD, 2K, 4K, and other resolutions

A Gamma Correction setting in the codec's advanced compression settings pane, which allows you to disable the 1.8 to 2.2 gamma adjustment that can occur if RGB material at 2.2 gamma is misinterpreted as 1.8. This setting is also available with the Apple ProRes 422 codec."
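
As a rough worked example of that "50 percent higher" figure, using Apple's approximate published targets for 1920x1080 at 29.97 fps (treat the numbers as ballpark; they are not from this thread):

# Approximate Apple data-rate targets for 1080p29.97, in Mbit/s:
prores_422_hq = 220
prores_4444 = prores_422_hq * 1.5
print(prores_4444)  # ~330 Mbit/s, i.e. roughly 50% above 422 HQ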

So let's rephrase the question. Given the sensor data present in an ML 5D Mk3 raw video output file:
1) What does the current code (RAWMagic, or raw2dng followed by FFmpeg or DaVinci or whatever) do to recombine the collection of RGGB picture elements into a 'correctly' balanced (across channels) RGB image (or YUV, or whatever colorspace is used)? See the pipeline sketch after this list.
2) When all is said and done, how much of the original information actually made it into the output file? There is no way it is 100%, or unshifted in color.
3) Where in the sensor did the data come from, and is it really correlated at the pixel level with the light field presented to the sensor? Zoom mode 1:1 vs. the normal subsampled sensor, for example.
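
For question 1, here is a minimal sketch of one common DNG-to-ProRes route, my own illustration rather than anything from this thread: demosaic each DNG to a 16-bit TIFF with the Python rawpy library, then hand the image sequence to FFmpeg's prores_ks encoder. The file names, frame rate, and the use of imageio for writing TIFFs are all assumptions.

import glob
import subprocess

import imageio   # assumed TIFF writer; any 16-bit-capable writer works
import rawpy     # LibRaw wrapper; does the actual debayering

# Step 1: demosaic every DNG frame to a 16-bit TIFF (names are hypothetical).
for i, path in enumerate(sorted(glob.glob('M19-1234_*.dng'))):
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(output_bps=16, use_camera_wb=True)
    imageio.imwrite('frame_%06d.tif' % i, rgb)

# Step 2: encode the TIFF sequence to ProRes 4444 with FFmpeg's prores_ks.
subprocess.run([
    'ffmpeg', '-framerate', '24',
    '-i', 'frame_%06d.tif',
    '-c:v', 'prores_ks', '-profile:v', '4444',
    '-pix_fmt', 'yuv444p10le',
    'out.mov',
], check=True)

Note that the debayering happens entirely in step 1; by the time FFmpeg sees the frames, the Bayer data is gone, which is exactly Rewind's point above.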

And how do we really know that is true? Forget what you see on your 10-bit Rec. 709 HP DreamColor or your 12-bit P3-calibrated Barco: has anyone written a program that feeds test data into the process, a ramp or similar covering the entire dynamic range the sensor could generate, and then checked what is actually there after the post-processing steps?

You could even do it for luminance alone, with no color, and verify what happens to the green channels. Take a video of solid green and see what you get. But some science, please.
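
In that spirit, here is a minimal sketch, my own illustration rather than an existing test, of how one could build a synthetic 14-bit ramp laid out as an RGGB Bayer mosaic; written out as a DNG (not shown here), it could be pushed through the pipeline and the output compared against known input values:

import numpy as np

WIDTH, HEIGHT, BITS = 1280, 720, 14
MAX_VAL = (1 << BITS) - 1  # 16383 for 14-bit data

# A horizontal luminance ramp covering the full 14-bit range.
ramp = np.linspace(0, MAX_VAL, WIDTH, dtype=np.uint16)
frame = np.tile(ramp, (HEIGHT, 1))

# Lay it out as an RGGB Bayer mosaic. For a neutral grey ramp every
# photosite sees the same value, so the mosaic equals the frame itself,
# but we can isolate the channels to check them after processing.
mosaic = frame.copy()
r  = mosaic[0::2, 0::2]   # red photosites
g1 = mosaic[0::2, 1::2]   # first green channel
g2 = mosaic[1::2, 0::2]   # second green channel
b  = mosaic[1::2, 1::2]   # blue photosites

# After running the mosaic through your pipeline, compare the decoded
# values against these knowns: any clipping, banding, or green-channel
# imbalance shows up as a deviation from the original ramp.
print(r.size, g1.size + g2.size, b.size)  # 230400, 460800, 230400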

I have done some tests with sunsets, going from very dark to very bright, and all I know is that the result has noticeably less color depth and 'accuracy' than a RED camera using identical Canon lenses and post-processing (ProRes 4444 and Apple Color, or DaVinci, or Premiere, with DNGs from either raw2dng or RAWMagic). Granted, the sensor crop, the debayering, etc. are completely different.

So I challenge you: prove your claims by demonstration rather than conjecture.
Thx Mike