5D3 Raw 4:2:2 or 4:4:4?

Started by ilia, June 24, 2013, 07:03:52 PM


ilia

Is the 14-bit raw output 4:2:2 or 4:4:4 color? Any benefits to rendering out the DNG sequence to ProRes 4444?

Viente

As far as I know, DNG is in the RGB domain, which doesn't have the chroma subsampling concept of YUV.

Felixlgr

RAW is RAW: no compression in color space...

When you render to ProRes 4444 it supports almost the full bandwidth (12-bit) of the 14-bit raw files... be sure to render to this codec for maximum quality. All ProRes 422 flavours are 10-bit.

From Apple ProRes Whitepaper

"Apple ProRes 4444 supports image sources up to 12 bits and preserves alpha sample depths up to 16 bits. All Apple ProRes 422 codecs support up to 10-bit image sources, though the best 10-bit quality will be obtained with the higher bit rate family members—Apple ProRes 422 and Apple ProRes 422 (HQ)."

As far as I know there is no video codec that supports the full 14-bit info of the raw files, except some hardware-driven 16-bit codecs. That's why the raw workflow using DaVinci Resolve round trips is, to me, the best road to go if you want the maximum quality from your raw files.

pavelpp

When I look at ffmpeg documentation for -pix_fmts it lists yuv444p14be and yuv444p14le formats, which should be good enough, no? Why is ProRes better?

deleted.account

ffmpeg provides 16-bit RGB and 4:4:4 rawvideo YCC, described as rgb48be/rgb48le and yuv444p16be/yuv444p16le, the be/le depending on whether the least significant byte comes first or not. Whatever color space and gamut preferred, linear or gamma encoded. A bit academic really, as it comes down to what the grading app or NLE supports.
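
For a rough sense of the per-frame cost of those formats, here is a back-of-envelope comparison (nominal bits per pixel; in memory ffmpeg pads 10- and 14-bit samples into 16-bit words, so real buffers are larger):

```python
# Nominal bits per pixel for a few ffmpeg pixel formats at 1080p.
formats = {
    "rgb48":     3 * 16,       # 16 bits each for R, G, B
    "yuv444p16": 3 * 16,       # 4:4:4, no chroma subsampling
    "yuv444p14": 3 * 14,       # 14-bit samples, 4:4:4
    "yuv422p10": 10 + 5 + 5,   # 4:2:2 halves each chroma plane
}
w, h = 1920, 1080
for name, bpp in formats.items():
    print(f"{name:10s} {bpp} bpp  ~{w * h * bpp / 8 / 1e6:.1f} MB/frame")
```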

My understanding is that raw is not even in a color space, nor is it 14-bit per channel in the sense of, say, 10-bit-per-channel YCbCr ProRes or 16-bit-per-channel RGB48. 4:4:4:4 is with alpha, 4:4:4 is not.

When the raw data is 'developed' into more usable RGB data it can be matrixed into a color space like sRGB, Adobe RGB, ProPhoto, XYZ etc., either baked in or interpreted in a color-managed raw handling app; subsequent color processing is then done on RGB data, not raw.
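
As a toy sketch of that matrixing step (the 3x3 coefficients below are invented for illustration; real values come from the camera profile or the DNG metadata):

```python
import numpy as np

# Hypothetical camera-RGB -> linear sRGB matrix (made-up coefficients).
cam_to_srgb = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
])

def apply_matrix(rgb, m):
    """Apply a 3x3 color matrix to an (H, W, 3) linear RGB image."""
    return np.clip(rgb @ m.T, 0.0, 1.0)

demosaiced = np.random.rand(4, 4, 3)   # stand-in for developed raw data
srgb = apply_matrix(demosaiced, cam_to_srgb)
```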

Only a small proportion of the operations done in creating a 'grade' or 'look' are actually done on raw data, i.e. raw to RGB at the best bit depth the app can muster. From then on most ops are done on RGB data, preferably in Lab.

The benefit of importing raw over 16bpc intermediates is control over WB, debayer algorithm etc., and avoiding all the intermediate storage.

**EDIT** Posted at the same time as pavelpp. The answer: application support. Avisynth is the only tool I know of that will handle those ffmpeg formats, and plugin support is limited, Dither tools being one.

Videop

Hi, my first post here!

This I don't get... Uncompressed 10-bit video with YCC 4:2:2 subsampling is approx. 120 MB/s.

How on earth can 14bit raw be less?

Going from 10-bit to 14-bit means an increase of 40% in data, and using full color sampling (equivalent to 4:4:4) should increase the bitrate further.

Also, since the "raw" file we record to CF is something like 1080p, the sensor output must obviously have been manipulated from "all pixels" to 9:1 pixel binning or line skipping, so the term raw could hardly apply. Debayering must have taken place in the DIGIC processor already.

14bit color depth x three primary colors x 1920x1080 pixels x 24fps = 2090Mbit/s

2090Mbit/s equals approx. 261MB/s.
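
Spelled out in code (the "three primary colors" factor is the assumption in question here, as the replies below explain):

```python
# The calculation above: 14 bits x 3 channels x 1080p x 24 fps.
bits_per_sec = 14 * 3 * 1920 * 1080 * 24
print(bits_per_sec / 1e6, "Mbit/s")        # ~2090 Mbit/s
print(bits_per_sec / 8 / 1e6, "MB/s")      # ~261 MB/s

# Uncompressed 10-bit 4:2:2 for comparison (20 bits/pixel effective):
print(20 * 1920 * 1080 * 24 / 8 / 1e6, "MB/s")   # ~124 MB/s
```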

I use Sandisk Extreme Pro CF with max 90MB/s.

Someone please enlighten me.  :o :D


deleted.account

There's an abundance of info on the web about camera raw. Best search and be enlightened. Suffice to say camera raw is not 3 channels at 14bit color depth.

g3gg0

http://en.wikipedia.org/wiki/Bayer_filter will explain most of it.
You don't have RGB values for every single pixel, so 1/3 of the data amount.
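
In numbers, for 1080p24 (a quick sanity check, assuming one 14-bit value per photosite):

```python
# Bayer raw: one 14-bit sample per pixel, not three.
bayer = 14 * 1920 * 1080 * 24        # bits per second off the sensor
rgb   = 14 * 3 * 1920 * 1080 * 24    # bits per second after demosaicing
print(bayer / rgb)                   # 0.333... -> one third of the data
print(bayer / 8 / 1e6, "MB/s")       # ~87 MB/s, under the 90 MB/s card above
```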

Videop

Thanks!

I will do more homework, but I don't see how 14-bit raw (without any kind of subsampling) can be more efficient at storing (more) info than 10-bit YUV 4:2:2 uncompressed. It just doesn't make sense.


vroem

Maybe other kinds of compression are better, but bayer is what comes off the sensor, that's why it's referred to as raw (raw meaning unmodified*). And you won't get more information by recompressing it, I'm sure you understand that.
You could however influence quality when choosing the debayering method.

You really need to learn what a bayer pattern is. But even if you don't, you should know that one of the reasons all color cameras (except for 3CCD ones) get their color from an optical bayer filter is because it's a very efficient way of compressing the data.

There I said it. A bayer filter is like an optical compressor applied before capturing.

*) You can do some transformation on raw that results in raw. An example is line skipping: by repetitively skipping a multiple of 2 H/V lines every n lines, you end up with another true bayer pattern. This is what happens to all ML raw videos that use full sensor width, except on the 5D3.
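
A toy numpy sketch of that footnote (not camera code, just to see why the skipped result is still a valid mosaic):

```python
import numpy as np

# Build a small RGGB mosaic, then keep one line of every three in
# both directions. Each skip of 2 lines flips the even/odd phase of
# the 2x2 Bayer tile, so the result is again a true Bayer mosaic.
mosaic = np.tile(np.array([["R", "G"], ["G", "B"]]), (6, 6))
skipped = mosaic[::3, ::3]             # 1 pixel of every 9 survives
print(skipped)                         # rows alternate R G R G / G B G B
```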

Videop

Quote from: vroem on June 28, 2013, 02:48:18 PM
Maybe other kinds of compression are better, but bayer is what comes off the sensor, that's why it's referred to as raw (raw meaning unmodified*). And you won't get more information by recompressing it, I'm sure you understand that.
You could however influence quality when choosing the debayering method.

With you all the way.

Quote
You really need to learn what a bayer pattern is.

Know what it is.

Quote
But even if you don't, you should know that one of the reasons all color cameras (except for 3CCD ones) get their color from an optical bayer filter is because it's a very efficient way of compressing the data.

Or a 3CMOS such as EX1R which has no Bayer pattern or filter.

Quote
There I said it. A bayer filter is like an optical compressor applied before capturing.

Don't get that.

Quote
*) You can do some transformation on raw that results in raw. An example is line skipping: by repetitively skipping a multiple of 2 H/V lines every n lines, you end up with another true bayer pattern. This is what happens to all ML raw videos that use full sensor width, except on the 5D3.

So if ML raw video on the 5D3 uses all the photosites of the sensor, how can the signal be raw? It must bake together groups of photosensors/photosites, and somehow the 14-bit information of something that will end up as 1080p (approx. 2 MP) MUST be a higher-bitrate signal than 10-bit 4:2:2 1080p.

Either I have more to learn than I realize or I suck at getting my message out of my brain. :-)


vroem

Quote from: Videop
14bit color depth x three primary colors x 1920x1080 pixels x 24fps = 2090Mbit/s

There. I corrected that for you (struck out the "three primary colors": raw has one 14-bit sample per pixel, not three).  :)

The Bayer color filter array will filter one color per photosite: either red, green or blue. As a result, for every 4 pixels there is 1 red, 2 green and 1 blue. There are more greens because humans are more sensitive to it. That's what makes bayer "compression". To get true rgb color you will need a debayering (demosaicing) algorithm. It will interpolate for every pixel the missing 2 colors from the neighboring pixels. Typically this is done by photo editing software. By the way: this is where moiré is generated.
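
For illustration only, a minimal bilinear demosaic of an RGGB mosaic could look like the sketch below (real raw converters use much smarter algorithms; this just shows the interpolation idea, using numpy and scipy):

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    """Naive bilinear demosaic of an RGGB mosaic (float array, HxW)."""
    h, w = raw.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = raw[0::2, 0::2]    # red photosites
    g[0::2, 1::2] = raw[0::2, 1::2]    # greens on the red rows
    g[1::2, 0::2] = raw[1::2, 0::2]    # greens on the blue rows
    b[1::2, 1::2] = raw[1::2, 1::2]    # blue photosites

    # Average the known neighbours to fill in the missing samples.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])

raw = np.random.rand(8, 8)             # stand-in for a 14-bit mosaic
rgb = demosaic_rggb(raw)               # (8, 8, 3): three values per pixel
```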

The reason why cameras use either a bayer filter (or prism in 3CCD/3xCMOS) is simply because in the end all sensors are grayscale, so we need to filter each color and feed it separately to the sensor. Either on photosite level (bayer) or else on sensor level (3ccd).

About 5D3: I'm not sure ML devs know the exact technique it uses to downscale to Full HD.
Here is what I know:
- The output is 14bit bayer, same format as the sensor but 1/3rd of the resolution in both dimensions, so exactly 1/9th the information
- 5D3 probably uses all pixel information in its downscaling (unlike line skipping which typically uses one pixel out of nine)
- Its downscaling is better than line skipping: less moiré and aliasing artifacts
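
A quick check of that 1/9 figure, assuming the 5D3's roughly 5760x3840 photosite mosaic:

```python
# 1/3 of the resolution in each dimension -> 1/9 of the samples.
sensor_w, sensor_h = 5760, 3840              # approx. 5D3 sensor (22 MP)
out_w, out_h = sensor_w // 3, sensor_h // 3  # 1920 x 1280
print((out_w * out_h) / (sensor_w * sensor_h))  # ~0.111, i.e. 1/9
```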

Videop

Quote from: vroem on June 29, 2013, 12:12:51 AM
There. I corrected that for you.  :)

The Bayer color filter array will filter one color per photosite: either red, green or blue. As a result, for every 4 pixels there is 1 red, 2 green and 1 blue. There are more greens because humans are more sensitive to it. That's what makes bayer "compression". To get true rgb color you will need a debayering (demosaicing) algorithm. It will interpolate for every pixel the missing 2 colors from the neighboring pixels. Typically this is done by photo editing software. By the way: this is where moiré is generated.

Ok, I think we are close now... So a 22-megapixel sensor like in the 5D3 means 22 million photosites, not 22 million groups of four (two green, one blue, one red) photosites or actual photo-sensing elements?

Quote
The reason why cameras use either a bayer filter (or prism in 3CCD/3xCMOS) is simply because in the end all sensors are grayscale, so we need to filter each color and feed it separately to the sensor. Either on photosite level (bayer) or else on sensor level (3ccd).

This I understand.

Quote
About 5D3: I'm not sure ML devs know the exact technique it uses to downscale to Full HD.
Here is what I know:
- The output is 14bit bayer, same format as the sensor but 1/3rd of the resolution in both dimensions, so exactly 1/9th the information
- 5D3 probably uses all pixel information in its downscaling (unlike line skipping which typically uses one pixel out of nine)
- Its downscaling is better than line skipping: less moiré and aliasing artifacts

But in that case wouldn't the info actually be "de-bayered"?
Anyway, I'm looking forward to finding out how this is managed in the 5D3.

Thanks!


togg

Quote from: vroem on June 29, 2013, 12:12:51 AM
- 5D3 probably uses all pixel information in its downscaling (unlike line skipping which typically uses one pixel out of nine)
- Its downscaling is better than line skipping: less moiré and aliasing artifacts

strange stuff :))

vroem

Quote from: Videop on June 29, 2013, 01:34:19 AM
Ok, I think we are close now... So a 22-megapixel sensor like in the 5D3 means 22 million photosites, not 22 million groups of four (two green, one blue, one red) photosites or actual photo-sensing elements?
No, every photosite generates one pixel. This is true for all digital imaging sensors.
So the 5D3 has 22M photosites = 22M pixels = 11M green pixels + 5.5M red pixels + 5.5M blue pixels.
Before demosaicing each pixel has only a 14-bit brightness value; demosaicing reconstructs the full RGB color for every pixel by interpolation.

Quote
But in that case wouldn't the info actually be "de-bayered"?
Anyway, I'm looking forward to finding out how this is managed in the 5D3.
Whatever 5D3 does, its raw video output is Bayer.

Redrocks

Quote from: vroem on June 28, 2013, 02:48:18 PM

You could however influence quality when choosing the debayering method.


I watched an FXPHD course recently about the RED workflow, and Mike Seymour was talking about handling the raw files. He advised storing the raw files, saying that debayering techniques are constantly improving and that they test out new software with footage shot in '07 and see improvements in 'quality'. Does this principle apply to ML RAW?

vroem

Of course. It applies to all raw.

Videop

Quote from: vroem on June 29, 2013, 02:54:36 AM
No, every photosite generates one pixel. This is true for all digital imaging sensors.
So the 5D3 has 22M photosites = 22M pixels = 11M green pixels + 5.5M red pixels + 5.5M blue pixels.
Before demosaicing each pixel has only a 14-bit brightness value; demosaicing reconstructs the full RGB color for every pixel by interpolation.

Thanks, that explains a lot to me. I thought Bayer-type camera sensors typically had something like four photosites for one pixel... just like an LCD display has three (one R, one G, one B) photodiodes for each pixel.

I guess that also explains why a certain signal can be smaller in data size in "non-debayered"/raw form as compared to a YUV or RGB signal, where new information has actually been added (interpolated).

As hinted earlier but not understood by me, my calculation was three times too big. Dividing my number by 3 actually results in a data rate very close to the actual ML 14-bit raw output.

Thanks!

pavelpp

Quote from: vroem on June 29, 2013, 12:12:51 AM
There. I corrected that for you.  :)

The Bayer color filter array will filter one color per photosite: either red, green or blue. As a result, for every 4 pixels there is 1 red, 2 green and 1 blue. There are more greens because humans are more sensitive to it. That's what makes bayer "compression". To get true rgb color you will need a debayering (demosaicing) algorithm. It will interpolate for every pixel the missing 2 colors from the neighboring pixels. Typically this is done by photo editing software. By the way: this is where moiré is generated.

The reason why cameras use either a bayer filter (or prism in 3CCD/3xCMOS) is simply because in the end all sensors are grayscale, so we need to filter each color and feed it separately to the sensor. Either on photosite level (bayer) or else on sensor level (3ccd).

About 5D3: I'm not sure ML devs know the exact technique it uses to downscale to Full HD.
Here is what I know:
- The output is 14bit bayer, same format as the sensor but 1/3rd of the resolution in both dimensions, so exactly 1/9th the information
- 5D3 probably uses all pixel information in its downscaling (unlike line skipping which typically uses one pixel out of nine)
- Its downscaling is better than line skipping: less moiré and aliasing artifacts

Very good summary.