DNG to ProRes questions (422 vs 422HQ vs 4444)

Started by Thrash632, September 19, 2013, 08:33:18 AM

Thrash632

Hello,

I have been testing shooting raw on my 5d mark iii with magic lantern. So far I gotta say that I am blown away by the image quality. I'm preparing to shoot a short film in raw and had a few questions.

My workflow is as follows:

1. convert from .raw to .dng using rawtodng
2. import dng sequence into After Effects (cs5.5) (ACR settings: auto white balance, linear curve, adjustments set to 0 for brightness, contrast, exposure, etc, this way I avoid flickering)
3. export to ProRes to edit in Premiere

My questions have to do with the third step. I've tested exporting the same dng sequence to ProRes 422HQ and 4444. When I make adjustments in Premiere, such as boosting the saturation, adjusting highlight/shadows, using the color corrector, etc, the results are exactly the same for both the 422HQ and 4444 files. As a matter of fact, if I push the saturation higher than 10 using the Color Balance (HLS) filter, the colors blow out. They blow out equally in both the 422HQ and 4444 files.

Correct me if I am wrong, but it was my understanding that ProRes4444 retains more colors than ProRes 422HQ (12bit vs 10bit). So why are they reacting the exact same way? Where would I notice a difference between the two?

My second question is how to transcode to ProRes444 rather than 4444. I do not know how to disable the alpha channel. When exporting from After Effects, I made sure to use the lower bit depth when exporting. But when I put the clip in Premiere, it says that there is an alpha channel still.

Any advice would be appreciated, thank you!

deleted.account

Rather than get someone to explain chroma subsampling to you, perhaps first search the term on the www. It's explained so many times it becomes nauseating. :-)

ProRes 4444 is completely and utterly pointless to convert to from camera raw, it's 12bit because it's 10bit Y'CbCr + an alpha channel, as there is no alpha channel in the source and it's fricking easy to create one further down the chain and perhaps save disk space.

Blow out, do they really blow out? Or just appear to on your 8bit display-referred monitor via the Rec.709 / sRGB gamma curve? In reality, in a 32bit float workspace they don't blow out but still appear to; in an 8 or 16bit workspace they would clip; and in all cases they appear to blow out because you are driving values beyond what a display can show.

But chroma subsampling has nothing to do with blow outs. :-)
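
(A rough NumPy sketch of the point above, treating the boost as a simple multiply purely for illustration: in a float workspace the out-of-range values survive, in an 8bit buffer they are clipped, and in both cases a display shows them as "blown".)

import numpy as np

pixels_float = np.array([0.2, 0.7, 0.95], dtype=np.float32)   # display range is 0.0-1.0
pixels_8bit = np.array([51, 179, 242], dtype=np.uint8)        # display range is 0-255

boosted_float = pixels_float * 1.5                                              # values above 1.0 are kept
boosted_8bit = np.clip(pixels_8bit.astype(int) * 1.5, 0, 255).astype(np.uint8)  # values above 255 are gone

print(boosted_float)   # roughly [0.3 1.05 1.425] -- data kept, but still looks blown on screen
print(boosted_8bit)    # [ 76 255 255]            -- data actually clipped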

Thrash632

I didn't know to google chroma subsampling because I don't know what it is. That's why I'm asking here. Now that you brought it to my attention I will do some research on it.

I understand that ProRes4444 is pointless because the original raw dngs do not contain an alpha channel. I want to know how to save to ProRes444 using Adobe After Effects and Media Encoder. I don't see a setting for it. I've googled it with no luck.

I'm on a MacBook Pro, circa summer 2010. So these monitors are only 8bit? What's the point of having a 16bit and 32bit workspace in After Effects if you can't view higher than 8bit?

When I grade the original dngs using ACR and bring up the saturation, the colors don't appear to blow out. So I'm confused as to how my monitor would be the problem.

Midphase

There is no such thing as ProRes 444, only ProRes 422HQ which is really fine for what you need.

But y3llow is right, you need to get better at your internet search skills:

http://documentation.apple.com/en/finalcutpro/professionalformatsandworkflows/index.html#chapter=10%26section=2%26tasks=true


dmilligan

Chroma subsampling reduces color resolution, but not color depth (unless of course the codec you use also reduces bit depth). I'm not sure what bit depth ProRes uses, but it may be that you are losing color flexibility because ProRes is a lower bit depth than the 14bit raw from the camera.

You could heavily chroma subsample (e.g. 4:1:1) and keep the bit depth at 14, and you would still retain your "pushability" of the colors. You can also do no chroma subsampling (4:4:4) and still drop the bit depth to 8, and lose your color flexibility.

I recommend 422, because the human eye cannot see color in high resolution like it can luminance, and there's pretty much no reason to keep all the color resolution of the original (impossible to tell the difference really).
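
(To make that distinction concrete, a rough NumPy sketch with made-up numbers: subsampling throws away chroma samples, bit-depth reduction throws away precision per sample.)

import numpy as np

cb = np.random.default_rng(0).integers(0, 2**14, size=(4, 8))   # a made-up 14bit chroma (Cb) plane

# Chroma subsampling (4:2:2-style): keep every other column, so half the
# color resolution, but each remaining sample is still a full 14bit value.
cb_subsampled = cb[:, ::2]

# Bit-depth reduction: keep every sample, but quantize 14bit -> 8bit, so full
# color resolution yet far less precision to push around in a grade.
cb_8bit = cb >> 6

print(cb.shape, cb_subsampled.shape)       # (4, 8) -> (4, 4): fewer chroma samples
print(int(cb.max()), int(cb_8bit.max()))   # up to 16383 levels vs at most 255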

maxotics

Quote from: y3llow on September 19, 2013, 09:31:55 AM
Rather than get someone to explain chroma subsampling to you, perhaps first search the term on the www. It's explained so many times it becomes nauseating. :-)

It's just as nauseating to listen to people on this forum tell others to "look it up on the Internet".  If you don't have something nice to say, why say it?  Either answer the question, or don't.

What's upsetting to me, Y3llow, is that you do seem to have this knowledge and I found what you wrote VERY interesting. I feel I've learned a bit more about this subject; however, not all of it. That's the way with reading this stuff on the www. I read it. I still don't get how it applies to Magic Lantern RAW.

For example, I want to convert TIFFs from RAW to a clip using ffmpeg.  I've searched everything I can find and I still don't know what CODEC or settings would give enough, but not too much, quality.

I like Cineform 422.  But I don't know what would be the closest in ProRes or H.264.  Again, I read, but I just don't know it as well as you.

Finally, if you're going to direct someone to the www, be specific. Please point me to the www page where I can get my questions answered and not annoy you too ;)

Hope you don't feel I'm attacking you.   What I'm really trying to say is I appreciate when someone like you responds.  No need to remind people like Thrash and me that we're dolts :)

Thrash632

Thanks maxotics, you understand where I am coming from.

Magic Lantern raw can revolutionize the film industry. If we really want this to happen then we need to help one another and spread knowledge far and wide. I did hours of research before I made this post, so it's not encouraging when someone tells me that I'm not searching well enough. If we want to encourage people to use ML raw, we need to help them when they have questions. Otherwise how can we expect ML raw to really take off? I didn't mean to go on a rant, but anyways...

I have a few more questions re: going from DNG to ProRes

1. I've read conflicting info re: setting the After Effects project settings bpc to 8, 16 or 32. I've tested all 3 and when I export to ProRes and grade in Premiere, the colors blow out the same. Which bpc project setting should I use? Am I correct that since the DNGs are 14bit, 16bpc is more than enough? Or are these not the same units of measurement?

2. When exporting to ProRes4444, I have the option of Gamma Correction. Should I leave this to None or Automatic? Could this be why the prores colors are blowing out when I up the saturation a tiny bit?

The colors blowing out when I up the saturation on the ProRes4444 files has me stumped. If I bring the saturation higher than 5 or 10, the colors blow out. I am able to push the regular .h264 files from my 5d mark iii much further. That just shouldn't be.

The only thing I can think of is that somehow the color bit depth is getting lowered when converting from DNG to ProRes4444.

Midphase

Trash,

I think exporting through AE is not the best solution.

Why don't you use Resolve?

Thrash632

Quote from: Midphase on September 19, 2013, 10:12:13 PM
Trash,

I think exporting through AE is not the best solution.

Why don't you use Resolve?

Unfortunately Resolve won't even load up on my computer. I have a Summer 2010 17in MacBook Pro. The graphics card isn't compatible. Resolve 9 would SOMETIMES boot up and work, but 10 just crashes when I try to boot it every time.

maxotics

I tried Resolve 10 on my Windows machine and it wouldn't run either.  Finally discovered I needed to update my Microsoft DirectX.  I know this doesn't apply to you, Thrash, but it might help others who read this thread.

Thrash632

Okay, so I just figured something out:

I imported the After Effects sequence containing the original DNG files directly into Premiere. I then applied the Color Balance filter and boosted the saturation to 15. The colors blew out exactly like they did with the ProRes file.

This means that the problem is not when transcoding from DNG to ProRes. The problem has to do with Premiere. Could it have to do with a change in color space or something? I'm going to do some more research.

deleted.account

Quote from: Thrash632

I've tested exporting the same dng sequence to ProRes 422HQ and 4444.

Quote from: Thrash632 on September 19, 2013, 07:37:51 PM
I didn't know to google chroma subsampling because I don't know what it is. That's why I'm asking here. Now that you brought it to my attention I will do some research on it.

So you want to export to formats you don't understand. A quick net search on ProRes, or reading the documentation, would have explained what the difference between 4:2:2 & 4:4:4 is, and therefore what chroma subsampling is, without someone having to write paragraphs explaining it.

Quote
I'm on a MacBook Pro, circa summer 2010. So these monitors are only 8bit? What's the point of having a 16bit and 32bit workspace in After Effects if you can't view higher than 8bit?

When I grade the original dngs using ACR and bring up the saturation, the colors don't appear to blow out. So I'm confused as to how my monitor would be the problem.

So you're kinda doing it again: why do you expect anyone to explain workspace bit depth and processing at higher bit depths? Have you looked at the documentation on Adobe AE or Resolve workspace bit depth? I've said enough for you to really look for yourself.

Quote from: maxotics on September 19, 2013, 09:23:26 PM
It's just as nauseating to listen to people on this forum tell others to "look it up on the Internet".  If you don't have something nice to say, why say it?  Either answer the question, or don't.

I'd rather say little than talk a lot and say very little ;-). Point in the right direction and they fill in the rest. But I don't see why I should have to defend myself to you.

Quote from: maxotics
What's upsetting to me, Y3llow, is that you do seem to have this knowledge and I found what you wrote VERY interesting. I feel I've learned a bit more about this subject; however, not all of it. That's the way with reading this stuff on the www. I read it. I still don't get how it applies to Magic Lantern RAW.

For example, I want to convert TIFFs from RAW to a clip using ffmpeg.  I've searched everything I can find and I still don't know what CODEC or settings would give enough, but not too much, quality.

I like Cineform 422.  But I don't know what would be the closest in ProRes or H.264.  Again, I read, but I just don't know it as well as you.

The thread is about ProRes, the differences in chroma subsampling, and how to export from Adobe products, not your problems.

Quote
Finally, if you're going to direct someone to the www, be specific. Please point me to the www page where I can get my questions answered and not annoy you too ;)

Personally I'll put in the same amount of effort as the OP and no more.

Quote
Hope you don't feel I'm attacking you. What I'm really trying to say is I appreciate when someone like you responds. No need to remind people like Thrash and me that we're dolts :)

You were attacking me, and adding the self-deprecating remark about dolts doesn't change that. The thing is, for those asking dumb-ass questions: appreciate that people go to great efforts to produce effective search engines, documentation for products, free training material, wikis, etc. They do that to help, so why not use it?

Quote from: Thrash632 on September 19, 2013, 11:19:15 PM
Okay, so I just figured something out:
Could it have to do with a change in color space or something? I'm going to do some more research.

Monitor bit depth and gamut. Is there a reason you want to push the saturation etc. beyond the norm?

RenatoPhoto

Please keep foul language and personal attacks out of this forum, this applies to all involved.
http://www.pululahuahostal.com  |  EF 300 f/4, EF 100-400 L, EF 180 L, EF-S 10-22, Samyang 14mm, Sigma 28mm EX DG, Sigma 8mm 1:3.5 EX DG, EF 50mm 1:1.8 II, EF 1.4X II, Kenko C-AF 2X

maxotics

Can we start from the beginning again?  This is what I understand, and don't understand. (Thrash, hope you don't mind my questions).

A Bayer sensor works by using separate pixels for Red, Green and Blue (RGB).  They are not in equal numbers, and because of various other factors, are de-bayered; that is, rolled up into one value.  So the RGB is added together to give a single color.  That number would be 255*255*255?  Plus luminance?  You can represent colors in many ways, as frequencies too, so the numerical values we assign to colors are ARBITRARY.  Some codecs just assign a number from 1-256, say, for each color, knowing they'll blend on the screen to give you other colors.

IN FACT all the confusion begins because humans have limitations in how much difference they can see in color, resolution, brightness AND how much difference devices will show (different monitors, iPhones, TVs, etc.)  I want to point out, that because these limitations are often unknown, and difficult to quantify, NO QUESTION in my mind is stupid about this stuff :)  Every person and company has a different strategy for compressing data because, and this MUST be pointed out--most computers cannot display 30fps 1080p video uncompressed.  If they could, if for the sake of argument computers were 100x more powerful than they are, none of this would be difficult because video would be shown in very close to its RAW form.

Midphase wrote, "There is no such thing as ProRes 444, only ProRes 422HQ which is really fine for what you need."  And then posted a link to Apple's description of various CODEC.

Apple writes, about the 444 Codec: The R, G, and B channels are lightly compressed, with an emphasis on being perceptually indistinguishable from the original material.

What does that mean, "perceptually indistinguishable from the original material"?  Is it a marketing boast and I don't really need it, or do I?  How am I supposed to know whether that is important, OR, as Apple writes, "The Apple ProRes 422 (HQ) codec offers the utmost possible quality for 4:2:2 or 4:2:0 sources"?

Is Magic Lantern a 4:2:2 or 4:2:0 source? I thought it was 4:4:4.  And if that is so, shouldn't one use the 444 codec?  Again, and please don't take this the wrong way, but how can you be sure that 422 "is really fine for what you need"?  Again, I don't doubt that you're right.  I just don't understand WHY.  How does one know what I really want?  Or, maybe the answer was here: "utterly pointless to convert to from camera raw, it's 12bit because it's 10bit Y'CbCr + an alpha channel, as there is no alpha channel in the source and it's fricking easy to create one further down the chain and perhaps save disk space."

Again, I'm sure that's right, but that's a LOT of information in one sentence.  "it's 12bit because it's 10bit Y'CbCr + an alpha channel" then "as there is no alpha channel in the source" but EASY to create one further down the chain?  Create something out of nothing.  I'm lost.  SORRY!

It is not clear on Apple's site, or anywhere else.

Another difference is 444 has "Lossless alpha channel with real-time playback".  Why is that not important?  Again, my guess is that for online video it isn't.  But if Thrash wants to show his end-product in a movie house?  Then what?

The bottom line is that all these companies are marketing their solutions as the best, Apple, Blackmagic, Adobe, etc.  They don't care if you don't really need what they sell.  This is what I'm up against.  And Thrash.  Yes, there are people who try to explain this stuff objectively.  It's helpful.  But at the end of the day, how can one decide what to use, or not use, unless they ask what may appear to be a stupid question :)




NedB

@maxotics: You've got the Bayer stuff almost right but not quite. (See http://en.wikipedia.org/wiki/Bayer_filter.) A Bayer filter (an array of RGB color filters) sits above the sensor. This causes each pixel below the Bayer filter to be sensitive to the colors Red, Green OR Blue. The filter pattern is 50% green and 25% each red and blue (because the human eye is much more sensitive to changes in the level of green light than changes in blue or red). In the case of the sensors used in the Canon DSLR's, each pixel records a 14-bit value from 0 to 16,383 ((2 to the 14th power)-1). These values are not "de-bayered", but they are, in a sense, "rolled up into one value". It's just that the value simply represents how much of red, green, OR blue light reached the individual pixel during the time the sensor was collecting light (the shutter interval). It's not a mixture of R, G, and/or B. Luminance does not enter this equation at all. It only comes into play when we try to reduce the amount of data we are going to have to move around and/or store, via chroma sub-sampling.

The "raw" output of the Bayer is a Bayer pattern image, which must thereafter be "Debayered". Debayering is a demosaicing algorithm which uses complex mathematics to interpolate values of R, G, and B for each individual pixel. [The resulting data can be thought of as 4:4:4, simply because there has been no compression (or "sub-sampling") of the chroma channels yet. Each pixel has its own values for R, G and B. In contrast, H.264 data (remember that old Canon video stuff??) can be thought of as 4:2:0 (Google it)]. There are many different debayering algorithms, which vary in quality and speed. So ML raw is a sequence of 14-bit values, one for each of the x times y (width x height) of the captured frame. That's why the size (in bytes) of an ML raw frame can be calculated by (width x height x 14 bits / 8 bits per byte) or 1.75 w x h bytes.

There is a lot more in your post to respond to. If you would let me know whether or not the above is helpful, I will elaborate. Some of this ML stuff is quite complicated (my jaw hangs open sometimes, reading a1ex's posts...), and it's no wonder some are confused. I suppose the manufacturers do assume a level of advanced knowledge of the basics which may be above that of the average ML user, or even most ML users. Cheers!
550D - Kit Lens | EF 50mm f/1.8 | Zacuto Z-Finder Pro 2.5x | SanDisk ExtremePro 95mb/s | Tascam DR-100MkII

maxotics

@NedB  Very, very helpful.  Thanks.  I'm interested in Thrash's overall question about Codecs so going to try to fill in the missing pieces.  If this is covered somewhere else, point me to it.  Otherwise, when we're finished, someone (I'd be happy to do it) can create a post that walks someone through the real-world answer to Thrash's original question.

So what Y3llow was saying is that the Canon camera does not capture Luminance, it is already baked into the value of how much Red, say, a pixel received under a filter.  When he says "create it downstream" he means you could create a separate luma value, if that's what you really wanted for some reason, though he saw no reason since you already have it, in a very good way to begin with.  Is that right?

Sorry to be dense.  So when my EOS-M captures in crop mode 1280x720. 

It is saving 1280 x 720 x 3 (RGB pixels) x 14 bits = 38,707,200 bits, which divided by 8 is 4.8 megabytes?

Or are you counting the value from the 3 RGB pixels as one? so

1280 x 720 x 14 bits = 12,902,400 bits, or (divided by 8) 1.6 megabytes per frame?

I answered part of my own question.  I have a 394 meg file at 250 frames which equals 1.576 megabytes per frame.

If that's so, doesn't it mean that Canon or ML already debayers the 14bits per RGB pixel?

NedB

@maxotics: You're welcome, no problem. As for the further questions:

1. Right, Canon DSLR's do not capture luma (there is a subtle, somewhat geeky, difference between luma and luminance, which is irrelevant for our purposes, I think...) directly. Rather, the value can be calculated from the values of R,G and B. Each pixel receives a certain amount of light, filtered through either a red, green or blue filter. This amount of light is stored as a 14-bit value.

2. An alpha channel is used to indicate which parts of an image are transparent and which are not. It is simply a grayscale image, where the parts of the image which should be transparent (i.e. where a background plate should show through when layered behind it in, say, After Effects or some other compositing program) are black, the parts which are completely opaque are white, and the parts which are semi-transparent are gray, with the amount of gray being proportional to the level of transparency. (There is a tiny sketch of an alpha plane just after point 5 below.) I imagine what Y3llow means by "create it downstream" is that there is no reason to transcode Canon raw into any format which has an alpha channel, since this alpha channel (not a luma channel as you may have thought) can be created in a compositing program (by "keying" out a greenscreen, for example) at some later point. The raw frames themselves can remain without an alpha channel. For this reason, ProRes4444 is somewhat of an overkill. However...

3. ...you do have a point in that ProRes422 or 422HQ do sub-sample the chroma (that is, decrease the resolution of the color channels after the signal has been transformed from RGB to YCbCr). So there is some loss in this transcode, but it is probably all but invisible to the eye. If you are not doing a lot of green/blue-screen keying, then you almost certainly don't need 4444.

4. (1280 x 720 pixels x 14 bits/pixel) / 8 bits/byte = 1,612,800 bytes/frame. In this calculation, you will notice there is no RGB. Canon raw video is just that, raw, meaning not yet debayered or demosaiced. So this means that...

5. No, neither Canon nor ML debayers the raw video which ML captures. The only debayering going on in the camera is when you record H.264 video. The Canon feeds the raw video to a "black box" (meaning non-Canon people can't easily see inside it) encoder chip which decreases the resolution by about a third (varies in method and amount and also by camera) and encodes the result (possibly with a small up-rez, for cameras like the 550D, etc.) into H.264 at 1920x1080 or whatever you set in the Canon menu.
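
(Point 2 above as a toy sketch, purely illustrative and assuming NumPy: an alpha plane is just one extra grayscale channel stacked next to R, G and B.)

import numpy as np

h, w = 4, 4
rgb = np.full((h, w, 3), 255, dtype=np.uint8)   # a plain white image

alpha = np.zeros((h, w), dtype=np.uint8)        # 0 = fully transparent
alpha[:, 2:] = 255                              # 255 = fully opaque
alpha[:, 1] = 128                               # mid-gray = semi-transparent

rgba = np.dstack([rgb, alpha])                  # RGB + alpha, i.e. the "4444" idea
print(rgba.shape)                               # (4, 4, 4): just a 4th plane, nothing magic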

The fact that the Canon cameras deliver a down-rezzed (from the full sensor resolution), but not yet debayered, video stream to the LiveView monitor is the happy accident which allowed the ML devs to figure out how to save this raw stream. If this stream did not already exist in these cameras, it is unknown whether it would have been possible to get raw video out of them.

Whew. Cheers!
550D - Kit Lens | EF 50mm f/1.8 | Zacuto Z-Finder Pro 2.5x | SanDisk ExtremePro 95mb/s | Tascam DR-100MkII

Thrash632

NedB, thanks for your posts. I learned a lot from them.

Since ProRes4444 contains an alpha channel that we do not need and ProRes 422HQ is throwing away color resolution, is there anything in between the two?

When I export from After Effects, I see an option to choose RGB, RGB+Alpha or just Alpha. I choose RGB, but when I import the file into Premiere and view its properties in the Project window, Premiere is telling me that the Alpha channel is still there.

How do we transcode to ProRes444 (ProRes4444 without the alpha channel)? I want to be as efficient as possible in my transcode while obtaining as much original information as possible.

As a side note, I upgraded my OS on my MacBook to Mountain Lion and updated the driver to my graphics card. Now DaVinci Resolve 10 Lite works with no problem! If you have a 2010 MacBook Pro, then do these updates to run DaVinci. Since the color grading tools in Premiere do not play nicely with ProRes files or DNGs linked through After Effects derived from ML Raw, I'm going to mess around with using Resolve.

maxotics

Thanks NedB.  Sorry I have to go back to the beginning still :) 

You wrote "In the case of the sensors used in the Canon DSLR's, each pixel records a 14-bit value from 0 to 16,383 ((2 to the 14th power)-1). These values are not "de-bayered", but they are, in a sense, "rolled up into one value".

I wish the industry had a good word for what you're trying to say, and for me to understand ;)

From my research there seem to be, indeed, two "de-bayering" steps.

The first step is the camera takes grids of 4 pixels, usually 2 green, 1 blue and 1 red, and de-bayers/rolls-up their values to a single R,G,B value.  The sensor data under the filter, you mention, is never made available.  This is true for all cameras.  I would call it de-bayering, but because this information is never output everyone ignores it and leaves it nameless ;)  BUT FOR UNDERSTANDING it should not be ignored?  Am I right?

So from the beginning.  The camera's sensor has homogeneous light-sensing pixels/sensors.  It places a filter over these sensors to have them detect separate colors, R,G,B.  The cameras do NOT use these values, but create new ones, synthetically, from the average of those values.  Also, many of those values are used 2-4 times.  A blue value for one of these sensors may feed the calculation for multiple adjacent pixels. 

The reason this is important to understand, I believe, is you can't think of the pixel data as corresponding to "pixels" on the sensor.  The sensor picks up color values and CREATES what are really abstract values which have been hashed in a one-way value from the physical sensor data.

So when you wrote, "Each pixel receives a certain amount of light, filtered through either a red, green or blue filter. This amount of light is stored as a 14-bit value." I think you might have meant "the camera takes 3 light readings under the red, green or blue filter and creates a 14-bit value that represents a pixel value, which is from the general area of the sensor, but not an exact location as you'd think pixels on your screen."

Whew is right!

Rewind

When NedB says 'each pixel' he means exactly that: EACH pixel. And he's right.
This is how a raw file looks before debayering:

Just the values of all the sensor pixels.
It is totally up to you how to interpret this data later. This is why there are many different demosaicing algorithms out there.

When you're shooting h264, the debayering obviously occurs inside camera (black box). But raw is raw.
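
(Roughly the idea, sketched in Python with NumPy: one value per photosite, laid out in a Bayer mosaic. The RGGB layout and the 14-bit range are assumed for illustration.)

import numpy as np

h, w = 4, 6
raw = np.random.default_rng(1).integers(0, 2**14, size=(h, w))  # one 14-bit value per photosite

# Which color filter sits over each photosite (RGGB layout assumed)
layout = np.empty((h, w), dtype='<U1')
layout[0::2, 0::2] = 'R'
layout[0::2, 1::2] = 'G'
layout[1::2, 0::2] = 'G'
layout[1::2, 1::2] = 'B'

print(layout)   # rows alternate R G R G ... / G B G B ... -- half the sites are green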

maxotics

Sorry, Rewind, I don't think you're correct on this.  You're correct in that we're given three R,G,B values to work with, but they are NOT the values of the sensor pixels.  Of course, you are right, in practical terms, it's what we have to start with.

I found this fascinating, hope you do too!

http://photo.stackexchange.com/questions/9738/why-are-effective-pixels-greater-than-the-actual-resolution


Rewind

Again, raw data contains actual bayer pattern values.

Quote
It places a filter over these sensors to have them detect separate colors, R,G,B.  The cameras do NOT use these values, but create new ones, synthetically, from the average of those values.  Also, many of those values are used 2-4 times.  A blue value for one of these sensors may feed the calculation for multiple adjacent pixels.
This quote is absolute nonsense if we are talking about raw. Averaging (debayering) occurs in camera only for JPEG output.

Quote
You're correct in that we're given three R,G,B values to work with
Not true. We're given R,G,B,G values, and it's up to us how to deal with them. Did you even notice that the actual raw file (use Photivo, for example, with no demosaicing; I told you several times) contains twice as many green pixels as the others? They are actual pixels from the sensor.

NedB

@maxotics: No, I meant it exactly as I wrote it. Rewind is right! Of course I do not know where your research has taken you, so it's difficult for me to refute it step-by-step. But I will tell you that your understanding of debayering is simply incorrect. Each pixel receives a certain amount of light, which is the light coming into the camera through the lens optics, filtered by one (and only one) filter, either red, green or blue. That is, each pixel under a green filter (and about 50% of pixels fit this description) stores the amount of green light hitting it. So also for the pixels under the blue and red filters (25% each of the total number of pixels). It stores this value (per pixel, not per "general area", and not 3 values for R, G and B, but ONE value) as a 14-bit number. The result of all these measurements is the array shown in Rewind's post. If you think of each of these boxes as representing one pixel with a 14-bit value, the "trick" of demosaicing (also called debayering) is to interpolate, from this value and the surrounding values, via complex mathematical algorithms (of which there are many), values of R, G and B for each pixel so as to render an image which looks like what the camera saw. There are no "two steps" to debayering, and no "rolling-up". The sensor data is completely available, and is called the raw data. After debayering, each pixel has a 14-bit value for each of its R, G and B channels.

Simply put, everything you wrote between "From my research..." and "Whew" is simply incorrect. Also, as I stated above, Rewind is absolutely correct. Raw data has a single value for each pixel, but you have to remember that the arrangement of the Bayer filter is known, which means that we can use these two things to give us RGB data. In effect, raw data are indeed the values of each sensor pixel. Only after debayering does each pixel have a value for each of R,G and B.
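
(A bare-bones bilinear demosaic sketch in Python, assuming NumPy/SciPy and an RGGB layout, just to show what "interpolate values of R, G and B for each pixel from the surrounding values" means in practice; real raw converters use much smarter algorithms.)

import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    # raw: 2D array, one value per photosite, RGGB layout assumed
    h, w = raw.shape
    r = np.zeros((h, w)); r[0::2, 0::2] = 1                     # where red was actually sampled
    g = np.zeros((h, w)); g[0::2, 1::2] = 1; g[1::2, 0::2] = 1  # where green was sampled
    b = np.zeros((h, w)); b[1::2, 1::2] = 1                     # where blue was sampled

    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0     # average the 4 green neighbors
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0    # average the 2 or 4 red/blue neighbors

    out = np.zeros((h, w, 3))
    for i, (mask, k) in enumerate([(r, k_rb), (g, k_g), (b, k_rb)]):
        out[..., i] = convolve(raw * mask, k, mode='mirror')    # fill in the missing samples
    return out                                                  # every pixel now has its own R, G and B

Feed it a Bayer mosaic like the one sketched a few posts up and you get back a full-resolution image where each pixel has all three channels, which is all that "debayering" means here.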

In rereading your post, it almost seems as if you are conflating the idea of a Bayer filter with that of color sub-sampling. I can understand this confusion, because the concepts are somewhat similar. But they are not the same concept, and in fact raw data is the data coming "raw" right from the sensor. Please don't continue to argue this point; do some more research. If you still have questions I will try to help.

In the meantime Rewind has also replied, and I might just add: "What he said..." Cheers!
550D - Kit Lens | EF 50mm f/1.8 | Zacuto Z-Finder Pro 2.5x | SanDisk ExtremePro 95mb/s | Tascam DR-100MkII

Midphase

Quote from: Thrash632 on September 20, 2013, 07:34:42 PM
Since ProRes4444 contains an alpha channel that we do not need and ProRes 422HQ is throwing away color resolution, is there anything in between the two?

If you read the Apple link I sent you, then the answer is no. I don't know if AVID DNxHD is any different.

Here is what it comes down to, and it's actually quite simple:

1. How does it look to you?

2. How much space do you have on your hard drive?

If ProRes 422HQ looks good enough to your eyes (it certainly does to mine and to most of my colleagues), then use it and don't worry about all the crazy math which ultimately might not have any effect on your needs.

If you don't mind managing more data and you have plenty of space on your drives, then by all means do everything at ProRes4444 and be done with it. It's the highest quality compressed DI codec you can use unless you go to uncompressed video and DPX image sequences, which you will not be able to play back without a very fast RAID and which will be seriously hard on your computer (but you're welcome to try it if you don't believe me).

maxotics

Don't kill me guys!

NedB, you wrote: "In the case of the sensors used in the Canon DSLR's, each pixel records a 14-bit value from 0 to 16,383 ((2 to the 14th power)-1). These values are not 'de-bayered', but they are, in a sense, 'rolled up into one value'."

Am I correct in concluding then that each pixel is either R,B, or G and if we have a 1280x720 frame, then 460,800 will be green and 230,400 will be blue and 230,400 will be red?

Thanks!  Again, don't kill me :)
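
(A quick check of that arithmetic, assuming the usual 50% green / 25% red / 25% blue Bayer split:)

width, height = 1280, 720
total = width * height    # 921,600 photosites
print(total // 2)         # 460,800 green
print(total // 4)         # 230,400 red
print(total // 4)         # 230,400 blue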