Pixel binning: How does it work with a CMOS RGGB sensor? Is it done before or after the photosite charges are read out and turned into data? Does it really help reduce noise?
Also, I've seen a 5.7K anamorphic build for the 5DM3 that uses 1x3 binning down to 1920 pixels, then unsqueezes the RAW file in post to get back the 5760 horizontal pixels of the sensor. Does that imply pixel binning can be reversed in post?
https://www.magiclantern.fm/forum/index.php?topic=16516.0
Binning is done in the analog domain. We can't un-bin the pixels in post; we unsqueeze the RAW file only to correct the aspect ratio, so there is a quality loss if you compare it to 1:1 (reading every pixel on the sensor without binning or skipping). For example, look at the difference in detail between 1x1, 1x3 and 3x3:
https://www.magiclantern.fm/forum/index.php?topic=16516.msg210023#msg210023
Ignore the 3x1 image; it has a jagged-line issue that was resolved after I made these tests, and I haven't made a new test showing the true quality of 3x1.
Yes, binning does reduce the noise.
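To picture why it can't be reversed, here is a minimal numpy sketch (my own illustration with made-up numbers, ignoring the Bayer pattern): three neighbouring samples get averaged into one on the way out, and "unsqueezing" afterwards only stretches the frame back, the averaged-away detail stays lost, while the averaging itself is what lowers the noise.

```python
import numpy as np

# Illustration only: a flat grey "sensor" with pure read noise, no Bayer pattern.
rng = np.random.default_rng(0)
full = 100.0 + rng.normal(scale=10.0, size=(1080, 5760))

# 1x3 binning: average groups of 3 neighbouring samples along one axis.
binned = full.reshape(1080, 1920, 3).mean(axis=2)

# "Unsqueeze" in post: stretch the squeezed frame back to the original width.
# This only fixes the geometry; the detail averaged away above is gone.
unsqueezed = np.repeat(binned, 3, axis=1)

print(full.std(), binned.std())   # noise drops by roughly sqrt(3)
```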
1:1 pixel readout mode: So it means that all photosites in the captured area of the sensor are recorded?
All pixels are recorded, without binning or skipping.
Oversampling and downsampling: I'm really confused about the difference between these two (if there is any) and when we can use them. I've seen a debayer process that combines 2x2 pixels to form "super" RGB pixels. Is this downsampling? Can we downsample an RGB or YCbCr picture that has already been debayered? On a 1080p screen (like my laptop), will a UHD video file downsampled to 1080p and the same UHD video exported at full resolution look different?
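As a rough illustration of the 2x2 "super-pixel" idea from the question (my own numpy sketch, with random data standing in for a RAW mosaic): each RGGB quad collapses into one RGB pixel, so the output is half the mosaic resolution on each axis, with no interpolation involved.

```python
import numpy as np

# Hypothetical 2x2 "super-pixel" debayer on an RGGB mosaic: each quad
# (R, G, G, B) becomes one RGB pixel, halving resolution on both axes.
def superpixel_debayer(bayer):
    r  = bayer[0::2, 0::2]        # top-left of each quad
    g1 = bayer[0::2, 1::2]        # top-right
    g2 = bayer[1::2, 0::2]        # bottom-left
    b  = bayer[1::2, 1::2]        # bottom-right
    return np.dstack([r, (g1 + g2) / 2.0, b])

mosaic = np.random.default_rng(1).random((3000, 4000))   # stand-in RAW data
print(superpixel_debayer(mosaic).shape)                   # (1500, 2000, 3)
```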
Lossless RAW: I guess it's a RAW video file compressed with lossless compression? So, different from uncompressed RAW?
Lossless RAW uses lossless compression; you can decompress it, using e.g. MLVApp, to get the full size of uncompressed RAW again, with no quality difference and no quality loss at all.
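A tiny sanity check of what "lossless" means (zlib is just a stand-in codec here; ML's lossless RAW reportedly goes through the camera's lossless JPEG engine): decompressing gives back the exact same samples, bit for bit.

```python
import zlib
import numpy as np

# A smooth 14-bit ramp standing in for one RAW frame.
raw = np.linspace(0, 2**14 - 1, 1080 * 1920).astype(np.uint16).reshape(1080, 1920)

packed = zlib.compress(raw.tobytes())                 # lossless compression
restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint16).reshape(raw.shape)

assert np.array_equal(raw, restored)                  # bit-for-bit identical
print(f"compressed to {len(packed) / raw.nbytes:.0%} of the original size")
```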
Debayer process: This is a huge one. I know the idea behind it and have already found some articles about things like bilinear or AMaZE. I'm more interested in the reasons to choose one process over another in MLVApp. And in DaVinci Resolve I never saw an option to choose a specific process, so do you know which debayer process the software uses? Does it depend on the camera that shot the file?
Google has good answers. MLVApp uses open-source debayers, some of which are better than others in certain aspects. DaVinci Resolve might use its own in-house debayers, and they simply didn't include more than one.
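To give a feel for why debayers differ, here is plain bilinear demosaicing (a standard textbook method, sketched with numpy/scipy; not necessarily what MLVApp or Resolve actually ship): missing colours are just averaged from neighbours, which is fast but smears fine detail, and that is exactly what fancier algorithms like AMaZE try to handle with edge-aware logic.

```python
import numpy as np
from scipy.signal import convolve2d

# Textbook bilinear demosaic for an RGGB mosaic: scatter the known samples
# into three sparse planes, then fill the gaps by averaging neighbours.
def bilinear_debayer(bayer):
    h, w = bayer.shape
    R, G, B = (np.zeros((h, w)) for _ in range(3))
    R[0::2, 0::2] = bayer[0::2, 0::2]
    G[0::2, 1::2] = bayer[0::2, 1::2]
    G[1::2, 0::2] = bayer[1::2, 0::2]
    B[1::2, 1::2] = bayer[1::2, 1::2]
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # fill green
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # fill red/blue
    return np.dstack([convolve2d(R, k_rb, mode="same"),
                      convolve2d(G, k_g,  mode="same"),
                      convolve2d(B, k_rb, mode="same")])

rgb = bilinear_debayer(np.random.default_rng(3).random((64, 64)))
print(rgb.shape)   # (64, 64, 3)
```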
SD overclocking and card spanning: Can you explain what they are?
Canon limits the SD card controller to 40 MB/s in most cameras that run ML, even though the controller supports up to ~104 MB/s. SD overclocking unlocks Canon's read/write limit; not all cameras are stable at these higher speeds.
Card spanning means using both the CF and SD card for recording; ML uses it to record high-resolution RAW video that needs roughly 120 MB/s.
The CF card can reach up to ~85 MB/s write speed on the 5D3.
The SD card can reach up to ~60 MB/s write speed on the 5D3 (maybe more).
Card spanning combines the two write speeds from the CF and SD cards to get up to ~130 MB/s in video mode.
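A back-of-the-envelope check of why spanning is needed (the preset resolution and the ~55% lossless compression ratio are assumptions on my part; only the card speeds come from above):

```python
# Uncompressed 14-bit RAW data rate for a hypothetical 3584x1320 @ 24 fps preset.
width, height, fps, bits = 3584, 1320, 24, 14
uncompressed = width * height * bits * fps / 8 / 1e6   # ~199 MB/s
lossless = uncompressed * 0.55                         # assume ~55% after lossless compression

cf_write, sd_write = 85, 60                            # MB/s, figures quoted above
print(f"need ~{lossless:.0f} MB/s, spanning gives up to ~{cf_write + sd_write} MB/s")
```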
Upscaling: The 5DM2 can record 1824x1026 RAW video. If I want to deliver a 1080p video, I need to upscale my file. How does that work at the pixel level?
Canon upscales it to 1080p while doing the H.264 encoding, but the RAW data is 1824x1026 on the 5D2; there is no native 1080p in non-crop mode, except on the 5D3.
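Upscaling itself is plain resampling: every 1080p output pixel is interpolated from its nearest input pixels, so no real detail is added, the frame just gets slightly softer. A minimal Pillow sketch (blank image standing in for a decoded RAW frame):

```python
from PIL import Image

frame = Image.new("RGB", (1824, 1026))                  # stand-in for a decoded frame
delivered = frame.resize((1920, 1080), Image.LANCZOS)   # interpolated, no new detail
print(delivered.size)                                   # (1920, 1080)
```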
Centered crop mode: What is the meaning of "centered"? Does it mean other crop modes aren't centered?
I think all presets are centered, but this one won't modify the x5 resolution, which is 3584x1320 on the 5D3; using centered crop mode will center the recording area of the sensor for you if it isn't centered already.
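"Centered" just means the crop window sits symmetrically on the sensor; the offsets are half the leftover area on each axis (the 5760x3840 active-area figure for the 5D3 is my assumption):

```python
# Offsets that center a 3584x1320 crop window on an assumed 5760x3840 sensor area.
full_w, full_h = 5760, 3840
crop_w, crop_h = 3584, 1320
x_off = (full_w - crop_w) // 2    # 1088
y_off = (full_h - crop_h) // 2    # 1260
print(x_off, y_off)
```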
These are quick answers; they may clear up some things for you.