Phase detect pixels and dual pixels

Started by alliumnsk, January 05, 2015, 12:23:02 PM

alliumnsk

Is it possible to retrieve values from the phase-detect pixels? I mean for use in image formation rather than focusing.
Has someone actually implemented it? (e.g. for the 70D it could produce three RAWs: left, right, and both, if I understand correctly)

DigitalVeil

I don't think it really works that way. It's not a 40MP sensor. Each pixel has two photodiodes that somehow split the incoming signal into two viewing angles and tell the lens to move until both angles align. At least that's how it sounds to me; Canon hasn't disclosed much about how exactly it works.
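A minimal sketch of that idea, under the hypothetical assumption that the two photodiode values of each pixel could be read out separately; the arrays here are made-up stand-ins for real readouts:

```python
# Minimal sketch, not Canon's actual pipeline: if the two photodiodes
# of each dual pixel could be read separately, the normal pixel value
# would be their sum, and their difference would carry a directional
# (defocus) cue. The arrays below are hypothetical stand-in readouts.
import numpy as np

rng = np.random.default_rng(0)

left = rng.uniform(0, 2048, size=(4, 4))    # "left-looking" photodiodes
right = rng.uniform(0, 2048, size=(4, 4))   # "right-looking" photodiodes

full = left + right            # what the camera records as one pixel
disparity_cue = left - right   # nonzero where the two views disagree

print(full.round(1))
print(disparity_cue.round(1))
```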

alliumnsk

A single pixel cannot tell where to move the lens; it is necessary to retrieve the "left" and "right" images with a large enough pixel count and compute the cross-correlation between them. So there must be a way to read at least a small portion of the sensor as left and right, maybe sequentially.
...
What about the pixel-masked OSPDAF, e.g. in the 650D? The masked AF pixels look just like regular ones except for their optical properties.
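A minimal sketch of the cross-correlation step described above, using synthetic 1-D profiles in place of real phase-detect sub-images:

```python
# Estimate the shift between a "left" and a "right" intensity profile
# by locating the peak of their cross-correlation. The profiles are
# synthetic; in a camera they would come from the two PDAF sub-images.
import numpy as np

x = np.arange(64)
scene = np.exp(-0.5 * ((x - 30) / 3.0) ** 2)   # a blurred 1-D feature

true_shift = 4                 # pretend defocus shifted one view by 4 samples
left = scene
right = np.roll(scene, true_shift)

corr = np.correlate(left, right, mode="full")
lag = np.argmax(corr) - (len(left) - 1)        # lag at the correlation peak

print("estimated shift:", -lag)                # -> 4
```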

dmilligan

"Left" and "Right" have to do with the direction the light comes from, not where it falls spatially on the sensor. It's not like every pixel is just divided in half spatially. You have two sensors collecting light for the same 'space' or pixel area, just collecting it from different directions. In fact, if you collected light for different spatial areas, there's no way you could use that information for phase detection (which is why you couldn't just use a normal sensor for this, but you need this special technology).

Think about how phase detection works: you measure light coming in from two different directions with two different sensors and make sure both sensors show the same thing. If they do, you are in focus. (See this example: http://graphics.stanford.edu/courses/cs178/applets/autofocusPD.html)

Now let's run that backwards: if you are in focus, the two separate phase-detect sensors will be showing the same thing => there's no extra spatial information you could gather from that (which is your intention, right?). The 'left' and the 'right' should have exactly the same image.
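A toy model of that argument, under the simplifying assumption that defocus just shifts the two half-aperture views in opposite directions:

```python
# Toy check: in this simplified model, defocus shifts the "left" and
# "right" views apart; at zero defocus the two views are identical,
# so they carry no extra spatial information.
import numpy as np

x = np.arange(128)
scene = (np.sin(x / 5.0) > 0).astype(float)   # a 1-D bar pattern

def pdaf_views(defocus):
    # Simplifying assumption: defocus rigidly shifts each view.
    left = np.roll(scene, -defocus)
    right = np.roll(scene, defocus)
    return left, right

for defocus in (0, 3):
    left, right = pdaf_views(defocus)
    print(defocus, np.array_equal(left, right))
# 0 -> True (identical views), 3 -> False (shifted views)
```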

alliumnsk

I was asking about on-sensor PDAF in the 70D and the like, and your lecturing me on how AF through the reflex mirror works is simply impolite. In dual pixel AF the pixel is indeed divided.
Again, I am asking not how AF works, but whether there is a way to retrieve values from the AF pixels on the 650D, where they are pixel-masked, or on the dual-pixel 70D.

dmilligan

It's the same principle.

Feel free to refute this statement:

Quote from: dmilligan on January 07, 2015, 01:32:39 PM
If you are in focus, the two separate phase-detect sensors will be showing the same thing => there's no extra spatial information you could gather from that (which is your intention, right?).

alliumnsk

Straw man.
1. It is not my intention.
2. If you defocus the image so that the "left" and "right" subpixels show the same thing shifted by half of a large pixel, you double the spatial resolution at the expense of some sharpness, so after proper sharpening you get 2x resolution (a sketch of this interleaving follows below).
My intent is:
1. to see if it is possible to extract 3D information from the scene
2. to measure the angular response of the PDAF pixels, just out of curiosity
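A sketch of the interleaving idea from point 2 above, under the idealized assumption that the two sub-images really sample the scene half a (full) pixel apart, which real dual-pixel data need not satisfy:

```python
# Idealized half-pixel interleaving: if "left" and "right" sample the
# same scene on grids offset by half a full pixel, interleaving them
# doubles the sample count along that axis.
import numpy as np

fine = np.sin(np.linspace(0, 8 * np.pi, 128))   # "ground truth" scene

# Simulated half-pixel-offset sampling of the fine grid.
left = fine[0::2]    # one 64-sample view
right = fine[1::2]   # the same grid shifted by half a (coarse) pixel

# Interleave the two views to recover the 128-sample grid.
merged = np.empty(fine.size)
merged[0::2] = left
merged[1::2] = right

print(np.allclose(merged, fine))   # -> True under these assumptions
```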

a1ex

AF points on the 650D are very easy to read from a LiveView RAW stream (or from a raw video). You just have to know their positions (hint: PinkDotRemover).

To my knowledge, nobody has attempted to understand the meaning of those pixels. You can do this fairly easily by taking the source code of some RAW converter and creating an image only from the focus dots (very low-res, but it could be interesting to watch).
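A rough sketch of that suggestion; the dot layout below is a made-up regular grid (the real 650D positions are irregular and would come from something like PinkDotRemover's dot maps), and a random array stands in for the raw frame:

```python
# Gather only the focus-dot pixels from a raw frame into a small
# low-resolution image. Both the frame and the dot coordinates here
# are hypothetical stand-ins.
import numpy as np

raw = np.random.default_rng(1).integers(0, 16384, size=(1200, 1800))

# Hypothetical dot layout: one AF pixel every 16 rows / 24 columns
# inside a central window; real layouts are model-specific.
rows = np.arange(400, 800, 16)
cols = np.arange(600, 1200, 24)

low_res = raw[np.ix_(rows, cols)]   # the focus-dot-only image
print(low_res.shape)                # -> (25, 25)
```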

About the 70D, it's too early to ask this question. When you started the thread, ML was just at the 'hello world' stage. Think about this before calling others 'impolite'.