Utilizing Dual-Pixel Sensors for Depth Map Generation

Started by Taynkbot, October 25, 2018, 06:29:50 PM



I would like advice on using my dual-pixel (dp) 5D4 to generate a depth map. Here is a paper I found discussing what I believe to be a compatible idea (though I'm not entirely sure if Canon's dp is capable of the differential phase detection outlined here):


I would like to start by generating a depth map manually, in post, from a dp RAW image before getting more ambitious and trying to incorporate it into ML. If it works in post, I would like to try automating the process in-camera (though I admit I'm fuzzy on how computationally heavy the algorithms needed to generate a depth map are) by creating an option similar to the one that records both RAW and JPEG versions of an image, except here the JPEG would be a generated depth map.
The in-camera option will probably be too intensive for my camera to perform acceptably, but I would appreciate advice and more information.
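For what it's worth, once the two sub-images have been extracted from the dp RAW (as I understand it, Canon's DPRAW stores the composite image plus one of the two sub-images, so the other view has to be derived by subtraction), a crude disparity map can be estimated in post with plain block matching. Here is a minimal sketch in Python/NumPy, assuming `left` and `right` are already-extracted grayscale arrays of the two views; the function name and parameters are my own, not anything from an existing tool:

```python
import numpy as np

def block_match_disparity(left, right, max_shift=4, patch=8):
    """Estimate a coarse horizontal disparity map between two dual-pixel
    sub-images using sum-of-absolute-differences block matching.
    Dual-pixel baselines are tiny, so max_shift can stay small."""
    h, w = left.shape
    disp = np.zeros((h // patch, w // patch), dtype=np.float32)
    for by in range(h // patch):
        for bx in range(w // patch):
            y, x = by * patch, bx * patch
            ref = left[y:y + patch, x:x + patch]
            best_cost, best_s = np.inf, 0
            # Try every horizontal shift and keep the one whose patch
            # in the right view matches the reference patch best.
            for s in range(-max_shift, max_shift + 1):
                x2 = x + s
                if x2 < 0 or x2 + patch > w:
                    continue
                cand = right[y:y + patch, x2:x2 + patch]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_s = cost, s
            disp[by, bx] = best_s
    return disp
```

This is the brute-force version just to see whether any usable signal is there; a real implementation would use sub-pixel interpolation and some smoothing, since the dual-pixel baseline is far smaller than a normal stereo rig's.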

Do you know of any existing programs that can take two images, half a pixel apart, and combine them into a stereoscopic image? Any open-source code that might aid me would also be very welcome.
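On the stereoscopic side, since the two sub-images share the same pixel grid, one quick way to eyeball the parallax is a red-cyan anaglyph. A minimal sketch, again assuming grayscale `left`/`right` NumPy arrays of the two views (nothing here is 5D4-specific, and the function name is my own):

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine two grayscale views into a red-cyan anaglyph:
    the left view drives the red channel, the right view drives
    green and blue (cyan)."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    rgb = np.stack([left, right, right], axis=-1)
    # Normalise to 8-bit so the result can be saved with any image library.
    rgb = 255 * (rgb - rgb.min()) / max(np.ptp(rgb), 1e-9)
    return rgb.astype(np.uint8)
```

Viewed through red-cyan glasses, any depth effect that survives the tiny dual-pixel baseline should be visible immediately, which is a cheap sanity check before writing any depth-map code.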

Finally, for the stickler sticklebacks who want to describe how inferior this kind of approach is compared to a properly generated depth map… I'm not asking for perfection. Also, I don't want to have to buy a Light L16, but I am up for donations of this sort:

Here are some similar posts that didn't explicitly discuss the possibility of exploiting dp.


Your idea seems quite interesting. I think I posted an anaglyph made from a dual-pixel photo some time ago on this forum. You just need to get your hands on some dual-pixel RAWs and start experimenting on your computer. However, I do not see any advantage (and I see many disadvantages) in doing this in-camera...