Z-Depth

Started by jsimard01, June 03, 2014, 05:45:22 AM

jsimard01

Hi, I'm using ML 1.2.3 on my 5D3 and it's very cool. Thanks, guys, for spending so much time developing this firmware.
There's a feature I would like to see in this firmware, though I don't know if it's feasible.
I'm a 3D artist, and when it comes to playing with DOF in any 3D software, we use a render pass, the Z-depth, that we composite with the original image in Photoshop or any compositing software. With that Z-depth image, we can put the focus point anywhere in the image.
It's a black-and-white image: what is near the camera is white, and the farther away something is, the darker it gets.
So, because the camera can measure distance, maybe it's possible to produce that B&W image: the camera takes a normal shot and, at the same time, the Z-depth image. Afterwards we composite the two in PS, or maybe with a plugin in LR, and voilà, you can put the focus point exactly where you want.

Here are some examples from a 3D application to show you what I mean.

http://www.vincebagna.com/share/DOF/Storm02.jpg
http://www.vincebagna.com/share/DOF/Storm01.jpg
http://www.vincebagna.com/share/DOF/Storm03.jpg
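
To make the idea concrete, here is a rough Python sketch of how such a composite could work in post, assuming you already have a deep-DOF photo and its matching Z-depth pass loaded as arrays (the function and parameter names are made up for illustration; real compositing plugins are far more sophisticated):

[code]
# Minimal sketch of depth-driven defocus compositing with a Z-depth pass.
# Assumes a deep-DOF "beauty" image and a matching depth map where
# white (1.0) = near and black (0.0) = far.
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus(image, depth, focus_depth, strength=8.0):
    """Blur each pixel in proportion to its distance from the focus plane.

    image:       HxWx3 floats in [0, 1], shot with deep DOF
    depth:       HxW floats in [0, 1], the Z-depth pass
    focus_depth: the depth value (0..1) that should stay sharp
    strength:    maximum blur sigma in pixels
    """
    # Per-pixel blur amount: zero at the chosen focus plane, growing away from it.
    radius = strength * np.abs(depth - focus_depth)

    # Cheap approximation: render a few uniformly blurred copies of the image
    # and pick, per pixel, the copy whose blur is closest to what that pixel needs.
    levels = np.linspace(0.0, strength, 6)
    stack = np.stack([gaussian_filter(image, sigma=(s, s, 0)) for s in levels])
    index = np.argmin(np.abs(levels[:, None, None] - radius[None]), axis=0)
    rows, cols = np.indices(depth.shape)
    return stack[index, rows, cols]
[/code]

Calling defocus(image, depth, focus_depth=0.7) would, for example, keep a subject at 70% of the near-to-far range sharp and blur everything else - exactly the "choose the focus point later" workflow described above.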

Braga

Can you explain in more detail how the camera would generate a Z-depth image without resorting to some kind of voodoo?
Actually, as a 3D artist you should know that you can achieve a 3D reconstruction of a scene - and its Z-depth - by taking a stereoscopic shot and processing it with "Stereo Vision" or a similar application.
Making Magic 550D
EFP/ENG Photog/Editor

ItsMeLenny

Quote from: Braga on June 03, 2014, 06:44:16 AM
Can you explain in more detail how the camera would generate a Z-depth image without resorting to some kind of voodoo?
Actually, as a 3D artist you should know that you can achieve a 3D reconstruction of a scene - and its Z-depth - by taking a stereoscopic shot and processing it with "Stereo Vision" or a similar application.

When will the voodoos be implemented?

dmilligan

Well, theoretically you could sort of do something like this with the 70D's dual-pixel sensor. Basically, you could use the phase-detect capability of all the dual sensels to produce a depth map (assuming, of course, that you can read the phase of all of them, record it, and still record regular video at the same time). This would only be possible on the 70D and any newer Canon cameras that have this dual-pixel technology on every pixel. On the 5D3 and others it would require "voodoo".
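
To illustrate the principle (purely hypothetically - nothing in ML can actually read per-sensel phase data): each dual sensel gives you a "left" and a "right" sub-aperture view, and the local shift between the two is proportional to defocus. A naive block-matching sketch in Python:

[code]
# Hypothetical sketch of depth from dual-pixel phase data. Assumes you could
# somehow read the left- and right-half sub-images of every dual sensel;
# this only shows the principle, not anything ML can do today.
import numpy as np

def dual_pixel_disparity(left, right, max_shift=8, window=16):
    """Coarse per-block horizontal disparity between the two sub-aperture
    images; disparity is proportional to defocus, which (given the lens
    parameters) maps to relative depth."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    h, w = left.shape
    disp = np.zeros((h // window, w // window))
    for by in range(h // window):
        for bx in range(w // window):
            ref = left[by*window:(by+1)*window, bx*window:(bx+1)*window]
            best, best_err = 0, np.inf
            for s in range(-max_shift, max_shift + 1):
                x0 = bx * window + s
                if x0 < 0 or x0 + window > w:
                    continue
                cand = right[by*window:(by+1)*window, x0:x0+window]
                err = np.mean((ref - cand) ** 2)  # sum-of-squared-differences match
                if err < best_err:
                    best, best_err = s, err
            disp[by, bx] = best
    return disp
[/code]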

jsimard01

Hi, thanks for the reply. I'm French, so I'll do my best to explain this clearly.

I'm not talking about 3D stereo photo or video here, or voodoo ;)

In 3D, the Z-depth is used to choose where we want the focus point, among other things, but here it's for the DOF.

My thought was to use the ability of any camera to measure the distance between the subject and the sensor (the autofocus) and then produce a grayscale base image.

Do you think that, in one shot, the camera can figure out the distance of all the objects in the scene? It doesn't need to be very accurate, just an approximate gray gradient representing the scene.

The shot would have to be taken with a very small aperture to avoid any shallow DOF in the photo, which can be a problem sometimes.

Maybe it's not feasible, but if it is possible, I think it could help a lot of photographers in post-processing and open up a variety of possibilities.

I will try to make a real-life example and show you the process in Photoshop.

Thanks for your interest!!


dmilligan

Quote from: jsimard01 on June 03, 2014, 02:25:56 PM
Do you think that, in one shot, the camera can figure out the distance of all the objects in the scene?
No, only on the 70D, and only theoretically - read my post. In video mode the traditional phase-detect AF sensors cannot be used (the mirror is up, blocking them), and those sensors only exist at specific points anyway (they cannot determine distances for the entire scene, just those points). Most cameras use contrast-detect autofocus in video or LiveView mode. Contrast-detect AF cannot determine distances; phase-detect can. The 70D essentially has phase-detect sensors on every single pixel, making something like this at least theoretically possible (emphasis on theoretically).

I suggest you read more about how autofocus actually works: http://en.wikipedia.org/wiki/Autofocus
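
To see why contrast-detect AF can't give you distances: all it computes is a sharpness score like the one below, and the camera hill-climbs the focus motor until the score peaks. The score itself never encodes how far away anything is. (An illustrative sketch only, not Canon's or ML's actual code.)

[code]
# Sketch of a contrast-detect AF focus measure: the camera nudges the lens,
# recomputes this score over the AF window, and stops at the maximum.
# Nothing here says how far away the subject is.
import numpy as np

def contrast_score(gray):
    """Sum of squared gradients over the AF window: higher = sharper."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.sum(gx**2 + gy**2))
[/code]

Phase-detect AF, by contrast, measures the signed offset between two sub-aperture views, which tells the camera both the direction and (roughly) the amount to move focus - and that offset is what a depth map could in principle be built from.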


Braga

Quote from: ItsMeLenny on June 03, 2014, 01:23:24 PM
When will the voodoos be implemented?

Perhaps in some dark basement somewhere out there... somebody is developing the 'Voodoo Lantern' fork.
 
Making Magic 550D
EFP/ENG Photog/Editor

Braga

Quote from: jsimard01 on June 03, 2014, 02:25:56 PM

Maybe it's not feasible, but if it is possible, I think it could help a lot of photographers in post-processing and open up a variety of possibilities.

You can already do it:

http://sourceforge.net/projects/reconststereo/

https://www.lytro.com/
Making Magic 550D
EFP/ENG Photog/Editor

poromaa

Quote from: dmilligan on June 03, 2014, 03:23:58 PM
The 70D essentially has phase-detect sensors on every single pixel, making something like this at least theoretically possible (emphasis on theoretically).

Cool. If this is to be solved, I think a recording with a bit of movement is an easier approach: by tracking the footage you can re-create the depth. There are several ways of doing this, but this page explains one:
http://research.microsoft.com/en-us/um/cambridge/projects/visionimagevideoediting/unwrap/
However, that's for post-production.

Another way would be to use focus stacking and take x number of images while the focus goes from min to max. Take all the pics and analyse depth with an edge-detection algorithm (preferably on the computer) - see the sketch below. Focus stacking is already implemented.

A third way (theoretically) could be to use the already-implemented focus-assist algorithm and build up rough 3D data from the edge detection during a short recording where the focus is pulled through the whole z-range (combined with the lens's reported focus distance, this would be even easier). Maybe one could do this with the scripting engine?
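
A sketch of the second idea (it covers the third as well, if the frames come from a focus pull during a recording): for each frame, measure local sharpness with an edge detector, then take, per pixel, the index of the sharpest frame as its depth. This is the classic "depth from focus" approach; the code below is an illustrative post-processing sketch, not anything ML does in camera.

[code]
# Depth from a focus stack: frames ordered near-focus to far-focus.
# Each pixel gets the index of the frame where it was sharpest; with the
# real lens focus distance per frame you could calibrate indices to metres.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus_stack(frames):
    """frames: list of HxW grayscale images. Returns an HxW integer map of
    the stack index at which each pixel is in focus."""
    sharpness = []
    for f in frames:
        # Local sharpness: locally averaged absolute Laplacian (edge detector).
        sharpness.append(uniform_filter(np.abs(laplace(f.astype(float))), size=9))
    # The sharpest frame per pixel is the one focused at that pixel's depth.
    return np.argmax(np.stack(sharpness), axis=0)
[/code]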


LucianParaian

Great idea. Google has already implemented something similar in their mobile app: https://play.google.com/store/apps/details?id=com.google.android.GoogleCamera

There is a major problem though.
Say you somehow manage to obtain a Z-depth map of what's in front of the camera.
In order for selective focus to be successfully applied, you need infinite (or near-infinite) depth of field in the captured photos - which is what you get in the rendered image out of the 3D software.
The problem with Canon DSLRs is that you can achieve very deep depth of field in only a few conditions, because of the size of the sensor.

The same problem applies to the slightly-moved-camera method. (It works for phones, though - huge depth of field.)


poromaa

Quote from: LucianParaian on June 04, 2014, 02:57:57 PM
Say you somehow manage to obtain a Z-depth map of what's in front of the camera.
In order for selective focus to be successfully applied, you need infinite (or near-infinite) depth of field in the captured photos - which is what you get in the rendered image out of the 3D software.
The problem with Canon DSLRs is that you can achieve very deep depth of field in only a few conditions, because of the size of the sensor.

First of all, a DSLR can obtain very deep DOF if a small aperture is used (f/22).
In my theoretical example, however, you don't want deep DOF. Instead you want to utilise the shallow depth to isolate the object's shape at its current distance. Then, by sliding the focus from near to far, you would "travel" the whole z-range, recording the sharp areas with an edge-detection algorithm as you go. This would build up the "3D object" that could later be mapped onto a picture shot from the exact same point, but with full DOF (f/22).
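
For reference, a quick hyperfocal calculation shows how deep f/22 DOF gets on full frame (standard hyperfocal formula; the 24 mm focal length and 0.03 mm circle of confusion are just example values):

[code]
H = f^2 / (N * c) + f = 24^2 / (22 * 0.03) + 24 ≈ 897 mm ≈ 0.9 m
[/code]

Focused at 0.9 m, everything from about 0.45 m to infinity is acceptably sharp.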

In Magic Lantern: turn on Focus Assist and pull the focus from near to far:
http://magiclantern.wikia.com/wiki/Focus_Assist?file=Focus_assist_experiment

You will see dots. Use these dots to build up a depth map for the whole focus range...
Then again, all this could be done in post anyway. :)

LucianParaian

My main concern, and my reply, related to the idea raised by jsimard01 in the first post.
I'm a 3D artist myself, so I know what he is talking about.
All the other posts address different solutions around the raised issue, not the issue itself.

So yeah, I think the 'voodoo' solution was the nearest :D

@poromaa: Of course you can get deep DOF with f/22, but can you use that anytime, anywhere?

jsimard01

Nice to see interest in this subject; maybe one day it will be possible. dmilligan has a good explanation of why it's not possible for now.

And I'm not talking about a 3D reconstruction of a scene, only doing this with one shot.

Thanks

CoresNZ

When I read the first post on this thread, I actually laughed...

I too am a 3D artist; I've been working in games and VFX for the last 15 years.
I have spent a great deal of time trying to perfect the process of adding DOF to a single beauty-pass render in conjunction with a Z-depth pass. Even in CGI, this is not trivial to do well.

Now let's say that, by some voodoo magic, you were able to obtain a Z-depth image for each frame of video you shoot on your humble DSLR.

There are many caveats to doing DOF as a post process using plugins like Frischluft Lenscare.
The first major issue is that you cannot achieve an accurate, natural background blur, because your image will not have the information behind the foreground elements that occlude it.

i.e. you cannot blur what you can't see.
This creates an unnatural halo effect around objects, where the missing information has to be interpolated by the blurring software.
The software has to 'make up' what it thinks is behind the subject.
This is why most 3D rendering is done in separate passes, with the DOF applied to each layer separately. This is how professional compositors do it.
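
A tiny sketch of why the occlusion problem is unavoidable when you only have one layer: once the foreground is masked out, the background has holes that must be invented before it can be blurred. (A hypothetical two-hard-layer case in Python; real plugins use many layers and much smarter inpainting.)

[code]
# Demonstrates the halo source in post-process DOF: blurring a background
# that has holes behind the subject forces the software to "make up" pixels.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_background(image, depth, fg_threshold=0.5, sigma=6.0):
    """image: HxWx3 floats; depth: HxW floats (1.0 = near). Blurs only the
    background, filling the hole behind the foreground by interpolation."""
    fg = depth > fg_threshold                       # hard foreground mask
    bg = image * ~fg[..., None]                     # background with a hole in it
    weight = (~fg).astype(float)                    # 1 where background data exists
    # Normalized convolution: blur data and validity weight together, then
    # divide - this invents plausible colour behind the subject (the halo).
    blurred = gaussian_filter(bg, sigma=(sigma, sigma, 0))
    norm = gaussian_filter(weight, sigma=sigma)[..., None]
    filled = blurred / np.maximum(norm, 1e-6)
    # Composite the sharp foreground over the partly invented background.
    return np.where(fg[..., None], image, filled)
[/code]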

There are many other things to take into account to avoid ugly edge artefacts and create a natural-looking image.
One of the key things you have overlooked is motion blur.

If you were to have any amount of motion blur in your "voodoo" Z pass, it would no longer serve as an accurate greyscale depth map.
Yes, you could shoot your footage at a very high shutter speed and eliminate the motion blur in camera, then add it back in post using optical-flow software...
but unless you are shooting at a high frame rate with few fast-moving subjects, that is not a particularly reliable method either, usually resulting in temporal and spatial warping artefacts.

Not to mention that you are already talking about shooting at a small aperture to eliminate shallow DOF in the 'beauty pass'.
Small aperture + high frame rate + high shutter speed = no damn light getting into the camera. You'll have to be shooting in broad daylight.

I am just going to suggest that, rather than asking for features that encourage lazy, post-production-reliant photography, you pick your aperture and focus your camera correctly in the first place. It's not that hard, and isn't that part of the art of photography?

But maybe there is hope for you yet!
Adobe has been experimenting with crazy new multi-lens cameras that allow for re-focusing, among other things, since 2007.
Check out this impressive demo - https://www.youtube.com/watch?v=xu31XWUxSkA

Canon even filed a patent for a multi-lens camera a few weeks ago -
http://www.canonrumors.com/2014/06/patent-multi-lens-multi-view-camera/

So perhaps this kind of thing is close on the horizon... I personally hope not ;)

ItsMeLenny

There's also the approach of using a lens filter made of three coloured circles: red, green and blue.
Putting this on the front shifts the view of each colour channel, and 3D can be reconstructed from that.
However, you would then need some sort of processing to fix the chromatic aberration.

It looks essentially like a two-colour anaglyph aperture filter, but with three circles.

That two-colour version can also be converted into 3D, but using only two colours gives you a two-strip effect;
it is more for viewing with green-magenta glasses.
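
The recovery step is the same matching idea as the dual-pixel sketch earlier in the thread, only between colour channels. A minimal illustration of measuring the channel shift (a sketch that assumes a purely horizontal offset between the red and blue apertures; real systems also have to undo the colour fringing afterwards):

[code]
# Estimate the global horizontal shift between the red and blue channels via
# 1-D phase correlation of their row averages. Applied per patch instead of
# globally, the same measurement yields a rough depth map.
import numpy as np

def channel_shift(img):
    """img: HxWx3 float RGB frame shot through the tri-colour filter."""
    r = img[..., 0].mean(axis=0)          # collapse rows -> 1-D signals
    b = img[..., 2].mean(axis=0)
    spec = np.fft.fft(r) * np.conj(np.fft.fft(b))
    corr = np.fft.ifft(spec / np.maximum(np.abs(spec), 1e-9)).real
    shift = int(np.argmax(corr))          # peak position = channel offset
    # Unwrap: FFT indices above N/2 correspond to negative shifts.
    return shift if shift <= len(r) // 2 else shift - len(r)
[/code]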