How to do depth of field in Light tracing?

Practical and theoretical implementation discussion.
Posts: 138
Joined: Sun May 27, 2012 4:42 pm

How to do depth of field in Light tracing?

Post by shiqiu1105 » Fri Dec 06, 2013 3:47 am

I am trying to implement depth of field in my bidirectional path tracer.
I know how to do that when tracing rays from the camera.

But when connecting a light path directly to the camera, as in light tracing, points are projected straight back to pixel positions.
So I tried to perturb the projected pixel coordinates based on the camera lens radius and focus distance, but the images I get still have artifacts on the edges of objects.

Anybody done this before?

Posts: 89
Joined: Thu Apr 11, 2013 5:15 pm

Re: How to do depth of field in Light tracing?

Post by friedlinguini » Fri Dec 06, 2013 5:22 am

I haven't tried it myself, but I imagine you pick a random point on your lens, connect it to the surface point, and then calculate pixel coordinates based on where the resulting segment crosses the focal plane.
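For concreteness, here is one way that focal-plane intersection could look as a NumPy sketch. The function name and the conventions (camera at the origin of camera space looking down +z, lens sample on the z = 0 aperture plane, symmetric frustum) are my assumptions, not anything stated in the thread:

```python
import numpy as np

def pixel_from_lens_sample(p_lens, p_surf, focus_dist, fov_y, width, height):
    # Both points are in camera space: camera at the origin looking down +z,
    # lens sample p_lens on the z = 0 aperture plane, p_surf the light-path
    # vertex being connected. Intersect the connection segment with the
    # plane of perfect focus, then project that point as a pinhole would.
    d = p_surf - p_lens
    if d[2] <= 0.0:
        return None                          # vertex is behind the lens
    t = (focus_dist - p_lens[2]) / d[2]      # segment crosses z = focus_dist
    focal_pt = p_lens + t * d
    tan_half = np.tan(0.5 * fov_y)
    aspect = width / height
    ndc_x = focal_pt[0] / (focal_pt[2] * tan_half * aspect)
    ndc_y = focal_pt[1] / (focal_pt[2] * tan_half)
    if abs(ndc_x) > 1.0 or abs(ndc_y) > 1.0:
        return None                          # lands outside the image
    return (ndc_x * 0.5 + 0.5) * width, (0.5 - ndc_y * 0.5) * height
```

Note that for a vertex lying exactly on the focal plane, every lens sample maps to the same pixel, which is why in-focus geometry stays sharp while out-of-focus points spread across pixels.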

Posts: 42
Joined: Mon Jul 23, 2012 11:05 am

Re: How to do depth of field in Light tracing?

Post by Zelcious » Fri Dec 06, 2013 1:20 pm

To account for implicit paths you also need to intersect with the lens geometry.
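A minimal sketch of what that lens intersection might look like, assuming (my convention, not from the post) that the lens is modeled as a disk of radius `lens_radius` on the camera-space z = 0 plane:

```python
import numpy as np

def intersect_lens(ray_o, ray_d, lens_radius):
    # Intersect a camera-space ray with the lens aperture, modeled as a
    # disk of radius lens_radius on the z = 0 plane, centered at the origin.
    # Returns the hit point, or None if the ray misses the aperture.
    if abs(ray_d[2]) < 1e-12:
        return None                      # ray parallel to the lens plane
    t = -ray_o[2] / ray_d[2]
    if t <= 0.0:
        return None                      # lens is behind the ray origin
    hit = ray_o + t * ray_d
    if hit[0]**2 + hit[1]**2 > lens_radius**2:
        return None                      # outside the aperture disk
    return hit
```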

Posts: 75
Joined: Sun Aug 19, 2012 3:24 pm

Re: How to do depth of field in Light tracing?

Post by shocker_0x15 » Tue Dec 10, 2013 1:59 am

I think you do the following process in, for example, path tracing:
1. Select a pixel position on the image sensor (which can be thought of as a virtual plane). This determines a "raw" direction.
2. Select a lens position on the lens aperture and perturb the raw direction; this gives the corresponding ray.
3. Transform the ray into world coordinates.
You can see this process in the PBRT book.
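The three steps above can be sketched like this (a NumPy illustration; the function name and conventions are mine and only loosely follow PBRT — camera at the origin looking down +z, `u1`/`u2` uniform samples in [0, 1)):

```python
import numpy as np

def generate_camera_ray(px, py, width, height, fov_y, lens_radius,
                        focus_dist, u1, u2, cam_to_world):
    # 1. Pixel position -> "raw" direction through the lens center.
    tan_half = np.tan(0.5 * fov_y)
    aspect = width / height
    ndc_x = (2.0 * px / width - 1.0) * tan_half * aspect
    ndc_y = (1.0 - 2.0 * py / height) * tan_half
    raw = np.array([ndc_x, ndc_y, 1.0])
    # 2. Sample the aperture disk and perturb the direction: the perturbed
    #    ray must still pass through the focal point of the raw ray.
    r = lens_radius * np.sqrt(u1)
    phi = 2.0 * np.pi * u2
    p_lens = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])
    focal_pt = raw * focus_dist          # raw ray meets the focal plane here
    d = focal_pt - p_lens
    d /= np.linalg.norm(d)
    # 3. Transform origin and direction into world space (4x4 view matrix).
    o_w = (cam_to_world @ np.append(p_lens, 1.0))[:3]
    d_w = (cam_to_world @ np.append(d, 0.0))[:3]
    return o_w, d_w
```

With `lens_radius = 0` this degenerates to an ordinary pinhole camera, which is a handy sanity check.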

In light tracing (using next event estimation), you just do the opposite process:
1. Determine the world-space ray that connects a position on the lens to the world.
2. Transform the ray into camera coordinates using the inverse of the matrix from step 3 above.
3. Recover the raw vector from the local ray; this vector corresponds to the pixel position.
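Putting the inverse steps together as one sketch (my illustration, not shocker_0x15's actual code; it assumes the camera sits at the origin of camera space looking down +z, with the lens on the z = 0 plane):

```python
import numpy as np

def pixel_from_light_vertex(p_lens_w, p_surf_w, world_to_cam,
                            focus_dist, fov_y, width, height):
    # Step 1: the world-space connection runs from the lens sample to the
    # light-path vertex. Step 2: bring both endpoints into camera space
    # with the inverse view matrix. Step 3: recover the "raw" direction by
    # intersecting the focal plane, which yields the pixel position.
    p_lens = (world_to_cam @ np.append(p_lens_w, 1.0))[:3]
    p_surf = (world_to_cam @ np.append(p_surf_w, 1.0))[:3]
    d = p_surf - p_lens
    if d[2] <= 0.0:
        return None                      # vertex behind the lens plane
    t = (focus_dist - p_lens[2]) / d[2]
    raw = (p_lens + t * d) / focus_dist  # raw direction, z normalized to 1
    tan_half = np.tan(0.5 * fov_y)
    aspect = width / height
    px = (raw[0] / (tan_half * aspect) + 1.0) * 0.5 * width
    py = (1.0 - raw[1] / tan_half) * 0.5 * height
    if not (0.0 <= px < width and 0.0 <= py < height):
        return None                      # connection misses the image
    return px, py
```

A `None` result means the contribution is simply discarded, since the connection does not land on the sensor.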

Sorry for my poor English ;)
