MLT method for light image

Practical and theoretical implementation discussion.
spectral
Posts: 382
Joined: Wed Nov 30, 2011 2:27 pm

MLT method for light image

Post by spectral » Tue Feb 19, 2013 9:44 am

Hi everybody,

As far as I remember, all the MCMC methods (MLT, ERPT, PMC, ...) use the value (mainly the luminance) of the resulting pixel to prepare the data for the next step.

But in BDPT, the light image is produced by paths that often, if not always, target a different pixel.

So, if I want to render caustics, for example (which come mainly through the light image), what can I do?

Thanks
Spectral
OMPF 2 global moderator

Thammuz
Posts: 22
Joined: Mon Nov 28, 2011 8:36 am
Location: Stockholm

Re: MLT method for light image

Post by Thammuz » Tue Feb 19, 2013 10:02 am

Look at the Kelemen paper, IIRC they mention something about this.

spectral
Posts: 382
Joined: Wed Nov 30, 2011 2:27 pm

Re: MLT method for light image

Post by spectral » Tue Feb 19, 2013 10:46 am

Thanks,

I have already implemented Kelemen-style MLT... and I don't remember anything like this, except for this:
Bi-directional path tracing produces many paths from a single primary sample (for example, in Figure 3 the eye path connects three points and the light path connects two points, which can be combined in six different ways). Thus the definition of the scalar contribution function for such a family of paths requires additional considerations. One alternative would be the sum of the luminances of the elementary paths. However, this would prefer long paths, since longer eye and light paths allow more combinations to be made, which would increase the scalar contribution function and consequently the selection probability. Working with long and complex paths is not efficient computationally, thus a better alternative for the scalar contribution function is the maximum of the luminances of the elementary paths.

The proposed method is efficient if a small mutation in the primary sample space corresponds to a small mutation in the path space. However, when Russian roulette changes the length of the eye path, then the coordinates of the eye path might be assigned to the light path or vice versa, which can cause large changes in the path space. To solve this problem, the coordinates assigned to eye and light paths should be separated. For example, coordinates of odd and even indices can be used separately to define the eye path and the light path, respectively (Figure 3).
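
If I read that correctly, the scalar contribution of one primary sample would be computed roughly like this (a quick sketch; all names are mine):

#include <algorithm>
#include <vector>

// One elementary (s,t) path generated from the primary sample,
// together with the pixel it lands on (eye image or light image).
struct PathContribution {
    float luminance;
    int pixelX, pixelY;
};

// Scalar contribution function per the paper's suggestion: the maximum
// luminance over all elementary paths, instead of the sum (which would
// bias the selection towards long paths).
float scalarContribution(const std::vector<PathContribution> &paths) {
    float I = 0.f;
    for (const PathContribution &c : paths)
        I = std::max(I, c.luminance);
    return I;
}
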
BTW, several pixels in the light image can be generated by the same primary sample, so it is difficult to find an optimal set of 'random' values that produces several 'independent' good pixels on the light image while also keeping a good pixel on the camera image.
Spectral
OMPF 2 global moderator

Dade
Posts: 206
Joined: Fri Dec 02, 2011 8:00 am

Re: MLT method for light image

Post by Dade » Tue Feb 19, 2013 2:52 pm

The LuxRender Metropolis sampler uses the sum of all the radiance contributed by a sample (i.e. light path connections, eye path connections, direct light sampling, etc.).
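
In code it amounts to something like this (a sketch, names are mine):

#include <vector>

// The sample's Metropolis weight is the summed luminance of everything
// it contributed, no matter whether the contribution landed on the eye
// image or was splatted to the light image.
float sampleImportance(const std::vector<float> &contributionLuminances) {
    float I = 0.f;
    for (float y : contributionLuminances)
        I += y;
    return I;
}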

spectral
Posts: 382
Joined: Wed Nov 30, 2011 2:27 pm

Re: MLT method for light image

Post by spectral » Tue Feb 19, 2013 3:15 pm

I have thought of this too... in the end it is just like the camera image ;-)

Thanks for confirming...
Spectral
OMPF 2 global moderator

jun
Posts: 4
Joined: Wed Jun 05, 2013 7:48 am

Re: MLT method for light image

Post by jun » Thu Jul 18, 2013 8:37 am

The light image is VERY effective for caustics; you can refer to Veach's thesis, p. 313. Mitsuba has implemented it, and I implemented it with a method similar to Mitsuba's.

You need to allocate another image (the light image) to store the random contributions (those not landing on the current pixel). Recall that when you connect a light path and an eye path in BDPT, you compute the contribution as:

L = eye_path.alpha * light_path.alpha * eye_vert.bsdf * light_vert.bsdf * g
where eye_vert.bsdf and light_vert.bsdf are the BSDF values at the connecting endpoints of the two subpaths, and g is the geometry term between them.
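
For reference, the geometry term g can be sketched like this (my own names; the visibility/shadow-ray test that also belongs here is omitted):

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct Vertex { Vec3 p, n; };   // position and (shading) normal

// Geometry term between the two connection endpoints.
float geometryTerm(const Vertex &eyeVert, const Vertex &lightVert) {
    Vec3 d = { lightVert.p.x - eyeVert.p.x,
               lightVert.p.y - eyeVert.p.y,
               lightVert.p.z - eyeVert.p.z };
    float dist2   = dot(d, d);
    float invDist = 1.f / std::sqrt(dist2);
    Vec3 w = { d.x * invDist, d.y * invDist, d.z * invDist };
    return std::fabs(dot(eyeVert.n, w)) * std::fabs(dot(lightVert.n, w)) / dist2;
}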

In the t=1 case (the eye path has only one vertex, the camera), eye_vert.bsdf is defined as the directional component of We, namely We(1). The computation of We(1) is the trickiest part. Like Mitsuba, I set We(0) = Spectrum(1.f), and We(1) is computed by converting (1.f / film_area) to the solid angle measure; you can find an explanation of this computation in Mitsuba's source code.
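
My understanding of that conversion, as a sketch (names are mine; this follows the usual perspective-camera derivation with the virtual image plane placed at distance 1 in front of the pinhole):

#include <cmath>

// Directional part We(1) for a perspective camera. filmArea is the area
// of the visible film rectangle on the virtual image plane at distance 1,
// cosTheta the cosine between the camera's forward axis and the ray
// direction. Converting the 1/filmArea film density to the solid angle
// measure introduces the cos^4 falloff.
float weDirectional(float filmArea, float cosTheta) {
    if (cosTheta <= 0.f)
        return 0.f;                      // direction behind the camera
    float cos2 = cosTheta * cosTheta;
    return 1.f / (filmArea * cos2 * cos2);
}

// For horizontal/vertical fields of view fovX and fovY:
// filmArea = 4.f * std::tan(0.5f * fovX) * std::tan(0.5f * fovY);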

After splatting those contributions to the light image, you scale the light image by 1.f / spp (just like you do with the eye image) and add it to the eye image.
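
The combine step is then just (a sketch, my names; one float per pixel for simplicity):

#include <cstddef>
#include <vector>

// Average the splatted light image by the per-pixel sample count and
// add it to the eye image.
void combineImages(std::vector<float> &eyeImage,
                   const std::vector<float> &lightImage, int spp) {
    const float scale = 1.f / float(spp);
    for (std::size_t i = 0; i < eyeImage.size(); ++i)
        eyeImage[i] += lightImage[i] * scale;
}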

You also need to handle the t=0 case (the light path hits the lens directly) and splat its contribution onto the light image. But I didn't handle it; I think that contribution would generally be very small, because the lens is very small (and it is zero for a pinhole camera).
