I have been reading the MLT implementation in pbrt and trying to understand the math as much as I can.
One thing that confuses me is that, in the startup (bootstrap) phase, the initial sample X0 is sampled in the following way, where it says we need to weight all contributions with w.
However, when the contribution is actually added, the w is ignored. Is this a mistake?
I derived it myself, and it seems that w should equal b. Should we instead multiply all the contributions by b?
Also, Kelemen's paper seems to use a different weighting scheme for large-step and rejected samples.
Compared to the approach in pbrt, which one is better?