shiqiu1105 wrote:Anyway, I guess the take home msg for volumetric caustics is I need to at least do a Metropolis sampled BDPT right?

First, try basic BDPT; it may work well for your scenes. You can then add Metropolis sampling on top.

shiqiu1105 wrote:And, besides sampling methods, I am also quite unsure about how to implement multiple scattering with BDPT or PT either. Any recommended tutorial?

I haven't seen a tutorial, but there was some smallpt-like code around that included participating media. In a nutshell, you do standard PT/BDPT, except that a path can also scatter inside a medium. So the vertices of a path can lie either in a medium or on a surface.

friedlinguini wrote:There's the warm-up that I mentioned...

...I think Csaba Kelemen posted some code for a newer version (on the original ompf, in fact) at one point, but with no explanation of its workings. That code still exists at http://www.hungrycat.hu/MetropolisSampler.html. It's terrifyingly short and supposedly doesn't need the warm-up phase.

I think there are some misconceptions about MLT, warm-up and bias. The problem seems to be rooted in Veach's thesis: he made MLT unbiased, but did not clearly state that in order to converge to the correct solution you need to *average over many independent chains*. His warm-up method performs simple resampling of paths to choose the initial element of the chain from a distribution that is closer to the target. He then described an algorithm that uses a single chain for the whole image. This algorithm is unbiased, but does not converge with a single chain due to the chain weighting. The ERPT paper showed that you can indeed go without a warm-up phase and instead sample many independent chains. Had Veach pointed this out more clearly in his thesis, the ERPT paper would have been much harder to get published.

There's obviously a trade-off here between adaptation (a few long chains) and stratification (many short chains). We've discussed this topic on the OMPF before. One could argue that a longer chain is better when you have very difficult paths to find, but one should keep in mind that in such scenes the warm-up phase needs to sample a much higher number of initial candidates so as to obtain a reasonably accurate scaling factor.

Edit: OK, Veach's algorithm actually samples a number of chains. But there's no fundamental requirement to do the warm-up. Interestingly, the one-chain algorithm actually corresponds to traditional Metropolis sampling, where the weighting obtained in the warm-up phase is essentially the estimate for the target PDF normalization. So if you're doing one-chain MLT, I'd think it's better to forget about the warm-up and start-up bias, and do it the classic way where you accumulate the target PDF normalization on the side. This way you'll converge to the correct solution.

Sorry for turning the thread into an MLT discussion.