
### Population Monte Carlo Sampling

Posted: **Sun Jan 05, 2014 9:23 pm**

by **raider**

Hi all!

Anybody tried "Population Monte Carlo" samplers?

How does it compare to Metropolis/MIS approaches (in terms of performance, storage requirements, and difficult lighting setups)?

Here is what I mean:

http://pages.cs.wisc.edu/~yu-chi/resear ... es/pmc.pdf

### Re: Population Monte Carlo Sampling

Posted: **Mon Jan 06, 2014 1:00 pm**

by **Dietger**

It seems like an interesting method, but I have never tried it. Note that the PMC methods in the mentioned paper are not very advanced, because the resampling step is completely omitted. Basically, the sampling method is adapted between iterations based on information gathered during previous iterations. As long as the probabilities are computed correctly and never drop to zero for contributing samples, this obviously works, but I am sure this has been done before without mentioning PMC. It would be interesting to see if the resampling step could be used for light transport in a constructive way.
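A minimal sketch of that between-iteration adaptation, assuming a toy 1-D integrand and two made-up sampling techniques (the function, constants, and names are all illustrative, not from the paper): the mixture weights are re-fit between iterations, and the estimate stays unbiased because each sample is divided by the pdf of the mixture it was actually drawn from, with a floor keeping no weight at zero.

```python
import math
import random

def f(x):
    """Toy integrand on [0, 1]: flat base plus a sharp peak near x = 0.8."""
    return 1.0 + 50.0 * math.exp(-((x - 0.8) ** 2) / (2 * 0.02 ** 2))

SIGMA = 0.05  # width of the peaked proposal (deliberately mismatched)

def pdf_peak(x):
    # Gaussian proposal around the peak; the tiny mass clamped
    # outside [0, 1] is ignored for simplicity
    return math.exp(-((x - 0.8) ** 2) / (2 * SIGMA ** 2)) / (SIGMA * math.sqrt(2 * math.pi))

def pmc_style_estimate(iterations=10, n_per_iter=2000, rng=random):
    w = [0.5, 0.5]                 # mixing weights: [uniform, peaked]
    estimates = []
    for _ in range(iterations):
        acc = 0.0
        contrib = [1e-9, 1e-9]     # per-technique contribution tallies
        for _ in range(n_per_iter):
            if rng.random() < w[0]:
                x, tech = rng.random(), 0
            else:
                x, tech = min(1.0, max(0.0, rng.gauss(0.8, SIGMA))), 1
            p = w[0] * 1.0 + w[1] * pdf_peak(x)   # pdf of the *current* mixture
            acc += f(x) / p
            contrib[tech] += f(x) / p
        estimates.append(acc / n_per_iter)
        # adapt between iterations, with a floor so no probability
        # ever drops to zero for contributing samples
        s = contrib[0] + contrib[1]
        w = [max(0.1, contrib[0] / s), max(0.1, contrib[1] / s)]
        total = w[0] + w[1]
        w = [w[0] / total, w[1] / total]
    return sum(estimates) / len(estimates)
```

The exact integral here is roughly 3.51, and the adapted mixture converges to it; the floor on the weights is exactly the "never drop to zero for contributing samples" condition from the post.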

A practical issue I see with the methods from this paper is that they store/adapt the mixing weights per pixel. This works fine for adapting the per pixel sampling rate or sampling decisions at the first bounce (usually), but is hard to generalize to other sampling decisions in a path tracer. You could of course try to store the mixing weights in world space instead of screen space (akin to Jensen's "Importance Driven Path Tracing using the Photon Map"), but such sampling magic is tricky to get right and easy to break.

Dietger

### Re: Population Monte Carlo Sampling

Posted: **Mon Jan 06, 2014 2:41 pm**

by **Dade**

I may be wrong, but I believe Octane currently uses PMC.

### Re: Population Monte Carlo Sampling

Posted: **Tue Jan 07, 2014 9:39 pm**

by **raider**

Dietger, thanks a lot! Very illuminating.

### Re: Population Monte Carlo Sampling

Posted: **Tue Jan 07, 2014 10:57 pm**

by **friedlinguini**

I found a somewhat later paper (http://pages.cs.wisc.edu/~yu-chi/resear ... r-egsr.pdf) to be intriguing, as it tries to combine PMC and ERPT.

### Re: Population Monte Carlo Sampling

Posted: **Wed Jan 08, 2014 9:46 am**

by **Dietger**

As I stated earlier on this forum (viewtopic.php?f=3&t=789&p=2268#p2268), I have some doubts about the PMC-ERPT paper. To me, the paper skips over important proofs and details, and I am therefore not convinced that it actually makes sense. But I would love to be proven wrong.

### Re: Population Monte Carlo Sampling

Posted: **Wed Jan 08, 2014 1:05 pm**

by **Zelcious**

PMC is nothing magical; I'd even argue it is nothing new. It's just importance sampling used in a certain way. There is no extra benefit.

I've been experimenting a lot with different versions of importance sampling, including PMC, and I've learned one thing.

You have to importance sample **ALL** the peaks, otherwise uniform sampling will be more efficient.

Or

you will have to introduce bias and filter away high-contributing paths. I think this is what Octane does, even though they claim to be physically correct (it's probably an option).

I recently wrote an unbiased method where I built global importance maps for regions in space from accumulated light-tracing statistics.

I used it for unbiased rendering of extremely difficult caustics scenes and it was really efficient.

BUT, the light tracing only picked up 99.99% of the caustics, so once in a while you would hit a path that wasn't directly covered by the importance sampling; its contribution would blow up and produce fireflies in the rendering.

A thing that you always have to remember is that the error only decreases as 1/sqrt(N) in the number of samples, so it takes forever to sample away fireflies.

It's easy to be fooled, because you get a recognizable picture much, much faster, but those fireflies are really hard to get rid of, and uniform sampling will win in the long run unless you importance sample all the peaks. It's all very logical.

A simple example: say you have 1000 peaks in your sample space and decide to spend half of your samples importance sampling 999 of the peaks, with the rest uniform. The last peak then receives samples at half the rate, so its error is sqrt(2) larger at any given time and it takes twice as long to converge to the same error. Those 999 peaks converge really fast, but the one you didn't cover is worse off and forces the whole rendering to take longer, unless you are willing to cheat a bit.
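The arithmetic above can be checked with the standard-error formula for a Monte Carlo mean. The spike and its numbers here are a toy, chosen only to make the scaling concrete:

```python
import math

# A spike of width 0.001 and height 1000 hidden in [0, 1] (integral = 1).
# Under uniform sampling each hit contributes 1000 and occurs with
# probability 0.001 -- exactly a firefly. Per-sample variance:
# E[f^2] - (E[f])^2 = 1000^2 * 0.001 - 1^2 = 999.
WIDTH, HEIGHT = 0.001, 1000.0
var_per_sample = HEIGHT ** 2 * WIDTH - 1.0

def std_error(time_units, rate=1.0):
    """Standard error after drawing `rate` samples per time unit."""
    return math.sqrt(var_per_sample / (time_units * rate))

# Half the sampling rate at a peak makes its error sqrt(2) larger at any
# fixed time, so matching the full-rate error takes twice as long; and
# halving the error itself always costs 4x the samples.
ratio = std_error(1000, rate=0.5) / std_error(1000, rate=1.0)
```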

I also didn't like the PMC-ERPT paper. I think the modification makes it biased; I believe I found a proof of that when I read it.

### Re: Population Monte Carlo Sampling

Posted: **Thu Jan 09, 2014 9:22 am**

by **Dade**

Zelcious, correct me if I'm wrong, but the short version of your post, and the answer to the original question, is: Metropolis is superior to PMC (because it is able to sample all 1000 "peaks").

P.S. I have never tried PMC, so I cannot argue the point, but I'm not surprised, because Metropolis has always worked quite well in my experience.

### Re: Population Monte Carlo Sampling

Posted: **Thu Jan 09, 2014 9:54 am**

by **Dietger**

**Dade wrote:**

> Metropolis is superior to PMC (because it is able to sample all 1000 "peaks").

Unfortunately, it's slightly more complicated than that. MLT is only as good as its mutation strategies. Sure, it samples the peaks proportionally to their contribution, but if the mutation strategies cannot effectively explore the peaks, the correlation between MLT samples will be massive and thus result in fireflies (even worse, fireflies created at the cost of many samples per firefly instead of just one). Imagine the pathological example of a Kelemen-style MLT implementation with ONLY large step mutations. Yes, it will sample perfectly proportionally to the radiance function, and yes, it will SUCK!
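That large-step-only pathology is easy to demonstrate with a toy 1-D Metropolis chain (everything here is illustrative, not a renderer): the chain really is distributed proportionally to f, yet once it lands on the rare tall peak it repeats the same sample for on the order of a thousand steps.

```python
import random

def f(x):
    """Mostly flat target with one tall, narrow peak."""
    return 1001.0 if 0.500 <= x < 0.501 else 1.0

def large_step_only_chain(n_steps, seed=0):
    """Metropolis with ONLY independent uniform ("large step") proposals.

    Returns the longest run of consecutive steps stuck on one sample.
    """
    rng = random.Random(seed)
    x = rng.random()
    stay, longest_stay = 1, 1
    for _ in range(n_steps):
        y = rng.random()                  # independent proposal
        accept = min(1.0, f(y) / f(x))    # uniform proposal => ratio of f
        if rng.random() < accept:
            x, stay = y, 1
        else:
            stay += 1
            longest_stay = max(longest_stay, stay)
    return longest_stay

# Once on the peak, any flat-region proposal is accepted with probability
# ~1/1001, so the chain emits the same bright sample ~1000 times in a row:
# the energy arrives in correlated bursts, the MLT analogue of a firefly.
```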

So roughly speaking, just like MIS (and PMC) can only get rid of all fireflies if they sample ALL peaks effectively, also MLT can only get rid of all fireflies if its mutation strategies can effectively explore ALL peaks.

### Re: Population Monte Carlo Sampling

Posted: **Thu Jan 09, 2014 8:28 pm**

by **raider**

Hmm... does it mean that **any** adaptive MC algorithm is hopeless in general, as it introduces bias? Is it (at least theoretically) possible to have an a priori bounded bias in adaptive MC?