A note on PPM/VCM with spectral rendering and motion blur
It is no secret that kernel estimation decreases the asymptotic convergence rate (in big-O notation). This is also reflected in VCM by the fact that the MIS weight of PPM decreases at the rate \[O(N^{-1/3}),\] indicating that the MSE convergence rate of PPM is slower than that of BDPT (well illustrated in Fig. 7 of the VCM paper).
The actual convergence rate depends on how many dimensions participate in the kernel estimation. The MSE formula from statistics says that for d-dimensional kernel estimation \[MSE \propto O(N^{-\frac{4}{d+4}}).\] That gives us \[O(N^{-1})\] for unbiased methods (without kernel estimation, i.e. d=0) and \[O(N^{-2/3})\] for simple PPM (which has a 2D on-surface kernel estimation).
Yet, in modern production renderers an important additional requirement is spectral rendering and motion blur. In the context of PPM, that means adding two more dimensions (wavelength and time) to the kernel estimation. That decreases the convergence rate of PPM to the d=4 case: \[MSE \propto O(N^{-1/2}),\] which is twice as slow as unbiased MC and also leads to a twice-as-fast decrease of the MIS weight in the VCM algorithm. Iliyan briefly mentioned practical consequences of this problem during our SIGGRAPH course on light transport, but the suggestion was to accumulate more photons, without paying attention to the decreased convergence rate and the curse of dimensionality, which strikes quickly as we increase the dimensionality of the kernel estimation.
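To make the rates above concrete, here is a small sketch (the helper name is mine, not from the thread) that evaluates the MSE exponent -4/(d+4) for the three cases discussed:

```python
# MSE of d-dimensional kernel density estimation with an optimally
# shrinking bandwidth: MSE ~ O(N^(-4/(d+4))).
from fractions import Fraction

def mse_exponent(d: int) -> Fraction:
    """Asymptotic MSE exponent for kernel estimation over d dimensions."""
    return Fraction(-4, d + 4)

# d = 0: no kernel estimation (unbiased MC)      -> N^-1
# d = 2: on-surface kernel (plain PPM)           -> N^-2/3
# d = 4: surface + wavelength + time             -> N^-1/2
for d in (0, 2, 4):
    print(f"d={d}: MSE ~ O(N^{mse_exponent(d)})")
```

Going from d=2 to d=4 is exactly the step from -2/3 to -1/2 discussed above.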
Another thought is that the MIS weights in VCM should also change in this case (note that Eq. 7 in the VCM paper changes when we add dependence on wavelength and time, leading to changes of the weights in Eq. 10!), as well as the optimal bounds for the parameter alpha (the range after Eq. 17 in the paper).
I am curious whether anyone has any follow-up work (in progress) on that?

Anton
Re: A note on PPM/VCM with spectral rendering and motion blur
Thanks for kicking off this discussion, Anton! Indeed, spectral rendering and motion blur add another two dimensions. Frankly, I haven't put much thought into the implications for the convergence rate, but even without these it is already annoying that the convergence rate of photon mapping / vertex merging suffers from the radius reduction. In my experience, in practice it's best to start off with a radius small enough that you won't need to reduce it at all. A small radius means you have less bias, which is not a bad thing, considering that the MIS combination in VCM doesn't take bias into account. Also, while a very small radius could lead to abysmal noise levels in pure (S)PPM, in VCM it's usually not that bad, as the BDPT techniques take care of most of the illumination anyway. This is true for most typical scenes, but obviously pathological cases exist.
In the SIGGRAPH course I did mention that the issue with spectral and motion-blur rendering can be ameliorated by merging with the photons of the last N frames, though this can increase memory consumption quite a bit. It'll be interesting to come up with better solutions.
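One way to read that suggestion is as a rolling window of per-frame photon maps; here is a minimal sketch under that assumption (the photon map is stubbed out as a plain list, and all names are mine):

```python
from collections import deque

class PhotonHistory:
    """Keep the photon maps of the last n frames and merge against all of
    them; this trades roughly n-fold photon storage for more photons
    per density estimate."""
    def __init__(self, n_frames: int):
        self.maps = deque(maxlen=n_frames)

    def add_frame(self, photons: list):
        self.maps.append(photons)  # the oldest frame is evicted automatically

    def all_photons(self) -> list:
        """All photons currently in the window, oldest frame first."""
        return [p for frame in self.maps for p in frame]

history = PhotonHistory(n_frames=3)
for frame in range(5):
    history.add_frame([f"photon-{frame}-{i}" for i in range(2)])
# After 5 frames with a window of 3, only frames 2, 3, 4 remain.
```

The memory cost Iliyan mentions is visible directly: the window holds n full photon maps at once.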
Re: A note on PPM/VCM with spectral rendering and motion blur
Can't you just turn it into two dimensions by randomizing a common time and wavelength for each pass? I thought that was standard practice for the SPPM class of algorithms. I'm probably missing some important details...
Re: A note on PPM/VCM with spectral rendering and motion blur
Zelcious wrote:
> Can't you just turn it into two dimensions by randomizing a common time and wavelength for each pass? I thought it was standard practice for the SPPM class of algorithms. I'm probably missing some important details...

That's how I did motion blur in SPPM, and it also works fine for spectral rendering. Those two samples just need to be common across all the eye paths and light paths for each pass. This approach can be combined with UPS/VCM.
Density estimation over those two dimensions is not very useful in my opinion, unless you have an object that teleports from one place to another or a material with an extremely sharp spectral distribution.
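For illustration, a minimal sketch of this per-pass correlation (all function names and the sampling ranges are mine): one time and one wavelength are drawn per pass and shared by every light path and every eye path, so the kernel estimation itself stays 2D.

```python
import random

def render_pass(rng: random.Random):
    """One SPPM pass with a single shared time and wavelength.

    Both samples are drawn once per pass and reused by all light and
    eye paths of that pass, instead of entering the kernel estimation.
    """
    t = rng.random()                           # shutter time in [0, 1)
    wavelength = 380.0 + 400.0 * rng.random()  # nm, in [380, 780)
    # trace_light_paths(t, wavelength)   # hypothetical: deposit photons
    # trace_eye_paths(t, wavelength)     # hypothetical: merge at (t, wavelength)
    return t, wavelength

rng = random.Random(42)
samples = [render_pass(rng) for _ in range(4)]  # 4 passes, 4 (t, lambda) pairs
```

Averaging many such passes integrates over time and wavelength by brute-force Monte Carlo, which is exactly what makes whole passes correlated.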
Re: A note on PPM/VCM with spectral rendering and motion blur
toshiya wrote:
> That's how I did motion blur in SPPM, and it also works fine for spectral rendering. Those two samples just need to be common across all the eye paths and light paths for each pass. This approach can be combined with UPS/VCM.

Indeed, this would work, though it'd be very "interesting" to observe the image converging with this scheme: I'd expect to see the colors of the rainbow flickering for some time.
Re: A note on PPM/VCM with spectral rendering and motion blur
You'd get whole (noisy) images at fixed wavelengths and fixed shutter times merged with each other. So both motion-blur and spectral effects would look like ghosting of the corresponding objects / illumination features, in the spirit of Cook '84 distributed ray tracing.
Even though such correlation can be acceptable for simple renderings, I am not sure it is practical for heavy production scenes, i.e. whether you have the luxury of accumulating enough noisy frames to get rid of both the Monte Carlo noise and the ghosting and color noise.
As for materials with sharp spectral response, I believe every refractive material is such.

Anton
Re: A note on PPM/VCM with spectral rendering and motion blur
You can see this approach in action in my gpusppm :>
Anton is right that this approach is prone to banding artifacts. However, with enough samples (to the extent that the purely random approach also converges), I don't necessarily find this correlated approach inferior. Theoretically, both have the same convergence rate; they just have different distributions of MC integration errors over the image. I also think that it is a common misconception in graphics that noise is always more desirable than correlated artifacts (see "Randomized Coherent Sampling for Reducing the Perceptual Error of Rendered Images" on my webpage for a counterexample).

It would, however, be nice if we could use different samples for each pixel/path, which would open up options for adaptive sampling etc.

I am not sure that all refractive materials have sharp spectral responses. Do you have any reference? Even if that is the case, having a sharp spectral response itself seems already problematic to handle...
Re: A note on PPM/VCM with spectral rendering and motion blur
toshiya wrote:
> I am not sure if all refractive materials have sharp spectral responses. Do you have any reference? Even if it is the case, having a sharp spectral response itself seems already problematic to handle...

The refractive index of real materials usually varies with wavelength, causing different wavelengths to refract at different angles. In the context of a smooth refractor (glass, water, you-name-it), that means a path traced with one wavelength will become invalid if you change the wavelength.
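As an illustration of that wavelength dependence, here is a small sketch using Cauchy's two-term dispersion equation and Snell's law; the coefficients are illustrative values roughly in the range of common crown glass, not measured data:

```python
import math

def cauchy_ior(wavelength_nm: float, a: float = 1.5046, b: float = 4200.0) -> float:
    """Cauchy's equation n(lambda) = A + B / lambda^2, with B in nm^2."""
    return a + b / wavelength_nm**2

def refracted_angle(theta_in: float, n: float) -> float:
    """Snell's law for a ray entering the medium from air/vacuum (n1 = 1)."""
    return math.asin(math.sin(theta_in) / n)

# The same incident ray refracts to slightly different angles at the two
# ends of the visible spectrum, so a path traced at one wavelength
# generally does not stay valid at another.
theta = math.radians(45.0)
blue = refracted_angle(theta, cauchy_ior(400.0))  # higher IOR, bends more
red = refracted_angle(theta, cauchy_ior(700.0))   # lower IOR, bends less
```

Even this small angular difference is enough to invalidate a specular chain through a glass object when the wavelength changes.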

Anton

Re: A note on PPM/VCM with spectral rendering and motion blur
kaplanyan wrote:
> Refractive index of real materials usually varies with wavelength, causing different wavelengths to refract at different angles. In the context of a smooth refractor (glass, water, you-name-it), that means a path traced with one wavelength will become invalid if you change the wavelength.

I don't think the issue is the IOR varying with wavelength (just use a different constant wavelength per PPM pass); it's small shifts in wavelength creating a large change in the GI solution per wavelength, which would make density estimation across wavelengths more useful. It is possible to set up exotic optical systems and/or materials that exercise this use case, but it seems unlikely in typical scenes.
Re: A note on PPM/VCM with spectral rendering and motion blur
toshiya wrote:
> I also think that it is a common misconception in graphics that noise is always more desirable than correlated artifacts (see "Randomized Coherent Sampling for Reducing the Perceptual Error of Rendered Images" on my webpage for a counterexample).

Indeed, it's only a question of whether the samples for each pixel are correlated or not. With pure (progressive) photon mapping, there is 100% correlation. And so it is with instant radiosity (for which there's also the misconception that it is a biased algorithm). Arguably, the high-frequency noise resulting from decorrelated sampling is perceptually better than correlated noise/banding (Mitchell's papers?).
toshiya wrote:
> It would however be nice if we could use different samples for each pixel/path, which would open options for adaptive sampling etc.

How about postponing the evaluation of the photon weight (flux) to the moment it is actually merged with an eye subpath (which would supply the wavelength)? This will obviously be somewhat costly, but it should do it. Alternatively, one could sample e.g. 10 wavelengths and fix them for the rendering iteration. Then each light subpath would randomly choose one of these, and the eye subpaths would compute their weights for all 10 wavelengths. (Or vice versa, though this would increase the photon storage.) Such a scheme would bring some color randomness into the image.
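A rough sketch of that second scheme (all helper names and the toy weight function are mine): fix a small set of wavelengths per iteration, let each light subpath commit to one, and have the eye side evaluate all of them so any merge can look up the matching weight.

```python
import random

N_WAVELENGTHS = 10

def sample_iteration_wavelengths(rng: random.Random) -> list:
    """Fix N wavelengths (nm) for one whole rendering iteration."""
    return [380.0 + 400.0 * rng.random() for _ in range(N_WAVELENGTHS)]

def light_subpath_wavelength(wavelengths: list, rng: random.Random) -> float:
    """Each light subpath randomly commits to one fixed wavelength;
    only that single choice needs to be stored with its photons."""
    return rng.choice(wavelengths)

def eye_subpath_weights(wavelengths: list, eval_weight) -> dict:
    """The eye subpath evaluates its weight for all fixed wavelengths, so
    a merge can pick the entry matching the photon's committed one."""
    return {w: eval_weight(w) for w in wavelengths}

rng = random.Random(7)
wavelengths = sample_iteration_wavelengths(rng)
photon_wavelength = light_subpath_wavelength(wavelengths, rng)
weights = eye_subpath_weights(wavelengths, lambda w: 1.0 / w)  # toy weight
```

The "vice versa" variant would instead store all N weights per photon, which is where the extra photon storage comes from.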