## My little path tracer

torus_in_cube.jpg

I switched to mixtures of von Mises–Fisher distributions and implemented tricks from the "Path Guiding in Production" paper (stochastic filtering and throughput clamping). Together with some parameter tweaking, this helped. Now I can start working on the guiding algorithm for volume rendering.
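For reference, the throughput clamping amounts to something very simple (a toy sketch in my own words, the constants are made up; the course notes describe the real thing): cap the weight of a training sample so occasional fireflies can't dominate the fitted distribution.

```python
def clamped_sample_weight(radiance, throughput, clamp_factor=10.0, mean_weight=1.0):
    # Weight used when fitting the guiding distribution: incident radiance
    # times path throughput, clamped relative to a running mean so that a
    # single firefly cannot dominate the fit. clamp_factor is an arbitrary knob.
    w = radiance * throughput
    return min(w, clamp_factor * mean_weight)
```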

### Re: My little path tracer

Another over-night render. Much better this time.

### Re: My little path tracer

That looks gorgeous. I'm wondering what the memory requirements of that additional data would be for a more or less realistic production scene.

Did you also implement the sample weighting described in that paper?

### Re: My little path tracer

Thanks

I haven't been very concerned with memory so far. However, I can tell that this scene uses about 2k cells, each of which has to store the coefficients for two mixtures, plus some statistics for the EM algorithm. To be frank, I haven't tested the algorithm on much other than this one scene ... it's still very much a work in progress.
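If it helps, a back-of-the-envelope estimate (lobe count, float size and the EM-statistics overhead are all assumptions, not measurements):

```python
K = 8                          # lobes per mixture (assumption)
floats_per_lobe = 1 + 3 + 1    # weight, mean direction (unit vector), concentration kappa
mixtures_per_cell = 2
em_stats_factor = 2            # running sufficient statistics, roughly doubling storage (assumption)

bytes_per_cell = mixtures_per_cell * K * floats_per_lobe * 4 * em_stats_factor
total_bytes = 2000 * bytes_per_cell
# -> 640 B per cell, about 1.3 MB for 2k cells; same order of magnitude
#    as the ~1.1 kB per cell I measured (which includes some extra bookkeeping).
```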

And regarding the weighting - you mean in the expectation maximization? Yes, I absolutely needed that because I only do forward path tracing. Hence, initially, the distribution from which directions are sampled has nothing to do with the actual distribution of incident light. If you want to see the code, it's pretty terrible, but take a look if you like: https://github.com/DaWelter/ToyTrace/blob/master/src/distribution_mixture_models.cxx#L407
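For illustration, sample-weighted EM for a MoVMF roughly looks like this (a simplified batch numpy sketch, not my actual code - I use an incremental variant - with the Banerjee et al. approximation for the concentration parameter):

```python
import numpy as np

def vmf_log_pdf(x, mu, kappa):
    # log of the vMF density on S^2: f(x) = kappa / (4*pi*sinh(kappa)) * exp(kappa * mu.x)
    # log(sinh(kappa)) is computed in a numerically stable way.
    log_sinh = kappa + np.log1p(-np.exp(-2.0 * kappa)) - np.log(2.0)
    log_c = np.log(kappa) - np.log(4.0 * np.pi) - log_sinh
    return log_c + kappa * (x @ mu)

def weighted_em(x, w, mus, kappas, pis, iters=20):
    # x: (N,3) unit direction samples, w: (N,) per-sample weights
    # (e.g. clamped radiance times throughput), mus/kappas/pis: initial parameters.
    for _ in range(iters):
        # E-step: weighted responsibilities
        log_p = np.stack([np.log(pis[k]) + vmf_log_pdf(x, mus[k], kappas[k])
                          for k in range(len(pis))], axis=1)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        rw = r * w[:, None]                 # responsibilities scaled by sample weight
        # M-step
        wk = rw.sum(axis=0)
        pis = wk / wk.sum()
        for k in range(len(pis)):
            rk = rw[:, k] @ x               # weighted resultant vector
            norm = np.linalg.norm(rk)
            mus[k] = rk / norm
            rbar = norm / wk[k]             # mean resultant length
            # Banerjee et al. approximation for kappa on the 2-sphere
            kappas[k] = np.clip(rbar * (3.0 - rbar**2) / (1.0 - rbar**2), 1e-2, 1e4)
    return mus, kappas, pis
```

Note that the sample weight enters only through `rw`; with all weights equal this reduces to plain EM. Initialization matters a lot in practice.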

Edit: Obligatory Cornell box with water. This one is unfortunate, as my (poor) implementation of brute-force path tracing (but still with NEE) beats my (also poor) implementation of the guided algorithm in an equal-time rendering. The guided algorithm seems more sample-efficient, though. P.S. A cell containing the mixtures and other stuff takes about 1.1 kB.

### Re: My little path tracer

Had some time to also implement the directional quad-tree representation from Müller et al. (2017), "Practical Path Guiding for Efficient Light-Transport Simulation". Here are some observations:

- Mixtures of von Mises–Fisher (MoVMF) distributions don't seem to fit the incident radiance samples as well. Occasionally peaks are too narrow, too broad, or point in the wrong direction.
- It plays badly with my Vorba & Křivánek style ADRRS (Russian roulette) implementation. I use only the learned incident radiance estimate - Li(x,w) - to determine whether a path is to be terminated. So the survival probability is essentially Li(x,w)*bsdf(w,x,w')*path_contribution_up_to_x/pixel_intensity_estimate. If the estimate Li(x,w) happens to be a bad fit such that Li(x,w) is much less than the real radiance, then most paths are terminated, leading to huge variance and splotchy artifacts.
- Quad-trees make it easy to keep track of the variance in each node. Thus the weight window in the ADRRS termination criterion can be scaled by an error estimate stddev(Li(x,w)). In other words: if not sure about the future path contribution, give more wiggle room.
- We can also try to direct samples to regions which are little explored, i.e. nodes with few samples. This is related to the exploration-exploitation dilemma in reinforcement learning. I implemented a sort of upper confidence bound algorithm, which would normally be applied to multi-armed bandit problems. It's very hacky and unscientific.
- To get better MoVMF fits, shuffle the samples before they are fed into the EM routine. My current render loop operates in passes - a small number of samples per pixel per pass. After each pass, samples are sorted into the spatial bins of the guiding data structure. Then the directional distributions of each cell are fitted to the shuffled samples with the incremental algorithm.
- A variable mixing factor between SD-tree and BSDF sampling probability - also known as the MIS selection probability - is very beneficial. See Sec. 10.5 in the SIGGRAPH 2019 course "Path Guiding in Production". For very narrow BSDFs, the product of the BSDF and Li(w) looks almost like the BSDF itself. In the extreme case of perfectly specular reflection/transmission, only the reflected/transmitted direction yields a non-zero value. For now, I only implemented a hacky shader-dependent mixing. In the case of my "flat earth" scene, where the camera looks through several translucent surfaces, the default 0.5 mix weight works pretty terribly.
- My quad-tree code seems a little faster than the MoVMF code. Gut feeling; didn't measure. All the evaluations of exponential functions in the MoVMF code seem to kill the runtime, in spite of using fast exponential approximations and compilation to SIMD instructions.
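To make the ADRRS points above concrete, the termination criterion amounts to something like this (a sketch; the `widen` factor on the error estimate is my own knob, not from the paper):

```python
def survival_probability(li_estimate, li_stddev, bsdf_value, path_throughput,
                         pixel_estimate, widen=1.0):
    # ADRRS-style termination: expected future contribution relative to the
    # pixel intensity estimate. The weight window is widened by the error
    # estimate of Li, so cells with uncertain estimates terminate fewer paths.
    expected = (li_estimate + widen * li_stddev) * bsdf_value * path_throughput
    p = expected / pixel_estimate
    return min(p, 1.0)
```

A full ADRRS implementation would also split paths when the ratio exceeds the upper end of the weight window; this sketch only handles termination.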
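And the selection-probability point, as a sketch (the roughness threshold is an arbitrary value standing in for my hacky shader-dependent mixing; it is not from the course):

```python
def guided_sample_pdf(s, pdf_guide, pdf_bsdf):
    # One-sample MIS: with probability s the direction is drawn from the
    # guiding distribution, otherwise from the BSDF. The effective pdf used
    # for the estimator is the mixture pdf.
    return s * pdf_guide + (1.0 - s) * pdf_bsdf

def selection_probability(bsdf_roughness, s_default=0.5):
    # Near-specular lobes get (almost) no guiding, since the product with
    # Li(w) looks like the BSDF anyway; diffuse lobes get the default weight.
    t = min(bsdf_roughness / 0.3, 1.0)   # 0.3: arbitrary roughness threshold
    return s_default * t
```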

### Re: My little path tracer

P.S.: Improved visualization. The spheres show the incident radiance: you can see the peaks from the main light, and overall non-zero values from indirect illumination. The boxes show the principal axes of the covariance matrices of the sample positions within the cells of the SD-tree.
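The boxes come from an eigendecomposition of the per-cell sample covariance, roughly like so (numpy sketch):

```python
import numpy as np

def principal_axes(points):
    # Covariance of the sample positions within a cell. The eigenvectors give
    # the box orientation, the square roots of the eigenvalues its half-extents
    # (up to an arbitrary display scale).
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return np.sqrt(np.maximum(eigvals, 0.0)), eigvecs
```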

### Re: My little path tracer

As the path guiding does not do as well as I'd like, I decided to do something else.

I built quasi-Monte Carlo sampling into the forward path tracer. I'm using the 21201-dimensional Sobol sequence from Joe & Kuo https://web.maths.unsw.edu.au/~fkuo/sobol/index.html. I tried a 2d base pattern with rotations and xor-scrambling, but I got biased images compared to the random sampler. A fixed number of dimensions is allocated per BSDF sample, NEE light sample, distance sample and so on. Some things are still randomly sampled, like the BSDF component. Due to the use of delta tracking, distance sampling can consume an unbounded number of random numbers, so I limit its use of dimensions to 10 or so. After that, pseudo-random samples are drawn.

The base sequence is rotated for each pixel by a different amount based on a blue noise pattern, like in the Arnold paper. Correlations between dimensions don't matter here, as each pixel converges individually.
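The dimension bookkeeping amounts to something like this (a toy Python sketch; a single van der Corput dimension with an xor hack stands in for the real Joe & Kuo Sobol table, so only the allocation and fallback logic is meaningful):

```python
import random

def radical_inverse_base2(i, bits=32):
    # Van der Corput sequence in base 2 (the first Sobol dimension):
    # bit-reverse the index and map it to [0, 1).
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r / float(1 << bits)

class PathSampler:
    # Hands out sample dimensions for one path. The first max_dims dimensions
    # come from the low-discrepancy sequence, rotated per pixel (Cranley-
    # Patterson); everything beyond the budget falls back to a PRNG.
    def __init__(self, sample_index, pixel_rotation, max_dims=64, seed=0):
        self.i = sample_index
        self.rot = pixel_rotation        # e.g. looked up from a blue-noise mask
        self.dim = 0
        self.max_dims = max_dims
        self.rng = random.Random((sample_index << 16) ^ seed)

    def next1d(self):
        if self.dim < self.max_dims:
            # Stand-in: a real implementation indexes the Sobol direction
            # numbers by self.dim instead of xor-hashing the index.
            u = radical_inverse_base2(self.i ^ (self.dim * 0x9e3779b9 & 0xffffffff))
            self.dim += 1
            return (u + self.rot) % 1.0  # per-pixel rotation
        self.dim += 1
        return self.rng.random()
```

Each event along the path (BSDF sample, NEE sample, distance sample) then calls `next1d()` a fixed number of times, so a given event always lands on the same dimensions.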

And it actually works. Some more pics:

* Basic wine glass. Rendered with photon mapping. Nothing fancy.

https://www.dropbox.com/s/owyeb63x1zhkms4/wineglass.jpg

* Dragon with more stuff. Forward path tracing with QMC. Using the maximum-roughness trick from Arnold to prevent fireflies. https://www.dropbox.com/s/q2is7g0ub8bxixp/xyzrgb_dragon_extended_qmc1024spp.jpg
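The maximum-roughness trick, as I understand it, is essentially:

```python
def clamped_roughness(lobe_roughness, path_max_roughness):
    # "Max roughness" firefly suppression: a lobe may not be sharper than the
    # roughest lobe already encountered along the path, which kills the
    # rough-then-sharp chains that produce fireflies.
    return max(lobe_roughness, path_max_roughness)

# Toy path: specular -> rough -> near-specular bounce.
path_max = 0.0
effective = []
for lobe in [0.0, 0.4, 0.05]:
    path_max = clamped_roughness(lobe, path_max)
    effective.append(path_max)
# effective is now [0.0, 0.4, 0.4]: the last bounce is forced to stay rough.
```

This introduces a little bias, but in exchange the rough-path fireflies go away.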


### Re: My little path tracer

The dragon looks good. Which BRDF model are you using?