### Re: My little path tracer

Page **3** of **4**

Posted: **Wed Mar 13, 2019 1:55 pm**

I think the jade Buddha is really amazing!

Posted: **Fri Mar 15, 2019 7:29 am**

Hi. Thank you both. Glad you like it.

Btw., I forgot to say: I rendered the media with "The Beam Radiance Estimate" by Jarosz (2008). I use Embree to do the beam-point query. This is possible thanks to the fairly recent support for ray-aligned-disc intersections! So, if you happen to read this, Embree developers: thank you very much. This is a very cool feature. *thumbs*

Currently, I'm trying to implement Walter et al.'s "Microfacet Models for Refraction through Rough Surfaces" (2007). It proved to be much more difficult than I thought. Mostly, getting the expression for the density p(wo|wi) right, where wo and wi are given. I need it for BPT MIS. My renderer knows only a monolithic BSDF; it does not allocate component BxDFs like PBRT does. Therefore p must include both reflection and transmission. Oh well, I think I finally got it ...
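To make the above concrete, here is a minimal sketch of such a combined density, assuming the half-vector Jacobians from Walter et al. (2007) and using the Fresnel term as the lobe-selection probability. The function names `ndf_pdf` and `fresnel` are placeholders for whatever microfacet-normal density and Fresnel approximation the renderer actually uses; this is an illustration, not dawelter's implementation.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pdf_wo_given_wi(wi, wo, eta_i, eta_o, ndf_pdf, fresnel):
    """Density p(wo | wi) for a monolithic rough-dielectric BSDF,
    combining the reflection and transmission lobes.

    ndf_pdf(m):  density of sampled microfacet normals m (e.g. D(m)|m.n|).
    fresnel(c):  probability of picking the reflection lobe.
    Shading frame: the surface normal is (0, 0, 1)."""
    if wi[2] * wo[2] > 0.0:
        # Reflection: half vector, Jacobian |dm/dwo| = 1 / (4 |wo.m|).
        m = normalize(tuple(a + b for a, b in zip(wi, wo)))
        if m[2] < 0.0:
            m = tuple(-x for x in m)
        return fresnel(abs(dot(wi, m))) * ndf_pdf(m) / (4.0 * abs(dot(wo, m)))
    # Transmission: generalized half vector (Walter et al., Eq. 16)
    # and the corresponding Jacobian (Eq. 17).
    m = normalize(tuple(eta_i * a + eta_o * b for a, b in zip(wi, wo)))
    if m[2] < 0.0:
        m = tuple(-x for x in m)
    denom = eta_i * dot(wi, m) + eta_o * dot(wo, m)
    jac = (eta_o * eta_o * abs(dot(wo, m))) / (denom * denom)
    return (1.0 - fresnel(abs(dot(wi, m)))) * ndf_pdf(m) * jac
```

Because both lobes go through the same microfacet-normal density, the two branches differ only in the half-vector construction and the Jacobian, which is exactly the part that is easy to get wrong.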


Posted: **Mon Mar 18, 2019 2:44 pm**

Hey dawelter,

Can you tell me which paper the Buddha's subsurface scattering is based on? I'd like to get into this topic as well. If you have questions about Walter's microfacet model, feel free to ask me ^^ I have implemented it in my ray tracer. For me the big problem was numerical issues in the GGX normal distribution function when you use very small roughness factors and a theta angle that goes nearly to zero (micronormal == macronormal).


Posted: **Tue Mar 19, 2019 9:02 am**

We know since VCM that photon mapping can be seen as a path sampling method. Therefore, I want to first refer to

Raab et al. (2008) "Unbiased Global Illumination with Participating Media"

for the concise path integral formulation of volume rendering.

I use the stochastic progressive variant of photon mapping with the global radius decay from

Knaus & Zwicker (2011) "Progressive Photon Mapping: A Probabilistic Approach"
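The global radius decay mentioned above boils down to shrinking the squared gather radius by a fixed per-iteration ratio. A minimal sketch, assuming the SPPM-style update r²ᵢ₊₁ = r²ᵢ · (i + α)/(i + 1) with α ∈ (0, 1), which is the kind of ratio Knaus & Zwicker derive for a global (per-iteration, not per-pixel) radius:

```python
def radius_sequence(r1, alpha, n):
    """Gather radii r_1..r_n under the progressive decay
    r_{i+1}^2 = r_i^2 * (i + alpha) / (i + 1), with alpha in (0, 1).
    Larger alpha shrinks the radius more slowly (less variance,
    slower bias reduction)."""
    radii = [r1]
    r2 = r1 * r1
    for i in range(1, n):
        r2 *= (i + alpha) / (i + 1)
        radii.append(r2 ** 0.5)
    return radii
```

The usual choice α ≈ 2/3 trades variance against bias; the radii go to zero, but slowly enough that the variance of each iteration stays bounded.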

To generate photon paths I use Woodcock tracking, essentially. In "Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes", Kutz et al. (2017) developed many extensions of this. I use the "spectral tracking" variant. I put a volume photon on every sampled interaction point.
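For readers who haven't seen it: plain Woodcock (delta) tracking samples free-flight distances in a heterogeneous medium by stepping through a homogenized medium with null collisions. A minimal monochromatic sketch (the spectral variant of Kutz et al. generalizes the accept/reject step to per-wavelength weights):

```python
import math
import random

def delta_track(sigma_t, sigma_majorant, t_max, rng=random.random):
    """Woodcock (delta) tracking: sample a free-flight distance in a
    medium with extinction sigma_t(t) <= sigma_majorant along the ray.
    Returns the sampled interaction distance, or None if the particle
    leaves the medium (t > t_max)."""
    t = 0.0
    while True:
        # Tentative step through the majorant (homogenized) medium.
        t -= math.log(1.0 - rng()) / sigma_majorant
        if t >= t_max:
            return None
        # Accept as a real collision with probability sigma_t / majorant.
        if rng() < sigma_t(t) / sigma_majorant:
            return t
```

In a homogeneous medium every tentative collision is accepted, and the sampled distances are exponentially distributed with mean 1/σ_t, as expected.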

When tracing eye paths, I obtain volume interaction points by the same tracking methods as used for photon mapping. At these points I look for nearby photons and add their contribution. That is, in the basic variant, with point-point 3D estimators. In "The Beam Radiance Estimate for Volumetric Photon Mapping", Jarosz et al. (2008) developed a method to gather photons along a beam. I implemented this as well; the three pics are rendered with it. It is good for thin media. Actually, I like this paper a lot. Not only does it present the beam estimator, in Sec. 3.3 it also gives a nice derivation of the photon weights.
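The basic point-point 3D estimator mentioned above can be sketched in a few lines. This assumes photons are stored as (position, incident direction, flux) tuples with the flux already carrying the path weights, and uses a simple box kernel over the gather sphere; real implementations use a kd-tree query instead of the linear scan:

```python
import math

def volume_photon_estimate(photons, x, w_out, r, phase):
    """Point-point 3D radiance estimate at volume point x: sum the flux
    of photons within radius r, weighted by the phase function for the
    outgoing direction w_out, and divide by the gather-sphere volume.
    Each photon is (position, incident direction, flux)."""
    r2 = r * r
    total = 0.0
    for pos, w_in, flux in photons:
        d2 = sum((a - b) ** 2 for a, b in zip(pos, x))
        if d2 <= r2:
            total += flux * phase(w_in, w_out)
    return total / ((4.0 / 3.0) * math.pi * r ** 3)
```

The beam radiance estimate replaces the sphere around a single point with a cylinder (or discs) along the whole query ray, which is why it pays off in thin media where point estimates rarely find photons.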

In denser media I don't want to look for photons all the way to the next surface intersection. So I use a piecewise-constant stochastic estimate of the transmittance along the query beam. This essentially allows cutting off the query beam after a few mean free path lengths. The inspiration for this comes from Jarosz et al. (2011) "Progressive Photon Beams", Sec. 5.2.1, and Křivánek et al. (2014) "Unifying Points, Beams, and Paths in Volumetric Light Transport Simulation", Sec. 4.2, "long" and "short" beams.
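One way to realize such a stochastic cutoff, sketched here under the assumption of piecewise-constant extinction along the query beam: sample the beam length so that the probability of the beam reaching distance t equals the transmittance T(t). Then photons beyond the sampled cutoff are simply ignored, and the estimator stays unbiased in expectation (this is the "short beam" idea):

```python
import math

def short_beam_length(segments, u):
    """Sample a stochastic query-beam cutoff with P(beam reaches t) equal
    to the transmittance, for piecewise-constant extinction:
    segments = [(length, sigma_t), ...] along the beam.
    Inverts the optical depth tau(t) at tau = -ln(1 - u)."""
    tau = -math.log(1.0 - u)  # target optical depth
    t = 0.0
    for length, sigma in segments:
        if sigma * length >= tau:
            return t + tau / sigma
        tau -= sigma * length
        t += length
    return t  # the beam survives the whole query range
```

In a dense medium the sampled cutoff lands after only a few mean free paths, which is exactly the behavior described above.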

You can ask me about specifics. I'll try to answer.

But to be honest, this is very much *brute force*. If you want to render SSS in media as dense as I used for the Buddha, you might be better off using a fast approximation!

Regarding Walter et al.'s rough transmission model: I think I finally got it right. Here is a recreation of Figure 1. I "only" implemented the Beckmann NDF with the V-cavity masking & shadowing function. Looks fine, and I don't have to implement VNDF sampling to keep the weights low. I also noticed numerical issues with low alpha, but IIRC I get it down to 1e-3 with no issue. And at that point the material looks pretty much perfectly specular. I do shading calculations in double precision, though.

Btw., since you mention GGX: Heitz recently released a paper on how to sample the GGX VNDF more easily.

http://jcgt.org/published/0007/04/01/paper.pdf

https://hal.archives-ouvertes.fr/hal-01509746/document

I thought about implementing it ... For you it is probably worthwhile if you don't have it already.
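For reference, the sampling routine from that JCGT paper is short enough to sketch here. This follows Heitz's 2018 construction (transform the view direction to the hemisphere configuration, sample a warped disk, project onto the hemisphere, transform back); treat it as an illustration of the published algorithm, not production code:

```python
import math

def sample_ggx_vndf(v, ax, ay, u1, u2):
    """Sample a visible GGX normal for view direction v in the shading
    frame (normal = +z, v[2] > 0), after Heitz (JCGT 2018).
    ax, ay are the anisotropic roughness parameters; u1, u2 in [0, 1).
    Returns a unit microfacet normal on the upper hemisphere."""
    # Transform the view direction to the hemisphere configuration.
    vx, vy, vz = ax * v[0], ay * v[1], v[2]
    inv = 1.0 / math.sqrt(vx * vx + vy * vy + vz * vz)
    vh = (vx * inv, vy * inv, vz * inv)
    # Orthonormal basis around vh.
    lensq = vh[0] * vh[0] + vh[1] * vh[1]
    if lensq > 0.0:
        inv_len = 1.0 / math.sqrt(lensq)
        t1 = (-vh[1] * inv_len, vh[0] * inv_len, 0.0)
    else:
        t1 = (1.0, 0.0, 0.0)
    t2 = (vh[1] * t1[2] - vh[2] * t1[1],
          vh[2] * t1[0] - vh[0] * t1[2],
          vh[0] * t1[1] - vh[1] * t1[0])
    # Sample a disk and warp the point onto the projected hemisphere.
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    p1 = r * math.cos(phi)
    p2 = r * math.sin(phi)
    s = 0.5 * (1.0 + vh[2])
    p2 = (1.0 - s) * math.sqrt(max(0.0, 1.0 - p1 * p1)) + s * p2
    pz = math.sqrt(max(0.0, 1.0 - p1 * p1 - p2 * p2))
    nh = tuple(p1 * a + p2 * b + pz * c for a, b, c in zip(t1, t2, vh))
    # Transform back to the ellipsoid configuration and renormalize.
    nx, ny, nz = ax * nh[0], ay * nh[1], max(0.0, nh[2])
    inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx * inv, ny * inv, nz * inv)
```

As the thread notes, the sample distribution and PDF are the same as in Heitz's 2014 VNDF routine, so it can be swapped in without touching the MIS weights.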


Posted: **Tue Mar 19, 2019 6:05 pm**

Thanks a lot for your detailed answer. This will help me get off to a good start.

At the moment I use the sampling technique from Eric Heitz described in this paper:

https://hal.inria.fr/hal-00996995v1/document

The paper from your link is then the next step. But at the moment I'm more interested in subsurface scattering.


Posted: **Fri Mar 22, 2019 10:53 am**

If you already have the VNDF sampling from the 2014 paper, then implementing the new one is trivial. It's the same set of samples with the same PDF, so you just have to copy-paste the reference sample method and you'll have faster GGX sampling.

Posted: **Sun Mar 24, 2019 10:04 am**

@XMAMan you're welcome.

Meanwhile, I rendered variations of the Buddha. This time with the new glossy transmissive material. I also added a switch in the material to force a path-tracing step instead of getting Li from photons, essentially treating the BSDF as if it were a delta function. No NEE yet. I'm slightly concerned about the black rims, but I just attribute them to the lack of anything to reflect.

So far so good. But my renders take awfully long. After reading through the Arnold and Manuka papers, I've come to the conclusion that I should focus on some basics. Sane light selection, QMC sampling, path guiding, splitting, and RR are things I want to have.
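Of the basics listed above, Russian roulette is the cheapest to add. A minimal throughput-based sketch (the `min_depth` guard and the max-channel survival probability are common conventions, not anything specific to this renderer):

```python
import random

def roulette(throughput, depth, rng=random.random, min_depth=3):
    """Throughput-based Russian roulette: after min_depth bounces,
    terminate the path with probability tied to its remaining
    contribution, and reweight survivors so the estimator stays
    unbiased. throughput is an RGB list; returns (throughput, alive)."""
    if depth < min_depth:
        return throughput, True
    p = min(1.0, max(throughput))  # survival probability
    if rng() >= p:
        return throughput, False   # path terminated
    return [c / p for c in throughput], True
```

Splitting is the dual operation: where the expected contribution is high, spawn several continuation rays with their weights divided by the split factor instead of killing paths.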


Posted: **Mon Mar 25, 2019 5:30 pm**

Very pretty indeed!

Posted: **Tue Apr 09, 2019 8:44 pm**

Thanks, knightcrawler!

Here is another 24 h rendering. I had this fun idea to make a flat-earth version of the globe figure. The dome is filled with a thin scattering medium, which creates the glow effect around the sun. The image has other details which I like, like the subtle shadows cast on the surrounding walls. But it proved difficult to render, i.e. it's still noisy.


Posted: **Sat Feb 08, 2020 9:23 am**

Here is a first result from my attempt to implement path guiding following the "Path Guiding in Production" paper.

So far I have surface guiding only. My ingredients are:

* KD-tree inspired by "Practical Path Guiding for Efficient Light-Transport Simulation" by Müller et al. (2017)

* Gaussian mixture model to represent incident radiance, inspired by "On-line Learning of Parametric Mixture Models for Light Transport Simulation" by Vorba et al. (2014)

* Forward path tracing only.

I rendered my take on the famous torus scene. It is actually only a little better than rendering with standard path tracing, which is shown below with the same number of samples.

To see that it actually does something sane, I visualize the distributions pertaining to the kd-tree cells. This one is from the final learning pass. "Sampled" means the obvious: it's where the samples are drawn from; it is fixed. "Learned" is the distribution fitted to the irradiance obtained from the drawn samples.
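In a guided path tracer, the per-cell distribution is typically not used alone: one samples a defensive mixture of the learned distribution and plain BSDF sampling, evaluating the mixture PDF (one-sample MIS). A toy sketch over a 1D domain, with hypothetical `sample`/`pdf` interfaces that just illustrate the structure:

```python
import random

def sample_direction(guide, bsdf, beta=0.5, rng=random.random):
    """One-sample MIS between a learned guiding distribution and plain
    BSDF sampling: pick one technique at random, but evaluate the PDF
    of the mixture, so the weight stays bounded wherever either
    technique alone has low density. Both arguments expose
    .sample(u) -> direction and .pdf(direction)."""
    if rng() < beta:
        d = guide.sample(rng())
    else:
        d = bsdf.sample(rng())
    pdf = beta * guide.pdf(d) + (1.0 - beta) * bsdf.pdf(d)
    return d, pdf
```

The mixture weight beta (often around 0.5, or learned per cell) protects against a badly-fitted guiding distribution: even if the GMM misses a lobe entirely, the BSDF component keeps the mixture PDF nonzero there.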
