Efficient sampling of many lights

Practical and theoretical implementation discussion.
cessen
Posts: 30
Joined: Sun Apr 14, 2013 9:34 pm

Re: Efficient sampling of many lights

Postby cessen » Fri Mar 21, 2014 3:37 pm

friedlinguini wrote:The fact that you don't like it doesn't make it biased. :-)


Ha ha, fair enough. :-) Indeed, my understanding of the term "biased" was incorrect. Lightcuts is unbiased.

Nevertheless, the lightcuts paper presents an algorithm that, without modification, will never converge to a correct solution (even modulo caustics from perfect reflectors/refractors), and it results in visually objectionable flickering artifacts in animation. And one of the main benefits of lightcuts, as I understand it, is precisely that smooth, structured error, which in still images is not at all obvious as error. From an applications and usability standpoint, that puts lightcuts in the same camp as e.g. photon mapping and radiance caching in my mind. So that's where I was coming from.

But, again, still unbiased. I understand that now.

ingenious
Posts: 273
Joined: Mon Nov 28, 2011 11:11 pm
Location: London, UK
Contact:

Re: Efficient sampling of many lights

Postby ingenious » Fri Mar 21, 2014 6:13 pm

cessen wrote:Nevertheless, the lightcuts paper presents an algorithm that, without modification, will never converge to a correct solution (even modulo caustics from perfect reflectors/refractors), and it results in visually objectionable flickering artifacts in animation. And one of the main benefits of lightcuts, as I understand it, is precisely that smooth, structured error, which in still images is not at all obvious as error. From an applications and usability standpoint, that puts lightcuts in the same camp as e.g. photon mapping and radiance caching in my mind. So that's where I was coming from.


Since all sampling in Lightcuts is unbiased, as you average the results of many independently rendered images, you will converge to the correct solution. What's bothering you is the correlated noise, which is the same noise as in path tracing, just that some of the random numbers drawn are shared for all pixels, hence the visual patterns.

What do you think about Coherent Path Tracing then:
http://graphics.ucsd.edu/~iman/coherent ... racing.php

Is it unbiased according to your understanding? :) It's exactly as unbiased as Keller's original instant radiosity (minus the VPL clamping).

There is a difference between correlated sampling and bias. In fact, the "artifacts" you see in (progressive) photon mapping are 99% due to correlated sampling in typical scenes with lots of flat surfaces. They are not bias. For example, you can generate 10 independent photon maps and for each pixel choose a random one to do your query. This will decorrelate the photon sampling to some degree, though it will lower its efficiency.
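The decorrelation trick described above can be sketched in a toy model (not a real photon mapper): each "photon map" is abstracted as a single shared error term, standing in for the fact that nearby pixels querying the same map see the same photons and hence the same error. The function names and the Gaussian error model are illustrative assumptions, not anything from the papers discussed.

```python
import random

def make_maps(true_value, noise, n_maps, rng):
    # Each independent 'photon map' carries one shared error term: an
    # unbiased but noisy estimate of the same true value.
    return [true_value + rng.gauss(0.0, noise) for _ in range(n_maps)]

def render_correlated(maps, n_pixels):
    # Every pixel queries the SAME map, so the whole image shares one
    # error offset: smooth, structured error rather than visible noise.
    return [maps[0]] * n_pixels

def render_decorrelated(maps, n_pixels, rng):
    # Each pixel picks a random map: the same errors now land on
    # different pixels, turning structured error into per-pixel noise.
    return [rng.choice(maps) for _ in range(n_pixels)]
```

Both renders are unbiased in this model; the decorrelated one merely trades structured error for noise, at some cost in efficiency, exactly as described above.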

cessen
Posts: 30
Joined: Sun Apr 14, 2013 9:34 pm

Re: Efficient sampling of many lights

Postby cessen » Fri Mar 21, 2014 11:38 pm

ingenious wrote:What do you think about Coherent Path Tracing then:
http://graphics.ucsd.edu/~iman/coherent ... racing.php

Is it unbiased according to your understanding? :)


So, first I'll make a distinction between my current understanding of the term "unbiased", and my understanding of it prior to this discussion.

My prior (incorrect) understanding of the term was that it essentially meant, "Given infinite time, this algorithm will converge to the correct solution." Under that (incorrect) definition, the paper you reference above is, indeed, unbiased. But lightcuts as presented in the original paper still is not.

But, interestingly, neither is the algorithm I presented in the OP when taken on its own. The function pseudo-code I presented does not iterate to sample infinitely many lights over time; it only selects a single light. For it to converge to a correct solution, it needs to be used by another algorithm that _does_ iterate. This is analogous to lightcuts.

My new understanding of the term "unbiased" after this discussion includes both lightcuts and the function pseudo-code I presented (at least, if you replace the input variable of the function with a random number source), because the choices they make are not biased in any particular direction, even if those choices are only made once.
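The "single selection is still unbiased" idea can be sketched concretely: pick one light with probability proportional to some weight, and divide its contribution by that probability. The expectation over the random choice is then the exact sum over all lights. The weights and dictionary fields below are illustrative assumptions, not the OP's actual selection scheme.

```python
import random

def sample_one_light(lights, rng):
    """Pick ONE light with probability p_i proportional to its weight,
    and return its contribution divided by p_i. The expected value of
    this estimate equals the exact sum over all lights, so even a
    single selection is unbiased."""
    weights = [light["weight"] for light in lights]
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for light, w in zip(lights, weights):
        acc += w
        if r <= acc:
            return light["contribution"] * (total / w)  # divide by p = w/total
    # Guard against floating-point rounding at the end of the range.
    return lights[-1]["contribution"] * (total / weights[-1])
```

Any single return value may be far from the true sum, but averaging repeated calls (the "outer loop" doing the iterating) converges to it, which is the unbiased + consistent combination.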

Is my new understanding closer to correct? Am I still missing something?

friedlinguini
Posts: 79
Joined: Thu Apr 11, 2013 5:15 pm

Re: Efficient sampling of many lights

Postby friedlinguini » Sat Mar 22, 2014 1:10 am

cessen wrote:Is my new understanding closer to correct? Am I still missing something?


There's probably a FAQ somewhere about this.

"Unbiased" means that the expected value for a random sample equals the correct value. I.e., if you look at every possible sample you might generate with a rendering algorithm, weight each of those samples by the probability of generating it, add all of these up, and get the correct answer, then the algorithm is unbiased.
"Consistent" means that given enough time, an algorithm will converge to the correct value. I think this is what you were thinking of as unbiased.

These two notions are orthogonal to one another:
Unbiased + consistent: Path tracing, metropolis light transport, lightcuts with repeated VPL generation.
Biased + consistent: Progressive photon mapping, VCM, path space regularization.
Unbiased + inconsistent: Lightcuts with a fixed number of VPLs, path tracing with the same random number seed at every iteration.
Biased + inconsistent: Photon mapping, irradiance caching.
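The four combinations above can be demonstrated with a toy problem: estimating the mean of Uniform(0, 1), whose true value is 0.5. The specific bias terms (1/n, a constant 0.1) are illustrative stand-ins, not models of any particular renderer.

```python
import random

TRUE_VALUE = 0.5  # mean of Uniform(0, 1)

def unbiased_consistent(n, rng):
    # Fresh samples each call: E[estimate] = 0.5, and error -> 0 as n grows.
    return sum(rng.random() for _ in range(n)) / n

def unbiased_inconsistent(n, seed):
    # Same seed every run (cf. fixed-seed path tracing): a fair draw,
    # but increasing n never re-rolls the die, so it never converges.
    rng = random.Random(seed)
    return rng.random()

def biased_consistent(n, rng):
    # A systematic error that shrinks with n (cf. the kernel radius in
    # progressive photon mapping): E[estimate] = 0.5 + 1/n -> 0.5.
    return sum(rng.random() for _ in range(n)) / n + 1.0 / n

def biased_inconsistent(n, rng):
    # A fixed systematic offset that no amount of sampling removes.
    return sum(rng.random() for _ in range(n)) / n + 0.1
```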

friedlinguini
Posts: 79
Joined: Thu Apr 11, 2013 5:15 pm

Re: Efficient sampling of many lights

Postby friedlinguini » Sat Mar 22, 2014 1:34 am

Googled a bit. Before ingenious yells at me I should point out this paper: http://cs.au.dk/~toshiya/misc.pdf

It explains things more thoroughly than I did and also points out start-up bias in metropolis light transport.

ingenious
Posts: 273
Joined: Mon Nov 28, 2011 11:11 pm
Location: London, UK
Contact:

Re: Efficient sampling of many lights

Postby ingenious » Sat Mar 22, 2014 12:45 pm

cessen wrote:...Under that (incorrect) definition, the paper you reference above is, indeed, unbiased. But lightcuts as presented in the original paper still is not.


You keep saying this, but I still don't know what your reasoning is.

friedlinguini wrote:Googled a bit. Before ingenious yells at me I should point out this paper: http://cs.au.dk/~toshiya/misc.pdf

It explains things more thoroughly than I did and also points out start-up bias in metropolis light transport.


Yeah, I wonder why Toshiya hasn't put this report on his web page. It does indeed nicely summarize some very common misconceptions. Though it doesn't touch on correlated sampling.

stefan
Posts: 42
Joined: Wed Dec 21, 2011 8:57 pm

Re: Efficient sampling of many lights

Postby stefan » Sat Mar 22, 2014 4:22 pm

friedlinguini wrote:These two notions are orthogonal to one another:
Unbiased + consistent: Path tracing, metropolis light transport, lightcuts with repeated VPL generation.
Biased + consistent: Progressive photon mapping, VCM, path space regularization.
Unbiased + inconsistent: Lightcuts with a fixed number of VPLs, path tracing with the same random number seed at every iteration.
Biased + inconsistent: Photon mapping, irradiance caching.

I'm not so sure about the latter two. I believe Mr Jensen refers to photon mapping as consistent (= throw an infinite number of photons at it, and it will converge to the correct result). Irradiance caching with a very high number of rays per cache entry and a very low max error gets to one cache entry per pixel, therefore converging to an equivalent path-traced result. I haven't seen anyone implement progressive irradiance caching yet (I've been meaning to try), but it would be an interesting experiment.

stefan
Posts: 42
Joined: Wed Dec 21, 2011 8:57 pm

Re: Efficient sampling of many lights

Postby stefan » Sat Mar 22, 2014 4:29 pm

friedlinguini wrote:Googled a bit. Before ingenious yells at me I should point out this paper: http://cs.au.dk/~toshiya/misc.pdf

It explains things more thoroughly than I did and also points out start-up bias in metropolis light transport.

Thanks, that is a great paper!

cessen
Posts: 30
Joined: Sun Apr 14, 2013 9:34 pm

Re: Efficient sampling of many lights

Postby cessen » Sat Mar 22, 2014 7:14 pm

friedlinguini wrote:
cessen wrote:Is my new understanding closer to correct? Am I still missing something?


"Unbiased" means that the expected value for a random sample equals the correct value. I.e., if you look at every possible sample you might generate with a rendering algorithm, weight each of those samples by the probability of generating it, add all of these up, and get the correct answer, then the algorithm is unbiased.


Yeah, after looking up the term "expected value", that's exactly what I meant. I was imagining, for example, rolling a die, and it converging on 3.5 (apparently the "expected value"). But even with a single roll it is still unbiased, even though the result might be 2, because the die is fair and the error introduced is also thus "fair" in some sense. (Of course, this analogy doesn't account for pdf's, etc. But it represents my intuitive understanding of the concept right now.)
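The die analogy translates directly into a couple of lines, assuming a fair six-sided die: a single roll is an unbiased estimate of 3.5 even though it is always off by at least 0.5, and averaging re-rolls is what adds consistency.

```python
import random

def roll(rng):
    # One fair-die roll: unbiased, since E[roll] = 3.5, even though
    # any single outcome is a whole number and therefore "wrong".
    return rng.randint(1, 6)

def average_rolls(n, rng):
    # Re-rolling and averaging is the consistent part: the running
    # mean converges to the expected value 3.5.
    return sum(roll(rng) for _ in range(n)) / n
```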

friedlinguini wrote:"Consistent" means that given enough time, an algorithm will converge to the correct value. I think this is what you were thinking of as unbiased.


I think my idea of unbiased was actually "unbiased + consistent". I can see now how it is useful to separate those concepts. Indeed, I'm much more concerned about consistent than unbiased. I really appreciate you taking the time to help explain these concepts to me!

Incidentally, I'm not sure if this is a property of consistency or not, but since my target use-case is rendering animations, it is critically important to me that rendered animations be flicker-free. Or, put another way, it's important to me that the continuity of the rendered result across multiple frames accurately represents the continuity of the scene description across multiple frames.

friedlinguini wrote:Googled a bit. Before ingenious yells at me I should point out this paper: http://cs.au.dk/~toshiya/misc.pdf


Awesome! Thanks so much! That definitely helps. It's also comforting to see him use a similar die analogy as I was thinking of.

ingenious wrote:
cessen wrote:...Under that (incorrect) definition, the paper you reference above is, indeed, unbiased. But lightcuts as presented in the original paper still is not.


You keep saying this, but I still don't know what your reasoning is.


Lightcuts, as presented in the original paper, builds the point lights and the light tree once, and proceeds to do the rest of the rendering process with those lights and with that tree. To use the die analogy again, it rolls the die once, and just uses that result. No matter how long it runs, it never re-rolls the die. Thus no matter how long it runs, it will never converge on the correct solution. Unless I misunderstood the paper, or am missing something else...

That's not to say that you couldn't build a different algorithm that runs the original lightcuts algorithm in a loop with new random numbers each time, and combines the results. That would then converge. But the method as presented in the original paper would not. Unless, again, I'm missing something.
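The "run it in a loop with new random numbers and combine the results" construction is generic, and can be sketched as follows, assuming `render_once(seed)` stands for any single-pass estimator (one lightcuts run, one path-tracing pass, etc.) that is unbiased for each fixed seed; `demo_pass` is a hypothetical stand-in for testing.

```python
import random

def render_progressive(render_once, n_passes):
    """Average independent runs of a single-pass estimator. If each
    pass is unbiased, the running average is unbiased and, because
    every pass uses fresh random numbers, also consistent."""
    accum = 0.0
    for seed in range(n_passes):
        accum += render_once(seed)  # new random numbers each pass
    return accum / n_passes

def demo_pass(seed):
    # Stand-in for one full single-pass render: an unbiased but noisy
    # estimate of a true value of 0.5.
    return random.Random(seed).random()
```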

ingenious
Posts: 273
Joined: Mon Nov 28, 2011 11:11 pm
Location: London, UK
Contact:

Re: Efficient sampling of many lights

Postby ingenious » Sat Mar 22, 2014 9:47 pm

cessen wrote:Lightcuts, as presented in the original paper, builds the point lights and the light tree once, and proceeds to do the rest of the rendering process with those lights and with that tree. To use the die analogy again, it rolls the die once, and just uses that result. No matter how long it runs, it never re-rolls the die. Thus no matter how long it runs, it will never converge on the correct solution. Unless I misunderstood the paper, or am missing something else...


No matter how long what runs? The algorithm, as presented in the paper, always runs for a finite amount of time given a fixed number of initial virtual point lights. If you start increasing this number, the light tree becomes bigger. In order to converge to the correct solution, the light cut for each pixel should also grow with the size of the tree. Depending on the heuristics used in the cut construction, this may or may not happen. The original description in the Lightcuts paper bounds the size of the cut, as far as I remember. If that is the case, it won't converge as you increase the number of light samples.

cessen wrote:That's not to say that you couldn't build a different algorithm that runs the original lightcuts algorithm in a loop with new random numbers each time, and combines the results. That would then converge. But the method as presented in the original paper would not. Unless, again, I'm missing something.


Every unbiased method was originally described as a single-pass, finite-sample algorithm. Progressive rendering with an "outer loop" came much later, so in that regard Lightcuts is on the same "level" as path tracing.

cessen wrote:Incidentally, I'm not sure if this is a property of consistency or not, but since my target use-case is rendering animations, it is strictly important to me that rendering of animations be flicker-free. Or, perhaps put another way, it's important to me that the continuity of the rendered result across multiple frames accurately represents the continuity of the scene description across multiple frames.


In this case lightcuts, or any algorithm that heavily relies on inter-pixel correlated sampling, is not very suitable.

