friedlinguini wrote: cessen wrote: Is my new understanding closer to correct? Am I still missing something?

"Unbiased" means that the expected value for a random sample equals the correct value. I.e., if you look at every possible sample you might generate with a rendering algorithm, weight each of those samples by the probability of generating it, add all of these up, and get the correct answer, then the algorithm is unbiased.

Yeah, after looking up the term "expected value", that's exactly what I meant. I was imagining, for example, rolling a die repeatedly and the average converging on 3.5 (apparently the "expected value"). But even a single roll is unbiased, even though the result might be 2, because the die is fair and so the error it introduces is also "fair" in some sense. (Of course, this analogy doesn't account for PDFs, etc. But it represents my intuitive understanding of the concept right now.)
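To make the die analogy concrete, here's a toy sketch (not rendering code; the names are my own). It computes the expected value of a fair die directly from the definition, and shows that while any single roll can land far from 3.5, the average of many rolls closes in on it:

```python
import random

random.seed(0)

def roll():
    """One roll of a fair six-sided die."""
    return random.randint(1, 6)

# Expected value by definition: each outcome weighted by its
# probability (1/6 for a fair die). (1+2+...+6)/6 = 3.5 exactly.
expected = sum(range(1, 7)) / 6
print(expected)  # 3.5

# A single roll is an unbiased estimator of 3.5 even though any one
# result (say, 2) is far from it; the errors average out over rolls.
n = 100_000
mean = sum(roll() for _ in range(n)) / n
print(round(mean, 2))  # close to 3.5
```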

friedlinguini wrote: "Consistent" means that given enough time, an algorithm will converge to the correct value. I think this is what you were thinking of as unbiased.

I think my idea of unbiased was actually "unbiased + consistent". I can see now how it is useful to separate those concepts. Indeed, I'm much more concerned about consistent than unbiased. I really appreciate you taking the time to help explain these concepts to me!
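The separation is easiest to see with two deliberately contrived estimators of the die's mean. This is a toy illustration of my own (the function names and the artificial 1/n bias term are made up for the example, not anything from a paper): one estimator is biased for every finite sample count but consistent, the other is unbiased but never converges no matter how many samples you draw:

```python
import random

random.seed(1)
TRUE_MEAN = 3.5  # expected value of a fair six-sided die

def rolls(n):
    """n independent rolls of a fair die."""
    return [random.randint(1, 6) for _ in range(n)]

def biased_consistent(samples):
    # Sample mean plus an artificial bias that shrinks as 1/n.
    # Its expectation is wrong for every finite n (biased), yet it
    # converges to 3.5 as n grows (consistent).
    n = len(samples)
    return sum(samples) / n + 1.0 / n

def unbiased_inconsistent(samples):
    # Always use just the first sample, no matter how many were
    # drawn. Its expectation is exactly 3.5 (unbiased), but drawing
    # more samples never improves it (not consistent).
    return samples[0]

for n in (10, 1_000, 100_000):
    s = rolls(n)
    print(n, round(biased_consistent(s), 3), unbiased_inconsistent(s))
```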

Incidentally, I'm not sure if this is a property of consistency or not, but since my target use-case is rendering animations, it is critically important to me that rendered animations be flicker-free. Or, put another way, it's important to me that the continuity of the rendered result across frames accurately represents the continuity of the scene description across frames.

Awesome! Thanks so much! That definitely helps. It's also comforting to see him use a die analogy similar to the one I was thinking of.

ingenious wrote: cessen wrote: ...Under that (incorrect) definition, the paper you reference above is, indeed, unbiased. But lightcuts as presented in the original paper still is not.

You keep saying this, but I still don't know what your reasoning is.

Lightcuts, as presented in the original paper, builds the point lights and the light tree once, and does the rest of the rendering with those lights and that tree. To use the die analogy again: it rolls the die once and just uses that result. No matter how long it runs, it never re-rolls the die, so no matter how long it runs, it will never converge on the correct solution. Unless I misunderstood the paper, or am missing something else...

That's not to say that you couldn't build a different algorithm that runs the original lightcuts algorithm in a loop with fresh random numbers each time and combines the results. That would then converge. But the method as presented in the original paper would not. Unless, again, I'm missing something.
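Sticking with the die analogy, here's a sketch of the distinction I mean (my own toy model, standing in for the actual lightcuts light-tree construction): one estimator fixes its random choices up front and reuses them forever, so extra passes never reduce the error; the other re-draws fresh random numbers each pass and averages, which does converge:

```python
import random

TRUE_VALUE = 3.5  # expected value of a fair six-sided die

# "Roll once and keep it": fix the random outcome up front, the way
# the original paper fixes its point lights and light tree, then
# reuse it no matter how long we keep rendering.
rng = random.Random(42)
fixed_sample = rng.randint(1, 6)

def fixed_estimate(num_passes):
    # Every pass reuses the same random outcome, so averaging more
    # passes never moves the estimate: the error is frozen in.
    return sum(fixed_sample for _ in range(num_passes)) / num_passes

def rerolled_estimate(num_passes, rng):
    # Wrap the same estimator in a loop with fresh random numbers
    # each pass and average; this converges to the true value.
    return sum(rng.randint(1, 6) for _ in range(num_passes)) / num_passes

print(fixed_estimate(1), fixed_estimate(100_000))  # identical: no convergence
print(round(rerolled_estimate(100_000, random.Random(7)), 2))  # near 3.5
```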