Many in graphics seem to think that unbiasedness offers some great advantage over consistency.

Well, it's not really the case - let me explain why in very simple terms.

All that *unbiased* means in practice is that, thanks to the Central Limit Theorem, the *average of infinitely many* finite runs of an algorithm converges to zero error. (It also means that each run has an expected error of zero over *infinitely many* realizations, but that is basically equivalent to the above, and in practical terms it has no value at all, since we are always dealing with individual runs.)
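To make that concrete, here's a toy Python sketch (my own example, standard library only): a plain Monte Carlo estimate of the integral of x^2 over [0, 1], whose true value is 1/3. Any single 100-sample run carries visible error, but because the estimator is unbiased, averaging many independent runs drives the error toward zero.

```python
import random

# Toy illustration (not from the post): unbiased Monte Carlo estimate
# of the integral of x^2 over [0, 1]; the true value is 1/3.

def one_run(rng, n=100):
    # One finite run: the average of n random evaluations of x^2.
    return sum(rng.random() ** 2 for _ in range(n)) / n

rng = random.Random(42)
runs = [one_run(rng) for _ in range(10_000)]

single_run_error = abs(runs[0] - 1 / 3)
averaged_error = abs(sum(runs) / len(runs) - 1 / 3)

print(f"error of one 100-sample run:     {single_run_error:.4f}")
print(f"error of the average of 10k runs: {averaged_error:.4f}")
```

Each individual run is exactly the situation we face in practice; the vanishing error only shows up in the (unobservable) average over realizations.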

Which is pretty much the same as the definition of *consistency*: the limit value of the algorithm's output has zero error.
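Consistency doesn't require zero bias at any finite sample count, only zero error in the limit. Here's a toy sketch (my own example, not from the post) of an estimator that is biased for every finite n yet consistent: estimating 1/E[X] by 1/(sample mean of X). For X uniform on [1, 2], E[X] = 1.5, so the target value is 2/3; Jensen's inequality makes each finite-n estimate biased upward, but it still converges.

```python
import random

# Toy example: a biased-but-consistent estimator of 1/E[X] = 2/3
# for X uniform on [1, 2]. The bias vanishes as n grows.

def estimate(n, rng):
    sample_mean = sum(rng.uniform(1.0, 2.0) for _ in range(n)) / n
    return 1.0 / sample_mean  # biased for finite n (Jensen), consistent in the limit

rng = random.Random(0)
for n in (10, 1_000, 100_000):
    err = abs(estimate(n, rng) - 2 / 3)
    print(f"n = {n:>7}: |error| = {err:.5f}")
```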

So in the real world, all that matters is (a) having a convergent algorithm, and (b) being able to run the algorithm indefinitely (e.g. the algorithm's resource usage shouldn't grow with time).

Once these two points are satisfied, the *only important factor* is *convergence speed*.

What unbiasedness often gives compared to pure consistency is just a mathematical way to reason about the convergence speed in terms of probability theory - but if you can get that by other means, there's no advantage at all.
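This kind of probabilistic reasoning can also be checked numerically. A small sketch (my own toy, standard library only): for an unbiased N-sample Monte Carlo estimate, the Central Limit Theorem predicts an RMS error shrinking like 1/sqrt(N), so quadrupling the sample count should roughly halve it.

```python
import random

# Measure the CLT convergence rate of the unbiased estimator of
# the integral of x^2 over [0, 1] (true value 1/3).

def rms_error(n_samples, n_runs, rng):
    # Root-mean-square error over many independent runs.
    total = 0.0
    for _ in range(n_runs):
        est = sum(rng.random() ** 2 for _ in range(n_samples)) / n_samples
        total += (est - 1 / 3) ** 2
    return (total / n_runs) ** 0.5

rng = random.Random(1)
e1 = rms_error(100, 2_000, rng)
e2 = rms_error(400, 2_000, rng)
print(f"RMS error at N=100: {e1:.4f}")
print(f"RMS error at N=400: {e2:.4f}")
print(f"ratio: {e1 / e2:.2f}  (CLT predicts about 2)")
```

A consistent-but-biased estimator can be profiled the same way empirically, which is exactly the "other means" of reasoning about convergence speed.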

Convergence speed is what dictates the robustness of any consistent algorithm, whether it's unbiased or not.

p.s. Interestingly, some unbiased algorithms are not even consistent, as would be the case for a basic Metropolis sampler without restarts. So unbiasedness is, in a sense, a weaker property than consistency in terms of practical convergence behavior.
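A toy demonstration of that failure mode (my own contrived example, not a rendering-grade MLT setup): a random-walk Metropolis chain targeting a uniform density on two disjoint islands, [-2, -1] and [1, 2]. The true mean is 0, but a step size of 0.2 can never cross the zero-density gap, so a single chain without restarts stays in whichever island it starts in and its time average converges to about +/-1.5 - not consistent. Yet if each chain is started from an exact sample of the target (a coin flip between islands), the ensemble average over many independent chains is still centered on 0, which is the unbiasedness the p.s. refers to.

```python
import random

# Metropolis without restarts on a two-island uniform target.
# Support: [-2, -1] and [1, 2]; true mean is 0.

def in_support(x):
    return 1.0 <= abs(x) <= 2.0

def run_chain(steps, rng):
    # Start from an exact draw of the target distribution.
    x = rng.uniform(1.0, 2.0) * rng.choice([-1.0, 1.0])
    total = 0.0
    for _ in range(steps):
        proposal = x + rng.uniform(-0.2, 0.2)
        # Uniform density: Metropolis accepts iff the proposal is in support,
        # and a 0.2-step can never bridge the gap between the islands.
        if in_support(proposal):
            x = proposal
        total += x
    return total / steps  # time average of one chain

rng = random.Random(7)
single = run_chain(20_000, rng)
ensemble = sum(run_chain(200, rng) for _ in range(2_000)) / 2_000
print(f"single long chain's time average: {single:+.3f}")
print(f"average over 2000 short chains:   {ensemble:+.3f}")
```

The single run never recovers the true mean no matter how long it goes; only the (impractical) average over infinitely many independent realizations does.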