Disappointing building times from LBVH GPU-based builder

 Posts: 11
 Joined: Fri Jan 13, 2012 4:44 pm
Re: Disappointing building times from LBVH GPU-based builder
Sure, some care is needed in everything one does.
Re: Disappointing building times from LBVH GPU-based builder
dr_eck wrote:An unbiased ray tracer will give a result that is statistically accurate, even if a "small" number of rays is traced.
"Statistically accurate"? And what does that mean? I can easily give you an example where a biased estimator is by all statistical means more accurate than an unbiased estimator of the same value with the same number of samples.
Actually, there's no need for me to give you an example: on many scenes, irradiance caching produces a solution that is closer to the true solution than a path tracer with either the same number of samples or the same rendering time, you choose. It is plain wrong to claim that a random realization of one estimator will be closer to the true solution than a random realization of another estimator just because the expected value of the first one equals the true value. Come on, people! Not to mention that the central limit theorem only holds for a "sufficiently large number of samples".
Now, consistency is very important indeed, and I agree with outofspace and graphicsMan. If your estimator is not consistent, then there's a good chance that in the future, when the number of samples taken increases due to better hardware, your method will become useless, as it doesn't give a better solution with more samples. And that's what matters more.
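The "biased can beat unbiased" point is easy to check numerically. Here is a toy sketch (a hypothetical NumPy example, nothing to do with irradiance caching itself): for normally distributed samples, the biased variance estimator with divisor n has lower mean squared error than the unbiased one with divisor n-1.

```python
# Toy Monte Carlo check: a biased estimator beating an unbiased one on MSE.
import numpy as np

rng = np.random.default_rng(0)
true_var = 1.0
n, trials = 10, 100_000

x = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
unbiased = x.var(axis=1, ddof=1)  # divisor n-1: E[estimate] == true_var
biased = x.var(axis=1, ddof=0)    # divisor n: E[estimate] == (n-1)/n * true_var

mse_unbiased = np.mean((unbiased - true_var) ** 2)  # theory: 2/(n-1) ~ 0.222
mse_biased = np.mean((biased - true_var) ** 2)      # theory: (2n-1)/n^2 = 0.19
print(mse_biased < mse_unbiased)  # the biased estimator wins on MSE
```

The biased estimator trades a small systematic error for lower variance, and comes out ahead on total error; that is exactly the trade an irradiance cache makes.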

Re: Disappointing building times from LBVH GPU-based builder
Yes, this is a fine point that many in graphics don't seem to get.
All that unbiased means in practice is that, thanks to the Central Limit Theorem,
an infinite sum of finite runs will converge to zero error.
(It also means that each run has an expected error of zero over infinitely many realizations,
but in practical terms this has no value at all: all that matters is the above.)
Which is pretty much the same definition as consistency: the limit value of the algorithm's
output has zero error.
So in the real world, all that matters is (a) having a convergent algorithm, and (b) being able to run the algorithm
indefinitely (e.g. the algorithm's resource usage shouldn't grow with time).
Once these two points are satisfied, the only important factor is convergence speed.
What unbiasedness often gives compared to pure consistency is just a mathematical way to
reason about the convergence speed in terms of probability theory, but if you can get that by
other means, there's no advantage at all.
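The "infinite sum of finite runs converges to zero error" claim can be sketched numerically. A hypothetical toy integrand is assumed here (estimating E[U^2] = 1/3 for U uniform on [0,1]); averaging k independent unbiased runs shrinks the RMS error roughly like 1/sqrt(k):

```python
# Averaging independent unbiased runs: error shrinks like 1/sqrt(k).
import numpy as np

rng = np.random.default_rng(1)
true_value = 1.0 / 3.0
samples_per_run, realizations = 16, 20_000

def rmse_of_average(k):
    # Average k independent finite runs; measure RMS error over many realizations.
    u = rng.random(size=(realizations, k, samples_per_run))
    runs = (u * u).mean(axis=2)  # each run is an unbiased estimate of 1/3
    avg = runs.mean(axis=1)      # average of k runs
    return np.sqrt(np.mean((avg - true_value) ** 2))

e1, e4, e16 = rmse_of_average(1), rmse_of_average(4), rmse_of_average(16)
print(e1 > e4 > e16)  # each 4x increase in runs roughly halves the error
```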
Re: Disappointing building times from LBVH GPU-based builder
outofspace wrote: From a purely computational point of view, I suspect it will take at least
10 years before we'll have the raw compute power needed to run fully realistic light transport
in realtime (and that's without taking into account increasing display resolution).
Possibly 15 or 20.
That is also what I fear. There have been a few GPU path tracing demos on very simple scenes which fit in constant memory that run more or less in realtime, but other than that the prospects are rather grim. Initially, I got quite excited by GPU path tracing, but I see nowhere near the 10-100x speedup promised by some GPU manufacturers, except on simple scenes or scenes with a very limited number of bounces per ray.
Re: Disappointing building times from LBVH GPU-based builder
Hi guys, I would agree that 320 ms for a high-quality build is pretty great.
I spent quite some time implementing the first HLBVH from Jacopo's initial paper, and my head almost exploded during the coding... head-to-node mappings, segment heads, and so on.
But we managed to make it run pretty well in both CUDA and OpenCL, and don't get me wrong, it was a really good paper and we use it for a lot of projects.
I just started working on the work queue version; it seems to be a lot simpler, and I will report some timings when I get there.
I am also looking forward to reviewing the nih framework; it is a great inspiration for us mortals.
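For readers who haven't gone down this road: the heart of LBVH-style builders is sorting primitives by Morton code, so that a radix sort clusters spatially nearby centroids. A minimal sketch of the classic 30-bit code (the bit-spreading trick widely used in CUDA builders; function names here are just illustrative):

```python
# Minimal 30-bit Morton code sketch for LBVH-style builders.
def expand_bits(v: int) -> int:
    # Spread the low 10 bits of v so there are two zero bits between each.
    v = (v * 0x00010001) & 0xFF0000FF
    v = (v * 0x00000101) & 0x0F00F00F
    v = (v * 0x00000011) & 0xC30C30C3
    v = (v * 0x00000005) & 0x49249249
    return v

def morton3d(x: float, y: float, z: float) -> int:
    # x, y, z are centroid coordinates already normalized to [0, 1]
    # (i.e. relative to the scene bounding box).
    def quantize(c: float) -> int:
        return min(max(int(c * 1024.0), 0), 1023)
    return (expand_bits(quantize(x)) << 2) \
         | (expand_bits(quantize(y)) << 1) \
         |  expand_bits(quantize(z))
```

After sorting by these codes, the tree hierarchy can be emitted from the sorted sequence; that is where the head-to-node and segment-head bookkeeping of the original HLBVH paper comes in.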
Re: Disappointing building times from LBVH GPU-based builder
outofspace wrote:All that unbiased means in practice is that, thanks to the Central Limit Theorem,
an infinite sum of finite runs will converge to zero error.
(It also means that each run has an expected error of zero over infinitely many realizations,
but in practical terms this has no value at all: all that matters is the above.)
This can be quite an important distinction if you think of it in terms of a render farm: with an unbiased algorithm, you just have each node separately render the scene at a lower quality and then average the results. With a biased (but nonetheless consistent) renderer, it's more complicated.
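The render-farm point in code form (a hypothetical toy "renderer" is assumed here, a one-pixel Monte Carlo estimate of a known integral): with an unbiased estimator, averaging many cheap per-node renders targets the same value as one expensive render.

```python
# Render farm sketch: averaging unbiased low-sample renders.
import numpy as np

rng = np.random.default_rng(2)

def render(samples: int) -> float:
    # Stand-in for an unbiased renderer: Monte Carlo estimate of
    # the integral of cos(u) over [0, pi/2], whose true value is 1.
    u = rng.random(samples) * (np.pi / 2.0)
    return float(np.mean(np.cos(u)) * (np.pi / 2.0))

# 8 farm nodes, each rendering with 1/8 of the samples, then averaged:
nodes = [render(1000) for _ in range(8)]
farm_average = sum(nodes) / len(nodes)
single_big_render = render(8000)
# Both are unbiased estimates of the same value; averaging loses nothing.
print(abs(farm_average - 1.0) < 0.05 and abs(single_big_render - 1.0) < 0.05)
```

With a biased-but-consistent method (e.g. an irradiance cache), each node's systematic error does not average out, so naive averaging no longer converges to the true image.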
Re: Disappointing building times from LBVH GPU-based builder
It seems interesting... I will give it a try and implement a high-quality BVH builder in OpenCL too.
You say that you have implemented it in both OpenCL and CUDA; why both? Did you encounter any performance differences between CUDA and OpenCL? Have you been able to use the HLBVH on the CPU too?
Thx
Spectral
OMPF 2 global moderator
Re: Disappointing building times from LBVH GPU-based builder
OptiX now has a GPU-based HLBVH builder as of version 2.5. Has anyone tried to benchmark it yet?
Re: Disappointing building times from LBVH GPU-based builder
Does anyone know if it is the work queue version or the original?
Re: Disappointing building times from LBVH GPU-based builder
We just had a visit from David Macallister (nice chap), and I think he mentioned that the BVH builder in current OptiX is in fact the work queue version.