Low-discrepancy samplers

 Posts: 16
 Joined: Mon Oct 29, 2012 6:02 am
Re: Low-discrepancy samplers
Thanks to everyone for the insightful replies, it's very kind of you. It finally starts to make a bit more sense to me!
Re: Low-discrepancy samplers
BTW,
I have also played a lot with QMC on the GPU (note that I can't use it in my renderer due to patents... but...) and I haven't really noticed any improvement.
QMC gives good stratification, etc... but...
1) After 1000 samples it is better to go with uniform random numbers
2) On the GPU my renderer is slower with QMC (surely due to incoherent memory accesses etc.) and so, even for the first frame... I get better results with a uniform RNG
So, honestly, I have never seen where QMC helps on the GPU! Has anyone noticed this too?
Spectral
OMPF 2 global moderator
Re: Low-discrepancy samplers
Ehm yes, I did some qualitative comparisons yesterday, and I noticed both things: it is slower (despite the calculations being lighter), and it seems to give poorer results... I would love to see some insight on this from the QMC gurus.

 Posts: 16
 Joined: Mon Oct 29, 2012 6:02 am
Re: Low-discrepancy samplers
spectral wrote: Notice that I can't use it in my renderer due to patents...
This is quite surprising to me. What patents? How were they allowed to patent it? I presume that e.g. the Halton sequence was invented by a guy named Halton (and not, say, Nvidia?). Perhaps the patent was issued because they were the first to use QMC on the GPU?
Re: Low-discrepancy samplers
They use QMC for rendering... simply!
They invented nothing... they just took some of the well-known "integration" tools... and applied them to rendering (it is integration too)!!!
Maybe they were the first to use it... it looks strange to me... it is like patenting the numbers "0 1 2 3 4 5 6 7 8 9" for rendering
BTW: NVIDIA/Mental Images & Pixar have a lot of patents... be careful
Spectral
OMPF 2 global moderator
Re: Low-discrepancy samplers
I'm not a QMC guru in any way, but after working hard for quite a while I actually did manage to implement a Sobol sampler that improves convergence by quite a lot.
My implementation might not be the most academically correct one, but it works, and I've put quite a lot of hours into measuring convergence vs. other samplers; it beats MT, Halton and Faure (although they're pretty close). And all this with not-too-objectionable correlation patterns during rendering, and no hashing or scrambling whatsoever, just the pure output of the Sobol sample generator.
The 1-thread scenario is simple (oh, and I don't generate my samples ahead of time, I just draw them as long as the path continues; I'm using the Joe & Kuo data so I can go up to dimension 21201):
1. Move to next sobol sample index
2. (sample the image plane) dimension 0 & dimension 1 for generating the pixel index
(basically: pixelX = width * dim[0]; pixelY = height * dim[1])
3. dimension 2 & dimension 3 for jittering the sample within the pixel
4. dimension 4 & dimension 5 for lens sampling (I haven't implemented motion blur so I don't sample time atm)
5. (direct light sampling) dimension 6 for choosing a light source, dimension 7 & 8 for sampling the chosen light source
6. dimension 9 & 10 for direction and dimension 11 for choosing a bsdf component to sample
7. dimension 12 for RR
8. go to #5 and repeat until max_path_length (incrementing the dimension all the time, obviously) then go to #1
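The dimension layout above can be sketched as a simple counter that advances through the Sobol dimensions as the path grows. This is only an illustration of the bookkeeping, not the poster's actual code: the sampler and all shading are stubbed out, and `dimensionsPerPath` is a hypothetical name.

```cpp
// Sketch of the per-path dimension bookkeeping from steps 1-8.
// The stub `next` stands in for "fetch the current Sobol dimension
// for this sample index, then advance to the next dimension".
int dimensionsPerPath(int maxPathLength) {
    int dim = 0;
    auto next = [&dim]() { ++dim; return 0.5f; };  // stub sampler
    next(); next();  // dims 0-1: pixel position on the image plane
    next(); next();  // dims 2-3: jitter within the pixel
    next(); next();  // dims 4-5: lens sample
    for (int bounce = 0; bounce < maxPathLength; ++bounce) {
        next();          // choose a light source
        next(); next();  // sample the chosen light source
        next(); next();  // sample the outgoing direction
        next();          // choose a BSDF component
        next();          // Russian roulette
    }
    return dim;  // 6 fixed dimensions + 7 per path vertex
}
```

Each new pixel sample then restarts at dimension 0 with the next Sobol index, which is step #1 above.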
As I said, the 1-thread scenario is simple, as you just jump along the sequence, incrementing the Sobol sample index for every new pixel sample. My biggest problem was doing it across many threads. I tried scrambling (using a unique scramble value for every thread), CP rotation, and a global/shared index counter (for #1 in my description above). I found that scrambling reduced convergence somewhat and also produced objectionable correlation patterns; CP rotation gave fewer correlation patterns but they were still objectionable. Using a global (shared by all threads) Sobol index counter gave me the nicest result and also the best convergence. I know this is a lousy way to do it, but speed was not my objective. Gruenschloss published a paper about exactly this, http://gruenschloss.org/parqmc/parqmc.pdf, but to be honest the math (section 3.3) is a bit too dense for me, so it'll be a while before I'll be able to implement it.
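For reference, the first two Sobol dimensions need only a handful of direction numbers (dimension 0 is the van der Corput sequence; dimension 1 follows the degree-1 recurrence from the Joe & Kuo tables), and the shared-index idea described above amounts to an atomic counter. A minimal sketch with hypothetical names; the full 21201-dimension table is of course elided:

```cpp
#include <atomic>
#include <cstdint>

// Direction numbers (MSB-first, 32-bit) for the first two Sobol dimensions:
// dim 0 is the van der Corput sequence; dim 1 uses the s=1, a=0, m1=1
// recurrence v_j = v_{j-1} ^ (v_{j-1} >> 1) from the Joe & Kuo tables.
static uint32_t directionNumber(int dim, int j) {
    if (dim == 0) return 1u << (31 - j);
    uint32_t v = 1u << 31;
    for (int k = 0; k < j; ++k) v ^= v >> 1;
    return v;
}

// One coordinate x_i: XOR the direction numbers selected by the set
// bits of the sample index, then map the 32-bit result to [0,1).
double sobolSample(uint64_t index, int dim) {
    uint32_t x = 0;
    for (int j = 0; index; index >>= 1, ++j)
        if (index & 1) x ^= directionNumber(dim, j);
    return x * (1.0 / 4294967296.0);  // 2^-32
}

// Shared sample-index counter, as in the "global Sobol index" approach:
// each thread grabs a fresh index, so together they walk one sequence.
std::atomic<uint64_t> g_sobolIndex{0};
uint64_t nextSobolIndex() { return g_sobolIndex.fetch_add(1); }
```

Recomputing the recurrence per call is deliberately naive; a real sampler would precompute the direction-number table once per dimension.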

 Posts: 86
 Joined: Thu Apr 11, 2013 5:15 pm
Re: Low-discrepancy samplers
Thammuz wrote: 2. (sample the image plane) dimension 0 & dimension 1 for generating the pixel index
(basically: pixelX = width * dim[0]; pixelY = height * dim[1])
3. dimension 2 & dimension 3 for jittering the sample within the pixel
Why do you need two steps for this? Why not
float fPixelX = width * dim[0]; pixelX = int(fPixelX); jitterX = fPixelX - pixelX;?
Seeing as how the image plane is a continuous domain and all. Granted, the model falls down a bit if you're not using a box filter, but jitterX would still be a "random" number in [0,1) that could be used for any other 2D filter.
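The suggestion fits in a few lines: the integer part of width * dim selects the pixel, and the fractional part is itself uniform in [0,1), usable as the filter sample. Names here are illustrative, not from either poster's renderer:

```cpp
// Derive pixel index and in-pixel jitter from a single dimension pair:
// the integer part picks the pixel, the fractional part is the jitter.
struct PixelSample { int px, py; float jx, jy; };

PixelSample mapToPixel(float d0, float d1, int width, int height) {
    float fx = width * d0, fy = height * d1;
    int px = (int)fx, py = (int)fy;
    return { px, py, fx - px, fy - py };
}
```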
Re: Low-discrepancy samplers
You're absolutely right, of course. This was exactly what I did up until recently (although I never separated the position and the jitter, I just generated floating-point coordinates with the first pair of dimensions). I guess it's a matter of taste, as the correlation patterns were a bit finer grained and gave a more uniform impression when I generated the jitter explicitly with a new pair of dimensions. I liked what I saw and I stuck with it.