It's a good read, but I think there have been improvements in the basic approach since it was published. Any pointers to those improvements?
Measure the convergence speed
Re: Measure the convergence speed
Tristan wrote: It's a good read, but I think there have been improvements in the basic approach since it was published. Any pointers to those improvements?

http://www.cgg.unibe.ch/publications/20 ... nimization
http://www.cgg.unibe.ch/publications/20 ... filtering
http://www.cmlab.csie.ntu.edu.tw/project/sbf/
http://www.ece.ucsb.edu/~psen/Papers/EG ... oising.pdf
Same basic algorithm in each: render a noisy image, apply some kind of denoising filter, estimate the per-pixel error, drive more samples to reduce the error; rinse, lather, repeat.
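That loop can be sketched end-to-end in a few lines. Everything below is a toy stand-in invented for the demo (a gradient "scene", Gaussian pixel noise, a crude box-filter "denoiser"); it shows the structure shared by those papers, not any particular paper's filter or error metric:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "scene": a horizontal gradient we pretend is the converged image.
GROUND_TRUTH = np.linspace(0.0, 1.0, 64).reshape(1, 64).repeat(64, axis=0)

def render_pass(sample_map):
    """Toy renderer: per-pixel sum of `sample_map[y,x]` noisy samples."""
    h, w = sample_map.shape
    out = np.zeros((h, w))
    for n in np.unique(sample_map):
        mask = sample_map == n
        noisy = GROUND_TRUTH * n + rng.normal(0.0, 0.3, (int(n), h, w)).sum(axis=0)
        out[mask] = noisy[mask]
    return out

def box_denoise(img, r=2):
    """Crude box-filter denoiser (stand-in for a real filter)."""
    h, w = img.shape
    padded = np.pad(img, r, mode='edge')
    acc = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += padded[r + dy:r + dy + h, r + dx:r + dx + w]
    return acc / (2 * r + 1) ** 2

def adaptive_render(passes=6, base_spp=4):
    h, w = GROUND_TRUTH.shape
    accum = np.zeros((h, w))
    spp = np.zeros((h, w), dtype=int)
    sample_map = np.full((h, w), base_spp, dtype=int)  # first pass: uniform
    for _ in range(passes):
        accum += render_pass(sample_map)               # render noisy samples
        spp += sample_map
        image = accum / np.maximum(spp, 1)
        # Per-pixel error estimate: distance to the filtered image.
        error = np.abs(image - box_denoise(image))
        # Drive the next pass's samples toward high-error pixels.
        weights = error / max(error.sum(), 1e-12)
        budget = h * w * base_spp
        sample_map = np.clip((weights * budget).astype(int), 1, 16 * base_spp)
    return accum / np.maximum(spp, 1)

final = adaptive_render()
```

The clip on `sample_map` just keeps any single pixel from eating the whole budget in one pass.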
Re: Measure the convergence speed
Thanks for the pointers to those documents. They're now on my to-read list.
Re: Measure the convergence speed
Recently, I did some work on this topic, based on some of the papers listed in this thread. I'm very happy with the results. You can find a description of the work here: http://www.luxrender.net/forum/viewtopi ... =8&t=10955
And a demo video here: https://www.youtube.com/watch?v=P_QmdpnKTW4
Re: Measure the convergence speed
Hmm, the open source renderers are catching up. Not bad.
A+
Re: Measure the convergence speed
Dade wrote: Recently, I did some work on this topic, based on some of the papers listed in this thread. I'm very happy with the results.

Following your post, I also implemented the algorithm outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue arises when a firefly appears in an even pass: the firefly is then present in both AllSampledImage and OnlyEvenSampledImage, so it is not detected as variance. To get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance instead. This works much better, though I then have to average the Odd and Even buffers to get the final render result.
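As a hedged illustration of the half-buffer scheme described above (the sample distribution and firefly rate below are made up for the demo, and the buffer names mirror the post, not any renderer's API):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pass(shape):
    """Toy per-pixel sample: mostly near 0.5, with rare bright 'fireflies'."""
    s = rng.normal(0.5, 0.05, size=shape)
    fireflies = rng.random(shape) < 0.001
    return np.where(fireflies, 50.0, s)

shape = (32, 32)
n_passes = 16
odd = np.zeros(shape)    # accumulates odd-numbered passes
even = np.zeros(shape)   # accumulates even-numbered passes
for i in range(n_passes):
    img = sample_pass(shape)
    if i % 2 == 0:
        even += img
    else:
        odd += img
odd /= n_passes // 2
even /= n_passes // 2

# Variance estimate from the two half buffers, as in the post:
variance = (odd - even) ** 2

# The final image must average both buffers, since each holds only half
# the samples.
final = 0.5 * (odd + even)
```

A firefly that lands in only one half buffer produces a large `(odd - even)^2`, which is exactly the detection behaviour the post relies on.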

Re: Measure the convergence speed
ypoissant wrote: Following your post, I also implemented the algorithm as outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue I had is when a firefly appears in an even pass: then this firefly is present in both AllSampledImage and OnlyEvenSampledImage and is not detected as variance. To get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance. This works much better. I have to average both Odd and Even buffers to get the final render result though.

The two are equivalent, other than a constant scale factor (All = Odd/2 + Even/2 => All - Even = Odd/2 - Even/2). If the firefly comes from a single even-numbered sample, then it should be twice as bright in the even image, since that image has half as many total samples. Perhaps you were taking the difference after tone mapping?
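That identity is easy to sanity-check numerically: since All - Even = (Odd - Even)/2, the two squared differences differ only by a constant factor of 4.

```python
import numpy as np

rng = np.random.default_rng(2)
odd = rng.random((8, 8))
even = rng.random((8, 8))
all_img = 0.5 * (odd + even)        # All = Odd/2 + Even/2

# (All - Even)^2 versus (Odd - Even)^2 / 4: identical up to float noise.
lhs = (all_img - even) ** 2
rhs = ((odd - even) ** 2) / 4.0
assert np.allclose(lhs, rhs)
```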
Re: Measure the convergence speed
friedlinguini wrote: The two are equivalent, other than a constant scale factor (All = Odd/2 + Even/2 => All - Even = Odd/2 - Even/2). If the firefly comes from a single even-numbered sample, then it should be twice as bright in the even image, since there are half as many total samples. Perhaps you were taking the difference after tone mapping?

Yes, the difference is taken after tone mapping. That is what the authors of the "... Lightweight Local Error Estimation" article do, because the error is computed in a perceptual color space, so to speak. This seemed to make sense: pixels that saturate to white don't need to be refined, and pixels that the tone mapping compresses toward white reach their low-variance state more quickly.
Now, even when using (Odd - Even)^2, there are rare situations where fireflies happen in the same pixel in both the odd and even buffers, and then they don't get refined. So computing the variance after tone mapping is probably not such a good idea after all. Even one firefly is one too many.
Re: Measure the convergence speed
DISCLAIMER: this is not a "look at my marvelous publications" post; I'm not in public research anymore, so my impact factor is something I don't care about at all. It's just that the two publications I list are simple yet effective and robust solutions (I do not like non-robust algorithms).
I tried quite a few approaches to adaptive sampling, and honestly the simplest method I could think of gave the best results, for a simple reason: for a correct variance estimation, you have to make the variance of the "variance estimation" itself decrease, and it seems that most methods do not ensure this. The result is a method that is perhaps not optimal sample-wise, but is guaranteed to converge and was highly robust in all my tests: interleave uniform and adaptive sampling passes. This way the variance estimate is guaranteed to have its own variance go to zero, and thus the estimated error is guaranteed to be correct, except in highly pathological cases where no sample ever reaches the "create-a-firefly" case.
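A minimal sketch of the interleaving idea, with a toy heteroscedastic scene standing in for a real renderer (all names, noise levels, and budgets below are invented for the demo; the point is only the alternation of uniform and adaptive passes):

```python
import numpy as np

rng = np.random.default_rng(3)
H, W = 32, 32
truth = np.linspace(0.0, 1.0, W).reshape(1, W).repeat(H, axis=0)
sigma_map = 0.1 + 0.5 * truth        # noise grows toward the right edge

def noisy_samples(counts):
    """Toy renderer: returns per-pixel (sum, sum of squares) of samples."""
    out = np.zeros((H, W))
    out2 = np.zeros((H, W))
    for n in np.unique(counts):
        mask = counts == n
        s = truth + rng.normal(0.0, sigma_map, (int(n), H, W))
        out[mask] = s.sum(axis=0)[mask]
        out2[mask] = (s ** 2).sum(axis=0)[mask]
    return out, out2

accum = np.zeros((H, W))
accum2 = np.zeros((H, W))
spp = np.zeros((H, W), dtype=int)
var_est = np.ones((H, W))
base = 4

for i in range(8):
    if i % 2 == 0:
        # Uniform pass: keeps the variance estimate itself converging
        # everywhere, which is the whole point of the interleaving.
        counts = np.full((H, W), base, dtype=int)
    else:
        # Adaptive pass: spend the budget where estimated error is high.
        w = var_est / max(var_est.sum(), 1e-12)
        counts = np.clip((w * H * W * base).astype(int), 1, 8 * base)
    s, s2 = noisy_samples(counts)
    accum += s
    accum2 += s2
    spp += counts
    img = accum / spp
    # Empirical variance of the per-pixel mean estimator.
    var_est = np.maximum(accum2 / spp - img ** 2, 0) / spp

final = accum / spp
```

With this setup the noisy right side of the image ends up with many more samples per pixel than the quiet left side, while the uniform passes guarantee every pixel keeps improving its own variance estimate.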
It's all in this poster: https://www.researchgate.net/publicatio ... ev=prf_pub
For fireflies, I tried many things as well, from image-space to sample-space. As long as the number of fireflies in the image is not tremendous, I found that robust image-based methods could give surprisingly good results. All the image-based methods I knew of introduced blur, so I made a simple image-based technique which has the great advantage of introducing neither blur nor obvious artifacts, based on detection and "smart" reconstruction of the detected fireflies in HDR images (NOT bilateral filtering, which relies on non-robust statistics). Although I had a hard time admitting it because I don't like image-based methods (bias, yuck...), it gave impressive results on my test scenes. All the details are in this two-page paper: https://www.researchgate.net/publicatio ... ev=prf_pub; just take a look at the top row of images.
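This is not the paper's reconstruction method, but a minimal illustration of the blur-free principle: flag pixels that sit far above a robust local statistic (median/MAD of the neighborhood) and rewrite only those pixels, leaving everything else bit-identical:

```python
import numpy as np

def despeckle_fireflies(img, k=10.0):
    """Replace pixels far above the robust local level with the local
    median. Non-flagged pixels are untouched, so no blur is introduced.
    (A simple stand-in, not the exact method of the linked paper.)"""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    # 3x3 neighborhoods, excluding the center pixel itself.
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)
                      if not (dy == 1 and dx == 1)], axis=0)
    med = np.median(neigh, axis=0)
    mad = np.median(np.abs(neigh - med), axis=0)   # robust spread estimate
    outlier = img > med + k * (mad + 1e-6)         # detection
    out = img.copy()
    out[outlier] = med[outlier]                    # reconstruction
    return out

img = np.full((16, 16), 0.5)
img[5, 7] = 100.0                 # inject one firefly
clean = despeckle_fireflies(img)
```

Because the median and MAD ignore a single extreme neighbor, pixels adjacent to the firefly are not flagged, only the firefly itself is rewritten.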
Re: Measure the convergence speed
Dade wrote: Recently, I did some work on this topic, based on some of the papers listed in this thread. I'm very happy with the results.

ypoissant wrote: Following your post, I also implemented the algorithm as outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue I had is when a firefly appears in an even pass: then this firefly is present in both AllSampledImage and OnlyEvenSampledImage and is not detected as variance. To get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance. This works much better. I have to average both Odd and Even buffers to get the final render result though.

Are you using the average of all estimated pixel variances in the tile (as in the papers) or the max? I'm using the max; it gives better results for me. I'd rather be "consistent and robust" than "optimal and sometimes wrong".
The idea is that the estimated variance can be wrong for one pixel, but being wrong for all 32x32 pixels at once is practically impossible.
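The max-versus-mean distinction per 32x32 tile is easy to demonstrate (the threshold and variance values below are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(4)
variance = rng.random((128, 128)) * 0.01   # mostly converged pixels
variance[40, 90] = 5.0                     # one badly converged pixel

tile = 32
ty, tx = variance.shape[0] // tile, variance.shape[1] // tile
per_tile = variance.reshape(ty, tile, tx, tile)

tile_mean = per_tile.mean(axis=(1, 3))
tile_max = per_tile.max(axis=(1, 3))

# The mean dilutes a single bad pixel across 32*32 = 1024 pixels, so the
# tile looks converged; the max keeps it flagged for more samples.
threshold = 0.02
done_by_mean = tile_mean < threshold   # wrongly marks the bad tile done
done_by_max = tile_max < threshold     # keeps the bad tile open
```

Here the tile containing the bad pixel passes the mean test but fails the max test, which is exactly the "consistent and robust" behaviour described above.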
P.S. thanks Tarlack, going to read them.