Measure the convergence speed

Practical and theoretical implementation discussion.
Tristan
Posts: 10
Joined: Fri Jun 29, 2012 1:27 pm

Re: Measure the convergence speed

Post by Tristan » Fri Mar 07, 2014 3:33 pm

It's a good read, but I think there have been improvements in the basic approach since it was published.
Any pointers to those improvements? :)

friedlinguini
Posts: 89
Joined: Thu Apr 11, 2013 5:15 pm

Re: Measure the convergence speed

Post by friedlinguini » Fri Mar 07, 2014 4:40 pm

Tristan wrote:
It's a good read, but I think there have been improvements in the basic approach since it was published.
Any pointers to those improvements? :)
http://www.cgg.unibe.ch/publications/20 ... nimization
http://www.cgg.unibe.ch/publications/20 ... -filtering
http://www.cmlab.csie.ntu.edu.tw/project/sbf/
http://www.ece.ucsb.edu/~psen/Papers/EG ... oising.pdf

Same basic algorithm in each--render a noisy image, apply some kind of denoising filter, estimate the per-pixel error, drive more samples to reduce the error, rinse, lather, repeat.
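That loop can be sketched in a few lines of numpy. This is a toy illustration only: a Gaussian noise model stands in for the renderer and a box blur stands in for the papers' (much better) denoising filters; `adaptive_loop` and its parameters are made up for the sketch.

```python
import numpy as np

def box_blur(img, r=1):
    """Stand-in 'denoiser': mean over a (2r+1)^2 window, edge-padded."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def adaptive_loop(truth, sigma=0.5, passes=6, budget=None, seed=0):
    """Render a noisy image, 'denoise' it, use (noisy - denoised)^2 as a
    per-pixel error estimate, and give the next pass's samples to the
    pixels with the largest estimated error."""
    rng = np.random.default_rng(seed)
    h, w = truth.shape
    budget = budget or 4 * h * w                    # samples per pass
    sums = truth + rng.normal(0.0, sigma, (h, w))   # one bootstrap sample
    counts = np.ones((h, w))
    for _ in range(passes):
        noisy = sums / counts
        err = (noisy - box_blur(noisy)) ** 2
        n = np.maximum(1, np.round(budget * err / err.sum())).astype(int)
        # simulate n independent samples per pixel in one draw:
        # their sum is n*truth + Normal(0, sigma*sqrt(n))
        sums += n * truth + rng.normal(0.0, sigma, (h, w)) * np.sqrt(n)
        counts += n
    return sums / counts, counts
```

On an image with a hard edge, the sample counts pile up around the edge (where the blur-based error estimate stays high) while flat regions converge with few samples.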

ypoissant
Posts: 97
Joined: Wed Nov 30, 2011 12:44 pm

Re: Measure the convergence speed

Post by ypoissant » Fri Mar 07, 2014 11:03 pm

Thanks for the pointers to those documents. Now on my to-read list.

Dade
Posts: 206
Joined: Fri Dec 02, 2011 8:00 am

Re: Measure the convergence speed

Post by Dade » Thu Apr 03, 2014 6:57 pm

Recently, I did some work on this topic, based on some of the papers listed in this thread, and I'm very happy with the results. You can find a description of the work here: http://www.luxrender.net/forum/viewtopi ... =8&t=10955

And a demo video here: https://www.youtube.com/watch?v=P_QmdpnKTW4

mpeterson
Posts: 59
Joined: Fri Jan 06, 2012 3:09 pm

Re: Measure the convergence speed

Post by mpeterson » Fri Apr 04, 2014 12:07 pm

Hmm, open-source renderers are catching up. Not bad.

A+

ypoissant
Posts: 97
Joined: Wed Nov 30, 2011 12:44 pm

Re: Measure the convergence speed

Post by ypoissant » Sun Apr 06, 2014 8:53 pm

Dade wrote:Recently, I did some work on this topic, based on some of the papers listed in this thread, and I'm very happy with the results.
Following your post, I also implemented the algorithm as outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue I had is that when a firefly appears in an even pass, it is present in both AllSampledImage and OnlyEvenSampledImage and so is not detected as variance. To get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance. This works much better, though I have to average both the Odd and Even buffers to get the final render result.
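A tiny numpy sketch of this two-buffer scheme (buffer names and the helper are illustrative, not from the paper; the "firefly" below is a single huge sample injected into the even half only):

```python
import numpy as np

def split_buffer_error(odd_sum, odd_n, even_sum, even_n):
    """Per-pixel error estimate from two independently accumulated
    half buffers, (Odd - Even)^2, with the displayed image taken as
    the average of the two halves."""
    odd = odd_sum / np.maximum(odd_n, 1)
    even = even_sum / np.maximum(even_n, 1)
    return (odd - even) ** 2, 0.5 * (odd + even)

# Two half buffers with 8 samples per pixel, all black, plus one
# firefly sample of radiance 80 landing in the even half only:
odd_sum = np.zeros((4, 4)); odd_n = np.full((4, 4), 8)
even_sum = np.zeros((4, 4)); even_n = np.full((4, 4), 8)
even_sum[2, 2] = 80.0
err, final = split_buffer_error(odd_sum, odd_n, even_sum, even_n)
```

A firefly landing in only one half shows up as a large (Odd - Even)^2 at that pixel, so the sampler keeps refining it.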

friedlinguini
Posts: 89
Joined: Thu Apr 11, 2013 5:15 pm

Re: Measure the convergence speed

Post by friedlinguini » Sun Apr 06, 2014 10:30 pm

ypoissant wrote:Following your post, I also implemented the algorithm as outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue I had is that when a firefly appears in an even pass, it is present in both AllSampledImage and OnlyEvenSampledImage and so is not detected as variance. To get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance. This works much better, though I have to average both the Odd and Even buffers to get the final render result.
The two are equivalent, other than a constant scale factor (All = Odd/2 + Even/2 => All - Even = Odd/2 - Even/2). If the firefly comes from a single even-numbered sample, then it should be twice as bright in the even image, since there are half as many total samples. Perhaps you were taking the difference after tone mapping?
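A quick numpy check of this identity (assuming equal sample counts in the two halves), plus a demonstration that taking the difference after a non-linear tone map, a simple Reinhard-style x/(1+x) here, breaks the constant scale factor:

```python
import numpy as np

rng = np.random.default_rng(1)
odd = rng.random(1000) * 4.0     # per-pixel means of the odd-pass samples
even = rng.random(1000) * 4.0    # per-pixel means of the even-pass samples
all_img = 0.5 * (odd + even)     # combined buffer (equal sample counts)

# In linear radiance space: (All - Even)^2 == ((Odd - Even)/2)^2,
# so the two estimators differ only by a constant factor of 4.
assert np.allclose((all_img - even) ** 2, (odd - even) ** 2 / 4.0)

# After a non-linear tone map the identity no longer holds:
tm = lambda x: x / (1.0 + x)     # Reinhard-style tone mapping
assert not np.allclose((tm(all_img) - tm(even)) ** 2,
                       (tm(odd) - tm(even)) ** 2 / 4.0)
```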

ypoissant
Posts: 97
Joined: Wed Nov 30, 2011 12:44 pm

Re: Measure the convergence speed

Post by ypoissant » Mon Apr 07, 2014 3:04 am

friedlinguini wrote:The two are equivalent, other than a constant scale factor (All = Odd/2 + Even/2 => All - Even = Odd/2 - Even/2). If the firefly comes from a single even-numbered sample, then it should be twice as bright in the even image, since there are half as many total samples. Perhaps you were taking the difference after tone mapping?
Yes. Difference after tone mapping. That is what the authors of the "... Lightweight Local Error Estimation" article do, because the error is computed in the perceptual color space, so to speak. This seemed to make sense: pixels that saturate to white don't need to be refined, and pixels that the tone mapping compresses toward white reach a low-variance state more quickly.

Now, even when using (Odd - Even)^2, there are rare situations where fireflies land on the same pixel in both the odd and even buffers, and then that pixel doesn't get refined. So the idea of computing the variance after tone mapping is probably not such a good one after all. Even one undetected firefly is one too many.

tarlack
Posts: 27
Joined: Mon Feb 10, 2014 7:48 am

Re: Measure the convergence speed

Post by tarlack » Mon Apr 07, 2014 7:30 am

DISCLAIMER: this is not a "look at my marvelous publications" post; I'm not in public research anymore, so my impact factor is something I don't care about at all :mrgreen: It's just that the two publications I link are simple yet effective and robust solutions (I do not like non-robust algorithms).

I tried quite a few approaches to adaptive sampling, and honestly the simplest method I could think of gave the best results, for a simple reason: for a correct variance estimation, you have to make the variance of the variance estimate itself decrease, and it seems that most methods do not ensure this. The result is a method that is perhaps not optimal sample-wise but is guaranteed to converge, and it was highly robust in all my tests: interleave uniform and adaptive sampling passes. This way the variance estimate is guaranteed to have its own variance go to zero, so the estimated error is eventually correct, except in highly pathological cases where no sample ever hits the "create-a-firefly" path.
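The interleaving idea can be sketched like this. Again a toy model, not the poster's actual implementation: a per-pixel noise map stands in for the integrand's variance, and the function and parameter names are invented for the sketch. The key property is that the uniform passes make every pixel's sample count grow without bound, so the variance estimate driving the adaptive passes is itself consistent.

```python
import numpy as np

def interleaved_sampling(truth, sigma, passes=10, per_pass=4, seed=0):
    """Alternate uniform and adaptive passes; uniform passes guarantee
    every pixel's count grows, so the error map converges too."""
    rng = np.random.default_rng(seed)
    h, w = truth.shape
    sums = np.zeros((h, w))
    sqsums = np.zeros((h, w))
    counts = np.zeros((h, w), dtype=int)

    def draw(n):
        """Simulate n[x, y] new samples per pixel (sigma is a per-pixel
        noise level standing in for the integrand's variance)."""
        n = n.copy()
        while n.max() > 0:
            live = n > 0
            s = truth[live] + rng.normal(0.0, sigma[live])
            sums[live] += s
            sqsums[live] += s * s
            counts[live] += 1
            n[live] -= 1

    for p in range(passes):
        if p % 2 == 0:                              # uniform pass
            draw(np.full((h, w), per_pass))
        else:                                       # adaptive pass
            mean = sums / counts
            var = np.maximum(sqsums / counts - mean ** 2, 0.0)
            err = var / counts                      # variance of the mean
            n = np.round(h * w * per_pass * err / max(err.sum(), 1e-12))
            draw(n.astype(int))
    return sums / counts, counts
```

On an image where one half is much noisier than the other, the adaptive passes send nearly all their budget to the noisy half, while the uniform passes keep a floor under every pixel's count.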

All is in this poster : https://www.researchgate.net/publicatio ... ev=prf_pub

For fireflies, I tried many things as well, from image-space to sample-space. As long as the number of fireflies in the image is not tremendous, I found that robust image-based methods could give surprisingly good results. All the image-based methods I knew of introduced blur, so I made a simple image-based technique whose great advantage is that it introduces neither blur nor obvious artifacts, based on detection and """smart""" reconstruction of the detected fireflies in HDR images (NOT bilateral filtering, which relies on non-robust statistics). Although I had a hard time admitting it because I don't like image-based methods (bias, yuck...), it gave impressive results on my test scenes. All the details are in this two-page paper: https://www.researchgate.net/publicatio ... ev=prf_pub, just take a look at the top row of images.

Dade
Posts: 206
Joined: Fri Dec 02, 2011 8:00 am

Re: Measure the convergence speed

Post by Dade » Mon Apr 07, 2014 8:20 am

ypoissant wrote:
Dade wrote:Recently, I did some work on this topic, based on some of the papers listed in this thread, and I'm very happy with the results.
Following your post, I also implemented the algorithm as outlined in "Progressive Path Tracing with Lightweight Local Error Estimation". However, I found that using (AllSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance didn't work too well. The issue I had is that when a firefly appears in an even pass, it is present in both AllSampledImage and OnlyEvenSampledImage and so is not detected as variance. To get good results with fireflies, I had to use (OnlyOddSampledImage(x,y) - OnlyEvenSampledImage(x,y))^2 to compute the variance. This works much better, though I have to average both the Odd and Even buffers to get the final render result.
Are you using the average of all estimated pixel variances in the tile (like in the papers), or the max? I'm using the max; it gives better results for me. I prefer to be "consistent and robust" rather than "optimal and sometimes wrong".

The idea is that the estimated variance can be wrong for one pixel, but being wrong for all 32x32 pixels of a tile is practically impossible.
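The max-vs-mean tile reduction is a one-liner in numpy. A hedged sketch (the helper name and 32x32 tiling are just for illustration): with max, a tile is only considered converged when its worst pixel is, which trades some efficiency for robustness.

```python
import numpy as np

def tile_error(err, tile=32, mode="max"):
    """Reduce a per-pixel error map to one value per tile,
    using either the max (conservative) or the mean (as in the papers)."""
    h, w = err.shape
    assert h % tile == 0 and w % tile == 0
    t = err.reshape(h // tile, tile, w // tile, tile)
    if mode == "max":
        return t.max(axis=(1, 3))
    return t.mean(axis=(1, 3))

# A single bad pixel in an otherwise converged 32x32 tile:
err = np.zeros((64, 64))
err[5, 5] = 1.0
```

With the mean, the lone bad pixel is diluted by a factor of 1024 and the tile looks converged; with the max, the tile keeps getting samples until that pixel is fixed.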

P.S. thanks Tarlack, going to read them.

Post Reply