We recently published a new method for Monte-Carlo-based numerical integration, named "Globally Adaptive Control Variate" (GACV). It appeared in SISC (the SIAM Journal on Scientific Computing). It is a hybrid between cubature rules and Monte-Carlo integration. As this journal may not be well known in the computer graphics community, I am pointing to the preprint directly here.

https://www.researchgate.net/publication/264719192_Globally_Adaptive_Control_Variate_for_Robust_Numerical_Integration?ev=prf_pub

Its strong points are:

- empirically, its runtime is linear with respect to the target standard deviation (instead of quadratic for standard MC).

- it is numerically robust and accurate, fast, and has a low memory footprint, with highly stable memory usage and computation time. We show in the paper that this is not the case for existing state-of-the-art methods.

- C++ code is freely available for direct use! http://www.irit.fr/~Loic.Barthe/transfer.php#Free . You just have to write a simple functor evaluating the function at any point of the definition domain, call a function with this functor, and you're done. It's all templates, so no specific compilation process is needed, and an example is provided.

- can be optimally combined with importance sampling.

- simple (no need for aspirin to read and understand the paper, no complex equations).

Its weak points are:

- like all MC methods, it is not well suited to highly oscillatory, non-positive functions with means close to zero (we tried it for electromagnetic scattering computations, and it behaved really badly).

- in practice, it is limited to at most 8 dimensions.

Its possible extensions are:

- handling correlated integrals, to apply it directly to image rendering with 1 or 2 bounces, depth of field, or motion blur.

- extending it to an arbitrary number of dimensions, or letting some of the dimensions be treated purely stochastically. The main problem is how to compute an accurate error estimate in this case.

Here is the abstract:

Many methods in computer graphics require the integration of functions on low-to-middle-dimensional spaces. However, no available method can handle all the possible integrands accurately and rapidly. This paper presents a robust numerical integration method, able to handle arbitrary non-singular scalar or vector-valued functions defined on low-to-middle-dimensional spaces. Our method combines control variate, globally adaptive subdivision and Monte-Carlo estimation to achieve fast and accurate computations of any non-singular integral. The runtime is linear with respect to standard deviation while standard Monte-Carlo methods are quadratic. We additionally show through numerical tests that our method is extremely stable from a computation time and memory footprint point-of-view, assessing its robustness. We demonstrate our method on a participating media voxelization application, which requires the computation of several millions integrals for complex media.

Any comments welcome, I would be glad to discuss it with you!

Statistics: Posted by tarlack — Tue Aug 19, 2014 6:00 pm

Having said that, spectral rendering comes with its own bunch of issues. First of all, performance suffers. Depending on how accurate you want to be, this might turn out not too bad if you decide not to consider dispersion and the like. We usually see 2x-3x lower performance with spectral rendering at 40 wavelengths compared to RGB, so it is not unusable, but the performance impact is there. The bigger issue is: where do you get the spectral information from? Getting light spectra is usually not too difficult, as many manufacturers provide them. Material spectra are much harder to get, so most of the time you end up using RGB for them anyway. There also is not a single spectral texture format publicly available.

So if you are interested in just creating pretty images, I would say forget about spectral raytracing; there is not much to be gained from it. But if you need accurate results, there is no way around it.

If you want to do some comparisons yourself, get a demo version of Autodesk VRED Pro, it allows you to switch between RGB and spectral raytracing instantly so you can compare it directly.

Statistics: Posted by Serendipity — Tue Aug 19, 2014 3:37 pm

tomasdavid wrote:

For the extra spectral noise you definitely want to check out Alex Wilkie's EGSR 2014 paper on Hero Wavelength...

Thanks, a very interesting read.

Statistics: Posted by Dade — Sun Aug 17, 2014 8:20 pm

It contains a good MIS implementation, plus some good integrators.

Statistics: Posted by MohamedSakr — Sun Aug 17, 2014 1:51 am

Certainly spectral rendering is going to be expensive, which is why I'm interested in doing some comparisons to see if it actually makes a difference for "normal" scenes, rather than specially constructed ones designed to show the difference, especially where the input is not spectral (i.e. normal painted textures, user-selected colours, etc.).

Statistics: Posted by andersll — Sat Aug 16, 2014 2:38 pm

For other stuff: skin can be more realistic, and you can probably do better hair (both have spectral models more or less readily available), but I don't think there is any rigorous perceptual study. Mostly because both still look different enough from the actual thing that it would be tough to find the right question to ask.

As for "usual" materials (rocks, stones, wood), it might be tricky to get spectral data, and the difference in "realism" probably won't be more than could be ascribed to natural variations in the given material. It could be interesting to try to match a larger object, but you'd have to actually acquire spectral textures for the object, as artists won't be painting spectral in Photoshop anytime soon.

Statistics: Posted by tomasdavid — Sat Aug 16, 2014 2:51 am

Statistics: Posted by MohamedSakr — Fri Aug 15, 2014 7:07 am

thanks a lot Dade

Statistics: Posted by MohamedSakr — Thu Aug 14, 2014 8:12 am

MohamedSakr wrote:

what about camera parameters? like ISO

If you mean tone mapping (like film sensitivity), it is done after the merging, so it is not a problem. It is not even part of the rendering process (for instance, save to .exr and then apply any kind of tone mapping you want to the file).

Statistics: Posted by Dade — Thu Aug 14, 2014 7:17 am

http://www.ci.i.u-tokyo.ac.jp/~hachisuka/misc.pdf

Statistics: Posted by MohamedSakr — Thu Aug 14, 2014 4:23 am

Geometrian wrote:

--If the sensor and light are the same distance from the lens, the results agree

--If the light is moved closer, it appears darker in the light tracer render

--If the light is moved farther, it appears brighter in the light tracer render

So in cases 2 and 3 the camera path tracer gives the same brightness, while the light tracer gives darker/brighter results.

This makes sense, assuming you are doing random sampling: if the light is closer to the sensor, more samples will get missed (only samples that hit the sensor are counted), which means a darker result; and if it is farther away, more samples will hit the sensor, so a brighter result.

Statistics: Posted by MohamedSakr — Thu Aug 14, 2014 3:39 am
