You can find a download link (Windows and Linux, 64-bit, binaries only) on the website, http://personal.inet.fi/muoti/eimuoti/ifu/. A sample cloud is included in the download for quick test rendering.

Note that the program is in alpha, has been tested more thoroughly on Linux than on Windows, and may have a number of bugs etc. In rendering, the clouds tend to be best viewed so that the sun is somewhat behind the camera.

The simulator runs a Navier-Stokes fluid solver and does a bunch of other stuff. Not 100% realistic, but decent. The renderer is a standard path tracer, with the clouds rendered using a rough, non-spectral approximation of Mie scattering. There's currently no built-in tonemapping; renderings can be exported to PFM.

Both the simulator and the renderer are CPU-only and computationally heavy. They run fairly well on a Xeon E3-1230 v3, but not too quickly on an Athlon 64 X2 6000+. The simulator doesn't benefit much from multiple cores, as it's bottlenecked by memory access. The renderer should scale up reasonably well with many cores, though odd things may happen if you have a few hundred.

Statistics: Posted by snwy_ — Mon Sep 01, 2014 10:51 pm


Statistics: Posted by manycores — Sun Aug 24, 2014 7:46 am


@papaboo I remember that Embree has a good hair-tracing algorithm, but I had forgotten why I didn't use it. After reading sriravic's reply I remember now: it's only written for the Xeon Phi.

@sriravic thanks a lot for the reply and the link. I'm interested to know the cost factor of ray-curve vs. ray-triangle intersections.

Also, did you try other acceleration structures, like octrees?

Statistics: Posted by MohamedSakr — Fri Aug 22, 2014 6:07 pm


With regards to hair rendering, using curves directly over tessellation has its own advantages and disadvantages.

1. Reduced memory consumption. Typically curves are modelled as splines or Bezier curves, which are just a handful of control vertices, so we use much less memory compared to tessellating the curves and storing per-triangle vertices. The savings can be very large if the tessellation rate is high (as needed to accurately represent curves with triangles). Using the curve form directly also naturally gives high-quality curves: hair quality is determined only by the ray sampling rate, not by a tessellation rate.
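To make the savings concrete, here's a back-of-the-envelope sketch (all counts hypothetical) comparing storage for one strand kept as a cubic Bezier chain versus tessellated into unindexed ribbon triangles:

```python
# Back-of-the-envelope memory comparison for a single hair strand; the
# segment count and tessellation rate below are made-up illustrative numbers.
FLOAT_BYTES = 4
VEC3_BYTES = 3 * FLOAT_BYTES

segments = 10                                  # cubic Bezier segments per strand
curve_points = 3 * segments + 1                # shared control points, degree 3
curve_bytes = curve_points * VEC3_BYTES

tess_rate = 16                                 # sub-segments per curve segment
triangles = segments * tess_rate * 2           # 2 triangles per ribbon quad
triangle_bytes = triangles * 3 * VEC3_BYTES    # unindexed: 3 vertices/triangle

ratio = triangle_bytes / curve_bytes           # roughly a 30x difference here
```

Indexing the triangle vertices would shrink the gap somewhat, but the curve representation still wins by a wide margin at high tessellation rates.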

2. On the flip side, ray-curve intersection tests are costly compared to triangle tests. You can look at the recent implementation we did in the appleseed renderer (https://github.com/appleseedhq/applesee ... iercurve.h), which supports degree 1, 2, and 3 Bezier curves as of now.
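Subdivision-based curve intersectors are typically built on the de Casteljau split; a minimal sketch of that core operation for a cubic Bezier (not appleseed's actual code):

```python
def bezier3_split(p0, p1, p2, p3):
    """Split a cubic Bezier at t = 0.5 via de Casteljau.
    Points are tuples of coordinates; returns the two half-curves.
    A subdivision intersector recurses on these until the curve
    segment is flat enough to test against the ray directly."""
    mid = lambda a, b: tuple((x + y) * 0.5 for x, y in zip(a, b))
    q0, q1, q2 = mid(p0, p1), mid(p1, p2), mid(p2, p3)
    r0, r1 = mid(q0, q1), mid(q1, q2)
    s = mid(r0, r1)                  # point on the curve at t = 0.5
    return (p0, q0, r0, s), (s, r1, q2, p3)

left, right = bezier3_split((0, 0), (1, 2), (3, 2), (4, 0))
```

Each split is just a handful of lerps, but an intersector may need many of them per ray, which is where the cost relative to a single ray-triangle test comes from.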

3. Using SBVH with curves directly is a bit tricky, though. If you tessellate the curves, it's straightforward to feed the resulting triangles into the SBVH tree and no changes are required, but the tree can become very large due to the large number of triangles (memory issues again). Using curves directly within a plain BVH (not SBVH) is also straightforward, as you use the individual curves as primitives (we've already added BVH support for curves in appleseed and it works very well). But defining a spatial split for curves would be tricky, I guess; we have yet to understand how SBVH would fit with curves. Another issue is that when using curves, you end up with two different hierarchies to traverse (one for triangles and one for curves), which also has to be dealt with. There is a good paper from Intel at this year's HPG on constructing better BVHs for hair using hair similarity, which is integrated in Embree; sadly the available code is only for the Xeon Phi architecture, not for CPUs.
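The two-hierarchy traversal mentioned above amounts to querying both trees and keeping the nearer hit; a hypothetical sketch with the BVH internals stubbed out:

```python
# Hypothetical sketch: the scene intersector queries the triangle BVH and
# the curve BVH independently and returns the closer of the two hits.
class StubBVH:
    """Stand-in for a real BVH; intersect() would normally traverse nodes."""
    def __init__(self, hit):
        self.hit = hit                      # (t, primitive) or None

    def intersect(self, ray):
        return self.hit

def intersect_scene(ray, triangle_bvh, curve_bvh):
    hits = [h for h in (triangle_bvh.intersect(ray),
                        curve_bvh.intersect(ray)) if h is not None]
    return min(hits, key=lambda h: h[0], default=None)

closest = intersect_scene(None, StubBVH((2.0, "tri")), StubBVH((1.5, "hair")))
```

The downside is paying two traversals per ray even when one hierarchy could have been culled early, which is part of why a unified hierarchy is attractive.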

4. For illumination that should be fast, I guess the Kay-Kajiya model should work fine, but the industry standard is the Marschner model. I'll have to look further into it, I suppose.
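For reference, the Kay-Kajiya model shades a strand as a thin cylinder, parameterized only by its tangent; a minimal scalar-intensity sketch (coefficients and exponent are arbitrary placeholders):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kay_kajiya(tangent, light, eye, kd=1.0, ks=0.5, p=32):
    """Kay-Kajiya hair shading: diffuse falls off with the sine of the
    angle between tangent T and light L; the specular term peaks when
    light and eye mirror each other about the strand. All vectors unit."""
    tl, te = dot(tangent, light), dot(tangent, eye)
    sin_tl = math.sqrt(max(0.0, 1.0 - tl * tl))
    sin_te = math.sqrt(max(0.0, 1.0 - te * te))
    diffuse = kd * sin_tl
    specular = ks * max(0.0, tl * te + sin_tl * sin_te) ** p
    return diffuse + specular
```

It is cheap because it needs no azimuthal term; that simplification is exactly what the Marschner model replaces with its R/TT/TRT lobes.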

Cheers.

Statistics: Posted by sriravic — Fri Aug 22, 2014 12:10 pm


I would take a look at Embree. A lot of their 2.x.y release notes are concerned with hair, so clearly they have invested time in solving this issue. You might need to tweak their solution a bit to fit it onto a GPU, but Embree is written for SIMD architectures, so I'm guessing it shouldn't be too much of an effort.

Statistics: Posted by papaboo — Fri Aug 22, 2014 6:37 am


http://research.lighttransport.com/distance-aware-ray-tracing-for-curves/asset/abstract.pdf

Statistics: Posted by joulsoun — Thu Aug 21, 2014 10:32 am


Some questions, though:

1. Which is better for memory usage and speed, storing hairs as splines or as triangles, and how does each suit the GPU?

2. How would it be integrated into an SBVH?

3. Is there a fast method to estimate direct illumination on hair? (One for indirect illumination would be appreciated as well.)

Statistics: Posted by MohamedSakr — Thu Aug 21, 2014 3:01 am


A reflectometer setup for spectral BTF measurement - Lyssi [2009]

http://cg.cs.uni-bonn.de/en/publications/paper-details/lyssi-2009-btfspectral/

It has very intuitive examples comparing RGB and spectral rendering.

In my opinion, spectral rendering is most apparent when you apply measured spectral data to both light sources and materials; it is in the product of those two that interesting things start to happen. In an artist-driven scene, the light source spectra are usually derived from RGB sliders in one way or another, leading to very smooth spectra, where it is much harder to spot actual visual differences between RGB and spectral rendering.

I've been doing quite a lot of experiments with Mitsuba, down to 10 nm and 1 nm precision just for fun, and RGB performs very well for typical non-measured scenes, where it's hard to pinpoint any difference. On the other hand, spectral rendering does not incur such a big cost on render times either, as intersection testing far outweighs the cost of the spectrum calculations.
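A toy illustration of the light-times-material product mentioned above (all spectra made up): a spiky emitter against a reflectance that dips at the same wavelengths. Multiplying per wavelength captures the interaction; multiplying pre-integrated totals, as an RGB-style pipeline effectively does within each band, cannot:

```python
# Toy sketch with hypothetical spectra at 10 nm sampling over 400-700 nm.
wavelengths = list(range(400, 701, 10))

def spiky_emitter(w):
    """Narrow peak near 550 nm, e.g. a fluorescent emission line."""
    return 1.0 if 540 <= w <= 560 else 0.05

def reflectance(w):
    """Material that absorbs exactly where the emitter peaks."""
    return 0.1 if 540 <= w <= 560 else 0.9

# Spectral: multiply per wavelength, then integrate.
spectral_energy = sum(spiky_emitter(w) * reflectance(w) for w in wavelengths)

# RGB-style shortcut: integrate each factor first, then multiply the means.
rgb_energy = (sum(map(spiky_emitter, wavelengths)) *
              sum(map(reflectance, wavelengths))) / len(wavelengths)

print(spectral_energy, rgb_energy)   # the shortcut overestimates noticeably
```

With smooth slider-derived spectra the two numbers come out close, which matches the observation that RGB holds up well on typical non-measured scenes.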

Statistics: Posted by joulsoun — Wed Aug 20, 2014 8:21 am


We recently published a new method for Monte-Carlo-based numerical integration, named "Globally Adaptive Control Variate" (GACV). It has been published in the SISC journal (SIAM Journal on Scientific Computing). It's a hybrid between cubature rules and Monte-Carlo integration. As this journal may not be well known in the computer graphics community, I'm pointing to the preprint directly here.

https://www.researchgate.net/publication/264719192_Globally_Adaptive_Control_Variate_for_Robust_Numerical_Integration?ev=prf_pub

Its strong points are:

- empirically, its runtime is linear with respect to standard deviation (instead of quadratic for standard MC).

- numerically highly robust and accurate, fast, with a low memory footprint, and highly stable in both memory footprint and computation time. We show in the paper that this is not the case for the existing state-of-the-art methods.

- C++ code is freely available for direct use! http://www.irit.fr/~Loic.Barthe/transfer.php#Free. You just have to write a simple functor evaluating the function at any point of the definition domain, call a function with this functor, and you're done. It's mostly templates with no external dependencies; only a few cpp files have to be added to your compilation process, or you can use the provided Makefile to produce a shared library. An example is provided.

- can be optimally combined with importance sampling.

- simple (no need for aspirin to read and understand the paper, no complex equations).

Its weak points are:

- like all MC methods, it is not well suited to highly oscillatory non-positive functions with means close to zero (we tried it for electromagnetic scattering computations, and it behaved really badly).

- In practice, limited to dimensions at most 8.

Its possible extensions are:

- handle correlated integrals, to apply it directly to image rendering with 1 or 2 bounces, depth of field, or motion blur.

- find a way to extend it to an arbitrary number of dimensions, or to let some of the dimensions remain purely stochastic. The main problem is how to compute an accurate error estimate in this case.

Here is the abstract:

Many methods in computer graphics require the integration of functions on low-to-middle-dimensional spaces. However, no available method can handle all the possible integrands accurately and rapidly. This paper presents a robust numerical integration method, able to handle arbitrary non-singular scalar or vector-valued functions defined on low-to-middle-dimensional spaces. Our method combines control variate, globally adaptive subdivision and Monte-Carlo estimation to achieve fast and accurate computations of any non-singular integral. The runtime is linear with respect to standard deviation while standard Monte-Carlo methods are quadratic. We additionally show through numerical tests that our method is extremely stable from a computation time and memory footprint point-of-view, assessing its robustness. We demonstrate our method on a participating media voxelization application, which requires the computation of several millions integrals for complex media.
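For readers unfamiliar with control variates, the basic idea the method builds on can be sketched in its plain textbook form (this is not the paper's globally adaptive scheme): integrate an approximation g analytically and estimate only the residual f - g by Monte Carlo:

```python
import random

def cv_integrate(f, g, g_integral, a, b, n=10000, seed=1):
    """Plain control-variate Monte Carlo on [a, b]:
    integral(f) ~= integral(g) + (b - a) * mean(f(x) - g(x)).
    The closer g tracks f, the lower the variance of the residual,
    and hence of the whole estimate."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = a + (b - a) * rng.random()
        acc += f(x) - g(x)
    return g_integral + (b - a) * acc / n

# Example: integrate f(x) = x^2 on [0, 1] (exact value 1/3),
# using g(x) = x with known integral 1/2 as the control variate.
est = cv_integrate(lambda x: x * x, lambda x: x, 0.5, 0.0, 1.0)
```

The paper's contribution, as I read the abstract, is in choosing and refining g adaptively over a global subdivision of the domain rather than fixing it up front.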

Any comments welcome, I would be glad to discuss it with you!

Statistics: Posted by tarlack — Tue Aug 19, 2014 6:00 pm


Having said that, spectral rendering comes with its own bunch of issues. First of all, performance suffers. Depending on how accurate you want to be, this might turn out not too bad if you decide not to consider dispersion and the like. We usually see 2x-3x lower performance with spectral rendering at 40 wavelengths compared to RGB, so it is not unusable, but the performance impact is there. The bigger issue is: where do you get the spectral information from? Getting light spectra is usually not too difficult, as many manufacturers provide them. Material spectra are much harder to get, so most of the time you end up using RGB for them anyway. There also isn't a single spectral texture format publicly available.

So if you are interested in just creating pretty images, I would say just forget about spectral raytracing, there is not much to be gained from it. But if you need to have accurate results there is no way around it.

If you want to do some comparisons yourself, get a demo version of Autodesk VRED Pro, it allows you to switch between RGB and spectral raytracing instantly so you can compare it directly.

Statistics: Posted by Serendipity — Tue Aug 19, 2014 3:37 pm


tomasdavid wrote:

For the extra spectral noise you definitely want to check out Alex Wilkie's EGSR 2014 paper on Hero Wavelength...

Thanks, very interesting reading.
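For context, the paper's core trick is to sample one "hero" wavelength and derive the remaining ones as equidistant rotations over the visible range, so a single path carries several jointly stratified wavelengths; a minimal sketch:

```python
def hero_wavelengths(u, n=4, lo=400.0, hi=700.0):
    """Hero wavelength sampling sketch: map a uniform random number u in
    [0, 1) to a 'hero' wavelength, then place n - 1 more wavelengths at
    equal rotational offsets over the range, wrapping around. All n
    wavelengths share the same path, reducing spectral noise."""
    span = hi - lo
    hero = lo + u * span
    return [lo + ((hero - lo + i * span / n) % span) for i in range(n)]
```

The wavelength count and range here are illustrative defaults, not values from the paper.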

Statistics: Posted by Dade — Sun Aug 17, 2014 8:20 pm


It contains a good MIS implementation, plus some good integrators.

Statistics: Posted by MohamedSakr — Sun Aug 17, 2014 1:51 am


Certainly spectral rendering is going to be expensive, which is why I'm interested in doing some comparisons to see if it actually makes a difference for "normal" scenes, rather than specially constructed ones designed to show the difference, especially where the input is not spectral (i.e. normal painted textures, user-selected colours, etc.).

Statistics: Posted by andersll — Sat Aug 16, 2014 2:38 pm
