Visionaray Ray Tracing Framework
szellmann
Posts: 44
Joined: Fri Oct 10, 2014 9:15 am
Contact:

Visionaray Ray Tracing Framework

Postby szellmann » Sat Mar 21, 2015 12:47 pm

Hi there,

I'd like to take this opportunity to advertise Visionaray, which is (yet another) ray tracing framework.

https://github.com/szellmann/visionaray
https://github.com/szellmann/visionaray/wiki

Visionaray is based on generic programming with C++. In contrast to other frameworks, its main goal is to achieve platform independence: you write "kernels" in C++ and have "schedulers" (CUDA, TBB, C++ threads) execute those kernels on the hardware you desire. Users write their own kernels, though the framework ships with a few predefined ones (primary rays only, Whitted, path tracing, ...).

Visionaray is the preliminary result of a hobby project of mine that I mainly pursue in my spare time, and for a few weeks now partially during my work at the University of Cologne. Visionaray is open source (MIT license). In its current state I don't consider it mature enough yet to be useful in a real project, but we (at this time basically a student of mine and me) are working on it.

Companion projects are concerned with ray tracing in VR (a plugin for the VR renderer OpenCOVER (https://github.com/hlrs-vis/covise) comes with Visionaray) and ray tracing on FPGAs (we're working on a Xilinx Vivado HLS scheduler and porting code to fixed point :-) ).

Best,
Stefan


Re: Visionaray Ray Tracing Framework

Postby szellmann » Mon Jun 29, 2015 7:46 pm

I started to write a series of Visionaray tutorials on medium.com:
https://medium.com/tag/visionaray

Check back for updates, there are going to be more tutorials soon!

papaboo
Posts: 36
Joined: Fri Jun 21, 2013 10:02 am
Contact:

Re: Visionaray Ray Tracing Framework

Postby papaboo » Wed Jul 01, 2015 12:07 pm

Sounds really cool.

Slightly off-topic: In the integration with VR do you do any GI effects or is it Whitted only? If you do GI (which I of course hope :) ) can you then share any thoughts on the subject? PT would be the obvious quick and dirty solution, but it produces so much noise that it can't work in all scenarios. Some form of baking would be better, but that can reduce interactivity with the scene.


Re: Visionaray Ray Tracing Framework

Postby szellmann » Wed Jul 01, 2015 1:08 pm

It's so far only Whitted, but we're eager to get GI running in VR eventually.

Noise reduction (MIS) isn't there yet, but it's up on the TODO list! I haven't given BDPT on GPUs a real shot, but I surmise it's not a good match for GPUs, because of the diverging data paths of the light and eye subpaths.

Combining those techniques will of course not totally eliminate noise (as you said).

There's this video of the upcoming Brigade 3, and I believe they render in the cloud for this. We have an MPI scheduler in the works, and we're collaborating with the Computing Centre of the University of Cologne. I'm eager to see how stupid, massive parallelization (sort-first, all nodes have the scene data) can help :)

Then I've read about foveated ray tracing: http://research.lighttransport.com/fove ... y-headset/ - I hope that this may help to some degree when using a VR headset.

We're also investigating FPGA ray tracing. We are not yet experts, but being able to customize the data path sounds really promising - the biggest problem with GI is diverging data paths, I believe. The learning curve with FPGAs is quite steep, however :)

Baking is only valid for diffuse reflection and static lights, am I right? Or would you call methods like photon mapping baking? The latter is something we're considering incorporating, but in general (because we do this mostly for "researchy" stuff :) ) we'd prefer unbiased methods.


Re: Visionaray Ray Tracing Framework

Postby papaboo » Thu Jul 02, 2015 9:51 am

Thanks for the feedback and the link to foveated ray tracing. I haven't read that yet, so now I have an excuse for sitting outside in the sun and doing some reading. :)

I would consider photon mapping a 'soft' baking approach, since there is a preprocessing step, but it's really quick. A rebake is incredibly fast, but of course a first frame rendered using only PM lacks a lot of information and is either splotchy or incredibly biased. Otherwise yes, I think most realtime baking approaches only work well for diffuse surfaces and static lights, but I haven't dug too much into it, since so far my focus has been on methods that actually converge to the correct result.

You can actually do BDPT reasonably efficiently on the GPU by sharing light paths between rays in a warp. Take a look at Dietger van Antwerpen's thesis. I have no clue whether it's fast enough for interactive purposes, though. My guess would be 'no', and PM is probably preferable.


Re: Visionaray Ray Tracing Framework

Postby szellmann » Sat Sep 26, 2015 11:18 am

Visionaray is now also on Facebook.
Visit https://www.facebook.com/visionaray


Re: Visionaray Ray Tracing Framework

Postby szellmann » Wed Oct 21, 2015 1:09 pm

Some more shameless self-promotion :)

I wanted to post some progress from the virtual reality front. Here are some shots where you can see (with your red-cyan glasses :) ) how distracting the noise from unconverged, naive path tracing really is in VR. The problem in VR is that the user moves his/her head all the time, so there's no opportunity for those images to converge. We're at the very beginning with this, so this is how it looks without any optimizations or further thought. Next in line is probably multi-GPU / cluster parallelization to present e.g. ten or so blended frames at once. Sampling may also be improved; it's simple stratified sampling so far, low-discrepancy sampling would probably be better.

Here is the video (best viewed with "HD" activated!):

https://www.facebook.com/visionaray/vid ... =2&theater


Re: Visionaray Ray Tracing Framework

Postby szellmann » Thu Mar 17, 2016 11:11 am

Take a look at the new Visionaray multi-volume rendering example:

https://youtu.be/aMRb3LJzgXs
https://github.com/szellmann/visionaray ... lti_volume
https://github.com/szellmann/visionaray ... e-examples

The example program shows a more complex kernel - "SciVis" direct volume rendering with local illumination and correct compositing even if the datasets arbitrarily overlap. The example also demonstrates how to write Visionaray kernels that are compatible with both x86 and CUDA.


Re: Visionaray Ray Tracing Framework

Postby szellmann » Mon Jun 06, 2016 9:35 am

Visionaray now supports multi-hit ray/object traversal. Check out the example program:

https://youtu.be/wv8ZkVoHtDw
https://github.com/szellmann/visionaray ... e-examples
https://github.com/szellmann/visionaray ... /multi_hit

The multi-hit feature is of course not restricted to alpha compositing (as in the example program). I'm currently working on a direct volume rendering application for medical imaging where I need to clip CT/MR data against some arbitrary opaque geometry. I'm going to use multi-hit to build up clip intervals that I will feed into my ray marcher afterwards.


Re: Visionaray Ray Tracing Framework

Postby szellmann » Fri Dec 30, 2016 1:05 am

It's the Christmas holidays, and so I had a little fun compiling Visionaray for my new Raspberry Pi 3: https://youtu.be/7bJTEmdlT2Y

I haven't done anything special apart from applying some tiny compile fixes for this architecture, so there are no optimizations yet. I'm now porting the SIMD math lib to ARM NEON. Excited to see what performance difference SoA packets make for coherent workloads on this tiny CPU.

