Linux?
-
- Posts: 167
- Joined: Mon Nov 28, 2011 7:28 pm
Linux?
How much of the code is Windows specific? And are there plans to make the project work in Linux?
Re: Linux?
There's barely any Windows-specific code: it runs on GLFW, and CUDA, OptiX and Embree are all available for Linux as well. All platform-specific code lives in the 'platform' module, which is quite minimal.
The only problem I see is the handling of the render cores, which are DLLs. I am not sure a similar system exists on Linux?
There are no plans for a Linux port, but obviously it would be great if someone would be interested in doing that.
-
- Posts: 167
- Joined: Mon Nov 28, 2011 7:28 pm
Re: Linux?
Cool, that's good to hear. What do you mean by render core DLLs? Do you mean a runtime-loaded library (basically deferring algorithm selection)? If so, it is easy to do that on Linux with .so libraries using dlopen.
-
- Posts: 24
- Joined: Thu Dec 01, 2011 9:45 pm
- Location: Switzerland
- Contact:
Re: Linux?
Glad to see a new real-time path tracing engine after all those years!
We are also interested in a Linux version for real-time ray traced scientific visualization on a semicylindrical display wall (which measures 8x3 meters and is powered by eight 4K projectors). Lighthouse seems very interesting for our use case as it manages animation, materials and lights out of the box.
We're currently rendering with OptiX 5 on a Linux cluster with 4 nodes and 16 V100 Volta GPUs (which yields 60 fps with primary rays and simple shading only), but we estimate that an 8 GPU Quadro RTX server with OptiX 7 should give substantially better performance (if the numbers provided by Nvidia are to be believed), and allow secondary rays for shadows, reflections, GI etc.
Re: Linux?
Animation is in the early stages. glTF support so far is limited to the scene graph and rigid animation. Morph targets will be done soon (in progress), skinned animations may take a bit longer as these are quite complex.
-
- Posts: 24
- Joined: Thu Dec 01, 2011 9:45 pm
- Location: Switzerland
- Contact:
Re: Linux?
Cool, looking forward to seeing that.
Rigid animation should be enough for now. The aim is to animate lots of particles and molecules, but also streamlines to visualize the paths of the particles. Nvidia has implemented something similar in Paraview with RTX path tracing, but Paraview seems a bit too heavyweight for a real-time application.
Re: Linux?
How many instances / meshes / polygons do you expect to use?
I'm asking because currently LH2 encodes the trace result as a 128-bit value (single float4 write = fast), storing the instance index in the top 12 bits of the primitive index word. This limits the primitive index to 2^20-1 = ~1M, and the instance index to 2^12-1 = 4095. I plan to adjust this: barycentrics are full 32-bit floats right now but can probably be represented as 24-bit fixed-point numbers, which would free up 2x8 bits for the instance index, for 65536 instances, while tris can go to 4G. Is that enough for your use case?
-
- Posts: 24
- Joined: Thu Dec 01, 2011 9:45 pm
- Location: Switzerland
- Contact:
Re: Linux?
If each particle can be represented by a single primitive, 4G is definitely more than enough. And 65K individual instances/meshes should also suffice for our real-time use case (although we have exceeded that number with OSPRay/Embree, but those engines are not exactly optimised for real-time).
-
- Posts: 167
- Joined: Mon Nov 28, 2011 7:28 pm
Re: Linux?
If you have complex materials, etc., I would not expect an 8 GPU RTX machine to give substantially better perf than a 16 GPU Volta machine. I'd actually expect about equal. If most of the work you're doing is tracing rays, and you have only triangles to intersect, then yeah, it might outperform the Volta machine by a small factor (perhaps 2x).

straaljager wrote: ↑ Fri Sep 06, 2019 8:47 am
Glad to see a new real-time path tracing engine after all those years!
We are also interested in a Linux version for real-time ray traced scientific visualization on a semicylindrical display wall (which measures 8x3 meters and is powered by eight 4K projectors). Lighthouse seems very interesting for our use case as it manages animation, materials and lights out of the box.
We're currently rendering with OptiX 5 on a Linux cluster with 4 nodes and 16 V100 Volta GPUs (which yields 60 fps with primary rays and simple shading only), but we estimate that an 8 GPU Quadro RTX server with Optix 7 should give substantially better performance (if the numbers provided by Nvidia should be believed), and allow secondary rays for shadows, reflections, GI etc.
-
- Posts: 24
- Joined: Thu Dec 01, 2011 9:45 pm
- Location: Switzerland
- Contact:
Re: Linux?
During a test with a single Turing GPU, we observed that using the RTX shaders can provide a 5x speedup when tracing AO rays only (compared to not using the RTX shaders), so we're cautiously optimistic about our estimates.

graphicsMan wrote: ↑ Fri Sep 06, 2019 4:25 pm
If you have complex materials, etc., I would not expect an 8 GPU RTX machine to give substantially better perf than a 16 GPU Volta machine. I'd actually expect about equal. If most of the work you're doing is tracing rays, and you have only triangles to intersect, then yeah, it might outperform the Volta machine by a small factor (perhaps 2x).

We probably won't be able to do full path tracing with multiple bounces at 60 fps right now, but we're hoping that frameworks like OptiX, Lighthouse and Nvidia drivers keep getting better at a fast pace.
