Descriptive bug reports are encouraged.

A reminder to Linux users: set execute permission on RENDERER/sso-pathtrace as well; it saves you an error message later.

Statistics: Posted by snwy_ — Tue Sep 09, 2014 3:58 am


MohamedSakr wrote:

Looks very cool! Just two questions: which hardware are you using (for example, a GTX 780), and what is the memory consumption for 10 million triangles?

I am using an NVIDIA GTX Titan. The memory consumption of the grid acceleration structure for the 1-million-triangle Buddha scene is 25 MB uncompressed, or 14 MB with row-displacement compression.

The triangles are stored with 4*3*3 = 36 bytes each (3 vertices, 3 floats per vertex, 4 bytes per float), so the geometry takes about 36 MB per million triangles.

You can roughly extrapolate linearly from those numbers to 10 million triangles.

Statistics: Posted by ziu — Mon Sep 08, 2014 12:58 pm


Statistics: Posted by MohamedSakr — Sat Sep 06, 2014 5:29 pm


I have been working on rectilinear-grid spatial subdivision for quite some time. I got a single-threaded CPU algorithm published at WSCG 2011; it was devised to be easily portable to parallel machines. Since then I have done a lot of work on an OpenCL demo engine based around it, and I presented a poster on it at EGSR 2014. Optimizing this has been quite a headache, really. I spent the last year just optimizing the pipeline for rendering secondary rays and things like that.

Regardless, here is one current result for the Fairy Forest scene:

regular grid, 141x37x141:

rectilinear grid, 114x31x115:

1024x1024 resolution, 16 ambient-occlusion samples per pixel, diffuse shading, one color per triangle, per-pixel lighting normals.

This acceleration structure can nearly double the frame rate of grids on irregular scenes, while keeping the same (good) performance grids have on scanned scenes with regular geometry.

It is not always faster than a well-built BVH with SAH on some scenes, but it gets closer to one than regular grids do, while also being far simpler and faster to build. Its build-time/render-time trade-off lies in between grids and BVHs.

I am posting this here because I finally managed to optimize it well enough to get into the hundred-Mray/s range.

Statistics: Posted by ziu — Sat Sep 06, 2014 3:03 pm


What I have is a throughput and a distance:

newThroughput = f(oldThroughput, distance);

What is the function?
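If this is asking about transmittance through a homogeneous participating medium (an assumption on my part; the post does not say what the throughput models), the standard answer is Beer-Lambert exponential attenuation, where sigmaT is an assumed extinction coefficient:

```cpp
#include <cmath>

// Hypothetical sketch: Beer-Lambert transmittance through a homogeneous
// medium. sigmaT (extinction coefficient, in inverse distance units) is
// an assumed parameter not mentioned in the post.
double updateThroughput(double oldThroughput, double distance, double sigmaT) {
    // throughput decays exponentially with optical depth sigmaT * distance
    return oldThroughput * exp(-sigmaT * distance);
}
```

With sigmaT = 0.5 and distance = 2, the throughput is multiplied by exp(-1) ≈ 0.368. If the throughput is an RGB triple, the same formula applies per channel with a per-channel sigmaT.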

Statistics: Posted by MohamedSakr — Sat Sep 06, 2014 1:36 pm


Statistics: Posted by MohamedSakr — Sat Sep 06, 2014 1:31 pm


For some reason, the light vertices that are closer to the camera appear brighter, and the whole image is not balanced.

Here are some results (note: this is just the BDPT; the vertex-merging part is not in the results):

Attachments: LVtoCameraLens.jpg, otherConnections_correct.jpg, room-photon.jpg
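One common pitfall with light-vertex-to-camera connections, offered only as a hedged guess at the symptom above and not a diagnosis: the pixel footprint conversion for a pinhole camera carries a 1/(dist*dist) that must balance the distance dependence of the rest of the estimator, so applying it one time too many (or omitting it) makes brightness vary with the vertex-to-camera distance. A sketch in the spirit of SmallVCM's connectToCamera(), with all names illustrative:

```cpp
#include <cmath>

// Converts the area of one pixel on the image plane to the corresponding
// surface area around the light vertex being splatted. All inputs assumed:
//   cosAtCamera    - cosine between the view direction and the vertex direction
//   cosAtVertex    - cosine at the light vertex
//   dist           - vertex-to-camera distance
//   imagePlaneDist - distance from the camera to the image plane
double imageToSurfaceFactor(double cosAtCamera, double cosAtVertex,
                            double dist, double imagePlaneDist) {
    // distance from the camera to the hit point on the image plane
    double imagePointDist = imagePlaneDist / cosAtCamera;
    // pixel area -> solid angle subtended at the camera
    double imageToSolidAngle = imagePointDist * imagePointDist / cosAtCamera;
    // solid angle -> surface area at distance dist from the camera
    return imageToSolidAngle * fabs(cosAtVertex) / (dist * dist);
}
```

The splatted contribution is then divided by this factor and by the number of light paths, with MIS weights on top; auditing exactly where each dist and cosine enters is usually the first thing to check for a distance-dependent brightness bias.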

Statistics: Posted by MohamedSakr — Sat Sep 06, 2014 8:56 am


ingenious wrote:

You could probably also apply Kelemen-style mutations of the (x,y) position inside each pixel, in order to better explore the highlights inside it, instead of doing brute-force sampling.

Yeah, I'm trying to avoid brute force for this kind of problem.

Statistics: Posted by MohamedSakr — Fri Sep 05, 2014 1:19 pm


kaplanyan wrote:

Hi Mohamed,

A very important question indeed. The general problem of finding pure specular paths is tough, as it is equivalent to an arbitrary constraint-satisfaction problem, which is proven to be NP-complete (http://en.wikipedia.org/wiki/Complexity ... tisfaction). For a practical scene with detailed enough geometry, this might mean that researchers should focus only on approximate or stochastic search algorithms for such paths.

My work on regularization of such paths can be one such option: http://cg.ivd.kit.edu/PSR.php It works with PT and BDPT as well, so running MLT is not a necessity for this method. It does simulated annealing (stochastic "probability-1" search) by "tempering" the roughness of the materials (turning specular materials into slightly rough ones). Basically, that means it finds not a precise specular path but a glossy path that is quite close to the specular path. Afterwards, some differential geometry plus Newtonian machinery can be applied to turn such an approximate "glossy" path into the specular path, in the spirit of manifold exploration (again, this Newtonian algorithm is independent of MLT).

Also you can find some prior work if you look up the references in the PSR paper. However, in PSR I only "temper" the materials. The geometry can be "tempered" the same way to improve the search. Imagine a highly tessellated geometry with high-frequency displacement map. This would be a horrible case for finding all the specular highlights on it. However, if you start with some simple smooth geometry and start gradually "developing" the displacement map on it, this way you can quickly find many highlights using some annealing search in this domain.

Generally, the problem of finding distinct illumination features is orthogonal to the problem of exploring them. And for specular materials it can quickly get up to impractical complexity if you try to find them precisely in a brute-force manner.

Hope that helps,

Anton


Thanks a lot for the clarification.

Statistics: Posted by MohamedSakr — Wed Sep 03, 2014 1:35 pm


A very important question indeed. The general problem of finding pure specular paths is tough, as it is equivalent to an arbitrary constraint-satisfaction problem, which is proven to be NP-complete (http://en.wikipedia.org/wiki/Complexity ... tisfaction). For a practical scene with detailed enough geometry, this might mean that researchers should focus only on approximate or stochastic search algorithms for such paths.

My work on regularization of such paths can be one such option: http://cg.ivd.kit.edu/PSR.php It works with PT and BDPT as well, so running MLT is not a necessity for this method. It does simulated annealing (stochastic "probability-1" search) by "tempering" the roughness of the materials (turning specular materials into slightly rough ones). Basically, that means it finds not a precise specular path but a glossy path that is quite close to the specular path. Afterwards, some differential geometry plus Newtonian machinery can be applied to turn such an approximate "glossy" path into the specular path, in the spirit of manifold exploration (again, this Newtonian algorithm is independent of MLT).

Also you can find some prior work if you look up the references in the PSR paper. However, in PSR I only "temper" the materials. The geometry can be "tempered" the same way to improve the search. Imagine a highly tessellated geometry with high-frequency displacement map. This would be a horrible case for finding all the specular highlights on it. However, if you start with some simple smooth geometry and start gradually "developing" the displacement map on it, this way you can quickly find many highlights using some annealing search in this domain.

Generally, the problem of finding distinct illumination features is orthogonal to the problem of exploring them. And for specular materials it can quickly get up to impractical complexity if you try to find them precisely in a brute-force manner.
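The "tempering" idea above can be sketched as a simple annealing schedule on material roughness. The schedule shape and the constants below are illustrative assumptions, not the values from the PSR paper:

```cpp
#include <cmath>

// Illustrative annealing schedule: start with a noticeably rough surface so
// glossy paths to the highlight are easy to find, then shrink the roughness
// toward the material's true (near-specular) value as iterations proceed.
// The initial roughness and decay rate are assumed constants, not from PSR.
double temperedRoughness(double trueRoughness, int iteration) {
    const double initial = 0.3;   // starting roughness (assumption)
    const double decay   = 0.7;   // per-iteration multiplicative decay (assumption)
    double r = initial * pow(decay, (double)iteration);
    return fmax(r, trueRoughness);  // never go below the real material
}
```

A renderer would evaluate the tempered BSDF at temperedRoughness(r, i) in iteration i, so early iterations find approximate glossy paths that later, sharper iterations refine toward the true specular ones.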

Hope that helps,

Anton

Statistics: Posted by kaplanyan — Wed Sep 03, 2014 12:59 pm


Statistics: Posted by tarlack — Wed Sep 03, 2014 6:43 am


So what I have is a sampler that generates all paths except pure specular paths; what I want to add is another sampler that catches only pure specular paths.

any ideas?

Statistics: Posted by MohamedSakr — Wed Sep 03, 2014 4:41 am
