I don't need more features, so I stopped developing the whole engine, including, of course, the 3D renderer.

I achieved every goal with this engine.

Of course, in the future I will fix all the bugs still left in the code. One exception is the sprite renderer, which I will use in other projects too; those need some features I haven't implemented before.

Another exception is the platform file, which is needed to compile the project on new platforms (that part is well separated from the engine itself). (I really need to finish the input handling for Android sometime.)

- I will possibly update the two games using this engine soon (the chess and the dark tower).

- Somebody requested the engine as a standalone DLL for his project; I will write minimal documentation for him.

- I will update my music software too.

- My 3D modeler does not need any updates; it has a relatively new version. I forgot to show this:

http://www.cute3dmodeler.tk/

Statistics: Posted by Geri — Sun Feb 01, 2015 3:34 am


Krakatoa's algorithm for pre-integration could be more advanced.

My very preliminary test: first splatting the particles to a shadow map for pre-integration (stored back to the particles), then splatting them to the framebuffer. I haven't optimized the particle placement, so it may not follow the input density well.

(attached image: snapshot_linear_gamma_0.663934_maximum_2.21107_minimum_0.png)

A seeming limitation is that the shadow map covers only a very limited solid angle with respect to the light source.

Statistics: Posted by citadel — Thu Jan 22, 2015 5:44 pm


ingenious wrote:

shiqiu1105 wrote: I understand that BDPT is one of the estimators used in your latest UPBP, so is this joint sampling used in that framework as well?

The joint importance sampling techniques weren't actually used there (couldn't do it on time), but they can certainly be added to improve sampling.

Thanks for the info.

Okay, I will try to implement the analytic sampling first to do Monte Carlo subsurface scattering, and hopefully post some results.

I've been reading about the dipole diffusion approximation and just couldn't understand it.

Statistics: Posted by shiqiu1105 — Thu Jan 22, 2015 1:31 am


shiqiu1105 wrote:

I understand that BDPT is one of the estimators used in your latest UPBP, so is this joint sampling used in that framework as well?

The joint importance sampling techniques weren't actually used there (couldn't do it on time), but they can certainly be added to improve sampling.

Statistics: Posted by ingenious — Wed Jan 21, 2015 2:11 am


ingenious wrote:

Unfortunately, there is no public implementation available, due to potential legal problems (the work was done at Disney Research). However, implementing the analytical importance sampling routines is actually quite easy. For this what you really need are the boxed equations in section 5. The tabulated importance sampling routines are a little more involved, but the analytical ones will get you a long way.

Thank you for the hint, ingenious! I will look into it.

One more quick question: I understand that BDPT is one of the estimators used in your latest UPBP, so is this joint sampling used in that framework as well?

Statistics: Posted by shiqiu1105 — Wed Jan 21, 2015 1:12 am



Statistics: Posted by shiqiu1105 — Tue Jan 20, 2015 10:20 pm


It seems to reduce the variance of volume rendering a lot.

But it looks quite complicated to implement, especially considering the tabulation for anisotropic phase functions.

So, is this implemented anywhere that I could use as a reference?

Thanks,

Statistics: Posted by shiqiu1105 — Tue Jan 20, 2015 10:09 pm


The PerspectiveCamera constructor accepts a camera-to-world transform, whereas LookAt computes world-to-camera as its main transform. I guess you have to invert this transform before passing it to PerspectiveCamera.
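A minimal standalone sketch of the mismatch (hypothetical helper names, not actual pbrt code; the transforms are reduced to their translation parts, which is all that affects the ray origin here):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// LookAt(eye = (0,0,-5), look = (0,0,0), up = (0,1,0)) builds world-to-camera:
// it maps the eye point to the origin, i.e. translates by +5 along z.
Vec3 worldToCam(Vec3 p) { return {p.x, p.y, p.z + 5.0f}; }

// Its inverse is the camera-to-world transform the camera constructor expects.
Vec3 camToWorld(Vec3 p) { return {p.x, p.y, p.z - 5.0f}; }

// GenerateRay produces the ray origin by transforming the camera-space
// origin (0,0,0) with whatever transform the camera was constructed with.
Vec3 rayOrigin(Vec3 (*camXform)(Vec3)) { return camXform({0.0f, 0.0f, 0.0f}); }
```

Passing the un-inverted world-to-camera transform yields a ray origin of (0, 0, 5), the mirrored position; inverting first gives the expected (0, 0, -5).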

Statistics: Posted by sriravic — Tue Jan 20, 2015 2:33 pm


I tried this:

- Code:
```
BoxFilter b(1, 1);

float screen[4];
screen[0] = -1.f;
screen[1] =  1.f;
screen[2] = -1.f;
screen[3] =  1.f;

ImageFilm film(100, 100, &b, screen, "muh.exr", false);

auto t = LookAt(
    Point(0, 0, -5),
    Point(0, 0, 0),
    Vector(0, 1, 0));

AnimatedTransform cam2world(&t, 0.0f, &t, 10.0f);

PerspectiveCamera p(cam2world, screen,
                    0, 0, 0, 0, 55.0f, &film);

CameraSample s;
s.imageX = 50;
s.imageY = 50;
s.time = 0;
s.lensU = 0;
s.lensV = 0;

Ray r;
p.GenerateRay(s, &r);
```

The ray looks like this:

r.origin = o = {x=0.000000000 y=0.000000000 z=5.00000000 }

r.direction = d = {x=0.000000000 y=0.000000000 z=1.00000000 }

Actually, I expected the ray origin at Point(0, 0, -5) and not (0, 0, 5), since in world space the ray should start at z = -5. Any ideas what I am doing wrong?

Statistics: Posted by Julian — Mon Jan 19, 2015 10:13 am


Job Description/Qualifications:

Join our team of GPU programming and graphics experts in a senior position to redesign and renovate OptiX, the industry's leading ray tracing engine. Help design and implement the just-in-time compiler at the core of OptiX and other core OptiX functionality. Optimize, debug, and implement OptiX features and sample code. Interact with NVIDIA's CUDA, Iray, architecture and other teams to improve our entire rendering platform. Collaborate with customers and partners to facilitate their use of OptiX.

NVIDIA OptiX is built and maintained by a distributed team, with team members in Salt Lake City, Berlin, Moscow, and elsewhere, and we are open to hiring in all these locations. We pride ourselves on being able to work independently and in teams, tackling very complex software projects, and being highly creative and productive. Some travel to customer sites, remote offices, and conferences will be expected.

REQUIREMENTS:

Expertise in at least one of the following domains is required:

- Strong knowledge of parallel programming, especially GPU programming, preferably CUDA-based

- Strong knowledge of modern high-performance ray tracing, especially physically-based rendering

- Strong knowledge of compiler algorithms, architecture and implementation, preferably LLVM-based

SKILLS:

- Good software design and implementation skills required, especially in C++

- Good debugging skills required

- Good communication and teamwork skills required

- Understanding of current GPU architectures (graphics/compute pipelines) helpful

- Assembly language programming skills helpful

- Experience with OpenGL or DirectX helpful

(please contact me personally via PM if you have more questions or would like to apply)

Statistics: Posted by toxie — Wed Jan 14, 2015 4:05 pm


raider wrote:

Thanks for the link. I plan to implement some microfacet BRDFs as a next step as well. I thought of Phong as a quite simple model to implement for test purposes, but surprisingly it is not so simple.

However, I don't understand what you mean by "You don't have to guess/hack about sampling directions and pdf with this BRDF"... Do you mean a simple cosine-weighted distribution is the best option for complex BxDFs?

I mean that both the distribution and its pdf are part of the model and are well defined. You don't have to do tricks like rotating the sampling lobe around the reflection direction. And the distribution is not a simple regular lobe around the reflection vector. It more naturally models the elongation of the samples as the reflection vector tends toward the grazing angle.

Microfacet BRDFs usually don't normalize to one (that is, the albedo or directional-hemispherical reflectance) for all incident directions and all roughnesses. I actually don't know of any BRDF that correctly normalizes to one for every incidence and roughness. But this is in part because of the energy lost to multiple bounces between microfacets, which those models don't take into account. It is relatively easy to write a re-normalizing function that sort of restores this lost energy.

Concerning samples that go below the macrosurface: you also get this with microfacet BRDFs. Sometimes you cannot select any of the available microfacet normals that wouldn't do that, and there is no way to avoid it. But once the albedo is re-normalized, the importance sampling of the BRDF still has a nicely normalized albedo everywhere.
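A sketch of that re-normalization idea (my own illustration, not from any particular renderer): estimate the directional-hemispherical albedo of the model by Monte Carlo, then divide the BRDF by it. A Lambertian lobe stands in for a microfacet model here; in practice you would tabulate the albedo over incidence angle and roughness instead of re-estimating it per evaluation.

```cpp
#include <cmath>
#include <random>

static const float PI_F = 3.14159265358979f;

// Directional albedo of a BRDF f: integral of f(wo) * cos(thetaO) over the
// hemisphere, estimated with uniform solid-angle sampling (pdf = 1/(2*pi)).
// For a uniform hemisphere sample, cos(thetaO) is itself uniform in [0,1).
float directionalAlbedo(float (*f)(float), int n = 200000) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        float cosThetaO = u(rng);
        sum += f(cosThetaO) * cosThetaO;   // f * cos; pdf divided out below
    }
    return float(sum / n * 2.0 * PI_F);    // divide by pdf = 1/(2*pi)
}

// Stand-in BRDF: Lambertian with reflectance 0.8 (its albedo is exactly 0.8).
float lambert(float) { return 0.8f / PI_F; }

// Re-normalized evaluation: scales the lobe so no energy is lost overall.
float renormalizedEval(float (*f)(float), float cosThetaO) {
    return f(cosThetaO) / directionalAlbedo(f);
}
```

The Lambertian case is only a sanity check; for a real microfacet model the albedo depends on the incident direction, so the table would be indexed by it.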

Statistics: Posted by ypoissant — Tue Jan 13, 2015 12:49 am


raider wrote:

That's correct only for a distribution around the normal, but it is wrong when you generate the lobe around the specular direction (which is the case in the SmallVCM code too); they just pass the specular direction as the first argument instead of the normal when calling that function. For directions below the tangent plane, they simply return zero. That PDF does not normalize to one except for normal incidence. As a result, they should get brighter reflections at grazing angles, as they divide by lower values of the PDF when integrating. Or am I missing something obvious?

Indeed, since the sampled ray directions are subsequently rotated to be centered around the reflection vector, some of them go below the surface. The contribution of such directions is zero so they are simply discarded without evaluation. But the sampling distribution is still correct and properly normalized, and as long as those directions are accounted as samples in the Monte Carlo estimator, it's all good. The distribution is simply slightly inefficient, because a small fraction of the directions it generates point below the tangent plane. There's no bias introduced, you can use it.

Statistics: Posted by ingenious — Sat Jan 10, 2015 7:05 am


I've checked the SmallVCM code. They just use the following code for evaluating the PDF:

- Code:
```
float PowerCosHemispherePdfW(
    const Vec3f &aNormal, const Vec3f &aDirection, const float aPower)
{
    const float cosTheta = std::max(0.f, Dot(aNormal, aDirection));
    return (aPower + 1.f) * std::pow(cosTheta, aPower) * (INV_PI_F * 0.5f);
}
```

That's correct only for a distribution around the normal, but it is wrong when you generate the lobe around the specular direction (which is the case in the SmallVCM code too); they just pass the specular direction as the first argument instead of the normal when calling that function. For directions below the tangent plane, they simply return zero. That PDF does not normalize to one except for normal incidence. As a result, they should get brighter reflections at grazing angles, as they divide by lower values of the PDF when integrating. Or am I missing something obvious?

Statistics: Posted by raider — Sat Jan 10, 2015 12:19 am


You don't have to guess/hack about sampling directions and pdf with this BRDF.

Sorry, it's been so long since I last used Phong for sampling that I don't know how to directly answer your question offhand. But you might want to look at the SmallVCM code. It samples reflections using the Phong function.

Statistics: Posted by ypoissant — Fri Jan 09, 2015 12:43 am
