For legacy architectures or with OpenCL I'd probably go with a texture atlas and references into that. However, I decided to limit my ray tracing lib to Kepler+ just because of the availability of texture objects.

Statistics: Posted by szellmann — Thu Dec 08, 2016 10:37 am


For a global illumination renderer such as a path tracer, using a dedicated texture type doesn't seem practical, because we can't know in advance which texture will be sampled, so many textures would have to be bound to the kernel at once.

In my OpenCL renderer, I used a single uchar* argument of a kernel as a pointer to texture storage.

The renderer samples a texture at an index specified by a material descriptor (passed as another argument).

This approach rules out HW-accelerated texture filtering (though I think view-dependent texture filtering makes the renderer biased/inconsistent, so only simple bilinear filtering is valid anyway).

Additionally, I don't know how well this approach performs (probably not very well).

However, it seems the most generic way.
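The flat-buffer approach above can be sketched as follows. This is my own C++ illustration of the idea, not shocker_0x15's actual OpenCL kernel; the `TexDesc` layout, the RGB8 assumption, and all names are hypothetical.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical texture descriptor: where a texture's texels live inside
// one big flat buffer, as in the single uchar* kernel-argument approach.
struct TexDesc {
    uint32_t offset;   // byte offset into the shared texel pool
    uint32_t width;
    uint32_t height;   // RGB8 texels assumed for brevity
};

// Fetch one texel (red channel only, for brevity) from the flat pool,
// clamping coordinates to the texture's bounds.
static float fetchTexel(const uint8_t* pool, const TexDesc& t, int x, int y) {
    x = std::clamp(x, 0, int(t.width) - 1);
    y = std::clamp(y, 0, int(t.height) - 1);
    return pool[t.offset + (uint32_t(y) * t.width + uint32_t(x)) * 3] / 255.0f;
}

// Manual bilinear filtering: the software replacement for the HW
// filtering that texture objects would otherwise provide.
float sampleBilinear(const uint8_t* pool, const TexDesc& t, float u, float v) {
    float fx = u * t.width  - 0.5f;
    float fy = v * t.height - 0.5f;
    int x0 = int(std::floor(fx)), y0 = int(std::floor(fy));
    float ax = fx - x0, ay = fy - y0;
    float t00 = fetchTexel(pool, t, x0,     y0);
    float t10 = fetchTexel(pool, t, x0 + 1, y0);
    float t01 = fetchTexel(pool, t, x0,     y0 + 1);
    float t11 = fetchTexel(pool, t, x0 + 1, y0 + 1);
    return (1 - ay) * ((1 - ax) * t00 + ax * t10)
         +      ay  * ((1 - ax) * t01 + ax * t11);
}
```

A material descriptor would then simply carry an index into an array of `TexDesc`, so the kernel needs only the pool pointer and the descriptor array as arguments.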

Statistics: Posted by shocker_0x15 — Thu Dec 08, 2016 1:44 am


As for your light source issue: yes, point lights are physically incorrect since their area is 0. But just think of them as an infinitely small area light and use them as is. You can even derive a normalization term such that the overall light emitted from a point/sphere light is radius-invariant, even when the radius is 0.
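One way such a normalization can be derived (this is my own sketch, not necessarily papaboo's exact derivation): fix the total emitted power Phi of a diffuse sphere light and solve for its radiance. A diffuse emitter of area A and radiance L emits Phi = L * A * pi, and a sphere has A = 4*pi*R^2, so L = Phi / (4 * pi^2 * R^2). The radiance diverges as R -> 0, but the emitted power stays constant, which is exactly the radius invariance.

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Radiance of a diffuse sphere light that emits total power `power`:
// L = Phi / (4 * pi^2 * R^2). Diverges as radius -> 0, like a delta.
double sphereLightRadiance(double power, double radius) {
    return power / (4.0 * kPi * kPi * radius * radius);
}

// Inverse: total power emitted by a diffuse sphere of given radiance,
// Phi = L * area * pi with area = 4 * pi * R^2.
double sphereLightPower(double radiance, double radius) {
    return radiance * (4.0 * kPi * radius * radius) * kPi;
}
```

Whatever the radius, round-tripping through these two functions recovers the same power, which is the invariance papaboo describes.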

Statistics: Posted by papaboo — Wed Dec 07, 2016 7:15 am


I see what you mean. Basically, point lights and other delta lights use units of irradiance, instead of the more conventional area lights that use radiance. As such, delta lights cannot be plugged into the direct lighting equation because they don't obey the correct units. As you suggest, the problem would be solved if I gave the point light a small differential area dA, for instance I could make it be an infinitesimal disk oriented along the wi direction so that the geometry factor would be dA/r^2, instead of just 1/r^2. The infinitesimal dA would then attenuate the near infinity of the brdf near the mirror direction.

This basically corresponds to the first solution I initially proposed. Point lights and spot lights are ubiquitous in production environments and they're not likely to go away. A rendering implementation that internally converts delta lights into micro area lights could be interesting to study and it would be energy conserving.

My second solution about normalising brdfs in the range black <= brdf <= white for delta lights, on second thought, may not be so interesting because it would break convergence in the limit of an area light that gradually shrinks towards a delta - the moment the area became exactly zero, there would be a discontinuity in the lighting.

Statistics: Posted by mgamito — Tue Dec 06, 2016 9:25 am


Statistics: Posted by danielthompson — Fri Dec 02, 2016 4:31 pm


mgamito wrote:

Consider also a point light above the ground with an intensity Li. The reflected radiance Lr at a point x on the ground is:

Lr(wo) = brdf(wi,wo)*Li/r^2, where r is the distance from x to the point light and wi points towards the light.

That formula can't be correct, because it doesn't stand up to dimensional analysis. Lr and Li have the same units, and a BRDF is dimensionless. That leaves units of length^-2 unbalanced. I assume that you have a 1/r^2 factor because you're calculating a direct lighting estimate by integrating over the surface of all lights. That means you need to have a surface area factor in there somewhere, which would make the units come out right. The surface area of a point light approaches zero, and multiplying that into your formula cancels out the infinity that's bothering you. If you were instead dealing with an area light, the brdf would spike at some infinitesimal portion of the light surface, but it would be balanced by near-zero values everywhere else.

Statistics: Posted by friedlinguini — Fri Dec 02, 2016 4:17 pm


I've been puzzled by a lighting problem that is giving non-intuitive results. I'm sure the answer must be very simple because this is really basic stuff.

Consider a ground plane with a near-specular brdf. The brdf is going to return extremely high values close to the mirror direction that drop off to zero quickly away from that direction. In the limit, a specular mirror would have a delta distribution with a single infinite value along the mirror direction and zero everywhere else.

Consider also a point light above the ground with an intensity Li. The reflected radiance Lr at a point x on the ground is:

Lr(wo) = brdf(wi,wo)*Li/r^2, where r is the distance from x to the point light and wi points towards the light.

Now, if the incoming direction wi is nearly aligned with the mirror direction, the brdf term will be huge and we have Lr > Li, which is not energy-conserving. How can this be?

The only thing I can think of is that point lights are not real physical lights, whereas a near-specular brdf is physically correct. The two solutions to the problem would then either be:

1- Give the point light a small radius and treat it as a spherical light.

2- If I insist on using the point light, I need to normalise the brdf to make sure it is always less than white.
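The puzzle is easy to reproduce numerically (my sketch, just to illustrate the post's point). A normalized Phong lobe, brdf(alpha) = (n + 2) / (2*pi) * cos(alpha)^n, integrates to at most one over the hemisphere but takes huge pointwise values near the mirror direction for large exponents n, so nothing bounds Lr by Li in the point-light formula:

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Normalized Phong lobe: energy-conserving as an integral, but its
// pointwise value at the mirror direction grows linearly with n.
double phongBrdf(double cosAlpha, double n) {
    return (n + 2.0) / (2.0 * kPi) * std::pow(cosAlpha, n);
}

// The point-light formula from the post: Lr = brdf * Li / r^2.
double pointLightLr(double brdf, double Li, double r) {
    return brdf * Li / (r * r);
}
```

For n = 10000 the brdf at the mirror direction is already around 1600, so even with Li = 1 and r = 1 we get Lr far above Li, which is exactly the apparent energy-conservation violation being asked about.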

Statistics: Posted by mgamito — Fri Dec 02, 2016 2:10 pm


Which is the smarter method for speed and memory?

Statistics: Posted by atlas — Fri Dec 02, 2016 11:29 am


You can keep up with development here: https://www.twitter.com/rove3d

Statistics: Posted by atlas — Sun Nov 27, 2016 9:27 am


So what seems to be happening is that with a -0 direction component, that dimension's NextCrossingT ends up being negative infinity. In the ray stepping loop, that dimension is then always chosen (wrongly), and the traversal bogusly steps along that axis, which is hopeless since the ray doesn't actually pass through those voxels, so valid intersections are missed?

I'm curious whether changing

Code:

` if (ray.d[axis] >= 0) {`

to

Code:

` if (ray.d[axis] >= 0 && ray.d[axis] != -0) {`

makes a difference? (Alternatively, do you have a test case you can share?)
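One subtlety with the suggested test: since IEEE-754 defines -0.0 == 0.0, the literal comparison `ray.d[axis] != -0` is really a comparison against plain zero, so it works by pushing -0 (and +0) onto the negative branch rather than by detecting the sign bit. A sketch that makes the sign of zero explicit (my suggestion, not necessarily the fix pbrt adopted):

```cpp
#include <cmath>

// Decide which DDA branch a direction component takes, distinguishing
// -0.0 from +0.0 via the sign bit rather than a value comparison.
bool stepsPositive(float d) {
    // Treat -0.0 as a negative direction; +0.0 stays on the positive branch.
    return d > 0.0f || (d == 0.0f && !std::signbit(d));
}
```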

Thanks,

Matt

Statistics: Posted by mattpharr — Tue Nov 22, 2016 1:37 am


My understanding from the publisher is that it will be (finally) shipping on Nov 25. Sorry for the delay. Hope you enjoy it!

Regards,

Matt

Statistics: Posted by mattpharr — Tue Nov 22, 2016 1:17 am


This model is a low poly version of the Stanford dragon model, around 100k polys or so.

This is running at 1 fps, 1 spp, and 256x256 resolution, all on a GTX 760 2GB.

AMP has really bad scaling issues: when workloads become large, it doesn't like to run well.

I also rendered this out on a workstation my friend has: an NVIDIA Quadro K600, also at 1 spp and 1 fps.

This is a WIP; sorry about the long post.

Statistics: Posted by johnshadow23 — Sat Nov 19, 2016 7:13 am


bachi wrote:

I think this is the answer. Given an endpoint on the eye subpath one can employ proper importance sampling on the light source for NEE (for example for spherical lights or environment map it is possible cull the part of the light source where the eye subpath can't see). It also reduces the correlation with almost no cost.

Indeed, next-event estimation with a new, independently sampled light source vertex is not strictly necessary, but it is typically done in bidirectional path tracing (BPT) implementations. The reason is to allow better importance sampling for direct illumination (as you pointed out with the spherical light example), but also to reduce sampling correlation.

Recall that the way BPT traditionally performs connections (every eye subpath vertex to every light subpath vertex) produces a large number of full paths from only two subpaths. These full paths share vertices, which introduces sampling correlation, and this correlation in turn increases the variance of the estimator. Ideally you would sample the full paths completely independently, but vertex reuse is cheap and in practice gives you better efficiency, which is 1 / (variance * sampling_effort).

Still, for next-event connections (i.e. the technique that uses only one light subpath vertex), sampling a new vertex on the light source is typically cheap enough. If sampling the light source is for some reason very expensive, you may be better off reusing the first vertex of the light subpath for every eye vertex.
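The structural difference between the two strategies can be sketched like this (names and the cost counter are mine, not from any particular renderer). A fresh light sample per eye vertex decorrelates the resulting full paths at the cost of one extra light-sampling call per connection; reuse pays for one sample total but correlates the paths:

```cpp
#include <cstddef>

struct Vertex { double x = 0; };  // position, throughput, pdf, ... elided

static int g_lightSamples = 0;    // tracks light-sampling cost for the demo

Vertex sampleLightSource() {      // stand-in for a real light sampler
    ++g_lightSamples;
    return Vertex{};
}

// Performs the s=1 next-event connections for one eye subpath and returns
// the number of connections made; sampling cost accrues in g_lightSamples.
int nextEventConnections(std::size_t eyeVertices, bool freshLightSample) {
    Vertex firstLightVertex = sampleLightSource();  // light subpath start
    int connections = 0;
    for (std::size_t i = 0; i < eyeVertices; ++i) {
        // Independent sample per connection (typical BPT practice) vs.
        // reuse (cheaper when light sampling is expensive, but correlated).
        Vertex lv = freshLightSample ? sampleLightSource() : firstLightVertex;
        (void)lv;  // connect(eye[i], lv), MIS-weight, accumulate...
        ++connections;
    }
    return connections;
}
```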

Statistics: Posted by ingenious — Fri Nov 11, 2016 11:03 pm


koiava wrote:

As I understand it, the way you calculate the microfacet pdf stays the same; you just call the pdf function for the light subpath with wi and wo swapped.

The inverse pdf calculation should be the same as in BDPT, so you can take pbrt as a reference: https://github.com/mmp/pbrt-v3/blob/master/src/integrators/bdpt.cpp#L198

Okay, that's what I originally thought. I found my mistake: it turns out I forgot to treat the microfacet bsdf as a glossy event and was treating it as a pure specular event, which screws up all kinds of things. A stupid mistake that I completely overlooked, and that subsequently drove me mad for a few days.
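For anyone hitting the same bug, the distinction can be sketched as follows (illustrative, not from any particular codebase). Specular events have delta distributions, so a BDPT implementation must exclude them from deterministic connections and must not evaluate a finite pdf there, whereas a microfacet lobe with nonzero roughness is glossy and connectable:

```cpp
enum class ScatterType { Diffuse, Glossy, Specular };

// Only delta (specular) vertices are excluded from BDPT connections;
// glossy vertices participate like diffuse ones, just with peaky pdfs.
bool isConnectable(ScatterType t) {
    return t != ScatterType::Specular;
}

// A microfacet bsdf with nonzero roughness is a glossy event; only the
// roughness -> 0 limit degenerates to a specular delta.
ScatterType classifyMicrofacet(double roughness) {
    return roughness > 0.0 ? ScatterType::Glossy : ScatterType::Specular;
}
```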

Thanks for confirming that I'm not crazy!

Statistics: Posted by yiningkarlli — Wed Nov 02, 2016 9:51 am
