friedlinguini wrote:

I'm not sure thresholding would help. MLT already favors bright paths, and allowing mutations to dim paths would reduce the probability of getting stuck in a small region of path space. I think discontinuities would be noticeable as well.

I think clamping the geometry-BRDF product, as in the VPL rendering methods, and applying MLT only to the residuals could be helpful (since these residuals are supposed to be the paths that are not importance-sampled well). The problem of getting stuck can be alleviated by replica exchange/parallel tempering or something like that. Has anyone tried this before?

Statistics: Posted by bachi — Thu Sep 03, 2015 3:19 pm


Statistics: Posted by Paleos — Wed Sep 02, 2015 11:16 pm

- Code:
`Assert(centroidBounds.pMax[dim] != centroidBounds.pMin[dim]);`

https://github.com/mmp/pbrt-v3/blob/mas ... h.cpp#L527

So I changed the code in BVHAccel::buildUpperSAH to be like this:

- Code:
`int mid;
if (centroidBounds.pMax[dim] != centroidBounds.pMin[dim]) {
    // Do the fancy split position finding method.
    mid = pmid - treelet_roots;
} else {
    // Degenerate case: all centroids coincide, so just split down the middle.
    mid = start + (end - start) / 2;
}`

I know it's a bit of a hack, but at least it still finds a split point somewhere and generates a viable BVH.

Hope it helps,

Statistics: Posted by ziu — Wed Sep 02, 2015 11:09 am

I have almost finished converting a board into ray-tracing shapes (i.e. polygons, triangles, cylinders, round segments, rings, disks, CSG, etc.).

Still no shading, the attached images are colored with normals.

No statistics yet, but a board like the one in the image can have around 100K objects.

I used a lot of open-source code snippets in the source; they are all credited.

Thanks also for this group forum and your inspiration!

Statistics: Posted by mrluzeiro — Tue Sep 01, 2015 8:40 am


Statistics: Posted by ultimatemau — Wed Aug 26, 2015 5:39 pm

Statistics: Posted by ingenious — Wed Aug 26, 2015 2:26 pm

Heckbert devised a classification of light scattering events along the path the light traveled from light source to eye, which has the form of "L(S|D)*E" (Luminaire, Specular, Diffuse, Eye). Veach added Glossy and Transmission.

I read that Kajiya's path-tracing is classified as "E[(D|G|S)+(D|G)]L". But then I read in the Siggraph 2001 course "State of the Art in Monte Carlo Ray Tracing for Realistic Image Synthesis" the following: "Distributed ray tracing and path tracing includes multiple bounces involving non-specular scattering such as E(D|G)*L. However, even these methods ignore paths of the form E(D|G)S*L; that is, multiple specular bounces from the light source as in a caustic."

On the other hand, even the simplest path tracers, like "smallpt", seem to render caustics correctly, and IIRC there was a picture many years ago rendered with distributed ray tracing showing a metallic ring with a concentrated light spot inside of it (kind of like a lens).

So my question is: the path-tracing classification and the Siggraph course seem to contradict each other. What is the correct take on this?

PS: I see that apparently "E[(D|G|S)+(D|G)]L" cannot consume "EDSSSL" because the trailing "(D|G)" group has nothing to match, but "EDSSSDL" seems to describe multiple specular events just fine.

PPS: "[]" means "one of", "+" means "one or more". Does it even do anything to stick "+" inside "[]"? I mean, "one of" of "one or more" is always just "one", isn't it?
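
The PS can actually be checked mechanically. If the outer "[]" is redundant (as the PPS suspects), the classification reduces to an ordinary regular expression, and a quick sketch (just my own throwaway check) confirms which paths it admits:

```cpp
#include <regex>
#include <string>

// Treating the path classification E(D|G|S)+(D|G)L as an ordinary regex,
// assuming the outer brackets in E[(D|G|S)+(D|G)]L add nothing.
static const std::regex kPathTracing("E(D|G|S)+(D|G)L");

bool Matches(const std::string &path) {
    return std::regex_match(path, kPathTracing);
}
```

With this, `Matches("EDSSSL")` is false (the bounce just before L is specular, so the trailing "(D|G)" has nothing left to match), while `Matches("EDSSSDL")` is true, exactly as in the PS.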

Statistics: Posted by Ecir Hana — Wed Aug 26, 2015 9:49 am

Hope this helps!

Statistics: Posted by akalin — Wed Aug 26, 2015 8:27 am

yuriks wrote:

akalin wrote: Ahh. I had realized that this changes the distribution of rays, but thought that as long as the PDF was correct this wouldn't be a problem. Now that you explained it I can see why biasing towards the edges is incorrect: if, for example, an object is eclipsing the center of the light as seen from the surface, my method would produce a brighter image since the (exposed) edge would be sampled with a higher frequency than the (obscured) center. My test scene has only diffuse and unoccluded lights, so this problem isn't visible in it.

Thanks for the explanation!

No problem! One more slight correction: the correct method (from the outside) is biased towards the edges of the visible region, but your method (from the inside) samples the visible region uniformly. So your method would produce a *dimmer* image, with the center occluded, since the exposed edge would be sampled with a lower frequency.

Edit:

See my next post, which talks about how your method can be made to work!

Statistics: Posted by akalin — Wed Aug 26, 2015 7:33 am

akalin wrote:

Hi Yuriks,

Ah, you're right, I misunderstood your post! I understand your method now. However, I think there's still a subtle problem with it. (I ran into this, too!) The problem is that sampling the visible area from the inside of the sphere samples it uniformly, but sampling it from outside of the sphere does *not* sample it uniformly, so you can't substitute the former for the latter.

You can see this geometrically, by looking at your diagram and considering what happens to a point on the visible arc when the interior angle is changed vs. the exterior angle. A small change in the interior angle results in a proportional change in the arclength, but a small change in the exterior angle results in a larger change when the change is towards the edge of the cone. In other words, sampling the exterior angle uniformly results in a distribution that is heavier towards the edges of the visible region than the center.

You can also see this algebraically; if uniformly sampling the exterior angle also resulted in uniformly sampling the visible region, then the function that converts the exterior angle to the interior angle should be a trivial scaling function. But, looking at the formula I derive in my previous post, it is not a scaling function, and is instead some complicated function.

I'm not surprised that the images you produced still look right -- I don't think the difference would be easily noticeable except in test scenes designed to expose it.

Hope this helps!

Ahh. I had realized that this changes the distribution of rays, but thought that as long as the PDF was correct this wouldn't be a problem. Now that you explained it I can see why biasing towards the edges is incorrect: if, for example, an object is eclipsing the center of the light as seen from the surface, my method would produce a brighter image since the (exposed) edge would be sampled with a higher frequency than the (obscured) center. My test scene has only diffuse and unoccluded lights, so this problem isn't visible in it.

Thanks for the explanation!

Statistics: Posted by yuriks — Wed Aug 26, 2015 4:17 am

yuriks wrote:

Thanks for the reply akalin,

With my method I propose turning the sampling "inside out", so rather than sampling the sphere from the ray origin, what I'm trying to do there is sample a direction from the cone inside the sphere that results in a point that's visible from our origin. So the tangent lines there are the "horizon", beyond which the sphere surface isn't visible from the ray origin anymore. So with the angle of that cone in hand I sample and use the PDF from the inner cone.

I'm using it in my renderer (even though I haven't seriously worked on it in ages now) and it seems to be working fine for me. (At the time I compared results with the code from PBRT and mine, and couldn't see any difference.) Some time ago I recreated my test scene in Mitsuba and the results seem to agree too:

Hi Yuriks,

Ah, you're right, I misunderstood your post! I understand your method now. However, I think there's still a subtle problem with it. (I ran into this, too!) The problem is that sampling the visible area from the inside of the sphere samples it uniformly, but sampling it from outside of the sphere does *not* sample it uniformly, so you can't substitute the former for the latter.

You can see this geometrically, by looking at your diagram and considering what happens to a point on the visible arc when the interior angle is changed vs. the exterior angle. A small change in the interior angle results in a proportional change in the arclength, but a small change in the exterior angle results in a larger change when the change is towards the edge of the cone. In other words, sampling the exterior angle uniformly results in a distribution that is heavier towards the edges of the visible region than the center.

You can also see this algebraically; if uniformly sampling the exterior angle also resulted in uniformly sampling the visible region, then the function that converts the exterior angle to the interior angle should be a trivial scaling function. But, looking at the formula I derive in my previous post, it is not a scaling function, and is instead some complicated function.
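
If it helps, the non-linearity is easy to check numerically. A throwaway sketch with made-up numbers (radius r = 1, distance d = 2; `InteriorAngle` is just a name I picked here):

```cpp
#include <cmath>

// Sphere of radius r centered at C = (d, 0), ray origin at (0, 0).
const double d = 2.0, r = 1.0;

// Map the exterior angle theta (at the ray origin, measured from the axis
// toward the sphere center) to the interior angle alpha (at the sphere
// center) of the first intersection point.
double InteriorAngle(double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    // Closer root t of |t*dir - C| = r, with dir = (cos theta, sin theta).
    double t = d * c - std::sqrt(d * d * c * c - (d * d - r * r));
    // Angle of the hit point, taken relative to the sphere center.
    return std::atan2(t * s, t * c - d);
}
```

Along the axis, a step of 0.01 in theta changes alpha by about 0.01; the same step taken near the silhouette (theta_max = asin(r/d) ≈ 0.5236 here) changes alpha several times as much. So the conversion function cannot be a trivial scaling.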

I'm not surprised that the images you produced still look right -- I don't think the difference would be easily noticeable except in test scenes designed to expose it.

Hope this helps!

Statistics: Posted by akalin — Wed Aug 26, 2015 4:06 am

akalin wrote:

Hi yuriks,

Sorry for the late reply, but I think your method doesn't quite work...I ran into the same problem also! The problem is that the point of intersection is in general not tangent to the sphere, so the triangle formed by it, the center of the sphere, and the origin of the ray isn't a right triangle. If you look at my diagram, you can see that the right angle formed by the ray and the center of the sphere produces a point inside the sphere (in general).

-- Fred

Thanks for the reply akalin,

With my method I propose turning the sampling "inside out", so rather than sampling the sphere from the ray origin, what I'm trying to do there is sample a direction from the cone inside the sphere that results in a point that's visible from our origin. So the tangent lines there are the "horizon", beyond which the sphere surface isn't visible from the ray origin anymore. So with the angle of that cone in hand I sample and use the PDF from the inner cone.

I'm using it in my renderer (even though I haven't seriously worked on it in ages now) and it seems to be working fine for me. (At the time I compared results with the code from PBRT and mine, and couldn't see any difference.) Some time ago I recreated my test scene in Mitsuba and the results seem to agree too:

Statistics: Posted by yuriks — Tue Aug 25, 2015 10:56 pm

yuriks wrote:

I've had a similar confusion while reading PBRT. I like to derive all algorithms myself from first principles, so while deriving the sphere sampling one I arrived at an (in my opinion) simpler solution, and couldn't find any drawbacks with it. This is the code and the rendered documentation for it: (If anyone knows of a system that will render such comments inline with the code, I'd be glad to know about it! I don't really care for doxygen.)


Hi yuriks,

Sorry for the late reply, but I think your method doesn't quite work...I ran into the same problem also! The problem is that the point of intersection is in general not tangent to the sphere, so the triangle formed by it, the center of the sphere, and the origin of the ray isn't a right triangle. If you look at my diagram, you can see that the right angle formed by the ray and the center of the sphere produces a point inside the sphere (in general).

-- Fred

Edit: I misunderstood yuriks' post. See later posts below.

Statistics: Posted by akalin — Tue Aug 25, 2015 9:33 pm
