Statistics: Posted by toxie — Thu Jul 17, 2014 1:36 pm


Generating a Sierpinski Triangle by accident:

Filtering gone wrong:

Killing precision:

Statistics: Posted by toxie — Thu Jul 17, 2014 1:31 pm


Thank you for your insight. Rejection sampling is one option, or rejecting the path entirely if necessary. Let me run some tests and see.

Statistics: Posted by joedizzle — Mon Jul 14, 2014 8:39 am


I've been grappling with a problem with which I would like your help.

When doing volumetric scattering of a particle in a medium where boundaries are involved (for example, void outside and a material inside), a ray-intersection test is needed to check whether the ray is within the volume. When the ray intersects the first boundary, such as a sphere, it enters the volume; when it intersects the second boundary, it exits. But when the scattered particle is too close to the boundary, the ray-intersection test fails to detect the intersection and still regards the ray as inside the volume, even though it has already exited. This is caused by the epsilon value used in ray intersection to prevent self-intersection: no matter how small an epsilon I use, the problem persists.
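The enter/exit bookkeeping described above can be sketched as follows; all names here are hypothetical. The idea is to track containment by crossing parity (an explicit flag flipped at every crossing the walk actually performs) instead of re-deriving it from an epsilon-offset intersection test:

```cpp
#include <cassert>
#include <cmath>
#include <optional>
#include <utility>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Both signed hit distances of the ray o + t*d (d normalized) with a sphere.
static std::optional<std::pair<double, double>>
sphereHits(const Vec3& o, const Vec3& d, const Vec3& c, double r) {
    Vec3 oc{o.x - c.x, o.y - c.y, o.z - c.z};
    double b = dot(oc, d);
    double disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0) return std::nullopt;
    double s = std::sqrt(disc);
    return std::make_pair(-b - s, -b + s);
}

// March the segment [0, tMax] of the ray and flip the flag at every boundary
// crossing inside the segment; no epsilon is consulted to decide containment.
bool insideAfterWalk(const Vec3& o, const Vec3& d, const Vec3& center,
                     double radius, double tMax, bool startInside) {
    bool inside = startInside;
    if (auto hits = sphereHits(o, d, center, radius)) {
        if (hits->first > 0.0 && hits->first < tMax) inside = !inside;
        if (hits->second > 0.0 && hits->second < tMax) inside = !inside;
    }
    return inside;
}
```

With this bookkeeping, a scatter event placed arbitrarily close to the surface cannot change which side of the boundary the walk believes it is on; the epsilon only affects where the next intersection is searched for, not the inside/outside state.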

What are the possible solutions to this?

Thank you.

Statistics: Posted by joedizzle — Sun Jul 13, 2014 1:05 pm


This post is intended as a summary of all the issues I am currently working on. Any input anyone can provide would be welcome. I am currently restricting myself to area light sources and finite-aperture lenses. The camera model I am using is a circular thin lens with an image plane behind it. The image plane's sensor is a fixed size, but the resolution it represents is not.

- Generating Rays Away From the Camera

Given a point on the image plane, we need to shoot a ray at the lens. I am currently doing this by uniformly sampling the lens, and then sending a ray from the image plane through that point. This is incorrect, because the distribution of rays is not even in angular space.
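For reference, a minimal sketch of the uniform-by-area lens sampling described above (names hypothetical). The returned pdf is what each ray must be weighted by, together with the angular (cosine) terms of the measurement, since the sample is uniform in area rather than in angle:

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.141592653589793;

struct LensSample { double x, y, pdf; };

// Sample a point on a thin lens of radius r, uniform by *area*. The sqrt on
// the radial random variable is what makes the area density uniform; the
// resulting pdf is the constant 1/(pi r^2).
LensSample sampleLensUniform(double radius, double u1, double u2) {
    double r = radius * std::sqrt(u1);   // sqrt warps to uniform area density
    double phi = 2.0 * kPi * u2;
    return {r * std::cos(phi), r * std::sin(phi),
            1.0 / (kPi * radius * radius)};
}
```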

The other issue is that as the image plane is moved closer to/farther from the lens, the amount of light it receives varies (chiefly because incoming light refracts to points off the sensor).

Fixing the first problem will fix the second, I think.

- Generating Rays Toward the Camera

This is another sampling problem. I am currently sampling the projection of the disk's bounding sphere, which I think is a reasonable solution.

- Weighting Incoming Rays

Incoming rays (from e.g. light tracing) that hit the sensor need to be weighted by some factor and added to the appropriate pixels in some way. In general, I feel like the ray should contribute to four pixels (lerped).
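A minimal sketch of the four-pixel ("lerped") deposit, under the assumption that pixel (x, y) has its center at (x + 0.5, y + 0.5); all names hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Deposit a contribution landing at continuous film position (px, py) onto
// the 2x2 block of nearest pixel centers with bilinear weights, so the total
// deposited equals the carried value (minus whatever falls off the film).
void splatBilinear(std::vector<double>& film, int w, int h,
                   double px, double py, double value) {
    int x0 = (int)std::floor(px - 0.5);
    int y0 = (int)std::floor(py - 0.5);
    double fx = (px - 0.5) - x0;   // fractional offsets toward the next pixel
    double fy = (py - 0.5) - y0;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx) {
            int x = x0 + dx, y = y0 + dy;
            if (x < 0 || x >= w || y < 0 || y >= h) continue;
            double wgt = (dx ? fx : 1.0 - fx) * (dy ? fy : 1.0 - fy);
            film[y * w + x] += wgt * value;
        }
}
```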

For a certain number N of light paths, I feel like each one can be considered to be "carrying away" 1/N of the light source's total radiance. However, when these rays hit the image plane, they can only contribute to a few pixels. In the limit N->+inf, is that okay?

How do I convert a single, infinitely thin ray carrying radiance into an accumulated energy over a finite-size pixel?

- Effect of Pixel Size

Pixel area factors into all this too. The higher the resolution, the less energy accumulates in each pixel. The converse is also true. I feel like the effect should be a simple scaling based on pixel area.
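A sketch of that scaling, assuming a physical sensor of size sensorW x sensorH split into resX x resY pixels (names hypothetical):

```cpp
#include <cassert>
#include <cmath>

// Dividing each pixel's accumulated energy by its area yields an energy
// *density* over the sensor, which no longer shrinks as resolution grows.
double energyToAreaDensity(double pixelEnergy,
                           double sensorW, double sensorH,
                           int resX, int resY) {
    double pixelArea = (sensorW / resX) * (sensorH / resY);
    return pixelEnergy / pixelArea;  // energy per unit sensor area
}
```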

Forward path tracers I have seen completely ignore this effect. Implicitly, they are scaling the sensor sensitivity to compensate. In real life, this leads to noise.

Statistics: Posted by Geometrian — Fri Jul 11, 2014 6:42 pm


I have implemented a light tracer: random light paths are generated, and bounce around until they hit the camera lens, at which point they refract deterministically onto the image plane.

I do not know how to weight these paths properly. I understand that one should divide by the number of light paths traced, but what about the effect of image resolution? Anything else?

I found this recent thread, but there wasn't a complete answer. It looks like SmallVCM implements light tracing, and their light path weighting is done in vertexcm.hxx in the ConnectToCamera method (~line 863). What are they doing here?

-G

Statistics: Posted by Geometrian — Wed Jul 09, 2014 5:24 pm


Dade wrote:

There should be an example of how to obtain overlapped transfers inside the AMD OpenCL SDK (checking ... in samples/opencl/cl/TransferOverlap directory).

Thank you again Dade. I somehow missed the samples from the AMD APP SDK. "TransferOverlap" is a very interesting sample. They use a zero-copy buffer flag I didn't know about. It's AMD-specific by the looks of it, and the sample runs way faster than the flags I've been using. I'll do some more testing.

Statistics: Posted by AranHase — Tue Jul 08, 2014 1:08 am


In other words, is there any renderer that can create raw images suitable for processing in Lightroom/RawTherapee/whatever you like?

Statistics: Posted by tarlack — Mon Jul 07, 2014 3:17 pm


Statistics: Posted by tarlack — Mon Jul 07, 2014 3:12 pm


To figure out the total power hitting a point on the image sensor, you need to integrate the radiance hitting it. For Monte Carlo, this is just your radiance estimate (one ray from image sensor to light) divided by the pdf (for a uniformly sampled lens, 1/lens_area). For a circular lens/aperture of radius r, this works out to a final power estimate of \pi*r^2*estimate, i.e. the lens area times the one-sample estimate. Note that this has the correct behavior: for a large aperture, the power is larger; for a small aperture, it is smaller; and a pinhole camera gets zero.
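A sketch of that estimator, assuming uniform area sampling of the lens with pdf 1/(\pi*r^2); all names are hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <utility>
#include <vector>

constexpr double kPi = 3.141592653589793;

// Estimate the power arriving at a sensor point by sampling the lens
// uniformly by area, evaluating the radiance along each sampled ray, and
// dividing by the pdf 1/(pi r^2), i.e. multiplying by the lens area.
// With constant radiance L this converges to L * pi * r^2.
double estimatePower(double lensRadius,
                     const std::function<double(double, double)>& radianceAt,
                     const std::vector<std::pair<double, double>>& samples) {
    double area = kPi * lensRadius * lensRadius;
    double sum = 0.0;
    for (const auto& uv : samples) {
        double r = lensRadius * std::sqrt(uv.first);
        double phi = 2.0 * kPi * uv.second;
        // radiance / pdf == radiance * lensArea
        sum += radianceAt(r * std::cos(phi), r * std::sin(phi)) * area;
    }
    return sum / samples.size();
}
```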

To figure out total energy, you need to integrate this power over time. In fact, the aperture can be folded in with the shutter as a function of time. But the point is that integrating the power gives you energy.
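That integral can be sketched as follows (names hypothetical); with a square-wave shutter it collapses to power times exposure time:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Integrate power(t) modulated by a shutter curve in [0, 1] over [t0, t1]
// with a midpoint rule; enough to illustrate the energy computation.
double energyOverExposure(const std::function<double(double)>& power,
                          const std::function<double(double)>& shutter,
                          double t0, double t1, int steps) {
    double dt = (t1 - t0) / steps, sum = 0.0;
    for (int i = 0; i < steps; ++i) {
        double t = t0 + (i + 0.5) * dt;   // midpoint of each sub-interval
        sum += power(t) * shutter(t) * dt;
    }
    return sum;
}
```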

The energy gets plugged into the sensor's response curve, thus giving a "measured value", and therefore a color.

To answer my other question: if you don't take this into account, you're doing it wrong. E.g., SmallPT is effectively using a circular aperture with radius \sqrt(1/\pi) (corresponding to an area of 1 square meter) while ignoring the massive depth of field this would produce, and is integrating over one second with a square-wave shutter. Alternately, a smaller aperture is possible with a longer exposure time (for example, an aperture of radius 1 cm and an exposure time of ~3183 seconds).

Statistics: Posted by Geometrian — Mon Jul 07, 2014 2:49 pm


AranHase wrote:

Thank you Dade. I thought it was impossible on AMD hardware because the queue is always in-order, but it seems it may be possible to do it using two command queues (my google-fu returning mixed results). I'll try it later and see how the code deals with two queues.

There should be an example of how to obtain overlapped transfers inside the AMD OpenCL SDK (checking ... in samples/opencl/cl/TransferOverlap directory).

Statistics: Posted by Dade — Mon Jul 07, 2014 8:23 am


The effect on brightness only comes from processing the collected energy to get the sensor response (with some kind of sensor saturation model, perhaps temporal for more accuracy) plus tone mapping (close to raw processing in photography), with the famous exposure-compensation setting.
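As an illustration only (not any particular renderer's pipeline), such a processing step might look like:

```cpp
#include <cassert>
#include <cmath>

// Map collected energy to a display value via an exposure-compensation scale
// (2^EV) followed by a simple Reinhard-style tone curve and display gamma.
double toneMap(double energy, double exposureCompensation) {
    double x = energy * std::pow(2.0, exposureCompensation);  // EV scaling
    double mapped = x / (1.0 + x);       // Reinhard: compresses to [0, 1)
    return std::pow(mapped, 1.0 / 2.2);  // display gamma
}
```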

Statistics: Posted by tarlack — Mon Jul 07, 2014 7:00 am
