Correct Weighting for all Camera Effects

Postby Geometrian » Fri Jul 11, 2014 6:42 pm


This post is intended as a summary of all the issues I am currently working on; any input anyone can provide would be welcome. For now I am restricting attention to area light sources and finite-aperture lenses. The camera model I am using is a circular thin lens with an image plane behind it. The sensor on the image plane has a fixed physical size, but the resolution it represents does not.

  • Generating Rays Away From the Camera
    Given a point on the image plane, we need to shoot a ray at the lens. I am currently doing this by sampling the lens uniformly by area, then sending a ray from the image-plane point through the sampled lens point. This is incorrect, because rays that are uniform over the lens's area are not uniformly distributed in solid angle as seen from the image plane.

    The other issue is that as the image plane is moved closer to/farther from the lens, the amount of light it receives varies (chiefly because incoming light refracts to points off the sensor).

    Fixing the first problem will fix the second problem, I think.
  • Generating Rays Toward the Camera
    This is another sampling problem. I am currently sampling the solid angle subtended by the lens disk's bounding sphere, which I think is a reasonable solution.
  • Weighting Incoming Rays
    Incoming rays (from e.g. light tracing) that hit the sensor need to be weighted by some factor and accumulated into the appropriate pixels. In general, I feel like each ray should contribute to the four nearest pixels with bilinear (lerped) weights.

    For a given number N of light paths, I feel like each one can be considered to be "carrying away" 1/N of the light source's total emitted power (flux, strictly speaking, rather than radiance). However, when these rays hit the image plane, they can only contribute to a few pixels. In the limit N->+inf, is that okay?

    How do I convert a single, infinitely thin ray carrying radiance into energy accumulated over a finite-size pixel?
  • Effect of Pixel Size
    Pixel area factors into all of this too: the higher the resolution, the less energy accumulates in each pixel, and the lower the resolution, the more. I feel like the effect should be a simple scaling based on pixel area.

    Forward path tracers I have seen ignore this effect entirely; implicitly, they are scaling the sensor's sensitivity inversely with pixel area to compensate. In a real camera, that tradeoff shows up as noise.
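
For the first bullet, one sketch of a fix: keep the uniform-by-area lens sampling, but fold the geometry into the estimator weight instead of trying to sample uniformly in angle. The helper names (`sample_disk`, `lens_sample_weight`) are mine, not from any existing code; this assumes the sensor and lens planes are parallel, with the sensor point at z = 0 and the lens at z = `lens_dist`.

```python
import math
import random

def sample_disk(radius, rng=random.random):
    """Uniform (by area) sample on a disk of the given radius."""
    r = radius * math.sqrt(rng())
    phi = 2.0 * math.pi * rng()
    return r * math.cos(phi), r * math.sin(phi)

def lens_sample_weight(sensor_xy, lens_xy, lens_dist, lens_radius):
    """Weight for a sensor-point -> lens-point connection when the lens
    is sampled uniformly by area. Converts the area pdf (1/A) into a
    solid-angle pdf as seen from the sensor point, then divides the
    sensor's cosine response by it. Names are illustrative."""
    lens_area = math.pi * lens_radius * lens_radius
    dx = lens_xy[0] - sensor_xy[0]
    dy = lens_xy[1] - sensor_xy[1]
    dist2 = dx * dx + dy * dy + lens_dist * lens_dist
    cos_theta = lens_dist / math.sqrt(dist2)   # planes are parallel
    pdf_area = 1.0 / lens_area
    pdf_solid_angle = pdf_area * dist2 / cos_theta
    return cos_theta / pdf_solid_angle         # = A * cos^4(theta) / lens_dist^2
```

On axis this weight reduces to A/d²; off axis it falls off as cos⁴θ, which is the same vignetting described in the second paragraph of that bullet, so weighting this way addresses both issues at once.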
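
For the second bullet, the bounding-sphere approach can be written as standard uniform cone sampling: pick a direction uniformly within the cone subtended by the sphere. This is a sketch under my own conventions (local frame with z toward the sphere center), not the post's actual code.

```python
import math
import random

def sample_cone_to_sphere(center_dist, sphere_radius, rng=random.random):
    """Uniformly sample a direction within the cone subtended by a
    bounding sphere of radius `sphere_radius` whose center is
    `center_dist` away (requires center_dist > sphere_radius).
    Returns (cos_theta, phi, pdf) in a local frame whose z-axis
    points at the sphere center."""
    sin2_max = (sphere_radius / center_dist) ** 2
    cos_max = math.sqrt(max(0.0, 1.0 - sin2_max))
    cos_theta = 1.0 - rng() * (1.0 - cos_max)   # uniform in [cos_max, 1]
    phi = 2.0 * math.pi * rng()
    pdf = 1.0 / (2.0 * math.pi * (1.0 - cos_max))
    return cos_theta, phi, pdf
```

The pdf is constant over the cone (solid angle 2π(1 − cos θ_max)), so every sampled direction either hits the lens disk or is wasted; the waste is the price of sampling the bounding sphere rather than the disk's exact projected solid angle.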
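
For the last two bullets, here is how the splat-and-normalize step could look, under the convention (an assumption on my part) that each of the N light paths carries 1/N of the source's power. The bilinear kernel realizes the "four pixels (lerped)" idea, and dividing the accumulated energy by pixel area converts it to an energy density that no longer depends on resolution; as N -> +inf the per-pixel estimate then converges. `film`, `splat_bilinear`, and `finalize` are illustrative names.

```python
import math

def splat_bilinear(film, x, y, value):
    """Deposit `value` at continuous film coordinates (x, y) into the
    four surrounding pixels with bilinear (lerp) weights. `film` is a
    dict mapping (ix, iy) -> accumulated value; pixel (ix, iy) is
    taken to have its center at (ix + 0.5, iy + 0.5)."""
    x0 = int(math.floor(x - 0.5))
    y0 = int(math.floor(y - 0.5))
    fx = (x - 0.5) - x0
    fy = (y - 0.5) - y0
    for ix, iy, w in ((x0,     y0,     (1 - fx) * (1 - fy)),
                      (x0 + 1, y0,     fx * (1 - fy)),
                      (x0,     y0 + 1, (1 - fx) * fy),
                      (x0 + 1, y0 + 1, fx * fy)):
        film[(ix, iy)] = film.get((ix, iy), 0.0) + value * w

def finalize(film, n_paths, pixel_area):
    """Normalize the accumulated splats: each path carried 1/N of the
    source's power, and dividing by pixel area turns deposited energy
    into a per-area density independent of resolution."""
    return {k: v / (n_paths * pixel_area) for k, v in film.items()}
```

Note the bilinear weights always sum to 1, so each splat conserves the deposited energy regardless of where it lands within a pixel.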
