Hi all,

Based on the survey report at http://graphics.tudelft.nl/~dietger/, I cannot comprehend how equation 3.15 was formulated from figure 3.5. Can anyone please expound on the probability formulation/derivation?

I have also noted an inconsistency between equations 3.18 and 3.19, whereby inserting equation 3.18 into the modified sensor sensitivity function does not result in equation 3.19.

Kind regards.

## modified sensor sensitivity equation

### Re: modified sensor sensitivity equation

Ok,

When implementing a rendering algorithm you have to work with a camera model, and each kind of camera can handle the light in a very different way.

The Dietger camera model is a simple pinhole camera... so a different formula has to be used for other camera models. But for now, let's stick with the pinhole model.

When doing path tracing, there is nothing to do... why? Because, conveniently, the pdf w.r.t. dA = 1 (the pixel area = 1).

When doing light tracing, it is another story...

There are several ways to handle the pinhole camera:

1) When tracing a ray between the light vertex and the camera origin, you have to compute the hit point on the image plane. The real pixel is located on the image plane, not at the camera origin. So, simply using the intersection point between this ray and the image plane (to convert w.r.t. dA) is enough.

The difference is that the 'distance' (used during the dA conversion, i.e. cos/distance²) will be different.

2) You can trace the ray up to the camera origin and apply the dA conversion formula there; once that is done, you have to re-project this value onto the image plane.
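A minimal sketch of option 1 in Python (my own coordinate conventions, not necessarily Dietger's: camera at the origin looking down +z, image plane at z = fl):

```python
def image_plane_hit(vertex, fl):
    """Intersect the ray from a light vertex toward the camera origin
    with the image plane at distance fl in front of the camera.
    Camera at the origin, looking down +z (an assumed convention)."""
    x, y, z = vertex
    if z <= 0.0:
        return None              # vertex behind (or at) the camera plane
    t = fl / z                   # scale factor that puts the point on z = fl
    return (x * t, y * t)        # (Px, Py) in image-plane units

# A vertex at (2, 1, 4) with fl = 1 projects to pixel coordinates (0.5, 0.25).
print(image_plane_hit((2.0, 1.0, 4.0), 1.0))
```

The (Px, Py) returned here is the point whose distance to the camera feeds the dA conversion below.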

So, either you use trigonometry or you use unit conversion, and you will get the Dietger formula:

distance(P, camera) = sqrt( Px² + Py² + fl² )

cos(theta) = fl / distance(P, camera)

=> distance(P, camera) = fl / cos(theta)

Then apply the dA (differential area) to Watts conversion (distance² / cos_theta):

distance² / cos(theta) = ( fl / cos(theta) ) ² / cos(theta) = fl / cos(theta)^3
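A quick numeric sanity check of the geometry above (a sketch with my own variable names): for a pixel at (Px, Py) on a plane at distance fl, distance² / cos(theta) evaluates to fl² / cos³(theta) — note the squared focal length, which happens to be invisible when fl = 1.

```python
import math

# Pinhole-camera geometry check: pixel at (Px, Py) on the image plane
# at distance fl from the camera origin.
fl = 1.35
Px, Py = 0.3, -0.7

dist = math.sqrt(Px**2 + Py**2 + fl**2)   # distance(P, camera)
cos_theta = fl / dist                     # cos of angle to the optical axis

# dA-to-Watts conversion factor from the derivation above:
factor = dist**2 / cos_theta

# It matches fl^2 / cos^3(theta), with the squared focal length:
print(abs(factor - fl**2 / cos_theta**3) < 1e-12)  # True
```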

**Spectral**

**OMPF 2 global moderator**

### Re: modified sensor sensitivity equation

Thank you Spectral,

Your derivation did not make clear whether P was the pixel location on the image plane or a point on the surface geometry represented by a light vertex. It must be the pixel location, which makes sense in relation to the camera point (eye) and the image plane.

Furthermore, the dA conversion illustrated in your last equation should have a "focal length squared" in its derivation.

But how are the conversions made between

\[P_v = P_\sigma = P_A\]

as shown in equation 3.15?
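For what it's worth, if equation 3.15 is equating the same pdf expressed in different measures, the standard change-of-measure identities (written here in my own notation, which may not match Dietger's subscripts) are:

```latex
% p_\sigma: pdf w.r.t. solid angle at the vertex x,
% p_A:      pdf w.r.t. surface area at the point y the ray hits,
% \theta_y: angle between the ray and the surface normal at y,
% \theta_x: angle between the ray and the normal at x.
p_A(y) = p_\sigma(\omega)\,\frac{|\cos\theta_y|}{\|x - y\|^{2}},
\qquad
p_{\sigma^{\perp}}(\omega) = \frac{p_\sigma(\omega)}{|\cos\theta_x|}
```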
