I'm curious whether using ratio tracking to estimate the Monte Carlo contribution (f(x) / p(x), not just the transmittance) together with delta tracking in heterogeneous media is unbiased.

Delta tracking gives us the MC contribution of free-path sampling, but only for the single wavelength used to drive the procedure.

We often want the MC contributions for the other wavelengths at the same time, but delta tracking does not give us the transmittance and the PDF as separate values, so it cannot be applied directly to estimate MC contributions for multiple wavelengths.

On the other hand, the transmittance of a given segment of participating media can be estimated in an unbiased fashion for multiple wavelengths by performing ratio tracking for each wavelength.

So I am considering something like the following:

Perform delta tracking to sample a free path for the wavelength wl_i, obtaining the distance d.

The PDF of this procedure is

p(d, wl_i) = sigma_e(d, wl_i) * exp(-int_0^d sigma_e(s, wl_i) ds)

The transmittance for another wavelength wl_j is

T(d, wl_j) = exp(-int_0^d sigma_e(s, wl_j) ds)

Therefore, the MC contribution for the wavelength wl_j is

T(d, wl_j) / p(d, wl_i) = 1 / sigma_e(d, wl_i) * exp(-int_0^d (sigma_e(s, wl_j) - sigma_e(s, wl_i)) ds)

sigma_e(s, wl_j) - sigma_e(s, wl_i) can be negative, but ratio tracking can handle that correctly.
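The ratio-tracking part of this estimator could be sketched as follows (a minimal Python sketch; the function name, the constant majorant, and the callable `sigma_diff` are my own illustrative choices, not from any particular renderer):

```python
import math
import random


def ratio_tracking_exp(sigma_diff, d, majorant, rng=random):
    """Unbiased estimate of exp(-int_0^d sigma_diff(s) ds) by ratio tracking.

    sigma_diff(s) may be negative, as with
    sigma_e(s, wl_j) - sigma_e(s, wl_i): the per-collision weight
    (1 - sigma_diff(t) / majorant) then exceeds 1, which is still unbiased
    for any majorant > 0, though variance grows when the majorant is a
    poor bound on |sigma_diff|.
    """
    t = 0.0
    weight = 1.0
    while True:
        # Sample the next tentative collision in the homogenized medium.
        t -= math.log(1.0 - rng.random()) / majorant
        if t >= d:
            # Escaped the segment: the running product is the estimate.
            return weight
        weight *= 1.0 - sigma_diff(t) / majorant
```

The MC contribution for wl_j would then be this estimate divided by sigma_e(d, wl_i), reusing the distance d produced by delta tracking for wl_i.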

Does this yield an unbiased estimate when I integrate the procedure into a light transport algorithm?

Thanks,

Statistics: Posted by shocker_0x15 — Wed Feb 22, 2017 5:15 am

Statistics: Posted by toxie — Fri Feb 10, 2017 2:50 pm

-knus-

Statistics: Posted by knus — Fri Feb 10, 2017 2:09 pm

I'm not aware of any generally accepted performance test. Performance depends on a lot of factors: BVH build times and quality, traversal speed, resolution, camera transformation, shading complexity, and the selection/coherence of secondary rays. If you just want to improve one of those, then you don't need a general reference; you can just use your own system and time things before and after.

Statistics: Posted by papaboo — Fri Feb 10, 2017 1:28 pm

Is there a commonly used dataset for benchmarking ray tracing? I know about BART for dynamic scenes, but is there something widely used for static scenes? Preferably model-only data, not compilable code, so it can run on 'any' machine. The reason I ask is that I am (as a hobby) developing a dedicated hardware ray tracer in an FPGA. It is definitely not real time, and there is no API to program it yet. It does BVH traversal in hardware, intersects triangles, and performs shading (shadows, reflection, refraction, etc.). Currently there is no texturing, but that is on the to-do list.

The current prototype runs at 33 MHz, and is able to trace the 'Fairyforest' scene at about 1 FPS in 256x256 resolution (diffuse lighting, 1 light, 1 shadow ray per pixel).

-knus-

Statistics: Posted by knus — Fri Feb 10, 2017 10:20 am

Statistics: Posted by toxie — Thu Feb 09, 2017 4:22 pm

Statistics: Posted by jbikker — Thu Feb 09, 2017 11:32 am

EDIT: maybe, also due to the semi-magical weighting function, smaller tiles are better? If that's the case, then one could also have different small tiles that are used "randomly" over the screen to get rid of the tiling patterns.

Statistics: Posted by toxie — Thu Feb 09, 2017 10:51 am

We just submitted an article on the combination of machine learning and light transport simulation; see https://arxiv.org/abs/1701.07403

Best regards,

Catalytic

Statistics: Posted by catalytic — Wed Feb 08, 2017 2:41 pm

- Applying the method to direct light sampling yields the results presented in the paper.

- Applying the method to the first diffuse bounce yields no perceivable improvement in quality.

In general, the number of dimensions is a problem: I tried 6 dimensions (sampling direct light on first diffuse surface, then the first diffuse bounce and finally direct light on the second diffuse surface) but this already seems to decrease the quality of the penumbras compared to using just 4 dimensions. This would suggest that just using 2 dimensions could yield the best quality; this way additional dimensions do not affect the quality of the distribution of the first two. It could be that slightly more converged tiles yield better results; I had 128x128 / d=10 tiles of high quality and produced 32x32 / d=6 in just a few minutes (didn't expect to need them).

So that's pretty much what everyone expected.

That being said, the method obviously improves image quality for the first couple of samples, it's straightforward to implement, and it should have only a tiny impact on performance.

- Jacco.

EDIT: slightly more converged tile. Method applied to 6 dimensions (NEE-1, diff bounce, NEE-2). Obvious tiling pattern due to small tiles; this disappears for larger tiles.

Statistics: Posted by jbikker — Wed Feb 08, 2017 12:41 pm

Statistics: Posted by stefan — Tue Feb 07, 2017 9:39 pm

Here are the averages for the table data (https://www.rit.edu/cos/colorscience/rc_useful_data.php) and the single lobe approximation (http://jcgt.org/published/0002/02/01/) for the wavelength to linear sRGB conversion:

Average Linear sRGB D65 (360-780 nm at 5 nm steps) = {0.30276, 0.23830, 0.22843}, Max = {2.51679, 1.50899, 1.89752}, Min = {-0.92539, -0.22180, -0.16958}

Average Linear sRGB D65 (360-780 nm at 0.5 nm steps) = {0.30884, 0.23873, 0.22871}, Max = {2.49697, 1.51144, 1.96021}, Min = {-0.80496, -0.25638, -0.18061}

Average Linear sRGB D50 (360-780 nm at 0.5 nm steps) = {0.26359, 0.24440, 0.31418}, Max = {2.34045, 1.54728, 2.60592}, Min = {-0.87620, -0.27376, -0.19881}
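A small sketch of how such per-channel averages, maxima, and minima might be tabulated over a wavelength range (the `wavelength_to_srgb` callable is a hypothetical stand-in for the actual conversion via the CIE table data or the single-lobe fit; it may return out-of-gamut, negative values):

```python
def average_linear_srgb(wavelength_to_srgb, start=360.0, end=780.0, step=5.0):
    """Tabulate per-channel average, max, and min of linear-sRGB values
    over [start, end] nm sampled at a fixed step."""
    n = 0
    avg = [0.0, 0.0, 0.0]
    lo = [float("inf")] * 3
    hi = [float("-inf")] * 3
    wl = start
    while wl <= end + 1e-9:  # inclusive of the endpoint despite float drift
        rgb = wavelength_to_srgb(wl)
        for c in range(3):
            avg[c] += rgb[c]
            lo[c] = min(lo[c], rgb[c])
            hi[c] = max(hi[c], rgb[c])
        n += 1
        wl += step
    return [v / n for v in avg], hi, lo
```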

Thank you,

Ryan

Statistics: Posted by rheniser — Tue Feb 07, 2017 9:50 am

rheniser wrote:

According to most sources, including https://en.wikipedia.org/wiki/SRGB, the linear RGB values are usually clipped to [0.0, 1.0], with display white represented as (1.0, 1.0, 1.0).

It's not entirely clear, but it sounds like you're talking about clamping individual samples. Don't do that. Just average the unclamped sample values and then clamp the average. Yes, there's still potential energy loss and color shifting, but that's the usual result of tone mapping to a low dynamic range display.
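A minimal sketch of the suggested resolve step (the helper name is my own):

```python
def resolve_pixel(samples):
    """Average unclamped linear-RGB sample values, then clamp only the mean.

    Clamping each sample before averaging would bias the estimate; clamping
    the final average is just mapping the converged value to the display
    range [0, 1].
    """
    n = len(samples)
    mean = [sum(s[c] for s in samples) / n for c in range(3)]
    return tuple(min(max(v, 0.0), 1.0) for v in mean)
```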

Statistics: Posted by friedlinguini — Mon Feb 06, 2017 2:41 pm
