Vertex merging paper and SmallVCM

Must read and other references.
Zelcious
Posts: 42
Joined: Mon Jul 23, 2012 11:05 am

Re: Vertex merging paper and SmallVCM

Post by Zelcious » Sun Oct 14, 2012 9:30 am

The last scene, with the two spheres and the blue and yellow walls, how are the lights in the roof constructed? The path tracing image looks really off to me, but I guess there is a natural explanation for it.
My best guess would be a lens in the middle not covering the entire hole in combination with next event estimation. Are the sides of the tube reflective?

If my guesses are correct, it would be quite interesting to see what happens if you turned those lenses into portals in combination with path tracing. Perhaps even importance sample every specular surface (to a lesser degree).

keldor314
Posts: 10
Joined: Tue Jan 10, 2012 6:56 pm

Re: Vertex merging paper and SmallVCM

Post by keldor314 » Sun Oct 14, 2012 8:35 pm

PPM (and VCM) can be made to be unbiased simply by having each photon store information about the previous hit, rather than the current hit. Then when the eye path lands close to a photon, you simply shoot a ray toward the photon's source (be it directly at a light or simply the last thing the photon bounced off of). This allows you to use an arbitrarily large search radius for nearby photons, at the cost of tracing one extra ray per sample. Does anyone see any flaws with this idea?
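Concretely, something along these lines (a rough, untested sketch; the names are just for illustration, it's not code from SmallVCM):

Code: Select all

    // Rough, untested sketch of the data layout the idea implies.
    struct Vec3 { float x, y, z; };

    struct Photon
    {
        Vec3 hitPos;   // where the photon landed -- only used for the range search
        Vec3 prevPos;  // the *previous* light-path vertex (the light itself,
                       // or the last surface the photon bounced off)
        Vec3 flux;     // power carried along the segment prevPos -> hitPos
    };

    // During gathering: for each photon found within the (arbitrarily large)
    // search radius around the eye vertex, trace one shadow ray
    //     eyeVertex -> photon.prevPos
    // and, if unoccluded, evaluate the eye-side BSDF for that direction,
    // rather than reusing the direction the photon originally arrived from.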

You could further classify photons into several groups based on their intensity, with brighter photons being searched for at a larger radius, while their contribution would be divided by the area of the search area (volume?). This should give some of the benefits of MLT.

ingenious
Posts: 282
Joined: Mon Nov 28, 2011 11:11 pm
Location: London, UK
Contact:

Re: Vertex merging paper and SmallVCM

Post by ingenious » Sun Oct 14, 2012 8:39 pm

madd wrote:Are any of the test scenes available somewhere?
The Mirror balls scene is available on Toshiya Hachisuka's web page. The Car scene I downloaded from here and tweaked materials and shading normals a bit. If there's interest, I can release that. The Living room and Bathroom cannot be released, unfortunately.
Dietger wrote:Let me try to clarify what I meant here. Of course you are right. The initial noise will be less, the added bias will go away and the algorithm does inherit BDPT's high asymptotic performance. But in practice nobody waits that long! If you were willing to wait that long, you might as well have started with vanilla BDPT to begin with! After all, the whole reason we use PPM in the first place is that BDPT is too slow for some stuff.
Sometimes you actually do wait that long. It can easily happen that some caustics look a bit blurry initially; you then wait for some time for them to get sharper (and also to get rid of noise on the diffuse surfaces), and in the end the image looks much better overall than it would with BPT. Such scenes are shown in the PPM and SPPM papers, and you can also see plenty on the LuxRender forum.
Dietger wrote: In practice, we render only for a limited time (probably way too short for the asymptotic properties to have a significant impact). In that finite render time, PPM starts off doing things badly (bias artifacts around small objects) and then hopes to fix it later. The more PPM screws up initially, the longer it will take to fix later. If PPM had just started with an appropriately small but fixed radius for each pixel, it could realize a better quality/bias trade-off in the same render time. Unfortunately, we usually don't know this optimal radius (too big => bias, too small => noise), and that's why we are stuck with this reducing radius. The reducing radius should free us from having to guess the optimal radius, at the cost of some extra bias within the same render time. Unfortunately, it turns out that the extra bias/noise of using a too big/small initial radius can still be pretty significant, even when rendering for a reasonably long time. This is why the initial radius is still such an important parameter for the PPM algorithm, PPM's theoretical consistency notwithstanding.
I completely agree -- the choice of filter support in (P)PM is crucial for quality. And in VCM it plays an even more interesting role: it directly controls the relative weight of the vertex connection and vertex merging techniques. A smaller radius gives more weight to vertex connections. Arguably, the choice of radius is less crucial in VCM than in PPM, because it does not impact quality as directly. We actually argue for choosing a smaller initial radius than for PPM, as this results in less bias, and diffuse transport (which suffers most from small radii) is well handled by vertex connection. All that said, adaptive bandwidth selection is definitely an interesting, and also orthogonal, problem. It is not addressed in the paper and could additionally improve quality.
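To make the "radius controls the relative weight" point a bit more concrete, here is a minimal sketch of how the merge radius typically enters the balance-heuristic factors in a VCM-style implementation, as I understand the formulation. The struct and names below are just for illustration and are not the SmallVCM source:

Code: Select all

    // Minimal, untested sketch; names are illustrative, not the SmallVCM code.
    // etaVCM relates the pdf of merging with a light vertex to the pdf of
    // connecting to it: pdf_merge ~= pdf_connect * etaVCM.
    struct VcmMisFactors
    {
        float etaVCM;          // = numLightPaths * pi * mergeRadius^2
        float vmWeightFactor;  // converts a connection pdf into a merging pdf
        float vcWeightFactor;  // the inverse conversion

        VcmMisFactors(int numLightPaths, float mergeRadius)
        {
            const float pi = 3.14159265358979f;
            etaVCM         = float(numLightPaths) * pi * mergeRadius * mergeRadius;
            vmWeightFactor = etaVCM;
            vcWeightFactor = 1.0f / etaVCM;
        }
    };
Shrinking the radius shrinks etaVCM, so the balance heuristic automatically shifts weight away from merging and towards connections -- which is exactly the behaviour described above.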
Dietger wrote:But then again, Kelemen-MLT did manage to find them. So assuming that Veach-MLT has a reasonable probability of throwing away the first few eye vertices, it should have a decent probability of finding these caustics at least once in a while. And when MLT finds them, it will hang on! So that's why I expected at least some ugly bright splotches here and there :)
More worrying, in the 'Living room' scene, PT and BDPT have no problem at all sampling the direct light on the table in the mirror, so a properly set up Veach-MLT mutation should also have no problem there. Unless the mutation selection probabilities are set up exceptionally badly, I think something is off. Anyhow, I shouldn't complain, as this is probably the only half-decent Veach-MLT implementation out there :)
As I said, we will rerun the Mitsuba MLT tests with the latest version. Actually, Wenzel was also suspicious about the correctness of the preliminary Veach MLT implementation we used, which he was kind enough to provide well before the release of Mitsuba 0.4; that release reportedly includes numerous fixes.

ingenious
Posts: 282
Joined: Mon Nov 28, 2011 11:11 pm
Location: London, UK
Contact:

Re: Vertex merging paper and SmallVCM

Post by ingenious » Sun Oct 14, 2012 8:59 pm

Zelcious wrote:The last scene, with the two spheres and the blue and yellow walls, how are the lights in the roof constructed? The path tracing image looks really off to me, but I guess there is a natural explanation for it.
My best guess would be a lens in the middle not covering the entire hole in combination with next event estimation. Are the sides of the tube reflective?

If my guesses are correct, it would be quite interesting to see what happens if you turned those lenses into portals in combination with path tracing. Perhaps even importance sample every specular surface (to a lesser degree).
Each lamp is constructed like this: the tube is reflective. There is a lens at the bottom of the tube, and the top is actually the diffuse ceiling. And yes, the lens doesn't cover the whole tube, hence the nice circles of direct illumination in the PT image. Inside the tube there is a very small (floating) horizontal area light emitting downwards. The bright light-source reflections that are only seen in the PPM and VCM images are the ceiling inside the lamps, which is strongly illuminated by light reflected off the lens back inside.

Making the lens a portal would probably help BPT, but I'm not sure how much.
keldor314 wrote:PPM (and VCM) can be made to be unbiased simply by having each photon store information about the previous hit, rather than the current hit. Then when the eye path lands close to a photon, you simply shoot a ray toward the photon's source (be it directly at a light or simply the last thing the photon bounced off of). This allows you to use an arbitrarily large search radius for nearby photons, at the cost of tracing one extra ray per sample. Does anyone see any flaws with this idea?
Indeed you can reduce bias this way, and this has actually been tried before. See Ph. Bekaert's tech report "A Custom Designed Density Estimator for Light Transport". But you cannot make it fully unbiased because you need to compute the probability that a "photon" falls inside your search range, i.e. the acceptance probability integral that appears in the path pdf of the vertex merging formulation. And this integral cannot be computed analytically in general, because it depends on the visibility function, which in turn depends on the scene geometry, i.e. it's arbitrary. Analytic computation may be possible in certain cases, but I doubt it will be worth the pain/overhead in practice.
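To spell out the term I mean (roughly, not the paper's exact notation; A_r(x) is the disk of radius r around the eye vertex x, and p is the pdf of the light vertex position):

Code: Select all

    % Acceptance probability of a merge at eye vertex x with radius r:
    % the light sub-path vertex y must land inside the disk A_r(x) around x.
    \[
      P_{\mathrm{acc}}(x) \;=\; \int_{\mathcal{A}_r(x)} p(y)\, \mathrm{d}y
      \;\approx\; \pi r^2\, p(x)
    \]
    % The exact integral has no closed form in general, because p depends on
    % visibility and hence on the scene geometry; the pi*r^2 approximation is
    % what makes merging biased (but still consistent).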
keldor314 wrote:You could further classify photons into several groups based on their intensity, with brighter photons being searched for at a larger radius, while their contribution would be divided by the area of the search area (volume?). This should give some of the benefits of MLT.
This sounds interesting, but I'm not immediately sure it will be very useful. For example, it would blur sharp caustics/shadows a lot.

dbz
Posts: 46
Joined: Wed Jan 11, 2012 10:16 pm
Location: the Netherlands

Re: Vertex merging paper and SmallVCM

Post by dbz » Mon Oct 15, 2012 9:21 am

Interesting work. How would the different methods perform on outdoor scenes?

beason
Posts: 52
Joined: Sat Dec 10, 2011 1:58 am
Location: Los Angeles, CA

Re: Vertex merging paper and SmallVCM

Post by beason » Mon Oct 15, 2012 7:04 pm

Wow, very cool! The paper looks great, I love the JavaScript image comparison, plus SmallVCM looks really neat and comprehensive! I look forward to examining it further. Congrats and great work!

I am curious though. The SmallVCM webpage makes a reference to "SimplePT", but I cannot find what this is on Google. I have an idea what you may be referring to, but I'm not sure :)

tomasdavid
Posts: 22
Joined: Wed Oct 10, 2012 12:41 pm

Re: Vertex merging paper and SmallVCM

Post by tomasdavid » Mon Oct 15, 2012 8:47 pm

Oops, that would be a typo on my side.
I kinda kept confusing the two throughout the project, as SmallVCM was becoming less and less small (and, arguably, less and less simple). :-D

Should be fixed (to smallpt) now.

ingenious
Posts: 282
Joined: Mon Nov 28, 2011 11:11 pm
Location: London, UK
Contact:

Re: Vertex merging paper and SmallVCM

Post by ingenious » Mon Oct 15, 2012 8:52 pm

dbz wrote:Interesting work. How would the different methods perform on outdoor scenes?
Well, it depends on the scene, illumination and viewpoint. For most typical outdoor scenes, VCM will most likely look like BPT, since PPM is not really good at those. In general, it should take the best from a BPT image and a PPM image.

beason
Posts: 52
Joined: Sat Dec 10, 2011 1:58 am
Location: Los Angeles, CA

Re: Vertex merging paper and SmallVCM

Post by beason » Mon Oct 15, 2012 9:50 pm

tomasdavid wrote:Should be fixed (to smallpt) now.
Ah, thank you!

In case anyone else is interested, here are line counts:

Code: Select all

    67 ray.hxx
    68 materials.hxx
    72 renderer.hxx
    81 eyelight.hxx
    81 frame.hxx
   128 camera.hxx
   192 rng.hxx
   216 hashgrid.hxx
   238 pathtracer.hxx
   261 framebuffer.hxx
   261 utils.hxx
   268 geometry.hxx
   388 config.hxx
   396 html_writer.hxx
   421 math.hxx
   488 scene.hxx
   512 lights.hxx
   578 bsdf.hxx
   948 vertexcm.hxx
  5664 total
Thanks for sharing a sample implementation of several techniques, including your new one!

Dade
Posts: 206
Joined: Fri Dec 02, 2011 8:00 am

Re: Vertex merging paper and SmallVCM

Post by Dade » Tue Nov 20, 2012 8:46 am

Yesterday I rendered my very first image with BiDir Vertex Merging. It is pretty much a straight port of the SmallVCM code plus some of the old code used for SPPM. This is a 30-second rendering with the Metropolis sampler + BiDir on the CPU:
bidir.jpg
And this is a 30-second rendering with the Metropolis sampler + BiDir with VM on the CPU:
bidir-vm.jpg
Notice the classic SDS paths with caustics reflected in the mirror. I still have half a million things to tune and fix (for instance, the caustics look over-bright, probably something wrong with the MIS weights), but it already lets me start collecting some field experience with BiDir with VM:

1) VM is very easy to implement on top of an existing BiDir as an option. So easy that you may want to implement it even if it isn't strictly required by the kind of images you are going to render. It is just a useful option to make available to users.

2) Not surprisingly, it shares with SPPM some of the critical implementation points (time spent building the k-NN accelerator over the light vertices, lookup time for k-NN vertices, memory usage of the k-NN accelerator, etc.; see the hash-grid sketch after this list for the kind of structure I mean).

3) It also shares with SPPM some rendering characteristics. For instance, a large initial search radius leads to large initial bias (i.e. blurred caustics), which is good for previews (less perceived high-frequency noise in the early stages of rendering), etc.
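For reference, by the k-NN accelerator in point 2 I mean something along these lines (a minimal, untested sketch; not the actual SmallVCM or LuxRender code):

Code: Select all

    // Minimal, untested sketch of a light-vertex hash grid.
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct GridPoint { float x, y, z; int vertexIndex; };

    struct VertexHashGrid
    {
        float cellSize;                              // typically ~2x the merge radius
        std::vector<std::vector<GridPoint>> cells;   // memory usage lives here

        explicit VertexHashGrid(float cell, std::size_t numCells = 1u << 20)
            : cellSize(cell), cells(numCells) {}

        std::uint32_t hash(int ix, int iy, int iz) const
        {
            // Simple spatial hash; any decent mixing of the indices will do.
            const std::uint32_t h = std::uint32_t(ix) * 73856093u
                                  ^ std::uint32_t(iy) * 19349663u
                                  ^ std::uint32_t(iz) * 83492791u;
            return h % std::uint32_t(cells.size());
        }

        void insert(const GridPoint& p)              // build time lives here
        {
            const int ix = int(std::floor(p.x / cellSize));
            const int iy = int(std::floor(p.y / cellSize));
            const int iz = int(std::floor(p.z / cellSize));
            cells[hash(ix, iy, iz)].push_back(p);
        }

        // A range query visits the 3x3x3 block of cells around the query point
        // and tests each stored vertex against the merge radius (lookup time).
    };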
Last edited by Dade on Tue Nov 20, 2012 3:05 pm, edited 1 time in total.
