Vertex merging paper and SmallVCM
Re: Vertex merging paper and SmallVCM
In the last scene, with the two spheres and the blue and yellow walls, how are the lights in the roof constructed? The path tracing image looks really off to me, but I guess there is a natural explanation for that.
My best guess would be a lens in the middle, not covering the entire hole, in combination with next event estimation. Are the sides of the tube reflective?
If my guesses are correct, it would be quite interesting to see what happens if you turned those lenses into portals in combination with path tracing. Perhaps even importance sample every specular surface (to a lesser degree).
Re: Vertex merging paper and SmallVCM
PPM (and VCM) can be made unbiased simply by having each photon store information about the previous hit rather than the current hit. Then, when the eye path lands close to a photon, you simply shoot a ray toward the photon's source (be it directly at a light, or simply the last thing the photon bounced off). This allows you to use an arbitrarily large search radius for nearby photons, at the cost of tracing one extra ray per sample. Does anyone see any flaws with this idea?
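A minimal, self-contained sketch of that estimator, with hypothetical types and stubbed helper routines (Photon, Occluded, EvalBsdf, etc. are illustration only, not SmallVCM's actual API):

Code:
#include <cmath>

struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
inline float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Photon {
    Vec3 hitPos;   // where the photon landed: used only for the neighbor search
    Vec3 prevPos;  // the vertex it was scattered from (a light, or a surface)
    Vec3 flux;     // power carried when leaving prevPos
};

// Stubbed stand-ins for the renderer's real routines, so the sketch is
// self-contained; a real integrator would call its own versions.
inline bool Occluded(const Vec3&, const Vec3&) { return false; }
inline Vec3 EvalBsdf(const Vec3&, const Vec3&) { return {0.3f, 0.3f, 0.3f}; }
inline Vec3 EvalSourceThroughput(const Photon& p, const Vec3&) { return p.flux; }

// The proposal: instead of reusing the photon where it landed (the biased
// density-estimation step), trace one extra shadow ray from the eye vertex
// to the photon's previous vertex and evaluate an explicit connection.
Vec3 ConnectToPhotonSource(const Vec3& eyePos, const Photon& photon)
{
    Vec3 d = photon.prevPos - eyePos;
    float dist2 = Dot(d, d);
    if (dist2 <= 0 || Occluded(eyePos, photon.prevPos))
        return Vec3();
    Vec3 wi = d * (1.0f / std::sqrt(dist2));
    Vec3 f = EvalBsdf(eyePos, wi);                     // BSDF at the eye vertex
    Vec3 g = EvalSourceThroughput(photon, wi * -1.0f); // emission/BSDF at prevPos
    // The 1/dist^2 falloff of the connection; cosines are assumed folded
    // into the Eval* terms above.
    float inv = 1.0f / dist2;
    return {f.x * g.x * inv, f.y * g.y * inv, f.z * g.z * inv};
}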
You could further classify photons into several groups based on their intensity, with brighter photons being searched for at a larger radius, while their contribution would be divided by the area (or volume?) of the search region. This should give some of the benefits of MLT.
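One way to write that proposal down (notation mine, following the standard photon-map radiance estimate): partition the photons into intensity classes $C_k$, give each class its own search radius $r_k$, and sum the per-class density estimates,

$$\hat{L}(x,\omega) \approx \sum_k \frac{1}{\pi r_k^2} \sum_{j \in C_k,\ \|x_j - x\| < r_k} f_s(x, \omega_j, \omega)\, \Phi_j,$$

where $\Phi_j$ is the flux of photon $j$ and $f_s$ is the BSDF at the eye vertex. Brighter classes get a larger $r_k$, trading extra blur on their contribution for lower variance.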
Re: Vertex merging paper and SmallVCM
madd wrote: Are any of the test scenes available somewhere?
The Mirror balls scene is available on Toshiya Hachisuka's web page. The Car scene I downloaded from here and tweaked the materials and shading normals a bit; if there's interest, I can release that. The Living room and Bathroom scenes cannot be released, unfortunately.
Dietger wrote: Let me try to clarify what I meant here. Of course you are right. The initial noise will be less, the added bias will go away, and the algorithm does inherit BDPT's high asymptotic performance. But in practice nobody waits that long! If you were willing to wait that long, you might as well have started with vanilla BDPT to begin with! After all, the whole reason we use PPM in the first place is that BDPT is too slow for some stuff.
You sometimes actually do wait that long. It can easily happen that some caustics look a bit blurry initially; you then wait for some time for them to get sharper (and also to get rid of noise on the diffuse surfaces), and in the end the image looks much better overall than it would with BPT. Such scenes are shown in the PPM and SPPM papers, and you can see plenty more on the LuxRender forum.
Dietger wrote: In practice, we render only for a limited time (probably way too short for the asymptotic properties to have a significant impact). In that finite render time, PPM starts off doing things badly (bias artifacts around small objects) and then hopes to fix them later. The more PPM screws up initially, the longer it will take to fix later. If PPM just started with an appropriately small but fixed radius for each pixel, it could realize a better quality/bias trade-off in the same render time. Unfortunately, we usually don't know this optimal radius (too big => bias, too small => noise), which is why we are stuck with the shrinking radius. The shrinking radius should free us from having to guess the optimal radius, at the cost of some extra bias within the same render time. Unfortunately, it turns out that the extra bias/noise from using a too big/small initial radius can still be pretty significant, even when rendering for a reasonably long time. This is why the initial radius is still such an important parameter for the PPM algorithm, PPM's theoretical consistency notwithstanding.
I completely agree -- the choice of filter support in (P)PM is crucial for quality. And in VCM it plays an even more interesting role: it directly controls the relative weight of the vertex connection and merging techniques. A smaller radius means more weight on vertex connections. Arguably, the choice of radius is less crucial with VCM than with PPM, because it does not impact quality as directly. We actually argue for choosing a smaller initial radius than for PPM, as this results in less bias, and diffuse transport (which suffers most from small radii) is well handled by vertex connection. All that said, adaptive bandwidth selection is definitely an interesting, and also orthogonal, problem. It is not addressed in the paper and could additionally improve quality.
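For readers following along: the shrinking radius both posts refer to is typically driven by a schedule of the form below (this is the Knaus/Zwicker-style progressive formulation; $\alpha \in (0,1)$ is the bias/variance trade-off knob, commonly around 0.75):

$$r_i^2 = r_1^2\, i^{\,\alpha - 1},$$

so the radius vanishes as the iteration count $i$ grows, killing the bias in the limit, but slowly enough that the averaged estimate still converges. The debate above is precisely about how much the choice of $r_1$ still matters at finite render times despite this asymptotic guarantee.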
Dietger wrote: But then again, Kelemen-MLT did manage to find them. So, assuming that Veach-MLT has a reasonable probability of throwing away the first few eye vertices, it should have a decent probability of finding these caustics at least once in a while. And when MLT finds them, it will hang on! That's why I expected at least some ugly bright splotches here and there.
More worryingly, in the 'Living room' scene, PT and BDPT have no problem at all sampling the direct light on the table in the mirror, so a properly set up Veach-MLT mutation should also have no problem there. Unless the mutation selection probabilities are set up exceptionally badly, I think something is off. Anyhow, I shouldn't complain, as this is probably the only half-decent Veach-MLT implementation out there.
As I said, we will rerun the Mitsuba MLT tests with the latest version. Actually, Wenzel was also suspicious about the correctness of the preliminary Veach MLT implementation we used, which he was kind enough to provide well before the release of Mitsuba 0.4, which reportedly includes numerous fixes.
Re: Vertex merging paper and SmallVCM
Zelcious wrote: The last scene, with the two spheres and the blue and yellow walls, how are the lights in the roof constructed? The path tracing image looks really off to me, but I guess there is a natural explanation for that. My best guess would be a lens in the middle, not covering the entire hole, in combination with next event estimation. Are the sides of the tube reflective? If my guesses are correct, it would be quite interesting to see what happens if you turned those lenses into portals in combination with path tracing. Perhaps even importance sample every specular surface (to a lesser degree).
Each lamp is constructed like this: the tube is reflective, there is a lens at the bottom of the tube, and the top is actually the diffuse ceiling. And yes, the lens doesn't cover the whole tube, hence the nice circles of direct illumination in the PT image. Inside the tube there is a very small (floating) horizontal area light with emission pointing downwards. The bright light-source reflections that are only seen in the PPM and VCM images are the ceiling inside the lamps being strongly illuminated by light reflected off the lens back into the tube.
Making the lens a portal will probably help BPT, but I'm not sure how much.
keldor314 wrote: PPM (and VCM) can be made unbiased simply by having each photon store information about the previous hit rather than the current hit. Then, when the eye path lands close to a photon, you simply shoot a ray toward the photon's source (be it directly at a light, or simply the last thing the photon bounced off). This allows you to use an arbitrarily large search radius for nearby photons, at the cost of tracing one extra ray per sample. Does anyone see any flaws with this idea?
Indeed you can reduce bias this way, and this has actually been tried before; see Ph. Bekaert's tech report "A Custom Designed Density Estimator for Light Transport". But you cannot make it fully unbiased, because you need to compute the probability that a "photon" falls inside your search range, i.e. the acceptance probability integral that appears in the path pdf of the vertex merging formulation. This integral cannot be computed analytically in general, because it depends on the visibility function, which in turn depends on the scene geometry, i.e. it is arbitrary. Analytic computation may be possible in certain cases, but I doubt it would be worth the pain/overhead in practice.
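To make the obstacle concrete: the term in question is the probability that the light subpath vertex lands inside the merging disc of radius $r$ around the eye vertex $x$,

$$P_{\mathrm{acc}} = \int_{\mathcal{A}_r(x)} p(y)\, \mathrm{d}A(y) \approx \pi r^2\, p(x),$$

where $p$ is the area pdf of the photon's landing point (this is roughly how the vertex merging pdf is set up in the paper). The cheap approximation on the right is what makes (P)PM biased; evaluating the integral exactly would mean integrating $p$, and therefore visibility, over the whole disc.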
keldor314 wrote: You could further classify photons into several groups based on their intensity, with brighter photons being searched for at a larger radius, while their contribution would be divided by the area (or volume?) of the search region. This should give some of the benefits of MLT.
This sounds interesting, but I'm not immediately sure it would be very useful. For example, it would blur sharp caustics/shadows a lot.
Re: Vertex merging paper and SmallVCM
Interesting work. How would the different methods perform on outdoor scenes?
Re: Vertex merging paper and SmallVCM
Wow, very cool! The paper looks great, I love the javascript image comparison, plus SmallVCM looks really neat and comprehensive! I look forward to examining it further. Congrats and great work!
I am curious though. The SmallVCM webpage makes a reference to "SimplePT", but I cannot find what this is on Google. I have an idea what you may be referring to, but I'm not sure.

Re: Vertex merging paper and SmallVCM
Oops, that would be a typo on my side.
Kinda kept confusing the two words throughout the project, as SmallVCM was becoming less and less small (and, arguably, less and less simple).
Should be fixed (to smallpt) now.
Re: Vertex merging paper and SmallVCM
dbz wrote: Interesting work. How would the different methods perform on outdoor scenes?
Well, it depends on the scene, illumination, and viewpoint. In most typical outdoor scenes, VCM will most likely look like BPT, since PPM is not really good at those. In general, it should take the best from a BPT image and a PPM image.
Re: Vertex merging paper and SmallVCM
tomasdavid wrote: Should be fixed (to smallpt) now.
Ah, thank you!
In case anyone else is interested, here are line counts:
Code:
67 ray.hxx
68 materials.hxx
72 renderer.hxx
81 eyelight.hxx
81 frame.hxx
128 camera.hxx
192 rng.hxx
216 hashgrid.hxx
238 pathtracer.hxx
261 framebuffer.hxx
261 utils.hxx
268 geometry.hxx
388 config.hxx
396 html_writer.hxx
421 math.hxx
488 scene.hxx
512 lights.hxx
578 bsdf.hxx
948 vertexcm.hxx
5664 total
Re: Vertex merging paper and SmallVCM
Yesterday I rendered my very first image with BiDir Vertex Merging. It is pretty much a straight port of the SmallVCM code, plus some of the old code used for SPPM. This is a 30-second rendering with the Metropolis sampler + BiDir on the CPU:
And this is a 30-second rendering with the Metropolis sampler + BiDir with VM on the CPU:
Notice the classic SDS paths with the caustics reflected in the mirror. I still have half a million things to tune and fix (for instance, the caustics look over-bright, probably something wrong with the MIS weights), but it already lets me start collecting some field experience with BiDir with VM:
1) VM is very easy to implement as an option on top of an existing BiDir. So easy that you may want to implement it even if it isn't strictly required by the kind of images you are going to render; it is simply a useful option to offer users.
2) Not surprisingly, it shares some critical implementation points with SPPM (time spent building the k-NN accelerator over the light vertices, lookup time for k-NN vertices, memory usage of the accelerator, etc.); see the sketch after this list.
3) It also shares some rendering characteristics with SPPM. For instance, a large initial search radius leads to large initial bias (i.e. blurred caustics), which is good for previews (less perceived high-frequency noise in the early stages of the rendering), etc.
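Regarding point 2, here is a minimal self-contained sketch of the kind of fixed-radius hash grid involved (names and structure are illustrative only; SmallVCM's hashgrid.hxx follows the same idea but differs in detail):

Code:
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

// Fixed-radius neighbor grid: the cell size equals the search radius, so
// every point within 'radius' of a query lies in the 3x3x3 block of cells
// around it.
class PhotonGrid
{
public:
    PhotonGrid(const std::vector<Vec3>& positions, float radius)
        : mPositions(positions), mInvCell(1.0f / radius), mRadius(radius)
    {
        for (uint32_t i = 0; i < (uint32_t)mPositions.size(); ++i)
        {
            const Vec3& p = mPositions[i];
            mCells[Key(Cell(p.x), Cell(p.y), Cell(p.z))].push_back(i);
        }
    }

    // Invokes visit(index) for every stored point within 'radius' of p,
    // e.g. to accumulate a merging contribution.
    template <typename Visitor>
    void Query(const Vec3& p, Visitor visit) const
    {
        const int cx = Cell(p.x), cy = Cell(p.y), cz = Cell(p.z);
        const float r2 = mRadius * mRadius;
        for (int z = cz - 1; z <= cz + 1; ++z)
        for (int y = cy - 1; y <= cy + 1; ++y)
        for (int x = cx - 1; x <= cx + 1; ++x)
        {
            const auto it = mCells.find(Key(x, y, z));
            if (it == mCells.end()) continue;
            for (uint32_t idx : it->second)
            {
                const Vec3& q = mPositions[idx];
                const float dx = q.x - p.x, dy = q.y - p.y, dz = q.z - p.z;
                if (dx*dx + dy*dy + dz*dz <= r2)
                    visit(idx);
            }
        }
    }

private:
    int Cell(float v) const { return (int)std::floor(v * mInvCell); }

    // Spatial hash of the (signed) integer cell coordinates.
    static uint64_t Key(int x, int y, int z)
    {
        return ((uint64_t)(uint32_t)x * 73856093u) ^
               ((uint64_t)(uint32_t)y * 19349663u) ^
               ((uint64_t)(uint32_t)z * 83492791u);
    }

    std::vector<Vec3> mPositions;
    float mInvCell, mRadius;
    std::unordered_map<uint64_t, std::vector<uint32_t>> mCells;
};

Building is O(n), and each query touches at most 27 cells; the main practical costs are exactly the ones listed above: build time, lookup time, and the memory held by the cell lists.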