
This paper introduces a new robust adaptive Metropolis algorithm estimating the shape of the target distribution and simultaneously coercing the acceptance rate.

I have only skimmed through the mentioned paper, though your summary reminds me of Toshiya Hachisuka's paper on optimizing for an ideal acceptance ratio: http://www.ci.i.u-tokyo.ac.jp/~hachisuka/amcmcppm.pdf

Statistics: Posted by zsolnai — Sat Mar 28, 2015 8:35 am


A general construction for parallelizing Metropolis-Hastings

This is a paper on augmenting Metropolis sampling with multiple proposals in a way that, unlike earlier approaches such as multiple-try Metropolis and prefetching, does not discard all but one of the proposals in each iteration.

Adaptive Markov Chain Monte Carlo

Adaptive Markov Chain Monte Carlo (AMCMC) methods adapt the parameters of the sampler (usually those of the proposal distribution) to achieve an ideal acceptance ratio.

Exploring an Adaptive Metropolis Algorithm

This is a paper on adapting the logarithm of the step size, which allows it to find a step size that fulfills the target acceptance ratio much faster than adapting the step size directly.
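As a concrete illustration, here is a minimal random-walk Metropolis sketch that adapts the logarithm of the step size toward a target acceptance ratio with a diminishing-step stochastic-approximation update. The target rate of 0.234, the decay exponent, and the update schedule are my own assumptions, not the paper's exact algorithm.

```python
import numpy as np

def adaptive_rwm(logpdf, x0, n_iter=5000, target_rate=0.234, seed=0):
    """Random-walk Metropolis that adapts the *log* of the step size
    toward a target acceptance rate (Robbins-Monro style update).
    A sketch only; the paper's exact schedule may differ."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = logpdf(x)
    log_step = 0.0            # adapt log(sigma), not sigma itself
    samples = []
    for n in range(1, n_iter + 1):
        prop = x + np.exp(log_step) * rng.standard_normal(x.shape)
        lp_prop = logpdf(prop)
        accept_prob = min(1.0, np.exp(lp_prop - lp))
        if rng.random() < accept_prob:
            x, lp = prop, lp_prop
        # diminishing-step stochastic approximation on the log scale:
        # the step size can shrink or grow by the same multiplicative
        # factor, which is why log-scale adaptation converges quickly
        log_step += (accept_prob - target_rate) / n ** 0.6
        samples.append(x.copy())
    return np.array(samples), np.exp(log_step)

# Usage: sample a standard 2D Gaussian; the step size settles near a
# value giving roughly the target acceptance rate.
samples, step = adaptive_rwm(lambda x: -0.5 * np.dot(x, x), np.zeros(2))
```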

ROBUST ADAPTIVE METROPOLIS ALGORITHM WITH COERCED ACCEPTANCE RATE

This paper introduces a new robust adaptive Metropolis algorithm estimating the shape of the target distribution and simultaneously coercing the acceptance rate.
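For flavor, here is a stripped-down sketch of the kind of rank-one coercion such a scheme uses: the proposal factor S (with proposal covariance S·Sᵀ) is nudged after each step so the acceptance rate drifts toward a target. The adaptation rate, decay exponent, and target 0.234 are assumptions; this is not necessarily the paper's exact update.

```python
import numpy as np

def ram_sample(logpdf, x0, n_iter=4000, target_rate=0.234, seed=0):
    """Random-walk Metropolis whose proposal factor S is coerced by a
    rank-one update so the acceptance rate approaches target_rate,
    while S S^T tracks the shape of the target. A sketch only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    d = len(x)
    lp = logpdf(x)
    S = np.eye(d)                     # proposal covariance is S @ S.T
    chain = []
    for n in range(1, n_iter + 1):
        u = rng.standard_normal(d)
        prop = x + S @ u
        lp_prop = logpdf(prop)
        alpha = min(1.0, np.exp(lp_prop - lp))
        if rng.random() < alpha:
            x, lp = prop, lp_prop
        # rank-one coercion: inflate S along u if we accepted more
        # than the target rate, deflate it if we accepted less
        eta = min(1.0, d * n ** -0.66)
        M = S @ (np.eye(d) + eta * (alpha - target_rate)
                 * np.outer(u, u) / (u @ u)) @ S.T
        S = np.linalg.cholesky(M)     # M stays SPD since eta <= 1
        chain.append(x.copy())
    return np.array(chain), S
```

Usage: `ram_sample(lambda x: -0.5 * np.dot(x, x), np.zeros(2))` coerces the acceptance rate on a 2D standard Gaussian while S adapts toward its (spherical) shape.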

Kernel Adaptive Metropolis-Hastings

A Kernel Adaptive Metropolis-Hastings algorithm is introduced, for the purpose of sampling from a target distribution with strongly nonlinear support. The algorithm embeds the trajectory of the Markov chain into a reproducing kernel Hilbert space (RKHS), such that the feature-space covariance of the samples informs the choice of proposal. The procedure is computationally efficient and straightforward to implement, since the RKHS moves can be integrated out analytically: our proposal distribution in the original space is a normal distribution whose mean and covariance depend on where the current sample lies in the support of the target distribution, and adapts to its local covariance structure.

Link to example code
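As a rough illustration of the core idea of a location-dependent Gaussian proposal, the sketch below builds a kernel-weighted covariance of past samples around the current point. To be clear, this is NOT the paper's RKHS construction (which integrates the feature-space move out analytically); the Gaussian weighting, bandwidth, and jitter term here are assumptions purely for illustration.

```python
import numpy as np

def local_gaussian_proposal(x, history, bandwidth=1.0, gamma=0.1, rng=None):
    """Draw from a Gaussian proposal centered at x whose covariance
    tracks the local spread of past chain samples near x, using
    Gaussian kernel weights. A loose illustration of a locally
    adapted proposal, not the KAMH algorithm itself."""
    rng = rng or np.random.default_rng()
    Z = np.asarray(history, float)
    # kernel weights: samples near x dominate the local covariance
    w = np.exp(-0.5 * np.sum((Z - x) ** 2, axis=1) / bandwidth ** 2)
    w /= w.sum()
    mean = w @ Z
    centered = Z - mean
    # weighted covariance plus isotropic jitter so it stays positive
    # definite even when few samples lie near x
    cov = (centered * w[:, None]).T @ centered + gamma ** 2 * np.eye(len(x))
    return rng.multivariate_normal(x, cov)
```

With history drawn from a curved or banana-shaped target, the proposal automatically elongates along the locally dominant direction of the support.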

Adaptive Metropolis Algorithm Using Variational Bayesian Adaptive Kalman Filter

A new adaptive MCMC algorithm, called the variational Bayesian adaptive Metropolis (VBAM) algorithm, is developed. The VBAM algorithm updates the proposal covariance matrix using the variational Bayesian adaptive Kalman filter (VB-AKF). A strong law of large numbers for the VBAM algorithm is proven. Empirical convergence results for three simulated examples and for two real-data examples are also provided.

DRAM: Efficient adaptive MCMC

We propose to combine two quite powerful ideas that have recently appeared in the Markov chain Monte Carlo literature: adaptive Metropolis samplers and delayed rejection. The ergodicity of the resulting non-Markovian sampler is proved, and the efficiency of the combination is demonstrated with various examples. We present situations where the combination outperforms the original methods: adaptation clearly enhances efficiency of the delayed rejection algorithm in cases where good proposal distributions are not available. Similarly, delayed rejection provides a systematic remedy when the adaptation process has a slow start.

web page

Learn From Thy Neighbor: Parallel-Chain and Regional Adaptive MCMC

Starting with the seminal paper of Haario, Saksman and Tamminen (Haario et al. (2001)), a substantial amount of work has been done to validate adaptive Markov chain Monte Carlo algorithms. In this paper we focus on two practical aspects of adaptive Metropolis samplers. First, we draw attention to the deficient performance of standard adaptation when the target distribution is multi-modal. We propose a parallel chain adaptation strategy that incorporates multiple Markov chains which are run in parallel. Second, we note that the current adaptive MCMC paradigm implicitly assumes that the adaptation is uniformly efficient on all regions of the state space. However, in many practical instances, different “optimal” kernels are needed in different regions of the state space. We propose here a regional adaptation algorithm in which we account for possible errors made in defining the adaptation regions.

Slides

Delayed Rejection

Delayed rejection schemes for efficient Markov-Chain Monte-Carlo sampling of multimodal distributions

A number of problems in a variety of fields are characterised by target distributions with a multimodal structure in which the presence of several isolated local maxima dramatically reduces the efficiency of Markov chain Monte Carlo sampling algorithms. Several solutions, such as simulated tempering or the use of parallel chains, have been proposed to facilitate the exploration of the relevant parameter space. They provide effective strategies in the cases in which the dimension of the parameter space is small and/or the computational costs are not a limiting factor. These approaches fail however in the case of high-dimensional spaces where the multimodal structure is induced by degeneracies between regions of the parameter space. In this paper we present a fully Markovian way to efficiently sample this kind of distribution based on the general Delayed Rejection scheme with an arbitrary number of steps, and provide details for an efficient numerical implementation of the algorithm.
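To make the scheme concrete, here is a minimal two-stage delayed-rejection Metropolis sketch with symmetric Gaussian proposals: a bold first stage and, upon rejection, a more timid second stage whose acceptance probability is corrected for the failed first try. The restriction to two stages and the particular scales are my assumptions; the paper develops the general case with an arbitrary number of stages.

```python
import numpy as np

def dr_metropolis(logpdf, x0, n_iter=6000, s1=2.5, s2=0.5, seed=0):
    """Two-stage delayed-rejection Metropolis with symmetric Gaussian
    proposals: scale s1 first, and on rejection a second try at the
    smaller scale s2 with the properly corrected acceptance rule."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, float))
    lp = logpdf(x)
    chain = []
    for _ in range(n_iter):
        y1 = x + s1 * rng.standard_normal(x.shape)
        lp1 = logpdf(y1)
        a1 = min(1.0, np.exp(lp1 - lp))
        if rng.random() < a1:
            x, lp = y1, lp1
        else:
            # second stage: smaller move, acceptance corrected for the
            # reverse path (propose y1 from y2, reject, then reach x)
            y2 = x + s2 * rng.standard_normal(x.shape)
            lp2 = logpdf(y2)
            a1_rev = min(1.0, np.exp(lp1 - lp2))  # P(accept y1 from y2)
            # ratio of first-stage proposal densities q1(y2->y1)/q1(x->y1)
            q1_ratio = np.exp((np.sum((y1 - x) ** 2)
                               - np.sum((y1 - y2) ** 2)) / (2 * s1 ** 2))
            num = np.exp(lp2 - lp) * q1_ratio * (1.0 - a1_rev)
            den = 1.0 - a1
            if den > 0 and rng.random() < min(1.0, num / den):
                x, lp = y2, lp2
        chain.append(x.copy())
    return np.array(chain)
```

The second stage rescues iterations where the bold proposal overshoots, which is exactly the situation the multimodal targets above create.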

Markov Chain Quasi Monte Carlo

Markov Chain Quasi Monte Carlo (MCQMC) methods reformulate Markov Chain Monte Carlo methods to allow for the use of low-discrepancy sequences instead of a pseudo-random number generator (PRNG).
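As a toy illustration of the idea, the sketch below drives a Metropolis sampler with the successive outputs of a small linear congruential generator instead of a PRNG stream; full-period outputs of small generators are the canonical examples of (approximately) completely uniformly distributed driver sequences discussed in the papers below. The target, step size, and the MINSTD parameter choice are assumptions.

```python
import numpy as np
from statistics import NormalDist

def lcg_driver(n, seed=1, a=48271, m=2**31 - 1):
    """Deterministic uniforms in (0, 1): successive outputs of the
    MINSTD LCG, standing in for a completely uniformly distributed
    (CUD) driver sequence."""
    out, s = np.empty(n), seed
    for i in range(n):
        s = (a * s) % m
        out[i] = s / m
    return out

def mh_with_driver(logpdf, x0, driver, step=1.0):
    """1D Metropolis driven by a supplied uniform sequence; each
    iteration consumes two numbers: one mapped through the inverse
    normal CDF for the proposal, one for the accept/reject test."""
    inv = NormalDist().inv_cdf
    x, lp, out = float(x0), logpdf(x0), []
    for i in range(0, len(driver) - 1, 2):
        y = x + step * inv(driver[i])
        lp_y = logpdf(y)
        if driver[i + 1] < min(1.0, np.exp(lp_y - lp)):
            x, lp = y, lp_y
        out.append(x)
    return np.array(out)

# Usage (toy): sample N(0, 1) with the deterministic driver.
chain = mh_with_driver(lambda z: -0.5 * z * z, 0.0, lcg_driver(20000))
```

Swapping the driver array for `np.random.default_rng().random(20000)` recovers ordinary Metropolis, which makes the two variants easy to compare on the same target.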

Discrepancy bounds for uniformly ergodic Markov chain quasi-Monte Carlo

In [Chen, D., Owen, Ann. Stat., 39, 673–701, 2011] Markov chain Monte Carlo (MCMC) was studied under the assumption that the driver sequence is a deterministic sequence rather than independent U(0, 1) random variables. Therein it was shown that as long as the driver sequence is completely uniformly distributed, the Markov chain consistently samples the target distribution. The present work extends these results by providing bounds on the convergence rate of the discrepancy between the empirical distribution of the Markov chain and the target distribution, under the assumption that the Markov chain is uniformly ergodic.

In a general setting we show the existence of driver sequences for which the discrepancy of the Markov chain from the target distribution with respect to certain test sets converges with (almost) the usual Monte Carlo rate of n^(-1/2).

CONSISTENCY AND CONVERGENCE RATE OF MARKOV CHAIN QUASI MONTE CARLO WITH EXAMPLES

Markov Chain Monte Carlo methods have been widely used in various science areas for the generation of samples from distributions that are difficult to simulate directly. The random numbers driving Markov Chain Monte Carlo algorithms are modeled as independent U[0, 1) random variables. By constructing a Markov Chain, we are able to sample from its stationary distribution if irreducibility and Harris recurrence conditions are satisfied. The class of distributions that can be simulated is largely broadened by using Markov Chain Monte Carlo.

Quasi-Monte Carlo, on the other hand, aims to improve the accuracy of estimation of an integral over the unit cube [0, 1)^k. By using more carefully balanced inputs, under some smoothness conditions the estimation error can be shown to converge at a speed of O(log^k n / n), while the plain Monte Carlo method can only give a convergence rate of O_p(1/√n). The improvement is significant when n is large.
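A quick numerical illustration of that rate gap: estimate ∫∫ uv du dv = 1/4 over the unit square with a Halton point set (radical-inverse sequences in bases 2 and 3) versus plain pseudo-random points. The integrand and the sample size are arbitrary choices for illustration.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the radical-inverse (van der Corput)
    sequence in the given base."""
    out = np.empty(n)
    for i in range(n):
        x, f, k = 0.0, 1.0 / base, i + 1
        while k > 0:
            x += f * (k % base)
            k //= base
            f /= base
        out[i] = x
    return out

# Integrate f(u, v) = u * v over [0, 1)^2 (exact value 1/4) with a
# Halton point set (bases 2 and 3) versus pseudo-random points.
n = 4096
qmc = van_der_corput(n, 2) * van_der_corput(n, 3)
mc = np.random.default_rng(0).random((n, 2)).prod(axis=1)
err_qmc = abs(qmc.mean() - 0.25)
err_mc = abs(mc.mean() - 0.25)
```

At this n the QMC error is typically orders of magnitude below the ~1/√n scale of the Monte Carlo estimate, matching the rates quoted above.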

In other words, Markov Chain Monte Carlo is creating a larger world, and quasi-Monte Carlo is creating a better world. Ideally we would like to combine these two techniques, so that we can sample more accurately from a larger class of distributions. This method, called Markov Chain quasi-Monte Carlo (MCQMC), is the main topic of this work.

MARKOV CHAIN MONTE CARLO ALGORITHMS USING COMPLETELY UNIFORMLY DISTRIBUTED DRIVING SEQUENCES

The advantage of low-discrepancy sequences in lieu of random numbers for simple independent Monte Carlo sampling is well-known. This procedure, known as quasi-Monte Carlo (QMC), yields an integration error that decays at a superior rate to that obtained by IID sampling, by the Koksma-Hlawka inequality. For the class of Markov chain Monte Carlo (MCMC) samplers, little literature has been produced examining the use of low-discrepancy sequences, and previous experiments have offered no theoretical validation for this practice. The central result in this work is the establishment of conditions under which low-discrepancy sequences can be used for consistent MCMC estimation. This condition of completely uniform distribution (CUD) applies to a series of sequences that look like full outputs of a small random number generator. A strategy for the incorporation of these sequences into a general MCMC sampling scheme is thoroughly developed here, with attention to the preservation of the CUD condition. The use of these sequences in a few MCMC examples shows reductions in estimate error that are most significant in Gibbs samplers. From these examples, the empirical benefits of CUD sequences in MCMC sampling are immense, although no analog of the Koksma-Hlawka inequality has been produced for MCMC to provide a general theoretical corroboration of these improvements.

A Quasi-Monte Carlo Metropolis Algorithm

This paper presents a version of the Metropolis-Hastings algorithm using quasi-Monte Carlo inputs. We prove that the method yields consistent estimates in some problems with finite state spaces and completely uniformly distributed inputs. In some numerical examples, the proposed method is much more accurate than ordinary Metropolis-Hastings sampling.

Interacting Markov Chain Monte Carlo

Parallel hierarchical sampling: a general-purpose class of multiple-chains MCMC algorithms

This paper introduces the Parallel Hierarchical Sampler (PHS), a class of Markov chain Monte Carlo algorithms using several interacting chains having the same target distribution but different mixing properties.

Statistics: Posted by Paleos — Fri Mar 27, 2015 11:15 pm


Features:

Single scattering,

Torrance-Sparrow illumination model for surfaces,

Homogeneous Media,

Depth of Field,

Motion Blur (for camera and objects)

Here is a real-time demo and source code on Shadertoy.

Statistics: Posted by koiava — Fri Mar 27, 2015 2:41 pm


Statistics: Posted by Geri — Thu Mar 26, 2015 12:54 am


In his thesis, Jacco gives a great description of how he actually uses it (page 116).

Statistics: Posted by ultimatemau — Sat Mar 21, 2015 5:37 pm


I'd like to take this opportunity to advertise Visionaray, which is (yet another) ray tracing framework.

https://github.com/szellmann/visionaray

https://github.com/szellmann/visionaray/wiki

Visionaray is based on generic programming with C++. In contrast to other frameworks, its main goal is to achieve platform independence: write "kernels" in C++, and have "schedulers" (CUDA, TBB, C++ threads) execute the kernels on the hardware you desire. Users write their own kernels, though the framework provides a few predefined ones (primary rays only, Whitted, path tracing, ...).

Visionaray is the preliminary result of a hobby project of mine that I mainly pursue in my spare time, and for a few weeks now partially during my work at the University of Cologne. Visionaray is open source (MIT license). At its current state, I consider it not yet mature enough to be useful in a real project, but we (at this time it's basically a student of mine and me) are working on it.

Companion projects are concerned with ray tracing in VR (a plugin for the VR renderer OpenCOVER (https://github.com/hlrs-vis/covise) comes with Visionaray) and ray tracing on FPGAs (we're working on a Xilinx Vivado HLS scheduler and porting code to fixed point).

Best,

Stefan

Statistics: Posted by szellmann — Sat Mar 21, 2015 12:47 pm


Nice work guys!

http://blogs.nvidia.com/blog/2015/03/19 ... death-ray/

Statistics: Posted by koiava — Sat Mar 21, 2015 7:41 am


ultimatemau wrote:

Looks good. If they use filtering, I seriously hope it's not mine. You said they use MIS/RIS? Do you have the source that mentions this? Cheers

Of course I don't know what type of filtering they actually use, but it looks very promising.

These days I don't know of any production renderer without MIS.

In an earlier version, when Jacco wrote this engine, he used both MIS and RIS. RIS greatly decreases variance in scenes with a high number of light sources, like the night scene at the end of the video. In his thesis, Jacco gives a great description of how he actually uses it (page 116).

Statistics: Posted by koiava — Sat Mar 21, 2015 6:50 am


Statistics: Posted by ultimatemau — Sat Mar 21, 2015 3:53 am


There is only static geometry with instancing, unidirectional PT with MIS/RIS, and there is definitely filtering on top. I think the filtering technique is close to this one. I said it previously and still think that "Ray Histogram Fusion" can work in real-time on the GPU (maybe with some slight modifications).

Statistics: Posted by koiava — Fri Mar 20, 2015 8:35 am


Statistics: Posted by xma — Thu Mar 19, 2015 4:46 pm


yiningkarlli wrote:

My renderer has a layered shading system not too dissimilar from Vray's layered shader or Renderman 19/RIS's LM system. So the chair uses a layered BSDF with a base diffuse BSDF with the color, and then a clearcoat layer on top and then a couple of layers with fingerprints and scratches and stuff, although the fingerprints and scratches don't really end up being visible...

This piques my curiosity a bit more.

Do you have a compound layered BTDF that computes all the layer effects together, or is each layer considered a different medium which you trace through one after the other?

Statistics: Posted by ypoissant — Tue Mar 10, 2015 1:37 am
