Feynman-Kac models and interacting particle systems
A brief introduction to stochastic particle methods
This website is devoted to Feynman-Kac models and their interacting particle interpretations. These stochastic models are increasingly used to sample from complex high-dimensional distributions. They approximate a given target probability distribution by a large cloud of random samples termed particles. In practice, the particles evolve randomly and independently in the state space, and each particle carries a positive potential function. Periodically, particles with high potentials are duplicated at the expense of particles with low potentials, which die. This intuitive genetic mutation-selection mechanism appears in numerous applications, including nonlinear filtering, Bayesian statistics, combinatorial counting, molecular and polymer simulation, rare event simulation, financial option pricing, quantum Monte Carlo methods, and genetic algorithms, among others.
From a mathematical point of view, these methods can be interpreted as stochastic numerical approximations of Feynman-Kac measures. These measures represent the distribution of the paths of a reference Markov process, weighted by a collection of potential functions. Such functional models are natural mathematical extensions of the traditional change of probability measures commonly used in importance sampling. The particle interpretation consists in evolving a population of particles that mimics natural evolution mechanisms. During the mutation stage, the particles evolve independently of one another, according to the same probability transitions as the reference Markov chain. During the selection stage, each particle evaluates the potential value of its location. The ones with small relative values are killed, while the ones with high relative values are multiplied. The corresponding genealogical tree occupation measure converges, as the population size tends to infinity, to the complete Feynman-Kac distribution on path space.
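The mutation-selection mechanism described above can be sketched in a few lines of code. The following is a minimal, purely illustrative Python sketch, not any specific published algorithm: the mutation kernel (a Gaussian random walk) and the potential function (a Gaussian weight) are arbitrary assumptions chosen to make the example self-contained, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(x):
    # Mutation stage: each particle moves independently according to the
    # reference Markov transition (here, an assumed Gaussian random walk).
    return x + rng.normal(0.0, 1.0, size=x.shape)

def potential(x):
    # Positive potential function G (here, an assumed Gaussian weight).
    return np.exp(-0.5 * x**2)

def feynman_kac_particles(n_particles=1000, n_steps=10):
    x = rng.normal(0.0, 1.0, size=n_particles)  # initial particle cloud
    log_norm = 0.0  # running log of the normalizing-constant estimate
    for _ in range(n_steps):
        x = mutate(x)                     # mutation stage
        g = potential(x)
        log_norm += np.log(g.mean())      # product of mean potentials
        w = g / g.sum()
        # Selection stage: multinomial resampling -- particles with high
        # potential are duplicated, particles with low potential die.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return x, log_norm
```

As the number of particles grows, the empirical measure of the cloud approximates the corresponding Feynman-Kac distribution, in the sense described in the text above.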
The origins of stochastic particle simulation certainly go back to the seminal paper of N. Metropolis and S. Ulam: The Monte Carlo Method. Journal of the American Statistical Association, Vol. 44, No. 247, pp. 335-341 (1949).
As explained by these two physicists in the introduction of their pioneering article, the Monte Carlo method is, "essentially, a statistical approach to the study of differential equations, or more generally, of integro-differential equations that occur in various branches of the natural sciences". The links between genetic-type particle Monte Carlo models and quadratic-type parabolic integro-differential equations were developed in the early 2000s in a series of articles on continuous-time models.
The earliest works on heuristic genetic-type particle schemes seem to have started at the Los Alamos National Laboratory with the works of M.N. Rosenbluth and A.W. Rosenbluth [Monte-Carlo calculations of the average extension of macromolecular chains. J. Chem. Phys., 23:356-359, 1955], and T.E. Harris and H. Kahn [Estimation of particle transmission by random sampling. Natl. Bur. Stand. Appl. Math. Ser., 12:27-30, 1951]. We also mention the work on artificial life of Nils Aall Barricelli at the Institute for Advanced Study in Princeton
[Esempi numerici di processi di evoluzione, Methodos, pp. 45-68, 1954 &
Symbiogenetic evolution processes realized by artificial methods, Methodos, Vol. 9:35-36, pp. 143-182, 1957].
In all of these works, the genetic Monte Carlo scheme is always presented as a natural heuristic resampling-type algorithm to generate random population models, to sample molecular conformations, or to estimate high-energy particle distributions, without a single convergence estimate ensuring the performance or the robustness of the Monte Carlo sampler.
These genetic-type particle methods appear in the literature under a variety of names: particle filters, bootstrap or genetic filters, population Monte Carlo methods, sequential Monte Carlo models, genetic search models, branching and multi-level splitting particle rare event simulations, condensation models, go-with-the-winner, spawning models, walker population reconfigurations, pruning-enrichment strategies, quantum and diffusion Monte Carlo, rejuvenation models, and many others.
The mathematical foundations and the performance analysis of all of these discrete generation particle models are rather recent. The first rigorous study in this field seems to be the article published in 1996 on the applications of particle methods to nonlinear estimation problems:
This article provides the first proof of the unbiasedness property of particle likelihood approximation models (Lemma 3, page 12), as well as adaptive resampling criteria w.r.t. the weight dispersions (see Remark 1 on p. 4).
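Adaptive resampling criteria of this kind are nowadays commonly formalized through the effective sample size (ESS), which measures the dispersion of the normalized weights. The sketch below is a generic illustration of this idea, not the specific criterion of the article cited above; the 0.5 threshold is a common but arbitrary assumption.

```python
import numpy as np

def effective_sample_size(weights):
    # ESS = 1 / sum(w_i^2) for normalized weights: it equals N when the
    # weights are uniform and degrades to 1 when a single weight dominates.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w**2)

def should_resample(weights, threshold=0.5):
    # Resample only when the ESS drops below a fixed fraction of the
    # population size, i.e. when the weight dispersion becomes too large.
    return effective_sample_size(weights) < threshold * len(weights)
```

With uniform weights the criterion never triggers; with a highly degenerate weight vector it does, prompting a selection step.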
This article also presents the first convergence results for a new class of interacting particle filters, originally presented as heuristic Monte Carlo schemes in the early 1990s by three independent schools.
A series of classified industrial research contracts on tracking and control developed between 1990 and 1993 by the LAAS-CNRS team: P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut.
These two articles are mainly concerned with the rigorous analysis of i.i.d. weighted Monte Carlo particle models and conditional explorations of particles w.r.t. the sequence of observations, as well as the description of the heuristic sampling/resampling particle filter scheme (Section 3.5.1) on simplified Singer models arising in radar signal processing.
Concrete applications of these stochastic algorithms to GPS/INS integration in critical situations are discussed in the following article
The limiting Feynman-Kac probability measures discussed above are closely related to the so-called infinite population model arising in the evolutionary computation literature. In this connection, we underline that the discrete prediction/updating filtering equation discussed in the articles on the convergence properties of particle filters [1,2,3,5,6], as well as in the articles by Gordon, Salmond and Smith on the bootstrap filter and the one by Kitagawa on the Monte Carlo filter, coincides with the infinite population GA evolution equations discussed, for instance, by Ermakov and Zhiglyavskii in an article on genetic algorithms published in 1984 (see equation (6)), in the article by Qi and Palmieri (see equation (4)), and more recently in the article by Song and Li discussing issues and wrong convergence proofs in widely cited studies on genetic algorithms in continuous space (see equation (1)). As mentioned earlier, particle filters, bootstrap filters, and Monte Carlo filters also coincide with the genetic algorithm with roulette-wheel selection.
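The roulette-wheel selection mentioned above can be stated in a few lines; in GA terms, each individual is drawn with probability proportional to its fitness, which is exactly the multinomial resampling step of a particle filter with the potential playing the role of fitness. The sketch below is a generic illustration under that reading; the function name is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def roulette_wheel_selection(population, fitness):
    # Fitness-proportional (roulette-wheel) selection: individual i is
    # drawn with probability fitness[i] / sum(fitness) -- the same
    # mechanism as multinomial resampling in a particle filter.
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()
    idx = rng.choice(len(population), size=len(population), p=p)
    return [population[i] for i in idx]
```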
For a more detailed description of the origins of particle methods and their applications, we refer to the following studies:
Some new stochastic models and methods discussed in this article: look-ahead type strategies (Section 4.2.2), reducing the variance using conditional explorations w.r.t. the observation sequences (Example 3, p. 40), local error transport models (proof of Theorem 1, p. 11), and mean field models w.r.t. the occupation measures of random trees (Section 3.2).
The practitioner will find in the research book , and in the review article , a source of useful convergence estimates as well as a detailed list of concrete examples of particle approximations for real models, including restricted Markov chain simulations, random motions in absorbing media, spectral analysis of Schrödinger operators and Feynman-Kac semigroups, rare event analysis, sensitivity measure approximations, financial pricing numerical methods, parameter estimation in hidden Markov models, island particle models, interacting MCMC models, statistical machine learning, Bayesian inference, Dirichlet boundary problems, nonlinear filtering problems, interacting Kalman-Bucy filters, directed polymer simulations, stochastic optimization, and interacting Metropolis-type algorithms.
The recent book  presents the first comprehensive and modern mathematical treatment of these mean field particle models, including a refined convergence analysis of nonlinear Markov chain models. It also covers applications related to parameter estimation in hidden Markov chain models, stochastic optimization, nonlinear filtering and multiple target tracking, calibration and uncertainty propagation in numerical codes, rare event simulation, financial mathematics, and free energy and quasi-invariant measures arising in computational physics and population biology. The book helps theoretical probability researchers, applied statisticians, biologists, statistical physicists, and computer scientists work better across their own disciplinary boundaries.
Table of Contents
Monte Carlo and Mean Field Models. Theory and Applications.
Feynman-Kac Models. Four Equivalent Particle Interpretations.
Continuous-Time Feynman-Kac Models. Nonlinear Evolutions of Intensity Measures.
Particle Absorption Models. Signal Processing and Control Systems.
Mean Field Feynman-Kac Models. A General Class of Mean Field Models.
Empirical Processes. Feynman-Kac Semigroups. Intensity Measure Semigroups.
Particle Density Profiles. Genealogical Tree Models.
Particle Normalizing Constants. Backward Particle Markov Models.
While the diversity of application areas is part of the charm of the particle theory of Feynman-Kac models, the list of topics presented below is not exhaustive, and actually only reflects the tastes and interests of the author.
A series of recent tutorials presenting this interacting particle methodology, and some application domains, is provided below:
Biips is a general-purpose software package developed by the INRIA team ALEA for Bayesian inference with interacting particle systems, a.k.a. sequential Monte Carlo methods.
A demonstration of the Biips software for estimating the stochastic volatility of financial data can be found in