Rare Events Simulations in the Presence of Boundaries - Søren Asmussen
Conquering the Greeks in Monte Carlo: Efficient Calculation of the Market Sensitivities and Hedge-Ratios of Financial Assets by Direct Numerical Simulation - Marco Avellaneda and Roberta Gamba
Applications of Quasi-Monte Carlo Methods in Statistics - Kai-Tai Fang
Monte Carlo Computation in the Presence of Constraints - Peter W. Glynn
Quasirandom Walk Methods - Christian Lécot
Perfect Simulation: Triumphs, Tricks and Trials - Xiao-Li Meng
Recent Advances in the Theory of Nonlinear Pseudorandom Number Generators - Harald Niederreiter
Quantum Complexity of Integration - Erich Novak
QMC Integration over the Unit Cube -- Beating Intractability by Weighting the Co-ordinate Directions - Ian H. Sloan
Quasi-Monte Carlo --- Discrepancy between Theory and Practice - Shu Tezuka
Efficient Monte Carlo Simulation Methods in Statistical Physics - Jian-Sheng Wang

Rare Events Simulations in the Presence of Boundaries

Søren Asmussen
Lund University

Many rare-event simulation problems for simple queues and related processes can be translated into the problem of simulating a first passage probability for a random walk, a Lévy process or a Markov additive process. In this setting, exponential change of measure as derived by a large deviations argument has proved extremely efficient in a number of situations. However, for some time annoying counterexamples by Glasserman and co-authors have indicated difficulties with the technique in other situations, for example queueing networks, where the difficulties arise because of the role of boundaries.
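As a minimal illustration of the exponential change of measure mentioned above (the classical Siegmund tilting for a single M/M/1-type random walk, not the boundary algorithm of the talk), the following Python sketch estimates the probability that the waiting-time random walk ever exceeds a level b; the rates, level and sample size are illustrative choices:

```python
import math
import random

def overflow_prob_is(lam, mu, b, n_paths=20000, seed=1):
    """Importance-sampling (Siegmund) estimate of P(max_n S_n > b) for the
    random walk S_n with increments X = Service - Interarrival, where
    Service ~ Exp(mu), Interarrival ~ Exp(lam) and lam < mu (negative drift).
    Under the exponentially tilted measure the two rates are swapped, so the
    walk drifts upward and crosses b almost surely; the unbiased estimator
    is the likelihood ratio exp(-theta * S_tau) with theta = mu - lam."""
    rng = random.Random(seed)
    theta = mu - lam
    total = 0.0
    for _ in range(n_paths):
        s = 0.0
        while s <= b:
            # tilted increments: Service ~ Exp(lam), Interarrival ~ Exp(mu)
            s += rng.expovariate(lam) - rng.expovariate(mu)
        total += math.exp(-theta * s)
    return total / n_paths
```

For this M/M/1 case the estimate can be checked against the exact tail P(max > b) = (lam/mu) e^{-(mu-lam)b}; the tilted estimator has bounded relative error as b grows, whereas crude Monte Carlo would need on the order of e^{(mu-lam)b} samples.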

In this talk, we present an algorithm which overcomes the boundary complications in substantial generality, and is applicable whenever an efficient simulation estimator for the unreflected process within a regenerative cycle is available. We also study in some more detail the problem of simulating the probability that a queueing process exceeds a large level in a finite time horizon.

The talk is based in part upon a joint paper with P. Frantz, M. Jobmann and H.-P. Schwefel, Munich.

Conquering the Greeks in Monte Carlo: Efficient Calculation of the Market Sensitivities and Hedge-Ratios of Financial Assets by Direct Numerical Simulation

Marco Avellaneda
New York University

Roberta Gamba
CIMS and University of Bergamo

The calculation of the price-sensitivities and hedge-ratios of derivative securities with respect to perturbations in market prices is re-formulated in the framework of Weighted Monte Carlo. To compute the sensitivity (market risk) of a financial security, we price it and then perturb the probabilities of the simulated paths around the uniform distribution. This differs from the "classical" approach for calculating sensitivities, which consists in perturbing the parameters that define the dynamics of the model (spot prices, local volatilities, drifts, etc.).

We characterize theoretically the computed hedge-ratios and sensitivities in terms of statistical moments of the simulated cashflows. The new approach is then compared with the Likelihood Ratio Method of Broadie and Glasserman (1993) and with the approach based on Malliavin calculus presented in Fournié, Lebuchoux, Lasry, Lions and Touzi (1998).
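For concreteness, here is a minimal Python sketch of the Likelihood Ratio Method used as the benchmark above (not the Weighted Monte Carlo method of the talk), applied to the Black-Scholes Delta; all parameter values in the check are illustrative:

```python
import math
import random
from statistics import NormalDist

def lrm_delta(S0, K, r, sigma, T, n_paths=200000, seed=7):
    """Likelihood-ratio-method estimate of the Black-Scholes call Delta:
    Delta = E[ e^{-rT} (S_T - K)^+ * Z / (S0 * sigma * sqrt(T)) ],
    where Z is the standard normal variate driving the terminal price.
    No bump-and-reprice and no pathwise differentiation is needed."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    w = 1.0 / (S0 * sigma * math.sqrt(T))   # score-function weight
    acc = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        acc += disc * max(ST - K, 0.0) * z * w
    return acc / n_paths

def bs_delta(S0, K, r, sigma, T):
    """Closed-form Black-Scholes Delta, for comparison."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return NormalDist().cdf(d1)
```

The weight z / (S0 sigma sqrt(T)) is the derivative of the log-density of S_T with respect to S0, which is why the estimator needs only the simulated payoffs and the driving normals.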

We show that the sensitivities computed by the new method are in excellent agreement with closed-form solutions whenever the latter are available, e.g. with the Greeks of the Black-Scholes formula, and with the approximate analytic formulas for the Deltas and Gammas of Basket Options in multi-asset models. The advantages of the present method for computing sensitivities are that (i) it can be readily applied to most Monte Carlo simulation schemes because it is non-parametric, (ii) it does not require multiple recalibration of the model or discrete differentiation, and (iii) it is simple to implement.

Applications of Quasi-Monte Carlo Methods in Statistics

Kai-Tai Fang
Hong Kong Baptist University

Most applications of quasi-Monte Carlo methods in numerical analysis involve evaluating high-dimensional integrals. Therefore, much attention in the literature is devoted to generating low-discrepancy sequences and to multidimensional quadrature error analysis. However, many problems in statistics that are not related to integration also benefit from quasi-Monte Carlo methods. More precisely, low-discrepancy sequences can be used to solve certain statistical problems, and the concept of uniformity is extremely useful in statistics. This talk reviews various applications of quasi-Monte Carlo methods to experimental design, optimization, statistical inference, statistical simulation, Bayesian statistics, and computational statistics. For example, the discrepancy can be used to compare fractional factorial experimental designs and to detect non-isomorphic designs; in the latter case the use of the discrepancy significantly reduces the computational complexity of the problem. The talk also presents some new results that have not yet appeared in the literature and suggests a number of important open problems.
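As an illustration of using discrepancy to compare point sets and designs, the following sketch implements Hickernell's centered L2-discrepancy, a common choice in this literature (the particular point sets compared afterwards are hypothetical examples, not from the talk):

```python
def centered_l2_discrepancy(points):
    """Centered L2-discrepancy (Hickernell) of a point set in [0,1]^s.
    Lower values indicate a more uniform design."""
    n = len(points)
    s = len(points[0])
    term1 = (13.0 / 12.0) ** s
    term2 = 0.0
    for x in points:
        prod = 1.0
        for xj in x:
            a = abs(xj - 0.5)
            prod *= 1.0 + 0.5 * a - 0.5 * a * a
        term2 += prod
    term2 *= 2.0 / n
    term3 = 0.0
    for x in points:
        for y in points:
            prod = 1.0
            for xj, yj in zip(x, y):
                prod *= (1.0 + 0.5 * abs(xj - 0.5) + 0.5 * abs(yj - 0.5)
                         - 0.5 * abs(xj - yj))
            term3 += prod
    term3 /= n * n
    return (term1 - term2 + term3) ** 0.5
```

For instance, the four equispaced points {1/8, 3/8, 5/8, 7/8} have a much smaller centered L2-discrepancy than four points clustered near 0.1, reflecting their greater uniformity; the same criterion can rank fractional factorial designs after mapping their levels into the unit cube.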

Monte Carlo Computation in the Presence of Constraints

Peter W. Glynn
Department of Management Science and Engineering
Stanford University

In a typical Monte Carlo computation, the goal is to compute the expected value EX of a given random variable X. In many application settings, it is possible to establish bounds on the expectation of some random vector Y, where Y is correlated with X. In this talk, we will discuss how to combine knowledge of the constraints on EY with the sampled data, so as to obtain a better estimator for EX. We will also discuss connections of our ideas with control variates, large deviations, and Bayesian techniques. This work is joint with Roberto Szechtman.
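As a minimal sketch of the control-variate connection mentioned above (the classical unconstrained case where EY is known exactly, not the constrained estimators of the talk), assume X = exp(Z) and Y = Z with Z standard normal, so that EY = 0 is known:

```python
import math
import random

def cv_estimate(n=100000, seed=3):
    """Estimate E[exp(Z)], Z ~ N(0,1), using Z itself as a control
    variate (E[Z] = 0 is known exactly).  Returns the plain estimate
    and the control-variate estimate."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        xs.append(math.exp(z))
        ys.append(z)
    mx = sum(xs) / n
    my = sum(ys) / n
    # optimal coefficient b = Cov(X, Y) / Var(Y), estimated from the sample
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((y - my) ** 2 for y in ys) / n
    b = cov / var
    cv = sum(x - b * (y - 0.0) for x, y in zip(xs, ys)) / n  # subtract b*(Y - EY)
    return mx, cv
```

Estimating b from the same sample introduces a small bias that vanishes with n; the variance reduction depends on the correlation between X and Y, which is exactly the information the constrained setting of the talk seeks to exploit.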

Quasirandom Walk Methods

Christian Lécot
Université de Savoie

In mathematical models that involve a combination of convection (or reaction) and diffusion, the method of splitting consists of reducing the original evolutionary problem to two problems describing the convective (or the reactive) and the diffusive processes respectively.

In this talk we present particle methods for solving pure initial-value problems for convection-diffusion or reaction-diffusion equations with constant diffusion coefficients. The strategy is to represent the solution or the gradient of the solution at discrete times by particles. In each time interval the evolution of the system is obtained in two steps: first the particles follow the convective field (or their masses are modified by the reaction), and then the diffusive process is modeled by a random walk.

We study the convergence of the scheme when low discrepancy sequences are used for the random walk step. The application of quasirandom numbers is not straightforward, because of correlations, and a reordering technique is used in every time step. It is shown that an improvement in both magnitude of error and convergence rate can be achieved when quasirandom numbers are used in place of pseudorandom numbers.
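A minimal sketch of the reordering idea for pure diffusion in one dimension, assuming a van der Corput sequence with a step-dependent shift (an illustrative simplification of the low-discrepancy constructions of the talk):

```python
import math
from statistics import NormalDist

def vdc(n, base=2):
    """n-th term of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def quasirandom_diffusion(n_particles=1024, n_steps=50, D=1.0, dt=0.01):
    """Quasirandom walk for pure diffusion in 1-d: at each time step the
    particles are sorted (the reordering step), and the particle of rank i
    is displaced by sigma * Phi^{-1}(u_i), where u_i is the i-th van der
    Corput point with a step-dependent shift.  The pairs (i/N, u_i) form a
    low-discrepancy 2-d point set, which controls the correlations that
    naive reuse of a 1-d sequence would cause."""
    inv = NormalDist().inv_cdf
    sigma = math.sqrt(2.0 * D * dt)
    x = [0.0] * n_particles
    for step in range(n_steps):
        x.sort()  # reorder the particles by position
        shift = vdc(step + 1, base=3)  # shift the same net differently each step
        for i in range(n_particles):
            u = (vdc(i + 1) + shift) % 1.0
            u = min(max(u, 1e-12), 1.0 - 1e-12)  # keep inv_cdf's argument in (0,1)
            x[i] += sigma * inv(u)
    return x
```

After n_steps steps of length dt the empirical distribution should approximate N(0, 2 D n_steps dt); the stratified quantiles make the per-step variance nearly exact, which is one source of the improved convergence.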

Perfect Simulation: Triumphs, Tricks and Trials

Xiao-Li Meng
Department of Statistics, The University of Chicago

The Monte Carlo community recently enjoyed a pleasant surprise. Based on the ingenious idea of running coupled Markov chains with negative time, Propp and Wilson (1996, Random Structures and Algorithms) proposed a Markov chain Monte Carlo (MCMC) algorithm that will, with probability one, terminate itself in a finite number of steps and yet deliver draws that are exactly from the limiting (i.e., stationary) distribution. By applying the algorithm to a variety of problems, including sampling exactly from the Ising model at the critical temperature on a 4200 by 4200 toroidal grid, they demonstrated the practicality and efficiency of the algorithm. Their findings are perhaps particularly exciting for statisticians, probabilists and applied mathematicians, since this seems to be the first time that a major Monte Carlo method has been developed solely by non-physicists, yet the method is applicable to nontrivial computational problems in physics and other scientific studies. Building upon their method of coupling from the past (CFTP), a new class of MCMC algorithms has emerged. This class of algorithms has been labeled perfect simulation or exact simulation, because for this class of stochastic iterative algorithms the challenging issue of assessing convergence completely vanishes.

In this talk we will review the basic CFTP, Wilson's read-once CFTP, and Wilson's layered multishift coupling. We will then apply these methods to the problem of constructing perfect simulation algorithms for Bayesian analysis with normal mixtures and t mixtures. The latter application also relies on a powerful method in MCMC, namely the method of data augmentation, also known as the method of auxiliary variables in physics. Open problems will also be discussed, as the t mixture application makes it clear that we are still several steps away from routine use of perfect simulation in Bayesian computation. (This talk is based on joint work with Duncan Murdoch of the University of Western Ontario.)
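A minimal sketch of basic CFTP on a toy monotone chain (a lazy reflecting random walk on {0,...,m}, whose stationary distribution is uniform); this illustrates only the sandwiching idea, not the mixture applications of the talk:

```python
import random

def update(x, u, m):
    """Monotone update: down w.p. 1/3, up w.p. 1/3, stay w.p. 1/3,
    reflecting at the boundaries 0 and m.  Monotone in x for fixed u."""
    if u < 1.0 / 3.0:
        return max(x - 1, 0)
    if u < 2.0 / 3.0:
        return min(x + 1, m)
    return x

def cftp_sample(m=10, seed=None):
    """Exact draw from the stationary (uniform) distribution via
    Propp-Wilson coupling from the past: run the top chain (from m) and
    the bottom chain (from 0) with the SAME uniforms from time -T to 0;
    if they coalesce, every starting state has coalesced with them."""
    rng = random.Random(seed)
    T = 1
    us = []
    while True:
        # go further into the past, REUSING the previously drawn uniforms
        us = [rng.random() for _ in range(T - len(us))] + us
        lo, hi = 0, m
        for u in us:
            lo = update(lo, u, m)
            hi = update(hi, u, m)
        if lo == hi:
            return lo  # coalesced by time 0: an exact stationary sample
        T *= 2
```

The two CFTP essentials are visible here: the same randomness is reused for each past time as the horizon is doubled, and the chain's monotonicity lets two bounding chains stand in for all starting states.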

Recent Advances in the Theory of Nonlinear Pseudorandom Number Generators

Harald Niederreiter
Institute of Discrete Mathematics, Austrian Academy of Sciences

Nonlinear pseudorandom number generators form an attractive alternative to the classical linear methods for uniform pseudorandom number generation, such as the linear congruential method. Ever since the introduction of nonlinear generators, it has been a vexing problem to prove a priori results on the distribution of these generators in parts of the period, especially for the important family of recursive inversive generators. Until recently, the only distribution results for recursive inversive generators were for the full period or for the average-case performance in parts of the period, but these results are of limited practical relevance. In the last two years, I.E. Shparlinski and the speaker have developed a new technique to deal with the distribution of nonlinear generators in parts of the period. This has led to the first results on the distribution in parts of the period of individual sequences of pseudorandom numbers for many nonlinear methods, and in particular for recursive inversive generators. These results show a reasonably good fit to the uniform distribution and to statistical independence properties in sufficiently long segments of the period. The talk will focus on this new technique and also on new nonlinear methods for pseudorandom number generation that arose from this approach.
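For concreteness, a sketch of the recursive inversive congruential generator discussed above; the parameter choice p = 2^31 - 1, a = b = 1 in the check is a classical full-period example from the inversive-generator literature:

```python
def icg(a, b, p, x0, n):
    """Recursive inversive congruential generator
        x_{k+1} = a * inv(x_k) + b  (mod p),
    where inv(x) is the multiplicative inverse modulo the prime p
    (computed via Fermat's little theorem) and inv(0) = 0 by convention.
    Returns the first n normalized values x_k / p in [0, 1)."""
    xs = []
    x = x0
    for _ in range(n):
        xs.append(x / p)
        x = (a * (pow(x, p - 2, p) if x else 0) + b) % p
    return xs
```

The nonlinearity of the inversion is what destroys the lattice structure of linear congruential generators, at the cost of a modular inversion per step.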

Quantum Complexity of Integration

Erich Novak
Katholieke Universiteit Leuven, Belgium
email: novak@mi.uni-erlangen.de

We study the computation of the integral of functions from the classical Hölder classes $F^{k,\alpha}_d$ on $[0,1]^d$ and define $\gamma$ by $\gamma = (k+\alpha)/d$. The known optimal orders for the complexity of deterministic and (general) randomized methods are

$$ \mathrm{comp}(F^{k,\alpha}_d, \varepsilon) \asymp \varepsilon^{-1/\gamma} \qquad\mbox{and}\qquad \mathrm{comp}^{\mathrm{ran}}(F^{k,\alpha}_d, \varepsilon) \asymp \varepsilon^{-2/(1+2\gamma)} . $$

For a quantum computer we prove

$$ \mathrm{comp}^{\mathrm{quant}}(F^{k,\alpha}_d, \varepsilon) \asymp \varepsilon^{-1/(1+\gamma)} . $$

For restricted Monte Carlo (only coin tossing instead of general random numbers) we prove that the complexity has the same order $\varepsilon^{-2/(1+2\gamma)}$ as for general randomized methods, up to logarithmic factors.

To summarize the results one can say that quantum computers bring a roughly quadratic speedup over (general or restricted) randomized methods, a speedup over deterministic methods that becomes arbitrarily large as $\gamma \to 0$ (i.e., in high dimensions), and that coin tossing is essentially as powerful as general randomness for this problem.

QMC Integration Over the Unit Cube -- Beating Intractability by Weighting the Co-ordinate Directions

Ian H. Sloan
University of New South Wales, Australia

How can we understand the apparent success of multiple integration in hundreds or even thousands of dimensions? One way is to assume that the successive coordinate directions are not all equally difficult, and to quantify this by weighting the first co-ordinate direction by a positive number $\gamma_1$, the second by $\gamma_2$, and so on. We now know that in a classical Sobolev space setting, with mixed first derivatives measured in the $L_2$ sense, the minimum number of points $n(\varepsilon,d)$ needed to reduce the worst-case error from its initial error by a factor of $\varepsilon$ grows exponentially with the dimension $d$, but that $n(\varepsilon,d)$ is bounded independently of $d$ if (and only if) $\sum_{j=1}^{\infty} \gamma_j < \infty$. But there are many other questions, some still open. Can the convergence rate in the case $\sum_{j=1}^{\infty} \gamma_j < \infty$ be better than the Monte Carlo rate $O(n^{-1/2})$? (Yes, but the condition on the weights needs to be strengthened.) Can the results be improved by taking quadrature weights other than the equal weights $1/n$? (No.) Can point sets with the desired properties actually be constructed? (The original proof established existence only, and was not constructive. One construction is available, but only with a proven convergence rate that is less than optimal.) And do the weighted spaces really have a role in applications? This talk will trace the story of QMC integration in weighted spaces up to the present day.
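A small numerical illustration of the weighted point of view, assuming a product integrand whose j-th coordinate enters with weight roughly 1/j^2 (a summable weight sequence) and a Halton point set as an illustrative stand-in for the constructions discussed in the talk:

```python
import math
import random

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def halton(i, base):
    """i-th term of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def f_weighted(x, gammas):
    """Product integrand whose j-th coordinate enters with weight gamma_j;
    the exact integral over the unit cube is 1."""
    prod = 1.0
    for xj, g in zip(x, gammas):
        prod *= 1.0 + g * (xj - 0.5)
    return prod

def qmc_vs_mc(n=4096, d=8, seed=11):
    """Absolute errors of a Halton-QMC rule and of plain Monte Carlo for
    the weighted integrand with gamma_j = 1/j^2."""
    gammas = [1.0 / (j + 1) ** 2 for j in range(d)]
    qmc = sum(f_weighted([halton(i + 1, PRIMES[j]) for j in range(d)], gammas)
              for i in range(n)) / n
    rng = random.Random(seed)
    mc = sum(f_weighted([rng.random() for _ in range(d)], gammas)
             for _ in range(n)) / n
    return abs(qmc - 1.0), abs(mc - 1.0)
```

Because the later coordinates carry rapidly decaying weights, the integrand is effectively low-dimensional, and the QMC error is typically far below the Monte Carlo error at the same sample size.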

Quasi-Monte Carlo --- Discrepancy between Theory and Practice

Shu Tezuka
IBM Corporation, Tokyo

In this talk, we survey the current status of theory and practice of quasi-Monte Carlo methods. First, we discuss one of the most important research directions for speeding up Monte Carlo simulations, described as: MC -> QMC -> RQMC -> Derandomized RQMC, where RQMC means randomized QMC. We give some important open questions concerning the gap between theory and practice of derandomized QMC. Secondly, we review the dramatic success of quasi-Monte Carlo methods for very high dimensional integration problems in finance. In the last several years, the question of how to explain this success has been extensively investigated. The main trend of research is the use of the notion of "effective dimension". Some important results and issues on effective dimensions are discussed.
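A minimal sketch of the QMC -> RQMC step: a rank-1 lattice rule with a Cranley-Patterson random shift, averaged over independent shifts so that the estimate is unbiased and the spread across replications provides a statistical error estimate, which deterministic QMC does not. The generator vector and sample sizes are illustrative choices:

```python
import math
import random

def shifted_lattice_estimate(f, n, d, z, shift):
    """Rank-1 lattice rule {i*z/n mod 1} with a Cranley-Patterson shift."""
    total = 0.0
    for i in range(n):
        x = [((i * z[j] / n) + shift[j]) % 1.0 for j in range(d)]
        total += f(x)
    return total / n

def rqmc(f, n=1021, d=2, z=(1, 331), reps=10, seed=5):
    """Randomized QMC: average several independently shifted lattice rules
    and report the mean together with its estimated standard error."""
    rng = random.Random(seed)
    ests = []
    for _ in range(reps):
        shift = [rng.random() for _ in range(d)]
        ests.append(shifted_lattice_estimate(f, n, d, z, shift))
    mean = sum(ests) / reps
    var = sum((e - mean) ** 2 for e in ests) / (reps * (reps - 1))
    return mean, math.sqrt(var)
```

Derandomization, the last arrow in the chain above, would replace the random shifts by deterministically chosen ones with a guaranteed error bound.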

Efficient Monte Carlo Simulation Methods in Statistical Physics

Jian-Sheng Wang
Department of Computational Science,
National University of Singapore

The basic problem in equilibrium statistical mechanics is to compute phase-space averages, where the Monte Carlo method plays a very important role. We begin with a review of nonlocal algorithms for Monte Carlo simulation in statistical physics. We discuss their advantages, applications, and some challenging problems still awaiting better solutions. We then discuss the transition matrix Monte Carlo method and associated algorithms. This recently proposed method offers an efficient way to compute the density of states, and thus the entropy and free energy, as well as the usual thermodynamic averages, as a function of model parameters (e.g., temperature) in a single run. The new sampling algorithms, known as the flat histogram algorithm and the flat hit algorithm, generate a uniform probability distribution for some macroscopic variable such as the total energy. Connections with multicanonical algorithms and simulated annealing are highlighted.
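A minimal flat-histogram sketch on a toy model (independent spins, where the density of states is a binomial coefficient), using a simple multiplicative update of the running density-of-states estimate as an illustrative stand-in for the transition-matrix estimator of the talk:

```python
import math
import random

def flat_histogram_dos(n_spins=10, ln_f_final=1e-4, seed=9):
    """Flat-histogram estimate of ln g(E), the log density of states, for a
    chain of independent spins with E = number of up spins; the exact
    answer is ln C(n, E).  Spin-flip moves are accepted with probability
    min(1, g(E)/g(E')), which drives the energy histogram toward flatness,
    and the running estimate of g is refined multiplicatively."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_spins)]
    E = sum(state)
    ln_g = [0.0] * (n_spins + 1)
    ln_f = 1.0
    while ln_f > ln_f_final:
        hist = [0] * (n_spins + 1)
        for _ in range(20000):
            i = rng.randrange(n_spins)
            E_new = E + (1 - 2 * state[i])   # flipping spin i changes E by +-1
            if math.log(rng.random()) < ln_g[E] - ln_g[E_new]:
                state[i] ^= 1
                E = E_new
            ln_g[E] += ln_f
            hist[E] += 1
        if min(hist) > 0.8 * sum(hist) / len(hist):
            ln_f /= 2.0                      # histogram is flat: refine
    base = ln_g[0]
    return [v - base for v in ln_g]          # normalize so that ln g(0) = 0
```

One such run yields the entire function g(E), from which averages at any temperature follow by reweighting with exp(-E/T), which is the "single run" advantage described above.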

By MCQMC2000
Email: mcqmc2000@www.mcqmc.org