Claudia Klüppelberg based on joint work with Habib Esmaeili: Parametric estimation of multivariate Lévy processes

We propose a maximum likelihood estimation procedure for multivariate Lévy processes based on a Lévy copula for dependence modeling. As observation scheme we assume that we observe all jumps in [0,t] larger than a certain truncation point near zero. For a bivariate subordinator we investigate three different schemes in detail. The first case considers jumps that exceed the truncation point in both components. The second case considers jumps that exceed the truncation point in one component but are arbitrary in the other. The third case considers the so-called IFM (inference functions for margins) method, where in a first step the marginal parameters are estimated and the estimated marginals transformed, and in a second step the Lévy copula parameters are estimated from the reduced likelihood. For a bivariate stable Clayton Lévy subordinator, we present theoretical results on the asymptotic normality of our estimates as either the observation interval tends to infinity or the truncation point tends to zero. Finally, we show the influence of the truncation in a simulation study.
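
For orientation, the dependence structure can be sketched in standard Lévy copula notation (general background, not specific to the talk): for a bivariate subordinator with marginal tail integrals Ui(x) = νi([x,∞)), a Lévy copula C couples the marginal Lévy measures via the joint tail integral, and the Clayton family is the one-parameter example used here,

\[
U(x_1, x_2) = C\bigl(U_1(x_1), U_2(x_2)\bigr), \qquad
C_\theta(u, v) = \bigl(u^{-\theta} + v^{-\theta}\bigr)^{-1/\theta}, \quad \theta > 0.
\]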


Eckhard Schlemm: Quasi-maximum likelihood estimation for multivariate Lévy-driven CARMA processes

Multivariate Lévy-driven continuous-time autoregressive moving average (MCARMA) processes were introduced recently in [Marquardt and Stelzer, 2007] as an extension of both discrete-time vector ARMA and univariate CARMA models (see e.g. [Brockwell, 2001] for a review). The talk will focus on the estimation of the parameters of a second-order MCARMA process based on regularly spaced observations, and on the problem of parameter identifiability. We use a quasi-maximum likelihood approach and prove consistency and asymptotic normality of the estimators. The quality of the proposed estimators will be assessed in a simulation study.
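
As background (standard CARMA notation, not taken from the talk): a univariate CARMA(p,q) process Y driven by a Lévy process L is defined formally as the solution of

\[
a(D)\, Y_t = b(D)\, D L_t, \qquad
a(z) = z^p + a_1 z^{p-1} + \dots + a_p, \qquad
b(z) = b_0 + b_1 z + \dots + b_q z^q,
\]

with D denoting differentiation with respect to t; this is made rigorous through a state-space representation, and the MCARMA case replaces the coefficients by matrices.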


Gernot Müller based on joint work with Jean Jacod and Claudia Klüppelberg: Testing for COGARCH

In this talk we discuss a hypothesis test for the COGARCH(1,1) model. The COGARCH model was introduced by Klüppelberg et al. (2004) as a continuous-time analogue of the discrete-time GARCH model and is constructed directly from a single univariate background driving Lévy process. We propose an asymptotic test for the special feature of COGARCH that there is a fixed functional relationship between the jumps of the COGARCH process and the jumps of the corresponding volatility process. The test investigates only the larger jumps of the COGARCH process (which, in practice, is identified with a log-price process) and uses local volatility estimates calculated from observation windows before and after the jumps under consideration. The null hypothesis is this fixed relationship, conditional on there being at least one relevant (i.e. sufficiently large) jump. We apply the test to high-frequency data from the S&P 500.
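
To make the tested relationship concrete, recall the COGARCH(1,1) equations of Klüppelberg et al. (2004), sketched here in one standard parametrisation:

\[
G_t = \int_0^t \sigma_{s-}\, dL_s, \qquad
d\sigma_t^2 = (\beta - \eta\, \sigma_{t-}^2)\, dt + \varphi\, \sigma_{t-}^2\, d[L,L]_t^{(d)},
\qquad
\Delta \sigma_t^2 = \varphi\, (\Delta G_t)^2,
\]

where the last identity follows from ΔGt = σt- ΔLt and is the fixed functional relationship between the jumps of the (log-)price and of the volatility that the test targets.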


Mathias Vetter: Limit theorems for bipower variation of semimartingales

This talk presents limit theorems for certain functionals of semimartingales observed at high frequency. In particular, we extend results of Jacod to the case of bipower variation, showing under standard assumptions that one obtains a limiting variable which in general differs from the one obtained for a continuous semimartingale. In a second step, a truncated version of bipower variation is constructed, whose asymptotic behaviour is similar to that of standard bipower variation for a continuous semimartingale.
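
For reference (standard definition, our notation): with increments of the discretely observed path over the mesh Δn, bipower variation is

\[
V(X)_t^n = \sum_{i=1}^{\lfloor t/\Delta_n \rfloor - 1} \lvert \Delta_i^n X \rvert\, \lvert \Delta_{i+1}^n X \rvert,
\qquad
\Delta_i^n X = X_{i \Delta_n} - X_{(i-1)\Delta_n},
\]

and the truncated version discards increments exceeding a threshold of order Δn^ϖ for some ϖ ∈ (0, 1/2).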


Florian Ueltzhöfer: Non-parametric estimation of Lévy densities from observations on a discrete time grid

We consider a multivariate Lévy process given by a Brownian motion with drift and an independent time-homogeneous pure jump process governed by a Lévy density. We assume that a sample path is observed on a discrete time grid. Based on these observations, we construct families of non-parametric estimators for the restriction of the Lévy density to bounded Borel sets away from the origin. Moreover, we introduce a data-driven criterion to select an estimator within a given family. We measure the error of our estimates in an L2-norm. For the class of sufficiently smooth Lévy densities (belonging to some Sobolev space), we derive a condition on the mesh size of the discrete time grid which ensures that the error of our estimate converges to zero as the observation time horizon tends to infinity, with rate of convergence equal to that of the best estimator within our family. Additionally, we prove that this rate equals the rate of the best non-parametric estimator of our kind based on observing all jumps directly, disentangled from the Brownian motion with drift.
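
As a minimal illustration of the object being estimated (our notation, not the talk's): if all jumps on [0,t] could be observed directly, the natural estimator of the Lévy measure of a Borel set B bounded away from the origin would be the normalised jump count

\[
\hat{\nu}_t(B) = \frac{1}{t}\, \#\{ s \le t : \Delta X_s \in B \},
\]

whereas the discrete-grid estimators of the talk must in addition separate small jumps from the Brownian component.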


Sylvia Frühwirth-Schnatter based on joint work with Jörn Sass and Markus Hahn: Markov Chain Monte Carlo Methods for Parameter Estimation in Multidimensional Continuous Time Markov Switching Models

A multidimensional continuous-time model is considered where the observation process is a diffusion whose drift and volatility coefficients are modeled as continuous-time, finite-state Markov chains with a common state process. For the econometric estimation of the drift and volatility states and of the rate matrix of the underlying Markov chain, both an exact continuous-time and an approximate discrete-time MCMC sampler are developed. These MCMC approaches are compared to various approaches based on ML estimation. Using simulated data, it is demonstrated that MCMC outperforms ML estimation in difficult cases such as high switching rates. Finally, the model is applied to daily stock index quotes from Argentina, Brazil, Mexico, and the US. Using BIC for model selection, a four-state model is identified whose states differ not only in the volatilities of the assets but also in their correlations.
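
In commonly used notation (our sketch of the setup), the observation process Y solves

\[
dY_t = \mu(S_t)\, dt + \sigma(S_t)\, dW_t,
\]

where S is a continuous-time Markov chain with finitely many states and rate matrix Q, so that drift and volatility switch according to the same state process.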


Alexander Gleim based on joint work with Christian Pigorsch: Approximate Bayesian Computation with Indirect Moment Conditions

In genetics, Approximate Bayesian Computation (ABC) has become a popular estimation method for situations where the likelihood function of a model is unavailable in closed form. In contrast to classical Bayesian inference based on MCMC, this method does not use data augmentation to achieve a tractable model specification but instead relies on a function measuring the distance between some empirical moments and their model-implied counterparts. An open question is how to select these moments systematically. In this talk we use an indirect approach with moment conditions based on the score of an auxiliary model, as in the efficient method of moments approach. This work is also related to the recent work of Gallant and McCulloch (On the Determination of General Scientific Models With Application to Asset Pricing, Journal of the American Statistical Association 104 (2009), No. 485, 117-131), who propose another indirect Bayesian estimation method.
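
A minimal sketch of ABC rejection sampling with a score-based distance, under illustrative assumptions (the toy model, auxiliary score, prior, and tolerance below are placeholders, not the talk's specification):

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(theta, n=200):
        # Placeholder structural model: i.i.d. N(theta, 1) data.
        return rng.normal(theta, 1.0, size=n)

    def aux_score(data, mu_hat):
        # Score (derivative of the log-likelihood in mu, up to scaling) of a
        # normal auxiliary model with unit variance, evaluated at the
        # auxiliary estimate mu_hat fitted to the observed data.
        return np.mean(data - mu_hat)

    observed = rng.normal(0.5, 1.0, size=200)   # stand-in for real data
    mu_hat = observed.mean()                    # auxiliary ML estimate

    accepted = []
    for _ in range(20000):
        theta = rng.uniform(-2.0, 3.0)          # draw from a flat prior
        sim = simulate(theta)
        # Accept theta if the auxiliary score at the simulated data is close
        # to zero (it vanishes at the observed data by construction).
        if abs(aux_score(sim, mu_hat)) < 0.02:
            accepted.append(theta)

    print(f"approx. posterior mean {np.mean(accepted):.3f} "
          f"from {len(accepted)} accepted draws")

The score-based distance replaces ad hoc summary statistics: moments are generated automatically by the auxiliary model, as in the efficient method of moments.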


Vicky Fasen: A continuous-time cointegration model with applications in finance

Empirical studies of financial time series, such as exchange rates, foreign currency spot and futures rates, or interest rates in different countries, show that they are cointegrated. This means that each time series is integrated, but a linear combination of these integrated processes is stationary. The concept goes back to the seminal work of Granger (1981) and Engle and Granger (1987). Since heavy tails are a stylized fact of financial time series, we consider a cointegration model in continuous time where the linear combination of the integrated processes is modeled by a multivariate regularly varying Ornstein-Uhlenbeck process. We present estimators for the cointegration parameters, show that they are consistent, and derive their asymptotic behavior and confidence intervals.
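
A schematic version of such a model (our notation; the details of the talk may differ): with an integrated process X, the cointegrated system is

\[
Y_t = \gamma X_t + Z_t, \qquad dZ_t = -\lambda Z_t\, dt + dL_t, \quad \lambda > 0,
\]

so that Y and X are both integrated while the linear combination Yt - γXt = Zt is a stationary Ornstein-Uhlenbeck process, here driven by a regularly varying Lévy process L to capture heavy tails.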


Georgi Dimitroff based on joint work with Stefan Lorenz and Alexander Szimayer: A Parsimonious Multi-Asset Heston Model: Calibration and Derivative Pricing

We present a parsimonious multi-asset Heston model. All single-asset sub-models follow the well-known Heston dynamics, and their parameters are typically calibrated to implied market volatilities. We focus on the calibration of the correlation structure between the single-asset marginals in the absence of sufficiently liquid cross-asset option price data. The model is parsimonious in the sense that only d(d - 1)/2 asset-asset cross-correlations are required for a d-asset Heston model. For the calibration we present two general setups corresponding to relevant practical situations: (1) the empirical cross-asset correlations in the risk-neutral world are given by the user, and we need to calibrate the correlations between the driving Brownian motions; or (2) they have to be estimated from historical time series. The theoretical background for the proposed estimators, including the ergodicity of the multidimensional CIR process, is also studied.
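
For concreteness (a commonly used multi-asset formulation, written down here as our sketch): asset i follows the Heston dynamics

\[
dS_t^i = \mu_i S_t^i\, dt + \sqrt{v_t^i}\, S_t^i\, dW_t^i, \qquad
dv_t^i = \kappa_i \bigl(\theta_i - v_t^i\bigr)\, dt + \sigma_i \sqrt{v_t^i}\, dB_t^i,
\]

and the d(d - 1)/2 asset-asset cross-correlations ρij = d⟨W^i, W^j⟩t / dt, i < j, are the parameters to be calibrated or estimated.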


Jean Jacod based on joint work with Viktor Todorov: Do price and volatility jump together?

We consider a process Xt which is observed on a finite time interval [0,T] at discrete times 0, Δn, 2Δn, .... This process is an Itô semimartingale with stochastic volatility σt2. Assuming that X has jumps on [0,T], we derive tests to decide whether the volatility process has jumps occurring simultaneously with the jumps of Xt. There are two different families of tests, for the two possible null hypotheses (common jumps or disjoint jumps). They have a prescribed asymptotic level as the mesh Δn goes to 0. We show in simulations that these tests perform reasonably well even in finite samples, and we also put them to use on S&P 500 index data.
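
The local volatility estimates entering such tests can be sketched as follows (a standard construction; the exact choice of window and truncation in the talk may differ): around a candidate jump time i0Δn, one averages truncated squared increments over a window of kn observations,

\[
\hat{\sigma}_-^2 = \frac{1}{k_n \Delta_n} \sum_{i = i_0 - k_n}^{i_0 - 1} \lvert \Delta_i^n X \rvert^2\, \mathbf{1}_{\{\lvert \Delta_i^n X \rvert \le u_n\}},
\]

with the threshold un removing further jumps, and analogously for the window after the jump; comparing the estimates before and after reveals whether the volatility jumped as well.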


Codina Cotar based on joint work with Gero Friesecke and Claudia Klüppelberg: Optimal Transportation Applied to Finance and Physics

The general problem is to find minimizers of EQ[c(X1,...,Xn)] over a specified set of measures Q, for some cost function c. A prominent example in finance is the search for bounds on certain risk functionals, for instance the value-at-risk or expected shortfall of a sum of risks, when the marginals are known but the dependence structure is unknown. Another example is the study of molecules with atomic nuclei at positions R1,...,Rn, where we want to model the ground state energy. It turns out that the intrinsic (bivariate) problem is an optimal transportation problem with cost function c(x,y) = l(|x-y|), where c and l have certain analytic properties. This problem is new, and we present its solution.
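
In the physics example, the relevant cost is the Coulomb interaction (this specialisation is our illustration): with all n marginals fixed, one minimises

\[
E_Q\Bigl[\, \sum_{1 \le i < j \le n} \frac{1}{\lvert X_i - X_j \rvert} \Bigr],
\]

which for n = 2 is exactly an optimal transportation problem with cost c(x,y) = l(|x-y|) and l(r) = 1/r.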


Oliver Pfaffel: Option pricing in multivariate stochastic volatility models of OU type

We will discuss a multivariate stochastic volatility model of OU type that generalizes the univariate Barndorff-Nielsen-Shephard (BNS) model. First, we derive the joint characteristic function and give conditions under which it is analytic in an open complex strip around zero. We also determine equivalent martingale measures that preserve the structure of our model. Moreover, we specify some concrete models in which the characteristic function is known in closed form; one of them generalizes the univariate Gamma-OU BNS model. Finally, we use Fourier inversion techniques to calibrate this model to real market prices.
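
For reference, the univariate BNS model being generalised reads (standard formulation):

\[
dX_t = \bigl(\mu + \beta \sigma_t^2\bigr)\, dt + \sigma_t\, dW_t + \rho\, dZ_{\lambda t}, \qquad
d\sigma_t^2 = -\lambda \sigma_t^2\, dt + dZ_{\lambda t},
\]

with Z a subordinator; in the multivariate model, σt2 is replaced by a positive semi-definite matrix-valued process of OU type.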


Birgit Koos based on joint work with Dirk Broeders and An Chen: An institutional evaluation of pension funds and life insurance companies

This paper compares two different types of annuity providers, namely defined benefit pension funds and life insurance companies. One of the key differences is that the residual risk in pension funds is borne collectively by the beneficiaries and the sponsor, while in the case of life insurers it is borne by the external shareholders. The paper employs a contingent claim approach to evaluate the risk-return trade-off for annuitants, taking into account the differences in contract specification and in regulatory regimes. Mean-variance analysis is conducted to determine the annuity choices of consumers with different preferences. Using realistic parameters, we find that under linear and quadratic utility, life insurance companies always dominate pension funds, while under other utility specifications this is only true for low default probabilities. Furthermore, we find that power utility consumers are indifferent if the long-term default probability of pension funds exceeds that of life insurers by 2 to 4%.


Robert Stelzer based on joint work with Ole E. Barndorff-Nielsen: Multivariate supOU processes and a stochastic volatility model with possible long memory

Multivariate supOU processes are defined using a Lévy basis on the real numbers times the set of d × d matrices with all eigenvalues having strictly negative real parts. We discuss the existence, the finiteness of moments, the second-order structure and important path properties, noting that the peculiarities of the underlying matrices cause new phenomena and features compared to the known univariate case. In particular, we give precise conditions for the validity of an analogue of the stochastic differential equation satisfied by Ornstein-Uhlenbeck type processes, which was conjectured in the univariate case by Barndorff-Nielsen [2001, Superposition of Ornstein-Uhlenbeck type processes, Theory Probab. Appl. 45, 175-194] but has not yet been proven. Our results also imply conditions under which supOU processes are compatible with semimartingale integration theory. Using the general results, we define supOU processes on the positive semi-definite matrices, which we use as the "instantaneous covariance matrix" process in a stochastic volatility model. After analysing some properties of the resulting stochastic volatility models, we give examples which show in particular that long-range dependence effects may arise.
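
Schematically (simplified from the general definition), a multivariate supOU process is given by

\[
X_t = \int_{M_d^-} \int_{-\infty}^{t} e^{A(t - s)}\, \Lambda(dA, ds),
\]

where Λ is the Lévy basis on M_d^- × ℝ and M_d^- denotes the set of d × d matrices whose eigenvalues all have strictly negative real parts; superposing over the mean-reversion matrices A is what may generate long memory.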


Jan Kallsen based on joint work with Paul Krühner: On a Heath-Jarrow-Morton approach for stock options

In the Heath-Jarrow-Morton (HJM) approach in interest rate theory, the whole forward rate curve rather than the short rate is taken as the state variable of a stochastic model. Absence of arbitrage then leads to consistency and drift restrictions, in particular the HJM drift condition. Several attempts have been made to transfer this idea to options on a stock, cf. e.g. Schönbucher (1999), Schweizer & Wissel (2008), Carmona & Nadtochiy (2009), Jacod & Protter (2006). Here, the underlying stock plays the role of the short rate. The implied volatility surface or a reparametrisation thereof serves as state variable and hence as counterpart of the forward rate curve in the classical HJM framework. Our approach resembles Carmona & Nadtochiy (2009) in that we try to preserve the main features of the HJM setup. However, it is based on a different parametrisation or codebook, which simplifies both theory and application.
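
For comparison, in the classical interest-rate setting with forward rates df(t,T) = α(t,T) dt + σ(t,T) dWt under the risk-neutral measure, the HJM drift condition reads

\[
\alpha(t, T) = \sigma(t, T) \int_t^T \sigma(t, u)\, du;
\]

the talk develops an analogous consistency and drift restriction for a codebook parametrising stock option prices.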


Peter Hepperger: Electricity option pricing with Hilbert space valued jump diffusion models

In contrast to stock markets, the majority of contracts on the electricity market depend on the price during a future delivery period rather than at a single point in time. Standard methods for pricing options (or rather swaptions) therefore cannot be used without modification. Moreover, the relation of spot and forward prices is not clearly defined for electricity due to its non-storability. This difficulty can be avoided by directly modeling the forward curve under the risk-neutral measure, similar to the Heath-Jarrow-Morton approach. Several authors propose models that are special cases of Hilbert space valued jump-diffusion models. These, however, pose some challenges when evaluating option prices, since the resulting distributions are not known in analytic form. We present a numerical pricing method based on solving a Hilbert space valued partial integro-differential equation. Karhunen-Loève projection is used to reduce the infinite-dimensional problem to a low-dimensional one. Convergence of the approximation can be shown, with an error bound depending on the eigenvalues of a covariance operator. Finally, we use a sparse grid finite difference method and discontinuous Galerkin time-stepping to solve the reduced problem efficiently.
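
The dimension reduction step can be sketched as follows (generic Karhunen-Loève truncation, our notation): the Hilbert space valued forward curve Xt is projected onto the leading eigenvectors ek of a covariance operator,

\[
X_t \approx \sum_{k=1}^{d} \langle X_t, e_k \rangle\, e_k,
\]

and the error bound mentioned above is controlled by the neglected eigenvalues λk, k > d.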


Stefan Ankirchner based on joint work with Gregor Heyne: Cross hedging with stochastic correlation

This talk is concerned with quadratic hedging of contingent claims with basis risk. We extend existing results by allowing the correlation between the hedging instrument and the underlying of the contingent claim to be random itself. We assume that the correlation process ρ evolves according to a stochastic differential equation with values between the boundaries -1 and 1. We keep the correlation dynamics general and derive an integrability condition on the correlation process that allows us to describe and compute the quadratic hedge by means of a simple hedging formula that can be implemented directly. Furthermore, we show that the conditions on ρ are fulfilled by a large class of dynamics. The theory is exemplified by various explicitly given correlation dynamics.
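
One concrete family satisfying the boundary constraint (our example; the talk keeps the dynamics general) is a Jacobi-type mean-reverting diffusion,

\[
d\rho_t = \kappa\bigl(\bar{\rho} - \rho_t\bigr)\, dt + \alpha \sqrt{1 - \rho_t^2}\, dW_t,
\]

whose diffusion coefficient vanishes at ±1 and which, under suitable parameter restrictions, stays in (-1, 1).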


Alexander Szimayer based on joint work with Jing Li: The Uncertain Force of Mortality Framework: Pricing Unit-Linked Life Insurance Contracts

Unit-linked life insurance contracts link the financial market and the insurance market. In a complete and arbitrage-free financial market, financial risk can be hedged perfectly, but perfect hedging is no longer possible when mortality risk is embedded in a financial product. For many years this problem was circumvented by assuming that the force of mortality is deterministic. Under this assumption, an insurance company can hedge against mortality risk by pooling a large number of policyholders and then only needs to deal with the financial risk. In recent years, however, it has been acknowledged that the force of mortality is actually stochastic, and researchers have tried to model it as such. The drawback of this approach is that it cannot provide a nearly perfect hedge against mortality risk unless a large number of mortality-linked financial products are liquidly traded. In contrast to specifying a stochastic model for the force of mortality, we provide a framework where the force of mortality is uncertain but stays within lower and upper bounds. Within this framework, we obtain upper and lower price bounds for European-style unit-linked life insurance contracts by applying optimal control theory and PDE methods. In particular, the upper and lower price bounds are obtained by seeking out the worst and best scenarios among the admissible forces of mortality. The PDE formulation of the pricing problem is solved with finite difference methods. The upper and lower price bounds enable us to enhance hedging strategies and to reduce the exposure to financial and mortality risks.
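
Schematically (our sketch of the control formulation, with illustrative notation): if the force of mortality is only known to satisfy μt ∈ [μ̲(t), μ̄(t)], the upper price bound V of a contract paying a death benefit D solves a Hamilton-Jacobi-Bellman type PDE of the form

\[
\partial_t V + \mathcal{L}_S V - r V + \sup_{\mu \in [\underline{\mu}(t), \overline{\mu}(t)]} \mu\, \bigl(D(t, s) - V\bigr) = 0,
\]

where \mathcal{L}_S is the generator of the underlying fund; the lower bound replaces the sup by an inf, so the extremal force of mortality is chosen pointwise according to the sign of D - V.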