Goals of Research Area H

Insurance and financial market risk.

The research program focuses on the valuation and risk management of tradable and non-tradable risk. Within the classical framework of option pricing, the valuation of contingent claims is justified by a self-financing trading strategy that replicates the payoff of the financial contract. The formal model relies on a filtered probability space, where the price processes of traded assets are adapted to the filtration. The set of admissible trading strategies is restricted to the set of predictable stochastic processes. In this idealized financial market model, the contract value equals the setup cost of a self-financing trading strategy. The trading strategy provides the dynamic risk management advice for the underwriter of the claim, whereas its value equals the premium paid by the buyer. In this idealized border case, valuation and risk management are therefore closely related. Existing financial markets, however, deviate from this framework.
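As a minimal illustration of this replication argument, the following sketch (a one-period binomial model with illustrative numbers, not part of the research program itself) computes the self-financing portfolio that replicates a call payoff and confirms that its setup cost coincides with the risk-neutral value:

```python
# One-period binomial model: the stock moves from S0 to S0*u or S0*d,
# the bond grows at rate r.  The replicating portfolio (delta shares of
# stock, b units of the bond) solves
#   delta*S0*u + b*(1+r) = payoff_up,   delta*S0*d + b*(1+r) = payoff_down,
# and its setup cost delta*S0 + b is the arbitrage-free contract value.

def replicate_claim(S0, u, d, r, payoff_up, payoff_down):
    delta = (payoff_up - payoff_down) / (S0 * (u - d))   # stock position
    b = (payoff_up - delta * S0 * u) / (1.0 + r)         # bond position
    return delta, b, delta * S0 + b                      # strategy, setup cost

# Illustrative numbers: a call with strike 100 on a stock at 100.
S0, u, d, r, K = 100.0, 1.2, 0.8, 0.05, 100.0
delta, b, value = replicate_claim(S0, u, d, r,
                                  max(S0 * u - K, 0.0), max(S0 * d - K, 0.0))

# Risk-neutral valuation yields the same number: the premium of the buyer
# equals the setup cost of the underwriter's hedging strategy.
q = (1.0 + r - d) / (u - d)
rn_value = (q * max(S0 * u - K, 0.0)
            + (1.0 - q) * max(S0 * d - K, 0.0)) / (1.0 + r)
```

Here the strategy itself is the risk management advice: holding `delta` shares and `b` in the bond reproduces the claim in both states of the world.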
The recent financial crisis clearly emphasizes the key importance of financial stability both for individual decisions and for economic development as a whole. On an individual basis, equity-linked life and pension insurance contracts combine traded and non-traded risks. The payoff of these contracts is defined in a path-dependent manner by the value of complex investment strategies in tradable financial assets such as stocks and bonds. Non-tradable risks such as death and survival uncertainty substantially influence the risk management of the underwriter, i.e. the insurer. These contracts have extremely long times to maturity, and the insurer has to satisfy regulatory requirements such as short-selling restrictions. Valuation and, hence, risk management in terms of a self-financing dynamic trading strategy is therefore not possible. Instead, we will consider super- and sub-hedging strategies within an uncertain volatility framework. Furthermore, the ruin probability will be considered and contrasted with regulatory requirements and different national bankruptcy rules. The research agenda includes the analysis of different contract designs with financial guarantees and their impact on the default probability of the underwriter. The analysis will involve continuous-time stochastic processes, martingale representation, and numerical techniques for partial differential equations.
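A simple numerical sketch of the uncertain volatility idea, with purely illustrative parameters: for a convex payoff such as a plain call, the Black-Scholes-Barenblatt equation degenerates, so the super- and sub-hedging prices over a volatility band reduce to Black-Scholes prices at the band's endpoints.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Standard Black-Scholes price of a European call.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Uncertain volatility band [sigma_min, sigma_max]: for a convex payoff the
# super-hedging (sub-hedging) price is the Black-Scholes price at sigma_max
# (sigma_min).  All parameter values below are illustrative assumptions.
S, K, T, r = 100.0, 100.0, 1.0, 0.02
sigma_min, sigma_max = 0.15, 0.30
sub_price = bs_call(S, K, T, r, sigma_min)
super_price = bs_call(S, K, T, r, sigma_max)
```

The gap between `sub_price` and `super_price` is the price interval left open once perfect replication is abandoned; for non-convex payoffs (e.g. contracts with guarantees), the full nonlinear PDE has to be solved numerically.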
On an aggregated basis, when insurance risk is transferred to financial markets by means of securitization, residual risks frequently remain. An important aim in the econometric analysis of financial data is to analyse residual risk when the hedging instruments are cointegrated with the risk to be hedged. We will explore the effect of cointegration on optimal hedge ratios and on the performance of residual risk management using the setup of affine models. While guaranteeing numerical feasibility, these models also allow us to capture the stylized facts of asset returns, such as stochastic volatility and jumps. Furthermore, we will extend these models to also include quantity risk, which amplifies market risks.
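The role of cointegration for hedge ratios can be illustrated with a deliberately simple simulation (a common stochastic trend plus stationary noise, not one of the affine models mentioned above): the level regression recovers the cointegrating hedge ratio, and the hedged position carries only the stationary residual risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated illustration: spot exposure and hedge instrument share a
# common stochastic trend (so they are cointegrated), with stationary
# idiosyncratic noise on top.  All parameters are assumptions.
n = 5000
trend = np.cumsum(rng.normal(0.0, 1.0, n))          # common random walk
spot = trend + rng.normal(0.0, 0.5, n)
hedge = 0.8 * trend + rng.normal(0.0, 0.5, n)

# OLS on levels estimates the cointegrating hedge ratio (here 1/0.8 = 1.25);
# the hedged position then only retains the stationary residual risk.
beta = np.polyfit(hedge, spot, 1)[0]
residual = spot - beta * hedge
```

In this toy setting the residual variance is orders of magnitude below the variance of the unhedged position, which is what "residual risk" means once the non-stationary common component has been hedged away.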


Based on ongoing work of Bayer and others, the analysis of macroeconomic problems will become a major topic of research. If asset markets are incomplete, idiosyncratic income risk and the distribution of incomes over individual agents are important determinants of the aggregate allocation. Agents will seek to insure themselves using the available assets; for a production economy, this leads to an over-accumulation of physical capital. Depending on the extent of market incompleteness, i.e. on the insurance available, aggregate savings respond to fluctuations in, say, labor market risk, such that consumption demand drops as risk increases. A key question will be to understand how this mechanism can generate or perpetuate business cycles. A major task is therefore to document heterogeneity in income and earnings dynamics using micro panel data, focusing in particular on changes in earnings that are unpredictable to the household.
From an econometric and statistical point of view, an important topic will thus be the development of new techniques for analysing panel data. A major challenge in this context is to account for possible heterogeneity in the reaction to exogenous explanatory variables, i.e. regression coefficients may vary over time and across individuals. The development of substantively interpretable econometric models and of corresponding efficient estimation procedures constitutes an important research topic. Previous work indicates that solutions can be obtained by combining methods of functional data analysis with model selection procedures developed for sparse high-dimensional regression problems. Research in this area is related to the work of Rauhut (Bonn Junior Fellow) in Research Area J.
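As a baseline for the sparse-regression ingredient, the following sketch implements a minimal lasso via coordinate descent on simulated data (the data-generating process and the penalty level are assumptions, and this stands in only for the generic selection idea, not for the procedures to be developed):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal coordinate-descent lasso:
    minimize (1/(2n)) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # partial residual
            rho = X[:, j] @ r / n
            # Soft-thresholding update for coordinate j.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:3] = [2.0, -1.5, 1.0]          # only three coefficients are active
y = X @ true_beta + rng.normal(0.0, 0.5, n)

beta_hat = lasso_cd(X, y, lam=0.1)
```

The estimate sets (almost all of) the seventeen inactive coefficients to zero while recovering the three active ones, which is the behavior one wants when deciding which coefficients genuinely vary across individuals.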
In econometric terms, heteroskedasticity is a particular form of heterogeneity. Heteroskedasticity induces severe distortions in the distributions of test statistics whose asymptotics are based on functionals of Wiener processes. We aim to obtain (co)integration tests that are robust to heteroskedasticity. In a further step, we will examine the robustness of several parameter stability tests.
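What "robust to heteroskedasticity" means can be shown in the simplest possible setting, a cross-sectional regression with White's sandwich estimator (a standard textbook device, far simpler than the (co)integration tests in question; data and parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylized regression whose error variance depends on the regressor.
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
u = rng.normal(size=n) * (0.5 + np.abs(x))      # heteroskedastic errors
y = X @ np.array([1.0, 2.0]) + u

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# Conventional covariance (valid only under homoskedasticity) versus
# White's heteroskedasticity-consistent (HC0) sandwich estimator.
cov_ols = (resid @ resid) / (n - 2) * XtX_inv
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc0 = XtX_inv @ meat @ XtX_inv
```

Because the error variance rises with |x|, the conventional estimator understates the slope's sampling variance, so tests based on it over-reject; the sandwich form corrects this. The distortions for unit-root and cointegration statistics are analogous in spirit but concern the Wiener-process limit itself.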


In macroeconomic modeling, heterogeneity is the source of aggregation problems. Similar to the effect of market incompleteness on households' savings decisions, if the factor employment decisions of plants are of a discrete nature, e.g. due to fixed costs of adjustment, the distribution of idiosyncratic productivity is an important determinant of aggregate factor adjustments. This makes it necessary to aggregate explicitly in any macro model with underlying discrete micro behavior, which constitutes a challenge for quantitative numerical modeling. Once such explicit aggregation is done and the model is solved, new questions can be addressed, such as whether fluctuations in productivity risk can generate sizable business cycles. Answering these questions requires exploiting long micro panel data in order to identify the underlying frictions that lead to the discrete behavior.
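Why the cross-sectional distribution must be carried along can be seen in a stylized (S,s)-type simulation (the rule, trigger, and shock size below are assumptions made purely for illustration): plants adjust only when their capital gap exceeds a fixed-cost trigger, so the aggregate response to a common shock depends on how much mass sits near that trigger.

```python
import numpy as np

rng = np.random.default_rng(3)

def aggregate_adjustment(gaps, trigger=1.0):
    """Stylized (S,s) rule: a plant closes its capital gap only when the
    gap exceeds the fixed-cost trigger; otherwise it stays inactive."""
    adjusting = np.abs(gaps) > trigger
    return gaps[adjusting].sum()

# Cross-section of plant-level gaps between actual and target capital.
n = 100_000
gaps = rng.normal(0.0, 0.8, n)

# A common shock shifts every gap by the same amount, but the aggregate
# response is governed by the whole cross-sectional distribution (how many
# plants are pushed past the trigger), not by the mean gap alone.
base = aggregate_adjustment(gaps)
shocked = aggregate_adjustment(gaps + 0.2)
```

A representative-agent shortcut that tracks only the mean gap would predict a smooth, proportional response; with discrete micro behavior, the extensive margin of newly adjusting plants dominates, which is exactly what explicit aggregation has to capture.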
The aggregation of micro-based decisions to a macro level has a direct analog in empirical work: when analysing panel data, research questions often demand an aggregation of the findings from each unit of the panel. This task is not trivial when the units are heterogeneous, and it becomes even more difficult when the unit-specific data are non-stationary. An ongoing research direction is inference with possibly non-stationary series in heterogeneous panels, e.g. the identification of structural breaks in the presence of long memory. The impact of data aggregation on the outcome of well-established econometric procedures will also have to be examined in detail.
Aggregation problems also arise in the analysis of financial data. Measuring and modeling the volatility of asset returns is important in portfolio optimization and risk management. So far, mainly daily returns have been used to construct volatility estimates. However, the recent literature suggests that aggregating high-frequency returns leads to more precise (co)volatility measures and can significantly improve volatility forecasts. We plan to extend this structural analysis. In particular, we will investigate and exploit the information inherent in aggregated high-frequency data to improve the multivariate modeling of return dependencies and to gain more insight into the long-memory phenomena of volatility.
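The precision gain from aggregating intraday returns can be checked directly in a simulation with known volatility (constant volatility and all parameter choices below are simplifying assumptions; the realized-variance literature of course treats far richer dynamics):

```python
import numpy as np

rng = np.random.default_rng(4)

# One "day" = m intraday Brownian returns with known total daily variance.
true_var = 0.02 ** 2        # daily return variance (2% daily volatility)
m = 78                      # e.g. 5-minute returns in a 6.5-hour session
days = 2000

intraday = rng.normal(0.0, np.sqrt(true_var / m), size=(days, m))

# Realized variance aggregates squared intraday returns; the classical
# daily estimate uses only the single squared daily return.
rv = (intraday ** 2).sum(axis=1)
daily_sq = intraday.sum(axis=1) ** 2

mse_rv = np.mean((rv - true_var) ** 2)
mse_daily = np.mean((daily_sq - true_var) ** 2)
```

Both estimators are unbiased for the daily variance, but the realized variance has roughly 1/m times the sampling variance of the squared daily return, which is the sense in which high-frequency aggregation yields more precise (co)volatility measures.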