Kay Giesecke's Research
Management Science and Engineering
Office: Huang Engineering Center 307
Email: giesecke@stanford.edu
The recent financial crisis highlights the need to better understand the behavior of risk in financial markets. My research and teaching address the quantification and management of financial risks, especially the risk of default ("credit risk"). I am interested in
- The stochastic modeling, valuation and hedging of credit risks,
- The development of statistical tools to estimate and predict these risks, and
- The methods for solving the significant computational problems that arise in this context.
My research results enable more effective hedging of credit risks, better risk management at financial institutions, and more accurate measurement of systemic risk in financial markets. They also inform the design of regulatory policies. Much of my research is methodological in nature and of broader interest. Many contributions transcend the financial engineering area and have potential applications in other areas, including insurance, queuing, and reliability.
Below I outline some of my recent and ongoing work.
1. Stochastic modeling. The estimation of credit risk in pools of defaultable assets and the valuation and hedging of securities exposed to that risk require stochastic models of correlated event timing. The modeling challenge is to capture the sources of default clustering identified in empirical research while preserving analytical tractability in applications. In a series of articles [1, 2, 3, 4, 5], I develop and apply a top-down approach to this problem. The timing of defaults in the pool is described by a point process whose dynamics are specified without reference to the portfolio constituents. This approach allows me to describe the exposure of the constituents to common risk factors and contagion effects, two main sources of clustering, in terms of a concise set of economically meaningful parameters. My student Eymen Errais, industry collaborator Lisa Goldberg, and I introduce affine point processes and propose them as models of portfolio loss. My student Xiaowei Ding, industry collaborator Pascal Tomecek, and I construct time-changed birth processes for modeling the pool loss. The models in these point process families capture the self-excitation of defaults and permit analytical solutions. Empirical analyses show that they can fit credit derivatives market data.
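As a toy illustration of the self-excitation mechanism (my own sketch with hypothetical parameter values, not the affine point process models or their calibration from these papers), the following simulates a one-dimensional Hawkes-type process via Ogata's thinning algorithm:

```python
import math
import random

def simulate_hawkes(mu, alpha, kappa, horizon, seed=0):
    """Sample event times of a self-exciting (Hawkes) process with
    intensity lambda(t) = mu + sum_{t_i < t} alpha*exp(-kappa*(t - t_i))
    via Ogata's thinning algorithm."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        # the intensity only decays between events, so its current value
        # bounds it from above until the next event occurs
        lam_bar = mu + sum(alpha * math.exp(-kappa * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)
        if t > horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-kappa * (t - s)) for s in events)
        if rng.random() * lam_bar <= lam_t:  # accept with prob lam_t/lam_bar
            events.append(t)
    return events

# hypothetical parameters; alpha/kappa < 1 keeps the process stable
events = simulate_hawkes(mu=0.5, alpha=0.8, kappa=1.5, horizon=50.0)
```

Each accepted event pushes the intensity up by alpha, making further events temporarily more likely, which is the clustering effect discussed above.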
However, stand-alone models of portfolio loss are silent about the constituent risks. They cannot be used to estimate constituent hedge sensitivities, which are important in practice. Lisa Goldberg, Xiaowei Ding, and I contribute a method that extends any model of portfolio loss to the constituents. We propose random thinning to decompose the intensity of the portfolio loss process into a sum of constituent default intensities. We show that a thinning process, which allocates the portfolio intensity to the constituents, exists uniquely and is a probabilistic model of the next name to default. We derive a formula for a constituent's default probability in terms of the thinning process and the portfolio intensity, and develop a semi-analytical transform approach to evaluate it. The formula leads to an estimation scheme for constituent hedges. An empirical analysis for 2008 shows that the constituent hedges generated by our approach outperform the hedges prescribed by the widely used copula model.
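The thinning idea can be sketched in a few lines (a stylized sketch with a constant portfolio intensity and constant, hypothetical thinning weights; the papers treat general stochastic intensities and thinning processes):

```python
import random

def thin_portfolio_events(lam, weights, horizon, seed=0):
    """Random thinning: the portfolio default intensity lam is allocated
    to constituents via weights Z_i summing to one, so each portfolio
    event is attributed to name i with probability Z_i and name i's
    default intensity is Z_i * lam."""
    assert abs(sum(weights) - 1.0) < 1e-12
    rng = random.Random(seed)
    counts, t = [0] * len(weights), 0.0
    while True:
        t += rng.expovariate(lam)   # next portfolio-wide default
        if t > horizon:
            break
        u, acc = rng.random(), 0.0  # attribute it to a constituent
        for i, z in enumerate(weights):
            acc += z
            if u <= acc:
                break
        counts[i] += 1
    return counts

# hypothetical intensity and thinning weights
counts = thin_portfolio_events(lam=2.0, weights=[0.5, 0.3, 0.2], horizon=1000.0)
```

Over a long horizon, each name's share of the simulated defaults approaches its thinning weight, reflecting the interpretation of the weight as the next-to-default probability.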
2. Statistical tools. The empirical analysis of defaults based on point process models of event timing requires statistical methods for parameter inference. I devise such methods and apply them to explore the sources of default clustering, study corporate bond default risk, extract risk premia in credit derivative markets, analyze the actual-measure risk profile of complex credit derivatives such as CDOs, and quantify systemic risk in the U.S. financial market [7, 8].
My student Gustavo Schwenkler and I develop likelihood estimators of the parameters of a marked point process and of incompletely observed explanatory factors that influence the arrival intensity and the mark distribution. We provide conditions guaranteeing consistency and asymptotic normality as the sample period grows. We also establish an approximation to the likelihood and analyze the convergence and asymptotic properties of the associated estimators. These results provide a rigorous statistical foundation for empirical work in credit risk and other areas concerned with event timing.
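In the simplest special case, a constant-intensity process observed on [0, T], the likelihood estimator is available in closed form. The following minimal sketch (my illustration, not the estimators of the paper, which cover marked processes with incompletely observed factors) checks its consistency on simulated data:

```python
import math
import random

def poisson_loglik(lam, n_events, horizon):
    """Point process log-likelihood with constant intensity lam on
    [0, horizon]:  l(lam) = N*log(lam) - lam*horizon."""
    return n_events * math.log(lam) - lam * horizon

def mle_rate(n_events, horizon):
    """Closed-form maximizer of the likelihood above: N / T."""
    return n_events / horizon

# simulate a path with a known rate and recover it
rng = random.Random(1)
true_lam, horizon = 0.7, 5000.0
n, t = 0, 0.0
while True:
    t += rng.expovariate(true_lam)
    if t > horizon:
        break
    n += 1
lam_hat = mle_rate(n, horizon)
```

As the sample period grows, lam_hat concentrates around the true rate, which is the consistency property the paper establishes in far greater generality.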
My students Gustavo Schwenkler and Shahriar Azizpour and I supplement the likelihood estimators with goodness-of-fit tests based on time changes for point processes. We then use these tools to analyze the sources of default clustering in the U.S. We find strong evidence that defaults are self-exciting, after controlling for the influence of the macro-economic variables that prior studies have identified as predictors of U.S. defaults, and for the role of an unobservable frailty risk factor whose importance for U.S. default timing was recently established. This empirical result is important because it informs the design of stochastic models of correlated default timing.
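A minimal version of a time-change goodness-of-fit check (my own sketch with a hypothetical linear intensity, not the tests of the paper): if the fitted model is correct, the compensator-transformed interarrival times should look like i.i.d. standard exponential draws.

```python
import math
import random

def compensator(t, a, b):
    """Lambda(t) = integral of the linear intensity a + b*s over [0, t]."""
    return a * t + 0.5 * b * t * t

def time_change_residuals(event_times, a, b):
    """Time-change theorem: if the model lambda(t) = a + b*t is correct,
    the increments of the transformed times Lambda(t_1), Lambda(t_2), ...
    are i.i.d. Exp(1)."""
    taus = [compensator(t, a, b) for t in event_times]
    return [y - x for x, y in zip([0.0] + taus[:-1], taus)]

# simulate an inhomogeneous Poisson path by thinning, then test the fit
rng = random.Random(2)
a, b, horizon = 1.0, 0.05, 200.0
lam_bar = a + b * horizon   # upper bound on the intensity over [0, horizon]
t, times = 0.0, []
while True:
    t += rng.expovariate(lam_bar)
    if t > horizon:
        break
    if rng.random() * lam_bar <= a + b * t:
        times.append(t)

gaps = time_change_residuals(times, a, b)
mean_gap = sum(gaps) / len(gaps)
```

Under the true model the residual gaps have mean one; a systematic departure would flag model misspecification, which is the basis of the tests described above.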
3. Computational methods. The computational problems arising in applications of stochastic models of correlated event timing are significant. I develop widely applicable tools to address these challenging problems. The tools are based on the underlying point process structure; they transcend the specifics of the motivating credit risk application.
3.1 Transform methods. My student Shilin Zhu and I develop a measure-change approach to calculating a transform of a vector point process. We show that the transform can be expressed in terms of a Laplace transform, under an equivalent probability measure, of the point process compensator. The latter can be calculated explicitly for a wide range of model specifications because it is analogous to the value of a simple security. The transform formula extends the computational tractability offered by extant security pricing models to a point process and its applications, which include credit pricing and risk management problems.
3.2 Simulation methods. Monte Carlo (MC) simulation is widely used to address the computational problems arising in applications of event timing models. There are, however, significant issues. First, conventional discretization schemes often generate biased simulation estimators. The error is hard to quantify. Second, the MC analysis of rare events, which are at the center of important applications including the measurement of credit and systemic risks, tends to be highly inefficient. My research addresses these issues.
Exact sampling of point processes and jump-diffusions.
I team up with my doctoral and post-doctoral students Mohammad Mousavi and Hossein Kakavand and industry collaborator Hideyuki Takada to develop a method for the unbiased MC estimation of an expectation of an arbitrary function of a vector indicator point process evaluated at a fixed time. The idea is to construct a Markov chain whose value at any given time has the same distribution as the value of the point process. We show that such a mimicking chain exists, and that its transition rate is given by a conditional expectation of the point process intensity that can be computed for many standard point process models. The construction reduces the original MC problem to one involving a simple Markov chain, which can be sampled exactly.
In related work [17, see also 18], my collaborators and I focus on the exact sampling of point process paths. Our method is based on a filtering argument. We project the point process onto its own filtration and then sample it in this coarser subfiltration. The sampling is based on the subfiltration intensity, which is deterministic between event times and therefore facilitates the use of exact schemes. This projection method leads to an unbiased estimator of an expectation of an arbitrary functional of a point process path. My students Baeho Kim and Shilin Zhu and I develop other algorithms for the exact and asymptotically exact sampling of point process paths.
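To illustrate why a deterministic intensity between events enables exact schemes (a stylized sketch with hypothetical parameters, not the projection method itself): the next interarrival can be sampled without discretization bias by inverting the compensator at a standard exponential draw.

```python
import math
import random

def next_gap(c, lam0, kappa, rng):
    """Exact sampling of the next interarrival when the intensity is
    deterministic since the last event, lambda(s) = c + (lam0 - c)*exp(-kappa*s):
    draw E ~ Exp(1) and solve Lambda(s) = E by bisection (no time stepping)."""
    def Lam(s):  # compensator accumulated s units after the last event
        return c * s + (lam0 - c) * (1.0 - math.exp(-kappa * s)) / kappa
    e = rng.expovariate(1.0)
    lo, hi = 0.0, 1.0
    while Lam(hi) < e:      # Lambda is increasing and unbounded since c > 0
        hi *= 2.0
    for _ in range(80):     # bisection to near machine precision
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Lam(mid) < e else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical parameters: intensity starts at 2.0 and decays toward 0.4
rng = random.Random(3)
gaps = [next_gap(c=0.4, lam0=2.0, kappa=1.0, rng=rng) for _ in range(5000)]
mean_gap = sum(gaps) / len(gaps)
```

Because the gap is obtained by root-finding rather than by discretizing time, the resulting estimators carry no discretization bias, which is the point of the exact schemes above.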
Jump-diffusions arise as models of security, energy, and commodity prices, interest rates, and event timing. My student Dmitry Smelov and I develop a method for the exact simulation of a skeleton, a hitting time, and other functionals of a jump-diffusion with state-dependent drift, volatility, jump intensity, and jump size. The method generalizes a rejection algorithm recently proposed for diffusions. It requires the drift function to be C^1, the volatility function to be C^2, and the jump intensity function to be locally bounded. No further structure is imposed on these functions. The method leads to unbiased estimators of security prices, transition densities, hitting probabilities, and other quantities.
In ongoing work, we seek to generalize the rejection algorithm to vector-valued jump-diffusions. In an effort to improve the method's efficiency, we also analyze the use of alternative proposal processes. Finally, we use the exact algorithm to construct and analyze simulated likelihood estimators of the parameters of general jump-diffusions.
Provably efficient rare-event algorithms for point processes.
My statistics colleague Tze Leung Lai, doctoral student Shaojie Deng, and I develop a sequential MC method for estimating rare-event probabilities for vector indicator point processes. The method is based on a change of measure and a resampling mechanism. We propose resampling weights that generate an asymptotically optimal estimator of the tail of the distribution of the total event count at a fixed horizon.
In [22, see also 23 and 24], my student Alexander Shkolnik and I develop an importance sampling (IS) algorithm for the tail of the distribution of the total event count at a fixed horizon. The algorithm differs from standard exponential twisting. It entails a change of measure induced by scaling the component intensities of the vector point process. We identify the asymptotically optimal scaling in a general stochastic intensity setting. In ongoing work, we seek to design a measure change that generates estimators with bounded relative error.
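As a minimal illustration of intensity scaling (a plain Poisson special case with hypothetical parameters; the paper's algorithm handles general stochastic intensities, and the choice of scaling there is the substantive issue):

```python
import math
import random

def tail_prob_is(lam, horizon, level, theta, n_paths, seed=4):
    """Estimate P(N_T >= level) for a Poisson process with rate lam by
    simulating under the scaled rate theta*lam and reweighting each path
    with the likelihood ratio (1/theta)**N * exp((theta - 1)*lam*T)."""
    rng = random.Random(seed)
    mu = theta * lam
    total = 0.0
    for _ in range(n_paths):
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(mu)
            if t > horizon:
                break
            n += 1
        if n >= level:  # rare under P, common under the scaled measure
            total += (1.0 / theta) ** n * math.exp((theta - 1.0) * lam * horizon)
    return total / n_paths

# rare event: 20+ arrivals when the mean count is only 5 (hypothetical numbers)
est = tail_prob_is(lam=1.0, horizon=5.0, level=20, theta=4.0, n_paths=20000)
```

Under the scaled measure the target event occurs on roughly half the paths instead of a few in ten million, so the reweighted estimator attains a small relative error with modest sample sizes.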
3.3 Asymptotic approximation methods. In practice, many computational problems involve large pools. The portfolios of credit assets held by banks often consist of tens of thousands of positions. A Monte Carlo analysis of such pools can be burdensome. I use limiting arguments to develop "large pool approximations." These approximations offer significant computational advantages and provide important analytical insights into the risk profile of the pool. However, they require somewhat more concrete assumptions on the point process representing event timing than the simulation methods described above.
In early work [25 and 26], Stefan Weber and I provide Gaussian approximations, based on central limit theorems, to the loss from default in a large homogeneous pool. In recent efforts [27 and 28], Kostas Spiliopoulos, Richard Sowers, my student Justin Sirignano, and I develop laws of large numbers (LLNs) for the loss from default in a heterogeneous pool. We show that the density of the limiting measure solves a non-linear stochastic PIDE, and that the moments of the limiting measure satisfy an infinite system of SDEs. The solution to this system leads to the solution of the stochastic PIDE through an inverse moment problem. It also leads to the distribution of the limiting portfolio loss, which we propose as an approximation to the distribution of the loss in a large pool.
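The law-of-large-numbers effect is easy to see in a stylized conditionally i.i.d. pool (my sketch; the link function p(X) below is hypothetical, and the papers treat genuinely heterogeneous pools with contagion):

```python
import math
import random

def pool_loss_fraction(n, rng):
    """Conditionally i.i.d. pool: draw a common risk factor X, map it to a
    default probability p(X), then default each of the n names independently
    with probability p(X). The conditional LLN says the loss fraction
    converges to p(X) as the pool size n grows."""
    x = rng.gauss(0.0, 1.0)                     # common risk factor
    p = 1.0 / (1.0 + math.exp(-(x - 2.0)))      # hypothetical link function p(X)
    defaults = sum(1 for _ in range(n) if rng.random() < p)
    return defaults / n, p

rng = random.Random(5)
frac, p = pool_loss_fraction(50000, rng)
```

In the limit the randomness of the loss fraction comes entirely from the common factor, so approximating the loss distribution reduces to describing the law of the limit, which is what the stochastic PIDE and moment system deliver in the general setting.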
Kostas Spiliopoulos, Justin Sirignano, and I analyze the fluctuations of the portfolio loss around its large pool limit. We prove a weak convergence result for the fluctuations process and use it to develop a conditionally Gaussian approximation to the loss distribution. This second-order approximation is significantly more accurate than an approximation based on the LLN alone, especially for smaller portfolios.
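A stripped-down version of the second-order idea (a homogeneous, independent pool with hypothetical numbers; the papers' approximation is conditionally Gaussian given the common factors): compare the exact loss tail with its Gaussian approximation.

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binomial_tail(n, p, k):
    """Exact P(L >= k) when the pool loss L is Binomial(n, p)."""
    logp, logq = math.log(p), math.log(1.0 - p)
    total = 0.0
    for j in range(k, n + 1):
        logc = math.lgamma(n + 1) - math.lgamma(j + 1) - math.lgamma(n - j + 1)
        total += math.exp(logc + j * logp + (n - j) * logq)
    return total

def gaussian_tail(n, p, k):
    """Gaussian (second-order) approximation with a continuity correction:
    L is approximately Normal(n*p, n*p*(1-p)) for large n."""
    mu, sd = n * p, math.sqrt(n * p * (1.0 - p))
    return 1.0 - normal_cdf((k - 0.5 - mu) / sd)

# hypothetical pool: 500 names, 2% default probability, tail at 15 defaults
exact = binomial_tail(500, 0.02, 15)
approx = gaussian_tail(500, 0.02, 15)
```

Even at 500 names the Gaussian correction tracks the exact tail closely, whereas the first-order LLN alone would put no mass at all above the mean loss level.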
In ongoing work, Richard Sowers, Kostas Spiliopoulos, and I study large deviations. The analysis of the atypical behavior of the pool will lead to approximations to the tail of the loss and help us design provably efficient IS algorithms.