## Recovery Rate (RR)

Recovery rate measures the value that is expected to be recovered should the obligor default. Historically, estimating recovery rates was not as much a focus as the estimation of the probability of default. This is because RR was thought of as being entirely dependent on the individual features of the underlying collateral and not a function of systematic factors. Recently, however, this has changed, and a significant amount of effort is now being expended in estimating RR.

With the estimation of the above three parameters, the Expected Loss (EL) associated with each obligor can be calculated as follows:

EL = PD x LGD

where LGD is the Loss Given Default and can be calculated as CE x (1 - RR).
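As a minimal sketch of this calculation (all figures below are hypothetical), the expected loss follows directly from the three estimated parameters:

```python
def expected_loss(pd_, ce, rr):
    """Expected loss: EL = PD x LGD, where LGD = CE x (1 - RR).

    pd_ : probability of default
    ce  : credit exposure at default
    rr  : recovery rate
    """
    lgd = ce * (1.0 - rr)
    return pd_ * lgd

# Hypothetical obligor: 2% default probability, 1,000,000 exposure, 40% recovery
el = expected_loss(pd_=0.02, ce=1_000_000, rr=0.40)
print(el)  # 12000.0
```

Note that the same exposure with a higher recovery rate yields a proportionally lower expected loss, which is why RR estimation matters as much as PD estimation.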

The older models of credit risk, for the most part, used some form of credit scoring, where, based on a set of factors, the obligor would be assigned a credit score which in turn would map into a specific measure of credit risk. However, with the advent of very complex credit securities, an increasing need for sophisticated credit models was felt. This requirement, coupled with the availability of cheap computing power, allowed for the development of an increasingly quantitative framework in credit modeling. The approaches to analyzing credit risk can be classified into two main categories, namely,

1. Structural Approach

2. Reduced Form Approach

### 20.6.3.1 Structural Approach

The foundation for the structural approach is based on the original framework developed by Merton using the option pricing principle. Here, the liabilities of a firm are thought of as contingent claims on its assets and default is thought to occur when the market value of the firm's assets is lower than the face value of debt. Hence, assuming that a firm's debt is made up of zero coupon bonds, if the market value of the firm's assets is greater than the face value of the bonds, then the bondholder gets the full face value at time of maturity. If however, the market value of the firm's assets is lower than the face value of the bonds, then the bondholder gets back the market value of the firm, and incurs a loss. Thus the payoff structure for the bondholders at maturity can be thought of as,

Payoff = Min (face value of bond, market value of firm)
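The payoff rule above can be sketched in one line; the two cases below use hypothetical figures:

```python
def bondholder_payoff(face_value, firm_value):
    # Bondholders receive the full face value if the firm's assets cover it;
    # otherwise they recover only the (lower) market value of the firm.
    return min(face_value, firm_value)

print(bondholder_payoff(face_value=100, firm_value=120))  # solvent: 100
print(bondholder_payoff(face_value=100, firm_value=70))   # default: 70
```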

It is clear that in these models, the credit risk of an obligor is a function of the market value of its assets and a well-defined default threshold/barrier. There are several limitations to the Merton model, such as the assumption that default occurs only at maturity, which is clearly unrealistic as defaults can occur at any time during the life of the bond. Similarly, this model assumes only one class of debt, whereas firms have different types of debt with different seniority structures, and often this structure is not strictly adhered to upon default. Several modifications to the original Merton model have been made, each attempting to relax some of the unrealistic assumptions of the original framework.
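A minimal sketch of the basic Merton framework (all inputs hypothetical): treating default as the event that the asset value ends below the face value of debt at maturity, the risk-neutral default probability is N(-d2) from the Black-Scholes setup:

```python
import math

def merton_default_probability(V, F, r, sigma, T):
    """Risk-neutral default probability in the basic Merton model.

    V     : current market value of the firm's assets
    F     : face value of the zero coupon debt, due at time T
    r     : risk-free rate (continuously compounded)
    sigma : volatility of the asset value
    T     : time to debt maturity in years
    """
    d2 = (math.log(V / F) + (r - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    # Standard normal CDF via the error function: default occurs when the
    # asset value finishes below the barrier F, with probability N(-d2).
    return 0.5 * (1.0 + math.erf(-d2 / math.sqrt(2.0)))

# Hypothetical firm: assets 150, debt 100, 25% asset volatility, 5-year horizon
print(round(merton_default_probability(V=150, F=100, r=0.03, sigma=0.25, T=5), 4))
```

Lowering the asset value V toward the barrier F, or raising sigma, pushes the default probability up, which is exactly the "distance to default" intuition behind the structural approach.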

As the structural model is built up from theory, the impact of each variable can be quantified explicitly. As a result, its strength can be thought of as the ability to diagnose and change the credit characteristics of a portfolio through a set of well-defined inputs. However, its limitation lies in the fact that while we assume we can accurately characterize a default process in theory, in reality it is virtually impossible to do so.

### 20.6.3.2 Reduced Form Approach

The reduced form approach assumes no knowledge of the underlying default process. It takes the view that a firm's default time is unpredictable and is driven by a default intensity that is a function of unobservable or latent variables. Thus, in this model, the default process of a firm is not a function of its asset value, and we need not estimate asset values to compute the credit risk. This model takes current market prices to represent the true value of the security under risk-neutral valuation. Thus, this model calibrates to market prices and is less bound by underlying theory. The strength of this approach lies in the fact that we can make excellent in-sample predictions, given that we calibrate to the sample. Furthermore, the flexibility and tractability of this form of modeling is very attractive, as it does not require the modeler to specify an underlying default process. The shortcoming of this approach, however, is that while we may be able to calibrate well in-sample, with no clear understanding of the underlying process our out-of-sample estimates could be very poor. Furthermore, any prescriptive work would be very difficult, as the underlying drivers are not clearly understood.
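A minimal reduced-form sketch, assuming the simplest case of a constant default intensity and hypothetical market inputs: the intensity lambda can be backed out from a quoted credit spread via the common "credit triangle" approximation spread = lambda x (1 - RR), and survival probabilities then follow as exp(-lambda x t):

```python
import math

def implied_intensity(spread, recovery_rate):
    # Credit-triangle approximation: spread = lambda x (1 - RR),
    # so the market-implied intensity is spread / (1 - RR).
    return spread / (1.0 - recovery_rate)

def survival_probability(intensity, t):
    # Constant-intensity (homogeneous Poisson) default model:
    # probability of no default by time t is exp(-lambda x t).
    return math.exp(-intensity * t)

# Hypothetical quote: 200 bp credit spread, 40% recovery rate
lam = implied_intensity(spread=0.02, recovery_rate=0.40)
print(round(lam, 4))                           # ~0.0333 per year
print(round(survival_probability(lam, 5), 4))  # 5-year survival probability
```

This illustrates the calibration-first nature of the approach: lambda is fitted directly to an observed market price, with no reference to the firm's asset value or capital structure.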