Weather Data And Weather Derivatives

13.2.1 The importance of weather data for weather derivatives pricing

The history of the weather derivatives market dates back to 1996, when electricity deregulation in the USA caused the power market to begin changing from a series of local monopolies into competitive regional wholesale markets. Energy companies, realising the impact of weather on their operations, took control of their weather risk and created a new market around it. While a number of pioneering trades were made as early as 1996, the market did not really take off until after September 1999, when the Chicago Mercantile Exchange (CME) began listing and trading standard futures and options contracts on US temperature indexes. Since 2000, the weather risk market has grown significantly and become progressively more diversified across industries, even though it originally evolved from the energy sector. According to the Second Annual Weather Risk Industry survey commissioned by the Weather Risk Management Association (WRMA), the total market size for weather derivatives had increased to an impressive $11.5 billion by the end of 2001 (see PricewaterhouseCoopers (2002)).

Most weather derivatives contracts traded were temperature related (over 82%). These are "heating degree days" (HDDs) and "cooling degree days" (CDDs) contracts. A degree day is the deviation of the average daily temperature (ADT) from a predefined baseline temperature, denoted K in the following equations. HDDs and CDDs are the most common degree day measurements: K represents the temperature below which heating devices are expected to turn on (HDDs), and above which air conditioners are expected to turn on (CDDs). Since HDDs and CDDs measure heating and cooling needs relative to the base temperature, they are calculated according to the following equations:

Daily HDD = max(K − ADT, 0)
Daily CDD = max(ADT − K, 0)

with ADT = (Tmax + Tmin)/2, where Tmax and Tmin are, respectively, the maximum and minimum recorded temperatures during the day.1 The calculation of degree days for longer periods (week, season, year) is based on straightforward summation of the daily values:

HDDs = Σ Daily HDD,  CDDs = Σ Daily CDD
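These definitions can be sketched in Python as follows, using the standard US baseline of 65°F; the function names are illustrative, not part of any standard library:

```python
# Degree-day calculation from daily max/min temperatures.
# Daily HDD = max(K - ADT, 0), Daily CDD = max(ADT - K, 0),
# with ADT = (Tmax + Tmin)/2 and baseline K = 65 degrees Fahrenheit.

K = 65.0  # standard US baseline temperature (degrees Fahrenheit)

def daily_degree_days(t_max, t_min, base=K):
    """Return (HDD, CDD) for one day from its max and min temperatures."""
    adt = (t_max + t_min) / 2.0          # average daily temperature
    hdd = max(base - adt, 0.0)           # heating need below the baseline
    cdd = max(adt - base, 0.0)           # cooling need above the baseline
    return hdd, cdd

def cumulative_degree_days(days, base=K):
    """Sum daily HDDs and CDDs over a list of (t_max, t_min) pairs,
    e.g. a week, a season or a year."""
    hdd_sum = cdd_sum = 0.0
    for t_max, t_min in days:
        hdd, cdd = daily_degree_days(t_max, t_min, base)
        hdd_sum += hdd
        cdd_sum += cdd
    return hdd_sum, cdd_sum

# A cold day (ADT = 50F) yields 15 HDDs; a hot day (ADT = 80F) yields 15 CDDs.
print(daily_degree_days(60, 40))                      # (15.0, 0.0)
print(cumulative_degree_days([(60, 40), (90, 70)]))   # (15.0, 15.0)
```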

In order to demonstrate how a degree day index may be structured to moderate risk, suppose that an energy company through the past 10 years' supply and temperature

1 In the USA, the standard baseline temperature (K) is 65°F (18.3°C), but the base temperature can vary from one region to another.

analysis has determined the expected cumulative number of HDDs during the heating season. The company expects to sell a given amount of energy units according to this number and sets its budgets to this level. Its analysis also suggests that, for each HDD below the 10-year average, demand will decrease by a set number of energy units, creating a marginal loss of $55 000 versus budget. In such a case the company may purchase an HDD index derivative to achieve protection at a price of $55 000 for each HDD lower than a given strike (see Corbally and Dang (2001)).
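The payout of such a protection structure can be sketched as a simple function. The $55 000 tick is taken from the example above; the strike, realised index level and optional cap below are purely illustrative assumptions:

```python
# Hypothetical HDD floor: the buyer receives a fixed tick per HDD by which
# the realised seasonal index falls below the strike, optionally capped.

def hdd_protection_payout(realised_hdds, strike_hdds, tick=55_000.0, cap=None):
    """Payout of a long HDD floor: tick dollars per HDD below the strike."""
    payout = tick * max(strike_hdds - realised_hdds, 0.0)
    if cap is not None:
        payout = min(payout, cap)  # many weather contracts carry a maximum payout
    return payout

# A warm season with 4,850 realised HDDs against a 5,000 HDD strike
# pays 150 HDDs x $55,000 per HDD.
print(hdd_protection_payout(4_850, 5_000))   # 8250000.0

# A cold season at or above the strike pays nothing.
print(hdd_protection_payout(5_200, 5_000))   # 0.0
```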

Traditionally, financial derivatives such as options on equities, bonds, forex or commodities are priced using no-arbitrage models such as the Black-Scholes pricing model. In the case of weather derivatives, the underlying asset is a physical quantity (temperature, rain, wind or snow) rather than a traded asset. Given that the underlying weather indexes are not traded, a no-arbitrage model cannot be directly applied to price weather derivatives.

Another approach, known as Historical Burn Analysis (HBA), comes from the insurance industry. The central assumption of the method is that the historical record of weather contract payouts gives a precise illustration of the distribution of potential payouts. As noted by Henderson (2001), if weather risk is measured as the standard deviation of payouts, then the price of the contract is given by the equation:

P = D(t, T)(μ + ασ)

where D(t, T) is the discount factor from contract maturity T to the pricing time t, μ is the historical average payout, σ is the historical standard deviation of payouts and α is a positive number denoting the protection seller's risk tolerance.
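A minimal HBA pricing sketch follows directly from this formula; the historical payout figures below are invented purely for illustration:

```python
# Historical Burn Analysis: P = D(t, T) * (mu + alpha * sigma), where mu and
# sigma are the mean and standard deviation of historical contract payouts
# and alpha reflects the protection seller's risk tolerance.
from statistics import mean, stdev

def hba_price(historical_payouts, alpha, discount_factor):
    """Price a weather contract from its historical payout record."""
    mu = mean(historical_payouts)
    sigma = stdev(historical_payouts)   # sample standard deviation
    return discount_factor * (mu + alpha * sigma)

# Five illustrative historical seasonal payouts (in dollars).
payouts = [0.0, 120_000.0, 40_000.0, 0.0, 260_000.0]

# A risk-neutral seller (alpha = 0) charges just the discounted mean payout.
print(hba_price(payouts, alpha=0.0, discount_factor=1.0))   # 84000.0

# A risk-averse seller (alpha > 0) adds a risk loading on top of the mean.
print(hba_price(payouts, alpha=0.5, discount_factor=0.97))
```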

HBA therefore crucially depends on historical temperature data. Collecting historical data may be somewhat difficult and costly but, even when the data is available, it often contains errors such as missing observations or unreasonable readings. Consequently, the data must be "cleaned" - that is, the errors and omissions must be fixed - before it can be used for pricing and risk management purposes.

13.2.2 The weather databank

For this research, we use cleaned daily temperature data for the Philadelphia International (WMO: 724080) weather station (the index station) and for its "fallback station" according to the WRMA, Allentown-Bethlehem (WMO: 725170).2 These data were obtained from QuantWeather. Furthermore, to perform PCA and NNR, we use temperature data for all the neighbouring stations for which cleaned data are available. In order to create a series of consecutive daily temperature observations for Tmax (maximum daily temperature), Tmin (minimum daily temperature) and the corresponding Tavg ((Tmax + Tmin)/2) for all the available stations, the databank spans from 1 July 1960 to 31 December 1993 (i.e. 33 years and 12237 observations). Moreover, to examine the impact of seasonality, this databank was divided into two separate subsets. The first includes only the "Autumn" (September, October and November) temperature observations and the second only the November temperature observations, totalling

2 Weather stations are identified in several ways. The most common is a six-digit WMO ID assigned by the World Meteorological Organisation. WMO IDs are the primary means of identifying weather stations outside the USA, while in the USA the five-digit Weather Bureau Army Navy identifiers (WBAN IDs) are widely used.

3094 and 1020 observations respectively. Standard tests like the serial correlation LM test, ADF, Phillips-Perron, ARCH-LM and Jarque-Bera tests (not reported here in order to conserve space) showed that all the series are autocorrelated, stationary, heteroskedastic and not normally distributed.

In order to assess the imputation accuracy of the methodologies, several "holes" were created in the datasets. Arbitrarily, it is assumed that all the missing observations occurred in November 1993. We also supposed that the missing data were missing at random. The fallback methodology employs different procedures to handle missing data according to the number of consecutive missing days, with 12 or more consecutive days being an important breakpoint.3 Therefore, five different gaps were created, comprising 1, 7, 10 (<12), 20 and 30 (>12) consecutive missing days. Since HDDs and CDDs are calculated as the deviation of the average daily temperature from the standard baseline temperature, and again in order to conserve space, only the imputed values of the average temperature dataset (Tavg) are presented and compared later in this chapter.
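The gap-creation step can be sketched as follows; the short toy series below merely stands in for the November 1993 Tavg observations, and the helper function is illustrative:

```python
# Creating artificial gaps for an imputation experiment: runs of 1, 7, 10,
# 20 and 30 consecutive daily observations are removed from the series so
# that imputed values can later be compared against the true ones.

def create_gap(series, start, length):
    """Return a copy of `series` with `length` consecutive values
    replaced by None, starting at index `start`."""
    gapped = list(series)
    for i in range(start, min(start + length, len(gapped))):
        gapped[i] = None
    return gapped

# Toy stand-in for a run of daily average temperatures (degrees Fahrenheit).
tavg = [41.5, 43.0, 38.5, 40.0, 44.5, 39.0, 42.0]

print(create_gap(tavg, start=2, length=3))
# [41.5, 43.0, None, None, None, 39.0, 42.0]
```

The true values removed in each gap are retained separately, so the imputation error of each method can be measured directly against them.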

