εₜ | εₜ₋₁ ~ N(0, a₀ + a₁εₜ₋₁²)    (10-17)

where the distribution of εₜ conditional on its value in the previous period, εₜ₋₁, is normal with mean 0 and variance a₀ + a₁εₜ₋₁². If a₁ = 0, the variance of the error in every period is just a₀. The variance is constant over time and does not depend on past errors. Now suppose that a₁ > 0. Then the variance of the error in one period depends on how large the squared error was in the previous period. If a large error occurs in one period, the variance of the error in the next period will be even larger.
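A short simulation makes this dynamic visible. The parameter values a₀ = 1.0 and a₁ = 0.6 below are hypothetical, chosen only to illustrate how a large squared error raises the next period's variance:

```python
import numpy as np

# Illustrative sketch: simulate ARCH(1) errors with assumed (hypothetical)
# parameters a0 = 1.0 and a1 = 0.6.
rng = np.random.default_rng(42)
a0, a1 = 1.0, 0.6
T = 1000

eps = np.zeros(T)
var = np.zeros(T)
var[0] = a0 / (1 - a1)                   # unconditional variance as a starting value
eps[0] = rng.normal(0.0, np.sqrt(var[0]))
for t in range(1, T):
    # conditional variance depends on the previous squared error;
    # with a1 = 0 it would be the constant a0 in every period
    var[t] = a0 + a1 * eps[t - 1] ** 2
    eps[t] = rng.normal(0.0, np.sqrt(var[t]))

print(var.min(), var.max())
```

Because every conditional variance is a₀ plus a nonnegative term, the simulated variances never fall below a₀, and they spike after large errors.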

Engle shows that we can test whether a time series is ARCH(1) by regressing the squared residuals from a previously estimated time-series model (AR, MA, or ARMA) on a constant and one lag of the squared residuals. We can estimate the linear regression equation

ε̂ₜ² = a₀ + a₁ε̂ₜ₋₁² + uₜ    (10-18)

where uₜ is an error term. If the estimate of a₁ is statistically significantly different from zero, we conclude that the time series is ARCH(1). If a time-series model has ARCH(1) errors, then the variance of the errors in period t + 1 can be predicted in period t using the formula σₜ₊₁² = a₀ + a₁ε̂ₜ².
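The test regression can be sketched directly with ordinary least squares. The residual series below is simulated homoskedastic noise purely for illustration; in practice the residuals from a previously fitted AR, MA, or ARMA model would take its place:

```python
import numpy as np

# Sketch of Engle's ARCH(1) test: regress squared residuals on a constant
# and one lag of themselves, then inspect the t-statistic on the lag term.
# `resid` is simulated homoskedastic noise here (a placeholder for real
# model residuals), so the test should usually fail to reject.
rng = np.random.default_rng(0)
resid = rng.normal(0.0, 1.0, size=500)

y = resid[1:] ** 2                                       # squared residual in period t
X = np.column_stack([np.ones(len(y)), resid[:-1] ** 2])  # constant, lagged squared residual

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta
s2 = u @ u / (len(y) - 2)                # residual variance of the test regression
cov = s2 * np.linalg.inv(X.T @ X)
t_lag = beta[1] / np.sqrt(cov[1, 1])     # t-statistic on the lag coefficient a1

print(round(float(t_lag), 4))
```

A |t-statistic| well above conventional critical values (roughly 1.96 at the 5 percent level) would lead us to conclude the series is ARCH(1).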

EXAMPLE 10-17. Testing for ARCH(1) in Monthly Inflation.

Analyst Lisette Miller wants to test whether monthly data on CPI inflation contain autoregressive conditional heteroskedasticity. She could estimate Equation 10-18 using the residuals from the time-series model. As discussed in Example 10-8, if she modeled monthly CPI inflation from 1971 to 2000, she would conclude that an AR(1) model was the best autoregressive model to use to forecast inflation out of sample. Table 10-17 shows the results of testing whether the errors in that model are ARCH(1).

TABLE 10-17 Test for ARCH(1) in an AR(1) Model

Residuals from Monthly CPI Inflation at an Annual Rate, February 1971–December 2000

Regression Statistics
R-squared           0.1376
Standard error     26.3293
Observations           359
Durbin-Watson       1.9126

             Coefficient   Standard Error   t-Statistic
Intercept         7.2958           1.5050        4.8478
Lag 1             0.3687           0.0488        7.5483

Source: U.S. Bureau of Labor Statistics.

Because the t-statistic for the coefficient on the previous period's squared residuals is greater than 7.5, Miller easily rejects the null hypothesis that the variance of the error does not depend on the variance of previous errors. Consequently, the test statistics she computed in Table 10-5 are not valid, and she should not use them in deciding her investment strategy.

Miller's conclusion that the AR(1) model for monthly inflation has ARCH in the errors may have been due to the sample period employed (1971 to 2000). In Example 10-9, she used a shorter sample period of 1985 to 2000 and concluded that monthly CPI inflation follows an AR(1) process. (These results were shown in Table 10-8.) Table 10-17 shows that the errors of a time-series model of inflation estimated over the entire sample (1971 to 2000) display ARCH. Do the errors estimated with the shorter sample period (1985 to 2000) also display ARCH? For the shorter sample period, Miller estimated an AR(1) model using monthly inflation data.37 Now she tests whether the errors display ARCH. Table 10-18 shows the results.

37 The AR(1) results are reported in Example 10-9.

TABLE 10-18 Test for ARCH(1) in an AR(1) Model

Monthly CPI Inflation at an Annual Rate, February 1985–December 2000

Regression Statistics
R-squared           0.0106
Standard error     11.2593
Observations           191
Durbin-Watson       1.9969

             Coefficient   Standard Error   t-Statistic
Intercept         5.3939           0.9224        5.8479
Lag 1             0.1028           0.0724        1.4205

Source: U.S. Bureau of Labor Statistics.

In this sample, the coefficient on the previous period's squared residual is quite small and has a t-statistic of only 1.4205. Consequently, Miller fails to reject the null hypothesis that the errors in this regression have no autoregressive conditional heteroskedasticity. This is additional evidence that the AR(1) model for 1985 to 2000 is a good fit. The error variance appears to be homoskedastic, and Miller can rely on the t-statistics. This result again confirms that a single AR process for the entire 1971–2000 period is misspecified (it does not describe the data well).

Suppose a model contains ARCH(1) errors. What are the consequences of that fact? First, if ARCH exists, the standard errors for the regression parameters will not be correct. We will need to use generalized least squares38 or other methods that correct for heteroskedasticity to estimate the standard errors of the parameters in the time-series model correctly. Second, if ARCH exists and we have modeled it, for example as ARCH(1), we can predict the variance of the errors. Suppose, for instance, that we want to predict the variance of the error in inflation using the estimated parameters from Table 10-17: σ̂ₜ₊₁² = 7.2958 + 0.3687ε̂ₜ². If the error in one period were 0 percent, the predicted variance of the error in the next period would be 7.2958 + 0.3687(0²) = 7.2958. If the error in one period were 1 percent, the predicted variance of the error in the next period would be 7.2958 + 0.3687(1²) = 7.6645.
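The two forecasts above can be reproduced directly from the Table 10-17 estimates:

```python
# One-step-ahead variance forecast using the estimated ARCH(1) parameters
# from Table 10-17: a0 = 7.2958, a1 = 0.3687.
a0, a1 = 7.2958, 0.3687

def forecast_variance(last_error: float) -> float:
    """Predicted error variance for next period given this period's error."""
    return a0 + a1 * last_error ** 2

print(forecast_variance(0.0))           # 7.2958
print(round(forecast_variance(1.0), 4)) # 7.6645
```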

Engle and other researchers have suggested many generalizations of the ARCH(1) model, including ARCH(p) and generalized autoregressive conditional heteroskedasticity (GARCH) models. In an ARCH(p) model, the variance of the error term in the current period depends linearly on the squared errors from the previous p periods: σₜ² = a₀ + a₁εₜ₋₁² + ... + aₚεₜ₋ₚ². GARCH models are similar to ARMA models of the error variance in a time series. Just like ARMA models, GARCH models can be finicky and unstable: Their results can depend greatly on the sample period and the initial guesses of

38 See Greene (2003).

the parameters in the GARCH model. Financial analysts who use GARCH models should be well aware of how delicate these models can be, and they should examine whether GARCH estimates are robust to changes in the sample and the initial guesses about the parameters.
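As a sketch of how GARCH adds an ARMA-like term to the variance, the following simulates a GARCH(1,1) recursion. The parameters a₀, a₁, and b₁ are hypothetical; actually estimating them would require maximum likelihood (for example, via a dedicated econometrics package) and, as noted, can be sensitive to the sample and to starting values:

```python
import numpy as np

# Sketch of a GARCH(1,1) variance recursion with assumed (hypothetical)
# parameters. Unlike ARCH(p), today's variance depends on yesterday's
# variance as well as yesterday's squared error.
rng = np.random.default_rng(7)
a0, a1, b1 = 0.1, 0.1, 0.8      # a1 + b1 < 1 keeps the variance stationary
T = 500

eps = np.zeros(T)
var = np.zeros(T)
var[0] = a0 / (1 - a1 - b1)     # unconditional variance as a starting value
eps[0] = rng.normal(0.0, np.sqrt(var[0]))
for t in range(1, T):
    # GARCH(1,1): lagged squared error (ARCH term) plus lagged variance
    # (the ARMA-like smoothing term that ARCH(p) lacks)
    var[t] = a0 + a1 * eps[t - 1] ** 2 + b1 * var[t - 1]
    eps[t] = rng.normal(0.0, np.sqrt(var[t]))

print(round(float(var.mean()), 2))
```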