
The acf can now be obtained by dividing the covariances by the variance, so that

\tau_0 = \frac{\gamma_0}{\gamma_0} = 1 \qquad (8A.72)

\tau_1 = \frac{\gamma_1}{\gamma_0} = \frac{\phi_1 \sigma^2/(1-\phi_1^2)}{\sigma^2/(1-\phi_1^2)} = \phi_1 \qquad (8A.73)

\tau_2 = \frac{\gamma_2}{\gamma_0} = \frac{\phi_1^2 \sigma^2/(1-\phi_1^2)}{\sigma^2/(1-\phi_1^2)} = \phi_1^2 \qquad (8A.74)

\tau_3 = \phi_1^3 \qquad (8A.75)

The autocorrelation at lag s is given by

\tau_s = \phi_1^s \qquad (8A.76)

which means that corr(y_t, y_{t-s}) = \phi_1^s. Note that use of the Yule–Walker equations would have given the same answer.

9 Forecast evaluation

Learning outcomes
In this chapter, you will learn how to
● compute forecast evaluation tests;
● distinguish between and evaluate in-sample and out-of-sample forecasts;
● undertake comparisons of forecasts from alternative models;
● assess the gains from combining forecasts;
● run rolling forecast exercises; and
● calculate sign and direction predictions.

In previous chapters, we focused on diagnostic tests that the real estate analyst can compute to choose between alternative models. Once a model or competing models have been selected, we really want to know how accurately these models forecast. Forecast adequacy tests complement the diagnostic checking that we performed in earlier chapters and can be used as additional criteria to choose between two or more models that have satisfactory diagnostics. In addition, of course, assessing a model’s forecast performance is also of interest in itself.

Determining the forecasting accuracy of a model is an important test of its adequacy. Some econometricians would go as far as to suggest that the statistical adequacy of a model, in terms of whether it violates the CLRM assumptions or whether it contains insignificant parameters, is largely irrelevant if the model produces accurate forecasts.

This chapter presents commonly used forecast evaluation tests. The literature on forecast accuracy is large and expanding. In this chapter, we draw upon conventional forecast adequacy tests, the application of which generates useful information concerning the forecasting ability of different models.

At the outset we should point out that forecast evaluation can take place with a number of different tests. The choice of which to use depends largely on the objectives of the forecast evaluation exercise. These objectives and tasks to accomplish in the forecast evaluation process are illustrated in this chapter. In addition, we review a number of studies that undertake forecast evaluation so as to illustrate alternative aspects of and approaches to the evaluation process, all of which have practical value.

The computation of the forecast metrics we present below revolves around the forecast errors. We define the forecast error as the actual value minus the forecast value (although, in the literature, the forecast error is sometimes specified as the forecast value minus the actual value). We can categorise four influences that determine the size of the forecast error.
(1) Poor specification on the part of the model.
(2) Structural events: major events that change the nature of the relationship between the variables permanently.
(3) Inaccurate inputs to the model.
(4) Random events: unpredictable circumstances that are short-lived.

The forecast evaluation analysis in this chapter aims to expose poor model specification that is reflected in the forecast error. We neutralise the impact of inaccurate inputs on the forecast error by assuming perfect information about the future values of the inputs. Our analysis is still subject to structural impacts and random events on the forecast error, however.
Unfortunately, there is not much that can be done – at least, not quantitatively – when these occur out of the sample.

9.1 Forecast tests

An object of crucial importance in measuring forecast accuracy is the loss function, defined as L(A_{t+n}, F_{t+n,t}) or L(e_{t+n,t}), where A is the realisations (actual values), F is the forecast series, e_{t+n,t} is the forecast error A_{t+n} − F_{t+n,t} and n is the forecast horizon. A_{t+n} is the realisation at time t + n and F_{t+n,t} is the forecast for time t + n made at time t (n periods beforehand). The loss function charts the ‘loss’ or ‘cost’ associated with the forecasts and realisations (see Diebold and Lopez, 1996). Loss functions differ, as they depend on the situation at hand (see Diebold, 1993). The loss function of the forecast by a government agency will differ from that of a company forecasting the economy or forecasting real estate. A forecaster may be interested in volatility or mean accuracy or the contribution of alternative models to more accurate forecasting. Thus the appropriate accuracy measure arises from the loss function that best describes the utility of the forecast user regarding the forecast error.

In the literature on forecasting, several measures have been proposed to describe the loss function. These measures of forecast quality can be grouped into a number of categories, including forecast bias, sign predictability, forecast accuracy with emphasis on large errors, forecast efficiency and encompassing. The evaluation of the forecast performance on these measures takes place through the computation of the appropriate statistics.

The question frequently arises as to whether there is systematic bias in a forecast. It is obviously a desirable property that the forecast is not biased. The null hypothesis is that the model produces forecasts that lead to errors with a zero mean. A t-test can be calculated to determine whether there is a statistically significant negative or positive bias in the forecasts. For simplicity of exposition, letting the subscript i now denote each observation for which the forecast has been made and the error calculated, the mean error ME or mean forecast error MFE is defined as

\text{ME} = \frac{1}{n}\sum_{i=1}^{n} e_i \qquad (9.1)

where n is the number of periods that the model forecasts.

Another conventional error measure is the mean absolute error MAE, which is the average of the differences between the actual and forecast values in absolute terms, and it is also sometimes termed the mean absolute forecast error MAFE. Thus an error of −2 per cent or +2 per cent will have the same impact on the MAE of 2 per cent. The MAE formula is

\text{MAE} = \frac{1}{n}\sum_{i=1}^{n} |e_i| \qquad (9.2)

Since both ME and MAE are scale-dependent measures (i.e. they vary with the scale of the variable being forecast), a variant often reported is the mean absolute percentage error MAPE:

\text{MAPE} = \frac{100}{n}\sum_{i=1}^{n} \left| \frac{A_i - F_i}{A_i} \right| \qquad (9.3)

The mean absolute error and the mean absolute percentage error both use absolute values of the forecast errors, which prevent positive and negative errors from cancelling each other out. The above measures are used to assess how closely individual predictions track their corresponding real data figures. In practice, when the series under investigation is already expressed in percentage terms, the MAE criterion is sufficient. Therefore, if we forecast rent growth (expressed as a percentage), MAE is used. If we forecast the actual rent or a rent index, however, MAPE facilitates forecast comparisons.
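To make the calculation of these metrics concrete, the minimal sketch below computes the ME, MAE and MAPE of equations (9.1)–(9.3) for a pair of actual and forecast series. The function names and the data values are our own hypothetical illustrations, not taken from the chapter.

```python
import numpy as np

def forecast_errors(actual, forecast):
    """Forecast error as defined in the text: actual minus forecast."""
    return np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)

def mean_error(actual, forecast):
    """ME (9.1): average signed error; detects systematic bias."""
    return forecast_errors(actual, forecast).mean()

def mean_absolute_error(actual, forecast):
    """MAE (9.2): average of |errors|, so positive and negative errors do not cancel."""
    return np.abs(forecast_errors(actual, forecast)).mean()

def mean_absolute_percentage_error(actual, forecast):
    """MAPE (9.3): average of |(A - F)/A|, expressed in per cent."""
    a = np.asarray(actual, dtype=float)
    e = forecast_errors(actual, forecast)
    return 100.0 * np.abs(e / a).mean()

# Hypothetical rent-index values and one-step-ahead forecasts
actual   = [100.0, 102.5, 104.0, 101.8, 103.6]
forecast = [ 99.2, 103.1, 103.0, 102.9, 104.4]

print(f"ME   = {mean_error(actual, forecast):+.3f}")
print(f"MAE  = {mean_absolute_error(actual, forecast):.3f}")
print(f"MAPE = {mean_absolute_percentage_error(actual, forecast):.3f}%")
```

Because the illustrative series is an index rather than a percentage growth rate, the MAPE is the more natural summary here, in line with the discussion above.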
Another set of tests commonly used in forecast comparisons builds on the variance of the forecast errors. An important statistic from which other metrics are computed is the mean squared error MSE or, equivalently, the mean squared forecast error MSFE:

\text{MSE} = \frac{1}{n}\sum_{i=1}^{n} e_i^2 \qquad (9.4)

MSE will have units of the square of the data – i.e. of A_t^2. In order to produce a statistic that is measured on the same scale as the data, the root mean squared error RMSE is proposed:

\text{RMSE} = \sqrt{\text{MSE}} \qquad (9.5)

The MSE and RMSE measures have been popular methods to aggregate the deviations of the forecasts from their actual trajectory. The smaller the values of the MSE and RMSE, the more accurate the forecasts. Due to its similar scale with the dependent variable, the RMSE of a forecast can be compared to the standard error of the model. An RMSE higher than, say, twice the standard error does not suggest a good set of forecasts. The RMSE and MSE are useful when comparing different methods applied to the same set of data, but they should not be used when comparing data sets that have different scales (see Chatfield, 1988, and Collopy and Armstrong, 1992).

The MSE and RMSE impose a greater penalty for large errors. The RMSE is a better performance criterion than measures such as MAE and MAPE when the variable of interest undergoes fluctuations and turning points. If the forecast misses these large changes, the RMSE will disproportionately penalise the larger errors. If the variable follows a steadier path, then other measures such as the mean absolute error may be preferred. It follows that the RMSE heavily penalises forecasts with a few large errors relative to forecasts with a large number of small errors. This is important for samples of the small size that we often encounter in real estate. A few large errors will produce higher RMSE and MSE statistics and may lead to the conclusion that the model is less fit for forecasting. Since these measures are sensitive to outliers, some authors (such as Armstrong, 2001) have recommended caution in their use for forecast accuracy evaluation.
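A second sketch, again with hypothetical error series and our own helper names, computes the MSE and RMSE of equations (9.4) and (9.5) and illustrates the point about large errors: two error series with the same mean absolute error can receive very different RMSE penalties.

```python
import numpy as np

def mse(errors):
    """MSE (9.4): mean of the squared forecast errors."""
    e = np.asarray(errors, dtype=float)
    return np.mean(e ** 2)

def rmse(errors):
    """RMSE (9.5): square root of the MSE, on the same scale as the data."""
    return np.sqrt(mse(errors))

# Two hypothetical sets of forecast errors with the same mean absolute error (1.0):
# 'steady' misses by a little every period; 'spiky' is mostly accurate but misses
# one turning point badly.
steady = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
spiky  = np.array([0.2, -0.2, 0.2, -0.2, 0.2,  5.0])

for name, e in [("steady", steady), ("spiky", spiky)]:
    print(f"{name}: MAE = {np.mean(np.abs(e)):.2f}, "
          f"MSE = {mse(e):.2f}, RMSE = {rmse(e):.2f}")
# The spiky series is penalised far more heavily by the MSE/RMSE,
# even though its average absolute miss is the same as the steady series.
```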