**SLIDE DESCRIPTION**

Forecast Error and Performance Measures


**AUDIO TRANSCRIPTION**

So we have a number of different forecast errors and performance measures.
One is simply the forecast error at time t, written as e of t, which is
equal to the actual data value, x of t, minus the forecast at time t. So
if the forecast were larger than the data, that's going to be a negative
value; if the forecast were smaller than the data, that's going to be a
positive value. So e of t is a signed quantity: it can be either positive
or negative.

One very easily computed measure of deviation is the mean absolute
deviation, or MAD; that's equal to the sum of the absolute values of the
forecast errors divided by n, the number of forecasts you have in your
data stream. So if you made, say, 60 forecasts, five years of 12-month
years, then you divide the sum of the absolute values of the errors by
60, and that gives you the mean absolute deviation. This is very quick
for computers to do, obviously, so it's a fairly useful measure.

You could also use the mean squared error, which is the sum of the squared
errors divided by n; the cumulative forecast error, which is just the sum
of all the signed errors, so this can be a very large number; or the mean
absolute percentage error, which is, as it says, the mean of the absolute
values of the percentage errors. And there's another value that we'll see
on the printouts, which is called the tracking signal: the cumulative
forecast error divided by the mean absolute deviation. Since the cumulative
forecast error is signed and the MAD is always positive, the tracking
signal will be either positive or negative, depending on whether the
forecasts have been running high or low. So those are some of the criteria
that one could use.
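The measures described above can be sketched in a few lines of plain Python. The data and forecast values here are made-up illustrative numbers, not from the lecture:

```python
# Forecast-accuracy measures from the lecture, computed on made-up data.
actual   = [100, 110, 105, 120, 115, 130]   # x_t: observed values
forecast = [ 98, 112, 100, 118, 120, 125]   # F_t: forecast for each period
n = len(actual)

# Signed forecast error: e_t = x_t - F_t (negative when the forecast overshoots)
errors = [x - f for x, f in zip(actual, forecast)]

# Mean absolute deviation (MAD): average of |e_t|
mad = sum(abs(e) for e in errors) / n

# Mean squared error (MSE): average of e_t squared
mse = sum(e * e for e in errors) / n

# Cumulative forecast error (CFE): sum of the signed errors
cfe = sum(errors)

# Mean absolute percentage error (MAPE): average of |e_t / x_t|, in percent
mape = 100 * sum(abs(e / x) for e, x in zip(errors, actual)) / n

# Tracking signal: CFE / MAD (signed; flags sustained bias, in MAD units)
ts = cfe / mad

print(errors)                               # → [2, -2, 5, 2, -5, 5]
print(mad, mse, cfe, round(mape, 2), round(ts, 2))   # → 3.5 14.5 7 3.07 2.0
```

A tracking signal near zero means the positive and negative errors roughly cancel; a value drifting steadily away from zero suggests the forecasting method is biased in one direction.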