Which metric expresses forecast error as a percentage of actual values?

Prepare for the Quantitative Business Analysis Exam 3 with interactive quizzes and comprehensive explanations. Dive into multiple choice questions that will help solidify your understanding and boost your confidence before test day!

Multiple Choice

Which metric expresses forecast error as a percentage of actual values?

Explanation:

The metric being described is the one that turns forecast errors into a percentage of what actually happened. It does this by taking the absolute difference between each actual value and its forecast, dividing by the actual value, and then averaging those percentages. In formula form, MAPE = (1/n) Σ |Actual_t − Forecast_t| / |Actual_t| × 100%. This percentage view lets you judge forecast accuracy on a consistent scale across different magnitudes, so a 5% error means the same level of accuracy whether you’re forecasting sales of hundreds or thousands.
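The averaging described above can be sketched in a few lines of Python. This is a minimal illustration, not from the source; the function and variable names are ours:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent.

    Averages |actual - forecast| / |actual| over all periods,
    then scales to a percentage.
    """
    n = len(actual)
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / n * 100

# A 5% error reads the same whether the series is in hundreds or thousands:
small_series = mape([100, 200, 300], [105, 190, 315])
large_series = mape([1000, 2000, 3000], [1050, 1900, 3150])
# both come out to 5% (up to floating-point rounding)
```

Because every error is divided by its own actual value before averaging, both calls return the same 5% figure even though the raw errors differ by a factor of ten.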

This contrasts with MAE, which reports errors in the same units as the data (not a percentage), so you can’t compare relative accuracy across series with different scales. MSE goes further by squaring errors, producing a metric in squared units and amplifying larger errors, which isn’t a direct percentage. R-squared measures how well the model explains variation in the data, not the size of forecast errors at all.
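The scale-dependence of MAE can be seen directly in a short sketch (again illustrative, with hypothetical names, assuming plain Python lists):

```python
def mae(actual, forecast):
    """Mean Absolute Error, in the same units as the data."""
    n = len(actual)
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / n

# Same relative accuracy (5% off each period), very different MAE:
small_mae = mae([100, 200, 300], [105, 190, 315])      # 10.0 units
large_mae = mae([1000, 2000, 3000], [1050, 1900, 3150])  # 100.0 units
```

Both series are off by the same 5% each period, yet the MAE of the larger series is ten times bigger, which is exactly why MAE cannot compare relative accuracy across series of different magnitudes.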

A caveat: MAPE breaks down when actual values are zero or very close to zero, because the denominator is undefined at zero and tiny actuals inflate the percentage. In such cases, alternatives like sMAPE or other error measures may be more appropriate.
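One common form of the sMAPE alternative mentioned above divides by the average magnitude of the actual and forecast rather than by the actual alone, so a zero actual no longer forces division by zero. A minimal sketch under that assumption (names are ours; note the denominator can still vanish if actual and forecast are both zero):

```python
def smape(actual, forecast):
    """Symmetric MAPE, in percent: 2|a - f| / (|a| + |f|), averaged.

    Defined even when the actual value is zero (as long as the
    forecast is not also zero in that period).
    """
    n = len(actual)
    return sum(2 * abs(a - f) / (abs(a) + abs(f))
               for a, f in zip(actual, forecast)) / n * 100

# MAPE would divide by zero on the first period; sMAPE is well defined:
result = smape([0.0, 100.0], [10.0, 100.0])  # 100.0
```

The first period (actual 0, forecast 10) contributes the maximum per-period value of 200%, the second contributes 0%, and the average is 100%.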
