Fan Charts – Strategy @ Risk

Tag: Fan Charts

    Forecasting sales and forecasting uncertainty

    This entry is part 1 of 4 in the series Predictive Analytics

     

    Introduction

    There is a wide range of methods used for forecasting, ranging from judgmental methods (e.g. expert forecasting) through expert systems and time series to causal methods (e.g. regression analysis).

    Most are used to give single-point forecasts, or at most single-point forecasts for a limited number of scenarios. In the following we will take a look at why such single-point forecasts are of little use.

    As an example we will use a simple forecast ‘model’ for the net sales of a large multinational company. It turns out that there is a good linear relation between the company’s yearly net sales in million euro and the growth rate (%) in world GDP:

    net sales = 1638M + 53M × (world GDP growth rate in %)

    with a correlation coefficient R = 0.995. The relation thus accounts for almost 99% of the variation in the sales data. The observed data is given as green dots in the graph below, and the regression as the green line. The ‘model’ explains expected sales as a constant 1638M, plus or minus 53M per percentage point increase or decrease in world GDP growth:

    The International Monetary Fund (IMF), which kindly provided the historical GDP growth rates, also gives forecasts of expected future world GDP growth (WEO, April 2012) for the next five years. When we put these forecasts into the ‘model’ we end up with forecasts of net sales for 2012 to 2016, depicted by the yellow dots in the graph above.
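    A relation like this takes only a few lines to estimate and apply. The sketch below uses hypothetical observations standing in for the company’s actual data (which are not reproduced here); only the WEO baseline growth rates of 3.5% for 2012 and 4.1% for 2013 come from the text:

    ```python
    import numpy as np

    # Hypothetical (world GDP growth %, net sales M euro) observations standing
    # in for the company's actual data, which are not reproduced in the text.
    gdp_growth = np.array([5.4, 4.9, 2.8, -0.6, 5.3, 3.9, 3.2, 4.0, 4.6])
    rng = np.random.default_rng(0)
    net_sales = 1638 + 53 * gdp_growth + rng.normal(0, 10, gdp_growth.size)

    # Ordinary least-squares fit of the linear 'model'.
    slope, intercept = np.polyfit(gdp_growth, net_sales, 1)
    r = np.corrcoef(gdp_growth, net_sales)[0, 1]
    print(f"net sales ≈ {intercept:.0f} + {slope:.0f} × growth, R = {r:.3f}")

    # Point forecasts from the WEO baseline growth rates.
    for year, g in [(2012, 3.5), (2013, 4.1)]:
        print(year, round(intercept + slope * g))
    ```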

    So mission accomplished!  …  Or is it really?

    We know that the probability of getting a single-point forecast exactly right is zero, even assuming that the forecast of the GDP growth rate is correct – so the forecasts we have so far will certainly be wrong. But how wrong?

    “Some even persist in using forecasts that are manifestly unreliable, an attitude encountered by the future Nobel laureate Kenneth Arrow when he was a young statistician during the Second World War. When Arrow discovered that month-long weather forecasts used by the army were worthless, he warned his superiors against using them. He was rebuffed. “The Commanding General is well aware the forecasts are no good,” he was told. “However, he needs them for planning purposes.” (Gardner & Tetlock, 2011)

    Maybe we should take a closer look at possible forecast errors, input data and the final forecast.

    The prediction band

    Given the regression we can calculate a forecast band for future observations of sales, given forecasts of the future GDP growth rate. That is the region where, with a certain probability, we expect new values of net sales to fall. In the graph below the green area gives the 95% forecast band:

    Since the variance of the predictions increases the further a new forecast of the GDP growth rate lies from the mean of the sample values (used to compute the regression), the band widens as we move to either side of this mean. The band also widens with decreasing correlation (R) and decreasing sample size (the number of observations the regression is based on).
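    In the standard simple-regression setup this band has a closed form. Assuming normally distributed residuals, a 95% prediction interval for a new observation of net sales at a GDP growth rate $x_0$ is:

    $$\hat{y}_0 \pm t_{0.025,\,n-2}\; s \sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}}$$

    where $\hat{y}_0$ is the regression forecast, $s$ the residual standard error, $n$ the sample size and $\bar{x}$ the sample mean of the growth rates – making explicit why the band widens away from $\bar{x}$ and narrows with larger samples.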

    So even if the fit to the data is good, our regression is based on a very small sample, leaving plenty of room for prediction errors. In fact a 95% prediction interval for 2012, with an expected GDP growth rate of 3.5%, is net sales of 1824M plus/minus 82M. Even so, the interval is still only approx. 9% of the expected value.

    Now we have shown that the model gives good forecasts, calculated the prediction interval(s) and shown that the expected relative error(s) with high probability will be small!

    So the mission is finally accomplished!  …  Or is it really?

    The forecasts we have made are based on forecasts of future world GDP growth rates – but how certain are those?

    The GDP forecasts

    Forecasting future GDP growth for any single country is difficult at best, and much more so for the world as a whole. The IMF has therefore supplemented the baseline forecasts with a fan chart ((The Inflation Report Projections: Understanding the Fan Chart, by Erik Britton, Paul Fisher and John Whitley, BoE Quarterly Bulletin, February 1998, pages 30-37.)) picturing the uncertainty in its estimates.

    This fan chart ((Figure 1.12 from World Economic Outlook (April 2012), International Monetary Fund, ISBN 9781616352462)) shows, as blue colored bands, the uncertainty around the WEO baseline forecast with 50, 70, and 90 percent confidence intervals ((As shown, the 70 percent confidence interval includes the 50 percent interval, and the 90 percent confidence interval includes the 50 and 70 percent intervals. See Appendix 1.2 in the April 2009 World Economic Outlook for details.)):

    There is also another band on the chart, implied but unseen, indicating a 10% chance of something “unpredictable”. The fan chart thus covers only 90% of the IMF’s estimates of future probable growth rates.

    The table below shows the actual figures for the forecasted GDP growth (%) and the limits of the confidence intervals:

                 Lower                Baseline           Upper
              90%   70%   50%                    50%   70%   90%
    2012      2.5   2.9   3.1          3.5       3.8   4.0   4.3
    2013      2.1   2.8   3.3          4.1       4.8   5.2   5.9

    The IMF comments on the figures as follows:

    “Risks around the WEO projections have diminished, consistent with market indicators, but they remain large and tilted to the downside. The various indicators do not point in a consistent direction. Inflation and oil price indicators suggest downside risks to growth. The term spread and S&P 500 options prices, however, point to upside risks.”

    Our approximation of the distribution that could have produced the 2012 fan chart in the World Economic Outlook for April 2012 is shown below:

    This distribution has: mean 3.43%, standard deviation 0.54, minimum 1.22 and maximum 4.70 – it is skewed with a left tail. The distribution thus also encompasses the implied but unseen band in the chart.
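    One way to construct such an approximation – a sketch only, since the IMF does not publish the underlying distribution; the skew-normal family and the shape parameter below are our assumptions – is to match a skew-normal to the stated mean and standard deviation:

    ```python
    import numpy as np
    from scipy.stats import skewnorm

    a = -4.0                             # assumed shape parameter (left skew)
    target_mean, target_sd = 3.43, 0.54  # moments read off our approximation

    # Solve for loc/scale so the skew-normal matches the target moments.
    delta = a / np.sqrt(1 + a**2)
    scale = target_sd / np.sqrt(1 - 2 * delta**2 / np.pi)
    loc = target_mean - scale * delta * np.sqrt(2 / np.pi)

    growth = skewnorm.rvs(a, loc=loc, scale=scale, size=100_000, random_state=1)
    print(growth.mean(), growth.std())   # ≈ 3.43, ≈ 0.54
    ```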

    Now we are ready for serious forecasting!

    The final sales forecasts

    By employing the same technique that we used to calculate the forecast band, we can by Monte Carlo simulation compute the 2012 distribution of net sales forecasts, given the distribution of GDP growth rates and the expected variance of the differences between regression forecasts and new observations. The figure below describes the forecast process:

    We are, however, not only using the 90% interval for the GDP growth rate or the 95% forecast band, but the full range of both distributions. The final forecasts of net sales are given as a histogram in the graph below:

    This distribution of forecasted net sales has: mean sales of 1820M, standard deviation 81M, minimum sales 1590M and maximum sales 2055M – and it is slightly skewed with a left tail.
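    A minimal sketch of the whole simulation, reusing the skew-normal growth distribution from above. The regression coefficients come from the ‘model’; the residual standard deviation of ~75M is our assumption, backed out from the quoted total standard deviation of 81M, not a figure given in the text:

    ```python
    import numpy as np
    from scipy.stats import skewnorm

    rng = np.random.default_rng(7)
    n_trials = 100_000

    # GDP growth draws from the skew-normal approximation derived above.
    a, loc, scale = -4.0, 4.09, 0.85   # as computed in the previous sketch
    growth = skewnorm.rvs(a, loc=loc, scale=scale, size=n_trials, random_state=7)

    # Regression 'model' plus a draw of the prediction error. The ~75M residual
    # standard deviation is an assumption backed out from the quoted 81M total.
    sales = 1638 + 53 * growth + rng.normal(0, 75, n_trials)

    # Read off the quantiles behind the statements in the text.
    print(np.percentile(sales, [5, 20, 80, 95]))
    ```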

    So what added information have we got from the added effort?

    Well, we now know that there is only a 20% probability that net sales will be lower than 1755M, and likewise only a 20% probability that they will be above 1890M. The interval from 1755M to 1890M will then with 60% probability contain the actual net sales in 2012 – see the graph below giving the cumulative sales distribution:

    We also know that with 90% probability actual net sales in 2012 will lie between 1720M and 1955M. Most important, however, is that we have visualized the uncertainty in the sales forecasts, and that contingency planning for both low and high sales should be performed.

    An uncertain past

    The Bank of England’s fan chart from 2008 showed a wide range of possible futures, but it also showed the uncertainty about where we were at the time – note that the black line showing National Statistics data for the past has probability bands around it:

    This indicates that the values for past GDP growth rates are uncertain (stochastic) or contain measurement errors. This of course also holds for the IMF’s historic growth rates, but the IMF does not supply this type of information.

    If the growth rates can be considered stochastic, the results above will still hold, provided the conditional distribution for net sales given the GDP growth rate still fulfills the standard assumptions for using regression methods. If not, other methods of estimation must be considered.

    Black Swans

    But all this uncertainty was still not enough to contain what was to become reality – shown by the red line in the graph above.

    How wrong can we be? Often more wrong than we like to think. This is good – as in useful – to know.

    “As Donald Rumsfeld once said: it’s not only what we don’t know – the known unknowns – it’s what we don’t know we don’t know.”

    While statistical methods may lead us to a reasonable understanding of some phenomenon, that does not always translate into an accurate practical prediction capability. When that is the case, we find ourselves talking about risk – the likelihood that some unfavorable or favorable event will take place. Risk assessment is then called for, and we are left only with probabilities.

    A final word

    Sales forecast models are an integrated part of our enterprise simulation models – as part of the models’ predictive analytics. Predictive analytics can be described as statistical modeling enabling the prediction of future events or results ((In this case the probability distribution of future net sales.)), using present and past information and data.

    In today’s fast-moving and highly uncertain markets, forecasting has become the single most important element of the management process. The ability to quickly and accurately detect changes in key external and internal variables, and to adjust tactics accordingly, can make all the difference between success and failure:

    1. Forecasts must integrate both external and internal drivers of business and the financial results.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
    5. Never rely on single-point or scenario forecasting.

    The forecasts are usually done in three stages: first by forecasting the market for the particular product(s), then the firm’s market share(s), ending up with a sales forecast. If the firm has activities in different geographic markets, the exercise has to be repeated in each market, having in mind the correlation between markets (a minimal sketch follows the list below):

    1. All uncertainty about the different market sizes, market shares and their correlation will finally end up contributing to the uncertainty in the forecast for the firm’s total sales.
    2. This uncertainty combined with the uncertainty from other forecasted variables like interest rates, exchange rates, taxes etc. will eventually be manifested in the probability distribution for the firm’s equity value.
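    A sketch of how such correlated market uncertainty can propagate to total sales – the market sizes, shares and correlation below are all hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_trials = 100_000

    # Hypothetical expected market sizes (M euro) and a positive correlation
    # between the two geographic markets.
    mean_sizes = np.array([1000.0, 600.0])
    sd_sizes = np.array([120.0, 90.0])
    corr = 0.6
    cov = np.outer(sd_sizes, sd_sizes) * np.array([[1, corr], [corr, 1]])
    sizes = rng.multivariate_normal(mean_sizes, cov, size=n_trials)

    # Hypothetical, independently uncertain market shares.
    shares = rng.normal([0.20, 0.15], [0.02, 0.02], size=(n_trials, 2))

    total_sales = (sizes * shares).sum(axis=1)
    print(total_sales.mean(), total_sales.std())  # uncertainty in total sales
    ```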

    The ‘model’ we have used in this example has never been tested out of sample. Its usefulness as a forecast model is therefore still debatable.

    References

    Gardner, D., & Tetlock, P. (2011). Overcoming our aversion to acknowledging our ignorance. Retrieved from http://www.cato-unbound.org/2011/07/11/dan-gardner-and-philip-tetlock/overcoming-our-aversion-to-acknowledging-our-ignorance/

    World Economic Outlook Database, April 2012 Edition; http://www.imf.org/external/pubs/ft/weo/2012/01/weodata/index.aspx


    WACC, Uncertainty and Infrastructure Regulation

    This entry is part 2 of 2 in the series The Weighted Average Cost of Capital

     

    There is a growing consensus that the successful development of infrastructure – electricity, natural gas, telecommunications, water, and transportation – depends in no small part on the adoption of appropriate public policies and the effective implementation of these policies. Central to these policies is the development of a regulatory apparatus that provides stability, protects consumers from the abuse of market power, guards consumers and operators against political opportunism, and provides incentives for service providers to operate efficiently and make the needed capital investments (Jamison & Berg, 2008, Overview).

    There are four primary approaches to regulating the overall price level – rate of return regulation (or cost of service), price cap regulation, revenue cap regulation, and benchmarking (or yardstick) regulation. Rate of return regulation adjusts overall price levels according to the operator’s accounting costs and cost of capital. In most cases, the regulator reviews the operator’s overall price level in response to a claim by the operator that the rate of return it is receiving is less than its cost of capital, or in response to a suspicion of the regulator or a claim by a consumer group that the actual rate of return is greater than the cost of capital (Jamison & Berg, 2008, Price Level Regulation).

    We will in the following look at cost-of-service models (cost-based pricing); however, some of the reasoning will also apply to the other approaches. A number of different models exist:

    •    Long Run Average Total Cost – LRATC
    •    Long Run Incremental Cost – LRIC
    •    Long Run Marginal Cost – LRMC
    •    Forward Looking Long Run Average Incremental Costs – FL-LRAIC
    •    Long Run Average Interconnection Costs – LRAIC
    •    Total Element Long Run Incremental Cost – TELRIC
    •    Total Service Long Run Incremental Cost – TSLRIC
    •    Etc.

    Where:
    Long run: The period over which all factors of production, including capital, are variable.
    Long Run Incremental Costs: The incremental costs that would arise in the long run with a defined increment to demand.
    Marginal cost: The increase in the forward-looking cost of a firm caused by an increase in its output of one unit.
    Long Run Average Interconnection Costs: The term used by the European Commission to describe LRIC with the increment defined as the total service.

    We will not discuss the merits and use of the individual methods, but only direct attention to the fact that an essential ingredient in all of them is their treatment of capital and the calculation of the cost of capital – Wacc.

    Calculating Wacc in a World without Uncertainty

    Calculating Wacc for the current year is a straightforward task: we know for certain the interest rates (risk-free rate and credit risk premium) and tax rates, the budget values for debt and equity, the market premium and the company’s beta, etc.
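    For reference, the standard textbook formulation with market-value weights, where $E$ and $D$ are the market values of equity and debt, $r_E$ and $r_D$ the required returns on each, and $\tau$ the tax rate:

    $$\text{Wacc} = \frac{E}{E+D}\,r_E + \frac{D}{E+D}\,r_D\,(1-\tau)$$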

    There is however a small snag: should we use the book value of equity, or should we calculate the market value of equity and use that in the Wacc calculations? The latter approach is the recommended one (Copeland, Koller, & Murrin, 1994, pp. 248-250), but it implies a company valuation, with calculation of Wacc, for every year in the forecast period. The difference between the two approaches can be large – only when book value equals market value for every year in the future will they give the same Wacc.

    In the example below the market value of equity is lower than the book value, hence the market value Wacc is lower than the book value Wacc. Since this company has a low and declining ROIC, the value of equity is decreasing and hence so is the Wacc.

    [Figure: Wacc and Wacc weights]

    Calculating Wacc for a specific company for a number of years into the future ((For some telecom cases, up to 50 years.)) is not a straightforward task. Wacc is no longer a single value, but a time series with values varying from year to year.

    Using the average value of Wacc can quickly lead you astray. Using an average in e.g. an LRIC model for telecommunications regulation, to determine the price paid by competitors for services provided by an operator with significant market power (the incumbent), will – when the series is decreasing – give a too low price in the first years and a too high price in the later years, and vice versa. So the use of an average value for Wacc can either add to the incumbent’s problems or give him a windfall income.

    The same applies to the use of book value equity vs. market value equity. If the incumbent’s market value of equity is lower than its book value, the price paid by the competitors when book value Wacc is used will be too high and the incumbent will have a windfall gain, and vice versa.

    Some advocate the use of a target capital structure (Copeland, Koller, & Murrin, 1994, p. 250) to avoid the computational difficulties (solving implicit equations) of using market value weights in the Wacc calculation. But in real life it can be very difficult to reach and maintain a fixed structure. And it does not solve the problem of market value of equity deviating from book value.
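    The implicit equations are easy to illustrate: with market-value weights, the equity value depends on Wacc while Wacc depends on the equity value. A minimal sketch – a single perpetuity valuation with made-up numbers, not our full model – solves the circularity by fixed-point iteration:

    ```python
    def wacc(E, D, r_e=0.10, r_d=0.05, tax=0.25):
        """Wacc with market-value weights (all rates hypothetical)."""
        V = E + D
        return E / V * r_e + D / V * r_d * (1 - tax)

    D = 400.0            # market value of debt (assumed constant)
    fcf, g = 80.0, 0.02  # free cash flow and perpetual growth (made-up)

    E = 600.0            # initial guess, e.g. book value of equity
    for _ in range(200):
        V = fcf / (wacc(E, D) - g)  # perpetuity value of the firm
        E_new = V - D               # implied market value of equity
        if abs(E_new - E) < 1e-9:
            break
        E = E_new

    print(f"equity ≈ {E:.1f}, Wacc ≈ {wacc(E, D):.4f}")
    ```

    The iteration typically converges in a handful of steps; a full valuation model has to solve the same circularity for every year in the forecast period.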

    Calculating Wacc in a World with Uncertainty

    The future values of most, if not all, variables will in the real world be highly uncertain – in the long run even the tax rates will vary.

    The ‘long run’ aspect of the methods therefore implies an ex-ante (before the fact) treatment of a number of variables – inflation, interest and tax rates, demand, investments etc. – that have to be treated as stochastic variables.
    This is underlined by the fact that more and more central banks are presenting their forecasts of macroeconomic variables as density tables/charts (e.g. Federal Reserve Bank of Philadelphia, 2009) or as fan charts (Nakamura & Shinichiro, 2008), like the one below from the Swedish central bank (Sveriges Riksbank, 2009):

    [Figure: Sveriges Riksbank fan charts, December 2009]

    Fan charts like this visualise the region of uncertainty, or the possible yearly event space, for central variables. These variables will also be important exogenous variables in any corporate valuation, as value or cost drivers. Add to this all the other variables that have to be taken into account to describe the corporate operation.

    Now, for every possible outcome of any of these variables we will have a different value of the company and its equity, and hence a different Wacc. So we will not have one time series of Wacc, but a large number of different time series, all equally probable. Actually, the probability of having forecast any single series correctly is approximately zero.

    Then there is the question of how far ahead it is feasible to forecast macro variables without having to fall back on the unconditional mean (Galbraith & Tkacz, 2007). In the charts above the ‘content horizon’ is set to approximately 30 months; in other studies the horizon can be 40 months or more (Adolfson, Andersson, Linde, Villani, & Vredin, 2007).

    As is evident from the charts, the fan width increases as we lengthen the horizon: the band of forecast uncertainty widens the farther we go into the future.

    The future nominal values of GDP, costs, etc. will show even greater variation, since these values depend on the growth rates’ paths to that point in time.

    Monte Carlo Simulation

    A possible solution to the problems discussed above is to use Monte Carlo techniques to forecast the distribution of the company’s equity value – coupled with market value weights – and thereby the corresponding yearly Wacc distributions:

    [Figure: Wacc distribution, 2012]

    This is the approach we have implemented in our models – it will not give a single value for Wacc but its distribution. If you need a single value, the mean or mode of the yearly distributions is better than the Wacc found by using average values of the exogenous variables – cf. Jensen’s inequality (Savage & Danziger, 2009).
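    A sketch of the idea, combining the fixed-point valuation above with random draws of the exogenous variables; the distributions below are hypothetical stand-ins for a full enterprise model. The last line also shows Jensen’s inequality at work – Wacc at the mean inputs differs from the mean of the Wacc distribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_trials = 10_000

    def wacc(E, D, r_e, r_d=0.05, tax=0.25):
        V = E + D
        return E / V * r_e + D / V * r_d * (1 - tax)

    def equity_value(fcf, r_e, D=400.0, g=0.02):
        """Fixed-point solve of the Wacc/equity circularity (perpetuity model)."""
        E = 600.0
        for _ in range(200):
            E_new = fcf / (wacc(E, D, r_e) - g) - D
            if abs(E_new - E) < 1e-9:
                break
            E = E_new
        return E

    # Hypothetical distributions for the exogenous variables.
    fcf = rng.normal(80.0, 10.0, n_trials)   # free cash flow
    r_e = rng.normal(0.10, 0.01, n_trials)   # required return on equity

    waccs = np.array([wacc(equity_value(f, r), 400.0, r)
                      for f, r in zip(fcf, r_e)])

    # Jensen's inequality: mean of the distribution vs. Wacc at mean inputs.
    print(waccs.mean(), wacc(equity_value(80.0, 0.10), 400.0, 0.10))
    ```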

    References

    Adolfson, A., Andersson, M.K., Linde, J., Villani, M., & Vredin, A. (2007). Modern forecasting models in action: improving macroeconomic analyses at central banks. International Journal of Central Banking, (December), 111-144.

    Copeland, T., Koller, T., & Murrin, J. (1994). Valuation. New York: Wiley.

    Copenhagen Economics. (2007, February 2). Cost of capital for broadcasting transmission. Retrieved from http://www.pts.se/upload/Documents/SE/WACCforBroadcasting.pdf

    Federal Reserve Bank of Philadelphia. (2009, November 16). Fourth quarter 2009 survey of professional forecasters. Retrieved from http://www.phil.frb.org/research-and-data/real-time-center/survey-of-professional-forecasters/2009/survq409.cfm

    Galbraith, J. W., & Tkacz, G. (2007). Forecast content and content horizons for some important macroeconomic time series. Canadian Journal of Economics, 40(3), 935-953. Available at SSRN: http://ssrn.com/abstract=1001798 or doi:10.1111/j.1365-2966.2007.00437.x

    Jamison, M. A., & Berg, S. V. (2008, August 15). Annotated reading list for a body of knowledge on infrastructure regulation (developed for the World Bank). Retrieved from http://www.regulationbodyofknowledge.org/

    Nakamura, K., & Shinichiro, N. (2008). The uncertainty of the economic outlook and central banks’ communications. Bank of Japan Review, (June 2008). Retrieved from http://www.boj.or.jp/en/type/ronbun/rev/data/rev08e01.pdf

    Savage, S. L., & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

    Sveriges Riksbank. (2009). The economic outlook and inflation prospects. Monetary Policy Report, (October), p. 7. Retrieved from http://www.riksbank.com/upload/Dokument_riksbank/Kat_publicerat/Rapporter/2009/mpr_3_09oct.pdf