
Category: Corporate risk analysis

  • WACC, Uncertainty and Infrastructure Regulation


    This entry is part 2 of 2 in the series The Weighted Average Cost of Capital

     

    There is a growing consensus that the successful development of infrastructure – electricity, natural gas, telecommunications, water, and transportation – depends in no small part on the adoption of appropriate public policies and the effective implementation of these policies. Central to these policies is the development of a regulatory apparatus that provides stability, protects consumers from the abuse of market power, guards consumers and operators against political opportunism, and provides incentives for service providers to operate efficiently and make the needed capital investments (Jamison & Berg, 2008, Overview).

    There are four primary approaches to regulating the overall price level – rate of return regulation (or cost of service), price cap regulation, revenue cap regulation, and benchmarking (or yardstick) regulation. Rate of return regulation adjusts overall price levels according to the operator’s accounting costs and cost of capital. In most cases, the regulator reviews the operator’s overall price level in response to a claim by the operator that the rate of return it is receiving is less than its cost of capital, or in response to a suspicion of the regulator, or claim by a consumer group, that the actual rate of return is greater than the cost of capital (Jamison & Berg, 2008, Price Level Regulation).

    We will in the following look at cost of service models (cost-based pricing); however some of the reasoning will also apply to the other approaches.  A number of different models exist:

    •    Long Run Average Total Cost – LRATC
    •    Long Run Incremental Cost – LRIC
    •    Long Run Marginal Cost – LRMC
    •    Forward Looking Long Run Average Incremental Costs – FL-LRAIC
    •    Long Run Average Interconnection Costs – LRAIC
    •    Total Element Long Run Incremental Cost – TELRIC
    •    Total Service Long Run Incremental Cost – TSLRIC
    •    Etc.

    Where:
    Long run: The period over which all factors of production, including capital, are variable.
    Long Run Incremental Costs: The incremental costs that would arise in the long run with a defined increment to demand.
    Marginal cost: The increase in the forward-looking cost of a firm caused by an increase in its output of one unit.
    Long Run Average Interconnection Costs: The term used by the European Commission to describe LRIC with the increment defined as the total service.
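    The difference between incremental and marginal cost can be made concrete with a small numeric illustration (the cost function and all figures below are hypothetical, chosen only to show the mechanics):

```python
# Hypothetical long-run cost function: fixed capital cost plus
# a rising variable cost per unit of output.
def long_run_cost(q):
    return 1000.0 + 2.0 * q + 0.01 * q ** 2

q = 500.0          # current output
increment = 100.0  # a defined increment to demand

# LRIC: the total extra cost of serving the whole increment
lric = long_run_cost(q + increment) - long_run_cost(q)

# Marginal cost: the extra cost of one more unit
marginal = long_run_cost(q + 1) - long_run_cost(q)

print(f"LRIC for the increment:   {lric:.2f}")
print(f"Average incremental cost: {lric / increment:.2f}")
print(f"Marginal cost:            {marginal:.2f}")
```

    With a convex cost function like this, the average incremental cost exceeds the marginal cost at the starting output – which is why the choice of increment matters in the methods above.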

    We will not discuss the merits and use of the individual methods, only draw attention to the fact that an essential ingredient in all of them is their treatment of capital and the calculation of the cost of capital – the Wacc.

    Calculating Wacc in a World without Uncertainty

    Calculating Wacc for the current year is a straightforward task: we know with certainty the interest rates (risk-free rate and credit risk premium) and tax rates, the budget values for debt and equity, the market premium, the company’s beta etc.

    There is however a small snag: should we use the book value of equity, or should we calculate the market value of equity and use that in the Wacc calculation? The latter approach is the recommended one (Copeland, Koller, & Murrin, 1994, pp. 248-250), but it implies a company valuation, with calculation of Wacc for every year in the forecast period. The difference between the two approaches can be large – only when book value equals market value for every year in the future will they give the same Wacc.

    In the example below the market value of equity is lower than the book value, hence the market value Wacc is lower than the book value Wacc. Since this company has a low and declining ROIC, the value of equity is decreasing, and hence also the Wacc.

    [Figure: Wacc-and-Wacc-weights]
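    A minimal sketch of the weighting calculation (all figures are hypothetical; re is the cost of equity, rd the pre-tax cost of debt):

```python
def wacc(equity, debt, re, rd, tax):
    """Weighted average cost of capital with the given equity weight."""
    total = equity + debt
    return (equity / total) * re + (debt / total) * rd * (1.0 - tax)

re, rd, tax = 0.12, 0.06, 0.28
debt = 400.0
book_equity = 600.0
market_equity = 450.0   # market value below book value, as in the example

wacc_book = wacc(book_equity, debt, re, rd, tax)
wacc_market = wacc(market_equity, debt, re, rd, tax)

# A lower equity value shifts weight towards the cheaper (after-tax)
# debt, so the market-value Wacc comes out below the book-value Wacc.
print(f"Book-value Wacc:   {wacc_book:.4f}")
print(f"Market-value Wacc: {wacc_market:.4f}")
```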

    Calculating Wacc for a specific company for a number of years into the future ((For some telecom cases, up to 50 years.)) is not a straightforward task. Wacc is no longer a single value, but a time series with values varying from year to year.

    Using the average value of Wacc can quickly lead you astray. Using an average in e.g. an LRIC model for telecommunications regulation, to determine the price paid by competitors for services provided by an operator with significant market power (incumbent), will give too low a price in the first years and too high a price in the later years when the series is decreasing – and vice versa. So the use of an average value for Wacc can either add to the incumbent’s problems or give him a windfall income.

    The same applies to the use of book value equity vs. market value equity. If the incumbent’s market value of equity is lower than its book value, the price paid by the competitors when book value Wacc is used will be too high, and the incumbent will have a windfall gain – and vice versa.

    Some advocate the use of a target capital structure (Copeland, Koller, & Murrin, 1994, p. 250) to avoid the computational difficulties (solving implicit equations) of using market value weights in the Wacc calculation. But in real life it can be very difficult to reach and maintain a fixed structure, and it does not solve the problem of market value of equity deviating from book value.
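    The implicit equation arises because the market value of equity depends on the Wacc used for discounting, which in turn depends on that market value. A minimal sketch of solving it by fixed-point iteration (a single-period firm with purely hypothetical figures; a real valuation would iterate over every forecast year):

```python
# Hypothetical one-period firm: free cash flow next year plus a known
# continuing value, discounted at the Wacc. All figures illustrative.
fcf, continuing_value = 120.0, 900.0
debt, rd, tax, re = 400.0, 0.06, 0.28, 0.12

equity = 600.0  # start the iteration from the book value of equity
for _ in range(100):
    total = equity + debt
    wacc = (equity / total) * re + (debt / total) * rd * (1 - tax)
    # Entity value at this Wacc; equity is the residual after debt.
    entity = (fcf + continuing_value) / (1 + wacc)
    new_equity = entity - debt
    if abs(new_equity - equity) < 1e-9:
        break
    equity = new_equity

print(f"Converged market value of equity: {equity:.2f}")
print(f"Consistent Wacc: {wacc:.4f}")
```

    The iteration converges quickly, and the resulting Wacc differs from the one computed with book-value weights – the point made above.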

    Calculating Wacc in a World with Uncertainty

    The future values of most, if not all, variables will in the real world be highly uncertain – in the long run even the tax rates will vary.

    The ‘long run’ aspect of the methods therefore implies an ex-ante (before the fact) treatment of a number of variables – inflation, interest and tax rates, demand, investments etc. – that have to be treated as stochastic variables.
    This is underlined by the fact that more and more central banks are presenting their forecasts of macroeconomic variables as density tables/charts (e.g. Federal Reserve Bank of Philadelphia, 2009) or as fan charts (Nakamura & Shinichiro, 2008), like the one below from the Swedish central bank (Sveriges Riksbank, 2009):

    [Figure: Riksbank_dec09]

    Fan charts like this visualise the region of uncertainty, or the possible yearly event space, for central variables. These variables will also be important exogenous variables in any corporate valuation, as value or cost drivers. Add to this all the other variables that have to be taken into account to describe the corporate operation.

    Now, for every possible outcome of any of these variables we will have a different value of the company and its equity, and hence of its Wacc. So we will not have one time series of Wacc, but a large number of different time series, all equally probable. Actually, the probability of having a single series forecasted correctly is approximately zero.

    Then there is the question of how long it is feasible to forecast macro variables without having to fall back on the unconditional mean (Galbraith & Tkacz, 2007). In the charts above the ‘content horizon’ is set to approximately 30 months; in others the horizon can be 40 months or more (Adolfson, Andersson, Linde, Villani, & Vredin, 2007).

    As is evident from the charts, the fan width increases as we lengthen the horizon. This is an effect of the forecast methods: the band of forecast uncertainty widens as we go farther and farther into the future.

    The future nominal values of GDP, costs, etc. will show even greater variation, since these values will depend on the growth rates’ paths to that point in time.

    Monte Carlo Simulation

    A possible solution to the problems discussed above is to use Monte Carlo techniques to forecast the company’s equity value distribution, coupled with market value weights calculation, to forecast the corresponding yearly Wacc distributions:

    [Figure: Wacc-2012]

    This is the approach we have implemented in our models – it will not give a single value for Wacc, but its distribution. If you need a single value, the mean or mode from the yearly distributions is better than the Wacc found by using average values of the exogenous variables – cf. Jensen’s inequality (Savage & Danziger, 2009).
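    The Jensen’s inequality point can be illustrated with a toy simulation: for a convex valuation function of an uncertain input, the value computed at the average input understates the average of the values (the function and figures below are hypothetical):

```python
import random

random.seed(1)

# Hypothetical convex function of an uncertain growth rate g,
# shaped like a growing perpetuity: value = 100 / (0.10 - g).
def value(g):
    return 100.0 / (0.10 - g)

draws = [random.gauss(0.02, 0.01) for _ in range(100_000)]
mean_of_values = sum(value(g) for g in draws) / len(draws)
value_at_mean = value(sum(draws) / len(draws))

# Jensen's inequality: E[f(X)] > f(E[X]) for a convex f, so plugging
# average inputs into the model understates the mean value.
print(f"Mean of simulated values: {mean_of_values:.1f}")
print(f"Value at mean input:      {value_at_mean:.1f}")
```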

    References

    Adolfson, A., Andersson, M.K., Linde, J., Villani, M., & Vredin, A. (2007). Modern forecasting models in action: improving macroeconomic analyses at central banks. International Journal of Central Banking, (December), 111-144.

    Copeland, T., Koller, T., & Murrin, J. (1994). Valuation. New York: Wiley.

    Copenhagen Economics. (2007, February 2). Cost of capital for broadcasting transmission. Retrieved from http://www.pts.se/upload/Documents/SE/WACCforBroadcasting.pdf

    Federal Reserve Bank of Philadelphia. (2009, November 16). Fourth quarter 2009 survey of professional forecasters. Retrieved from http://www.phil.frb.org/research-and-data/real-time-center/survey-of-professional-forecasters/2009/survq409.cfm

    Galbraith, J. W., & Tkacz, G. (2007). Forecast content and content horizons for some important macroeconomic time series. Canadian Journal of Economics, 40(3), 935-953. Available at SSRN: http://ssrn.com/abstract=1001798 or doi:10.1111/j.1365-2966.2007.00437.x

    Jamison, Mark A., & Berg, Sanford V. (2008, August 15). Annotated reading list for a body of knowledge on infrastructure regulation (Developed for the World Bank). Retrieved from http://www.regulationbodyofknowledge.org/

    Nakamura, K., & Shinichiro, N. (2008). The uncertainty of the economic outlook and central banks’ communications. Bank of Japan Review, (June 2008). Retrieved from http://www.boj.or.jp/en/type/ronbun/rev/data/rev08e01.pdf

    Savage, S. L., & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

    Sveriges Riksbank. (2009). The economic outlook and inflation prospects. Monetary Policy Report, (October), p. 7. Retrieved from http://www.riksbank.com/upload/Dokument_riksbank/Kat_publicerat/Rapporter/2009/mpr_3_09oct.pdf

  • Concession Revenue Modelling and Forecasting


    This entry is part 2 of 4 in the series Airports

     

    Concessions are an important source of revenue for all airports. An airport simulation model should therefore be able to give a good forecast of revenue from different types of concessions, given a small set of assumptions about future local price levels and income development for its international Pax. Since we already have a good forecast model for the expected number of international Pax (and its variation), we will attempt to forecast the airport’s revenue per Pax from one type of concession, and use both forecasts to estimate the airport’s revenue from that concession.

    The theory behind it is simple: the concessionaire’s sales are a function of product price and the customers’ (Pax) income level. Some other airport-specific variables also enter the equation, but they will not be discussed here. As a proxy for the change in Pax income we will use the individual countries’ change in GDP. The price movement is represented by the corresponding movements of a price index.

    We assume that changes in the trend of the airport’s revenue are a function of changes in the general income level, and that the seasonal variance is caused by seasonal changes in the passenger mix (business/leisure travel).

    It is of course impossible to forecast the exact level of revenue, but that, as we shall see, is where Monte Carlo simulation proves its worth.

    The first step is a time series analysis of the observed revenue per Pax, decomposing the series into trend and seasonal factors:

    [Figure: Concession-revenue]

    The time series fit turns out to be very good, explaining more than 90% of the series’ variation. At this point, however, our only interest is the trend movements and their relation to changes in prices, income and a few other airport-specific variables. Here we will only look at income – the most important of the variables.

    Step two is a time series analysis of income (a weighted average of GDP development in the countries supplying the majority of Pax), separating trend and seasonal factors. This trend is what we are looking for; we want to use it to explain the trend movements in the revenue.

    Step three is then a regression of the revenue trend on the income trend, as shown in the graph below. The revenue trend was estimated assuming a quadratic relation over time, and we can see that the fit is good. In fact, 98% of the variance in the revenue trend can be explained by the (positive) change in the income trend:

    [Figure: Concession-trend]

    The model will now be as follows – step four:

    1. We collect the central banks’ GDP forecasts (baseline scenario) and use them to forecast the most likely change in the income trend.
    2. More and more central banks are now producing fan charts giving the possible event space (with probabilities) for their forecasts. We use these to establish a probability distribution for our income proxy.

    Below is an example of a fan chart, taken from the Bank of England’s Inflation Report, November 2009 (Bank of England, 2009). ((The fan chart depicts the probability of various outcomes for GDP growth.  It has been conditioned on the assumption that the stock of purchased assets financed by the issuance of central bank reserves reaches £200 billion and remains there throughout the forecast period.  To the left of the first vertical dashed line, the distribution reflects the likelihood of revisions to the data over the past; to the right, it reflects uncertainty over the evolution of GDP growth in the future.  If economic circumstances identical to today’s were to prevail on 100 occasions, the MPC’s best collective judgement is that the mature estimate of GDP growth would lie within the darkest central band on only 10 of those occasions.  The fan chart is constructed so that outturns are also expected to lie within each pair of the lighter green areas on 10 occasions.  In any particular quarter of the forecast period, GDP is therefore expected to lie somewhere within the fan on 90 out of 100 occasions.  The bands widen as the time horizon is extended, indicating the increasing uncertainty about outcomes.  See the box on page 39 of the November 2007 Inflation Report for a fuller description of the fan chart and what it represents.  The second dashed line is drawn at the two-year point of the projection.))

    [Figure: Bilde1]

    3. We then use the relation between the historic revenue and income trends to forecast the revenue trend.
    4. Adding the seasonal variation, using the estimated seasonal factors, gives us a forecast of the periodic revenue.
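    Steps one to four can be sketched roughly as follows. The data are synthetic and the decomposition deliberately crude (monthly averages as seasonal factors, simple OLS); a real model would use proper time series decomposition and diagnostics:

```python
import math

# Synthetic monthly revenue-per-Pax series: a trend driven by income
# plus fixed seasonal factors (all numbers are hypothetical).
months = list(range(48))
season = [1.0 + 0.2 * math.sin(2 * math.pi * m / 12) for m in months]
income_trend = [100 * 1.02 ** (m / 12) for m in months]
revenue = [0.5 * inc * s for inc, s in zip(income_trend, season)]

mean_rev = sum(revenue) / len(revenue)

# Steps 1-2: crude seasonal factors (month average over overall
# average), then a deseasonalised revenue trend.
seasonal = [sum(revenue[m::12]) / len(revenue[m::12]) / mean_rev
            for m in range(12)]
rev_trend = [r / seasonal[m % 12] for m, r in zip(months, revenue)]

# Step 3: ordinary least squares of revenue trend on income trend.
mx = sum(income_trend) / len(income_trend)
my = sum(rev_trend) / len(rev_trend)
b = (sum((x - mx) * (y - my) for x, y in zip(income_trend, rev_trend)) /
     sum((x - mx) ** 2 for x in income_trend))
a = my - b * mx

# Step 4: forecast revenue from a forecasted income level, then
# re-apply the seasonal factor for the month in question.
income_forecast = 110.0
month = 6
forecast = (a + b * income_forecast) * seasonal[month % 12]
print(f"Forecast revenue per Pax: {forecast:.2f}")
```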

    For our historic data the result is shown in the graph below:

    [Figure: Concession-revenue-estimate]

    The calculated revenue series has a very high correlation with the observed revenue series (R = 0.95), explaining approximately 90% of the series’ variation.

    Step five: now we can forecast the revenue from the concession per Pax for the next periods (months, quarters or years), using Monte Carlo simulation:

    1. From the income proxy distribution we draw a possible change in yearly income and calculate the new trend.
    2. Using the estimated relation between the historic revenue and income trends, we forecast the most likely revenue trend and calculate the 95% confidence interval. We then use this to establish a probability distribution for the period’s trend level and draw a value. This value is adjusted with the period’s seasonal factor and becomes our forecasted value for the airport’s revenue from the concession – for this period.
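    The two steps above can be sketched as a simulation loop. The distributions and parameters here are hypothetical stand-ins for the estimated ones:

```python
import random

random.seed(7)

# Hypothetical stand-ins for the estimated quantities.
income_mean, income_sd = 0.02, 0.015  # income-change distribution
a, b = 5.0, 0.5                       # fitted trend regression coefficients
trend_se = 1.2                        # std. error of the trend forecast
seasonal_factor = 1.15                # seasonal factor for the period
income_level = 100.0                  # current income-proxy level

revenues = []
for _ in range(1000):
    # 1. Draw an income change and update the income trend.
    g = random.gauss(income_mean, income_sd)
    income = income_level * (1 + g)
    # 2. Draw a trend level around the regression forecast and
    #    apply the period's seasonal factor.
    trend = random.gauss(a + b * income, trend_se)
    revenues.append(trend * seasonal_factor)

revenues.sort()
mean_revenue = sum(revenues) / len(revenues)
print(f"Mean revenue:         {mean_revenue:.1f}")
print(f"5% / 95% percentiles: {revenues[50]:.1f} / {revenues[950]:.1f}")
```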

    Running through this a thousand times, we get a distribution as given below:

    [Figure: Concession-revenue-distribu]

    In the airport EBITDA model this is only a small but important part of forecasting future airport revenue. As the model’s data are updated (monthly), all the time series analyses and regressions are redone dynamically to capture changes in trends and seasonal factors.

    The level of monthly revenue from the concession is obviously more complex than can be described with a small set of variables and assumptions. Our model in all probability has specification errors, and we may or may not have violated some of the statistical methods’ assumptions (the model produces output to monitor this). But we feel that we are far better off than if we had put all our money on a single figure as a forecast. At least we know something about the forecast’s uncertainty.

    References

    Bank of England. (2009, November). Inflation Report November 2009 . Retrieved from http://www.bankofengland.co.uk/publications/inflationreport/ir09nov5.ppt

  • When in doubt, develop the situation


    Developing the situation is the common-sense approach to dealing with complexity. Both as a method and a mind-set, it uses time and our minds to actively build context, so that we can recognize patterns, discover options, and master the future as it unfolds in front of us (Blaber, 2008)

    In our setting, ‘developing the situation’ is the process of numerically describing (modelling) the company’s operations, taking into account input from all parts of the company: sales, procurement, production, finance etc. This again has to be put into the company’s environment: tax regimes, interest and currency rates, investors’ expected return and all other stakeholders’ expectations.

    This is a context-building process, ending up with a map of the company’s operations that gives clear roles and responsibilities to all departments, and owners to each set of input data (assumptions).

    Without including uncertainty and volatility in both assumptions and data, this is however only a two-dimensional map. Adding the always-present uncertainty gives us the third dimension, and the option of innovation:

    … discovering innovative options instead of being forced to default to the status quo. Developing the situation optimizes our potential to recognize patterns and discover innovative options because it’s synergistic with how the human mind thinks and makes decisions (Blaber, 2008)

    Having calculated the cumulative probability distributions for key variables, new information is immediately available. Shape and localization tell us about underlying uncertainty and possible outcomes. Some distributions can be tweaked and some cannot. Characteristics of production, like machine speed, error rates or the limit on air traffic movements, are given and can only be changed over time with new investments. Others, like sales, EBITDA, profit etc., can be tweaked, and in some cases even fine-tuned, by changing some of the exogenous variables or by introducing financial instruments or hedges etc.

    Planning for an uncertain future is a hard task, but preparing for it, by adapting to the uncertainties and risks uncovered, is well within our abilities – giving us:

    …  freedom of choice and flexibility to adapt to uncertainties instead of avoiding them because they weren’t part of the plan. Happenstance, nature, and human behaviour all interact within an environment to constantly alter the situation. No environment is ever static. As the environment around us changes, developing the situation allows us to maintain our most prized freedom: the freedom of choice – to adapt our thinking and decision-making accordingly (Blaber, 2008)

    Not all uncertainty represents risk of loss, but manifestations of opportunities given the right strategy, the means and will of implementation:

    … having the audacity to seize opportunities, instead of neglecting them due to risk aversion and fear of the unknown. Risk aversion and fear of the unknown are direct symptoms of a lack of context, and are the polar opposites of audacity. The way to deal with a fear of the unknown isn’t to avoid it by doing nothing … (Blaber, 2008)

    Pete Blaber’s book, originally written on a totally different theme than ours, can – like other good books on strategy and hard-earned experience from military planning – easily be adapted to our civilian purpose.

    References

    Blaber, P. (2008). The Mission, the Men, and Me. New York: Berkley.

  • Top Ten Concerns of CFO’s – May 2009


    A poll of more than 1,200 senior finance executives by CFO Europe, together with Tilburg and Duke University, ranks the top ten external and internal concerns in Europe, Asia and America (Karaian, 2009).

    [Figure: cfo_europe_top_ten1]

    High in all regions we find, as external concerns: consumer demand, interest rates, currency volatility and competition.

    For the internal concerns, the ability to forecast results, together with working capital management and balance sheet weakness, ranked highest. These are concerns that balance simulation addresses, with the purpose of calculating the effects of different strategies. Add the uncertainty of future currency and interest rates, demand and competition, and you have all the ingredients implying the necessity of a stochastic simulation model.

    The risks that have now surfaced should compel more managers to look into the risk inherent in their operations. Even if you can’t plan for an uncertain future, you can prepare for what it might bring.

    References

    Karaian, Jason (2009, May). Top Ten Concerns of CFO’s. CFO Europe, 12(1), 10-11.

  • The fallacies of Scenario analysis


    This entry is part 1 of 4 in the series The fallacies of scenario analysis

     

    Scenario analysis is often used in company valuation – with high, low and most likely scenarios to estimate the value range and expected value. A common definition seems to be:

    Scenario analysis is a process of analyzing possible future events or series of actions by considering alternative possible outcomes (scenarios). The analysis is designed to allow improved decision-making by allowing consideration of outcomes and their implications.

    Actually this definition covers at least two different types of analysis:

    1. Alternative scenario analysis: in politics or geopolitics, scenario analysis involves modelling the possible alternative paths of a social or political environment, and possibly diplomatic and war risks – “rehearsing the future”.
    2. Scenario analysis: a number of versions of the underlying mathematical problem are created to model the uncertain factors in the analysis.

    The first addresses “wicked” problems: ill-defined, ambiguous and associated with strong moral, political and professional issues. Since they are strongly stakeholder-dependent, there is often little consensus about what the problem is, let alone how to resolve it (Rittel & Webber, 1973).

    The second covers “tame” problems: problems that have well-defined and stable problem statements and belong to a class of similar problems which are all solved in the same way (Conklin, 2001). Tame does not, however, mean simple – a tame problem can be technically very complex.

    Scenario analysis in the latter sense is a compromise between computationally complex stochastic models (the S&R approach) and the overly simplistic and often unrealistic deterministic models. Each scenario is a limited representation of the uncertain elements, and one sub-problem is generated for each scenario.

    Best Case/Worst Case Scenario Analysis
    With risky assets, the actual cash flows can be very different from expectations. At a minimum, we can estimate the cash flows if everything works to perfection – a best case scenario – and if nothing does – a worst case scenario.

    In practice, each input into asset value is set to its best (or worst) possible outcome and the cash flows estimated with those values.

    Thus, when valuing a firm, the revenue growth rate, operating margin etc. are set at their highest possible levels, while interest rates etc. are set at their lowest, and then the best-case scenario value is computed.

    The question now is whether this really gives the best (or worst) value – or, if, say, the 95% (5%) percentile is chosen for each input, whether that will give the 95% (5%) percentile for the firm’s value.

    Let’s say that in the first case – (X + Y) – we want to calculate entity value by adding the ‘NPV of market value of FCF’ (X) and the ‘NPV of continuing value’ (Y). Both are stochastic variables; X is positive while Y can be positive or negative. In the second case – (X – Y) – we want to calculate the value of equity by subtracting the value of debt (Y) from the entity value (X). Both X and Y are stochastic, positive variables.

    From statistics we know that for the joint distribution of (X ± Y) the expected value E(X ± Y) is E(X) ± E(Y), and that Var(X ± Y) is Var(X) + Var(Y) ± 2Cov(X,Y). Already from the expression for the joint variance we can see that this will not necessarily be true. The expected value, however, will be the same.

    We can demonstrate this by calculating a number of percentiles for two independent normal distributions (with Cov(X,Y) = 0, to keep it simple), adding (subtracting) them, and plotting the result (red line) together with the same percentiles from the joint distribution – the blue line for (X + Y) and the green line for (X – Y).

    [Figure: joint-distrib-1]

    As we can see, the lines for X + Y only coincide at the expected value, and the deviation increases as we move out on the tails. For X – Y the deviation is even more pronounced:

    [Figure: joint-distrib-2]

    Plotting the deviation from the joint distribution as a percentage of X ± Y demonstrates very large relative deviations as we move out on the tails, and shows that the sign of the operator totally changes the direction of the deviations:

    [Figure: pct_difference]
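    The percentile deviation is easy to verify numerically. A quick check with two independent normal distributions (the parameters are hypothetical):

```python
import random

random.seed(42)
N = 200_000

x = [random.gauss(100, 20) for _ in range(N)]
y = [random.gauss(50, 10) for _ in range(N)]
xy = sorted(a + b for a, b in zip(x, y))

def pct(sorted_vals, p):
    """Empirical p-percentile of a sorted sample."""
    return sorted_vals[int(p * len(sorted_vals))]

# Adding the 5% percentiles of X and Y does NOT give the 5%
# percentile of X + Y: the quantiles only coincide at the mean.
x_s, y_s = sorted(x), sorted(y)
naive_p5 = pct(x_s, 0.05) + pct(y_s, 0.05)
joint_p5 = pct(xy, 0.05)
print(f"Sum of 5% percentiles:    {naive_p5:.1f}")
print(f"5% percentile of the sum: {joint_p5:.1f}")
```

    Because the standard deviations add in quadrature rather than linearly, the naive figure lands well below the true left-tail percentile of the sum.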

    Add to this a valuation analysis with a large number of:

    1. both correlated and auto-correlated stochastic variables,
    2. complex calculations,
    3. simultaneous equations,

    and there is no way of knowing where you are on the probability distribution – unless you do a complete Monte Carlo simulation. It is like being out in the woods at night without a map and compass – you know you are in the woods, but not where.

    Some advocate scenario analysis to measure the risk of an asset, using the difference between the best case and the worst case. Based on the above, this can only be a very bad idea, since risk in the sense of loss is connected to the left tail, where the deviation from the joint distribution can be expected to be largest. This brings us to the next post in the series.

    References

    Rittel, H., and Webber, M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, Vol. 4, pp 155-169. Elsevier Scientific Publishing Company, Inc: Amsterdam.

    Conklin, Jeff (2001). Wicked Problems. Retrieved April 28, 2009, from CogNexus Institute Web site: http://www.cognexus.org/wpf/wickedproblems.pdf

     

  • Airport Simulation


    This entry is part 1 of 4 in the series Airports

     

    The basic building block in airport simulation is the passenger (Pax) forecast. This is the basis for subsequent estimation of aircraft movements (ATM), investment in terminal buildings and airside installations, all traffic charges, tax free sales etc. In short, it is the basic determinant of the airport’s economics.

    The forecast model is usually based on a logarithmic relation between Pax, GDP and airfare price movement. ((Manual on Air Traffic Forecasting. ICAO, 2006)), ((Howard, George P. et al. Airport Economic Planning. Cambridge: MIT Press, 1974.))
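    A sketch of such a log-linear demand model, with purely hypothetical elasticities (the actual coefficients would come from estimation on the airport’s own data):

```python
import math

# Hypothetical elasticities for illustration: Pax demand is roughly
# income-elastic (b > 1) and price-inelastic (-1 < c < 0).
a, b, c = 2.0, 1.4, -0.6

def pax_forecast(gdp, airfare):
    """Log-linear demand: ln(Pax) = a + b*ln(GDP) + c*ln(airfare)."""
    return math.exp(a + b * math.log(gdp) + c * math.log(airfare))

base = pax_forecast(100.0, 100.0)
gdp_up = pax_forecast(102.0, 100.0)  # 2% GDP growth, airfare unchanged

# With income elasticity b = 1.4, a 2% rise in GDP lifts Pax
# by (1.02 ** 1.4 - 1), roughly 2.8%.
print(f"Pax growth: {gdp_up / base - 1:.3%}")
```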

    There have been a large number of studies over time and across the world on air travel demand elasticities; a good survey is given in a Canadian study ((Gillen, David W., William G. Morrison, Christopher Stewart. “Air Travel Demand Elasticities: Concepts, Issues and Measurement.” 24 Feb 2009 http://www.fin.gc.ca/consultresp/Airtravel/airtravStdy_-eng.asp)).

    In a recent project for a European airport – aimed at establishing an EBITDA model capable of simulating the risk in its economic operations – we embedded the Pax forecast models in the EBITDA model. Since the seasonal variations in traffic are very pronounced, and since the cycles are reversed for domestic and international traffic, a good forecast model should attempt to forecast the seasonal variations for the different groups of travellers.

    [Figure: int_dom-pax]

    In the following graph we have done just that, by adding seasonal factors to the forecast model based on the relation between Pax and changes in GDP and air fare cost. We have, however, accepted the fact that neither is the model specification complete, nor are the seasonal factors fixed and constant. We therefore apply Monte Carlo simulation, using estimation and forecast errors as the stochastic parts. In the figure the green lines indicate the 95% limit, the blue the mean value and the red the 5% limit. Thus, with 90% probability, the number of monthly Pax will fall within these limits.

    [Figure: pax]

    From the graph we can clearly see the effects of estimation and forecast “errors”, and the fact that it is international travel that increases most as GDP increases (the summer effect).

    As an increase in GDP at this point in time is not exactly imminent, we supply the following graph, displaying the effects of different scenarios for growth in GDP and air fare cost.

    [Figure: pax-gdp-and-price]
