Scenario analysis – Strategy @ Risk

Tag: Scenario analysis

  • Planning under Uncertainty

    Planning under Uncertainty

    This entry is part 3 of 6 in the series Balance simulation


    ‘Would you tell me, please, which way I ought to go from here?’ (asked Alice)
    ‘That depends a good deal on where you want to get to,’ said the Cat.
    ‘I don’t much care where—’ said Alice.
    ‘Then it doesn’t matter which way you go,’ said the Cat.
    –    Lewis Carroll, Alice’s Adventures in Wonderland

    Let’s say that the board has sketched a desired future state (value of equity) of the company, and that you are left to find out whether it is possible to get there and, if so, the road to take. The first part means finding out whether the desired state belongs to the set of feasible future states for your company. If it does, you will need a road map to get there; if it does not, you will have to find out what additional means you need, and whether it is possible to acquire them.

    The current state (equity value) of your company is itself uncertain, since it depends on future sales, costs and profit – variables that usually are highly uncertain. The desired future state is even more so, since you need to find the strategies (roads) that can take you there, and among those the one best suited to the situation. The ‘best strategies’ will be those that, with the highest probability and at the lowest cost, give you the desired state – that is, those that have the desired state, or a better one, as a very probable outcome.

    Each of the ‘best strategies’ will have many different combinations of values for the variables – those that describe the company – that can give the desired state(s). In Monte Carlo simulation terms, this means that a few, some or many of the thousands of runs – realizations of future states – will give equity value outcomes that fulfil the required state. What we need then is to find how each of these came about – the transition – and select the most promising ones.
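    To make the idea concrete, here is a minimal Monte Carlo sketch in Python. It is not the S@R balance simulation model – the equity dynamics, the parameters and the target value are all invented for illustration – but it shows how the runs that reach the desired state can be picked out and their transition paths inspected:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_equity_path(n_years=5, equity=100.0):
        """One realization of future equity values, driven by uncertain
        sales growth and margins (a stand-in for a full balance simulation)."""
        path = [equity]
        for _ in range(n_years):
            growth = rng.normal(0.05, 0.10)        # hypothetical sales growth
            margin_shock = rng.normal(0.00, 0.05)  # hypothetical margin shock
            equity *= 1.0 + growth + margin_shock
            path.append(equity)
        return path

    target = 140.0                                 # desired state: equity value
    runs = [simulate_equity_path() for _ in range(10_000)]

    # Keep only the realizations that fulfil the required state ...
    hits = [p for p in runs if p[-1] >= target]
    print(f"P(desired state is reached) ≈ {len(hits) / len(runs):.1%}")

    # ... and inspect their transitional states - the 'roads' that got there.
    print("average path of successful runs:", np.round(np.mean(hits, axis=0), 1))
    ```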

    The S@R balance simulation model has the ability to make intermediate stops when the desired state(s) has been reached, giving the opportunity to take out complete reports describing the state(s), how it was reached, and by what path of transitional states.

    The flip side of this is that we can use the same model and the same assumptions to take out similar reports on how undesirable states were reached, and their paths of transitional states. This set of reports will clearly describe the risks underlying the strategy, and how and when they might occur.

    The dominant strategy will then be the one that has the desired state, or a better one, as a very probable outcome and, at the same time, the lowest probability of highly undesirable outcomes (the stochastically dominant strategy).
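    A similar sketch (again with invented outcome distributions and thresholds) shows how such a comparison between two candidate strategies could be made from simulated outcomes:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical simulated equity outcomes for two candidate strategies.
    strategy_a = rng.normal(150, 25, 10_000)  # higher mean, narrower spread
    strategy_b = rng.normal(140, 40, 10_000)  # lower mean, wider spread

    target, disaster = 140.0, 80.0
    for name, outcomes in (("A", strategy_a), ("B", strategy_b)):
        print(f"strategy {name}: P(>= target) = {(outcomes >= target).mean():.1%}, "
              f"P(<= disaster) = {(outcomes <= disaster).mean():.1%}")

    # First-order stochastic dominance: A dominates B if A's empirical CDF
    # never lies above B's. If the CDFs cross, neither dominates, and the
    # two tail probabilities above must be weighed against each other.
    grid = np.linspace(0.0, 300.0, 601)
    cdf_a = np.searchsorted(np.sort(strategy_a), grid) / strategy_a.size
    cdf_b = np.searchsorted(np.sort(strategy_b), grid) / strategy_b.size
    print("A first-order dominates B:", bool(np.all(cdf_a <= cdf_b)))
    ```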

    Mulling over possible target or scenario analyses – calculating backwards the value each variable must take to meet the target – is a waste of time, since the environment is stochastic and a number of different paths (time-lines) can lead to the desired state.

    And even if you could do the calculations, what would the probabilities be?

    Carroll, L. (2010). Alice‘s Adventures in Wonderland – Original Version. New York: Cosimo Classics.

  • Public Works Projects

    Public Works Projects

    This entry is part 2 of 4 in the series The fallacies of scenario analysis


    It always takes longer than you expect, even when you take into account Hofstadter’s Law. (Hofstadter, 1999)

    In public works and large-scale construction or engineering projects – where uncertainty mostly (or only) concerns cost – a simplified scenario analysis is often used.

    Costing Errors

    An excellent study carried out by Flyvbjerg, Holm and Buhl (Flyvbjerg, Holm & Buhl, 2002) addresses the serious questions surrounding the chronic costing errors in public works projects. The purpose was to identify the typical deviations from budget and the specifics of the major causes of these deviations.

    The main findings from the study reported in their article – all highly significant and most likely conservative – are as follows:

    In 9 out of 10 transportation infrastructure projects, costs are underestimated. For a randomly selected project, the probability of actual costs being larger than estimated costs is 0.86. The probability of actual costs being lower than or equal to estimated costs is only 0.14. For all project types, actual costs are on average 28% higher than estimated costs.

    Cost underestimation:

    – exists across 20 nations and 5 continents: it appears to be a global phenomenon.
    – has not decreased over the past 70 years: no improvement in cost estimate accuracy.
    – cannot be excused by error: it seems best explained by strategic misrepresentation, i.e. the planned, systematic distortion or misstatement of facts in the budget process (Jones & Euske, 1991).

    Demand Forecast Errors

    The demand forecasts only add more errors to the final equation (Flyvbjerg, Holm & Buhl, 2005):

    • 84 percent of rail passenger forecasts are wrong by more than ±20 percent.
    • 50 percent of road traffic forecasts are wrong by more than ±20 percent.
    • Errors in traffic forecasts are found in the 14 nations and 5 continents covered by the study.
    • Inaccuracy is constant for the 30-year period covered: no improvement over time.

    The Machiavellian Formulae

    Adding the cost and demand errors to other uncertain effects, we get:

    Machiavelli’s Formulae:
    Overestimated revenues + Overvalued development effects – Underestimated cost – Undervalued environmental impact = Project Approval (Flyvbjerg, 2007)

    Cost Projections

    Transportation infrastructure projects do not appear to be more prone to cost underestimation than other types of large projects, such as power plants, dams, water distribution, oil and gas extraction, information technology systems, aerospace systems, and weapons systems.

    All of the findings above should be considered forms of risk. As has been shown in cost engineering research, poor risk analysis accounts for many project cost overruns.
    Two components of error in the cost estimate can easily be identified (Bertisen & Davis, 2008):

    • Economic components: these errors are the result of incorrectly forecasted exchange rates, inflation rates of unit prices, fuel prices, or other economic variables affecting the realized nominal cost. Many of these variables have positively skewed distributions, which then feeds through to positive skewness in the total cost distribution.
    • Engineering components: these relate to errors both in estimating unit prices and in the required quantities. There may also be an over- or underestimation of the contingency needed to capture excluded items. Cost and quantity errors are limited on the downside, but there is no limit on the upside. For many cost and quantity items there is also a small probability of a “catastrophic event” which would dramatically increase costs or quantities.

    When these factors are combined, the result is likely to be a positively skewed cost distribution, with many small and large underrun and overrun deviations (from the most likely value) joined by a few very large, or catastrophic, overrun deviations.

    Since the total cost distribution is positively skewed, the expected cost can be considerably higher than the calculated most likely cost.
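    A small simulation sketch illustrates why. The three lognormal cost factors and their parameters below are pure assumptions chosen for illustration, not estimates from any real project; the point is only that multiplying positively skewed factors yields a positively skewed total, whose mean lies above its mode:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Hypothetical cost item: nominal cost = unit price x quantity x exchange
    # rate, each factor lognormal (limited downside, unlimited upside).
    unit_price = rng.lognormal(np.log(100.0), 0.20, n)
    quantity   = rng.lognormal(np.log(50.0),  0.15, n)
    fx_rate    = rng.lognormal(np.log(1.0),   0.10, n)

    total_cost = unit_price * quantity * fx_rate   # positively skewed

    # The expected cost exceeds the most likely (modal) cost.
    hist, edges = np.histogram(total_cost, bins=200)
    print(f"most likely cost ≈ {edges[np.argmax(hist)]:,.0f}")
    print(f"expected cost    ≈ {total_cost.mean():,.0f}")
    ```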

    We will keep these findings as a backcloth when we examine the Norwegian Ministry of Finance’s guidelines for assessing risk in public works (Ministry of Finance, 2008, p. 3), where total uncertainty is set equal to the sum of systematic and unsystematic uncertainty.

    Interpreting the guidelines, we find the following assumptions and advice:

    1. Unsystematic risk cancels out when looking at large portfolios of projects.
    2. All systematic risk is perfectly correlated with the business cycle.
    3. Total cost is approximately normally distributed.

    Since total risk is equal to the sum of systematic and unsystematic risk, the 2nd assumption implies that unsystematic risk comprises all uncertainty not explained by the business cycle – that is, all uncertainty in the planning, quantity calculations, etc. and in the production of the project.

    It is usually in these tasks that the project’s inherent risks are later revealed. Based on the studies above, it is reasonable to believe that the unsystematic risk has a skewed distribution located in its entirety on the positive part of the cost axis, i.e. it will not cancel out even in a portfolio of projects.

    The 2nd assumption, that all systematic risk is perfectly correlated with the business cycle, is a convenient one. It allows a simple summation of percentiles (10%/90%) across all cost variables to arrive at the total cost percentiles (see the previous post in this series).
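    The sketch below (two invented, normally distributed cost elements) shows what the shortcut buys: when the elements are perfectly correlated, adding their 90% percentiles does reproduce the 90% percentile of the total, but as soon as they are independent it overstates it:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    base = rng.normal(0.0, 1.0, n)                  # common 'business cycle'
    cost1 = 100.0 + 10.0 * base                     # perfectly correlated pair
    cost2 = 200.0 + 20.0 * base
    cost2_indep = 200.0 + 20.0 * rng.normal(0.0, 1.0, n)

    for label, c2 in (("perfectly correlated", cost2),
                      ("independent", cost2_indep)):
        p90_sum   = np.percentile(cost1, 90) + np.percentile(c2, 90)
        p90_joint = np.percentile(cost1 + c2, 90)
        print(f"{label:>21}: P90(c1) + P90(c2) = {p90_sum:6.1f},"
              f"  P90(c1 + c2) = {p90_joint:6.1f}")
    ```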

    The effect of this assumption is that the risk model becomes a perverted one, with only one stochastic variable; all the rest can be calculated from the outcomes of the “business cycle” distribution.

    Now we know that delivery times, quality and prices for all equipment, machinery and raw materials depend on the activity level in all countries demanding or producing the same items. So even if there existed a “business cycle” for every item (and a measure for it), these cycles would not necessarily be perfectly synchronised – proving the assumption false.

    The 3rd assumption implies either that all individual cost distributions are “near normal”, or that they are independent and identically distributed with finite variance, so that the central limit theorem can be applied.

    However, each individual cost distribution will be the product of a unit price, an exchange rate and a quantity, so even if every element in the multiplication had a normal distribution, the product would not.

    Invoking the central limit theorem is also a no-go: since the cost elements are, by the 2nd assumption, perfectly correlated, they cannot be independent.

    All experience and every study conclude that the total cost distribution is not normal: it is evidently positively skewed with fat tails, whereas the normal distribution is symmetric with thin tails.

    Our concerns about the wisdom of the 3rd assumption were confirmed in 2014; see The implementation of the Norwegian Governmental Project Risk Assessment Scheme and the following articles.

    The solution to all this is to establish a proper simulation model for every large project, run the Monte Carlo simulation necessary to establish the total cost distribution, and then calculate the risks involved.

    “If we arrive, as our forefathers did, at the scene of battle inadequately equipped, incorrectly trained and mentally unprepared, then this failure will be a criminal one because there has been ample warning” – (Elliott-Bateman, 1967)

    References

    Bertisen, J., & Davis, G. A. (2008). Bias and error in mine project capital cost estimation. The Engineering Economist, April 2008.

    Elliott-Bateman, M. (1967). Defeat in the East: the mark of Mao Tse-tung on war. London: Oxford University Press.

    Flyvbjerg Bent (2007), Truth and Lies about Megaprojects, Inaugural speech, Delft University of Technology, September 26.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2002), “Underestimating Costs in Public Works Projects: Error or Lie?” Journal of the American Planning Association, vol. 68, no. 3, 279-295.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2005), “How (In)accurate Are Demand Forecasts in Public Works Projects?” Journal of the American Planning Association, vol. 71, no. 2, 131-146.

    Hofstadter, D., (1999). Gödel, Escher, Bach. New York: Basic Books

    Jones, L.R., K.J. Euske (1991).Strategic Misrepresentation in Budgeting. Journal of Public Administration Research and Theory, 1(4), 437-460.

    Ministry of Finance (Norway) (2008). Systematisk usikkerhet [Systematic uncertainty]. Retrieved July 3, 2009, from the Concept research programme Web site: http://www.ivt.ntnu.no/bat/pa/forskning/Concept/KS-ordningen/Dokumenter/Veileder%20nr%204%20Systematisk%20usikkerhet%2011_3_2008.pdf

  • The fallacies of Scenario analysis

    The fallacies of Scenario analysis

    This entry is part 1 of 4 in the series The fallacies of scenario analysis


    Scenario analysis is often used in company valuation – with high, low and most likely scenarios to estimate the value range and expected value. A common definition seems to be:

    Scenario analysis is a process of analyzing possible future events or series of actions by considering alternative possible outcomes (scenarios). The analysis is designed to allow improved decision-making by allowing consideration of outcomes and their implications.

    Actually this definition covers at least two different types of analysis:

    1. Alternative scenario analysis: in politics or geo-politics, scenario analysis involves modeling the possible alternative paths of a social or political environment, and possibly diplomatic and war risks – “rehearsing the future”;
    2. Scenario analysis: a number of versions of the underlying mathematical problem are created to model the uncertain factors in the analysis.

    The first addresses “wicked” problems: ill-defined, ambiguous and associated with strong moral, political and professional issues. Since they are strongly stakeholder-dependent, there is often little consensus about what the problem is, let alone how to resolve it (Rittel & Webber, 1973).

    The second covers “tame” problems: those that have well-defined and stable problem statements and belong to a class of similar problems which are all solved in the same way (Conklin, 2001). Tame, however, does not mean simple – a tame problem can be technically very complex.

    Scenario analysis in the latter sense is a compromise between computationally complex stochastic models (the S@R approach) and overly simplistic, often unrealistic, deterministic models. Each scenario is a limited representation of the uncertain elements, and one sub-problem is generated for each scenario.

    Best Case / Worst Case Scenario Analysis
    With risky assets, the actual cash flows can be very different from expectations. At the minimum, we can estimate the cash flows if everything works to perfection – a best case scenario – and if nothing does – a worst case scenario.

    In practice, each input into asset value is set to its best (or worst) possible outcome and the cash flows estimated with those values.

    Thus, when valuing a firm, the revenue growth rate, operating margin, etc. are set at their highest possible levels while interest rates, etc. are set at their lowest, and then the best-case scenario value is computed.

    The question now is: is this really the best (or worst) value? And if, say, the 95% (5%) percentile is chosen for each input – will that give the 95% (5%) percentile for the firm’s value?

    Let’s say that in the first case – (X + Y) – we want to calculate entity value by adding the ‘NPV of market value of FCF’ (X) and the ‘NPV of continuing value’ (Y). Both are stochastic variables; X is positive, while Y can be positive or negative. In the second case – (X – Y) – we want to calculate the value of equity by subtracting the value of debt (Y) from the entity value (X). Here both X and Y are stochastic, positive variables.

    From statistics we know that for the joint distribution of (X ± Y) the expected value E(X ± Y) is E(X) ± E(Y), and that Var(X ± Y) is Var(X) + Var(Y) ± 2Cov(X,Y). Already from the expression for the joint variance we can see that the percentiles will not necessarily add up in the same way; the expected value, however, will.

    We can demonstrate this by calculating a number of percentiles for two independent normal distributions (with Cov(X,Y) = 0, to keep it simple), adding (subtracting) them, and plotting the result (red line) together with the same percentiles taken from the joint distribution – the blue line for (X+Y) and the green line for (X–Y).
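    A minimal numerical sketch of this demonstration, assuming two independent normal distributions – the parameters are arbitrary, only the comparison itself is the one described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000

    x = rng.normal(100.0, 20.0, n)     # X: hypothetical parameters
    y = rng.normal(60.0, 15.0, n)      # Y: hypothetical parameters

    pct = np.arange(5, 100, 5)         # the 5%, 10%, ..., 95% percentiles
    for label, sign in (("X+Y", +1), ("X-Y", -1)):
        naive = np.percentile(x, pct) + sign * np.percentile(y, pct)  # red line
        joint = np.percentile(x + sign * y, pct)        # blue / green line
        # The deviation vanishes around the median (here also the expected
        # value) and grows towards the tails.
        print(label, "naive - joint:", np.round(naive - joint, 1))
    ```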

    [Figure: percentiles of X+Y – sum of individual percentiles (red) vs. percentiles of the joint distribution (blue)]

    As we can see, the lines for X+Y coincide only at the expected value, and the deviation increases as we move out into the tails. For X–Y the deviation is even more pronounced:

    [Figure: percentiles of X–Y – difference of individual percentiles (red) vs. percentiles of the joint distribution (green)]

    Plotting the deviations from the joint distribution as percentages of X ± Y demonstrates very large relative deviations as we move out into the tails, and shows that the sign of the operator completely changes the direction of the deviations:

    [Figure: percentage deviation from the joint distribution percentiles, for X+Y and X–Y]

    Add to this, a valuation analysis with a large number of:

    1. both correlated and auto-correlated stochastic variables,
    2. complex calculations,
    3. simultaneous equations,

    and there is no way of knowing where you are on the probability distribution – unless you do a complete Monte Carlo simulation. It is like being out in the woods at night without map and compass: you know you are in the woods, but not where.

    Some advocate scenario analysis as a way to measure the risk of an asset, using the difference between the best case and the worst case. Based on the above, this can only be a very bad idea, since risk in the sense of loss is connected to the left tail, where the deviation from the joint distribution can be expected to be the largest. This brings us to the next post in the series.

    References

    Rittel, H., & Webber, M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4, 155-169. Amsterdam: Elsevier Scientific Publishing Company.

    Conklin, Jeff (2001). Wicked Problems. Retrieved April 28, 2009, from CogNexus Institute Web site: http://www.cognexus.org/wpf/wickedproblems.pdf