S@R – Page 8 – Strategy @ Risk

Author: S@R

  • When in doubt, develop the situation


    Developing the situation is the common-sense approach to dealing with complexity. Both as a method and a mind-set, it uses time and our minds to actively build context, so that we can recognize patterns, discover options, and master the future as it unfolds in front of us (Blaber, 2008)

    In our setting, ‘developing the situation’ is the process of numerically describing (modelling) the company’s operations, taking into account input from all parts of the company: sales, procurement, production, finance etc. This again has to be put into the company’s environment: tax regimes, interest and currency rates, investors’ expected return and all other stakeholders’ expectations.

    This is a context-building process that ends with a map of the company’s operations, giving clear roles and responsibilities to all departments and assigning ownership of each set of input data (assumptions).

    Without including uncertainty and volatility in both assumptions and data, this is, however, only a two-dimensional map. Adding the ever-present uncertainty gives us the third dimension and the option of innovation:

    … discovering innovative options instead of being forced to default to the status quo. Developing the situation optimizes our potential to recognize patterns and discover innovative options because it’s synergistic with how the human mind thinks and makes decisions (Blaber, 2008)

    Having calculated the cumulative probability distributions for the key variables, new information is immediately available. Their shape and location tell us about the underlying uncertainty and the range of possible outcomes. Some distributions can be tweaked and some cannot. Characteristics of production like machine speed, error rates or the limit on air traffic movements are given and can only be changed over time with new investments. Others, like sales, EBITDA, profit etc., can be tweaked and in some cases even fine-tuned by changing some of the exogenous variables or by introducing financial instruments, hedges etc.
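
    A cumulative distribution of this kind can be read straight off the simulation output. The sketch below (Python/NumPy) is a minimal illustration, assuming a vector of simulated EBITDA outcomes; the log-normal stand-in and all parameters are invented for the example, not taken from any actual model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for Monte Carlo output: 1,000 simulated EBITDA outcomes (in $M).
# In a real model these would come from the full operational simulation.
ebitda = rng.lognormal(mean=np.log(80), sigma=0.25, size=1_000)

# Empirical cumulative distribution: sorted outcomes with probabilities 1/n ... 1.
outcomes = np.sort(ebitda)
cum_prob = np.arange(1, outcomes.size + 1) / outcomes.size

# Shape and location in numbers: selected percentiles of the distribution.
for p in (5, 25, 50, 75, 95):
    print(f"{p:>2}th percentile: {np.percentile(ebitda, p):6.1f} $M")
```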

    Planning for an uncertain future is a hard task, but preparing for it by adapting to the uncertainties and risk uncovered is well within our abilities – giving us:

    …  freedom of choice and flexibility to adapt to uncertainties instead of avoiding them because they weren’t part of the plan. Happenstance, nature, and human behaviour all interact within an environment to constantly alter the situation. No environment is ever static. As the environment around us changes, developing the situation allows us to maintain our most prized freedom: the freedom of choice – to adapt our thinking and decision-making accordingly (Blaber, 2008)

    Not all uncertainty represents risk of loss; some of it represents opportunities, given the right strategy and the means and will to implement it:

    … having the audacity to seize opportunities, instead of neglecting them due to risk aversion and fear of the unknown. Risk aversion and fear of the unknown are direct symptoms of a lack of context, and are the polar opposites of audacity. The way to deal with a fear of the unknown isn’t to avoid it by doing nothing … (Blaber, 2008)

    Pete Blaber’s book was written on an entirely different theme than ours, but like other good books on strategy and on hard-earned experience from military planning, it is easily adapted to our civilian purpose.

    References

    Blaber, P. (2008). The Mission, the Men, and Me. New York: Berkley.

  • Top Ten Concerns of CFOs – May 2009


    A poll of more than 1,200 senior finance executives by CFO Europe together with Tilburg and Duke Universities ranks the top ten external and internal concerns in Europe, Asia and America (Karaian, 2009).

    [Figure: CFO survey – top ten external and internal concerns by region]

    High on the list in all regions we find, as external concerns, consumer demand, interest rates, currency volatility and competition.

    Among the internal concerns, the ability to forecast results ranked highest, together with working capital management and balance sheet weakness. These are concerns that balance simulation addresses, with the purpose of calculating the effects of different strategies. Add the uncertainty of future currency and interest rates, demand and competition, and you have all the ingredients pointing to the necessity of a stochastic simulation model.

    The risk that has now surfaced should compel more managers to look into the risk inherent in their operations. Even if you cannot plan for an uncertain future, you can prepare for what it might bring.

    References

    Karaian, Jason (2009, May). Top Ten Concerns of CFOs. CFO Europe, 12(1), 10-11.

  • The fallacies of Scenario analysis


    This entry is part 1 of 4 in the series The fallacies of scenario analysis

     

    Scenario analysis is often used in company valuation – with high, low and most likely scenarios to estimate the value range and expected value. A common definition seems to be:

    Scenario analysis is a process of analyzing possible future events or series of actions by considering alternative possible outcomes (scenarios). The analysis is designed to allow improved decision-making by allowing consideration of outcomes and their implications.

    Actually this definition covers at least two different types of analysis:

    1. Alternative scenario analysis: in politics or geo-politics, scenario analysis involves modeling the possible alternative paths of a social or political environment, and possibly diplomatic and war risks – “rehearsing the future”;
    2. Scenario analysis: a number of versions of the underlying mathematical problem are created to model the uncertain factors in the analysis.

    The first addresses “wicked” problems: ill-defined, ambiguous and associated with strong moral, political and professional issues. Since they are strongly stakeholder dependent, there is often little consensus about what the problem is, let alone how to resolve it (Rittel & Webber, 1973).

    The second covers “tame” problems: those that have well-defined and stable problem statements and belong to a class of similar problems which are all solved in the same way (Conklin, 2001). Tame, however, does not mean simple – a tame problem can be technically very complex.

    Scenario analysis in the latter sense is a compromise between computationally complex stochastic models (the S@R approach) and the overly simplistic and often unrealistic deterministic models. Each scenario is a limited representation of the uncertain elements, and one sub-problem is generated for each scenario.
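
    In practice this means producing one deterministic run per scenario, with every uncertain input fixed at a chosen level. The sketch below (Python) shows the mechanics only; the value function and the input levels are invented placeholders, not a real valuation model.

```python
# Hypothetical, highly simplified value function of two uncertain inputs.
def firm_value(revenue_growth: float, interest_rate: float) -> float:
    return 1_000 * (1 + revenue_growth) / (1 + interest_rate)

# One sub-problem per scenario: every uncertain input fixed at one level.
scenarios = {
    "worst case": {"revenue_growth": -0.05, "interest_rate": 0.08},
    "most likely": {"revenue_growth": 0.03, "interest_rate": 0.05},
    "best case": {"revenue_growth": 0.10, "interest_rate": 0.03},
}

for name, inputs in scenarios.items():
    print(f"{name:11s}: {firm_value(**inputs):7.1f}")
```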

    Best-case / worst-case scenario analysis.
    With risky assets, the actual cash flows can be very different from expectations. At the minimum, we can estimate the cash flows if everything works to perfection – a best-case scenario – and if nothing does – a worst-case scenario.

    In practice, each input into asset value is set to its best (or worst) possible outcome and the cash flows estimated with those values.

    Thus, when valuing a firm, the revenue growth rate, operating margin etc. are set at their highest possible levels while interest rates etc. are set at their lowest, and then the best-case scenario value is computed.

    The question now is whether this really is the best (or worst) value – or, if say the 95% (5%) percentile is chosen for each input, whether that will give the 95% (5%) percentile for the firm’s value.

    Let’s say that in the first case – (X + Y) – we want to calculate entity value by adding the ‘NPV of market value of FCF’ (X) and the ‘NPV of continuing value’ (Y). Both are stochastic variables; X is positive while Y can be positive or negative. In the second case – (X – Y) – we want to calculate the value of equity by subtracting the value of debt (Y) from the entity value (X). Both X and Y are stochastic, positive variables.

    From statistics we know that for the joint distribution of (X ± Y) the expected value is E(X ± Y) = E(X) ± E(Y) and the variance is Var(X ± Y) = Var(X) + Var(Y) ± 2Cov(X,Y). Already from the expression for the variance we can see that combining the individual percentiles will not necessarily reproduce the percentiles of the joint distribution. The expected value, however, will be the same.

    We can demonstrate this by calculating a number of percentiles for two independent normal distributions (with Cov(X,Y) = 0, to keep it simple), adding (subtracting) them, and plotting the result (red line) together with the same percentiles from the joint distribution – the blue line for (X + Y) and the green line for (X – Y).
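
    The point is easy to verify numerically. The sketch below (Python/NumPy) uses arbitrary, invented parameters for the two normal distributions; it compares the percentile-by-percentile combination with the percentiles of the simulated joint distribution for both X + Y and X – Y.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent normal variables (Cov(X, Y) = 0); parameters are arbitrary.
x = rng.normal(loc=100.0, scale=20.0, size=100_000)
y = rng.normal(loc=60.0, scale=15.0, size=100_000)

pct = [1, 5, 25, 50, 75, 95, 99]

# Combining the individual percentiles (the scenario shortcut) ...
naive_sum = np.percentile(x, pct) + np.percentile(y, pct)
naive_diff = np.percentile(x, pct) - np.percentile(y, pct)
# ... versus the percentiles of the actual joint distributions.
joint_sum = np.percentile(x + y, pct)
joint_diff = np.percentile(x - y, pct)

print("pct   naive X+Y  joint X+Y   naive X-Y  joint X-Y")
for p, ns, js, nd, jd in zip(pct, naive_sum, joint_sum, naive_diff, joint_diff):
    print(f"{p:3d}   {ns:9.1f}  {js:9.1f}   {nd:9.1f}  {jd:9.1f}")
# The columns agree only around the 50th percentile: the naive sum is too wide
# and the naive difference far too narrow, since sd(X ± Y) differs from sd(X) ± sd(Y).
```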

    [Figure: Percentiles of X + Y – sum of the individual percentiles (red) vs. percentiles of the joint distribution (blue)]

    As we can see, the lines for X + Y only coincide at the expected value, and the deviation increases as we move out on the tails. For X – Y the deviation is even more pronounced:

    [Figure: Percentiles of X – Y – difference of the individual percentiles (red) vs. percentiles of the joint distribution (green)]

    Plotting the deviation from the joint distribution as a percentage of X ± Y demonstrates very large relative deviations as we move out on the tails, and shows that the sign of the numerical operator totally changes the direction of the deviations:

    [Figure: Percentage deviation from the joint distribution across percentiles, for X + Y and X – Y]

    Add to this a valuation analysis with a large number of:

    1. both correlated and auto-correlated stochastic variables,
    2. complex calculations,
    3. simultaneous equations,

    and there is no way of finding out where you are on the probability distribution – unless you do a complete Monte Carlo simulation. It is like being out in the woods at night without a map and compass – you know you are in the woods but not where.

    Some advocate scenario analysis as a way to measure the risk of an asset, using the difference between the best-case and worst-case values. Based on the above this can only be a very bad idea, since risk in the sense of loss is connected to the left tail, where the deviation from the joint distribution can be expected to be the largest. This brings us to the next post in the series.

    References

    Rittel, H., & Webber, M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4, 155-169. Amsterdam: Elsevier Scientific Publishing Company.

    Conklin, Jeff (2001). Wicked Problems. Retrieved April 28, 2009, from CogNexus Institute Web site: http://www.cognexus.org/wpf/wickedproblems.pdf

     

  • Valuation as a strategic tool


    This entry is part 1 of 2 in the series Valuation

     

    Valuation is usually done only when selling or buying a company (see: probability of gain and loss). However, it is a versatile tool for assessing issues such as risk and strategy, in both operations and finance.

    The risk and strategy element is often not evident unless the valuation is executed as a Monte Carlo simulation, giving the probability distribution for the equity value (or the entity value). We will in a new series of posts take a look at how this distribution can be used.

    By strategy we will in the following mean a plan of action designed to achieve a particular goal. The plan may involve issues across the finance and operations of the company: debt, equity, taxes, currency, markets, sales, production etc. The goal is usually to move the value distribution to the right (increasing value), but it may well be to shorten the left tail – reducing risk – or to increase the upside by lengthening the right tail.

    There are a variety of definitions of risk. In general, risk can be described as “uncertainty of loss” (Denenberg, 1964); “uncertainty about loss” (Mehr & Cammack, 1961); or “uncertainty concerning loss” (Rabel, 1968). Greene defines financial risk as the “uncertainty as to the occurrence of an economic loss” (Greene, 1962).

    Risk can also be described as “measurable uncertainty” when the probability of an outcome is possible to calculate (is knowable), and uncertainty, when the probability of an outcome is not possible to determine (is unknowable) (Knight, 1921). Thus risk can be calculated, but uncertainty only reduced.

    In our context some uncertainty is objectively measurable, like downtime, error rates, operating rates, production time, seat factor, turnaround time etc. For others, like sales, interest rates, inflation rates etc., the uncertainty can only be measured subjectively.

    “[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed.” (John Maynard Keynes, 1937)

    On this basis we will proceed, using managers’ best guesses about the range of possible values and the most likely value for production-related variables, and market consensus etc. for possible outcomes of variables like inflation, interest rates etc. We will use this to generate appropriate distributions (log-normal) for sales, prices etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate or do not exist, we will use rectangular (uniform) distributions.
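
    As an illustration of how such input distributions might be generated, here is a minimal sketch (Python/NumPy). All parameter values are invented placeholders; in practice they would come from managers’ estimates and market consensus.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # number of simulation runs

# Sales: log-normal with median 500 and a relative spread of roughly 20%.
sales = rng.lognormal(mean=np.log(500.0), sigma=0.20, size=n)      # $M

# Investment: triangular (low, most likely, high) to avoid long tails.
capex = rng.triangular(left=40.0, mode=55.0, right=90.0, size=n)   # $M

# Inflation: rectangular (uniform) where no most likely value exists.
inflation = rng.uniform(low=0.01, high=0.04, size=n)               # yearly rate

# Each simulation run would draw one value from each distribution and feed it
# through the full set of budget and valuation equations.
print(sales.mean(), capex.mean(), inflation.mean())
```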

    Benoit Mandelbrot (Mandelbrot & Hudson, 2006) and Nassim Taleb (Taleb, 2007) have rightly criticized the economics profession for overuse of the normal distribution – the bell curve. The argument is that it has too thin and short tails. It will thus underestimate the possibility of far-out extremes – that is, low-probability events with high impact (Black Swans).

    Since we use Monte Carlo simulation we can use any distribution to represent the possible outcomes of a variable, so using the normal distribution for its statistical niceties is not necessary. We can even construct distributions that have the features we look for, without having to describe them mathematically.

    However, using normal distributions for some variables, log-normal for others etc. in a value simulation will not give you a normally or log-normally distributed equity value. A number of things can happen in the forecast period: adverse sales, interest or currency rates, incurred losses, new equity calls etc. Together with tax, legal and IFRS rules etc., the system will not be linear and will be much more complex to calculate than mere addition, subtraction or multiplication of probability distributions.

    We will in the following confine ourselves to uncertainty and loss, where loss is an event where the calculated equity value is less than the book value of equity or, in the case of M&A, less than the price paid.

    Assume that we have calculated the (cumulative) value distribution for the company’s current operations and for two alternative strategies. The distribution for current operations (blue curve) has a shape showing considerable downside risk (left tail) and limited upside potential, giving a mean equity value of $92M with a minimum of $-28M and a maximum of $150M. The span of possible outcomes, and the fact that the value can be negative, compelled the board to look for new strategies reducing the downside risk.

    [Figure: Cumulative value distributions for current operations (blue), strategy #1 (green) and strategy #2 (red)]

    They come up with strategy #1 (green curve), which to a risk-averse board is a good proposition: it reduces downside risk by substantially shortening the left tail, increases the expected value of equity by moving the distribution to the right, and reduces the overall uncertainty by producing a more vertical curve. In numbers: the minimum value was raised to $68M, the mean value of equity was increased to $112M and the coefficient of variation was reduced from 30% to 14%. The upside potential increased somewhat, but not much.
    To a risk-seeking board, strategy #2 (red curve) would be a better proposition: the right tail has been stretched out, giving a maximum value of $241M, but so has the left tail, giving a minimum value of $-163M, increasing the event space and the coefficient of variation to 57%. The mean value of equity has been slightly reduced, to $106M.
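
    The figures used to compare the strategies can be read directly off the simulated value distributions. The sketch below (Python/NumPy) shows the kind of summary involved; the two normal distributions are only stand-ins for real simulation output, with invented parameters.

```python
import numpy as np

def summarize(equity_values: np.ndarray, label: str) -> None:
    """Summary statistics used to compare simulated value distributions."""
    mean = equity_values.mean()
    cv = equity_values.std() / mean           # coefficient of variation
    p_neg = (equity_values < 0).mean()        # probability of negative equity value
    print(f"{label}: mean={mean:6.1f}  min={equity_values.min():7.1f}  "
          f"max={equity_values.max():7.1f}  CV={cv:5.1%}  P(value<0)={p_neg:5.1%}")

rng = np.random.default_rng(3)
# Stand-in distributions; real ones come from the Monte Carlo valuation model.
current_ops = rng.normal(loc=92.0, scale=28.0, size=10_000)
strategy_1 = rng.normal(loc=112.0, scale=16.0, size=10_000)

summarize(current_ops, "Current operations")
summarize(strategy_1, "Strategy #1       ")
```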

    So how could the strategies have been brought about? Strategy #1 could involve the introduction of long-term energy contracts, taking advantage of today’s low energy cost. Strategy #2 introduces a new product, with high initial investments and considerable uncertainty about market acceptance.

    As we now can see, the shape of the value distribution gives a lot of information about the company’s risk and opportunities. And given the board’s risk appetite it should be fairly simple to select between strategies just by looking at the curves. But what if it is not obvious which is best? We will return later in this series to answer that question and to show how the company’s risk and opportunities can be calculated.

    References

    Denenberg, H., et al. (1964). Risk and insurance. Englewood Cliffs, NJ: Prentice-Hall, Inc.
    Greene, M. R. (1962). Risk and insurance. Cincinnati, OH: South-Western Publishing Co.
    Keynes, J. M. (1937). The General Theory of Employment. Quarterly Journal of Economics, 51(2), 209-223.
    Knight, F. H. (1921). Risk, uncertainty and profit. Boston, MA: Houghton Mifflin Co.
    Mandelbrot, B., & Hudson, R. (2006). The (Mis)Behavior of Markets. Cambridge: Perseus Books Group.
    Mehr, R. I., & Cammack, E. (1961). Principles of insurance (3rd ed.). Richard D. Irwin, Inc.
    Rabel, W. H. (1968). Further comment. Journal of Risk and Insurance, 35(4), 611-612.
    Taleb, N. (2007). The Black Swan. New York: Random House.

  • The Probability of Bankruptcy


    This entry is part 3 of 4 in the series Risk of Bankruptcy

     

    In the simulation we have, for every year, calculated all four metrics and, over the 250 runs, their mean and standard deviation. All metrics are thus based on the same data set. During the forecast period the company invested heavily, financed partly by equity and partly by loans. The operations admittedly give a low but fairly stable return on assets, but the company was never at any time in need of a capital infusion to avoid insolvency. Since we now “know” the future, we can judge the metrics’ ability to predict bankruptcy.

    A good metric should have a low probability of rejecting a true hypothesis of bankruptcy (avoiding false positive judgements) and a high probability of rejecting a false hypothesis of bankruptcy (avoiding false negative judgements).

    In the figures below, the more or less horizontal curve gives the most likely value of the metric, while the vertical red lines indicate the 90% event space. By visual inspection of the area covered by the red lines, we can get an indication of the false negative and false positive rates.
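
    A sketch of how the yearly curves and the 90% event space can be computed from the simulation output (Python/NumPy). The array below is a placeholder with invented parameters, standing in for the 250 runs times 15 forecast years of a simulated metric; the growing spread mimics the accumulation of uncertainty over the forecast period.

```python
import numpy as np

rng = np.random.default_rng(11)

# Placeholder simulation output: 250 runs x 15 forecast years of one metric,
# with the yearly spread increasing towards the end of the period.
metric = rng.normal(loc=3.0, scale=np.linspace(0.2, 1.0, 15), size=(250, 15))

central = metric.mean(axis=0)                        # central value per year
low, high = np.percentile(metric, [5, 95], axis=0)   # 90% event space per year

for year, (m, lo, hi) in enumerate(zip(central, low, high), start=1):
    print(f"year {year:2d}: mean={m:5.2f}  90% interval=[{lo:5.2f}, {hi:5.2f}]")
```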

    The Z-Index shows an increase over time in the probability of insolvency, but the probability is very low for all years in the forecast period. The most striking effect is the increase in variance as we move towards the end of the simulated period. This is caused by the fact that uncertainty is “accumulated” over the forecast period. However, according to the Z-Index, this company will not be endangered within the 15-year horizon.

    [Figure: Z-Index time series over the forecast period, with yearly 90% event space]

    In our case the Z-Index correctly identifies the probability of insolvency as small. By inspecting the yearly outcomes represented by the vertical lines we also find an almost zero false negative rate.

    The Z-score metrics tell a different story. The Z’’-score starts in the grey area and eventually ends up in the distress zone. The two others put the company in the distress zone for the whole forecast period.

    [Figure: Time series for the three Z-score metrics over the forecast period]

    Since the distress zone for the Z-score is below 1.8, a visual inspection of the area covered by the red lines indicates that most of the outcomes fall in the distress zone. The Z-score metric in this case commits type II errors by giving false negative judgements. However, it is not clear what this means – only that the company in some respects is similar to companies that have gone bankrupt.
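
    The rate in question can be estimated directly as the share of simulated outcomes falling below the zone border, year by year. A minimal sketch (Python/NumPy), where the simulated Z-scores are a placeholder array and 1.8 is the distress border referred to above:

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder for simulated Z-scores: 250 runs x 15 forecast years.
z = rng.normal(loc=1.0, scale=1.2, size=(250, 15))

DISTRESS_BORDER = 1.8  # distress-zone border for the original Z-score

# Share of yearly outcomes classified as distressed - in this setting the rate
# of 'false negative' judgements for a company we know stays solvent.
false_negative_rate = (z < DISTRESS_BORDER).mean(axis=0)
print(np.round(false_negative_rate, 2))
```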

    [Figure: Z-score time series with yearly outcome ranges]

    If we look at the Z metrics for the individual years, we find that the Z-score takes values from minus two to plus three; in fact, it has a coefficient of variation ranging from 300% to 500%. In addition there is very little evidence of the expected cumulative effect.

    [Figure: Coefficient of variation for the Z metrics by year]

    The other two metrics (Z’ and Z’’) show much less variation and the expected cumulative effect. The Z’-score outcomes fall entirely in the distress zone, giving a 100% false negative rate.

    [Figure: Z’-score time series with yearly outcome ranges]

    The Z’’-score outcomes fall mostly in the distress zone below 1.1, but more and more fall in the grey area as we move forward in time. If we combine the safe zone with the grey, we get a much lower false negative rate than for both the Z- and the Z’-score.

    [Figure: Z’’-score time series with yearly outcome ranges]

    It is difficult to draw conclusions from this exercise, but it points to the possibility of high false negative rates for the Z metrics. The use of ratios in assessing a company’s performance is often questionable, and a linear metric based on a few such ratios will obviously have limitations. The fact that the original sample consisted of equal numbers of healthy and bankrupt companies might also have contributed to bias in the discriminant coefficients. In real life the failure rate is much lower than 50%!

  • Predicting Bankruptcy


    This entry is part 2 of 4 in the series Risk of Bankruptcy

     

    The Z-score formula for predicting bankruptcy was developed in 1968 by Edward I. Altman. The Z-score is not intended to predict when a firm will file a formal declaration of bankruptcy in a district court. It is instead a measure of how closely a firm resembles other firms that have filed for bankruptcy.

    The Z-score is a classification method, using a multivariate discriminant function, that measures corporate financial distress and predicts the likelihood of bankruptcy within two years. ((Altman, Edward I., “Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy”. Journal of Finance, (September 1968): pp. 589-609.))

    Others, like Springate ((Springate, Gordon L.V., “Predicting the Possibility of Failure in a Canadian Firm”. Unpublished M.B.A. Research Project, Simon Fraser University, January 1978.)), Fulmer ((Fulmer, John G. Jr., Moon, James E., Gavin, Thomas A., Erwin, Michael J., “A Bankruptcy Classification Model For Small Firms”. Journal of Commercial Bank Lending (July 1984): pp. 25-37.)) and the CA-SCORE model ((“C.A. – Score, A Warning System for Small Business Failures”, Bilanas (June 1987): pp. 29-31.)), have later followed in Altman’s track, using step-wise multiple discriminant analysis to evaluate a large number of financial ratios’ ability to discriminate between future corporate failures and successes.

    Since Altman’s discriminant function is only linear in the explanatory variables, there have been a number of attempts to capture non-linear relations through other types of models ((Berg, Daniel. “Bankruptcy Prediction by Generalized Additive Models.” Statistical Research Report. January 2005. Dept. of Math. University of Oslo. 20 Mar 2009 <http://www.math.uio.no/eprint/stat_report/2005/01-05.pdf>.)) ((Dakovic, Rada, Claudia Czado, Daniel Berg. Bankruptcy prediction in Norway: a comparison study. June 2007. Dept. of Math. University of Oslo. 20 Mar 2009 <http://www.math.uio.no/eprint/stat_report/2007/04-07.pdf>.)). Even if some of these models show somewhat better predictive ability, we will use the better-known Z-score model in the following.

    Studies measuring the effectiveness of the Z-score claim the model to be accurate with more than 70% reliability. Altman found that about 95% of the bankrupt firms were correctly classified as bankrupt, and roughly 80% of the sick, non-bankrupt firms were correctly classified as non-bankrupt ((Altman, Edward I. “Revisiting Credit Scoring Models in a Basel 2 Environment.” Finance Working Paper Series. May 2002. Stern School of Business. 20 Mar 2009 <http://w4.stern.nyu.edu/finance/docs/WP/2002/html/wpa02041.html>.)). However, others find that the Z-score tends to misclassify the non-bankrupt firms ((Ricci, Cecilia Wagner. “Bankruptcy Prediction: The Case of the CLECS.” Mid-American Journal of Business 18 (2003): 71-81.)).

    The Z-score combines four or five common business ratios using a linear discriminant function to determine the regions with high likelihood of bankruptcy. The discriminant coefficients (ratio value weights) were originally based on data from publicly held manufacturers, but have since been modified for private manufacturing, non-manufacturing and service companies.

    The original data sample consisted of 66 firms, half of which had filed for bankruptcy under Chapter 7. All businesses in the database were manufacturers, and small firms with assets of less than $1 million were eliminated.

    The advantage of discriminant analysis is that many characteristics can be combined into a single score. A low score implies membership in one group, a high score implies membership in the other, and a middling score leaves uncertainty as to which group the subject belongs to.

    The original score was as follows:

    Z = 1.2 WC/TA + 1.4 RE/TA + 3.3 EBIT/TA + 0.6 ME/BL + 0.999 S/TA

    where:

    WC/TA = Working Capital / Total Assets
    RE/TA = Retained Earnings / Total Assets
    EBIT/TA = EBIT / Total Assets
    S/TA = Sales / Total Assets
    ME/BL = Market Value of Equity / Book Value of Total Liabilities
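
    A minimal sketch of the score as a function (Python). The example figures in the call are invented, and the zone borders of 1.81 and 2.99 are the ones commonly quoted for this version of the score; the borders actually used in our calculations are those in the zone table further below.

```python
def altman_z(wc, re, ebit, me, sales, ta, bl):
    """Original Altman Z-score for publicly held manufacturers."""
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * me / bl + 0.999 * sales / ta)

def risk_zone(z, distress=1.81, safe=2.99):
    """Classify a Z-score into the distress, grey or safe zone."""
    if z < distress:
        return "distress"
    return "grey" if z < safe else "safe"

# Invented example figures (in $M) - not taken from the simulation model.
z = altman_z(wc=-20, re=110, ebit=45, me=380, sales=520, ta=600, bl=260)
print(round(z, 2), risk_zone(z))
```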

    From about 1985 onwards, the Z-score has gained acceptance by auditors, management accountants, courts, and database systems used for loan evaluation. It has been used in a variety of contexts and countries, but was designed originally for publicly held manufacturing companies with assets of more than $1 million. Later revisions take into account the book value of privately held shares, and the fact that turnover ratios vary widely across non-manufacturing industries:

    1. Z-score for publicly held Manufacturers
    2. Z’-score for private Firms
    3. Z’’-score for Manufacturers, Non-Manufacturer Industrials & Emerging Market Credits

    The estimated discriminant coefficients for the different models are given in the following table: [Table=3]

    The accompanying borders of the different regions – the risk zones – are given in the table below: [Table=4]

    In the following calculations we will use the estimated value of equity as a proxy for market capitalization. Actually it is the other way around, since market capitalization is a guesstimate of the intrinsic equity value.

    In our calculations the Z-score metrics become stochastic variables, with distributions derived both from the operational input distributions for sales, prices, costs etc. and from the distributions for financial variables like the risk-free interest rate, inflation etc. The figures below are taken from the fifth year in the simulation, to be comparable with the previous Z-index calculation, which gave a very low probability of insolvency.

    We have in the following calculated all three Z metrics, even though only the Z-score fits the company description.

    [Figure: Simulated distribution of the Z-score]

    Using the Z-score metric, we find that the company with high probability will be found in the distress area – it can even have a negative Z-score. The latter is due partly to the fact that the company has negative working capital – being partly financed by its suppliers – and partly to the use of the calculated value of equity, which can be negative.

    The Z’-score is even more somber, giving no possibility of values outside the distress area:

    [Figure: Simulated distribution of the Z’-score]

    The Z’’-score, however, puts most of the observations in the grey area:

    [Figure: Simulated distribution of the Z’’-score]

    Before drawing any conclusions, we will in the next post look at the time series for both the Z-index and the Z-scores. Nevertheless, one observation can already be made: the Z metric is a stochastic variable with an event space that can easily encompass all three risk zones – we therefore need the probability distribution over the zones to forecast the risk of bankruptcy.
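
    A sketch of that last step – turning a simulated Z metric into a probability distribution over the risk zones (Python/NumPy). The simulated scores below are an invented placeholder, and the zone borders of 1.1 and 2.6 for the Z’’-score are an assumption based on the commonly quoted values; the borders actually used would come from the zone table above.

```python
import numpy as np

def zone_probabilities(z_scores, distress_border, safe_border):
    """Probability distribution of a simulated Z metric over the three risk zones."""
    z = np.asarray(z_scores)
    return {
        "distress": float((z < distress_border).mean()),
        "grey": float(((z >= distress_border) & (z < safe_border)).mean()),
        "safe": float((z >= safe_border).mean()),
    }

rng = np.random.default_rng(9)
# Placeholder simulated Z''-scores for one forecast year (not actual model output).
z_dd = rng.normal(loc=1.6, scale=0.8, size=250)

print(zone_probabilities(z_dd, distress_border=1.1, safe_border=2.6))
```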
