
Category: Corporate Strategy

  • A short presentation of S@R


    This entry is part 1 of 4 in the series A short presentation of S@R

     

My general view would be that you should not take your intuitions at face value; overconfidence is a powerful source of illusions. – Daniel Kahneman ("Strategic decisions: When can you trust your gut?", 2010)

Most companies have some sort of model describing the company's operations. These models are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data: sales, costs, interest and currency rates, etc. We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that describes both the different risks facing the company and the opportunities they produce.
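This "flaw of averages" is easy to demonstrate. A minimal sketch in Python/NumPy (all numbers purely illustrative): when a capacity constraint caps the upside, the profit computed at expected demand overstates the expected profit.

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.lognormal(mean=np.log(100), sigma=0.4, size=100_000)  # uncertain demand (illustrative)
capacity, margin = 110.0, 5.0                                      # fixed capacity and unit margin

profit = np.minimum(demand, capacity) * margin     # the capacity constraint caps the upside
print(f"profit at average demand: {min(demand.mean(), capacity) * margin:.0f}")
print(f"average profit:           {profit.mean():.0f}")   # lower than the figure above
```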

S@R has set out to create models (see PDF: Short presentation of S@R) that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to a holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

[Figure: Generic simulation model]

Both the deterministic and the stochastic balance simulation can be set up in two different ways:

1. by using an EBITDA model to describe the company's operations, or
2. by using coefficients of fabrication as direct input to the balance model.

The first approach implies setting up a dedicated EBITDA subroutine in the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

The use of coefficients of fabrication and their variation is a low-effort (low-cost) alternative, using the internal accounting as a basis. This will in many cases give a 'good enough' description of the company – its risks and opportunities. The data needed for the company's economic environment (taxes, interest rates etc.) will be the same in both alternatives.
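As a rough illustration of the second alternative, the sketch below (names and numbers are illustrative, not taken from any client model) draws stochastic coefficients of fabrication around their historical means and applies them to a simulated revenue, producing an EBITDA distribution that could feed the balance model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 10_000

# Illustrative coefficients of fabrication (cost per unit of revenue) with
# standard deviations taken from their historical variation in the accounts.
coefficients = {"materials": (0.45, 0.03), "labour": (0.25, 0.02), "other": (0.10, 0.01)}

revenue = rng.lognormal(mean=np.log(100.0), sigma=0.10, size=n_sim)       # sales forecast
cost_ratio = sum(rng.normal(m, s, n_sim) for m, s in coefficients.values())
ebitda = revenue * (1.0 - cost_ratio)                                     # feeds the balance model

print(f"expected EBITDA {ebitda.mean():.1f}, 5%-95% range "
      f"[{np.percentile(ebitda, 5):.1f}, {np.percentile(ebitda, 95):.1f}]")
```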

[Figure: EBITDA model]

In some cases we have used both approaches for the same client, using the latter for smaller subsidiaries with production structures differing from the parent company's.
The second approach can also be considered an introduction and stepping stone to a more holistic EBITDA model.

    What problems do we solve?

• The aim, regardless of approach, is to quantify not only the company's individual and aggregated risks, but also its potential, making the company capable of detailed planning and of executing earlier and more apt actions against risk factors.
• This improves budget stability through greater insight into cost-side risks and income-side potential. It is achieved by an active budget-forecast process; the control-adjustment cycle teaches the company to target more realistic budgets – with better stability and increased company value as a result.
• Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modelling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
• This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is to compare and choose strategies by analysing each strategy's risks and potential – and to select the alternative that is stochastically dominant given the company's chosen risk profile.
• Our aim is therefore to transform enterprise risk management from merely safeguarding enterprise value to contributing to the increase and maximization of the firm's value within the firm's feasible set of possibilities.

    References

Strategic decisions: When can you trust your gut? (2010). McKinsey Quarterly, (March).

  • Selecting Strategy


    This entry is part 2 of 2 in the series Valuation

     

This is an example of how S@R can define, analyze, visualize and help in selecting strategies for a broad range of issues: financial, operational and strategic.

Assume that we have performed a simulation of corporate equity value (see: Corporate-risk-analysis) for two different strategies (A and B). The cumulative distributions are given in the figure below.

Since the calculation is based on a full simulation of both P&L and balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can use the distributions directly as a basis for selecting the best strategy.

[Figure: Cumulative distributions of equity value for strategies A and B]

In this rather simple case we intuitively find strategy B to be the best, lying to the right of strategy A for all probable values of equity. However, to be able to select the best strategy from larger and more complicated sets of feasible strategies, we need a better-grounded method than mere intuition.

The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern's expected utility paradigm (von Neumann & Morgenstern, 1953), is such a method.

When there is no uncertainty, the maximum return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm's expected utility.

To specify a utility function (U) we need a measure that uniquely identifies each strategy (business) outcome and a function that maps each outcome to its corresponding utility. Utility, however, is a purely ordinal measure: it can be used to establish the rank ordering of strategies, but not to determine the degree to which one is preferred over the other.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
E_A U(X) ≥ E_B U(X), with at least one strict inequality.
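With Monte Carlo output for each strategy, this criterion can be evaluated directly once a utility function is assumed. A minimal sketch, using an exponential (risk-averse) utility and made-up equity samples purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
equity_a = rng.normal(100, 30, 10_000)    # illustrative simulated equity values, strategy A
equity_b = rng.normal(110, 60, 10_000)    # strategy B: higher mean but higher spread

def utility(x):
    """An assumed concave (risk-averse) utility function, purely for illustration."""
    return 1.0 - np.exp(-x / 100.0)

print(f"E_A U(X) = {utility(equity_a).mean():.3f}")
print(f"E_B U(X) = {utility(equity_b).mean():.3f}")
```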

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

    Assuming that U’≥ 0 i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted as AD1B i.e. A dominates B by 1st order stochastic dominance) if:

E_A U(X) ≥ E_B U(X)  ↔  S_A(x) ≤ S_B(x)

where S(x) is the strategy's distribution function and there is at least one strict inequality.

If AD_1B, then for all values of x, the probability of obtaining x or a higher value is at least as large under A as under B.

Sufficient rule 1: A dominates B if Min S_A(x) ≥ Max S_B(x)   (non-overlapping)

Sufficient rule 2: A dominates B if S_A(x) ≤ S_B(x) for all x   (S_A 'below' S_B)

The most important necessary rules:

Necessary rule 1: AD_1B → Mean S_A > Mean S_B

Necessary rule 2: AD_1B → Geometric Mean S_A > Geometric Mean S_B

Necessary rule 3: AD_1B → Min S_A(x) ≥ Min S_B(x)
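A minimal numerical sketch of the first order rules, assuming the equity values for each strategy are available as Monte Carlo samples (the arrays below are illustrative stand-ins for the simulation output):

```python
import numpy as np

def first_order_dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if A dominates B by 1st order stochastic dominance on the empirical
    distributions, i.e. S_A(x) <= S_B(x) for all x, strictly somewhere."""
    grid = np.union1d(a, b)                      # evaluate both empirical CDFs on a common grid
    s_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    s_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return bool(np.all(s_a <= s_b) and np.any(s_a < s_b))

# Illustrative stand-ins for simulated equity values of strategies A and B,
# where B is simply A shifted to the right (so B should dominate A).
rng = np.random.default_rng(1)
a = rng.normal(100, 20, 10_000)
b = a + 30.0
print(first_order_dominates(b, a))   # sufficient rule 2: S_B 'below' S_A for all x
```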

For the case above we find that strategy B dominates strategy A – BD_1A – since sufficient rule 2 for first order dominance is satisfied:

[Figure: Distribution functions for strategies A and B]

And of course, since one of the sufficient conditions is satisfied, all of the necessary conditions are satisfied as well. So our intuition about B being the best strategy is confirmed. However, there are cases where intuition will not work:

[Figure: Cumulative distributions for strategies #1 and #2]

    In this case the distributions cross and there is no first order stochastic dominance:

[Figure: Distribution functions for strategies #1 and #2]

To be able to determine the dominant strategy we have to make further assumptions about the utility function – U'' ≤ 0 (risk aversion) etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a large class of strategies. N-th order dominance is defined by the n-th order distribution function:

S^1(x) = S(x), \quad S^n(x) = \int_{-\infty}^{x} S^{n-1}(u)\,du

    where S(x) is the strategy’s distribution function.

Then strategy A dominates strategy B in the sense of n-th order stochastic dominance – AD_nB – if:

S^n_A(x) ≤ S^n_B(x), with at least one strict inequality, and

E_A U(X) ≥ E_B U(X), with at least one strict inequality,

for all U satisfying (-1)^k U^(k) ≤ 0 for k = 1, 2, …, n.

    The last assumption implies that U has positive odd derivatives and negative even derivatives:

    U’  ≥0 → increasing wealth preference

    U”  ≤0 → risk aversion

    U’’’ ≥0 → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

Calculating the n-th order distribution function when you only have observations of the first order distribution from a Monte Carlo simulation can be difficult. We will instead use the lower partial moments (LPM), since (Ingersoll, 1987):

S^n_A(x) ≡ LPM^{n-1}_A(x) / (n-1)!

Thus strategy A dominates strategy B in the sense of n-th order stochastic dominance – AD_nB – if:

LPM^{n-1}_A(x) ≤ LPM^{n-1}_B(x)
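A sketch of how this can be checked on Monte Carlo output: the lower partial moment of order n at target x is estimated as the sample mean of max(x − X, 0)^n, and dominance at order n is tested by comparing the LPM curves of order n − 1. The data below are illustrative stand-ins, not the strategies discussed here:

```python
import numpy as np

def lpm(targets: np.ndarray, outcomes: np.ndarray, order: int) -> np.ndarray:
    """Lower partial moment E[max(t - X, 0)**order], estimated from Monte Carlo
    outcomes at each target value t."""
    shortfall = np.maximum(targets[:, None] - outcomes[None, :], 0.0)
    return (shortfall ** order).mean(axis=1)

def dominates_nth_order(a: np.ndarray, b: np.ndarray, n: int, tol: float = 1e-9) -> bool:
    """A dominates B by n-th order stochastic dominance (n >= 2) if A's LPM of
    order n-1 lies below B's for all targets, strictly below for at least one."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), 400)
    lpm_a, lpm_b = lpm(grid, a, n - 1), lpm(grid, b, n - 1)
    return bool(np.all(lpm_a <= lpm_b + tol) and np.any(lpm_a < lpm_b - tol))

# Illustrative stand-ins: Y is a mean-preserving spread of X, so X should come
# out dominant at 2nd order even though neither dominates at 1st order.
rng = np.random.default_rng(7)
x = rng.normal(600, 60, 10_000)
y = x.mean() + 3.0 * (x - x.mean())
for n in (2, 3):
    print(f"{n}. order dominance of X over Y: {dominates_nth_order(x, y, n)}")
```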

Now we have the necessary tools for selecting the dominant strategy out of strategy #1 and #2. To see if we have 2nd order dominance, we calculate the first order lower partial moments – as shown in the graph below.

[Figure: 1st order lower partial moments (test for 2nd order dominance)]

Since the curves of the lower partial moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd order LPMs to investigate the possibility of 3rd order dominance:

[Figure: 2nd order lower partial moments (test for 3rd order dominance)]

However, it is only when we calculate the 4th order LPMs that we can conclude with 5th order stochastic dominance of strategy #1 over strategy #2:

[Figure: 4th order lower partial moments (test for 5th order dominance)]

We then have S1 D_5 S2, and we need not look further, since Yamai and Yoshiba (2002) have shown that:

If S1 D_n S2, then S1 D_{n+1} S2.

So we end up with strategy #1 as the preferred strategy for a risk-averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160 vs. 25) and a higher median value (600 vs. 561). But it was not these facts alone that made strategy #1 stochastically dominant – it also has negative skewness (-0.73) against positive skewness (0.80) for strategy #2, and a lower expected value (571) than strategy #2 (648) – it was the 'sum' of all these characteristics.

    A digression

It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk-averse firms (with U'' < 0), strategy #2 must be stochastically dominant for risk-seeking firms (with U'' > 0), but this is not necessarily the case.

However, even if strategy #2 has a larger upside than strategy #1, it can be seen from the graph of the two strategies' upside potential ratios (Sortino, 1999) below that if we believe the outcome will be below a minimal acceptable return (MAR) of 400, then strategy #1 has a higher minimum value and upside potential than #2 – and vice versa above 400.

[Figure: Upside potential ratio for strategies #1 and #2]

    Rational firm’s should be risk averse below the benchmark MAR, and risk neutral above the MAR, i.e., they should have an aversion to outcomes that fall below the MAR . On the other hand the higher the outcomes are above the MAR the more they should like them (Fishburn, 1977). I.e. firm’s seek upside potential with downside protection.

We will return later in this series to how the firm's risks and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P.C. (1977). Mean-Risk analysis with Risk Associated with Below Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H., (2006). Stochastic Dominance. Berlin: Springer.

von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.

  • When in doubt, develop the situation


    Developing the situation is the common-sense approach to dealing with complexity. Both as a method and a mind-set, it uses time and our minds to actively build context, so that we can recognize patterns, discover options, and master the future as it unfolds in front of us (Blaber, 2008)

In our setting, 'developing the situation' is the process of numerically describing (modelling) the company's operations, taking into account input from all parts of the company: sales, procurement, production, finance etc. This again has to be put into the company's environment: tax regimes, interest and currency rates, investors' expected return and all other stakeholders' expectations.

This is a context-building process, ending up with a map of the company's operations that gives clear roles and responsibilities to all departments, and owners to each set of input data (assumptions).

Without including uncertainty and volatility in both assumptions and data, this is however only a two-dimensional map. Adding the ever-present uncertainty gives us the third dimension and the option of innovation:

    … discovering innovative options instead of being forced to default to the status quo. Developing the situation optimizes our potential to recognize patterns and discover innovative options because it’s synergistic with how the human mind thinks and makes decisions (Blaber, 2008)

Having calculated the cumulative probability distributions for the key variables, new information is immediately available. Shape and location tell us about the underlying uncertainty and possible outcomes. Some distributions can be tweaked and some cannot. Characteristics of production like machine speed, error rates or the limit on air traffic movements are given, and can only be changed over time with new investments. Others, like sales, EBITDA, profit etc., can be tweaked and in some cases even fine-tuned by changing some of the exogenous variables or by introducing financial instruments, hedges etc.

Planning for an uncertain future is a hard task, but preparing for it by adapting to the uncertainties and risks uncovered is well within our abilities – giving us:

    …  freedom of choice and flexibility to adapt to uncertainties instead of avoiding them because they weren’t part of the plan. Happenstance, nature, and human behaviour all interact within an environment to constantly alter the situation. No environment is ever static. As the environment around us changes, developing the situation allows us to maintain our most prized freedom: the freedom of choice – to adapt our thinking and decision-making accordingly (Blaber, 2008)

Not all uncertainty represents risk of loss; some of it manifests as opportunity, given the right strategy and the means and will to implement it:

    … having the audacity to seize opportunities, instead of neglecting them due to risk aversion and fear of the unknown. Risk aversion and fear of the unknown are direct symptoms of a lack of context, and are the polar opposites of audacity. The way to deal with a fear of the unknown isn’t to avoid it by doing nothing … (Blaber, 2008)

Pete Blaber's book, originally written on a totally different theme than ours, can – like other good books on strategy and hard-earned experience from military planning – easily be adapted to our civilian purpose.

    References

Blaber, P. (2008). The Mission, the Men, and Me. New York: Berkley.

  • Valuation as a strategic tool


    This entry is part 1 of 2 in the series Valuation

     

Valuation is something usually done only when selling or buying a company (see: probability of gain and loss). However, it is a versatile tool for assessing issues such as risk and strategy, both in operations and finance.

The risk and strategy element is often not evident unless the valuation is executed as a Monte Carlo simulation giving the probability distribution for equity value (or the value of the entity). We will in a new series of posts take a look at how this distribution can be used.

By strategy we will in the following mean a plan of action designed to achieve a particular goal. The plan may involve issues across the finance and operations of the company: debt, equity, taxes, currency, markets, sales, production etc. The goal is usually to move the value distribution to the right (increasing value), but it may well be to shorten the left tail – reducing risk – or to increase the upside by lengthening the right tail.

There are a variety of definitions of risk. In general, risk can be described as "uncertainty of loss" (Denenberg, 1964), "uncertainty about loss" (Mehr & Cammack, 1961) or "uncertainty concerning loss" (Rabel, 1968). Greene defines financial risk as the "uncertainty as to the occurrence of an economic loss" (Greene, 1962).

Risk can also be described as "measurable uncertainty", when the probability of an outcome can be calculated (is knowable), as opposed to uncertainty, when the probability of an outcome cannot be determined (is unknowable) (Knight, 1921). Thus risk can be calculated, but uncertainty only reduced.

In our context some uncertainty is objectively measurable, like downtime, error rates, operating rates, production time, seat factor, turnaround time etc. For others, like sales, interest rates, inflation rates etc., the uncertainty can only be measured subjectively.

    “[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed.” (John Maynard Keynes, 1937)

On this basis we will proceed, using managers' best guesses about the range of possible values and the most likely value for production-related variables, and market consensus etc. for the possible outcomes of variables like inflation, interest rates etc. We will use this to generate appropriate distributions (log-normal) for sales, prices etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate or do not exist, we will use rectangular distributions.
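A sketch of how such input distributions can be generated (the means, ranges and coefficients of variation below are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 10_000

# Sales: log-normal, parametrized from an assumed mean and coefficient of
# variation taken from management's guesstimates (illustrative numbers).
mean, cv = 500.0, 0.15
sigma = np.sqrt(np.log(1.0 + cv**2))
sales = rng.lognormal(np.log(mean) - sigma**2 / 2.0, sigma, n)

# Investment: triangular (low, most likely, high) to avoid long tails.
investment = rng.triangular(80.0, 100.0, 140.0, n)

# No credible 'most likely' value: rectangular (uniform) over the assessed range.
salvage_value = rng.uniform(10.0, 30.0, n)

print(f"sales mean {sales.mean():.0f}, cv {sales.std() / sales.mean():.2f}")
```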

Benoit Mandelbrot (Mandelbrot, 2004) and Nassim Taleb (Taleb, 2007) have rightly criticized the economic profession for "overuse" of the normal distribution – the bell curve. The argument is that it has too thin and short tails. It will thus underestimate the possibility of far-out extremes – that is, low-probability events with high impact (Black Swans).

Since we use Monte Carlo simulation, we can use any distribution to represent the possible outcomes of a variable, so using the normal distribution for its nice statistical properties is not necessary. We can even construct distributions that have the features we look for, without having to describe them mathematically.

However, using normal distributions for some variables and log-normal for others etc. in a value simulation will not give you a normally or log-normally distributed equity value. A number of things can happen in the forecast period: adverse sales, interest or currency rates, incurred losses, new equity called etc. Together with tax, legal and IFRS rules etc., the system will not be linear, and is much more complex to calculate than mere addition, subtraction or multiplication of probability distributions.
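A toy one-period sketch of this non-linearity (the tax rule and all figures are placeholders): even with normal and log-normal inputs, a simple asymmetry such as paying tax only on positive results leaves the resulting distribution skewed, i.e. neither normal nor log-normal.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

ebitda = rng.lognormal(np.log(100.0), 0.25, n)     # log-normal operating result
interest = rng.normal(30.0, 8.0, n)                # normally distributed interest cost
depreciation = 40.0
tax_rate = 0.28                                    # illustrative flat tax rate

profit_before_tax = ebitda - interest - depreciation
tax = np.where(profit_before_tax > 0.0, tax_rate * profit_before_tax, 0.0)  # no refund on losses
change_in_equity = profit_before_tax - tax         # asymmetric, hence not normal or log-normal

z = (change_in_equity - change_in_equity.mean()) / change_in_equity.std()
print(f"skewness of the resulting distribution: {np.mean(z**3):.2f}")
```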

We will in the following adhere to uncertainty and loss, where loss is an event where the calculated equity value is less than the book value of equity or, in the case of M&A, less than the price paid.

Assume that we have calculated the value distribution (cumulative) for two different strategies. The distribution for current operations (blue curve) has a shape showing considerable downside risk (left tail) and a limited upside potential, giving a mean equity value of $92M with a minimum of $-28M and a maximum of $150M. This span of possible outcomes, and the fact that it can be negative, compelled the board to look for new strategies reducing downside risk.

[Figure: Cumulative value distributions for current operations and strategies #1 and #2]

They come up with strategy #1 (green curve), which to a risk-averse board is a good proposition: it reduces downside risk by substantially shortening the left tail, increases the expected value of equity by moving the distribution to the right, and reduces overall uncertainty by producing a more vertical curve. In numbers: the minimum value is raised to $68M, the mean value of equity increased to $112M and the coefficient of variation reduced from 30% to 14%. The upside potential increased somewhat, but not much.
To a risk-seeking board, strategy #2 (red curve) would be a better proposition: the right tail has been stretched out, giving a maximum value of $241M, but so has the left tail, giving a minimum value of $-163M, increasing the event space and the coefficient of variation to 57%. The mean value of equity has been slightly reduced, to $106M.

So how could these strategies have been brought about? Strategy #1 could involve the introduction of long-term energy contracts, taking advantage of today's low energy cost. Strategy #2 introduces a new product with high initial investments and considerable uncertainty about market acceptance.

As we now can see, the shape of the value distribution gives a lot of information about the company's risks and opportunities. And given the board's risk appetite, it should be fairly simple to select between strategies just by looking at the curves. But what if it is not obvious which is the best? We will return later in this series to that question, and to how the company's risks and opportunities can be calculated.

    References

Denenberg, H., et al. (1964). Risk and insurance. Englewood Cliffs, NJ: Prentice-Hall, Inc.
    Greene, M. R. (1962). Risk and insurance. Cincinnati, OH: South-Western Publishing Co.
    Keynes, John Maynard. (1937). General Theory of Employment. Quarterly Journal of Economics.
    Knight, F. H. (1921). Risk, uncertainty and profit. Boston, MA: Houghton Mifflin Co.
    Mandelbrot, B., & Hudson, R. (2006). The (Mis) Behavior of Markets. Cambridge: Perseus Books Group.
Mehr, R. I., & Cammack, E. (1961). Principles of insurance (3rd ed.). Richard D. Irwin, Inc.
    Rable, W. H. (1968). Further comment. Journal of Risk and Insurance, 35 (4): 611-612.
    Taleb, N., (2007). The Black Swan. New York: Random House.

  • Fish farming


When we were asked in 2002 to look into the risk of cod fish farming, we had to start with the basics: how do cod feed and grow at different locations, and what is the mortality at those locations?

    The first building block was Björn Björnsson’s paper; Björnsson, B., Steinarsson, A., Oddgeirsson, M. (2001). Optimal temperature for growth and feed conversion of immature cod. ICES Journal of Marine Science, 58: 29-38.

Together with Björn Björnsson, Marine Research Institute, Iceland, and Nils Henrik Risebro, University of Oslo, Norway, we did the study presented in the attached paper – Growth, mortality, feed conversion and optimal temperature for maximum rate of increase in biomass and earnings in cod fish farming. (Growth, mortality, feed conversion and optimal temperature for maximum …)

    This formed the basis for a stochastic simulation model used to calculate the risk in investing in cod fish farming at different locations in Norway.

[Figure: Simulation model for fish farming]

The stochastic part was taken from the "estimation errors" for the relations between growth, feed conversion, mortality etc. as functions of the deviation from the optimal temperature.

As the optimal temperature varies with cod size, the temperature at a fixed location will, during the year and over the production cycle, deviate from the optimal temperature. Locations with temperature profiles close to the optimal temperature profile for growth in biomass will, other parameters held constant, be more favorable.
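The sketch below illustrates the mechanics only; the functional forms and parameters are made up for illustration, while the real relations between growth, mortality and temperature are those estimated in the paper. Growth is penalized for deviating from a size-dependent optimal temperature, a stochastic term stands in for the estimation error, and the simulation is run over a production cycle for a given temperature profile:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative monthly temperature profile for one location (deg C); the real
# profiles come from the hydrographic station data referred to below.
temperature = np.array([5, 5, 6, 7, 9, 11, 13, 14, 12, 10, 8, 6], dtype=float)

def monthly_growth(weight_g, temp, rng):
    """Stylized, made-up growth response: the optimal temperature falls with fish
    size, growth is penalized for deviating from it, and the noise term stands in
    for the 'estimation error' of the fitted relations."""
    t_opt = 15.0 - 2.0 * np.log10(weight_g)
    base = 0.12 * np.exp(-0.5 * ((temp - t_opt) / 4.0) ** 2)
    return base + rng.normal(0.0, 0.02, size=np.shape(weight_g))

n_sim, months = 5_000, 24
weight = np.full(n_sim, 200.0)          # start weight per fish, grams
survival = 1.0
for m in range(months):
    weight *= 1.0 + monthly_growth(weight, temperature[m % 12], rng)
    survival *= 1.0 - 0.01              # illustrative monthly mortality

biomass_per_stocked_fish = weight * survival
print(f"mean {biomass_per_stocked_fish.mean():.0f} g, "
      f"CV {biomass_per_stocked_fish.std() / biomass_per_stocked_fish.mean():.2f}")
```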

    The results that came out favorably for certain locations were subsequently used as basis for an IPO to finance the investment.

The use of the model was presented in an article in Norsk Fiskeoppdrett 2002, #4 and 5. It can be downloaded here (see: Cod fish farming); even though it is in Norwegian, some of the graphs might be of interest.

The following graph sums up the project. It is based on local yield in biomass relative to the yield at the optimal temperature profile for growth in biomass. The farming operation is simulated at different locations along the coast of Norway, and local yield and its coefficient of variation (standard deviation divided by mean) are plotted against the location's position north. As we can see, not only does the yield increase as the location moves north, but the coefficient of variation also falls, indicating less risk in an investment.

[Figure: Yield and coefficient of variation as a function of position north]

The temperature profiles for the locations were taken from the Institute of Marine Research publication: Hydrographic normals and long-term variations at fixed surface layer stations along the Norwegian coast from 1936 to 2000, Jan Aure and Øyvin Strand, Fisken og Havet, #13, 2001.

[Figure: Locations of fixed thermographic stations along the coast of Norway]

The study gives the monthly mean and standard deviation of the temperature (and salinity) in the surface layer at the coastal stations between Sognesjøen and Vardø for the period 1936-1989.

[Figure: Monthly mean of temperature in the surface layer at all stations]

By employing a specific temperature profile in the simulation model, we were able to estimate the probability distribution for one-cycle biomass at that location, as given in the figure below.

[Figure: Probability distribution for one-cycle biomass at the location]

Having the probability distribution for production, we added forecasts for costs and prices as well as their variance. The probability distribution for production also gives the probability distribution for the necessary investment, so that in the end we were able to calculate the probability distribution for the value of the entity (equity).

[Figure: Value of the fish farm operation]

  • What is the correct company value?


The Nobel Prize winner in Economics, Milton Friedman, has said: "the only concept/theory which has gained universal acceptance by economists is that the value of an asset is determined by the expected benefits it will generate".

Value is not the same as price. Price is what the market is willing to pay. Even if the value is high, most buyers want to pay as little as possible. One basic relationship will be the investor's demand for return on capital – the investor's expected rate of return. There will always be alternative investments, and in a free market the investor will compare each investment alternative's attractiveness against his demand for return on invested capital. If the required return on invested capital exceeds the investment's expected future proceeds, the investment is considered less attractive.

[Figure: Value vs. price]

One critical issue is therefore to estimate the correct company value – a value that reflects the real values in the company. In its simplest form this can be achieved by:

Budgeting a simple cash flow for the forecast period, with a fixed interest cost throughout the period, and adding the value to the booked balance.
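A sketch of this simplest form, with purely illustrative figures:

```python
# The 'simple' method: an assumed five-year cash flow budget, a fixed discount
# rate and booked equity; all figures are illustrative placeholders.
cash_flow = [12.0, 14.0, 15.0, 15.0, 16.0]     # budgeted cash flow per year
rate = 0.08                                    # fixed interest/discount rate
book_equity = 90.0                             # booked balance value

present_value = sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flow))
indicative_value = book_equity + present_value
print(f"indicative company value: {indicative_value:.1f}")
```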

This evaluation will be an indicator, but it implies a series of simplifications that can distort reality considerably. For instance, real balance values generally differ from book values, proceeds/dividends are paid out according to legislation, and the level of debt will normally vary throughout the forecast period. These are some of the factors suggesting that the above premises open up the possibility of substantial deviations compared to an integral and detailed evaluation of the company's real values.

    A more correct value can be provided through:

• Correcting the opening balance, forecasting and budgeting operations, and estimating complete profit & loss and balance sheets for the whole forecast period, incorporating a market-weighted average cost of capital when discounting.

The last method is considerably more demanding, but it gives an evaluation result that can be tested and that can also take into consideration qualitative values that are implicitly part of the forecast.
The result is then used as input to a risk analysis, so that the probability distribution for the value given by the chosen evaluation method appears. With this method, a more correct picture emerges of what the expected value is, given the set of assumptions and inputs.

    The better the value is explained, the more likely it is that the price will be “right”.

    The chart below illustrates the method.

[Figure: Value vs. price]