
Tag: FRM

  • The fallacies of Scenario analysis


    This entry is part 1 of 4 in the series The fallacies of scenario analysis

     

    Scenario analysis is often used in company valuation – with high, low and most likely scenarios to estimate the value range and expected value. A common definition seems to be:

    Scenario analysis is a process of analyzing possible future events or series of actions by considering alternative possible outcomes (scenarios). The analysis is designed to allow improved decision-making by allowing consideration of outcomes and their implications.

    Actually this definition covers at least two different types of analysis:

    1. Alternative scenario analysis; in politics or geo-politics, scenario analysis involves modeling the possible alternative paths of a social or political environment and possibly diplomatic and war risks – “rehearsing the future”,
    2. Scenario analysis; a number of versions of the underlying mathematical problem are created to model the uncertain factors in the analysis.

The first addresses “wicked” problems: ill-defined, ambiguous and associated with strong moral, political and professional issues. Since they are strongly stakeholder dependent, there is often little consensus about what the problem is, let alone how to resolve it (Rittel & Webber, 1973).

The second covers “tame” problems: problems that have well-defined and stable problem statements and belong to a class of similar problems which are all solved in the same way (Conklin, 2001). Tame, however, does not mean simple – a tame problem can be technically very complex.

Scenario analysis in the latter sense is a compromise between computationally complex stochastic models (the S&R approach) and overly simplistic, often unrealistic, deterministic models. Each scenario is a limited representation of the uncertain elements, and one sub-problem is generated for each scenario.

Best-case/worst-case scenario analysis.
    With risky assets, the actual cash flows can be very different from expectations. At the minimum, we can estimate the cash flows if everything works to perfection – a best case scenario – and if nothing does – a worst case scenario.

    In practice, each input into asset value is set to its best (or worst) possible outcome and the cash flows estimated with those values.

Thus, when valuing a firm, the revenue growth rate, operating margin etc. are set at their highest possible levels, while interest rates etc. are set at their lowest, and then the best-case scenario value is computed.

The question now is: will this really give the best (or worst) value – or, if say the 95% (5%) percentile is chosen for each input, will that give the 95% (5%) percentile for the firm’s value?

Let’s say that in the first case – (X + Y) – we want to calculate entity value by adding the ‘NPV of market value of FCF’ (X) and the ‘NPV of continuing value’ (Y). Both are stochastic variables; X is positive while Y can be positive or negative. In the second case – (X – Y) – we want to calculate the value of equity by subtracting the value of debt (Y) from the entity value (X). Both X and Y are stochastic, positive variables.

From statistics we know that for the joint distribution of (X ± Y) the expected value E(X ± Y) is E(X) ± E(Y), and that Var(X ± Y) is Var(X) + Var(Y) ± 2Cov(X,Y). Already from the expression for the variance we can see that the percentiles will not in general combine in this simple way – only the expected value will.

We can demonstrate this by calculating a number of percentiles for two independent normal distributions (with Cov(X,Y)=0, to keep it simple), adding (subtracting) them, and plotting the result (red line) together with the same percentiles from the joint distribution – blue line for (X+Y) and green line for (X-Y).
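A minimal sketch of this comparison (the means and standard deviations below are illustrative only, not taken from any valuation):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Two independent normal variables - parameters chosen only for illustration
x = rng.normal(loc=100, scale=20, size=n)   # e.g. NPV of free cash flow
y = rng.normal(loc=50, scale=15, size=n)    # e.g. NPV of continuing value

for p in (5, 25, 50, 75, 95):
    naive = np.percentile(x, p) + np.percentile(y, p)   # adding the input percentiles
    joint = np.percentile(x + y, p)                     # percentile of the joint distribution
    print(f"P{p:>2}: sum of percentiles = {naive:7.1f}   percentile of X+Y = {joint:7.1f}")
```

Only the 50% percentile (here also the expected value, since the inputs are symmetric) coincides; further out on the tails the naive sum overstates the spread, exactly as the figures below show.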

[Figure: percentiles of the joint distribution of X+Y (blue) vs. the sum of the individual percentiles (red)]

As we can see, the lines for X+Y coincide only at the expected value, and the deviation increases as we move out on the tails. For X-Y the deviation is even more pronounced:

[Figure: percentiles of the joint distribution of X-Y (green) vs. the difference of the individual percentiles (red)]

Plotting the deviation from the joint distribution as a percentage of (X ± Y) shows very large relative deviations as we move out on the tails, and that the sign of the operator completely changes the direction of the deviations:

[Figure: percentage deviation from the joint distribution across percentiles]

Add to this a valuation analysis with a large number of:

    1. both correlated and auto-correlated stochastic variables,
    2. complex calculations,
    3. simultaneous equations,

    and there is no way of finding out where you are on the probability distribution – unless you do a complete Monte Carlo simulation. It is like being out in the woods at night without a map and compass – you know you are in the woods but not where.

Some advocate using scenario analysis to measure the risk of an asset as the difference between the best case and the worst case. Based on the above this can only be a very bad idea, since risk in the sense of loss is connected to the left tail, where the deviation from the joint distribution can be expected to be the largest. This brings us to the next post in the series.

    References

    Rittel, H., and Webber, M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, Vol. 4, pp 155-169. Elsevier Scientific Publishing Company, Inc: Amsterdam.

Conklin, Jeff (2001). Wicked Problems. Retrieved April 28, 2009, from CogNexus Institute Web site: http://www.cognexus.org/wpf/wickedproblems.pdf

     

  • Valuation as a strategic tool


    This entry is part 1 of 2 in the series Valuation

     

Valuation is something usually done only when selling or buying a company (see: probability of gain and loss). However, it is a versatile tool for assessing issues such as risk and strategy, in both operations and finance.

The risk and strategy element is often not evident unless the valuation is executed as a Monte Carlo simulation, giving the probability distribution for the value of equity (or the entity value). We will in a new series of posts take a look at how this distribution can be used.

By strategy we will in the following mean a plan of action designed to achieve a particular goal. The plan may involve issues across the finance and operations of the company: debt, equity, taxes, currency, markets, sales, production etc. The goal is usually to move the value distribution to the right (increasing value), but it may well be to shorten the left tail – reducing risk – or to increase the upside by lengthening the right tail.

There are a variety of definitions of risk. In general, risk can be described as “uncertainty of loss” (Denenberg, 1964), “uncertainty about loss” (Mehr & Cammack, 1961), or “uncertainty concerning loss” (Rabel, 1968). Greene defines financial risk as the “uncertainty as to the occurrence of an economic loss” (Greene, 1962).

    Risk can also be described as “measurable uncertainty” when the probability of an outcome is possible to calculate (is knowable), and uncertainty, when the probability of an outcome is not possible to determine (is unknowable) (Knight, 1921). Thus risk can be calculated, but uncertainty only reduced.

In our context some uncertainty is objectively measurable, like down time, error rates, operating rates, production time, seat factor, turnaround time etc. For others, like sales, interest rates and inflation rates, the uncertainty can only be measured subjectively.

    “[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed.” (John Maynard Keynes, 1937)

On this basis we will proceed, using managers’ best guesses about the range of possible values and the most likely value for production-related variables, and market consensus etc. for possible outcomes of variables like inflation and interest rates. We will use this to generate appropriate distributions (log-normal) for sales, prices etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate or do not exist, we will use rectangular (uniform) distributions.
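As a minimal sketch, assuming placeholder parameters (none of the numbers below come from an actual analysis), such inputs could be drawn like this in each Monte Carlo trial:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Log-normal for sales: parametrized by an assumed median and spread (placeholders)
sales = rng.lognormal(mean=np.log(500), sigma=0.2, size=n)

# Triangular for investments: min / most likely / max, avoiding long tails
investment = rng.triangular(left=80, mode=100, right=140, size=n)

# Rectangular (uniform) where no most likely value can be guesstimated
fx_rate = rng.uniform(low=8.0, high=10.0, size=n)
```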

Benoit Mandelbrot (Mandelbrot, 2004) and Nassim Taleb (Taleb, 2007) have rightly criticized the economic profession for “overuse” of the normal distribution – the bell curve. The argument is that it has too thin and short tails. It will thus underestimate the possibility of far-out extremes – that is, low probability events with high impact (Black Swans).

Since we use Monte Carlo simulation we can use any distribution to represent the possible outcomes of a variable, so using the normal distribution for its statistical niceties is not necessary. We can even construct distributions that have the features we look for, without having to describe them mathematically.

However, using normal distributions for some variables and log-normal for others etc. in a value simulation will not give you a normally or log-normally distributed equity value. A number of things can happen in the forecast period: adverse sales, interest or currency rates, incurred losses, new equity called etc. Together with tax, legal and IFRS rules etc., the system will not be linear, and will be much more complex to calculate than mere addition, subtraction or multiplication of probability distributions.

    We will in the following adhere to uncertainty and loss, where loss is an event where calculated equity value is less than book value of equity or in the case of M&A, less than the price paid.

Assume that we have calculated the value distribution (cumulative) for two different strategies. The distribution for current operations (blue curve) has a shape showing considerable downside risk (left tail) and a limited upside potential, giving a mean equity value of $92M with a minimum of $-28M and a maximum of $150M. The span of possible outcomes, and the fact that the value can be negative, compelled the board to look for new strategies reducing downside risk.

[Figure: cumulative value distributions – current operations (blue), strategy #1 (green) and strategy #2 (red)]

They come up with strategy #1 (green curve), which to a risk-averse board is a good proposition: it reduces downside risk by substantially shortening the left tail, increases the expected value of equity by moving the distribution to the right, and reduces the overall uncertainty by producing a steeper curve. In numbers: the minimum value was raised to $68M, the mean value of equity was increased to $112M and the coefficient of variation was reduced from 30% to 14%. The upside potential increased somewhat, but not much.
To a risk-seeking board, strategy #2 (red curve) would be a better proposition: the right tail has been stretched out, giving a maximum value of $241M; however, so has the left tail, giving a minimum value of $-163M, increasing the event space and the coefficient of variation to 57%. The mean value of equity has been slightly reduced, to $106M.
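Given the simulated equity values behind each curve, figures like these can be read directly from the samples. A sketch, using normal placeholder samples in place of the real simulation output:

```python
import numpy as np

rng = np.random.default_rng(7)
# Placeholder samples standing in for the simulated equity values (in $M) of two of the strategies
values_current = rng.normal(92, 28, 100_000)
values_s1 = rng.normal(112, 16, 100_000)

def summarize(values, label):
    mean = values.mean()
    cv = 100 * values.std() / mean            # coefficient of variation, in percent
    p5 = np.percentile(values, 5)             # a simple downside measure: the 5% percentile
    print(f"{label}: mean={mean:.0f}  CV={cv:.0f}%  5% percentile={p5:.0f}")

summarize(values_current, "Current operations")
summarize(values_s1, "Strategy #1")
```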

So how could the strategies have been brought about? Strategy #1 could involve the introduction of long-term energy contracts, taking advantage of today’s low energy cost. Strategy #2 introduces a new product with high initial investments and considerable uncertainty about market acceptance.

As we now can see, the shape of the value distribution gives a lot of information about the company’s risk and opportunities. And given the board’s risk appetite, it should be fairly simple to select between strategies just by looking at the curves. But what if it is not obvious which is best? We will return to that question later in this series, and to how the company’s risk and opportunities can be calculated.

    References

Denenberg, H., et al. (1964). Risk and insurance. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Greene, M. R. (1962). Risk and insurance. Cincinnati, OH: South-Western Publishing Co.
Keynes, John Maynard. (1937). The General Theory of Employment. Quarterly Journal of Economics.
Knight, F. H. (1921). Risk, uncertainty and profit. Boston, MA: Houghton Mifflin Co.
Mandelbrot, B., & Hudson, R. (2006). The (Mis)Behavior of Markets. Cambridge: Perseus Books Group.
Mehr, R. I., & Cammack, E. (1961). Principles of insurance, 3rd edition. Richard D. Irwin, Inc.
Rabel, W. H. (1968). Further comment. Journal of Risk and Insurance, 35(4), 611-612.
Taleb, N. (2007). The Black Swan. New York: Random House.

  • What is the correct company value?


Nobel Prize winner in Economics, Milton Friedman, has said: “the only concept/theory which has gained universal acceptance by economists is that the value of an asset is determined by the expected benefits it will generate”.

Value is not the same as price. Price is what the market is willing to pay; even if the value is high, most buyers want to pay as little as possible. One basic relationship is the investor’s demand for return on capital – the investor’s expected rate of return. There will always be alternative investments, and in a free market the investor will compare the attractiveness of each alternative against his required return on invested capital. If the required return on invested capital exceeds the investment’s expected future proceeds, the investment is considered less attractive.

[Table: value vs. price]

One critical issue is therefore to estimate and establish a correct company value – one that reflects the real values in the company. In its simplest form this can be achieved through:

Budgeting a simple cash flow for the forecast period with fixed interest cost throughout the period, and adding the value to the book balance.

This evaluation will be an indicator, but it implies a series of simplifications that can distort reality considerably. For instance, real balance values generally differ from book values, proceeds/dividends are paid out according to legislation, and the level of debt will normally vary throughout the forecast period. These are some of the factors that open up the possibility of substantial deviation compared to an integrated and detailed evaluation of the company’s real values.

    A more correct value can be provided through:

• Correcting the opening balance, forecasting and budgeting operations, and estimating complete income statements and balance sheets for the whole forecast period. Incorporate a market-value weighted average cost of capital when discounting.

The last method is considerably more demanding, but it gives an evaluation result that can be tested and that can also take into consideration qualitative values that are implicitly part of the forecast.
The result is then used as input to a risk analysis, producing the probability distribution for the value under the chosen evaluation method. With this method a more correct picture emerges of what the expected value is, given the set of assumptions and inputs.

    The better the value is explained, the more likely it is that the price will be “right”.

    The chart below illustrates the method.

[Chart: illustration of the valuation method]

  • The Probability of Gain and Loss


    Every item written into a firm’s profit and loss account and its balance sheet is a stochastic variable with a probability distribution derived from probability distributions for each factor of production. Using this approach we are able to derive a probability distribution for any measure used in valuing companies and in evaluating strategic investment decisions. Indeed, using this evaluation approach we are able to calculate expected gain, loss and their probability when investing in a company where the capitalized value (price) is known.

    For a closer study, please download Corporate-risk-analysis.

    The Probability Distribution for the Value of Equity

    The simulation creates frequency and cumulative probability distributions as shown in the figure below.

[Figure: frequency and cumulative probability distribution for the value of equity]

We can use the information contained in the figure to calculate the risk of investing in the company for different levels of the company’s market capitalization. The expected value of the company is 10.35, read from the intersection between the probability curve and a line drawn from the 50% probability point on the left Y-axis.

    The Probability Distribution for Gain and Loss

The shape of the probability curve provides concise information about the uncertainty in the calculated value of equity. The steeper the probability curve, the smaller the uncertainty; the flatter the curve, the more evident the uncertainty. The figures below illustrate the value of this type of information, enabling calculation of expected gains or losses from investments in a company at differing levels of market capitalization.

We have calculated expected gain or loss as the difference between the expected value of equity and the market capitalization; the ‘S’ curve in the graph shows this. The X-axis gives different levels of market capitalization, the right Y-axis the expected gain (loss), and the left Y-axis the probability. Drawing a line from the 50% probability point to the probability curve and further to the right Y-axis points to the position where the expected gain (loss) is zero. At this point there is a 50/50 chance of gaining or losing money by investing in the company capitalized at 10.35, which is exactly the expected value of the company’s equity.
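A sketch of the same calculation on simulated equity values, with a placeholder distribution standing in for the one in the figure:

```python
import numpy as np

rng = np.random.default_rng(3)
# Placeholder sample for the simulated value of equity (the real one comes from the full model)
equity_value = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=200_000)

for market_cap in (5.0, 10.35, 25.0):
    gain = equity_value - market_cap       # gain (loss) if the company is bought at this price
    p_loss = np.mean(gain < 0)             # probability of losing money at this price
    print(f"capitalization={market_cap:6.2f}  expected gain={gain.mean():6.2f}  P(loss)={p_loss:.0%}")
```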

To the left of this point is the investment area. The green lines show a situation where the company is capitalized at 5.00, indicating an expected gain of 5.35 or more with a probability of 59% (100% - 41%).

[Figures: expected gain/loss and probability at different levels of market capitalization]

    The figure to the right describes a situation where a company is capitalized above the expected value.

To the right is the speculative area, where an industrial investor, with perhaps synergistic possibilities, could reasonably argue a valid case for paying a price higher than the expected value. The red line in the figure indicates a situation where the company is capitalized at 25.00 – giving a loss of 14.65 or more with 78% probability.

To a financial investor it is obviously the left part – the investment area – that is of interest. It is in this area that the expected gain is higher than the expected loss.

  • The weighted average cost of capital


    This entry is part 1 of 2 in the series The Weighted Average Cost of Capital

     

    A more extensive version of this article can be read here in .pdf format.

The weighted average cost of capital (WACC) and the return on invested capital (ROIC) are the most important elements in company valuation, and the basis for most strategy and performance evaluation methods.

WACC is the discount rate (time value of money) used to convert expected future cash flow into present value for all investors. Usually it is calculated assuming both a constant cost of capital and a fixed set of target market value weights ((Valuation, Measuring and Managing the Value of Companies. Tom Copeland et al.)) throughout the time frame of the analysis. While this simplifies the calculations, it also imposes severe restrictions on how a company’s financial strategy can be simulated.

    Now, to be able to calculate WACC we need to know the value of the company, but to calculate that value we need to know WACC. So we have a circularity problem involving the simultaneous solution of WACC and company value.
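One way out of the circularity is a simple fixed-point iteration: guess a value, compute the weights and WACC, revalue, and repeat until the value no longer changes. A minimal sketch for a single perpetuity cash flow (all inputs are illustrative assumptions, not the article’s figures, and the cost of equity is kept constant for simplicity):

```python
# Fixed-point iteration on the WACC / value circularity for a perpetuity:
# V = FCF / WACC, while WACC itself depends on the weights D/V and E/V.
fcf = 100.0         # perpetual free cash flow (assumption)
debt = 400.0        # market value of debt (assumption)
cost_debt = 0.05    # pre-tax cost of debt (assumption)
cost_equity = 0.10  # cost of equity (assumption, kept constant here)
tax = 0.28

value = 1000.0                          # initial guess for entity value
for _ in range(100):
    equity = value - debt
    wacc = cost_debt * (1 - tax) * debt / value + cost_equity * equity / value
    new_value = fcf / wacc              # perpetuity value at the current WACC
    if abs(new_value - value) < 1e-9:   # stop when the value no longer changes
        break
    value = new_value

print(f"value = {value:.2f}  wacc = {wacc:.4f}")
```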

In addition, all the variables and parameters determining the company value will be stochastic, either in themselves or as functions of other stochastic variables. As such, WACC is a stochastic variable – determined by the probability distributions for yield curves, exchange rates, sales, prices, costs and investments. But this also enables us – by Monte Carlo simulation – to estimate a confidence interval for WACC.

    Some researchers have claimed that the free cash flow value only in special cases will be equal to the economic profit value. By solving the simultaneous equations, giving a different WACC for every period, we will always satisfy the identity between free cash flow and economic profit value. In fact we will use this to check that the calculations are consistent.

We will use the most probable values for the variables/parameters in the calculations. Since most of the probability distributions involved are non-symmetric (sales, prices etc.), the expected values will in general not be equal to the most probable values. And as we shall see, this is also the case for the individual values of WACC.

    WACC

To be consistent with the free cash flow or economic profit approach, the estimated cost of capital must comprise a weighted average of the marginal cost of all sources of capital that involve cash payment – excluding non-interest bearing liabilities (in simple form):

WACC = C_d*(1-t)*(D/V) + C_e*(E/V)

    {C_d} = Pre-tax debt nominal interest rate
    {C_e} = Opportunity cost of equity,
    t = Corporate marginal tax rate
D = Market value of debt
    E = Market value of equity
    V = Market value of entity (V=D+E).

    The weights used in the calculation are the ratio between the market value of each type of debt and equity in the capital structure, and the market value of the company. To estimate WACC we then first need to establish the opportunity cost of equity and non-equity financing and then the market value weights for the capital structure.

    THE OPPORTUNITY COST OF EQUITY AND NON-EQUITY FINANCING

    To have a consistent WACC, the estimated cost of capital must:

    1. Use interest rates and cost of equity of new financing at current market rates,
    2. Be computed after corporate taxes,
3. Be adjusted for the systematic risk borne by each provider of capital,
    4. Use nominal rates built from real rates and expected inflation.

However, we need to forecast the future risk-free rates. They can usually be found from the yield curve for treasury notes, by calculating the implicit forward rates.
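A short sketch of that calculation, assuming annual compounding and a placeholder spot yield curve:

```python
# Spot (zero-coupon) yields for maturities 1..5 years - placeholder numbers
spot = [0.030, 0.033, 0.036, 0.038, 0.040]

# One-year forward rate between year t and t+1:
# (1 + s_{t+1})^(t+1) = (1 + s_t)^t * (1 + f_t)
for t in range(1, len(spot)):
    f = (1 + spot[t]) ** (t + 1) / (1 + spot[t - 1]) ** t - 1
    print(f"forward {t}y -> {t + 1}y: {f:.4%}")
```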

    THE OPPORTUNITY COST OF EQUITY

    The equation for the cost of equity (pre investor tax), using the capital asset pricing model (CAPM) is:

C_e = R + beta*M + L

    R  = risk-free rate,
    beta  = the levered systematic risk of equity,
    M  = market risk premium,
    L  = liquidity premium.

If tax on dividend and interest income differs, the risk-free rate and the market premium have to be adjusted, assuming a tax rate t_i on interest income:

    R = (1-t_i)*R  and  M = M+t_i*R.

    t_i = Investor tax rate,
    R  = tax adjusted risk-free rate,
    M = tax adjusted market premium

    The pre-tax cost of equity can then be computed as:

C_e(pre-tax) = C_e/(1-t_d) = R/(1-t_d) + beta*M/(1-t_d) + L/(1-t_d)

where t_d is the investor’s tax rate on capital income; when tax on dividend and interest income differs, the tax-adjusted risk-free rate and market premium above are used ((See also: Wacc and a Generalized Tax Code, Sven Husmann et al., Diskussionspapier 243 (2001), Universität Hannover)).

    The long-term strategy is a debt-equity ratio of one, the un-levered beta is assumed to be 1.1 and the market risk premium 5.5%. The corporate tax rate is 28%, and the company pays all taxes on dividend. The company’s stock has low liquidity, and a liquidity premium of 2% has been added.
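As a rough sketch of the calculation behind the table below, assuming a 4% risk-free rate and the Hamada formula for relevering beta (the article’s own numbers come from the forward rates and its own relevering, so these are placeholders):

```python
# Parameters from the text
beta_unlevered = 1.1
market_premium = 0.055
liquidity_premium = 0.02
tax = 0.28
debt_equity = 1.0           # long-term target D/E ratio

risk_free = 0.04            # assumption - in the article this comes from the forward rates

# One common relevering formula (Hamada); the article may use another
beta_levered = beta_unlevered * (1 + (1 - tax) * debt_equity)

# Simple CAPM with a liquidity premium: C_e = R + beta*M + L
cost_of_equity = risk_free + beta_levered * market_premium + liquidity_premium
print(f"levered beta = {beta_levered:.2f}, cost of equity = {cost_of_equity:.1%}")
```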

[Table: cost of equity calculation]

    In the Monte Carlo simulation all data in the tables will be recalculated for every trial (simulation), and in the end produce the basis for estimating the probability distributions for the variables. This approach will in fact create a probability distribution for every variable in the profit and loss account as well as in the balance sheet.

    THE OPPORTUNITY COST OF DEBT

    It is assumed that the pre-tax debt interest rate can be calculated using risk adjusted return on capital (RAROC) as follows:

    Lenders Cost = L_C+L_L+L_A+L_RP

    L_C = Lenders Funding Cost (0.5%),
    L_L = Lenders Average Expected Loss (1.5%),
    L_A = Lenders Administration Cost (0.8%),
    L_RP= Lenders Risk Premium (0.5%).

The parameters (and volatility) have to be estimated for the different types of debt involved. In this case there are two types: short-term with a maturity of four years and long-term with a maturity of ten years. The risk-free rates are taken from the implicit forward rates in the yield curve, and the lender’s cost is set to 3.3%.

    In every period the cost and value of debt are recalculated using the current rates for that maturity, ensuring use of the current (future) opportunity cost of debt.

    THE MARKET VALUE WEIGHTS

    By solving the simultaneous equations, we find the market value for each type of debt and equity:

    And the value weights:

Multiplying the value weights by the respective rates and adding them gives us the periodic most probable WACC rate:

As can be seen from the table above, the rate varies slightly from year to year. The relatively small differences are mainly due to the low gearing in the forecast period.

    MONTE CARLO SIMULATION

In the figure below we have shown the result from the simulation of the company’s operations, and the resulting WACC for the year 2002. It shows that the expected value of WACC in 2002 is 17.4%, compared with the most probable value of 18.9%. This indicates that the company will need more capital in the future, and that an increasing part will be financed by debt. A graph of the probability distributions for the yearly capital transactions (debt and equity) in the forecast period would have confirmed this.

    In the figure the red curve indicates the cumulative probability distribution for the value of WACC in this period and the blue columns the frequencies. By drawing horizontal lines on the probability axis (left), we can find confidence intervals for WACC. In this case there is only a 5% probability that WACC will be less than 15%, and a 95% probability that it will be less than 20%. So we can expect WACC for 2002 with 90% probability to fall between 15% and 20%. The variation is quite high  – with a coefficient of variation of 6.8 ((Coefficient of variation = 100*st.dev/mean)).
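The same confidence interval can be read directly off the simulated WACC values; a sketch with a placeholder sample standing in for the real simulation output:

```python
import numpy as np

rng = np.random.default_rng(11)
# Placeholder for the simulated WACC values for the year (the real ones come from the full model)
wacc_samples = rng.normal(0.174, 0.012, 50_000)

p5, p50, p95 = np.percentile(wacc_samples, [5, 50, 95])
cv = 100 * wacc_samples.std() / wacc_samples.mean()   # coefficient of variation
print(f"90% interval: [{p5:.1%}, {p95:.1%}]  median: {p50:.1%}  CV: {cv:.1f}")
```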

    VALUATION

    The value of the company and the resulting value of equity can be calculated using either the free cash flow or the economic profit approach. Correctly done, both give the same value. This is the final test for consistency in the business model. The calculations are given in the tables below, and calculated as the value at end of every year in the forecast period.

As usual, the market value of free cash flow is the discounted value of the yearly free cash flow in the forecast period, while the continuing value is the value of continued operation after the forecast period. All surplus cash is paid out as dividend, so there are no excess marketable securities.

    The company started operations in 2002 after having made the initial investments. The charge on capital is the WACC rate multiplied by the value of invested capital. In this case capital at beginning of each period is used, but average capital or capital at end could have been used with a suitable definition of capital charge.
Economic profit has been calculated by multiplying (ROIC – WACC) by invested capital, and the market value at any period is the net present value of future economic profit. The value of debt – the net present value of future debt payments – is equal under both methods.

Using the same series of WACC rates when discounting the cash flows, both methods give the same value for both the company and its equity. This ensures that the calculations are correct and consistent.
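A toy check of that identity, assuming a constant WACC and a finite horizon where the firm is wound up at book value (with period-specific WACCs the same identity holds, only with period-by-period discount factors; all numbers are illustrative):

```python
# Toy example: discounted free cash flow equals invested capital plus
# discounted economic profit, when the terminal value equals ending book capital.
wacc = 0.10
capital0 = 1000.0
nopat = [130.0, 140.0, 150.0]            # net operating profit after tax, per year
invested = [1050.0, 1100.0, 1100.0]      # invested capital at end of each year

fcf_value, ep_value, cap = 0.0, 0.0, capital0
for t, (profit, cap_end) in enumerate(zip(nopat, invested), start=1):
    fcf = profit - (cap_end - cap)       # free cash flow = NOPAT - net investment
    ep = profit - wacc * cap             # economic profit = NOPAT - capital charge
    fcf_value += fcf / (1 + wacc) ** t
    ep_value += ep / (1 + wacc) ** t
    cap = cap_end

fcf_value += invested[-1] / (1 + wacc) ** len(nopat)   # wind-up at book value
print(f"FCF value: {fcf_value:.2f}   EP value: {capital0 + ep_value:.2f}")   # identical
```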

    Tore Olafsen and John Martin Dervå


  • Risk – Exposure to Gain and Loss


    This entry is part 4 of 6 in the series Monte Carlo Simulation

     

It is only when a decision involves consequences for the decision maker that he faces a situation of risk. A traditional way of understanding risk is to calculate how much a certain event varies over time: the less it varies, the smaller the risk. In every decision where historical data exist, we can identify historical patterns, study them and calculate how much they vary. Such a study gives us a good impression of the kind of risk profile we face.

• Risk – randomness with knowable probabilities.
    • Uncertainty – randomness with unknowable probabilities.

Another situation occurs when little or no historical data is available, but we know all the options fairly well (e.g. tossing a die). We have a given resource, certain alternatives and a limited number of trials. This is analogous to the Manhattan project.

In both cases we are interested in the probability of success. We would like a figure – the probability of gain or loss expressed as a percentage. When we know that number we can decide whether or not to accept the risk.

Just to illustrate risk, budgeting makes a good example. If we have five items in our budget for which we have estimated the expected values (each with a 50% probability of being met), there is only about a three percent probability that all five will hit their expected values at the same time.

0.5^5 = 3.125%

A common mistake is to sum the items’ probabilities rather than multiplying them. The probability that all items hit their targets simultaneously is the product of the individual probabilities.
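A small sketch confirming the figure by simulation, treating each budget item as an independent normal variable with a 50% chance of coming in at or below its budgeted value (all parameters are placeholders):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Five independent budget items; means and spreads are placeholders
items = rng.normal(loc=[100, 80, 60, 40, 20], scale=[10, 8, 6, 4, 2], size=(n, 5))
targets = np.array([100, 80, 60, 40, 20])              # budgeted (expected) values

hit_all = np.mean(np.all(items <= targets, axis=1))    # all five at or below target at once
print(f"P(all five items hit their target simultaneously) = {hit_all:.1%}")   # about 3%
```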