
Category: Corporate Strategy

  • M&A: When two plus two is five or three or …


    When two plus two is five (Orwell, 1949)

    Introduction

Mergers & Acquisitions (M&A) is a way for companies to expand rapidly, much faster than organic growth – growth from existing businesses – would allow. M&A has for decades been a trillion-dollar business, but empirical studies report that a significant proportion of deals must be considered failures.

The conventional wisdom is that the majority of deals fail to add shareholder value to the acquiring company. According to this research, only 30-50 % of deals are considered successful (see Bruner, 2002).

If most deals fail, why do companies keep doing them? Is it because they think the odds won’t apply to them, or are executives more concerned with extending their influence and growing the company (empire building) than with increasing shareholder value?

Many writers argue that these are the main drivers of M&A activity, with the implication that executives are basically greedy (because their compensation is often tied to the size of the company) – or incompetent.

To create shareholder value the M&A must give rise to some form of synergy. Synergy is the ability of the merged companies to generate higher shareholder value (wealth) than the standalone entities; that is, the whole will be greater than the sum of its parts.

For many of the observed M&A’s, however, the opposite has been true – value has been destroyed; the whole has turned out to be less than the sum of its parts (dysergy).

    “When asked to name just one big merger that had lived up to expectations, Leon Cooperman, former co-chairman of Goldman Sachs’ Investment Policy Committee, answered: I’m sure there are success stories out there, but at this moment I draw a blank.” (Sirower, 1997)

The “apparent” M&A failures have also been attributed to methodological and measurement problems: evidence of cost savings or revenue enhancements brought by the M&A is difficult to obtain after the fact. This might also apply to some of the success stories.

What is surprising in most (all?) of the studies of M&A successes and failures is the lack of understanding of the stochastic nature of business activities. For any company it is impossible to estimate its equity value with certainty; the best we can do is to estimate a range of values and the probability that the true value will fall inside this range. The merger of two companies amplifies this, and the discussion of possible synergies or dysergies can only be understood in the context of randomness (stochasticity) ((See: the IFA.com – Probability Machine, Galton Board, Randomness and Fair Price Simulator, Quincunx at http://www.youtube.com/watch?v=AUSKTk9ENzg)).


    The M&A cases

Let’s assume that two companies, A and B, are proposed merged. We have the equity value (shareholder value) distribution for each of them and can calculate the equity distribution for the merged company. Company A’s value is estimated to be in the range of 0 to 150M with expected value 90M. Company B’s value is estimated to be in the range of -40 to 200M with expected value 140M. (See figure below)

If we merge the two companies assuming no synergy or dysergy we get the value (shareholder) distribution shown by the green curve in the figure. The merged company will have a value in the range of 65 to 321M, with an expected value of 230M. Since there is no synergy/dysergy, no value has been created or destroyed by the merger.
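
The figures are not reproduced here, but the mechanics can be sketched in a few lines of Python. The distribution shapes below are assumptions – beta-PERT distributions chosen only to match the stated ranges and expected values – and the no-synergy merged value is simply the run-by-run sum of the two draws:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo runs

def pert(low, mode, high, size):
    """Draw from a beta-PERT distribution on [low, high] with the given mode."""
    a = 1 + 4 * (mode - low) / (high - low)
    b = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(a, b, size)

# Equity value distributions for the two stand-alone companies (M).
# The modes are chosen so that the means come close to the stated 90M and 140M.
value_A = pert(0, 97.5, 150, N)
value_B = pert(-40, 170, 200, N)

# No synergy / dysergy: the merged value is just the sum, run by run.
value_merged = value_A + value_B

print(f"A:      mean {value_A.mean():6.1f}")
print(f"B:      mean {value_B.mean():6.1f}")
print(f"Merged: mean {value_merged.mean():6.1f}, "
      f"5%-95% range {np.percentile(value_merged, 5):.0f} "
      f"to {np.percentile(value_merged, 95):.0f}")
```

Plotting the relative frequencies of value_merged gives the green curve referred to above; synergy cases are then modelled by letting the two companies’ revenues co-vary instead of being drawn independently.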

For company B no value would be added by the merger if A was bought at a price equal to or higher than A’s expected value. If A was bought at a price less than its expected value, there is a probability – but no certainty – that the wealth of company B’s shareholders will increase. Any increase in wealth for company B’s shareholders will be at the expense of company A’s shareholders, and vice versa.

    Case 1

If we assume that there is a “connection” between the companies, such that an increase in one company’s revenues also will increase the revenues of the other, we have a synergy that can be exploited.

This situation is depicted in the figure below. The green curve gives the case with no synergy and the blue the case described above. The difference between them is the synergy created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are high, and even negative (dysergy) if revenues are low.

If we produce a frequency diagram of the sizes of the possible synergies it will look like the diagram below. Have in mind that the average synergy value is not the value we would expect to find, but the average of all possible synergy values.

    Case 2

If we assume that the “connection” between the companies is such that a reduction in one company’s revenue streams will reduce the total production costs, we again have a synergy that can be exploited.
This situation is depicted in the figure below. The green curve gives the case with no synergy and the red the case described above. The difference between them is again the synergy created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are lower, and even negative (dysergy) if revenues are high.

    In this case, the merger acts as a hedge against revenue losses at the cost of parts of the upside created by the merger. This should not deter the participants from a merger since there is only a 30 % probability that this will happen.

    The graph above again gives the frequency diagram for the sizes of the possible synergies. Have in mind that the average synergy value is not the value we would expect to find, but the average of all possible synergy values.

    Conclusion

The elusiveness of synergies in many M&A cases can be explained by the natural randomness of business activities. The fact that a merger can give rise to large synergies does not guarantee that they will occur, only that there is a probability that they will. Spreadsheet exercises in valuation can lead to disaster if the stochastic nature of the companies involved is not taken into account – and basing the pricing of the M&A candidate on expected synergies is pure foolishness.

    References

    Bruner, Robert F. (2002), Does M&A Pay? A Survey of Evidence for the Decision-Maker. Journal of Applied Finance, Vol. 12, No. 1. Available at SSRN: http://ssrn.com/abstract=485884

    Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg.

Aristotle, Metaphysica: “The whole is more than the sum of its parts.”

     

Sirower, M. (1997). The Synergy Trap: How Companies Lose the Acquisition Game. New York: The Free Press.

  • You only live once


    This entry is part 4 of 4 in the series The fallacies of scenario analysis

    You only live once, but if you do it right, once is enough.
    — Mae West

Let’s say that you are considering new investment opportunities for your company and that the sales department has guesstimated that the market for one of your products will most likely grow by a little less than 5 % per year. You then observe that the product already has a substantial market and that in fifteen years’ time this market will nearly have doubled:

Building a new plant to accommodate this market growth will be a large investment, so you find that more information about the probability distribution of the product’s future sales is needed. Your sales department then “estimates” the yearly market growth to have a mean close to zero, a lower quartile of minus 5 % and an upper quartile of plus 7 %.

    Even with no market growth the investment is a tempting one since the market already is substantial and there is always a probability of increased market shares.

As quartiles are given, you rightly calculate that there is a 25 % probability that the growth will be above 7 %, but also a 25 % probability that it will be below minus 5 %. On the face of it, and with you being not too risk averse, this looks like a gamble worth taking.

    Then you are informed that the distribution will be heavily left skewed – opening for considerable downside risk. In fact it turns out that it looks like this:

A little alarmed, you order the sales department to come up with a Monte Carlo simulation giving a better view of the possible future paths of the market development.

They return with the graph below, giving the paths for the first ten runs in the simulation, with the blue line giving the average value and the green and red lines the 90 % and 10 % limits of the one thousand simulated outcomes:

The blue line is the yearly ensemble averages ((A set of multiple predictions that are all valid at the same time. The term “ensemble” is often used in physics and physics-influenced literature. In probability theory literature the term probability space is more prevalent.

An ensemble provides reliable information on forecast uncertainties (e.g., probabilities) from the spread (diversity) amongst ensemble members.

Also see: Ensemble forecasting; a numerical prediction method that is used to attempt to generate a representative sample of the possible future states of a dynamic system. Ensemble forecasting is a form of Monte Carlo analysis: multiple numerical predictions are conducted using slightly different initial conditions that are all plausible given the past and current set of observations. It is often used in weather forecasting.)); that is, the time series of the averages of outcomes. The series shows a small decline in market size, but not at an alarming rate. The sales department’s advice is to go for the investment and try to conquer market shares.

You then note that the ensemble average implies that you are able to jump from path to path, and since each path is a different realization of the future, that will not be possible – you only live once!

You again call the sales department, asking them to calculate each path’s average growth rate (over time) – using the geometric mean – and report the average of these averages to you. When you plot both the ensemble and the time averages you find quite a large difference between them:

    The time average shows a much larger market decline than the ensemble average.

It can be shown (Peters, 2010) that the ensemble average always will overestimate the growth and thus can falsely lead to wrong conclusions about the market development.
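
A minimal sketch of this effect, using an illustrative symmetric growth distribution rather than the skewed one above; the gap between the ensemble average and the per-path geometric (time) average appears regardless:

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_years = 1_000, 15
start = 100.0  # initial market size (index)

# Illustrative yearly growth rates: mean ~0 but with considerable spread,
# which is all that is needed to show the ensemble/time-average gap.
growth = rng.normal(loc=0.0, scale=0.15, size=(n_paths, n_years))
paths = start * np.cumprod(1.0 + growth, axis=1)

# Ensemble average: average across paths for each year.
ensemble_avg = paths.mean(axis=0)

# Time average: each path's geometric mean growth rate, then averaged.
geo_mean_growth = (paths[:, -1] / start) ** (1.0 / n_years) - 1.0

print(f"Ensemble average market size after {n_years} years: {ensemble_avg[-1]:.1f}")
print(f"Average of per-path geometric growth rates: {geo_mean_growth.mean():+.2%}")
print(f"Median end value: {np.median(paths[:, -1]):.1f}")
```

The ensemble average stays close to the starting level, while the typical (median) path and the per-path geometric growth rates drift below it – the multiplicative, you-only-live-once effect described above.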

    If we look at the distribution of path end values we find that the lower quartile is 64 and the upper quartile is 118 with a median of 89:

It thus turns out that the process behind the market development is non-ergodic ((The term ergodic is used to describe dynamical systems that have the same behavior averaged over time as averaged over space.)) or non-stationary ((Stationarity is a necessary, but not sufficient, condition for ergodicity.)). In the ergodic case both the ensemble and time averages would have been equal and the problem above would not have appeared.

    The investment decision that at first glance looked a simple one is now more complicated and can (should) not be decided based on market development alone.

    Since uncertainty increases the further we look into the future, we should never assume that we have ergodic situations. The implication is that in valuation or M&A analysis we should never use an “ensemble average” in the calculations, but always do a full simulation following each time path!

    References

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338


  • The probability distribution of the bioethanol crush margin


    This entry is part 1 of 2 in the series The Bio-ethanol crush margin

    A chain is no stronger than its weakest link.

    Introduction

Producing bioethanol is a high-risk endeavor, with adverse price developments and crumbling margins.

In the following we will illustrate some of the risks the bioethanol producer is facing, using corn as feedstock. However, these risks will persist regardless of the feedstock and production process chosen. The elements in the discussion below can therefore be applied to any and all types of bioethanol production:

1. What average yield (kg ethanol per kg feedstock) can we expect? And what is the shape of the yield distribution?
2. What will the future price ratio of feedstock to ethanol be? And what volatility can we expect?

The crush margin ((The relationship between prices in the cash market is commonly referred to as the Gross Production Margin.)) measures the difference between the sales proceeds of finished bioethanol and its feedstock ((It can also be considered as the production’s throughput: the rate at which the system converts raw materials to money. Throughput is net sales less variable cost, generally the cost of the most important raw materials (see: Throughput Accounting).)).

With current technology, one bushel of corn can be converted into approx. 2.75 gallons of ethanol and 17 pounds of DDG (distillers’ dried grains). The crush margin (or gross processing margin) is then:

    1. Crush margin = 0.0085 x DDG price + 2.8 x ethanol price – corn price

Since 65 % to 75 % of the variable cost in bioethanol production is the cost of corn, the crush margin is an important metric, especially since the margin in addition shall cover all other expenses like energy, electricity, interest, transportation, labor etc. – and, in the long term, the facility’s fixed costs.
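
As a worked example of equation 1 (with invented prices; DDG in $/short ton, ethanol in $/gallon, corn in $/bushel – the 0.0085 factor is 17 lbs of DDG expressed in short tons):

```python
def crush_margin(corn_usd_per_bushel, ethanol_usd_per_gallon, ddg_usd_per_ton):
    """Gross crush margin per bushel of corn, equation 1 above.

    Uses 17 lbs of DDG per bushel (= 0.0085 short tons) and the CBOT factor
    of 2.8 gallons of ethanol per bushel, as in the post.
    """
    return 0.0085 * ddg_usd_per_ton + 2.8 * ethanol_usd_per_gallon - corn_usd_per_bushel

# Purely illustrative prices (not actual quotes):
print(crush_margin(corn_usd_per_bushel=6.50,
                   ethanol_usd_per_gallon=2.60,
                   ddg_usd_per_ton=180.0))   # -> 2.31 USD per bushel
```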

    The following graph taken from the CME report: Trading the corn for ethanol crush, (CME, 2010) gives the margin development in 2009 and the first months of 2010:

This graph gives a good picture of the uncertainties that face the bioethanol producers, and can be a helpful tool when hedging purchases of corn and sales of the products ((The historical chart going back to April 2005 is available at the CBOT web site.)).

    The Crush Spread, Crush Profit Margin and Crush Ratio

    There are a number of other ways to formulate the crush risk (CME, July 11. 2011):

    The CBOT defines the “Crush Spread” as the Estimated Gross Margin per Bushel of Corn. It is calculated as follows:

    2. Crush Spread = (Ethanol price per gallon X 2.8) – Corn price per bushel, or as

    3. Crush Profit margin = Ethanol price – (Corn price/2.8).

    Understanding these relationships is invaluable in trading ethanol stocks ((We will return to this in a later post.)).

By rearranging the crush spread equation, we can express the spread as its ratio to the product price (simplifying by keeping by-products like DDG etc. out of the equation):

    4. Crush ratio = Crush spread/Ethanol price = y – p,

where y = EtOH yield (gal/bushel corn) and p = corn price/ethanol price.
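
Written out, with $P_e$ the ethanol price per gallon and $P_c$ the corn price per bushel, the rearrangement behind equation 4 is simply:

$$\text{Crush ratio} \;=\; \frac{\text{Crush spread}}{P_e} \;=\; \frac{y\,P_e - P_c}{P_e} \;=\; y - \frac{P_c}{P_e} \;=\; y - p.$$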

    We will in the following look at the stochastic nature of y and p and thus the uncertainty in forecasting the crush ratio.

The crush spread and thus the crush ratio are calculated using data from the same period; they therefore give the result of an unhedged operation. Even if the production period is short – two to three days – it will be possible to hedge both the corn and ethanol prices. But to do that in a consistent and effective way we have to look into the inherent volatility of the operations.

    Ethanol yield

The ethanol yield is usually set to 2.682 gal/bushel corn, assuming 15.5 % moisture. The yield is however a stochastic variable contributing to the uncertainty in the crush ratio forecasts. As only the starch in corn can be converted to ethanol, we need to know the content of extractable starch in a standard bushel of corn – corrected for normal loss and moisture. In the following we will lean heavily on the article “A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch” by Tad W. Patzek (Patzek, 2006), which fits our purpose perfectly. All relevant references can be found in that article.

The aim of his article was to establish the mean extractable starch in hybrid corn and the mean highest possible yield of ethanol from starch. We, however, are also interested in the probability distributions for these variables – since no production company will ever experience the mean values (ensembles) and since the average return over time always will be less than the return using ensemble means ((We will return to this in a later post.)) (Peters, 2010).

    The purpose of this exercise is after all to establish a model that can be used as support for decision making in regard to investment and hedging in the bioethanol industry over time.

    From (Patzek, 2006) we have that the extractable starch (%) can be described as approx. having a normal distribution with mean 66.18 % and standard deviation of 1.13:

    The nominal grain loss due to dirt etc. can also be described as approx. having a normal distribution with mean 3 % and a standard deviation of 0.7:

The probability distribution for the theoretical ethanol yield (kg/kg corn) can then be found by Monte Carlo simulation ((See formula #3 in (Patzek, 2006).)) as:

– having an approx. normal distribution with mean 0.364 kg EtOH/kg of dry grain and standard deviation of 0.007. On average we will need 2.75 kg of clean dry grain to produce one kilo, or 1.27 liter, of ethanol ((With a specific density of 0.787 kg/l.)).
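
Patzek’s formula #3 is not reproduced here; the sketch below instead uses the standard stoichiometric factor of about 0.568 kg ethanol per kg starch (starch → glucose → ethanol), which – combined with the two normal distributions above – reproduces the stated mean of 0.364 kg/kg:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

starch = rng.normal(66.18, 1.13, N) / 100.0   # extractable starch, fraction of dry grain
loss   = rng.normal(3.0, 0.7, N) / 100.0      # nominal grain loss (dirt etc.), fraction

# Stoichiometric maximum: starch -> glucose (x 180/162) -> ethanol (x 92/180),
# i.e. about 0.568 kg ethanol per kg starch.
stoich = (180.0 / 162.0) * (92.0 / 180.0)

yield_kg_per_kg = starch * (1.0 - loss) * stoich

print(f"mean  {yield_kg_per_kg.mean():.3f} kg EtOH / kg dry grain")
print(f"stdev {yield_kg_per_kg.std():.3f}")
print(f"grain needed per kg EtOH: {1.0 / yield_kg_per_kg.mean():.2f} kg")
```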

    Since we now have a distribution for ethanol yield (y) as kilo of ethanol per kilo of corn we will in the following use price per kilo both for ethanol and corn, adjusting for the moisture (natural logarithm of moisture in %) in corn:

We can also use this to find the EtOH yield starting with wet corn, using gal/bushel corn as unit (Patzek, 2006):

giving as theoretical value a mean of 2.64 gal/wet bushel with a standard deviation of 0.05 – which is significantly lower than the “official” figure of 2.8 gal/wet bushel used in the CBOT calculations. More important to us, however, is the fact that we easily can get yields much lower than expected, and thus a real risk of lower earnings than expected. Have in mind that to get a yield above 2.64 gallons of ethanol per bushel of corn, all steps in the process must continuously be at or close to their maximum efficiency – which with high probability never will happen.

    Corn and ethanol prices

    Looking at the price developments since 2005 it is obvious that both the corn and ethanol prices have a large variability ($/kg and dry corn):

The long term trends show a disturbing development with decreasing ethanol prices, increasing corn prices and thus an increasing price ratio:

    “Risk is like fire: If controlled, it will help you; if uncontrolled, it will rise up and destroy you.”

    Theodore Roosevelt

    The unhedged crush ratio

    Since the crush ratio on average is:

Crush ratio = 0.364 – p, where:
0.364 = average EtOH yield (kg EtOH/kg of dry grain) and
p = corn price/ethanol price

The price ratio (p) thus has to be less than 0.364 for the crush ratio to be positive at the outset. As of January 2011 the price ratio has crossed that threshold and has stayed above it for the first months of 2011.

To get a picture of the risk an unhedged bioethanol producer faces from normal variation in yield and forecasted variation in the price ratio alone, we will make a simple forecast for April 2011 using the historic time series information on trend and seasonal factors:

    The forecasted probability distribution for the April price ratio is given in the frequency graph below:

    This represents the price risk the producer will face. We find that the mean value for the price ratio will be 0.323 with a standard deviation of 0.043. By using this and the distribution for ethanol yield we can by Monte Carlo simulation forecast the April distribution for the crush ratio:

As we see, negative values for the crush ratio are well inside the range of possible outcomes:

The actual value of the average price ratio for April turned out to be 0.376, with a daily maximum of 0.384 and minimum of 0.363. This implies that the April crush ratio with 90 % probability would have been between -0.199 and -0.005, with only the income from DDGs to cover the deficit and all other costs.
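
A minimal sketch of the forecast step described above, assuming both distributions can be treated as normal and independent (the price ratio forecast and yield parameters are the ones stated in the text):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000

# Forecasted April price ratio (corn price / ethanol price, per kg):
# mean 0.323, standard deviation 0.043, as stated above.
p = rng.normal(0.323, 0.043, N)

# Ethanol yield distribution from the previous section (kg EtOH / kg dry grain).
y = rng.normal(0.364, 0.007, N)

crush_ratio = y - p

print(f"mean crush ratio {crush_ratio.mean():.3f}")
print(f"90% interval     {np.percentile(crush_ratio, 5):.3f} "
      f"to {np.percentile(crush_ratio, 95):.3f}")
print(f"P(crush ratio < 0) = {(crush_ratio < 0).mean():.1%}")
```

The frequency diagram of crush_ratio is the forecast distribution referred to above; replacing the forecasted p with the realized April price ratio shows how quickly the margin turns negative.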

    Hedging the crush ratio

The distribution for the price ratio forecast above clearly points out the necessity of price ratio hedging (Johnson, 1960; Stein, 1961).
The time series chart above shows both a negative trend development and seasonal variations in the price ratio. In the short run there is nothing much to do about the trend, but in the longer run other feedstocks and better processes will probably change the trend development (Shapouri et al., 2002).

However, what immediately stands out are the possibilities to exploit the seasonal fluctuations in both markets:

Ideally, raw material is purchased in the months when seasonal factors are low and ethanol sold in the months when seasonal factors are high. In practice this is not fully possible; restrictions on manufacturing, warehousing, market presence, liquidity, working capital and costs set limits to the producer’s degrees of freedom (Dalgran, 2009).

    Fortunately, there are a number of tools in both the physical and financial markets available to manage price risks; forwards and futures contracts, options, swaps, cash-forward, and index and basis contracts. All are available for the producers who understand financial hedging instruments and are willing to participate in this market. See: (Duffie, 1989), (Hull, 2003) and (Bjørk, 2009).

The objective is to change the shape of the margin distribution (red) from one with a large part of its left tail on the negative side of the margin axis to one resembling the green curve below, where the negative part has been removed but most of the upside (right tail) has been preserved – that is, to eliminate negative margins, reduce variability, maintain the upside potential and thus reduce the probability of operating at a net loss:
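
The red and green curves are not reproduced here; the sketch below only illustrates the intended change of shape, using an invented margin distribution and a stylised floor-plus-premium hedge (both the floor and the premium are made-up numbers, not a recommended instrument):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000

# Illustrative unhedged margin distribution (the "red" curve): wide, partly negative.
unhedged = rng.normal(0.15, 0.25, N)

# Stylised hedge: a margin floor bought at a premium, e.g. protection on the
# crush spread. Floor and premium are invented and chosen so the hedged
# margin never goes below zero while most of the upside is kept.
floor, premium = 0.05, 0.05
hedged = np.maximum(unhedged, floor) - premium

for name, m in (("unhedged", unhedged), ("hedged", hedged)):
    print(f"{name:9s} mean {m.mean():+.3f}   P(margin < 0) {np.mean(m < 0):.1%}")
```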

Even if the ideal solution does not exist, a large number of solutions through combinations of instruments can provide satisfactory results. In principle, it does not matter in which markets these instruments exist, since the commodity and financial markets are interconnected. From a strategic standpoint, the purpose is to exploit fluctuations in the market to capture opportunities while mitigating unwanted risks (Mallory et al., 2010).

    Strategic Risk Management

Managing price risk in commodity markets is a complex topic. There are many strategic, economic and technical factors that must be understood before a hedging program can be implemented.

Since all hedging instruments have a cost, and since only ranges of future outcomes – not exact prices – can be forecast in the individual markets, both costs and effectiveness are uncertain.

    In addition, the degrees of desired protection have to be determined. Are we seeking to ensure only a positive margin, or a positive EBITDA, or a positive EBIT? With what probability and to what cost?

    A systematic risk management process is required to tailor an integrated risk management program for each individual bioethanol plant:

The choice of instruments will define different strategies that will affect company liquidity and working capital and ultimately company value. Since the effect of each of these strategies will be of a stochastic nature, it will only be possible to distinguish between them using the concept of stochastic dominance (see: Selecting strategy).

Models that can describe the business operations and underlying risk can be a starting point for such an understanding. Linked to balance simulation they will provide invaluable support to decisions on the scope and timing of hedging programs.

    It is only when the various hedging strategies are simulated through the balance so that the effect on equity value can be considered that the best strategy with respect to costs and security level can be determined – and it is with this that S@R can help.

    References

    Bjørk, T.,(2009). Arbitrage Theory in Continuous Time. Oxford University Press, Oxford.

    CME Group., (2010).Trading the corn for ethanol crush,
    http://www.cmegroup.com/trading/agricultural/corn-for-ethanol-crush.html

CME Group, (July 11, 2011). Ethanol Outlook Report, http://cmegroup.barchart.com/ethanol/

Dalgran, R. A. (2009). Inventory and Transformation Hedging Effectiveness in Corn Crushing. Journal of Agricultural and Resource Economics 34 (1): 154-171.

    Duffie, D., (1989). Futures Markets. Prentice Hall, Englewood Cliffs, NJ.

    Hull, J. (2003). Options, Futures, and Other Derivatives (5th edn). Prentice Hall, Englewood Cliffs, N.J.

    Johnson, L., L., (1960). The Theory of Hedging and Speculation in Commodity Futures, Review of Economic Studies , XXVII, pp. 139-151.

    Mallory, M., L., Hayes, D., J., & Irwin, S., H. (2010). How Market Efficiency and the Theory of Storage Link Corn and Ethanol Markets. Center for Agricultural and Rural Development Iowa State University Working Paper 10-WP 517.

    Patzek, T., W., (2004). Sustainability of the Corn-Ethanol Biofuel Cycle, Department of Civil and Environmental Engineering, U.C. Berkeley, Berkeley, CA.

    Patzek, T., W., (2006). A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch, Natural Resources Research, Vol. 15, No. 3.

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338.

    Shapouri,H., Duffield,J.,A., & Wang, M., (2002). The Energy Balance of Corn Ethanol: An Update. U.S. Department of Agriculture, Office of the Chief Economist, Office of Energy Policy and New Uses. Agricultural Economic Report No. 814.

    Stein, J.L. (1961). The Simultaneous Determination of Spot and Futures Prices. American Economic Review, vol. 51, p.p. 1012-1025.


  • Planning under Uncertainty


    This entry is part 3 of 6 in the series Balance simulation

     

    ‘Would you tell me, please, which way I ought to go from here?’ (asked Alice)
    ‘That depends a good deal on where you want to get to,’ said the Cat.
‘I don’t much care where—’ said Alice.
    ‘Then it doesn’t matter which way you go,’ said the Cat.
    –    Lewis Carroll, Alice’s Adventures in Wonderland

Let’s say that the board has sketched a future desired state (value of equity) for the company and that you are left to find out whether it is possible to get there and, if so, the road to take. The first part implies finding out whether the desired state belongs to the set of feasible future states for your company. If it does, you will need a road map to get there; if it does not, you will have to find out what additional means you will need to get there and whether it is possible to acquire those.

The current state (equity value) of your company is in itself uncertain, since it depends on future sales, costs and profit – variables that usually are highly uncertain. The desired future state is even more so, since you need to find strategies (roads) that can take you there, and of those the one best suited to the situation. The ‘best strategies’ will be those that with highest probability and lowest costs will give you the desired state – that is, that have the desired state or a better one as a very probable outcome:

Each of the ‘best strategies’ will have many different combinations of values for the variables that describe the company that can give the desired state(s). Using Monte Carlo simulations this means that a few, some or many of the thousands of runs – or realizations of future states – will give equity value outcomes that fulfill the required state. What we need then is to find how each of these came about – the transition – and select the most promising ones.

The S@R balance simulation model has the ability to make intermediate stops when the desired state(s) has been reached, giving the opportunity to take out complete reports describing the state(s), how it was reached and by what path of transitional states.

    The flip side of this is that we can use the same model and the same assumptions to take out similar reports on how undesirable states were reached – and their path of transitional states. This set of reports will clearly describe the risks underlying the strategy and how and when they might occur.

The dominant strategy will then be the one that has the desired state or a better one as a very probable outcome and that at the same time has the least probability of highly undesirable outcomes (the stochastically dominant strategy):

Mulling over possible target or scenario analyses – calculating backwards the value of each variable required to meet the target – is a waste of time, since the environment is stochastic and a number of different paths (time-lines) can lead to the desired state:

    And even if you could do the calculations, what would the probabilities be?

Carroll, L. (2010). Alice’s Adventures in Wonderland – Original Version. New York: Cosimo Classics.

  • Uncertainty modeling


    This entry is part 2 of 3 in the series What We Do

    Prediction is very difficult, especially about the future.
    Niels Bohr. Danish physicist (1885 – 1962)

Strategy @ Risk’s models provide the possibility to study risk and uncertainties related to operational activities (costs, prices, suppliers, markets, sales channels etc.), financial issues (interest rate risk, exchange rate risk, translation risk, taxes etc.), strategic issues (investments in new or existing activities, valuation, M&A etc.), and a wide range of budgeting purposes.

All economic activities have an inherent volatility that is an integrated part of their operations. This means that whatever you do, some uncertainty will always remain.

The aim is to estimate the economic impact that such critical uncertainty may have on corporate earnings at risk. This adds a third dimension – probability – to all forecasts and gives new insight: the ability to deal with uncertainties in an informed way, and thus benefits beyond ordinary spreadsheet exercises.

The results from these analyses can be presented in the form of B/S and P&L looking at the coming one to five years (short term) or five to fifteen years (long term), showing the impacts on e.g. equity value, company value, operating income etc., with the purpose of:

• Improving predictability in operating earnings and their expected volatility
• Improving budgeting processes, predicting budget deviations and their probabilities
• Evaluating alternative strategic investment options at risk
• Identifying and benchmarking investment portfolios and their uncertainty
• Identifying and benchmarking individual business units’ risk profiles
• Evaluating equity values and enterprise values and their uncertainty in M&A processes, etc.

    Methods

To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case it is desirable and well managed inside our models.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data; sales, cost, interest and currency rates etc.

We know however that forecasts based on average values are on average wrong. In addition, deterministic models will miss the important uncertainty dimension that gives both the different risks facing the company and the opportunities they bring forth.

S@R has set out to create models that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to holistic balance simulation, taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and stochastic balance simulation can be set about in two different alternatives:

1. by using an EBITDA model to describe the company’s operations, or
2. by using coefficients of fabrication (e.g. kg flour per 1000 bread etc.) as direct input to the balance model – the ‘short cut’ method.

The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about markets, capacity driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long term planning and strategy development.

The second (‘the short cut’) uses coefficients of fabrication and their variations, and is a low effort (cost) alternative, usually using the internal accounting as basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives:

The ‘short cut’ approach is especially suited for quick appraisals of M&A cases where time and data are limited and where one wishes to limit the effort in an initial stage. Later, the data and assumptions can be augmented for much more sophisticated analysis within the same ‘short cut’ framework. In this way the analysis can be built successively in the direction the previous studies suggested.

This also makes it a good tool for short-term (3-5 years) analysis and even for budget assessment. Since it will use a limited number of variables – usually less than twenty – to describe the operations, it is easy to maintain and operate. The variables describing financial strategy and the economic environment come in addition, but will be easy to obtain.

Used in budgeting it will give the opportunity to evaluate budget targets, their probable deviation from the expected result and the probable upside or downside given the budget target (upside/downside ratio).

Done this way, analyses can be run for subsidiaries across countries, translating the P&L and Balance to any currency for benchmarking, investment appraisals, risk and opportunity assessments etc. The final uncertainty distributions can then be ‘aggregated’ to show global risk for the parent company.

An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

Since every run (500 to 1000) in the simulation produces a complete P&L and Balance, the uncertainty curve (distribution) for any financial metric like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’ etc. can be produced.

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the main companies.
    The second approach can also be considered as an introduction and stepping stone to a more holistic Ebitda model.

    Time and effort

The workload for the client is usually limited to a small team of people (1 to 3 persons) acting as project leaders and principal contacts, assuring that all necessary information describing value and risks for the client’s operations can be collected as basis for modeling and calculations. However, the type of data will have to be agreed upon depending on the scope of analysis.

Very often key people from the controller group will be adequate for this work, and if they don’t have the direct knowledge they usually know whom to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but this can be stretched from three to four weeks to the same number of months.

For S@R the time frame will depend on the availability of key personnel from the client and the availability of data. For the second alternative it can take from one to three weeks of normal work; for the first alternative, with more complex models, three to six months. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

    S@R_ValueSim

    Selecting strategy

Models like this are excellent for selection and assessment of strategies. Since we can find the probability distribution for equity value, changes in this distribution brought about by different strategies will form a basis for selection or adjustment of the current strategy. Models including real option strategies are a natural extension of these simulation models:

If there is a strategy with a curve to the right of and under all other feasible strategies, this will be the stochastically dominant one. If the curves cross, further calculations need to be done before a stochastically dominant or preferable strategy can be found:
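
A minimal sketch of the first-order dominance test implied here, comparing simulated equity value distributions for two hypothetical strategies (the distributions are invented for illustration). If the empirical curves cross – which can also happen just in the extreme tails of a finite sample – further analysis, e.g. second-order dominance, is needed, as noted above:

```python
import numpy as np

def first_order_dominates(a, b, grid_size=200):
    """True if strategy `a` first-order stochastically dominates `b`,
    i.e. a's cumulative distribution lies at or below b's everywhere."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(cdf_a <= cdf_b))

rng = np.random.default_rng(4)
equity_strategy_1 = rng.normal(130, 25, 5_000)   # illustrative simulated equity values
equity_strategy_2 = rng.normal(100, 25, 5_000)

# True means strategy 1's curve lies to the right of (and under) strategy 2's.
print(first_order_dominates(equity_strategy_1, equity_strategy_2))
print(first_order_dominates(equity_strategy_2, equity_strategy_1))
```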

    Types of problems we aim to address:

The effects of uncertainties on the P&L and Balance, and the effects of the Board’s strategies (market, hedging etc.) on future P&L and Balance sheets, evaluating:

    • Market position and potential for growth
    • Effects of tax and capital cost
    • Strategies
    • Business units, country units or product lines –  capital allocation – compare risk, opportunity and expected profitability
    • Valuations, capital cost and debt requirements, individually and effect on company
    • The future cash-flow volatility of company and the individual BU’s
    • Investments, M&A actions, their individual value, necessary commitments and impact on company
    • Etc.

The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against uncertain factors.

    Used in budgeting, this will improve budget stability through higher insight in cost side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.

This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies by analyzing the individual strategies’ risks and potential – and selecting the alternative that is (stochastically) dominant given the company’s chosen risk-profile.

    A severe depression like that of 1920-1921 is outside the range of probability. –The Harvard Economic Society, 16 November 1929

  • Stochastic Balance Simulation


    This entry is part 1 of 6 in the series Balance simulation

    Introduction

Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on single value forecasts – the expected or average value of the input data: sales, cost, interest and currency rates etc. We know however that forecasts based on average values are on average wrong (Savage, 2002). In addition, deterministic models will miss the important dimension of uncertainty that gives both the different risks facing the company and the opportunities they produce.

In contrast, a stochastic model is calculated a large number of times with different values for the input variables, drawn from the set of possible values of each individual variable. Each run will then give a probable realization of future cash flow or of the company’s equity value etc. With thousands of runs we can plot the relative frequencies of the calculated values:

and thus we have succeeded in generating the probability distribution for the company’s equity value. In insurance this type of technique is often called Dynamic Financial Analysis (DFA), which actually is a fitting name.
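
A deliberately tiny stand-in for such a simulation – not the S@R balance model – showing the mechanics of turning single-value inputs into an equity value distribution (all figures are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 1_000   # number of simulation runs

# Every input that is a single number in a deterministic budget becomes a
# distribution here; the "model" itself is reduced to one crude formula.
sales      = rng.lognormal(mean=np.log(1000), sigma=0.15, size=N)   # MNOK
ebitda_mgn = rng.normal(0.18, 0.04, N)                              # EBITDA margin
wacc       = rng.normal(0.09, 0.01, N)
net_debt   = 600.0                                                   # MNOK, fixed here

# Crude equity value: capitalised EBITDA less net debt (no growth, no taxes).
equity_value = sales * ebitda_mgn / wacc - net_debt

# The relative frequencies of the runs approximate the equity value distribution.
print(f"mean {equity_value.mean():.0f}  "
      f"5% {np.percentile(equity_value, 5):.0f}  "
      f"95% {np.percentile(equity_value, 95):.0f}  "
      f"P(equity < 0) {(equity_value < 0).mean():.1%}")
```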

    The Balance Simulation Model

The main tool in the S@R toolbox is the balance model. The starting point is the company’s balance, which is treated as the simulation’s opening balance. In the case of a greenfield project – new factories, power plants, airports, etc. built from scratch – the opening balance is empty.

The successive balances are then built from the Profit & Loss, by simulation of the company’s operations through an EBITDA model mimicking the real life operations. Investments can be driven by demand (capacity calculations) or by investment programs giving the necessary or planned production capacity. The model will throughout the simulation raise debt (short and/or long term) or equity (domestic or foreign) according to the financial strategy set out by the company and the difference between cash outflow and inflow, adjusted for the minimum cash level.

Since this is a dynamic model, it will raise equity when losses occur and/or the maximum debt/equity ratio has been exceeded. On the other hand, it will repay loans, pay dividends, repurchase shares or purchase marketable securities with excess cash (cash above the need for operations) – all in line with the board’s shareholder strategy.

    The ledger and Double-entry Bookkeeping

The activity described in the EBITDA model – investments, purchase of raw materials, production, payment of wages, income from sales, payment of special taxes on investments etc. – is registered as transactions in the ledger, following a standard chart of accounts with double-entry bookkeeping. In a similar fashion all financial transactions – loan repayments, cash, taxes paid and deferred, agio and disagio, etc. – are posted in the ledger. Currently, approximately 400 accounts are in use.

    The Trial Balance and the Financial Statements

The trial balance (post-closing) is compiled and checked for balance between total debits and total credits. The income statement is then prepared using revenue and expense accounts from the trial balance, and the balance sheet is prepared from the asset and liability accounts by including net income with the other equity accounts – using the International Financial Reporting Standards (IFRS).

The general purpose of producing the trial balance is to ensure that the entries in the ledger are mathematically correct. Have in mind that every run in a simulation will produce a number of entries in the ledger and that they might differ not only in size but also in type, depending on the realized states of the company’s operations (see above). We therefore need to be sure that the final financial statements – for every run – are correctly produced, since they will be the basis for all further financial analysis of the company.

There are of course other sources of errors in bookkeeping – compensating errors, errors of omission, errors of principle etc. – but after many years of use, with millions of runs, we feel confident that the ledger and financial statements are produced correctly. The point is that serious problems need serious models.
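
A toy illustration of the double-entry discipline and trial-balance test described above (accounts and amounts are invented, and the real model uses a full chart of roughly 400 accounts):

```python
from collections import defaultdict

# Each posting is (account, debit, credit); every transaction must balance.
ledger = [
    ("Plant & machinery",   500.0,   0.0),   # investment ...
    ("Cash",                  0.0, 500.0),   # ... paid in cash
    ("Cash",                400.0,   0.0),   # sales income ...
    ("Sales revenue",         0.0, 400.0),
    ("Raw materials cost",  250.0,   0.0),   # feedstock purchases ...
    ("Cash",                  0.0, 250.0),
]

balances = defaultdict(float)
for account, debit, credit in ledger:
    balances[account] += debit - credit

total_debits  = sum(d for _, d, _ in ledger)
total_credits = sum(c for _, _, c in ledger)

# The trial-balance test: a run is only accepted if debits equal credits.
assert abs(total_debits - total_credits) < 1e-9, "ledger out of balance"
print(dict(balances))
```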

However, there are more benefits to be had from simulating the ledger and trial balance:

1. It increases the model’s transparency; the trial balance can be printed out and audited. Together with the model’s extensive reporting and error/consistency control, it is no longer a ‘black box’ to the user.
2. It makes it easy to plug in new EBITDA models for other types of industry, giving an automated check for consistency with the main balance simulation model.
3. It is used to ensure correct solving of all implicit equations in the model. The most obvious is of course the interest and bank balance equation (interest depends on the bank balance and the bank balance depends on the interest – see the sketch after this list), but others, like translation hedging and limits set by the company’s financial strategy, create large and complicated systems of simultaneous equations.
4. The trial balance changes from year to year are also used to ensure correct year-to-year balance transitions.
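
A toy version of the interest/bank-balance circularity mentioned in point 3, solved here by simple fixed-point iteration (the figures are invented, and the actual model solves far larger simultaneous systems in multiple dimensions):

```python
# Interest expense depends on average debt over the year, and end-of-year debt
# depends on the interest paid - a small implicit equation.
opening_debt = 800.0                 # MNOK
cash_flow_before_interest = 120.0    # MNOK available for interest and repayment
rate = 0.06

closing_debt = opening_debt          # initial guess
for _ in range(50):
    interest = rate * (opening_debt + closing_debt) / 2.0
    new_closing_debt = opening_debt - (cash_flow_before_interest - interest)
    if abs(new_closing_debt - closing_debt) < 1e-9:
        break
    closing_debt = new_closing_debt

print(f"interest {interest:.2f}  closing debt {closing_debt:.2f}")
```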

    Financial Analysis, Financial Measures and Valuation

Given the framework described above, financial analysis can be performed and the expected value, variability and probability distributions for the different types of ratios – profitability, liquidity, activity, debt and equity etc. – can be calculated and given as graphs. All important measures are calculated at least twice from different starting points to ensure consistency and correct solving of implicit equations.

The following table shows the reconciliation of Economic Profit, initially calculated as (ROIC – WACC) multiplied by invested capital:
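
The table itself is not reproduced here, but the identity being checked is the standard one,

$$\text{Economic profit} \;=\; (\text{ROIC} - \text{WACC}) \times \text{Invested capital} \;=\; \text{NOPLAT} - \text{WACC} \times \text{Invested capital},$$

since ROIC = NOPLAT / Invested capital; the model computes economic profit along both routes for every run and period and requires the results to agree.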

The motivation for doing all these consistency controls – in all nearly one hundred – lies in previous experience with cash flow/valuation models written in Excel. The level of detail is more often than not so low that there is no way to establish whether they are right or wrong.

More interesting than ratios are the yearly distributions for EBITDA, EBIT, NOPLAT, profit (loss) for the period, free cash flow, economic profit, ROIC, WACC, debt, equity and equity value etc., giving a visual picture of the uncertainties and risks the company faces:

Financial analysis is the conversion of financial data into useful information for decision making. Therefore, virtually any use of financial statements or other financial data for some purpose is financial analysis, and it is a primary focus of accounting and finance. Financial analysis can be internal (e.g., decision analysis by a company using internal data to understand or improve management and operating results) or external (e.g., comprehensive analysis for the purposes of commercial lending, mergers and acquisitions or investment activities). The key is how to analyze the available data to make correct decisions.

     

    Input

As input the model needs parameter values and operational data. The parameter values fall into several groups, among them:

1. Parameters describing investors’ preferences: market risk premium etc.
2. Parameters describing the company’s financial strategy: leverage, long/short-term debt ratio, expected foreign/domestic debt ratio, economic depreciation, maximum dividend pay-out ratio, translation hedging strategy etc.
3. Parameters describing the economic regime under which it operates: taxes, depreciation scheme etc.
4. Opening balance etc.

Since the model has to produce stochastic forecasts of interest and exchange rates, it will need, for every currency involved (including lower and upper 5 % probability limits):

1. The yield curves,
2. Expected yearly inflation,
3. Depending on the forecast method(s) chosen for the exchange rates: the different currencies’ expected risk premiums or real exchange rates etc.

Since there is a large number of parameters, they are usually read from an Excel template, but the program will, if necessary, ask for missing values or report inconsistent ones.

The company’s operations are best described through an EBITDA model, even if prices, costs and production coefficients and their variability can be read from an Excel template. A dedicated EBITDA model will always give the opportunity for a more detailed and in some cases more complex description of the operations, including forecast and demand models, ‘exotic’ taxes, real option strategies etc.

    Output

S@R has set out to create models that can give answers to both deterministic and stochastic questions: the tables will answer most deterministic issues, while graphs must be used to answer the risk and uncertainty related questions:


1. In all, 27 different reports with more than 70 pages describing operations and the economics of operations.
2. In addition, the probability distributions for all input and output variables are produced.

    Use

This is done by linking dedicated EBITDA models to holistic balance simulation, taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and stochastic balance simulation can be set about in two different alternatives:
1. by using an EBITDA model to describe the company’s operations, or
2. by using coefficients of fabrication (e.g. kg flour per 1000 bread etc.) as direct input to the balance model.

The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about markets, capacity driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

The use of coefficients of fabrication and their variations is a low effort (cost) alternative, using the internal accounting as basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed for the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives.

    In some cases we have used both approaches for the same client, using the last approach for smaller daughter companies with production structures differing from the main companies.
    The second approach can also be considered as an introduction and stepping stone to a more holistic EBITDA model.
    What problems do we solve?

• The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
• This will improve the stability of budgets through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.
• Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modeling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
• This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies by analyzing the individual strategies’ risks and potential – and selecting the alternative that is (stochastically) dominant given the company’s chosen risk-profile.
• Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm’s value within the firm’s feasible set of possibilities.

Strategy@Risk takes advantage of a programming language developed and used for financial risk simulation. We have used this language for over 25 years, and have developed a series of simulation models for industry, banks and financial institutions.

One of the language’s strengths is its ability to solve implicit equations in multiple dimensions. For the specific problems we seek to solve, this is a necessity that provides the degrees of freedom needed to formulate the approach to the problems.

The Strategy@Risk tools have highly advanced properties:

• Models written in a dedicated financial simulation language (with code and data separated; see The risk of spreadsheet errors).
• Solving implicit systems of equations, giving a unique WACC calculated for every period and ensuring that the “Free Cash Flow” value always equals the “Economic Profit” value.
• Programs and models in “windows end-user” style.
• Extended tests for consistency in input, calculations and results.
• Transparent reporting of assumptions and results.

    References

Savage, Sam L. (2002). The Flaw of Averages. Harvard Business Review, November 2002, pp. 20-21.

    Mukherjee, Mukherjee (2003). Financial Accounting. New York: Harper Perennial, ISBN 9780070581555.