
Category: Decision making

  • You only live once

    You only live once

    This entry is part 4 of 4 in the series The fallacies of scenario analysis

    You only live once, but if you do it right, once is enough.
    — Mae West

    Let’s say that you are considering new investment opportunities for your company and that the sales department has guesstimated that the market for one of your products will most likely grow by a little less than 5 % per year. You then observe that the product already has a substantial market and that in fifteen years’ time it will nearly have doubled:
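    (As a quick check: with, say, 4.7 % yearly growth – our illustrative figure, since the exact rate is not given – the market grows by a factor of 1.047^15 ≈ 2.0 over the fifteen years.)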

    Building a new plant to accommodate this market growth will be a large investment, so you find that more information about the probability distribution of the product’s future sales is needed. Your sales department then “estimates” the market’s yearly growth to have a mean close to zero, with a lower quartile of minus 5 % and an upper quartile of plus 7 %.

    Even with no market growth the investment is a tempting one, since the market is already substantial and there is always some probability of increased market share.

    As quartiles are given, you rightly calculate that there will be a 25 % probability that the growth will be above 7 %, but also a 25 % probability that it will be below minus 5 %. On the face of it, and with you not being too risk averse, this looks like a gamble worth taking.

    Then you are informed that the distribution will be heavily left-skewed – opening up considerable downside risk. In fact, it turns out to look like this:

    A little alarmed, you ask the sales department to produce a Monte Carlo simulation giving a better view of the possible future paths of the market development.

    They return with the graph below, showing the paths for the first ten runs in the simulation, with the blue line giving the average value and the green and red lines the 90 % and 10 % limits of the one thousand simulated outcomes:

    The blue line is the yearly ensemble averages ((A set of multiple predictions that are all valid at the same time. The term “ensemble” is often used in physics and physics-influenced literature. In probability theory literature the term probability space is more prevalent.

    An ensemble provides reliable information on forecast uncertainties (e.g., probabilities) from the spread (diversity) amongst ensemble members.

    Also see: Ensemble forecasting; a numerical prediction method that is used to attempt to generate a representative sample of the possible future states of a dynamic system. Ensemble forecasting is a form of Monte Carlo analysis: multiple numerical predictions are conducted using slightly different initial conditions that are all plausible given the past and current set of observations. It is often used in weather forecasting.)), that is, the time series of the averages of outcomes. The series shows a small decline in market size, but not at an alarming rate. The sales department’s advice is to go for the investment and try to conquer market share.

    You then note that the ensemble average implies that you are able to jump from path to path, and since each path is a different realization of the future, that will not be possible – you only live once!

    You again call the sales department, asking them to calculate each path’s average growth rate over time – using the geometric mean – and report the average of these averages to you. When you plot both the ensemble and the time averages you find quite a large difference between them:

    The time average shows a much larger market decline than the ensemble average.

    It can be shown that the ensemble average will always overestimate the growth (Peters, 2010) and thus can lead to wrong conclusions about the market development.
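    The effect can be illustrated with a small, stylized Monte Carlo sketch (our own toy example, not the sales department’s model), assuming multiplicative growth with lognormally distributed yearly growth factors:

```python
import numpy as np

# A toy multiplicative-growth model (our own stylized example, not the sales
# department's model). The point: the ensemble average (mean across paths)
# systematically overstates the growth a single path experiences over time.
rng = np.random.default_rng(1)
n_paths, n_years, start = 1_000, 15, 100.0

# Hypothetical yearly growth factors, lognormally distributed around zero growth.
growth = rng.lognormal(mean=0.0, sigma=0.15, size=(n_paths, n_years))
paths = start * np.cumprod(growth, axis=1)

ensemble_avg = paths.mean(axis=0)[-1]   # ensemble average of year-15 market size
median_path = np.median(paths[:, -1])   # a "typical" single realization
arith_g = growth.mean()                 # arithmetic (ensemble) mean growth factor
geo_g = np.exp(np.log(growth).mean())   # geometric (time) mean growth factor

print(f"arithmetic mean growth factor: {arith_g:.4f}")   # ~1.011 (> 1)
print(f"geometric  mean growth factor: {geo_g:.4f}")     # ~1.000
print(f"year-15 ensemble average: {ensemble_avg:.1f}  vs  median path: {median_path:.1f}")
```

    Because the arithmetic (ensemble) mean growth factor exceeds the geometric (time) mean, the ensemble average drifts upwards even though the typical single path does not.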

    If we look at the distribution of path end values we find that the lower quartile is 64 and the upper quartile is 118 with a median of 89:

    It thus turns out that the process behind the market development is non-ergodic ((The term ergodic is used to describe dynamical systems which have the same behavior averaged over time as averaged over space.))  or non-stationary ((Stationarity is a necessary, but not sufficient, condition for ergodicity. )). In the ergodic case both the ensemble and time averages would have been equal and the problem above would not have appeared.

    The investment decision that at first glance looked a simple one is now more complicated and can (should) not be decided based on market development alone.

    Since uncertainty increases the further we look into the future, we should never assume that we have ergodic situations. The implication is that in valuation or M&A analysis we should never use an “ensemble average” in the calculations, but always do a full simulation following each time path!

    References

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338

    Endnotes

  • The probability distribution of the bioethanol crush margin

    The probability distribution of the bioethanol crush margin

    This entry is part 1 of 2 in the series The Bio-ethanol crush margin

    A chain is no stronger than its weakest link.

    Introduction

    Producing bioethanol is a high-risk endeavor, with adverse price developments and crumbling margins.

    In the following we will illustrate some of the risks the bioethanol producer is facing using corn  as feedstock. However, these risks will persist regardless of the feedstock and production process chosen. The elements in the discussion below can therefore be applied to any and all types of bioethanol production:

    1.    What average yield (kg ethanol per kg feedstock) can we expect?  And  what is the shape of the yield distribution?
    2.    What will the future price ratio of feedstock to ethanol be? And what volatility can we expect?

    The crush margin ((The relationship between prices in the cash market is commonly referred to as the Gross Production Margin.)) measures the difference between the sales proceeds of finished bioethanol and the cost of its feedstock ((It can also be considered as the production’s throughput; the rate at which the system converts raw materials to money. Throughput is net sales less variable cost, generally the cost of the most important raw materials (see: Throughput Accounting).)).

    With current technology, one bushel of corn can be converted into approx. 2.75 gallons of ethanol and 17 pounds of DDG (distillers’ dried grains). The crush margin (or gross processing margin) is then:

    1. Crush margin = 0.0085 x DDG price + 2.8 x ethanol price – corn price

    Since 65 % to 75 % of the variable cost in bioethanol production is the cost of corn, the crush margin is an important metric, especially since the margin must in addition cover all other expenses like energy, electricity, interest, transportation, labor etc. and – in the long term – the facility’s fixed costs.

    The following graph taken from the CME report: Trading the corn for ethanol crush, (CME, 2010) gives the margin development in 2009 and the first months of 2010:

    This graph gives a good picture of the uncertainties that face bioethanol producers, and it can be a helpful tool when hedging purchases of corn and sales of the products ((The historical chart going back to APR 2005 is available at the CBOT web site.)).

    The Crush Spread, Crush Profit Margin and Crush Ratio

    There are a number of other ways to formulate the crush risk (CME, July 11. 2011):

    The CBOT defines the “Crush Spread” as the Estimated Gross Margin per Bushel of Corn. It is calculated as follows:

    2. Crush Spread = (Ethanol price per gallon X 2.8) – Corn price per bushel, or as

    3. Crush Profit margin = Ethanol price – (Corn price/2.8).

    Understanding these relationships is invaluable in trading ethanol stocks ((We will return to this in a later post.)).

    By rearranging the crush spread equation, we can express the spread as its ratio to the product price (simplifying by keeping bi-products like DDG etc. out of the equation):

    4. Crush ratio = Crush spread/Ethanol price = y – p,

    Where: y = EtOH Yield (gal)/ bushel corn and p = Corn price/Ethanol price.
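    For concreteness, equations 1 to 4 can be restated in a small helper function; the prices in the example call are made-up illustrations, not market data:

```python
# Equations 1-4 restated as a small helper. The prices in the example call are
# made-up illustrations, not market data; the DDG term uses the 17 lb ~ 0.0085 ton
# factor from equation 1.
def crush_metrics(ethanol_usd_per_gal, corn_usd_per_bu, ddg_usd_per_ton=0.0,
                  yield_gal_per_bu=2.8):
    crush_margin = (0.0085 * ddg_usd_per_ton
                    + yield_gal_per_bu * ethanol_usd_per_gal
                    - corn_usd_per_bu)                                       # eq. 1, per bushel
    crush_spread = yield_gal_per_bu * ethanol_usd_per_gal - corn_usd_per_bu  # eq. 2
    crush_profit = ethanol_usd_per_gal - corn_usd_per_bu / yield_gal_per_bu  # eq. 3, per gallon
    crush_ratio = yield_gal_per_bu - corn_usd_per_bu / ethanol_usd_per_gal   # eq. 4: y - p
    return crush_margin, crush_spread, crush_profit, crush_ratio

# Hypothetical prices: ethanol $2.50/gal, corn $6.00/bu, DDG $180/ton.
print(crush_metrics(2.50, 6.00, 180.0))   # -> (2.53, 1.0, 0.357..., 0.4)
```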

    We will in the following look at the stochastic nature of y and p and thus the uncertainty in forecasting the crush ratio.

    The crush spread and thus the crush ratio is calculated using data from the same period. They therefore give the result of an unhedged operation. Even if the production period is short – two to three days – it will be possible to hedge both the corn and ethanol prices. But to do that in a consistent and effective way we have to look into the inherent volatility in the operations.

    Ethanol yield

    The ethanol yield is usually set to 2.682 gal/bushel corn, assuming 15.5 % humidity. The yield is however a stochastic variable contributing to the uncertainty in the crush ratio forecasts. As only starch in corn can be converted to ethanol we need to know the content of extractable starch in a standard bushel of corn – corrected for normal loss and moisture.  In the following we will lean heavily on the article: “A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch”, by Tad W. Patzek (Patzek, 2006) which fits our purpose perfectly. All relevant references can be found in the article.

    The aim of his article was to establish the mean extractable starch in hybrid corn and the mean highest possible yield of ethanol from starch. We, however, are also interested in the probability distributions for these variables – since no production company will ever experience the mean values (ensembles), and since the average return over time will always be less than the return calculated using ensemble means ((We will return to this in a later post.)) (Peters, 2010).

    The purpose of this exercise is after all to establish a model that can be used as support for decision making in regard to investment and hedging in the bioethanol industry over time.

    From (Patzek, 2006) we have that the extractable starch (%) can be described as approx. having a normal distribution with mean 66.18 % and standard deviation of 1.13:

    The nominal grain loss due to dirt etc. can also be described as approx. having a normal distribution with mean 3 % and a standard deviation of 0.7:

    The probability distribution for the theoretical ethanol yield (kg/kg corn) can then be found by Monte Carlo simulation ((See formula #3 in (Patzek, 2006).)) as:

    – having an approx. normal distribution with mean 0.364 kg EtOH/kg of dry grain and standard deviation of 0.007. On average we will need 2.75 kg of clean dry grain to produce one kilo, or 1.27 liter, of ethanol ((With a specific density of 0.787 kg/l.)).
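    A minimal simulation sketch, assuming the two normal distributions above and a theoretical conversion of about 0.568 kg ethanol per kg starch (our stand-in for formula #3 in Patzek, 2006), reproduces these figures:

```python
import numpy as np

# Sketch of the yield simulation, assuming the two normal distributions cited
# above (Patzek, 2006) and a theoretical conversion of ~0.568 kg ethanol per kg
# starch - our stand-in for formula #3 in the article.
rng = np.random.default_rng(7)
n = 100_000

starch = rng.normal(66.18, 1.13, n) / 100   # extractable starch, fraction of dry grain
loss = rng.normal(3.0, 0.7, n) / 100        # nominal grain loss (dirt etc.), fraction
stoich = 0.568                              # kg EtOH per kg starch, theoretical maximum

yield_kg_per_kg = starch * (1.0 - loss) * stoich

print(f"mean   : {yield_kg_per_kg.mean():.3f} kg EtOH per kg dry grain")    # ~0.364
print(f"st.dev.: {yield_kg_per_kg.std():.4f}")                              # ~0.007
print(f"dry grain needed per kg EtOH: {1 / yield_kg_per_kg.mean():.2f} kg") # ~2.75
```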

    Since we now have a distribution for the ethanol yield (y) in kilos of ethanol per kilo of corn, we will in the following use prices per kilo for both ethanol and corn, adjusting for the moisture (natural logarithm of moisture in %) in the corn:

    We can also use this to find the EtOH yield starting with wet corn and using gal/bushel of corn as the unit (Patzek, 2006):

    giving as theoretical value a mean of 2.64 gal/wet bushel with a standard deviation of 0.05 – which is significantly lower than the “official” figure of 2.8 gal/wet bushel used in the CBOT calculations. More important to us, however, is the fact that we can easily get yields much lower than expected and thus face a real risk of lower earnings than expected. Keep in mind that to get a yield above 2.64 gallons of ethanol per bushel of corn, all steps in the process must continuously be at or close to their maximum efficiency – which with high probability will never happen.

    Corn and ethanol prices

    Looking at the price developments since 2005 it is obvious that both the corn and ethanol prices have a large variability ($/kg, corn on a dry basis):

    The long term trends show a disturbing development with decreasing ethanol price, increasing corn prices  and thus an increasing price ratio:

    “Risk is like fire: If controlled, it will help you; if uncontrolled, it will rise up and destroy you.”

    Theodore Roosevelt

    The unhedged crush ratio

    Since the crush ratio on average is:

    Crush ratio = 0.364 – p, where:
    0.364 = average EtOH yield (kg EtOH/kg of dry grain) and
    p = corn price/ethanol price

    The price ratio (p) thus has to be less than 0.364 for the crush ratio to be positive at the outset. As of January 2011 the price ratio has crossed that threshold and has stayed above it for the first months of 2011.

    To get a picture of the risk an unhedged bioethanol producer faces only from normal variation in yield and forecasted variation in the price ratio we will make a simple forecast for April 2011 using the historic time series information on trend and seasonal factors:

    The forecasted probability distribution for the April price ratio is given in the frequency graph below:

    This represents the price risk the producer will face. We find that the mean value for the price ratio will be 0.323 with a standard deviation of 0.043. By using this and the distribution for ethanol yield we can by Monte Carlo simulation forecast the April distribution for the crush ratio:

    As we can see, negative values for the crush ratio are well inside the range of possible outcomes:
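    A sketch of that forecast, assuming (simplistically) that the April price ratio is approximately normal with the mean and standard deviation given above, and reusing the yield distribution derived earlier:

```python
import numpy as np

# Sketch of the April crush-ratio forecast, assuming (simplistically) that the
# price ratio forecast is approximately normal with the mean and standard
# deviation given above, and using the yield distribution derived earlier.
rng = np.random.default_rng(11)
n = 100_000

yield_kg = rng.normal(0.364, 0.007, n)   # kg EtOH per kg dry grain
p_ratio = rng.normal(0.323, 0.043, n)    # corn price / ethanol price (per kg)

crush_ratio = yield_kg - p_ratio
print(f"mean crush ratio  : {crush_ratio.mean():.3f}")
print(f"P(crush ratio < 0): {(crush_ratio < 0).mean():.1%}")
```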

    The actual value of the average price ratio for April turned out to be 0.376 with a daily maximum of 0.384 and minimum of 0.363. This implies that the April crush ratio with 90 % probability would have been between -0.005 and -0.199, with only the income from DDGs to cover the deficit and all other costs.

    Hedging the crush ratio

    The distribution for the price ratio forecast above clearly points out the necessity of price ratio hedging (Johnson, 1960) and (Stein, 1961).
    The time series chart above shows both a negative trend and seasonal variations in the price ratio. In the short run there is not much to do about the trend, but in the longer run other feedstocks and better processes will probably change it (Shapouri et al., 2002).

    However, what immediately stands out is the possibility of exploiting the seasonal fluctuations in both markets:

    Ideally, raw material is purchased in the months when seasonal factors are low and ethanol is sold in the months when seasonal factors are high. In practice this is not fully possible; restrictions on manufacturing, warehousing, market presence, liquidity, working capital and costs set limits to the producer’s degrees of freedom (Dalgran, 2009).

    Fortunately, there are a number of tools in both the physical and financial markets available to manage price risks; forwards and futures contracts, options, swaps, cash-forward, and index and basis contracts. All are available for the producers who understand financial hedging instruments and are willing to participate in this market. See: (Duffie, 1989), (Hull, 2003) and (Bjørk, 2009).

    The objective is to change the shape of the margin distribution (red) from one having a large part of its left tail on the negative part of the margin axis, to one resembling the green curve below, where the negative part has been removed but most of the upside (right tail) has been preserved – that is, to eliminate negative margins, reduce variability and maintain the upside potential, and thus reduce the probability of operating at a net loss:

    Even if the ideal solution does not exist, a large number of solutions – combinations of instruments – can provide satisfactory results. In principle it does not matter in which market these instruments exist, since the commodity and financial markets are interconnected. From a strategic standpoint, the purpose is to exploit fluctuations in the market to capture opportunities while mitigating unwanted risks (Mallory et al., 2010).

    Strategic Risk Management

    To manage price risk in commodity markets is a complex topic. There are many strategic, economic and technical factors that must be understood before a hedging program can be implemented.

    Since all hedging instruments have a cost, and since only ranges of future outcomes – not exact prices – can be forecast in the individual markets, both costs and effectiveness are uncertain.

    In addition, the degrees of desired protection have to be determined. Are we seeking to ensure only a positive margin, or a positive EBITDA, or a positive EBIT? With what probability and to what cost?

    A systematic risk management process is required to tailor an integrated risk management program for each individual bioethanol plant:

    The choice of instruments will define different strategies that will affect company liquidity and working capital, and ultimately company value. Since the effect of each of these strategies will be of a stochastic nature, it will only be possible to distinguish between them using the concept of stochastic dominance (see: Selecting strategy).

    Models that can describe the business operations and the underlying risk are a starting point for such an understanding. Linked to balance simulation, they will provide invaluable support to decisions on the scope and timing of hedging programs.

    It is only when the various hedging strategies are simulated through the balance so that the effect on equity value can be considered that the best strategy with respect to costs and security level can be determined – and it is with this that S@R can help.

    References

    Bjørk, T., (2009). Arbitrage Theory in Continuous Time. Oxford University Press, Oxford.

    CME Group., (2010).Trading the corn for ethanol crush,
    http://www.cmegroup.com/trading/agricultural/corn-for-ethanol-crush.html

    CME Group., (July 11, 2011). Ethanol Outlook Report, http://cmegroup.barchart.com/ethanol/

    Dalgran, R. A., (2009). Inventory and Transformation Hedging Effectiveness in Corn Crushing. Journal of Agricultural and Resource Economics 34 (1): 154-171.

    Duffie, D., (1989). Futures Markets. Prentice Hall, Englewood Cliffs, NJ.

    Hull, J. (2003). Options, Futures, and Other Derivatives (5th edn). Prentice Hall, Englewood Cliffs, N.J.

    Johnson, L. L., (1960). The Theory of Hedging and Speculation in Commodity Futures, Review of Economic Studies, XXVII, pp. 139-151.

    Mallory, M., L., Hayes, D., J., & Irwin, S., H. (2010). How Market Efficiency and the Theory of Storage Link Corn and Ethanol Markets. Center for Agricultural and Rural Development Iowa State University Working Paper 10-WP 517.

    Patzek, T., W., (2004). Sustainability of the Corn-Ethanol Biofuel Cycle, Department of Civil and Environmental Engineering, U.C. Berkeley, Berkeley, CA.

    Patzek, T., W., (2006). A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch, Natural Resources Research, Vol. 15, No. 3.

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338.

    Shapouri, H., Duffield, J. A., & Wang, M., (2002). The Energy Balance of Corn Ethanol: An Update. U.S. Department of Agriculture, Office of the Chief Economist, Office of Energy Policy and New Uses. Agricultural Economic Report No. 814.

    Stein, J.L. (1961). The Simultaneous Determination of Spot and Futures Prices. American Economic Review, vol. 51, p.p. 1012-1025.

    Footnotes

  • The tool that would improve everybody’s toolkit

    The tool that would improve everybody’s toolkit

    Edge, which every year ((http://www.edge.org/questioncenter.html))   invites scientists, philosophers, writers, thinkers and artists to opine on a major question of the moment, asked this year: “What scientific concept would improve everybody’s cognitive toolkit?”

    The questions are designed to provoke fascinating, yet inspiring answers, and are typically open-ended, such as: “What will change everything?” (2008), “What are you optimistic about?” (2007), and “How is the internet changing the way you think?” (last year’s question). Often these questions ((Since 1998.)) are turned into paperback books.

    This year many of the 151 contributors pointed to risk and uncertainty in their answers. In the following we bring excerpts from some of the answers. We would, however, advise the interested reader to look up the complete answers:

    A Probability Distribution

    The notion of a probability distribution would, I think, be a most useful addition to the intellectual toolkit of most people.

    Most quantities of interest, most projections, most numerical assessments are not point estimates. Rather they are rough distributions — not always normal, sometimes bi-modal, sometimes exponential, sometimes something else.

    Related ideas of mean, median, and variance are also important, of course, but the simple notion of a distribution implicitly suggests these and weans people from the illusion that certainty and precise numerical answers are always attainable.

    JOHN ALLEN PAULOS, Professor of Mathematics, Temple University, Philadelphia.

    Randomness

    The First Law of Randomness: There is such a thing as randomness.
    The Second Law of Randomness: Some events are impossible to predict.
    The Third Law of Randomness: Random events behave predictably in aggregate even if they’re not predictable individually

    CHARLES SEIFE, Professor of Journalism, New York University; formerly journalist, Science magazine; Author, Proofiness: The Dark Arts of Mathematical Deception.

    The Uselessness of Certainty

    Every knowledge, even the most solid, carries a margin of uncertainty. (I am very sure about my own name … but what if I just hit my head and got momentarily confused?) Knowledge itself is probabilistic in nature, a notion emphasized by some currents of philosophical pragmatism. Better understanding of the meaning of probability, and especially realizing that we never have, nor need, ‘scientifically proven’ facts, but only a sufficiently high degree of probability, in order to take decisions and act, would improve everybody’s conceptual toolkit.

    CARLO ROVELLI, Physicist, University of Aix-Marseille, France; Author, The First Scientist: Anaximander and the Nature of Science.

    Uncertainty

    Until we can quantify the uncertainty in our statements and our predictions, we have little idea of their power or significance. So too in the public sphere. Public policy performed in the absence of understanding quantitative uncertainties, or even understanding the difficulty of obtaining reliable estimates of uncertainties usually means bad public policy.

    LAWRENCE KRAUSS, Physicist, Foundation Professor & Director, Origins Project, Arizona State University; Author, A Universe from Nothing; Quantum Man: Richard Feynman’s Life in Science.

    Risk Literacy

    Literacy — the ability to read and write — is the precondition for an informed citizenship in a participatory democracy. But knowing how to read and write is no longer enough. The breakneck speed of technological innovation has made risk literacy as indispensable in the 21st century as reading and writing were in the 20th century. Risk literacy is the ability to deal with uncertainties in an informed way.

    GERD GIGERENZER, Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings.

    Living is fatal

    The ability to reason clearly in the face of uncertainty. If everybody could learn to deal better with the unknown, then it would improve not only their individual cognitive toolkit (to be placed in a slot right next to the ability to operate a remote control, perhaps), but the chances for humanity as a whole.

    SETH LLOYD, Quantum Mechanical Engineer, MIT; Author, Programming the Universe.

    Uncalculated Risk

    We humans are terrible at dealing with probability. We are not merely bad at it, but seem hardwired to be incompetent, in spite of the fact that we encounter innumerable circumstances every day which depend on accurate probabilistic calculations for our wellbeing. This incompetence is reflected in our language, in which the common words used to convey likelihood are “probably” and “usually” — vaguely implying a 50% to 100% chance. Going beyond crude expression requires awkwardly geeky phrasing, such as “with 70% certainty,” likely only to raise the eyebrow of a casual listener bemused by the unexpected precision. This blind spot in our collective consciousness — the inability to deal with probability — may seem insignificant, but it has dire practical consequences. We are afraid of the wrong things, and we are making bad decisions.

    GARRETT LISI, Independent Theoretical Physicist

    And there is more … much more at the Edge site

  • Planning under Uncertainty

    Planning under Uncertainty

    This entry is part 3 of 6 in the series Balance simulation

     

    ‘Would you tell me, please, which way I ought to go from here?’ (asked Alice)
    ‘That depends a good deal on where you want to get to,’ said the Cat.
    ‘I don’t much care where—’ said Alice.
    ‘Then it doesn’t matter which way you go,’ said the Cat.
    –    Lewis Carroll, Alice’s Adventures in Wonderland

    Let’s say that the board has sketched a future desired state (value of equity) of the company and that you are left to find out whether it is possible to get there and, if so, the road to take. The first part implies finding out whether the desired state belongs to the set of feasible future states for your company. If it does, you will need a road map to get there; if it does not, you will have to find out what additional means you will need and whether it is possible to acquire them.

    The current state (equity value) of your company is in itself uncertain, since it depends on future sales, costs and profit – variables that usually are highly uncertain. The desired future state is even more so, since you need to find strategies (roads) that can take you there and, among those, the one best suited to the situation. The ‘best strategies’ will be those that with the highest probability and lowest cost give you the desired state, that is, those that have the desired state or a better one as a very probable outcome:

    Each of the ‘best strategies’ will have many different combinations of values for the variables that describe the company that can give the desired state(s). Using Monte Carlo simulation, this means that a few, some or many of the thousands of runs – or realizations of future states – will give equity value outcomes that fulfill the required state. What we need then is to find out how each of these came about – the transition – and select the most promising ones.
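    In simulation terms this amounts to picking out the realizations that reach the target. A minimal sketch, assuming hypothetical simulated equity values and a hypothetical target (real runs would come from the balance simulation):

```python
import numpy as np

# Illustrative only: 1 000 simulated end-of-horizon equity values for one
# strategy (random numbers standing in for full balance-simulation output).
rng = np.random.default_rng(3)
equity_values = rng.lognormal(mean=4.0, sigma=0.5, size=1_000)

target = 80.0                                    # hypothetical desired equity value
hits = np.flatnonzero(equity_values >= target)   # runs that reach the desired state

print(f"P(desired state reached) ~ {len(hits) / len(equity_values):.1%}")
# The indices in `hits` identify the realizations whose transition paths one
# would inspect to see how the desired state came about.
```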

    The S@R balance simulation model has the ability to make intermediate stops when the desired state(s) has been reached, giving the opportunity to take out complete reports describing the state(s), how they were reached and by what path of transitional states.

    The flip side of this is that we can use the same model and the same assumptions to take out similar reports on how undesirable states were reached – and their path of transitional states. This set of reports will clearly describe the risks underlying the strategy and how and when they might occur.

    The dominant strategy will then be the one that has the desired state or a better one as a very probable outcome and at the same time has the least probability of highly undesirable outcomes (the stochastically dominant strategy):

    Mulling over possible target or scenario analyses – calculating backwards the value of each variable required to meet the target – is a waste of time, since the environment is stochastic and a number of different paths (time-lines) can lead to the desired state:

    And even if you could do the calculations, what would the probabilities be?

    Carroll, L., (2010). Alice‘s Adventures in Wonderland – Original Version. New York: Cosimo Classics.

  • Uncertainty modeling

    Uncertainty modeling

    This entry is part 2 of 3 in the series What We Do

    Prediction is very difficult, especially about the future.
    Niels Bohr. Danish physicist (1885 – 1962)

    Strategy @ Risk’s models provide the possibility to study risk and uncertainties related to operational activities (costs, prices, suppliers, markets, sales channels etc.), financial issues (interest rate risk, exchange rate risk, translation risk, taxes etc.), strategic issues (investments in new or existing activities, valuation, M&A etc.) and a wide range of budgeting purposes.

    All economic activities have an inherent volatility that is an integrated part of its operations. This means that whatever you do some uncertainty will always remain.

    The aim is to estimate the economic impact that such critical uncertainty may have on corporate earnings at risk. This will add a third dimension – probability – to all forecasts and give new insight: the ability to deal with uncertainties in an informed way, and thus benefits beyond ordinary spreadsheet exercises.

    The results from these analyses can be presented in the form of B/S and P&L statements looking at the coming one to five years (short term) or five to fifteen years (long term), showing the impact on e.g. equity value, company value and operating income, with the purpose of:

    • Improve predictability in operating earnings and their expected volatility
    • Improve budgeting processes, predicting budget deviations and their probabilities
    • Evaluate alternative strategic investment options at risk
    • Identify and benchmark investment portfolios and their uncertainty
    • Identify and benchmark individual business units’ risk profiles
    • Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case, it is desirable and it will be well managed inside our models.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data; sales, cost, interest and currency rates etc.

    We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they bring forth.

    S@R has set out to create models that can give answers to both deterministic and stochastic questions, by linking dedicated Ebitda models to holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and the stochastic balance simulation can be set up in two different ways:

    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication (e.g. kg of flour per 1,000 loaves of bread) as direct input to the balance model – the ‘short cut’ method.

    The first approach implies setting up a dedicated Ebitda subroutine to the balance model. This will give detailed answers to a broad range of questions about markets, capacity driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long term planning and strategy development.

    The second approach (‘the short cut’) uses coefficients of fabrication and their variations, and is a low-effort (low-cost) alternative, usually using the internal accounts as its basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed to describe the company’s economic environment (taxes, interest rates etc.) will be the same in both alternatives:

    The ‘short cut’ approach is especially suited for quick appraisals of M&A cases where time and data are limited and where one wishes to limit the effort in an initial stage. Later the data and assumptions can be augmented for much more sophisticated analyses within the same ‘short cut’ framework. In this way the analysis can be successively built out in the direction the previous studies suggest.

    This also makes it a good tool for short-term (3-5 years) analysis and even for budget assessment. Since it will use a limited number of variables – usually less than twenty – describing the operations, it is easy to maintain and operate. The variables describing financial strategy and the economic environment come in addition, but will be easy to obtain.

    Used in budgeting, it will give the opportunity to evaluate budget targets, their probable deviation from the expected result and the probable upside or downside given the budget target (the upside/downside ratio).
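    One possible way to compute such a ratio from the simulated outcomes – our own illustrative definition, not necessarily the exact one used by S@R – is the expected gain above the budget target divided by the expected shortfall below it:

```python
import numpy as np

# One possible upside/downside ratio (our own illustrative definition, not
# necessarily the exact one used by S@R): expected outcome above the budget
# target divided by expected shortfall below it, over the simulated results.
def upside_downside_ratio(simulated_results, budget_target):
    dev = np.asarray(simulated_results) - budget_target
    upside = dev[dev > 0].sum() / dev.size       # expected gain above target
    downside = -dev[dev < 0].sum() / dev.size    # expected shortfall below target
    return upside / downside

rng = np.random.default_rng(5)
results = rng.normal(100.0, 15.0, 1_000)         # hypothetical simulated yearly results
print(round(upside_downside_ratio(results, budget_target=105.0), 2))
```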

    Done this way, analyses can be run for subsidiaries across countries, translating the P&L and balance to any currency for benchmarking, investment appraisals, risk and opportunity assessments etc. The final uncertainty distributions can then be ‘aggregated’ to show the global risk for the mother company.

    An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

    Since all runs (500 to 1000) in the simulation produce a complete P&L and balance, the uncertainty curve (distribution) for any financial metric like ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’ or ‘translation gain/loss’ can be produced.

    In some cases we have used both approaches for the same client, using the second approach for smaller daughter companies with production structures differing from that of the main company.
    The second approach can also be considered an introduction and stepping stone to a more holistic Ebitda model.

    Time and effort

    The workload for the client is usually limited to a small team of people (1 to 3 persons) acting as project leaders and principal contacts, ensuring that all necessary information describing value and risks in the client’s operations can be collected as a basis for modeling and calculations. However, the type of data will have to be agreed upon, depending on the scope of the analysis.

    Very often key people from the controller group will be adequate for this work, and if they don’t have the direct knowledge they usually know who to ask. The work for this team, depending on the scope and choice of method (see above), can vary in effective time from a few days to a couple of weeks, but this can be stretched from three to four weeks to the same number of months.

    For S@R the time frame will depend on the availability of key personnel from the client and the availability of data. The second alternative can take from one to three weeks of normal work, while the first alternative can take three to six months for more complex models. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

    S@R_ValueSim

    Selecting strategy

    Models like this are excellent for the selection and assessment of strategies. Since we can find the probability distribution for equity value, changes in this distribution brought about by different strategies will form a basis for selection or adjustment of the current strategy. Models including real option strategies are a natural extension of these simulation models:

    If there is a strategy whose curve lies to the right of and under the curves of all other feasible strategies, it will be the stochastically dominant one. If the curves cross, further calculations need to be done before a stochastically dominant or preferable strategy can be found:
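    A rough numerical check of first-order stochastic dominance between two simulated equity-value distributions can be sketched as follows (the two lognormal samples are illustrative stand-ins for real simulation output):

```python
import numpy as np

# Rough first-order stochastic dominance check between two strategies' simulated
# equity values (the lognormal samples are stand-ins for real simulation output).
rng = np.random.default_rng(2)
strategy_a = rng.lognormal(4.3, 0.4, 1_000)
strategy_b = rng.lognormal(4.0, 0.4, 1_000)

grid = np.linspace(min(strategy_a.min(), strategy_b.min()),
                   max(strategy_a.max(), strategy_b.max()), 200)
cdf_a = np.searchsorted(np.sort(strategy_a), grid, side="right") / strategy_a.size
cdf_b = np.searchsorted(np.sort(strategy_b), grid, side="right") / strategy_b.size

# A dominates B (1st order) if its CDF lies on or below B's everywhere, i.e. its
# curve is "to the right and under" the other. With finite samples the empirical
# CDFs may cross even when the underlying distributions do not.
if np.all(cdf_a <= cdf_b):
    print("Strategy A is first-order stochastically dominant.")
elif np.all(cdf_b <= cdf_a):
    print("Strategy B is first-order stochastically dominant.")
else:
    print("The curves cross - further analysis (e.g. 2nd order dominance) is needed.")
```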

    Types of problems we aim to address:

    The effects of uncertainties on the P&L and balance, and the effects of the Board’s strategies (market, hedging etc.) on future P&L and balance sheets, evaluating:

    • Market position and potential for growth
    • Effects of tax and capital cost
    • Strategies
    • Business units, country units or product lines –  capital allocation – compare risk, opportunity and expected profitability
    • Valuations, capital cost and debt requirements, individually and effect on company
    • The future cash-flow volatility of company and the individual BU’s
    • Investments, M&A actions, their individual value, necessary commitments and impact on company
    • Etc.

    The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against uncertain factors.

    Used in budgeting, this will improve budget stability through higher insight in cost side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.

    This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies through analysis of the individual strategies’ risks and potential – and selecting the alternative that is (stochastically) dominant given the company’s chosen risk profile.

    A severe depression like that of 1920-1921 is outside the range of probability. –The Harvard Economic Society, 16 November 1929

  • Solving Uncertainty in Simulation Models

    Solving Uncertainty in Simulation Models

    I shall be telling this with a sigh
    Somewhere ages and ages hence:
    Two roads diverged in a wood, and I–
    I took the one less travelled by,
    And that has made all the difference

    …Robert Frost, 1916


    Uncertainty in your operations is most likely complex and will need systematic treatment through simulation modeling. S@R carries out thorough analyses of companies’ risks and uncertainties with the aim of producing good decision support tools, making sure the client takes a huge step forward from scenario analysis.

    The Four Levels of Uncertainty

    The uncertainty that remains after the best possible analysis has been done is what we call residual uncertainty (Courtney, Kirkland & Viguerie, 1997).

    In our world ‘the best possible analysis’ means that we have a model ‘good enough’ to describe the business under study. The question then is – do we need to take into account the uncertainties that always will be inherent in its operations and markets?  And if we have to, is it possible?

    A useful distinction between the different situations that can arise is given by Courtney et al.  as four levels of residual uncertainty (see figure below, McKinsey Quarterly, Dec. 2008):

    1. A Clear-Enough Future; managers can develop a single forecast of the future that is precise enough for strategy development.
    2. Alternate Futures; the future can be described as one of a few discrete scenarios. Analysis cannot identify which outcome will occur, although it may help establish probabilities.
    3. A Range of Futures; a range of potential futures can be identified. That range is defined by a limited number of key variables, but the actual outcome may lie anywhere along a continuum bounded by that range.
    4. True Ambiguity; multiple dimensions of uncertainty interact to create an environment that is virtually impossible to predict.

    In real life there is, however, a problem with identifying the level we are facing. The definition of 1st level uncertainty indicates that the residual uncertainty is irrelevant to the strategic decisions under study. But how is it possible to know this before an uncertainty analysis has been performed?

    The answer has to be that the best possible analysis performed has been a risk/uncertainty analysis taking into account all known uncertainties in the business’s environment, and that of all the business’s feasible strategies one is always best (1st order stochastic dominance).

    The best strategy will then be the one giving a probability distribution for the business’s equity value that is located to the right of and under the distributions for all other strategies. In this case the resulting equity value is of less importance, since it will in any case be larger than under any other strategy. With this established, the actual analysis can be performed as a deterministic calculation.

    For the 2nd level uncertainties with alternate futures, a scenario analysis is often advocated. However, the same applies to each alternative future as for the 1st level uncertainties (also see scenario analysis). In addition, some assumptions have to be made about the probabilities of each of the alternative futures.

    As an example we can take a company analyzing investment in production facilities in two alternative countries. In one country there is a sovereign risk of a future new tax regime, and if it is imposed two different scenarios are possible. In the other country there is a fixed tax regime – not expected to change. In this case you will need at least three (maximum five) models, all taking into account the inherent risk in the business, giving the probability distribution for equity value for:

    1. current operations,
    2. current operations + Investment in the country with no sovereign risk, and
    3. current operations + Investment in the country with sovereign risk;
      1. no new tax scenario and
      2. with each of the two different new tax scenarios.

    The reason for different models, even if the operations in the new facility will be the same regardless of country, lies in the fact that the business strategy might differ between countries and the investment strategy might differ between tax scenarios. The model with sovereign risk will switch between the different tax scenario models according to the probability of their occurrence – generating the distribution for equity value given the sovereign risk.
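    The switching can be sketched as a simple mixture simulation; the scenario probabilities and the three equity distributions below are purely hypothetical stand-ins for the underlying models:

```python
import numpy as np

# Sketch of the scenario switching: the probabilities and the three equity
# distributions are purely hypothetical stand-ins for the underlying models.
rng = np.random.default_rng(9)
n = 1_000

p_no_new_tax, p_tax_a, p_tax_b = 0.6, 0.25, 0.15     # assumed scenario probabilities

equity_no_new_tax = rng.normal(120, 25, n)           # stand-in: no new tax scenario
equity_tax_a = rng.normal(100, 25, n)                # stand-in: new tax scenario 1
equity_tax_b = rng.normal(85, 30, n)                 # stand-in: new tax scenario 2

scenario = rng.choice(3, size=n, p=[p_no_new_tax, p_tax_a, p_tax_b])
equity_sovereign_risk = np.choose(scenario,
                                  [equity_no_new_tax, equity_tax_a, equity_tax_b])

print(f"mean equity under sovereign risk: {equity_sovereign_risk.mean():.1f}")
print(f"5 % quantile: {np.percentile(equity_sovereign_risk, 5):.1f}")
```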

    To justify the investment, at least one of the equity distributions for ‘current operations + investment’ should be located to the right of and under the distribution for ‘current operations’ (i.e. be stochastically dominant). Likewise, the best investment alternative will have an equity distribution located to the right of and under that of the other alternative (i.e. be stochastically dominant).

    Having the equity distribution for the dominant strategy opens for measurement of the strategy’s inherent risk beyond simple value-at-risk calculations, putting emphasis on the possibility of large losses and further unwanted capital infusions.

    As we now can see, directly applying a standard scenario analysis can quickly lead decision makers astray.

    The above classification of the two first levels can in general only be made after a full risk/uncertainty analysis, and can never be used ex ante to select the appropriate method.

    The 3rd level uncertainties describe the normal situation, where all exogenous variables have a range of possible values. Assuming that we can find (estimate or guesstimate) the probability distribution over that range, we can attack the problem by Monte Carlo simulation and calculate the probability distributions for our endogenous variables.

    The 4th level uncertainties comprise at least two different situations: one where there are unknown but knowable probabilities, and one where there are unknown and unknowable probabilities:

    Ambiguity is uncertainty about probability, created by missing information that is relevant and could be known (Camerer & Weber, 1992).

    This leads us to a more comprehensive discussion of the situations that will arise in decision making processes:

    More generally, we propose that in most decision problems, “choice” is nothing but the terminal act of a problem-solving activity, preceded by the formulation of the problem itself, the identification of the relevant information, the application of pre-existing competences or the development of new ones to the problem solution and, finally, the identification of alternative courses of action. (Dosi & Egidi, 1991)

    The origin of uncertainty

    Uncertainty may have two origins:

    1. the lack of all the information which would be necessary to make decisions with certain outcomes (substantive uncertainty), and
    2. limitations on the computational and knowledge based capabilities, given the available information (procedural uncertainty).

    The first source of uncertainty comes from information incompleteness, and the second from the inability to recognize, interpret and act on the relevant information, even when it is available – knowledge incompleteness.

    To distinguish between the two different situations giving Courtney‘s 4th level uncertainty we will follow Dosi & Egidi:

    1. Weak substantive uncertainty (analogous to Knight’s “risk”) is all circumstances where uncertainty simply derives from lack of information about the occurrence of a particular event – with a certain known (or at least knowable) probability distribution, and
    2. Strong substantive uncertainty (analogous to Knight and Keynes “uncertainty”) is all cases involving unknown events or the impossibility, even in principle, of defining the probability distributions of the events themselves.

    Types of Uncertainty

    Uncertainty estimation usually includes estimating the uncertainty of the output parameters via the uncertainty of the input parameters. This is done by estimating a probability distribution for the error. Hence, it is fairly straightforward as long as the input parameters have values. However, the uncertainty of a model may not only stem from the parameters; there may also be uncertainty in the structure of the model, e.g. which variables and parameters are important in the model.

    Adopting the distinction between parametric- and structural uncertainty (Kyläheiko et al., 2002) we can further specify model uncertainty:

    1. Parametric; uncertainty or imperfect knowledge about the parameters in the decision model, and
    2. Structural (epistemic); uncertainty or imperfect knowledge about the structure of the model.

    Combining the above we can describe the types of risk and uncertainty facing both the decision maker and the decision support model as in the following picture:

    The purpose is then to resolve weak substantive parametric and structural uncertainty using good methods and models. The model will constitute a mix of facts ((In a simulation the opening balance is usually considered certain, but the balance sheet often contains highly uncertain items. In fact, auditors should give interval estimates for the most critical items in the yearly balance report.)) (certain values), risks with known (objective) probability distributions, uncertainties given by subjective probability distributions, and a script of the firm’s operations.

    Modeling

    However, models will always have some structural uncertainty – even if it were possible to remove it all by introducing more and more variables and relations. Occam’s razor can usually be applied with good results: select the model that introduces the fewest assumptions and postulates the fewest entities while still sufficiently answering the question. Borrowing from multidimensional scaling the term ‘stress’ – the violation done to the actual decision structure by removing parameters or variables from the model – we can visualize this with the following figure:

    Reducing the dimensionality of the model will not necessarily reduce or move (distort) the endogenous variables’ event space, since correlation exists between the variables omitted and the variables kept in the model, and – depending on estimation methods – the standard errors of the estimated relationships will increase, maintaining the original model variability.

    Strategy

    Maybe the world and the uncertainties we face haven’t changed all that much as a result of the financial crisis, but our perception of risks has. That means there is a real opportunity to rethink the way we make strategic decisions, the way we plan under uncertainty. (Courtney, McKinsey Quarterly, Dec. 2008)

    The development of strategy requires the courage to accept uncertainty. Strategists must accept that they will not have all of the information and not see the full spectrum of possible events, yet be committed to create and implement strategy. The uncertainty that exists is not only a product of not having complete information and being able to predict future events, it also is a product of the events generated by dynamic and thinking competitors.

    By its nature, uncertainty invariably involves the estimation and acceptance of risk. Risk is equally common to action and inaction. Risk may be related to gain; greater potential gain often requires greater risk. However, we should clearly understand that the acceptance of risk does not equate to the imprudent willingness to gamble the entire likelihood of success on improbable events.

    One important step in the direction of better and more informed decision making is the removal of procedural uncertainty by having good models capable of framing the environment of the circumstances under which the decisions are made – giving the best possible analysis.

    It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail. (Maslow, 1966)

    S@R carries out thorough analyses of companies’ risks and uncertainties with the aim of producing good decision support tools, making sure the client takes a huge step forward from scenario analysis.

    References

    A fresh look at strategy under uncertainty: An interview, McKinsey Quarterly, December 2008.

    http://www.mckinseyquarterly.com/fresh_look_at_strategy_under_uncertainty_2256.

    Camerer, C. & Weber, M., (1992). Recent Developments in Modelling Preferences: Uncertainty and Ambiguity, Journal of Risk and Uncertainty, Springer, vol. 5(4), 325-70.

    Courtney, H., (2001). 20/20 Foresight. Boston: Harvard Business School Press.

    Courtney, H. G., Kirkland, J., & Viguerie, P. S., (1997). Strategy Under Uncertainty. Harvard Business Review, 75(6), 67-79.

    Dequech, D., (2000), Fundamental Uncertainty and Ambiguity, Eastern Economic Journal, 26(1), 41-60.

    Dosi, G & Egidi, M, (1991). Substantive and Procedural Uncertainty: An Exploration of Economic Behaviours in Changing Environments, Journal of Evolutionary Economics, Springer, 1(2), 145-68.

    Frost, R., (1916). Mountain interval. Henry Holt And Company.

    Keynes, J., (2004). A Treatise on Probability. New York: Dover Publications.

    Knight, F. (1921). Risk, Uncertainty and Profit. Boston: Houghton Mifflin.

    Kyläheiko, K., Sandström, J. & Virkkunen, V., (2002). Dynamic capability view in terms of real options. International Journal of Production Economics, Volume 80 (1), 65-83.

    Maslow, A., (1966). The Psychology of Science. South Bend: Gateway Editions, Ltd.

    Endnotes