
Category: Corporate risk analysis

  • Forecasting sales and forecasting uncertainty


    This entry is part 1 of 4 in the series Predictive Analytics

     

    Introduction

    A large number of methods are used for forecasting, ranging from judgmental methods (expert forecasting etc.) through expert systems and time series to causal methods (regression analysis etc.).

    Most are used to give a single-point forecast, or at most single-point forecasts for a limited number of scenarios. In the following we will take a look at the limited usefulness of such single-point forecasts.

    As an example we will use a simple forecast ‘model’ for net sales for a large multinational company. It turns out that there is a good linear relation between the company’s yearly net sales in million euro and the growth rate (%) in world GDP:

    with a correlation coefficient of R = 0.995. The relation thus accounts for almost 99% of the variation in the sales data. The observed data are given as green dots in the graph below, and the regression as the green line. The ‘model’ gives expected sales as a constant of 1638M euro plus 53M euro in increased (or decreased) sales per percentage point increase (or decrease) in world GDP growth:
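
    To make the ‘model’ concrete, here is a minimal sketch of the point-forecast calculation. The coefficients are the ones quoted above (a constant of 1638M and 53M per percentage point of GDP growth); the growth rates used are the WEO baseline values quoted further down in this post.

```python
# Point forecasts from the linear 'model' quoted above:
# net sales (M euro) = 1638 + 53 x world GDP growth (%).
def net_sales_forecast(gdp_growth_pct, intercept=1638.0, slope=53.0):
    """Net sales in million euro for a given world GDP growth rate (%)."""
    return intercept + slope * gdp_growth_pct

for year, growth in [(2012, 3.5), (2013, 4.1)]:   # WEO (April 2012) baselines
    print(year, round(net_sales_forecast(growth)), "M euro")
# 2012 -> 1824 M euro, matching the expected value used later in the text.
```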

    The International Monetary Fund (IMF), which kindly provided the historical GDP growth rates, also gives forecasts of the expected future world GDP growth rate (WEO, April 2012) – for the next five years. When we put these forecasts into the ‘model’ we end up with forecasts for net sales for 2012 to 2016, as depicted by the yellow dots in the graph above.

    So mission accomplished!  …  Or is it really?

    We know that the probability of getting a single-point forecast exactly right is zero, even when assuming that the forecast of the GDP growth rate is correct – so the forecasts we have so far will certainly be wrong, but how wrong?

    “Some even persist in using forecasts that are manifestly unreliable, an attitude encountered by the future Nobel laureate Kenneth Arrow when he was a young statistician during the Second World War. When Arrow discovered that month-long weather forecasts used by the army were worthless, he warned his superiors against using them. He was rebuffed. “The Commanding General is well aware the forecasts are no good,” he was told. “However, he needs them for planning purposes.” (Gardner & Tetlock, 2011)

    Maybe we should take a closer look at possible forecast errors, input data and the final forecast.

    The prediction band

    Given the regression we can calculate a forecast band for future observations of sales, given forecasts of the future GDP growth rate. That is the region where, with a certain probability, we expect new values of net sales to fall. In the graph below the green area gives the 95% forecast band:

    Since the variance of the predictions increases the further a new forecast for the GDP growth rate lies from the mean of the sample values (used to compute the regression), the band will widen as we move to either side of this mean. The band will also widen with decreasing correlation (R) and sample size (the number of observations the regression is based on).
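
    For a simple linear regression this is the standard textbook prediction interval for a new observation; it is written out here only to show where the widening comes from (the symbols are the usual ones, not estimates from this post):

    $\hat{y}_0 \pm t_{\alpha/2,\,n-2}\; s\,\sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}}$

    where $\hat{y}_0$ is the predicted net sales at a new GDP growth rate $x_0$, $s$ is the standard error of the regression, $n$ the number of observations and $\bar{x}$ the mean of the observed growth rates. The $(x_0-\bar{x})^2$ term widens the band away from the sample mean, while small $n$ and large $s$ (low R) widen it everywhere.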

    So even if the fit to the data is good, our regression is based on a very small sample, giving plenty of room for prediction errors. In fact, the 95% prediction interval for 2012, with an expected GDP growth rate of 3.5%, is net sales of 1824M plus/minus 82M. Even so, the interval is still only approx. 9% of the expected value.

    Now we have shown that the model gives good forecasts, calculated the prediction interval(s) and shown that the expected relative error(s) with high probability will be small!

    So the mission is finally accomplished!  …  Or is it really?

    The forecasts we have made are based on forecasts of future world GDP growth rates, but how certain are those?

    The GDP forecasts

    Forecasting the future growth in GDP for any country is at best difficult and much more so for the GDP growth for the entire world. The IMF has therefore supplied the baseline forecasts with a fan chart ((  The Inflation Report Projections: Understanding the Fan Chart By Erik Britton, Paul Fisher and John Whitley, BoE Quarterly Bulletin, February 1998, pages 30-37.)) picturing the uncertainty in their estimates.

    This fan chart ((Figure 1.12 from: World Economic Outlook (April 2012), International Monetary Fund, ISBN 9781616352462)) shows as blue colored bands the uncertainty around the WEO baseline forecast with 50, 70, and 90 percent confidence intervals ((As shown, the 70 percent confidence interval includes the 50 percent interval, and the 90 percent confidence interval includes the 50 and 70 percent intervals. See Appendix 1.2 in the April 2009 World Economic Outlook for details.)):

    There is also another band on the chart, implied but unseen, indicating a 10% chance of something “unpredictable”. The fan chart thus covers only 90% of the IMF’s estimates of the future probable growth rates.

    The table below shows the actual figures for the forecasted GDP growth (%) and the limits of the confidence intervals:

    Year    Lower 90%   Lower 70%   Lower 50%   Baseline   Upper 50%   Upper 70%   Upper 90%
    2012       2.5         2.9         3.1         3.5        3.8         4.0         4.3
    2013       2.1         2.8         3.3         4.1        4.8         5.2         5.9

    The IMF has the following comments to the figures:

    “Risks around the WEO projections have diminished, consistent with market indicators, but they remain large and tilted to the downside. The various indicators do not point in a consistent direction. Inflation and oil price indicators suggest downside risks to growth. The term spread and S&P 500 options prices, however, point to upside risks.”

    Our approximation of the distribution that can have produced the fan chart for 2012 as given in the World Economic Outlook for April 2012 is shown below:

    This distribution has: mean 3.43%, standard deviation 0.54, minimum 1.22 and maximum 4.70 – it is skewed with a left tail. The distribution thus also encompasses the implied but unseen band in the chart.
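
    The post does not say which distribution family was fitted to the fan chart; as an illustration, a beta distribution rescaled to [1.22, 4.70] and fitted by the method of moments reproduces a left-skewed distribution with roughly the stated mean and standard deviation:

```python
# A sketch (assumption: a scaled beta distribution) of a left-skewed GDP-growth
# distribution with roughly the stated mean 3.43 %, sd 0.54, min 1.22, max 4.70.
import numpy as np

lo, hi, mean, sd = 1.22, 4.70, 3.43, 0.54

# Method of moments for a beta distribution rescaled to [lo, hi]
m = (mean - lo) / (hi - lo)      # mean on the unit interval
v = (sd / (hi - lo)) ** 2        # variance on the unit interval
k = m * (1 - m) / v - 1          # equals a + b
a, b = m * k, (1 - m) * k        # a > b gives the left skew

rng = np.random.default_rng(2012)
gdp_growth = lo + (hi - lo) * rng.beta(a, b, 100_000)
print("mean:", round(gdp_growth.mean(), 2), "sd:", round(gdp_growth.std(), 2))
```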

    Now we are ready for serious forecasting!

    The final sales forecasts

    By employing the same technique that we used to calculate the forecast band, we can by Monte Carlo simulation compute the 2012 distribution of net sales forecasts, given the distribution of GDP growth rates and the expected variance of the differences between the regression forecasts and new observations. The figure below describes the forecast process:

    We are however not only using the 90% interval for the GDP growth rate or the 95% forecast band, but the full range of the distributions. The final forecasts of net sales are given as a histogram in the graph below:

    This distribution of forecasted net sales has: mean sales 1820M, standard deviation 81M, minimum sales 1590M and maximum sales 2055M – and it is slightly skewed with a left tail.
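
    A minimal sketch of this simulation step, reusing the beta approximation of the GDP growth distribution from the sketch above. The residual standard deviation of 75M is an assumed placeholder – the post does not state the estimated regression error – so the output will only roughly resemble the figures quoted here.

```python
# Monte Carlo sketch of the 2012 net sales forecast: sample GDP growth from the
# approximated WEO distribution, push it through the regression and add a
# regression (prediction) error term. The residual sd of 75 M euro is an
# assumed placeholder, not an estimate from the post.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

lo, hi, a, b = 1.22, 4.70, 5.48, 3.15             # beta fit from the sketch above
gdp_growth = lo + (hi - lo) * rng.beta(a, b, n)    # world GDP growth (%)
residual = rng.normal(0.0, 75.0, n)                # assumed regression error (M euro)

net_sales = 1638.0 + 53.0 * gdp_growth + residual
print("mean:", round(net_sales.mean()), "sd:", round(net_sales.std()))
print("10% / 90% limits:", np.percentile(net_sales, [10, 90]).round(0))
```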

    So what added information have we got from the added effort?

    Well, we now know that there is only a 20% probability for net sales to be lower than 1755M, and a 20% probability for them to be higher than 1890M. The interval from 1755M to 1890M in net sales will then with 60% probability contain the actual sales in 2012 – see the graph below giving the cumulative sales distribution:

    We also know that we with 90% probability will see actual net sales in 2012 between 1720M and 1955M. But most important is that we have visualized the uncertainty in the sales forecasts, and that contingency planning for both low and high sales should be performed.

    An uncertain past

    The Bank of England’s fan chart from 2008 showed a wide range of possible futures, but it also showed the uncertainty about where we were then – see that the black line showing National Statistics data for the past has probability bands around it:

    This indicates that the values for past GDP growth rates are uncertain (stochastic) or contain measurement errors. This of course also holds for the IMF’s historic growth rates, but the IMF does not supply this type of information.

    If the growth rates can be considered stochastic, the results above will still hold, provided the conditional distribution for net sales given the GDP growth rate still fulfills the standard assumptions for using regression methods. If not, other methods of estimation must be considered.

    Black Swans

    But all this uncertainty was still not enough to contain what was to become reality – shown by the red line in the graph above.

    How wrong can we be? Often more wrong than we like to think. This is good – as in useful – to know.

    “As Donald Rumsfeld once said: it’s not only what we don’t know – the known unknowns – it’s what we don’t know we don’t know.”

    While statistical methods may lead us to a reasonable understanding of some phenomenon, that does not always translate into an accurate practical prediction capability. When that is the case, we find ourselves talking about risk: the likelihood that some unfavorable or favorable event will take place. Risk assessment is then necessitated, and we are left only with probabilities.

    A final word

    Sales forecast models are an integral part of our enterprise simulation models – as part of the models’ predictive analytics. Predictive analytics can be described as statistical modeling enabling the prediction of future events or results ((In this case the probability distribution of future net sales.)), using present and past information and data.

    In today’s fast moving and highly uncertain markets, forecasting has become the single most important element of the management process. The ability to quickly and accurately detect changes in key external and internal variables and adjust tactics accordingly can make all the difference between success and failure:

    1. Forecasts must integrate both external and internal drivers of business and the financial results.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
    5. Never rely on single-point or scenario forecasting.

    The forecasts are usually done in three stages: first by forecasting the market for the particular product(s), then the firm’s market share(s), ending up with a sales forecast. If the firm has activities in different geographic markets, the exercise has to be repeated in each market, keeping in mind the correlation between markets:

    1. All uncertainty about the different market sizes, market shares and their correlation will finally end up contributing to the uncertainty in the forecast for the firm’s total sales.
    2. This uncertainty combined with the uncertainty from other forecasted variables like interest rates, exchange rates, taxes etc. will eventually be manifested in the probability distribution for the firm’s equity value.

    The ‘model’ we have been using in the example has never been tested out of sample. Its usefulness as a forecast model is therefore still debatable.

    References

    Gardner, D & Tetlock, P., (2011), Overcoming Our Aversion to Acknowledging Our Ignorance, http://www.cato-unbound.org/2011/07/11/dan-gardner-and-philip-tetlock/overcoming-our-aversion-to-acknowledging-our-ignorance/

    World Economic Outlook Database, April 2012 Edition; http://www.imf.org/external/pubs/ft/weo/2012/01/weodata/index.aspx


  • “How can you understand our business risk better than we do?”


    This is a question we often hear, and the simple answer is that we don’t! But by using our methods and models we can use your knowledge in such a way that it can be systematically measured and accumulated throughout the business, and be presented in easy-to-understand graphs to the management and board.

    The main reason for this lies in how we can treat uncertainties ((Variance is used as measure of uncertainty or risk.)) in the variables and in the ability to handle uncertainties stemming from variables from different departments simultaneously.

    Risk is usually compartmentalized in “silos” and regarded as proprietary to the department – not as a risk correlated or co-moving with other risks in the company, caused by common underlying events influencing their outcomes:

    When Queen Elizabeth visited the London School of Economics in autumn 2008 she asked why no one had foreseen the crisis. The British Academy Forum replied to the Queen in a letter six months later. Included in the letter was the following:

    One of our major banks, now mainly in public ownership, reputedly had 4000 risk managers. But the difficulty was seeing the risk to the system as a whole rather than to any specific financial instrument or loan (…) they frequently lost sight of the bigger picture ((The letter from the British Academy to the Queen is available at: http://media.ft.com/cms/3e3b6ca8-7a08-11de-b86f-00144feabdc0.pdf)).

    To be precise, we are actually not simulating risk in and of itself; risk is just a by-product of simulating a company’s financial and operational (economic) activities. Since the variables describing these activities are of a stochastic nature, which is to say contain uncertainty, all variables in the P&L and balance sheet will contain uncertainty. They can as such best be described by the shape of their frequency distribution – found after thousands of simulations. And it is the shape of these distributions that describes the uncertainty in the variables.

    Most ERM activities are focused on changing the left or downside tail – the tail that describes what normally is called risk.

    We however are also interested in the right tail or upside tail, the tail that describes possible outcomes increasing company value. Together they depict the uncertainty the company faces:

    S@R thus treats company risk holistically, by modeling risks (uncertainty) as parts of the overall operational and financial activities. We are then able to “add up” the risks – to a consolidated level.

    Having the probability distribution for e.g. the company’s equity value gives us the opportunity to apply risk measures to describe the risk facing the shareholders or the risk added or subtracted by different strategies like investments or risk mitigation tools.

    Since this can’t be done with ordinary addition ((The variance of the sum of two stochastic variables is the sum of their variances plus twice the covariance between them.)) (or subtraction) we have to use Monte Carlo simulation.
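
    The footnote’s point, written out: for two stochastic variables $X$ and $Y$,

    $\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X,Y)$

    so standard deviations (and percentiles) of departmental risks cannot simply be added up to a company total unless all covariances are zero; the simulation handles the covariances implicitly by drawing all variables together.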

    The value added by this is:

    1.  A method for assessing changes in strategy; investments, new markets, new products etc.
    2. A heightening of risk awareness in management across an organization’s diverse businesses.
    3. A consistent measure of risk allowing executive management and board reporting and response across a diverse organization.
    4. A measure of risk (including credit and market risk) for the organization that can be compared with capital required by regulators, rating agencies and investors.
    5. A measure of risk by organization unit, product, channel and customer segment which allows risk adjusted returns to be assessed, and scarce capital to be rationally allocated.
    6.  A framework from which the organization can decide its risk mitigation requirements rationally.
    7. A measure of risk versus return that allows businesses and in particular new businesses (including mergers and acquisitions) to be assessed in terms of contribution to growth in shareholder value.

    The independent risk experts are often essential for consistency and integrity. They can also add value to the process by sharing risk and risk management knowledge gained both externally and elsewhere in the organization. This is not just a measurement exercise, but an investment in risk management culture.

    Forecasting

    All business planning is built on forecasts of market sizes, market shares, prices and costs. They are usually given as low, mean and high scenarios without specifying the relationship between the variables. It is easy to show that when you combine such forecasts you can end up very wrong ((https://www.strategy-at-risk.com/2009/05/04/the-fallacies-of-scenario-analysis/)). However, the 5%, 50% and 95% values from the scenarios can be used to produce a probability distribution for each variable, and the simultaneous effect of these distributions can be calculated using Monte Carlo simulation, giving for instance the probability distribution for profit or cash flow from that market. This can again be used to consolidate the company’s cash flow or profit etc.
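
    A minimal sketch of that idea, with purely illustrative numbers and a deliberately crude distribution choice (a symmetric normal fitted to the 5% and 95% values); actual models would use better-suited distributions:

```python
# Turn 5/50/95 percentile guesses into simple probability distributions and
# combine them by Monte Carlo simulation. All figures are illustrative; the
# symmetric normal fit is a deliberate simplification.
import numpy as np

rng = np.random.default_rng(7)
Z95 = 1.6449   # standard normal 95th percentile

def from_percentiles(p05, p50, p95, n):
    """Crude symmetric fit: median as mean, spread from the 5-95 range."""
    sigma = (p95 - p05) / (2 * Z95)
    return rng.normal(p50, sigma, n)

n = 100_000
market_size  = from_percentiles(900.0, 1000.0, 1150.0, n)   # units
market_share = from_percentiles(0.15, 0.20, 0.24, n)        # fraction
price        = from_percentiles(9.0, 10.0, 11.5, n)         # per unit

sales = market_size * market_share * price
print("expected sales:", round(sales.mean()))
print("5% / 50% / 95%:", np.percentile(sales, [5, 50, 95]).round(0))
```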

    Controls and Mitigation

    Controls and mitigation play a significant part in reducing the likelihood of a risk event or the amount of loss should one occur. They do however have a material cost. One of the drivers of measuring risk is to support a more rational analysis of the costs and benefits of controls and mitigation. The result after controls and mitigation becomes the final or residual risk distribution for the company.

    Distributing Diversification Benefits

    At each level of aggregation within a business diversification benefits accrue, representing the capacity to leverage the risk capital against a larger range of non-perfectly correlated risks. How should these diversification benefits be distributed to the various businesses?

    This is not an academic matter, as the residual risk capital ((Bodoff, N. M., Capital Allocation by Percentile Layer, Volume 3/Issue 1, Casualty Actuarial Society, pp. 13-30, http://www.variancejournal.org/issues/03-01/13.pdf

    Erel, Isil, Myers, Stewart C. and Read, James, Capital Allocation (May 28, 2009). Fisher College of Business Working Paper No. 2009-03-010. Available at SSRN: http://ssrn.com/abstract=1411190 or http://dx.doi.org/10.2139/ssrn.1411190)) attributed to each business segment is critical in determining its shareholder value creation and thus its strategic worth to the enterprise. Getting this wrong could lead the organization to discourage its better value creating segments and encourage ones that dissipate shareholder value.

    The simplest is the pro-rata approach which distributes the diversification benefits on a pro-rata basis down the various segment hierarchies (organizational unit, product, customer segment etc.).

    A more accurate approach, which can be built into the Monte Carlo simulation, is the contributory method, which takes into account the extent to which a segment of the organization’s business is correlated with or contrary to the major risks that make up the company’s overall risk. This rewards counter-cyclical businesses and others that diversify the company’s risk profile.
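
    A minimal sketch of the difference between stand-alone risk and a contributory (covariance based) allocation, using standard deviation as the risk measure and three invented segments driven by a common economic factor:

```python
# Contributory allocation sketch: each segment is allocated
# cov(segment, total) / sd(total), so the allocations sum to the total
# (diversified) risk. Segments and factor loadings are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
common = rng.normal(0.0, 1.0, n)                 # common economic factor
segments = np.column_stack([
    10 * common + rng.normal(0, 5, n),           # segment A, pro-cyclical
    15 * common + rng.normal(0, 10, n),          # segment B, pro-cyclical
    -4 * common + rng.normal(0, 6, n),           # segment C, counter-cyclical
])
total = segments.sum(axis=1)

stand_alone = segments.std(axis=0, ddof=1)
total_risk = total.std(ddof=1)
contrib = np.array([np.cov(segments[:, i], total)[0, 1]
                    for i in range(segments.shape[1])]) / total_risk

print("stand-alone risks:", stand_alone.round(1), "sum:", round(stand_alone.sum(), 1))
print("total risk:       ", round(total_risk, 1))
print("contributory allocation:", contrib.round(1), "sum:", round(contrib.sum(), 1))
# The counter-cyclical segment gets a small (here negative) allocation.
```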

    Aggregation with market & credit risk

    For many parts of an organization there may be no market or credit risk – for areas such as sales and manufacturing, operational and business risk covers all of their risks.

    But at the company level the operational and business risk needs to be integrated with market and credit risk to establish the overall measure of risk being run by the company. And it is this combined risk capital measure that needs to be apportioned out to the various businesses or segments to form the basis for risk adjusted performance measures.

    It is not enough just to add the operational, credit and market risks together. This would overcount the risk – the risk domains are by no means perfectly correlated, which a simple addition would imply. A sharp hit in one risk domain does not imply equally sharp hits in the others.

    Yet they are not independent either. A sharp economic downturn will affect credit and many operational risks and probably a number of market risks as well.

    The combination of these domains can be handled in a similar way to correlations within operational risk, provided aggregate risk distributions and correlation factors can be estimated for both credit and market risk.

    Correlation risk

    Markets that are part of the same sector or group are usually very highly correlated or move together. Correlation risk is the risk associated with having several positions in too many similar markets. By using Monte Carlo simulation as described above, this risk can be calculated and added to the company’s risk distribution, which will take part in forming the company’s yearly profit or equity value distribution. And this is the information that the management and board will need.

    Decision making

    The distribution for equity value (see above) can then be used for decision purposes. By making changes to the assumptions about the variables’ distributions (low, medium and high values) or production capacities etc., the new equity distribution can be compared with the old to find the effects created by the changes in assumptions:

    A versatile tool

    This is not only a tool for C-level decision-making but also for controllers, treasury, budgeting etc.:

    The results from these analyses can be presented in the form of B/S and P&L statements looking at the coming one to five years (short term) or five to fifteen years (long term), showing the impacts on e.g. equity value, company value, operating income etc., with the purpose of:

    • Improve predictability in operating earnings and their expected volatility
    • Improve budgeting processes and the prediction of budget deviations
    • Evaluate alternative strategic investment options
    • Identify and benchmark investment portfolios and their uncertainty
    • Identify and benchmark individual business units’ risk profiles
    • Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

    If you always have a picture of what really can happen, you are forewarned, and thus forearmed, against adverse events and better prepared to take advantage of favorable events. From Indexed: Go on, look behind the curtain ((From Indexed: http://thisisindexed.com/2012/02/go-on-look-behind-the-curtain/))


  • Be prepared for a bumpy ride


    Imagine you’re nicely settled down in your airline seat on a transatlantic flight – comfortable, with a great feeling. Then the captain comes on and welcomes everybody on board and continues, “It’s the first time I fly this type of machine, so wish me luck!” Still feeling great? ((Inspired by an article from BTS: http://www.bts.com/news-insights/strategy-execution-blog/Why_are_Business_Simulations_so_Effective.aspx))

    Running a company in today’s interconnected and volatile world has become extremely complicated; surely far more so than flying an airliner. You probably don’t have all the indicators, dashboard systems and controls found on a flight deck. And business conditions are likely to change far more than flight conditions ever will. Today we live with an information overload, data streaming at us almost everywhere we turn. How can we cope? How do we make smart decisions?

    Pilots train over and over again. They spend hour after hour in flight simulators before being allowed to sit as co-pilots on a real passenger flight. Fortunately for us passengers, flight hours normally pass by, day after day, without much excitement. Time to hit the simulator again and train for engine fires, damaged landing gear, landing on water, passenger evacuation etc., becoming both mentally and practically prepared to manage the worst.

    Why aren’t we running business simulations to the same extent? Accounting, financial models and budgeting are more art than science, many times founded on theories from the last century. (Not to mention Pacioli’s Italian accounting from 1494.) While the theory of behavioural economics progresses, we must use the best tools we can get to better understand financial risks and opportunities and how to improve and refine value creation. The true job we’re set to do.

    How is it done? Like Einstein – seeking simplicity, as far as it goes. Finding out which pieces of information are most crucial to the success and survival of the business. For major corporations these can be drawn down from the hundreds to some twenty key variables. (These variables are not set in stone once and for all, but need to be redefined in accordance with the business situation we foresee in the near future.)

    At Allevo our focal point is on Risk Governance at large and helping organisations implement Enterprise Risk Management (ERM) frameworks and processes, specifically assisting boards and executive management to exercise their Risk Oversight duties. Fundamental to good risk management practice is to understand and articulate the organisation’s (i.e. the Board’s) appetite for risk. Without understanding the appetite and tolerance levels for various risks it’s hard to measure, aggregate and prioritize them. How much are we willing to spend on new ventures and opportunities? How much can we afford to lose? How do we calculate the trade-offs?

    There are two essential elements of Risk Appetite: risk capacity and risk capability.

    By risk capacity we mean the financial ability to take on new opportunities with their inherent risks (i.e. availability of cash and funding across the strategy period). By risk capability is meant the non-financial resources of the organisation. Do we have the knowledge and resources to take on new ventures? Cash and funding are fundamental and come first.

    Do executive management and the board really understand the strengths and vulnerabilities hiding in the balance sheet or in the P&L account? Many may have a gut feeling, mostly the CFO and the treasury department. But shouldn’t the executive team and the board (including the Audit Committee, and the Risk Committee if there is one) also really know?

    At Allevo we have aligned with Strategy@Risk Ltd to do business simulations. They have experience from all kinds of industries, especially process industries, where they have even helped optimize manufacturing processes. They have simulated airports and flight patterns for a whole country. For companies with high levels of raw material and commodity risk they simulate optimum hedging strategies. But their main contribution, in our opinion, is their ability to simulate your organisation’s balance sheet and P&L accounts. They have created a simulation tool that can be applied to a whole corporation. It needs only to be adjusted to your specific operations and business environments, which is done through interviews and a few workshops with your own people who have the best knowledge of your business (operations, finances, markets, strategy etc.).

    When the key variables have been identified, it’s time to run the first Monte Carlo simulations to find out if the model fits with recent actual experiences and otherwise feels reliable.

    No model can ever predict the future. What we want to do is to find the key strengths and weaknesses in your operations and in your balance sheet. By running sensitivity analysis we can first of all understand which variables are the key ones. We want to focus on what’s important, and leave alone those variables that have little effect on outcomes.

    Now, it’s time for the most important part: considering how the selected variables can vary and interact over time. The future contains an inconceivable number of different outcomes ((There are probably more different futures than ways of dealing a deck of 52 playing cards. Don’t you think? Well, there are only 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 ways to shuffle a deck of 52 cards (8.1 x 10^67). What does that say about budgeting with discrete numbers?)). The question is how can we achieve the outcomes that we desire and avoid the ones that we dread the most?

    Running 10,000 simulations (i.e. closing each and every annual account over 10,000 years) we can stop the simulation when reaching a desired level of outcome and investigate the position of the key variables. Likewise, when nasty results appear, we stop again and record the underlying position of each variable.

    The simulations generate an 80-page standard report (which, once again, can feel like information overload). But once you’ve got a feeling for the sensitivity of the business you could instead do specific “what if?” analysis of scenarios of special interest to yourself, the executive team or to the board.

    Finally, the model estimates the probability distribution of the organisation’s Enterprise Value going forward. The key for any business is to grow Enterprise Value.

    Simulations show how the likelihood of increasing or losing value varies with different strategies. This part of the simulation tool could be extremely important in strategy selection.

    If you wish to go into more depth on how simulations can support you and your organisation, please visit

    www.allevo.se or www.strategy-at-risk.com

    There you’ll find a great depth of material to choose from; or call us directly and we’ll schedule a quick on-site presentation.

    Have a good flight, and …

    Happy landing!

  • You only live once


    This entry is part 4 of 4 in the series The fallacies of scenario analysis

    You only live once, but if you do it right, once is enough.
    — Mae West

    Let’s say that you are considering new investment opportunities for your company, and that the sales department has guesstimated that the market for one of your products will most likely grow by a little less than 5% per year. You then observe that the product already has a substantial market, and that this in fifteen years’ time will nearly have doubled:

    Building a new plant to accommodate this market growth will be a large investment, so you find that more information about the probability distribution of the product’s future sales is needed. Your sales department then “estimates” the market’s yearly growth to have a mean close to zero, with a lower quartile of minus 5% and an upper quartile of plus 7%.

    Even with no market growth the investment is a tempting one since the market already is substantial and there is always a probability of increased market shares.

    As quartiles are given, you rightly calculate that there will be a 25% probability that the growth will be above 7%, but also a 25% probability that it can be below minus 5%. On the face of it, and with you being not too risk averse, this looks like a gamble worth taking.

    Then you are informed that the distribution will be heavily left-skewed – opening up considerable downside risk. In fact it turns out that it looks like this:

    A little alarmed you order the sales department to come up with a Monte Carlo simulation giving a better view of the future possible paths of the market development.

    They return with the graph below, giving the paths for the first ten runs in the simulation, with the blue line giving the average value and the green and red lines the 90% and 10% limits of the one thousand simulated outcomes:

    The blue line is the yearly ensemble average ((A set of multiple predictions that are all valid at the same time. The term “ensemble” is often used in physics and physics-influenced literature. In probability theory literature the term probability space is more prevalent.

    An ensemble provides reliable information on forecast uncertainties (e.g., probabilities) from the spread (diversity) amongst ensemble members.

    Also see: Ensemble forecasting; a numerical prediction method that is used to attempt to generate a representative sample of the possible future states of dynamic systems. Ensemble forecasting is a form of Monte Carlo analysis: multiple numerical predictions are conducted using slightly different initial conditions that are all plausible given the past and current set of observations. It is often used in weather forecasting.)); that is, the time series of the average of outcomes across paths. The series shows a small decline in market size, but not at an alarming rate. The sales department’s advice is to go for the investment and try to conquer market shares.

    You then note that the ensemble average implies that you are able to jump from path to path, and since each path is a different realization of the future, that will not be possible – you only live once!

    You again call the sales department, asking them to calculate each path’s average growth rate over time – using its geometric mean – and report the average of these averages to you. When you plot both the ensemble and the time averages, you find quite a large difference between them:

    The time average shows a much larger market decline than the ensemble average.

    It can be shown that the ensemble average will always overestimate the growth (Peters, 2010) and thus can falsely lead to wrong conclusions about the market development.

    If we look at the distribution of path end values we find that the lower quartile is 64 and the upper quartile is 118 with a median of 89:

    It thus turns out that the process behind the market development is non-ergodic ((The term ergodic is used to describe dynamical systems which have the same behavior averaged over time as averaged over space.)) or non-stationary ((Stationarity is a necessary, but not sufficient, condition for ergodicity.)). In the ergodic case both the ensemble and time averages would have been equal, and the problem above would not have appeared.
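
    A minimal sketch of the effect, with an assumed lognormal growth-factor distribution (not the sales department’s numbers): the arithmetic (ensemble) average of the growth factors lies above one, while the geometric (time) average that each path actually experiences does not.

```python
# Ensemble vs. time average under multiplicative growth. The growth-factor
# distribution (lognormal, sigma = 0.15) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_years = 1_000, 15
growth = rng.lognormal(mean=0.0, sigma=0.15, size=(n_paths, n_years))  # yearly growth factors
paths = 100 * np.cumprod(growth, axis=1)                               # market size, start = 100

ensemble_avg_growth = growth.mean()                     # arithmetic mean of all growth factors
time_avg_growth = np.exp(np.log(growth).mean(axis=1))   # geometric mean per path

print("ensemble average growth factor:    ", round(ensemble_avg_growth, 3))     # > 1
print("average time-average growth factor:", round(time_avg_growth.mean(), 3))  # ~ 1
print("mean end value:  ", round(paths[:, -1].mean(), 1))
print("median end value:", round(np.median(paths[:, -1]), 1))
```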

    The investment decision that at first glance looked a simple one is now more complicated and can (should) not be decided based on market development alone.

    Since uncertainty increases the further we look into the future, we should never assume that we have ergodic situations. The implication is that in valuation or M&A analysis we should never use an “ensemble average” in the calculations, but always do a full simulation following each time path!

    References

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338


  • The probability distribution of the bioethanol crush margin


    This entry is part 1 of 2 in the series The Bio-ethanol crush margin

    A chain is no stronger than its weakest link.

    Introduction

    Producing bioethanol is a high risk endeavor with adverse price development and crumbling margins.

    In the following we will illustrate some of the risks the bioethanol producer is facing, using corn as feedstock. However, these risks will persist regardless of the feedstock and production process chosen. The elements in the discussion below can therefore be applied to any and all types of bioethanol production:

    1. What average yield (kg ethanol per kg feedstock) can we expect? And what is the shape of the yield distribution?
    2. What will the future price ratio of feedstock to ethanol be? And what volatility can we expect?

    The crush margin ((The relationship between prices in the cash market is commonly referred to as the Gross Production Margin.)) measures the difference between the sales proceeds of finished bioethanol and its feedstock ((It can also be considered as the production’s throughput; the rate at which the system converts raw materials to money. Throughput is net sales less variable cost, generally the cost of the most important raw materials (see: Throughput Accounting).)).

    With current technology, one bushel of corn can be converted into approx. 2.75 gallons of ethanol and 17 pounds of DDG (distillers’ dried grains). The crush margin (or gross processing margin) is then:

    1. Crush margin = 0.0085 x DDG price + 2.8 x ethanol price – corn price

    Since 65% to 75% of the variable cost in bioethanol production is the cost of corn, the crush margin is an important metric, especially since the margin in addition must cover all other expenses like energy, electricity, interest, transportation, labor etc. – and, in the long term, the facility’s fixed costs.

    The following graph taken from the CME report: Trading the corn for ethanol crush, (CME, 2010) gives the margin development in 2009 and the first months of 2010:

    This graph gives a good picture of the uncertainties that face the bioethanol producers, and it can be a helpful tool when hedging purchases of corn and sales of the products ((The historical chart going back to April 2005 is available at the CBOT web site.)).

    The Crush Spread, Crush Profit Margin and Crush Ratio

    There are a number of other ways to formulate the crush risk (CME, July 11. 2011):

    The CBOT defines the “Crush Spread” as the Estimated Gross Margin per Bushel of Corn. It is calculated as follows:

    2. Crush Spread = (Ethanol price per gallon X 2.8) – Corn price per bushel, or as

    3. Crush Profit margin = Ethanol price – (Corn price/2.8).

    Understanding these relationships is invaluable in trading ethanol stocks ((We will return to this in a later post.)).

    By rearranging the crush spread equation, we can express the spread as its ratio to the product price (simplifying by keeping by-products like DDG etc. out of the equation):

    4. Crush ratio = Crush spread/Ethanol price = y – p,

    Where: y = EtOH Yield (gal)/ bushel corn and p = Corn price/Ethanol price.

    We will in the following look at the stochastic nature of y and p and thus the uncertainty in forecasting the crush ratio.

    The crush spread and thus the crush ratio is calculated using data from the same period. They therefore give the result of an unhedged operation. Even if the production period is short – two to three days – it will be possible to hedge both the corn and ethanol prices. But to do that in a consistent and effective way we have to look into the inherent volatility in the operations.

    Ethanol yield

    The ethanol yield is usually set to 2.682 gal/bushel of corn, assuming 15.5% moisture. The yield is however a stochastic variable contributing to the uncertainty in the crush ratio forecasts. As only the starch in corn can be converted to ethanol, we need to know the content of extractable starch in a standard bushel of corn – corrected for normal loss and moisture. In the following we will lean heavily on the article “A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch” by Tad W. Patzek (Patzek, 2006), which fits our purpose perfectly. All relevant references can be found in the article.

    The aim of his article was to establish the mean extractable starch in hybrid corn and the mean highest possible yield of ethanol from starch. We however are also interested in the probability distributions for these variables – since no production company will ever experience the mean values (ensembles), and since the average return over time always will be less than the return calculated using ensemble means ((We will return to this in a later post.)) (Peters, 2010).

    The purpose of this exercise is after all to establish a model that can be used as support for decision making in regard to investment and hedging in the bioethanol industry over time.

    From (Patzek, 2006) we have that the extractable starch (%) can be described as approx. having a normal distribution with mean 66.18 % and standard deviation of 1.13:

    The nominal grain loss due to dirt etc. can also be described as approx. having a normal distribution with mean 3 % and a standard deviation of 0.7:

    The probability distribution for the theoretical ethanol yield (kg/kg corn) can then be found by Monte Carlo simulation ((See formula #3 in (Patzek, 2006).)) as:

    – having an approx. normal distribution with mean 0.364 kg EtOH/kg of dry grain and standard deviation of 0.007. On average we will need 2.75 kg of clean dry grain to produce one kilo, or 1.27 liter, of ethanol ((With a specific density of 0.787 kg/l.)).
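
    A minimal sketch of this simulation. The two distributions are the ones quoted above; the conversion uses the standard stoichiometric factor of 92/162 ≈ 0.568 kg ethanol per kg starch (starch → glucose → ethanol), which is a simplification of formula #3 in (Patzek, 2006):

```python
# Theoretical ethanol yield per kg of dry grain: extractable starch and grain
# loss drawn from the (approximately) normal distributions quoted above,
# converted with the stoichiometric factor 92/162 kg ethanol per kg starch.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
starch = rng.normal(0.6618, 0.0113, n)   # extractable starch, fraction of dry grain
loss = rng.normal(0.03, 0.007, n)        # grain loss (dirt etc.), fraction

yield_kg = starch * (1.0 - loss) * (92.0 / 162.0)   # kg EtOH per kg clean dry grain
print("mean yield:", round(yield_kg.mean(), 3), "kg EtOH/kg dry grain")
print("sd:", round(yield_kg.std(), 3))
print("kg dry grain per kg EtOH:", round(1.0 / yield_kg.mean(), 2))
```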

    Since we now have a distribution for ethanol yield (y) as kilo of ethanol per kilo of corn we will in the following use price per kilo both for ethanol and corn, adjusting for the moisture (natural logarithm of moisture in %) in corn:

    We can also use this to find the EtOH yield starting with wet corn and using gal/bushel of corn as the unit (Patzek, 2006):

    giving as theoretical value a mean of 2.64 gal/wet bushel with a standard deviation of 0.05 – which is significantly lower than the “official” figure of 2.8 gal/wet bushel used in the CBOT calculations. More important to us, however, is the fact that we easily can get yields much lower than expected, and thus a real risk of lower earnings than expected. Keep in mind that to get a yield above 2.64 gallons of ethanol per bushel of corn, all steps in the process must continuously be at or close to their maximum efficiency – which with high probability never will happen.

    Corn and ethanol prices

    Looking at the price developments since 2005, it is obvious that both the corn and ethanol prices have a large variability ($/kg, dry corn):

    The long term trends show a disturbing development with decreasing ethanol prices, increasing corn prices and thus an increasing price ratio:

    “Risk is like fire: If controlled, it will help you; if uncontrolled, it will rise up and destroy you.”

    Theodore Roosevelt

    The unhedged crush ratio

    Since the crush ratio on average is:

    Crush ratio = 0.364 – p, where:
    0.364 = average EtOH yield (kg EtOH/kg of dry grain) and
    p = Corn price/Ethanol price

    The price ratio (p) thus has to be less than 0.364 for the crush ratio to be positive at the outset. As of January 2011 the price ratio has overstepped that threshold, and it has for the first months of 2011 stayed above it.

    To get a picture of the risk an unhedged bioethanol producer faces only from normal variation in yield and forecasted variation in the price ratio we will make a simple forecast for April 2011 using the historic time series information on trend and seasonal factors:

    The forecasted probability distribution for the April price ratio is given in the frequency graph below:

    This represents the price risk the producer will face. We find that the mean value for the price ratio will be 0.323 with a standard deviation of 0.043. By using this and the distribution for ethanol yield we can by Monte Carlo simulation forecast the April distribution for the crush ratio:

    As we see, negative values for the crush ratio are well inside the field of possible outcomes:

    The actual value of the average price ratio for April turned out to be 0.376, with a daily maximum of 0.384 and minimum of 0.363. This implies that the April crush ratio with 90% probability would have been between -0.005 and -0.0199, with only the income from DDGs to cover the deficit and all other costs.
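
    A minimal sketch of the crush-ratio simulation, using the yield distribution above and approximating the forecasted price-ratio distribution as normal with the stated mean and standard deviation (the post does not give the exact distribution family used):

```python
# Forecasted crush ratio = yield - price ratio, both drawn from the
# distributions quoted above (normal approximation for the price ratio).
import numpy as np

rng = np.random.default_rng(2011)
n = 100_000
yield_kg = rng.normal(0.364, 0.007, n)       # kg EtOH per kg dry corn
price_ratio = rng.normal(0.323, 0.043, n)    # corn price / ethanol price

crush_ratio = yield_kg - price_ratio
print("mean crush ratio:", round(crush_ratio.mean(), 3))
print("P(crush ratio < 0):", round((crush_ratio < 0).mean(), 3))
print("5% / 95% limits:", np.percentile(crush_ratio, [5, 95]).round(3))
```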

    Hedging the crush ratio

    The distribution for the price ratio forecast above clearly points out the necessity of price ratio hedging (Johnson, 1960) and (Stein, 1961). The time series chart above shows both a negative trend and seasonal variations in the price ratio. In the short run there is not much to do about the trend, but in the longer run other feedstocks and better processes will probably change the trend (Shapouri et al., 2002).

    However, what immediately stands out is the possibility to exploit the seasonal fluctuations in both markets:

    Ideally, raw material is purchased in the months when seasonal factors are low and ethanol is sold in the months when seasonal factors are high. In practice this is not fully possible; restrictions on manufacturing, warehousing, market presence, liquidity, working capital and costs set limits to the producer’s degrees of freedom (Dalgran, 2009).

    Fortunately, there are a number of tools in both the physical and financial markets available to manage price risks; forwards and futures contracts, options, swaps, cash-forward, and index and basis contracts. All are available for the producers who understand financial hedging instruments and are willing to participate in this market. See: (Duffie, 1989), (Hull, 2003) and (Bjørk, 2009).

    The objective is to change the margin distribution’s shape (red) from having a large part of its left tail on the negative part of the margin axis to one resembling the green curve below, where the negative part has been removed but most of the upside (right tail) has been preserved; that is, to eliminate negative margins, reduce variability, maintain the upside potential and thus reduce the probability of operating at a net loss:

    Even if the ideal solution does not exist, a large number of solutions through combinations of instruments can provide satisfactory results. In principle, it does not matter where these instruments exist, since the commodity and financial markets are interconnected. From a strategic standpoint, the purpose is to exploit fluctuations in the market to capture opportunities while mitigating unwanted risks (Mallory, et al., 2010).

    Strategic Risk Management

    To manage price risk in commodity markets is a complex topic. There are many strategic, economic and technical factors that must be understood before a hedging program can be implemented.

    Since all hedging instruments have a cost, and since only ranges of future outcomes – not exact prices – can be forecasted in the individual markets, both costs and effectiveness are uncertain.

    In addition, the degree of desired protection has to be determined. Are we seeking to ensure only a positive margin, or a positive EBITDA, or a positive EBIT? With what probability, and at what cost?

    A systematic risk management process is required to tailor an integrated risk management program for each individual bioethanol plant:

    The choice of instruments will define different strategies that will affect company liquidity and working capital, and ultimately company value. Since the effect of each of these strategies will be of a stochastic nature, it will only be possible to distinguish between them using the concept of stochastic dominance (see: selecting strategy).

    Models that can describe the business operations and underlying risk can be a starting point for such an understanding. Linked to balance simulation, they will provide invaluable support to decisions on the scope and timing of hedging programs.

    It is only when the various hedging strategies are simulated through the balance so that the effect on equity value can be considered that the best strategy with respect to costs and security level can be determined – and it is with this that S@R can help.

    References

    Bjørk, T.,(2009). Arbitrage Theory in Continuous Time. Oxford University Press, Oxford.

    CME Group., (2010).Trading the corn for ethanol crush,
    http://www.cmegroup.com/trading/agricultural/corn-for-ethanol-crush.html

    CME Group., (July 11, 2011). Ethanol Outlook Report, http://cmegroup.barchart.com/ethanol/

    Dalgran, R.,A., (2009) Inventory and Transformation Hedging Effectiveness in Corn Crushing. Journal of Agricultural and Resource Economics 34 (1): 154-171.

    Duffie, D., (1989). Futures Markets. Prentice Hall, Englewood Cliffs, NJ.

    Hull, J. (2003). Options, Futures, and Other Derivatives (5th edn). Prentice Hall, Englewood Cliffs, N.J.

    Johnson, L., L., (1960). The Theory of Hedging and Speculation in Commodity Futures, Review of Economic Studies , XXVII, pp. 139-151.

    Mallory, M., L., Hayes, D., J., & Irwin, S., H. (2010). How Market Efficiency and the Theory of Storage Link Corn and Ethanol Markets. Center for Agricultural and Rural Development Iowa State University Working Paper 10-WP 517.

    Patzek, T., W., (2004). Sustainability of the Corn-Ethanol Biofuel Cycle, Department of Civil and Environmental Engineering, U.C. Berkeley, Berkeley, CA.

    Patzek, T., W., (2006). A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch, Natural Resources Research, Vol. 15, No. 3.

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338.

    Shapouri,H., Duffield,J.,A., & Wang, M., (2002). The Energy Balance of Corn Ethanol: An Update. U.S. Department of Agriculture, Office of the Chief Economist, Office of Energy Policy and New Uses. Agricultural Economic Report No. 814.

    Stein, J.L. (1961). The Simultaneous Determination of Spot and Futures Prices. American Economic Review, vol. 51, p.p. 1012-1025.


  • Plans based on average assumptions ……


    This entry is part 3 of 4 in the series The fallacies of scenario analysis

     

    The Flaw of Averages states that: Plans based on the assumption that average conditions will occur are usually wrong. (Savage, 2002)

    Many economists use what they believe to be most likely ((Most likely estimates are often made in-house based on experience and knowledge about their operations.)) or average values ((Forecasts for many types of variable can be bought from suppliers of ‘consensus forecasts’.))  (Timmermann, 2006) (Gavin & Pande, 2008) as input for the exogenous variables in their spreadsheet calculations.

    We know however that:

    1. the probability for any variable to have outcomes equal to any of these values is close to zero,
    2. and that the probability of having outcomes for all the (exogenous) variables in the spreadsheet model equal to their average is virtually zero.

    So why do they do it? They obviously lack the necessary tools to calculate with uncertainty!

    But if a small deviation from the most likely value is admissible, how often will the use of a single estimate like the most probable value be ‘correct’?

    We can try to answer that by looking at some probability distributions that may represent the ‘mechanism’ generating some of these variables.

    Let’s assume that we are entering a market with a new product. We know of course the maximum upper and lower limits of our future possible market share, but not the actual number, so we guess it to be the average value = 0.5. Since we have no prior knowledge, we have to assume that the market share is uniformly distributed between zero and one:

    If we then plan sales and production for a market share between 0.4 and 0.5, we would out of a hundred trials only have guessed the market share correctly 13 times. In fact we would have overestimated the market share 31 times and underestimated it 56 times.
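
    A minimal sketch of this kind of check (my own illustrative simulation, not the post’s original calculation); the counts will vary from sample to sample, and the 13/31/56 split above is one such draw:

```python
# How often is a plan for a 0.4-0.5 market share "right" when the true share
# is uniform on (0, 1)? One hundred draws, as in the example above.
import numpy as np

rng = np.random.default_rng(0)
share = rng.uniform(0.0, 1.0, 100)

correct = np.sum((share >= 0.4) & (share <= 0.5))
over    = np.sum(share < 0.4)    # we planned for more than the market gives
under   = np.sum(share > 0.5)    # we planned for less than the market gives
print(correct, over, under)      # on average roughly 10 / 40 / 50 out of 100
```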

    Let’s assume a production process where the acceptable deviation from some fixed measurement is 0.5 mm and where the actual deviation has a normal distribution with expected deviation equal to zero, but with a standard deviation of one:

    Using the average deviation to calculate the expected error rate will falsely lead us to believe it to be zero, while it in fact in the long run will be 64%.

    Let’s assume that we have a contract for drilling a tunnel, and that the cost will depend on the hardness of the rock to be drilled. The contract states that we will have to pay a minimum of $ 0.5M and a maximum of $ 2M, with the most likely cost being $ 1M. The contract and our imperfect knowledge of the geology make us assume the cost distribution to be triangular:

    Using the average ((The bin containing the average in the histogram.)) as an estimate for expected cost will give a correct answer in only 14 out of a 100 trials, with cost being lower in 45 and higher in 41.

    Now, let’s assume that we are performing deep sea drilling for oil and that we have a single estimate for the cost to be $ 500M. However we expect the cost deviation to be distributed as in the figure below, with a typical small negative cost deviation and on average a small positive deviation:

    So, for all practical purposes this is considered a low economic risk operation. What they have failed to do is to look at the tails of the cost deviation distribution, which turns out to be Cauchy distributed with long tails, including the possibility of catastrophic events:

    The event far out on the right tail might be considered a Black Swan (Taleb, 2007), but as we now know they happen from time to time.

    So, even more important than the fact that using a single estimate will prove you wrong most of the time is that it will also obscure what you do not know – the risk of being wrong.

    Don’t worry about the average, worry about how large the variations are, how frequently they occur and why they exist. (Fung, 2010)

    Rather than “Give me a number for my report,” what every executive should be saying is “Give me a distribution for my simulation.”(Savage, 2002)

    References

    Gavin,W.,T. & Pande,G.(2008), FOMC Consensus Forecasts, Federal Reserve Bank of St. Louis Review, May/June 2008, 90(3, Part 1), pp. 149-63.

    Fung, K., (2010). Numbers Rule Your World. New York: McGraw-Hill.

    Savage, L., S.,(2002). The Flaw of Averages. Harvard Business Review, (November), 20-21.

    Savage, L., S., & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

    Taleb, N., (2007). The Black Swan. New York: Random House.

    Timmermann, A.,(2006).  An Evaluation of the World Economic Outlook Forecasts, IMF Working Paper WP/06/59, www.imf.org/external/pubs/ft/wp/2006/wp0659.pdf
