FRM – Strategy @ Risk


  • M&A: When two plus two is five or three or …


    When two plus two is five (Orwell, 1949)

    Introduction

Mergers & Acquisitions (M&A) are a way for companies to expand far more rapidly than organic growth – that is, growth from existing businesses – would allow. M&A has for decades been a trillion-dollar business, but empirical studies report that a significant proportion of deals must be considered failures.

The conventional wisdom is that the majority of deals fail to add shareholder value to the acquiring company. According to this research, only 30–50% of deals are considered successful (see Bruner, 2002).

If most deals fail, why do companies keep doing them? Is it because they think the odds won’t apply to them, or are executives more concerned with extending their influence and growing the company (empire building) than with increasing shareholder value?

Many writers argue that these are the main motives driving M&A activity, with the implication that executives are basically greedy (because their compensation is often tied to the size of the company) – or incompetent.

To create shareholder value, an M&A must give rise to some form of synergy. Synergy is the ability of the merged companies to generate higher shareholder value (wealth) than the standalone entities; that is, the whole will be greater than the sum of its parts.

For many of the observed M&A’s, however, the opposite has been true – value has been destroyed; the whole has turned out to be less than the sum of its parts (dysergy).

    “When asked to name just one big merger that had lived up to expectations, Leon Cooperman, former co-chairman of Goldman Sachs’ Investment Policy Committee, answered: I’m sure there are success stories out there, but at this moment I draw a blank.” (Sirower, 1997)

The “apparent” M&A failures have also been attributed to methodological and measurement problems: evidence of cost savings or revenue enhancements brought by the M&A is difficult to obtain after the fact. This might also apply to some of the success stories.

What is surprising in most (all?) of the studies of M&A successes and failures is the lack of understanding of the stochastic nature of business activities. For any company it is impossible to estimate its equity value with certainty; the best we can do is to estimate a range of values and the probability that the true value will fall inside this range. The merger of two companies amplifies this, and the discussion of possible synergies or dysergies can only be understood in the context of randomness (stochasticity) ((See: the IFA.com – Probability Machine, Galton Board, Randomness and Fair Price Simulator, Quincunx at http://www.youtube.com/watch?v=AUSKTk9ENzg)).


    The M&A cases

Let’s assume that two companies, A and B, are proposed merged. We have the distribution of each company’s equity value (shareholder value), and from these we can calculate the equity distribution for the merged company. Company A’s value is estimated to be in the range of 0 to 150M with expected value 90M. Company B’s value is estimated to be in the range of -40 to 200M with expected value 140M. (See figure below)

If we merge the two companies assuming no synergy or dysergy we get the value (shareholder) distribution shown by the green curve in the figure. The merged company will have a value in the range of 65 to 321M, with an expected value of 230M. Since there is no synergy/dysergy, no value has been created or destroyed by the merger.
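The no-synergy merger above can be sketched with a minimal Monte Carlo simulation. The ranges and expected values are taken from the text; the beta shapes below are assumptions chosen only to reproduce those figures, not the distributions actually used in the post:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Stand-alone equity values (M). Ranges and means follow the text;
# the beta shapes are hypothetical.
value_a = 0 + 150 * rng.beta(3, 2, n)    # range 0..150, mean 90
value_b = -40 + 240 * rng.beta(6, 2, n)  # range -40..200, mean 140

# No synergy/dysergy: the merged value is simply the sum.
merged = value_a + value_b

print(round(merged.mean()))  # ≈ 230
print(round(np.percentile(merged, 0.5)), round(np.percentile(merged, 99.5)))
```

Whatever shapes are assumed for the standalone values, the expected merged value is the sum of the expected standalone values when no synergy is present.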

For company B no value would be added by the merger if A was bought at a price equal to or higher than A’s expected value. If A was bought at a price below its expected value, there is a probability – but no certainty – that the wealth of company B’s shareholders will increase. Any increase in wealth for the shareholders of company B will be at the expense of the shareholders of company A, and vice versa.

    Case 1

If we assume that there is a “connection” between the companies, such that an increase in one company’s revenues also will increase the revenues of the other, we will have a synergy that can be exploited.

This situation is depicted in the figure below. The green curve gives the case with no synergy and the blue the case described above. The difference between them is the synergies created by the merger. The synergy at the dotted line is the synergy we can expect, but it may turn out to be higher when revenues are high and even negative (dysergy) when revenues are low.

If we produce a frequency diagram of the sizes of the possible synergies it will look like the diagram below. Keep in mind that the average synergy value is not the value we would expect to find, but the average over all possible synergy values.
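A synergy distribution of this kind can be mimicked in a few lines. Everything here is hypothetical: a common revenue shock links the two companies, and the merged firm partly monetizes that shock, so the synergy is higher when revenues are high and negative (dysergy) when they are low:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical common revenue shock linking the two companies.
shock = rng.normal(0, 1, n)

value_a = 90 + 20 * shock + 22 * rng.normal(0, 1, n)   # stand-alone values (M)
value_b = 140 + 25 * shock + 24 * rng.normal(0, 1, n)
standalone = value_a + value_b

# Merged firm partly monetizes the shared shock (hypothetical payoff):
# positive with high revenues, negative (dysergy) with low revenues.
synergy = 10 * shock + 5 * np.maximum(shock, 0)
merged = standalone + synergy

print(round(synergy.mean()))  # average of all possible synergy values, ≈ 2 (M)
```

A histogram of `synergy` is exactly the frequency diagram discussed: its mean is the "expected synergy", but individual outcomes spread widely around it, including well below zero.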

    Case 2

If we assume that the “connection” between the companies is such that a reduction in one company’s revenue streams will reduce total production costs, we again have a synergy that can be exploited.
This situation is depicted in the figure below. The green curve gives the case with no synergy and the red the case described above. The difference between them is again the synergies created by the merger. The synergy at the dotted line is the synergy we can expect, but it may turn out to be higher when revenues are lower and even negative (dysergy) when revenues are high.

In this case, the merger acts as a hedge against revenue losses at the cost of part of the upside created by the merger. This should not deter the participants from a merger, since there is only a 30% probability that this will happen.

The graph above again gives the frequency diagram for the sizes of the possible synergies. Keep in mind that the average synergy value is not the value we would expect to find, but the average over all possible synergy values.

    Conclusion

The elusiveness of synergies in many M&A cases can be explained by the natural randomness of business activities. The fact that a merger can give rise to large synergies does not guarantee that they will occur, only that there is a probability that they will. Spreadsheet exercises in valuation can lead to disaster if the stochastic nature of the companies involved is not taken into account. And basing the price of the M&A candidate on expected synergies is pure foolishness.

    References

    Bruner, Robert F. (2002), Does M&A Pay? A Survey of Evidence for the Decision-Maker. Journal of Applied Finance, Vol. 12, No. 1. Available at SSRN: http://ssrn.com/abstract=485884

    Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg.

    The whole is more than the sum of its parts. Aristotle, Metaphysica

     

    Sirower, M. (1997) The Synergy Trap: How Companies Lose the Acquisition Game. New York. The Free Press.

  • Corn and ethanol futures hedge ratios


    This entry is part 2 of 2 in the series The Bio-ethanol crush margin

     

    A large amount of literature has been published discussing hedging techniques and a number of different hedging models and statistical refinements to the OLS model that we will use in the following. For a comprehensive review see “Futures hedge ratios: a review,” (Chen et al., 2003).

We are here looking for hedge models and hedge ratio estimation techniques that are “good enough” and that can fit into valuation models using Monte Carlo simulation.

The ultimate purpose is to study hedging strategies using P&L and balance simulation to forecast the probability distribution of the company’s equity value. By comparing the distributions for the different strategies, we will be able to select the hedging strategy that best fits the board’s risk appetite/risk aversion and that at the same time “maximizes” company value.

    Everything should be made as simple as possible, but not simpler. – Einstein, Reader’s Digest. Oct. 1977.

To use futures contracts for hedging we have to understand the objective: a futures contract serves as a price-fixing mechanism. In their simplest form, futures prices are prices set today to be paid in the future for goods. If properly designed and implemented, hedge profits will offset the loss from an adverse price move. In like fashion, hedge losses will also eliminate the effects of a favorable price change. Ultimately, the success of any hedge program rests on the implementation of a correctly sized futures position.

The minimum variance hedge

This is often referred to as the volatility-minimizing hedge for one unit of exposure. It can be found by minimizing the variance of the hedge payoff at maturity.

For an ideal hedge, we would like the change in the futures price (ΔF) to match as exactly as possible the change in the value of the asset (ΔS) we wish to hedge, i.e.:

ΔS = ΔF

    The expected payoff from the hedge will be equal to the value of the cash position at maturity plus the payoff of the hedge (Johnson, 1960) or:

E(H) = X_S·[E(S2) − S1] + X_F·[E(F2) − F1]

with spot position X_S, a short futures market holding X_F, spot price S1 and expected spot price at maturity E(S2), current futures contract price F1 and expected futures price E(F2) – excluding transaction costs.

    What we want is to find the value of the futures position that reduces the variability of price changes to the lowest possible level.

    The minimum-variance hedge ratio is then defined as the number of futures per unit of the spot asset that will minimize the variance of the hedged portfolio returns.

The variance of the portfolio return is ((The variance of the un-hedged position is Var(U) = X_S² Var(ΔS).)):

Var(H) = X_S² Var(ΔS) + X_F² Var(ΔF) + 2·X_S·X_F·Cov(ΔS, ΔF)

where Var(ΔS) is the variance of the change in the spot price, Var(ΔF) the variance of the change in the futures price, and Cov(ΔS, ΔF) the covariance between the spot and futures price changes. Letting h = X_F/X_S represent the proportion of the spot position hedged, the minimum value of Var(H) can then be found ((By minimizing Var(H) as a function of h.)) as:

h* = Cov(ΔS, ΔF)/Var(ΔF), or equivalently: h* = Corr(ΔS, ΔF)·σ(ΔS)/σ(ΔF)

where Corr(ΔS, ΔF) is the correlation between the spot and futures price changes, assuming that X_S is exogenously determined or fixed.
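The two forms of h* are numerically identical, which is easy to verify on simulated data (all numbers below are hypothetical; the true minimum-variance ratio is set to 0.95):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Simulated daily price changes: spot and futures are highly,
# but not perfectly, correlated (hypothetical parameters).
d_f = rng.normal(0, 1.0, n)                   # futures price changes
d_s = 0.95 * d_f + rng.normal(0, 0.3, n)      # spot price changes

# h* = Cov(dS, dF) / Var(dF)
h_star = np.cov(d_s, d_f)[0, 1] / np.var(d_f, ddof=1)

# Equivalent form: correlation times the ratio of standard deviations.
rho = np.corrcoef(d_s, d_f)[0, 1]
h_alt = rho * d_s.std(ddof=1) / d_f.std(ddof=1)

print(round(h_star, 2))  # ≈ 0.95
```

Both expressions recover the ratio used to size the short futures position per unit of spot exposure.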

    Estimating the hedge coefficient

    It is also possible to estimate the optimal hedge (h*) using regression analysis. The basic equation is:

ΔS = a + h·ΔF + ε

with ε the change in spot price not explained by the regression model. Since the basic OLS regression for this equation estimates the value of h* as:

h* = Cov(ΔS, ΔF)/Var(ΔF)

we can use this regression to find the hedge ratio that minimizes the variance of the hedge payoff. This is one of the reasons that this objective function is so appealing. ((Note that other and very different objective functions could have been chosen.))

We can then use the coefficient of determination, R², as an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position – the hedge effectiveness (Ederington, 1979) ((Not taking into account variation margins etc.)).
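A sketch of this estimation on simulated data (all parameters hypothetical): the OLS slope of spot changes on futures changes is the hedge ratio, and R² measures the hedge effectiveness:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000

# Simulated daily price changes: the futures change explains most,
# but not all, of the spot change (hypothetical parameters).
d_f = rng.normal(0, 1.0, n)
d_s = 1.0 * d_f + rng.normal(0, 0.2, n)

# OLS of dS on dF: the slope estimates h*, and R^2 the hedge effectiveness.
slope, intercept = np.polyfit(d_f, d_s, 1)
resid = d_s - (intercept + slope * d_f)
r2 = 1 - resid.var() / d_s.var()

print(round(slope, 1), round(r2, 2))  # slope ≈ 1.0, R^2 ≈ 0.96
```

With this setup a hedge sized by the slope would remove about 96% of the variance of the unhedged position.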

    The basis

Basis is defined as the difference between the spot price (S) and the futures price (F). When the expected change in the futures price equals the expected change in the spot price, the optimal variance-minimizing strategy is to set h* = 1. However, in most futures markets the futures price does not perfectly parallel the spot price, introducing an element of basis risk that directly affects the hedging decision.

    A negative basis is called contango and a positive basis backwardation:

    1. When the spot price increases by more than the futures price, the basis increases and is said to “strengthen the basis” (when unexpected, this is favorable for a short hedge and unfavorable for a long hedge).
    2. When the futures price increases by more than the spot price, the basis declines and this is said to “weaken the basis” (when unexpected, this is favorable for a long hedge and unfavorable for a short hedge).

    There will usually be a different basis for each contract.

    The number of futures contracts

    The variance minimizing number of futures contracts N* will be:

N* = h*·X_S/Q_F

where Q_F is the size of one futures contract. Since futures contracts are marked to market every day, daily losses are debited and daily gains credited to the parties’ accounts – settlement variation – i.e. the contracts are closed out every day. The account has to be replenished if it falls below the maintenance margin (a margin call); if it is above the initial margin, withdrawals can be made from the account.
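The contract count is a one-line calculation. The exposure below is hypothetical; the contract size uses the CBOT corn contract of 5,000 bushels, roughly 127,000 kg:

```python
# Sizing the futures position (hypothetical exposure).
h_star = 1.0073          # estimated hedge ratio
spot_position = 250_000  # kg of corn to be hedged (assumed)
contract_size = 127_000  # kg per futures contract (5,000 bu ≈ 127,000 kg)

n_star = h_star * spot_position / contract_size
n_contracts = round(n_star)           # contracts trade in whole numbers

print(round(n_star, 2), n_contracts)  # 1.98 2
```

Rounding to a whole number of contracts introduces a small residual exposure that cannot be hedged away with futures alone.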

Ignoring the incremental income effects from investing variation margin gains (or borrowing to cover variation margin losses), we want the hedge to generate h*·ΔF. Recognizing that there is an incremental interest effect, we want the tailed hedge ĥ, with interest accrued, to generate that same amount (Kawaller, 1997):

ĥ·ΔF·(1 + r)^n = h*·ΔF, giving
ĥ = h*/(1 + r)^n, or ĥ = h*/(1 + r·n/365) if time to maturity is less than one year.

    Where:
    r = interest rate and
    n = number of days remaining to maturity of the futures contract.

    This amounts to adjusting the hedge by a present value factor. Tailing converts the futures position into a forward position. It negates the effect of daily resettlement, in which profits and losses are realized before the day the hedge is lifted.

For constant interest rates the tailed hedge (for h* < 1) rises over time to reach the exposure at the maturity of the hedge. Un-tailed, the hedge will over-hedge the exposure and increase the hedger’s risk. Tailing the hedge is especially important when the interest rate is high and the time to maturity long.

An appropriate interest rate would be one that reflects an average of the firm’s cost of capital (WACC) and the rate it would earn on its investments (ROIC), both of which will be stochastic variables in the simulation. The first is relevant when the futures contracts generate losses, the second when they generate gains. In practice some average of these rates is used. ((See FAS 133 and later amendments.))
    There are traditionally two approaches to tailing:

    1. Re-balance the tail each day. In this case the tailed hedge ratio is adjusted each day to maturity of the futures contract. In this approach the adjustment declines each day, until at expiration there is no adjustment.
2. Use a constant (average) tail: ĥ = h*/(1 + 0.5·r·N/365), where N is the original number of days remaining to maturity. In this shortcut the adjustment is made when the hedge is put on and is not changed. The hedge starts out too big and ends too small, but will on average be correct.

For investors who trade actively, the first approach is more convenient; for inactive traders, the second is often used.
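The two tailing approaches can be sketched as follows; the interest rate and time to maturity are hypothetical, while the un-tailed ratio is the corn estimate from the text:

```python
# Tailing the hedge: a sketch with assumed rate and maturity.
h_star = 1.0073   # un-tailed hedge ratio (corn estimate from the text)
r = 0.05          # annual interest rate (assumed)
N = 180           # days to maturity when the hedge is put on (assumed)

def tailed(h, r, n):
    """Approach 1: re-balanced tail with n days remaining (n < 365)."""
    return h / (1 + r * n / 365)

# Approach 2: constant (average) tail, set once at inception.
h_const = h_star / (1 + 0.5 * r * N / 365)

print(round(tailed(h_star, r, N), 4))  # 0.9831  day one (smallest)
print(round(h_const, 4))               # 0.995   constant average
print(round(tailed(h_star, r, 0), 4))  # 1.0073  at maturity, back to h*
```

The re-balanced tail climbs from 0.9831 back to h* as maturity approaches, while the constant tail sits between the two endpoints for the whole life of the hedge.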

Since our models always incorporate stochastic interest rates, hedges discounted with the appropriate rates are calculated. This amounts to solving the set of stochastic simultaneous equations created by the hedge and the WACC/ROIC calculations, since the hedges will change their probability distributions. Note that the tailed hedge ratio will be a stochastic variable, and that minimizing the variance of the hedge will not necessarily maximize company value. The value of ĥ that maximizes company value can only be found by simulation, given the board’s risk appetite/risk aversion.

    The Spot and Futures Price movements

    At any time there are a number of futures contracts for the same commodity simultaneously being priced. The only difference between them is the delivery month. A continuous contract takes the individual contracts in the futures market and splices them together. The resulting continuous series ((The simplest method of splicing is to tack successive delivery months onto each other. Although the prices in the history are real, the chart will also preserve the price gaps that are present between expiring deliveries and those that replace them.)) allows us to study the price history in the market from a single chart. The following graphs show the price movements ((To avoid price gap problems, many prefer to base analysis on adjusted contracts that eliminate roll-over gaps. There are two basic ways to adjust a series.
    Forward-adjusting works by beginning with the true price for the first delivery and then adjusting each successive set of prices up or down depending on whether the roll-over gap is positive or negative.
    Back-adjusting reverses the process. Current price are always real but historical prices are adjusted up or down. This is the often preferred method, since the series always will show the latest actual price. However, there is no perfect method producing a continuous price series satisfying all needs.)) for the spliced corn contracts C-2010U to 2011N and the spliced ethanol contracts EH-2010U to 2011Q.

    In the graphs the spot price is given by the blue line and the corresponding futures price by the red line.

For the corn futures, we can see that there is a difference between the spot and the futures price – the basis ((The reasons for the price difference are transportation costs between delivery locations, storage costs and availability, and variations between local and worldwide supply and demand of a given commodity. In any event, this difference in price plays an important part in what you actually pay for the commodity when you hedge.)) – but that the price movements of the futures follow the spot price closely, or vice versa.

The spliced contracts for bioethanol are a little different from the corn contracts. The delivery location is the same and the curves lie very close to each other. There are, however, other differences.

    The regression – the futures assay

The selected futures contracts give us five parallel samples of the relation between the corn spot and futures price, and six of the relation between the ethanol spot and ethanol futures price. For every day in the period 8/2/2010 to 7/14/2011 we have from one to five observations of the corn relation (five replications), and from 8/5/2010 to 8/3/2011 we have from one to twelve observations of the ethanol relation. Since we follow a set of contracts, the number of daily observations of the corn futures prices starts at five (twelve for the ethanol futures) and ends at one as the contracts mature. We could of course also have selected a sample giving an equal number of observations every day.

    There are three likely models which could be fit:

    1. Simple regression on the individual data points,
2. Simple regression on the daily means, and
    3. Weighted regression on the daily means using the number of observations as the weight.

    When the number of daily observations is equal all three models will have the same parameter estimates. The weighted and individual regressions will always have the same parameter estimates, but when the sample sizes are unequal these will be different from the unweighted means regression. Whether the weighted or unweighted model should be used when the number of daily observations is unequal will depend on the situation.
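The equivalence of the individual and weighted-means regressions can be demonstrated on simulated data. The unbalanced design below is hypothetical (1–5 replicate observations per day, all replicates at the same futures price change, true slope 1.0):

```python
import numpy as np

rng = np.random.default_rng(11)
days = 200

# Hypothetical unbalanced design: day i has k_i replicate observations.
counts = rng.integers(1, 6, days)   # 1..5 observations per day
x_day = rng.normal(0, 1.0, days)    # daily futures price change

xs, ys = [], []
for i in range(days):
    x = np.full(counts[i], x_day[i])              # replications at the same x
    y = 1.0 * x + rng.normal(0, 0.2, counts[i])
    xs.append(x)
    ys.append(y)

x_all, y_all = np.concatenate(xs), np.concatenate(ys)
y_m = np.array([y.mean() for y in ys])

b1 = np.polyfit(x_all, y_all, 1)[0]                   # model 1: all points
b2 = np.polyfit(x_day, y_m, 1)[0]                     # model 2: plain means
b3 = np.polyfit(x_day, y_m, 1, w=np.sqrt(counts))[0]  # model 3: weighted means

# With replications at identical x, models 1 and 3 coincide exactly;
# the unweighted means regression (model 2) differs slightly.
print(round(b1, 3), round(b2, 3), round(b3, 3))
```

Note that `np.polyfit` weights the residuals, so weighting each daily mean by its observation count k_i requires `w=sqrt(k_i)`.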

    Since we now have replications of the relation between spot and the futures price we have the opportunity to test for lack of fit from the straight line model.

In our case this approach has a small drawback. We are looking for the regression of the spot price changes on the price changes in the futures contract. This model, however, gives us the inverse: the regression of the price changes in the futures contract on the changes in the spot price. The inverse of the slope of this regression, which is what we are looking for, will in general not give the correct answer (Thonnard, 2006). So we will use this approach (model #3) to test for linearity and then model #1 with all data to estimate the slope.

    Ideally we would like to find stable (efficient) hedge ratios in the sense that they can be used for more than one hedge and over a longer period of time, thus greatly simplifying the workload for ethanol producing companies.

    All prices, both spot and futures in the following, have been converted from $/gallon (ethanol) or $/bushel (corn) to $/kg.

    The Corn hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the corn futures prices on the changes in corn spot prices (model#3):

The analysis of variance cautions us that the lack of fit to a linear model for all contracts is significant. However, the sum of squares due to lack of fit is very small compared to the sum of squares due to linearity, so we will regard the changes in the futures prices as generated by a linear function of the changes in the spot prices, and the hedge ratios found as efficient. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    Nevertheless, this linear model will have to be monitored closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:

ΔS = 0.0001 + 1.0073·ΔF + ε

    Giving the un-tailed corn hedge ratio h* = 1.0073

    First, since the adjusted  R-square value (0.9838) is an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, a hedge based on this regression coefficient (slope) should be highly effective.

The ratio of the standard deviation of the hedged position to that of the un-hedged position equals √(1 − R²). The standard deviation of a hedged position based on this hedge ratio will be 12.7 % of that of the unhedged position.

We have thus eliminated 87.3 % of the standard deviation – and about 98.4 % of the variance – of the unhedged position. For a simple model like this, this can be considered a good result.
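The arithmetic behind these effectiveness figures can be checked directly. Note that 1 − R² is a ratio of variances, while the quoted 12.7 %/87.3 % figures correspond to the ratio of standard deviations, √(1 − R²):

```python
import math

# Hedge effectiveness implied by the corn regression (R^2 from the text).
r2 = 0.9838

var_ratio = 1 - r2            # hedged / un-hedged variance
sd_ratio = math.sqrt(1 - r2)  # hedged / un-hedged standard deviation

print(round(var_ratio * 100, 1))  # 1.6  -> 98.4 % of the variance removed
print(round(sd_ratio * 100, 1))   # 12.7 -> 87.3 % of the std. dev. removed
```

The same calculation with the ethanol R² of 0.8105 gives a standard-deviation ratio of about 44 %, matching the figure quoted for the ethanol hedge.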

    In the figure the thick black line gives the 95% confidence limits and the yellow area the 95% prediction limits. As we can see, the relationship between the daily price changes is quite tight thus promising the possibility of effective hedges.

Second, due to the differencing, the basis caused by the difference in delivery location has disappeared, and even if the constant term is significant, it is so small that it can with little loss be considered zero.

The R-square values would have been higher for the regressions on the means than for the regression above, because the total variability in the data is reduced by using means (note that the total degrees of freedom is reduced for the regressions on means). A regression on the means will thus always suggest greater predictive ability than a regression on individual data, because it predicts mean values, not individual values.

    The Ethanol hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the ethanol futures prices on the changes in ethanol spot prices (model#3):

The analysis of variance again cautions us that the lack of fit to a linear model for all contracts is significant. In this case it is approximately ten times higher than for the corn contracts.

However, the sum of squares due to lack of fit is small compared to the sum of squares due to linearity – so we will regard the changes in the futures prices as generated by a close-to-linear function of the changes in the spot prices, and the hedge ratios found as “good enough”. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    In this graph we can clearly see the deviation from a strictly linear model. The assumption of a linear model for the changes in ethanol spot and futures prices will have to be monitored very closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:
ΔS = 1.0135·ΔF + ε

Giving the un-tailed ethanol hedge ratio h* = 1.0135

The adjusted R-square value (0.8105), estimating the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, is high even with the “lack of linearity”. A hedge based on this regression coefficient (slope) should therefore still be highly effective.

The standard deviation of a hedged position based on this hedge ratio will be 43.7 % of that of the unhedged position. This is not as good as for the corn contracts, but it will still give a healthy reduction in the ethanol price risk facing the company.

As it turned out, we can use both of these estimation methods for the hedge ratio as a basis for strategy simulations, but one question remains unanswered: will this minimize the variance of the crush ratio?

    References

    Understanding Basis, Chicago Board of Trade, 2004.  http://www.gofutures.com/pdfs/Understanding-Basis.pdf

    http://www.cmegroup.com/trading/agricultural/files/AC-406_DDG_CornCrush_042010.pdf

    Bond, Gary E. (1984). “The Effects of Supply and Interest Rate Shocks in Commodity Futures Markets,” American Journal of Agricultural Economics, 66, pp. 294-301.

    Chen, S. Lee, C.F. and Shrestha, K (2003) “Futures hedge ratios: a review,” The Quarterly Review of Economics and Finance, 43 pp. 433–465

    Ederington, Louis H. (1979). “The Hedging Performance of the New Futures Markets,” Journal of Finance, 34, pp. 157-70

    Einstein, Albert (1923). Sidelights on Relativity (Geometry and Experience). P. Dutton., Co.

    Figlewski, S., Lanskroner, Y. and Silber, W. L. (1991) “Tailing the Hedge: Why and How,” Journal of Futures Markets, 11: pp. 201-212.

    Johnson, Leland L.  (1960). ” The Theory of Hedging and Speculation in Commodity Futures,” Review of Economic Studies, 27, pp. 139-51.

    Kawaller, I. G. (1997 ) ”Tailing Futures Hedges/Tailing Spreads,” The Journal of Derivatives, Vol. 5, No. 2, pp. 62-70.

    Li, A. and Lien, D. D. (2003) “Futures Hedging Under Mark-to-Market Risk,” Journal of Futures Markets, Vol. 23, No. 4.

    Myers Robert J. and Thompson Stanley R. (1989) “Generalized Optimal Hedge Ratio Estimation,” American Journal of Agricultural Economics, Vol. 71, No. 4, pp. 858-868.

    Thonnard, M., (2006), Confidence Intervals in Inverse Regression. Diss. Technische Universiteit Eindhoven, Department of Mathematics and Computer Science, Web. 5 Apr. 2013. <http://alexandria.tue.nl/extra1/afstversl/wsk-i/thonnard2006.pdf>.

    Stein, Jerome L.  (1961). “The Simultaneous Determination of Spot and Futures Prices,” American Economic Review, 51, pp. 1012-25.


  • The probability distribution of the bioethanol crush margin


    This entry is part 1 of 2 in the series The Bio-ethanol crush margin

    A chain is no stronger than its weakest link.

    Introduction

Producing bioethanol is a high-risk endeavor, with adverse price developments and crumbling margins.

In the following we will illustrate some of the risks the bioethanol producer faces, using corn as feedstock. However, these risks persist regardless of the feedstock and production process chosen, so the elements in the discussion below can be applied to any type of bioethanol production:

1. What average yield (kg ethanol per kg feedstock) can we expect? And what is the shape of the yield distribution?
2. What will the future price ratio of feedstock to ethanol be? And what volatility can we expect?

The crush margin ((The relationship between prices in the cash market is commonly referred to as the Gross Production Margin.)) measures the difference between the sales proceeds of finished bioethanol and its feedstock ((It can also be considered as the production’s throughput: the rate at which the system converts raw materials to money. Throughput is net sales less variable cost, generally the cost of the most important raw materials (see: Throughput Accounting).)).

With current technology, one bushel of corn can be converted into approx. 2.75 gallons of ethanol and 17 pounds of DDG (distillers’ dried grains). The crush margin (or gross processing margin) is then:

    1. Crush margin = 0.0085 x DDG price + 2.8 x ethanol price – corn price

Since 65 % to 75 % of the variable cost in bioethanol production is the cost of corn, the crush margin is an important metric, especially since the margin must in addition cover all other expenses like energy, electricity, interest, transportation and labor – and, in the long term, the facility’s fixed costs.

    The following graph taken from the CME report: Trading the corn for ethanol crush, (CME, 2010) gives the margin development in 2009 and the first months of 2010:

This graph gives a good picture of the uncertainties facing bioethanol producers, and can be a helpful tool when hedging purchases of corn and sales of the products ((The historical chart going back to APR 2005 is available at the CBOT web site.)).

    The Crush Spread, Crush Profit Margin and Crush Ratio

    There are a number of other ways to formulate the crush risk (CME, July 11. 2011):

    The CBOT defines the “Crush Spread” as the Estimated Gross Margin per Bushel of Corn. It is calculated as follows:

    2. Crush Spread = (Ethanol price per gallon X 2.8) – Corn price per bushel, or as

    3. Crush Profit margin = Ethanol price – (Corn price/2.8).

    Understanding these relationships is invaluable in trading ethanol stocks ((We will return to this in a later post.)).

By rearranging the crush spread equation, we can express the spread as its ratio to the product price (simplifying by keeping by-products like DDG out of the equation):

    4. Crush ratio = Crush spread/Ethanol price = y – p,

    Where: y = EtOH Yield (gal)/ bushel corn and p = Corn price/Ethanol price.

    We will in the following look at the stochastic nature of y and p and thus the uncertainty in forecasting the crush ratio.
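Equations 2–4 can be put side by side in a short calculation; the prices below are hypothetical illustration values, while the 2.8 gal/bushel yield is the CBOT convention quoted above:

```python
# Crush spread, margin and ratio (eqs. 2-4); prices are hypothetical.
ethanol_price = 2.20  # $/gal
corn_price = 5.60     # $/bu
y = 2.8               # gal ethanol per bushel (CBOT convention)

crush_spread = ethanol_price * y - corn_price  # eq. 2, $/bu
crush_margin = ethanol_price - corn_price / y  # eq. 3, $/gal
p = corn_price / ethanol_price                 # relative corn price
crush_ratio = crush_spread / ethanol_price     # eq. 4

print(round(crush_spread, 2))                  # 0.56
print(round(crush_ratio, 3), round(y - p, 3))  # identical by construction
```

The last line confirms the rearrangement in eq. 4: the crush ratio equals the yield minus the corn/ethanol price ratio, so its uncertainty is driven entirely by the randomness in y and p.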

The crush spread, and thus the crush ratio, is calculated using data from the same period; they therefore give the result of an unhedged operation. Even if the production period is short – two to three days – it is possible to hedge both the corn and ethanol prices. But to do that in a consistent and effective way we have to look into the inherent volatility of the operations.

    Ethanol yield

The ethanol yield is usually set to 2.682 gal/bushel of corn, assuming 15.5 % moisture. The yield is, however, a stochastic variable contributing to the uncertainty in crush ratio forecasts. As only the starch in corn can be converted to ethanol, we need to know the content of extractable starch in a standard bushel of corn – corrected for normal loss and moisture. In the following we will lean heavily on the article “A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch” (Patzek, 2006), which fits our purpose perfectly. All relevant references can be found in that article.

    The aim of his article was to establish the mean extractable starch in hybrid corn and the mean highest possible yield of ethanol from starch. We, however, are also interested in the probability distributions of these variables – since no production company will ever experience the ensemble mean values, and since the average return over time will always be less than the return calculated from ensemble means ((We will return to this in a later post.)) (Peters, 2010).

    The purpose of this exercise is after all to establish a model that can be used as support for decision making in regard to investment and hedging in the bioethanol industry over time.

    From (Patzek, 2006) we have that the extractable starch (%) can be described as approx. having a normal distribution with mean 66.18 % and standard deviation of 1.13:

    The nominal grain loss due to dirt etc. can also be described as approx. having a normal distribution with mean 3 % and a standard deviation of 0.7:

    The probability distribution for the theoretical ethanol yield (kg/kg corn) can then be found by Monte Carlo simulation ((See formula #3 in (Patzek, 2006).)) as:

    – having an approx. normal distribution with mean 0.364 kg EtOH/kg of dry grain and standard deviation of 0.007. On average we will need 2.75 kg of clean dry grain to produce one kilo – or 1.27 liter ((With a specific density of 0.787 kg/l.)) – of ethanol.
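    The simulation behind this distribution can be sketched as below. The starch and loss distributions are those quoted above from (Patzek, 2006); the stoichiometric factor 92/162 ≈ 0.568 kg ethanol per kg starch (glucose fermentation mass balance) is our assumption for what his formula #3 amounts to:

```python
import random

random.seed(1)
STOICH = 92.0 / 162.0  # kg EtOH per kg starch: C6H10O5 (162 g) -> 2 C2H5OH (92 g)

yields = []
for _ in range(100_000):
    starch = random.gauss(66.18, 1.13) / 100.0  # extractable starch fraction
    loss = random.gauss(3.0, 0.7) / 100.0       # nominal grain loss (dirt etc.)
    yields.append((1.0 - loss) * starch * STOICH)

mean_y = sum(yields) / len(yields)
sd_y = (sum((v - mean_y) ** 2 for v in yields) / len(yields)) ** 0.5
print(mean_y, sd_y)  # close to the 0.364 and 0.007 quoted above
```

    With these inputs the simulated mean and standard deviation land close to the 0.364 kg EtOH/kg dry grain and 0.007 quoted above, which suggests the reconstruction is at least consistent with the article's figures.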

    Since we now have a distribution for the ethanol yield (y) in kilos of ethanol per kilo of corn, we will in the following use prices per kilo for both ethanol and corn, adjusting for the moisture (natural logarithm of moisture in %) in the corn:

    We can also use this to find the EtOH yield starting with wet corn and using gal/bushel corn as unit (Patzek, 2006):

    giving as theoretical value a mean of 2.64 gal/wet bushel with a standard deviation of 0.05 – significantly lower than the "official" figure of 2.8 gal/wet bushel used in the CBOT calculations. More important to us, however, is the fact that we can easily get yields much lower than expected, and thus face a real risk of lower earnings than expected. Bear in mind that to get a yield above 2.64 gallons of ethanol per bushel of corn, every step in the process must continuously be at or close to maximum efficiency – which with high probability will never happen.

    Corn and ethanol prices

    Looking at the price developments since 2005, it is obvious that both the corn and ethanol prices show large variability ($/kg, dry corn):

    The long term trends show a disturbing development, with decreasing ethanol prices, increasing corn prices and thus an increasing price ratio:

    “Risk is like fire: If controlled, it will help you; if uncontrolled, it will rise up and destroy you.”

    Theodore Roosevelt

    The unhedged crush ratio

    Since the crush ratio on average is:

    Crush ratio = 0.364 – p, where:
    0.364 = average EtOH yield (kg EtOH/kg of dry grain) and
    p = Corn price/Ethanol price

    The price ratio (p) thus has to be less than 0.364 for the crush ratio to be positive at the outset. As of January 2011 the price ratio has crossed that threshold and has stayed above it for the first months of 2011.

    To get a picture of the risk an unhedged bioethanol producer faces only from normal variation in yield and forecasted variation in the price ratio we will make a simple forecast for April 2011 using the historic time series information on trend and seasonal factors:

    The forecasted probability distribution for the April price ratio is given in the frequency graph below:

    This represents the price risk the producer will face. We find that the mean value for the price ratio will be 0.323 with a standard deviation of 0.043. By using this and the distribution for ethanol yield we can by Monte Carlo simulation forecast the April distribution for the crush ratio:
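    The combination step can be sketched as below, using the two distributions stated above. Treating both the yield and the forecast price ratio as normal is our simplifying assumption:

```python
import random

random.seed(2)
n = 100_000
negative = 0
for _ in range(n):
    y = random.gauss(0.364, 0.007)  # EtOH yield, kg/kg dry grain
    p = random.gauss(0.323, 0.043)  # forecast April price ratio
    if y - p < 0:                   # crush ratio = y - p
        negative += 1

print(f"P(crush ratio < 0) ~ {negative / n:.2f}")
```

    Under these assumptions a negative crush ratio comes out as a clearly non-negligible probability, which is the point the frequency graph makes.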

    As we see, negative values for the crush ratio are well inside the range of possible outcomes:

    The actual value of the average price ratio for April turned out to be 0.376 with a daily maximum of 0.384 and minimum of 0.363. This implies that the April crush ratio with 90 % probability would have been between -0.005 and -0.199, with only the income from DDGs to cover the deficit and all other costs.

    Hedging the crush ratio

    The distribution of the price ratio forecast above clearly points out the necessity of price-ratio hedging (Johnson, 1960) and (Stein, 1961).
    The time series chart above shows both a negative trend and seasonal variations in the price ratio. In the short run there is not much to do about the trend, but in the longer run other feedstocks and better processes will probably change it (Shapouri et al., 2002).

    However, what immediately stands out is the possibility of exploiting the seasonal fluctuations in both markets:

    Ideally, raw material is purchased in the months when seasonal factors are low and ethanol is sold in the months when they are high. In practice this is not possible; restrictions on manufacturing, warehousing, market presence, liquidity, working capital and costs set limits to the producer's degrees of freedom (Dalgran, 2009).

    Fortunately, there are a number of tools in both the physical and financial markets available to manage price risks; forwards and futures contracts, options, swaps, cash-forward, and index and basis contracts. All are available for the producers who understand financial hedging instruments and are willing to participate in this market. See: (Duffie, 1989), (Hull, 2003) and (Bjørk, 2009).

    The objective is to change the shape of the margin distribution (red) from one with a large part of its left tail on the negative side of the margin axis to one resembling the green curve below, where the negative part has been removed but most of the upside (right tail) has been preserved – that is, to eliminate negative margins, reduce variability, maintain the upside potential and thus reduce the probability of operating at a net loss:
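    The reshaping can be illustrated with a minimal sketch: a floor on the margin (bought, say, through options at some premium) cuts off the left tail while keeping the right. Both the margin distribution and the premium below are invented numbers, not market data:

```python
import random

random.seed(3)
margin_floor = 0.0   # assumed protection level
premium = 0.01       # assumed cost of the protection

# Unhedged margin distribution (invented for illustration)
raw = [random.gauss(0.04, 0.045) for _ in range(100_000)]

# Hedged margin: downside capped at the floor, upside shifted by the premium
hedged = [max(m, margin_floor) - premium for m in raw]

print(min(raw) < 0)                     # unhedged margins can be negative
print(min(hedged) >= margin_floor - premium)  # hedged downside is capped
```

    The cost of the protection shows up as the whole distribution shifting left by the premium; the gain is that the left tail below the floor disappears.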

    Even if the ideal solution does not exist, a large number of solutions combining different instruments can provide satisfactory results. In principle it does not matter in which market these instruments exist, since the commodity and financial markets are interconnected. From a strategic standpoint, the purpose is to exploit fluctuations in the market to capture opportunities while mitigating unwanted risks (Mallory et al., 2010).

    Strategic Risk Management

    Managing price risk in commodity markets is a complex topic. There are many strategic, economic and technical factors that must be understood before a hedging program can be implemented.

    Since all hedging instruments have a cost, and since only ranges of future outcomes – not exact prices – can be forecast in the individual markets, both costs and effectiveness are uncertain.

    In addition, the degrees of desired protection have to be determined. Are we seeking to ensure only a positive margin, or a positive EBITDA, or a positive EBIT? With what probability and to what cost?

    A systematic risk management process is required to tailor an integrated risk management program for each individual bioethanol plant:

    The choice of instruments will define different strategies that affect company liquidity and working capital, and ultimately company value. Since the effect of each strategy will be stochastic, it will only be possible to distinguish between them using the concept of stochastic dominance.
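    A first-order stochastic dominance check on simulated outcomes can be sketched as below: strategy A dominates strategy B if A's empirical CDF lies at or below B's everywhere. The two outcome samples are invented for illustration:

```python
import bisect
import random

def dominates_fosd(a, b, grid_size=200):
    """True if sample a first-order stochastically dominates sample b,
    i.e. a's empirical CDF lies at or below b's on an evaluation grid."""
    a, b = sorted(a), sorted(b)
    lo = min(a[0], b[0])
    hi = max(a[-1], b[-1])
    grid = [lo + (hi - lo) * i / grid_size for i in range(grid_size + 1)]
    return all(
        bisect.bisect_right(a, x) / len(a) <= bisect.bisect_right(b, x) / len(b)
        for x in grid
    )

random.seed(4)
strategy_a = [random.gauss(110, 10) for _ in range(20_000)]  # shifted-up outcomes
strategy_b = [random.gauss(100, 10) for _ in range(20_000)]
print(dominates_fosd(strategy_a, strategy_b))  # a shifts every quantile up
print(dominates_fosd(strategy_b, strategy_a))
```

    In practice the samples would be equity-value outcomes from the balance simulation, one sample per hedging strategy, and second-order dominance would be checked where first-order is inconclusive.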

    Models that can describe the business operations and underlying risk are a starting point for such an understanding. Linked to balance simulation, they provide invaluable support for decisions on the scope and timing of hedging programs.

    It is only when the various hedging strategies are simulated through the balance sheet – so that their effect on equity value can be assessed – that the best strategy with respect to cost and security level can be determined; and it is here that S@R can help.

    References

    Bjørk, T.,(2009). Arbitrage Theory in Continuous Time. Oxford University Press, Oxford.

    CME Group., (2010).Trading the corn for ethanol crush,
    http://www.cmegroup.com/trading/agricultural/corn-for-ethanol-crush.html

    CME Group., (July 11. 2011). Ethanol Outlook Report, http://cmegroup.barchart.com/ethanol/

    Dalgran, R.,A., (2009) Inventory and Transformation Hedging Effectiveness in Corn Crushing. Journal of Agricultural and Resource Economics 34 (1): 154-171.

    Duffie, D., (1989). Futures Markets. Prentice Hall, Englewood Cliffs, NJ.

    Hull, J. (2003). Options, Futures, and Other Derivatives (5th edn). Prentice Hall, Englewood Cliffs, N.J.

    Johnson, L., L., (1960). The Theory of Hedging and Speculation in Commodity Futures, Review of Economic Studies, XXVII, pp. 139-151.

    Mallory, M., L., Hayes, D., J., & Irwin, S., H. (2010). How Market Efficiency and the Theory of Storage Link Corn and Ethanol Markets. Center for Agricultural and Rural Development Iowa State University Working Paper 10-WP 517.

    Patzek, T., W., (2004). Sustainability of the Corn-Ethanol Biofuel Cycle, Department of Civil and Environmental Engineering, U.C. Berkeley, Berkeley, CA.

    Patzek, T., W., (2006). A Statistical Analysis of the Theoretical Yield of Ethanol from Corn Starch, Natural Resources Research, Vol. 15, No. 3.

    Peters, O. (2010). Optimal leverage from non-ergodicity. Quantitative Finance, doi:10.1080/14697688.2010.513338.

    Shapouri,H., Duffield,J.,A., & Wang, M., (2002). The Energy Balance of Corn Ethanol: An Update. U.S. Department of Agriculture, Office of the Chief Economist, Office of Energy Policy and New Uses. Agricultural Economic Report No. 814.

    Stein, J.L. (1961). The Simultaneous Determination of Spot and Futures Prices. American Economic Review, vol. 51, p.p. 1012-1025.

    Footnotes

  • The tool that would improve everybody’s toolkit

    The tool that would improve everybody’s toolkit

    Edge, which every year ((http://www.edge.org/questioncenter.html))   invites scientists, philosophers, writers, thinkers and artists to opine on a major question of the moment, asked this year: “What scientific concept would improve everybody’s cognitive toolkit?”

    The questions are designed to provoke fascinating and inspiring answers, and are typically open-ended, such as: "What will change everything?" (2008), "What are you optimistic about?" (2007), and "How is the internet changing the way you think?" (last year's question). The questions ((Since 1998.)) are often turned into paperback books.

    This year many of the 151 contributors pointed to risk and uncertainty in their answers. In the following we bring excerpts from some of the answers. We would, however, advise the interested reader to look up the complete answers:

    A Probability Distribution

    The notion of a probability distribution would, I think, be a most useful addition to the intellectual toolkit of most people.

    Most quantities of interest, most projections, most numerical assessments are not point estimates. Rather they are rough distributions — not always normal, sometimes bi-modal, sometimes exponential, sometimes something else.

    Related ideas of mean, median, and variance are also important, of course, but the simple notion of a distribution implicitly suggests these and weans people from the illusion that certainty and precise numerical answers are always attainable.

    JOHN ALLEN PAULOS, Professor of Mathematics, Temple University, Philadelphia.

    Randomness

    The First Law of Randomness: There is such a thing as randomness.
    The Second Law of Randomness: Some events are impossible to predict.
    The Third Law of Randomness: Random events behave predictably in aggregate even if they're not predictable individually.

    CHARLES SEIFE, Professor of Journalism, New York University; formerly journalist, Science magazine; Author, Proofiness: The Dark Arts of Mathematical Deception.

    The Uselessness of Certainty

    Every knowledge, even the most solid, carries a margin of uncertainty. (I am very sure about my own name … but what if I just hit my head and got momentarily confused?) Knowledge itself is probabilistic in nature, a notion emphasized by some currents of philosophical pragmatism. Better understanding of the meaning of probability, and especially realizing that we never have, nor need, 'scientifically proven' facts, but only a sufficiently high degree of probability, in order to take decisions and act, would improve everybody's conceptual toolkit.

    CARLO ROVELLI, Physicist, University of Aix-Marseille, France; Author, The First Scientist: Anaximander and the Nature of Science.

    Uncertainty

    Until we can quantify the uncertainty in our statements and our predictions, we have little idea of their power or significance. So too in the public sphere. Public policy performed in the absence of understanding quantitative uncertainties, or even understanding the difficulty of obtaining reliable estimates of uncertainties usually means bad public policy.

    LAWRENCE KRAUSS, Physicist, Foundation Professor & Director, Origins Project, Arizona State University; Author, A Universe from Nothing; Quantum Man: Richard Feynman’s Life in Science.

    Risk Literacy

    Literacy — the ability to read and write — is the precondition for an informed citizenship in a participatory democracy. But knowing how to read and write is no longer enough. The breakneck speed of technological innovation has made risk literacy as indispensable in the 21st century as reading and writing were in the 20th century. Risk literacy is the ability to deal with uncertainties in an informed way.

    GERD GIGERENZER, Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings.

    Living is fatal

    The ability to reason clearly in the face of uncertainty. If everybody could learn to deal better with the unknown, then it would improve not only their individual cognitive toolkit (to be placed in a slot right next to the ability to operate a remote control, perhaps), but the chances for humanity as a whole.

    SETH LLOYD, Quantum Mechanical Engineer, MIT; Author, Programming the Universe.

    Uncalculated Risk

    We humans are terrible at dealing with probability. We are not merely bad at it, but seem hardwired to be incompetent, in spite of the fact that we encounter innumerable circumstances every day which depend on accurate probabilistic calculations for our wellbeing. This incompetence is reflected in our language, in which the common words used to convey likelihood are “probably” and “usually” — vaguely implying a 50% to 100% chance. Going beyond crude expression requires awkwardly geeky phrasing, such as “with 70% certainty,” likely only to raise the eyebrow of a casual listener bemused by the unexpected precision. This blind spot in our collective consciousness — the inability to deal with probability — may seem insignificant, but it has dire practical consequences. We are afraid of the wrong things, and we are making bad decisions.

    GARRETT LISI, Independent Theoretical Physicist

    And there is more … much more at the Edge site

  • Plans based on average assumptions ……

    Plans based on average assumptions ……

    This entry is part 3 of 4 in the series The fallacies of scenario analysis

     

    The Flaw of Averages states that: Plans based on the assumption that average conditions will occur are usually wrong. (Savage, 2002)

    Many economists use what they believe to be most likely ((Most likely estimates are often made in-house based on experience and knowledge about their operations.)) or average values ((Forecasts for many types of variable can be bought from suppliers of ‘consensus forecasts’.))  (Timmermann, 2006) (Gavin & Pande, 2008) as input for the exogenous variables in their spreadsheet calculations.

    We know however that:

    1. the probability of any variable having an outcome equal to any of these values is close to zero,
    2. and the probability of having outcomes for all the (exogenous) variables in the spreadsheet model equal to their averages is virtually zero.

    So why do they do it? They obviously lack the necessary tools to calculate with uncertainty!

    But if a small deviation from the most likely value is admissible, how often will the use of a single estimate like the most probable value be ‘correct’?

    We can try to answer that by looking at some probability distributions that may represent the ‘mechanism’ generating some of these variables.

    Let's assume that we are entering a market with a new product. We know of course the upper and lower limits of our future possible market share, but not the actual number, so we guess it to be the average value = 0.5. Since we have no prior knowledge we have to assume that the market share is uniformly distributed between zero and one:

    If we then plan sales and production for a market share between 0.4 and 0.5, we would out of a hundred trials only have guessed the market share correctly 13 times. In fact we would have overestimated the market share 31 times and underestimated it 56 times.
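    The thought experiment is easy to reproduce. The counts below are long-run frequencies; the 13/31/56 split quoted above is one particular 100-trial sample of the same process:

```python
import random

random.seed(5)
n = 100_000
over = under = correct = 0
for _ in range(n):
    share = random.random()        # market share ~ Uniform(0, 1)
    if share < 0.4:
        over += 1                  # actual share below plan: we overestimated
    elif share > 0.5:
        under += 1                 # actual share above plan: we underestimated
    else:
        correct += 1               # plan of 0.4-0.5 turned out 'correct'

print(f"correct {correct / n:.0%}, over {over / n:.0%}, under {under / n:.0%}")
```

    In the long run the plan is 'correct' only about 10 % of the time, with overestimation around 40 % and underestimation around 50 %.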

    Let's assume a production process where the acceptable deviation from some fixed measurement is 0.5 mm, and where the actual deviation has a normal distribution with expected deviation equal to zero but a standard deviation of one:

    Using the average deviation to calculate the expected error rate will falsely lead us to believe it to be zero, while in the long run it will in fact be about 62 %.
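    With the stated tolerance and distribution, the long-run error rate can be computed in closed form from the normal CDF:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P(|X| > 0.5) for X ~ N(0, 1): the part outside the +/- 0.5 mm tolerance
error_rate = 2.0 * (1.0 - normal_cdf(0.5))
print(f"{error_rate:.1%}")  # → 61.7%
```

    The average deviation (zero) says nothing about this; the whole error rate sits in the spread of the distribution, not its mean.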

    Let’s assume that we have a contract for drilling a tunnel, and that the cost will depend on the hardness of the rock to be drilled. The contract states that we will have to pay a minimum of $ 0.5M and a maximum of $ 2M, with the most likely cost being $ 1M. The contract and our imperfect knowledge of the geology make us assume the cost distribution to be triangular:

    Using the average ((The bin containing the average in the histogram.)) as an estimate of the expected cost will give a correct answer in only 14 out of 100 trials, with the cost being lower in 45 and higher in 41.
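    This example is also easy to simulate. Note that the mean of the triangular distribution, (0.5 + 1 + 2) / 3 ≈ $1.17M, sits above the most likely $1M, so 'most likely' and 'expected' cost answer different questions:

```python
import random

random.seed(6)
# Tunnel cost ~ Triangular(min=0.5, mode=1.0, max=2.0), in $M
costs = [random.triangular(0.5, 2.0, 1.0) for _ in range(100_000)]

mean_cost = sum(costs) / len(costs)
print(round(mean_cost, 2))  # about 1.17, not the most likely 1.0
```

    The asymmetry of the distribution is what makes a single 'most likely' estimate systematically misleading here.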

    Now let's assume that we are performing deep-sea drilling for oil and that we have a single cost estimate of $500M. However, we expect the cost deviation to be distributed as in the figure below, with a typically small negative deviation and, on average, a small positive deviation:

    So, for all practical purposes this would be considered a low-risk operation. What we have failed to do is to look at the tails of the cost deviation distribution, which turns out to be Cauchy distributed with long tails, including the possibility of catastrophic events:

    The event far out on the right tail might be considered a Black Swan (Taleb, 2007), but as we now know they happen from time to time.

    So, even more important than the fact that a single estimate will prove you wrong most of the time: it also obscures what you do not know – the risk of being wrong.

    Don't worry about the average, worry about how large the variations are, how frequently they occur and why they exist. (Fung, 2010)

    Rather than “Give me a number for my report,” what every executive should be saying is “Give me a distribution for my simulation.”(Savage, 2002)

    References

    Gavin,W.,T. & Pande,G.(2008), FOMC Consensus Forecasts, Federal Reserve Bank of St. Louis Review, May/June 2008, 90(3, Part 1), pp. 149-63.

    Fung, K., (2010). Numbers Rule Your World. New York: McGraw-Hill.

    Savage, L., S.,(2002). The Flaw of Averages. Harvard Business Review, (November), 20-21.

    Savage, L., S., & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

    Taleb, N., (2007). The Black Swan. New York: Random House.

    Timmermann, A.,(2006).  An Evaluation of the World Economic Outlook Forecasts, IMF Working Paper WP/06/59, www.imf.org/external/pubs/ft/wp/2006/wp0659.pdf

    Endnotes

  • Planning under Uncertainty

    Planning under Uncertainty

    This entry is part 3 of 6 in the series Balance simulation

     

    ‘Would you tell me, please, which way I ought to go from here?’ (asked Alice)
    ‘That depends a good deal on where you want to get to,’ said the Cat.
    'I don't much care where—' said Alice.
    ‘Then it doesn’t matter which way you go,’ said the Cat.
    –    Lewis Carroll, Alice’s Adventures in Wonderland

    Let's say that the board has sketched a desired future state (value of equity) for the company, and that you are left to find out whether it is possible to get there and, if so, the road to take. The first part implies finding out whether the desired state belongs to the set of feasible future states for your company. If it does, you will need a road map to get there; if it does not, you will have to find out what additional means you need, and whether it is possible to acquire them.

    The current state (equity value) of your company is itself uncertain, since it depends on future sales, costs and profits – variables that are usually highly uncertain. The desired future state is even more so, since you need to find the strategies (roads) that can take you there, and among those the one best suited to the situation. The 'best strategies' will be those that with the highest probability and lowest cost give you the desired state – that is, that have the desired state, or a better one, as a very probable outcome:

    Each of the 'best strategies' will have many different combinations of values for the variables that describe the company that can produce the desired state(s). In Monte Carlo simulation this means that a few, some or many of the thousands of runs – realizations of future states – will give equity value outcomes that fulfill the required state. What we then need is to find out how each of these came about – the transition – and select the most promising ones.
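    The idea of keeping the realizations that reach the desired state, together with their transition paths, can be sketched in a toy model. The single-factor random-walk equity model below is invented for illustration; the S@R balance simulation works on a full company model, not on a one-line return process:

```python
import random

random.seed(7)
target = 120.0  # assumed desired equity value
hits = []       # realizations (with full paths) that reach the target

for run in range(10_000):
    equity = 100.0
    path = [equity]
    for year in range(5):
        equity *= 1.0 + random.gauss(0.03, 0.10)  # uncertain annual return
        path.append(equity)
    if path[-1] >= target:
        hits.append(path)  # keep the whole transition path, not just the outcome

print(f"{len(hits)} of 10000 runs reach the desired state")
```

    The kept paths are what the intermediate-stop reports describe: not only that the state was reached, but by which sequence of transitional states.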

    The S@R balance simulation model has the ability to make intermediate stops when the desired state(s) has been reached, giving the opportunity to take out complete reports describing the state(s), how they were reached, and by what path of transitional states.

    The flip side is that we can use the same model and the same assumptions to take out similar reports on how undesirable states were reached – and their paths of transitional states. This set of reports will clearly describe the risks underlying the strategy, and how and when they might materialize.

    The dominant strategy will then be the one that has the desired state, or a better one, as a very probable outcome while at the same time having the lowest probability of highly undesirable outcomes (the stochastically dominant strategy):

    Mulling over possible targets – or scenario analysis calculating backwards the value of each variable required to meet the target – is a waste of time, since the environment is stochastic and a number of different paths (time-lines) can lead to the desired state:

    And even if you could do the calculations, what would the probabilities be?

    Carroll, L., (2010). Alice's Adventures in Wonderland – Original Version. New York: Cosimo Classics.