
Category: Decision making

  • Inventory management – Some effects of risk pooling


    This entry is part 3 of 4 in the series Predictive Analytics

    Introduction

    The newsvendor described in the previous post has decided to branch out, placing newsboys at strategic corners in the neighborhood. He will first consider three locations, but has six in his sights.

    The question to be pondered is how many newspapers he should order for these three locations, and the possible effects on profit and risk (Eppen, 1979) and (Chang & Lin, 1991).

    He assumes that the demand distribution he experienced at the first location will also apply to the two others, and that all locations (points of sale) can be served from a centralized inventory. For the sake of simplicity he further assumes that all points of sale can be restocked instantly (i.e. with zero lead time) at zero cost – if necessary or advantageous, by shipment from one of the other locations – and that demand at the different locations will be uncorrelated. The individual points of sale will initially have a working stock, but will have no need of safety stock.

    In short, this is equivalent to having one inventory serve the newspaper sales generated by three (or six) copies of the original demand distribution:

    The aggregated demand distribution for the three locations is still positively skewed (0.32), but much less so than the original (0.78), and has a lower coefficient of variation – 27% against 45% for the original ((The quartile variation has been reduced by 37%.)):

    The demand variability has thus been substantially reduced by this risk pooling ((We distinguish between ten main types of risk pooling that may reduce total demand and/or lead time variability (uncertainty): capacity pooling, central ordering, component commonality, inventory pooling, order splitting, postponement, product pooling, product substitution, transshipments, and virtual pooling. (Oeser, 2011)))  and the question now is how this will influence the vendor’s profit.
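    A minimal sketch of this pooling effect, assuming independent lognormal demands; the distribution family and its parameters are stand-ins, loosely calibrated to the previous post's mean of about 2096 units and coefficient of variation of 0.45, not the actual simulated demand:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_sim, n_loc = 100_000, 3

    # Independent copies of a right-skewed stand-in demand distribution
    # (assumed lognormal; parameters give mean ~2100 units and CV ~0.45).
    demand = rng.lognormal(mean=7.56, sigma=0.43, size=(n_sim, n_loc))
    pooled = demand.sum(axis=1)

    def cv(x):
        """Coefficient of variation: standard deviation over the mean."""
        return x.std() / x.mean()

    print(f"single location CV: {cv(demand[:, 0]):.2f}")  # ~0.45
    print(f"pooled (3) CV:      {cv(pooled):.2f}")         # ~0.45/sqrt(3) ~ 0.26
    ```

    With uncorrelated locations the CV of the pooled demand falls by a factor of the square root of the number of locations, in line with the reduction from 45% to 27% reported above; correlated demands would weaken or strengthen this effect, as discussed below.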

    Profit and Inventory level with Risk Pooling

    As in the previous post we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold

    The figure below indicates what will happen as we change the inventory level. We can see that, as we successively move to higher levels (from left to right on the x-axis), expected profit (blue line) increases to a point of maximum – ¤16541 at a level of 7149 units:

    Compared with three separate warehouses, each run at the single-warehouse optimum (profit ¤4963 at a level of 2729 units, see previous post), this risk pooling has increased the vendor's profit by 11.1% while reducing his inventory by 12.7%. Centralization of the three inventories has thus been a successful operational hedge ((Risk pooling can be considered a form of operational hedging. Operational hedging is risk mitigation using operational instruments.)) for our newsvendor, mitigating some, but not all, of the demand uncertainty.

    Since this risk mitigation was a success, the newsvendor wants to calculate the possible benefits of serving six newsboys at different locations from the same inventory.

    Under the same assumptions, it turns out that this gives an even better result, with an increase in profit of almost 16% while at the same time reducing the inventory by 15%:

    The inventory ‘centralization’ has then both increased profit and reduced inventory level compared to a strategy with inventories held at each location.

    Centralizing inventory (inventory pooling) in a two-echelon supply chain may thus reduce costs and increase profits for the newsvendor carrying the inventory, but the individual newsboys may lose profit due to the pooling. On the other hand, the newsvendor will certainly lose profit if he lets the newsboys decide the levels of their own inventories and of the centralized inventory.

    One of the reasons behind this conflict of interest is that both the newsvendor and the newsboys will benefit one-sidedly from shifting the demand risk to the other party, even though overall performance may suffer as a result (Kemahlioğlu-Ziya, 2004) and (Anupindi & Bassok, 1999).

    In real life, the actual risk pooling effects will depend on the correlations between the locations' demands. Positive correlation reduces the effects, while negative correlation increases them. If all locations were perfectly positively correlated the effect would be zero, and a correlation coefficient of minus one would maximize the effects.

    The third effect

    The third direct effect of risk pooling is reduced variability of expected profit. We can see this by plotting the profit variability, measured by its coefficient of variation ((The coefficient of variation is defined as the ratio of the standard deviation to the mean – also known as unitized risk.)) (CV), for the three strategies discussed above: one single inventory (warehouse), all three inventories centralized, and all six inventories centralized.

    The graph below depicts the situation. The three curves show the CV for corporate profit for each of the three alternatives, and the vertical lines the corresponding points of maximum profit.

    The slope of each curve shows the profit's sensitivity to changes in the inventory level, and the location of each curve shows that strategy's impact on the predictability of realized profit.

    A single-warehouse strategy (blue) clearly gives a much lower ability to predict future profit than the 'six centralized warehouses' strategy (purple), while the 'three centralized warehouses' strategy (green) falls somewhere in between:

    So, in addition to reduced costs and increased profits, centralization also gives a more predictable result and lower sensitivity to the inventory level – and hence greater leeway in the practical application of different policies for inventory planning.

    Summary

    We have thus shown, through Monte Carlo simulation, that the benefits of pooling increase with the number of locations, and that they can be calculated without knowing the closed form ((In mathematics, an expression is said to be in closed form if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution.

    Since we do not need the closed form of the demand distribution, we are not limited to distributions with low demand variability, nor troubled by the possibility of negative demand (normal distributions, etc.). Expanding the scope of the analysis to include stochastic supply, supply disruptions, information sharing, localization of inventory etc. is a natural extension of this method ((We will return to some of these issues in later posts.)).

    This opens up the use of robust and efficient methods and techniques for solving problems in inventory management, unrestricted by the form of the demand distribution – and, best of all, the results, given as graphs, are more easily communicated to all parties than purely mathematical descriptions of the solutions.

    References

    Anupindi, R. & Bassok, Y. (1999). Centralization of stocks: Retailers vs. manufacturer. Management Science, 45(2), 178–191. doi: 10.1287/mnsc.45.2.178, accessed 09/12/2012.

    Chang, Pao-Long & Lin, C.-T. (1991). Centralized Effect on Expected Costs in a Multi-Location Newsboy Problem. Journal of the Operational Research Society of Japan, 34(1), 87–92.

    Eppen, G. D. (1979). Effects of centralization on expected costs in a multi-location newsboy problem. Management Science, 25(5), 498–501.

    Kemahlioğlu-Ziya, E. (2004). Formal methods of value sharing in supply chains. PhD thesis, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, July 2004. http://smartech.gatech.edu/bitstream/1853/4965/1/kemahlioglu ziya_eda_200407_phd.pdf, accessed 09/12/2012.

    Oeser, G. (2011). Methods of Risk Pooling in Business Logistics and Their Application. Europa-Universität Viadrina Frankfurt (Oder). URL: http://opus.kobv.de/euv/volltexte/2011/45, accessed 09/12/2012.


  • Inventory Management: Is profit maximization right for you?


    This entry is part 2 of 4 in the series Predictive Analytics

     

    Introduction

    In the following we will exemplify how sales forecasts can be used to set inventory levels in single or multilevel warehousing. By inventory we mean a stock or store of goods: finished goods, raw materials, purchased parts, and retail items. Since the problem discussed is the same for both production and inventory, the two terms will be used interchangeably.

    Good inventory management is essential to the successful operation of most organizations, both because of the amount of money the inventory represents and because of the impact that inventories have on daily operations.

    An inventory can have many purposes among them the ability:

    1. to support independence of operations,
    2. to meet both anticipated demand and variation in demand,
    3. to decouple components of production and allow flexibility in production scheduling and
    4. to hedge against price increases, or to take advantage of quantity discounts.

    The many advantages of stock keeping must however be weighed against the costs of keeping the inventory. This can best be described as the “too much/too little problem”: order too much and inventory is left over, or order too little and sales are lost.

    This can be a single-period (a one-time purchasing decision) or a multi-period problem, involving a single warehouse or geographically dispersed multilevel warehousing. The task can then be to minimize the organization's total cost, maximize the level of customer service, minimize 'loss', maximize profit, etc.

    Whatever the purpose, the calculation will have to be based on knowledge of the sales distribution. In addition, sales will usually have a seasonal variation, creating a balancing act between production, logistics and warehousing costs. In the example given below, the sales forecast should be viewed as a periodic forecast (month, quarter, etc.).

    We have intentionally selected a 'simple' problem to highlight the optimization process and the properties of the optimal solution. The latter is seldom described in the standard texts.

    The News-vendor problem

    The news-vendor faces a one-time purchasing decision. To maximize expected profit, the order quantity Q should be set so that the expected loss on the Qth unit equals the expected gain on the Qth unit:

    I.  Co * F(Q) = Cu * (1-F(Q)) , where

    Co = The cost of ordering one more unit than what would have been ordered if demand had been known – or the increase in profit enjoyed by having ordered one fewer unit,

    Cu = The cost of ordering one fewer unit than what would have been ordered if demand had been known  – or the increase in profit enjoyed by having ordered one more unit, and

    F(Q) = the probability that demand q ≤ Q. By rearranging terms in the above equation we find:

    II.  F(Q) = Cu/(Co + Cu)

    This ratio is often called the critical ratio (CR). The usual way of solving this is to assume that demand is normally distributed, giving Q as:

    III.  Q = m + z * s, where z = (Q − m)/s is standard normally distributed (zero mean and unit variance), m is the mean and s the standard deviation of demand.

    Unfortunately, demand rarely has a normal distribution, and to make things worse we usually don't know the exact distribution at all. We can only 'find' it by Monte Carlo simulation, and thus have to find the Q satisfying equation (I) by numerical methods.
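    A minimal sketch of such a numerical solution: when the demand distribution exists only as a Monte Carlo sample, the Q satisfying equation (I) is simply the empirical quantile of that sample at the critical ratio. The lognormal stand-in and the unit cost below are assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Stand-in for a simulated demand sample (assumed lognormal, loosely
    # calibrated to a mean of ~2100 units and a CV of ~0.45).
    demand = rng.lognormal(mean=7.56, sigma=0.43, size=100_000)

    co = 1.0                        # overage cost per unit (assumed)
    markup = 300.0                  # product markup in %, as in the example below
    cu = co * (1 + markup / 100)    # underage cost, equation IV
    cr = cu / (co + cu)             # critical ratio, equation II -> 0.8

    q_opt = np.quantile(demand, cr)  # numerical solution of F(Q) = CR
    print(f"critical ratio {cr:.2f} -> order quantity {q_opt:.0f} units")
    ```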

    For the news-vendor the inventory level should be set to maximize profit given the sales distribution. This implies that the cost of lost sales will have to be weighed against the cost of adding more to the stock.

    If we for the moment assume that all these costs can be regarded as fixed and independent of the inventory level, then the product markup (% of cost) will determine the optimal inventory level:

    IV.  Cu = Co * (1 + Markup/100)

    In the example given here the critical ratio is approx. 0.8. The question then is whether the inventory level indicated by that critical ratio will always be the best for the organization.
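    To see where the 0.8 comes from: with the 300% markup used in the example below, equation IV gives Cu = Co * (1 + 300/100) = 4 * Co, so that equation II gives CR = Cu/(Co + Cu) = 4Co/5Co = 0.8.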

    Expected demand

    The following graph indicates the news-vendor's demand distribution. Expected demand is 2096 units ((Median demand is 1819 units, and demand most typically lies in the range of 1500 to 2000 units.)), but the distribution is heavily skewed to the right ((The demand distribution has a skewness of 0.78, a coefficient of variation of 0.45, a lower quartile of 1432 units and an upper quartile of 2720 units.)), so there is a possibility of demand far exceeding the expected demand:

    By setting the product markup – in the example below it is set to 300% – we can calculate profit and loss based on the demand forecast.

    Profit and Loss (of opportunity)

    In the following we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold

    The figure below indicates what will happen as we change the inventory level. We can see that, as we successively move to higher levels (from left to right on the x-axis), expected profit (blue line) increases to a point of maximum – ¤4963 at a level of 2729 units:

    At that point we can expect to have some excess stock and in some cases also lost sales. But regardless, it is at this point that expected profit is maximized, so this gives the optimal stock level.
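    A sketch of this kind of search over inventory levels, treating the production cost as the only cost and using a lognormal stand-in for the simulated demand (both assumptions for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    # Stand-in demand sample (assumed lognormal, mean ~2100 units, CV ~0.45).
    demand = rng.lognormal(mean=7.56, sigma=0.43, size=50_000)

    cost = 1.0               # production cost per unit (assumed)
    price = cost * 4.0       # 300 % markup

    def expected_profit(q):
        sold = np.minimum(demand, q)
        # Sales revenue less production costs of both sold and unsold items.
        return (price * sold - cost * q).mean()

    levels = np.arange(1500, 4500, 25)
    profits = [expected_profit(q) for q in levels]
    print("profit-maximizing level:", levels[int(np.argmax(profits))])
    ```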

    Since we include the costs of both sold and unsold items, the point giving maximum expected profit will be below the point minimizing expected loss – ¤1460 at a production level of 2910 units.

    Given the optimal inventory level (2729 units) we find the actual sales frequency distribution as shown in the graph below. At this level we expect an average sale of 1920 units – ranging from 262 to 2729 units ((Having a lower quartile of 1430 units and an upper quartile of 2714 units.)).

    The graph shows that the distribution possesses two different modes ((The mode is the most common value in a set of observations.)), or two local maxima. This bimodality is created by the fact that the demand distribution is heavily skewed to the right: any demand exceeding 2729 units implies 2729 units sold, with the rest as lost sales.

    This bimodality will of course be reflected in the distribution of realized profits. Keep in mind that the (blue) line giving maximum profit is an average of all profits realized during the Monte Carlo simulation, given the demand distribution and the selected inventory level. We can therefore expect realized profit both below and above this average (¤4963) – as shown in the frequency graph below:

    Expected (average) profit is ¤4963, with a minimum of −¤1681 and a maximum of ¤8186; the range of realized profits ((Having a lower quartile of ¤2991 and an upper quartile of ¤8129.)) is therefore very large: ¤9867.

    So even if we maximize profit, we must expect a large variation in realized profits; there is no way that the original uncertainty in the demand distribution can be reduced or removed.

    Risk and Reward

    Increased profit comes at a price: increased risk. The graph below describes the situation; the blue curve shows how expected profit increases with the production or inventory (service) level. The spread between the green and red curves indicates the band within which actual profit will fall with 80% probability. As is clear from the graph, this band widens as we move to the right, indicating an increased upside (area up to the green line) but also an increased probability of a substantial downside (area down to the red line):

    For some companies – depending on the shape of the demand distribution – concerns other than profit maximization might therefore be more important, such as predictability of results (profit). The act of setting inventory or production levels should accordingly be viewed as an element of the board's risk assessment.

    On the other hand, the uncertainty band around loss will decrease as the service level increases. This lies in the fact that loss due to lost sales diminishes as the service level increases, and that the high markup easily covers the cost of over-production.

    Thus a strategy of 'loss' minimization will falsely give a sense of 'risk minimization', while in reality it increases the uncertainty of future realized profit.

    Product markup

    The optimal stock or production level will be a function of the product markup. A high markup gives room for a higher level of unsold items, while a low markup necessitates a focus on cost reduction and the acceptance of stock-outs:

    The relation between markup (%) and production level is quadratic ((Markup (%) = 757.5 − 0.78 * production level + 0.00023 * production level².)), implying that the markup will have to be increasingly higher the further out on the right tail we fix the production level.

    The Optimal inventory (production) level

    If we put it all together we get the chart below, in which the green curve is the cumulative sales distribution, giving the probability of each level of sales, and the brown curve the optimal stock or production level given the markup.

    The optimal stock level is then found by drawing a line from the markup axis (right y-axis) to the curve for the optimal stock level, and down to the x-axis, giving the stock level. By continuing the line from the markup axis to the probability axis (left y-axis), we find the probability of stock-out (one minus the cumulative probability) and the probability of having a stock level in excess of demand:

    By using the sales distribution we can find the optimal stock/production level given the markup. This would not have been possible with single-point sales forecasts – which could have ended up almost anywhere on the curve for forecasted sales.

    Even if a single-point forecast managed to find expected sales – as mean, mode or median – it would have given the wrong answer about the optimal stock/production level, since the shape of the sales distribution would have been unknown.

    In this case, with the sales distribution having a right tail, the level would have been too low – or, with a low markup, too high. With a left-skewed sales distribution the result would have been the other way around: the level would have been too high, and with a low markup probably too low.

    In the case of multilevel warehousing, the above analyses have to be performed on all levels and solved as a simultaneous system.

    The state of affairs at the point of maximum

    To have the full picture of the state of affairs at the point of maximum, we have to take a look at what we can expect of over- and under-production. At the level giving maximum expected profit we will on average have an underproduction of 168 units, ranging from zero to nearly 3000 ((Having a coefficient of variation of almost 250%.)). On the face of it this could easily be interpreted as having set the level too low, but as we shall see, that is not the case.

    Since we have a high markup, lost sales weigh heavily in the profit maximization, and as a result we can expect to have unsold items in stock at the end of the period. On average we will have a little over 800 units left in stock, ranging from zero to nearly 2500. The lower quartile is 14 units and the upper 1300 units, so in 75% of the cases we will have an overproduction of less than 1300 units. In the remaining 25% of the cases the overproduction will be in the range of 1300 to 2500 units.

    Even with the possibility of ending the period with a large number of unsold units, the strategy of profit maximization will on average give the highest profit – however, as we have seen, with a very high level of uncertainty about the profit actually being realized.

    Now, since a lower inventory level in this case will only reduce profit by a small amount, but lower the confidence limit by a substantial amount, other strategies giving more predictability of the actual result should be considered.

  • “How can you be better than us understand our business risk?”


    This is a question we often hear, and the simple answer is that we don't! But by using our methods and models we can use your knowledge in such a way that it can be systematically measured and accumulated throughout the business, and presented in easy-to-understand graphs to management and the board.

    The main reason for this lies in how we can treat uncertainties ((Variance is used as a measure of uncertainty or risk.)) in the variables, and in the ability to handle uncertainties stemming from variables from different departments simultaneously.

    Risk is usually compartmentalized in “silos” and regarded as proprietary to the department – not as risk correlated or co-moving with other risks in the company, caused by common underlying events influencing their outcomes:

    When Queen Elizabeth visited the London School of Economics in autumn 2008 she asked why no one had foreseen the crisis. The British Academy Forum replied to the Queen in a letter six months later. Included in the letter was the following:

    One of our major banks, now mainly in public ownership, reputedly had 4000 risk managers. But the difficulty was seeing the risk to the system as a whole rather than to any specific financial instrument or loan (…) they frequently lost sight of the bigger picture ((The letter from the British Academy to the Queen is available at: http://media.ft.com/cms/3e3b6ca8-7a08-11de-b86f-00144feabdc0.pdf)).

    To be precise, we are actually not simulating risk in and of itself; risk is just a by-product of simulating a company's financial and operational (economic) activities. Since the variables describing these activities are of a stochastic nature – that is, they contain uncertainty – all variables in the P&L and balance sheet will contain uncertainty. They can as such best be described by the shape of their frequency distribution, found after thousands of simulations. And it is the shape of these distributions that describes the uncertainty in the variables.

    Most ERM activities are focused on changing the left or downside tail – the tail that describes what normally is called risk.

    We however are also interested in the right tail or upside tail, the tail that describes possible outcomes increasing company value. Together they depict the uncertainty the company faces:

    S@R thus treats company risk holistically, by modeling risks (uncertainty) as parts of the overall operational and financial activities. We are thus able to “add up” the risks to a consolidated level.

    Having the probability distribution for e.g. the company’s equity value gives us the opportunity to apply risk measures to describe the risk facing the shareholders or the risk added or subtracted by different strategies like investments or risk mitigation tools.

    Since this can't be done with ordinary addition ((The variance of the sum of two stochastic variables is the sum of their variances plus twice the covariance between them.)) (or subtraction), we have to use Monte Carlo simulation.
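    A small illustration of why ordinary addition fails for distributions: means add, but quantiles do not. For two correlated, skewed stand-in variables, the 95th percentile of the sum is not the sum of the 95th percentiles:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000
    # Two correlated, right-skewed value drivers (hypothetical distributions).
    x = rng.lognormal(0.0, 0.6, n)
    y = 0.5 * x + rng.lognormal(0.0, 0.6, n)

    p95 = lambda v: np.percentile(v, 95)
    print("sum of 95th percentiles:", p95(x) + p95(y))
    print("95th percentile of sum: ", p95(x + y))  # not the same thing
    ```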

    The value added by this is:

    1.  A method for assessing changes in strategy; investments, new markets, new products etc.
    2. A heightening of risk awareness in management across an organization’s diverse businesses.
    3. A consistent measure of risk allowing executive management and board reporting and response across a diverse organization.
    4. A measure of risk (including credit and market risk) for the organization that can be compared with capital required by regulators, rating agencies and investors.
    5. A measure of risk by organization unit, product, channel and customer segment which allows risk adjusted returns to be assessed, and scarce capital to be rationally allocated.
    6.  A framework from which the organization can decide its risk mitigation requirements rationally.
    7. A measure of risk versus return that allows businesses and in particular new businesses (including mergers and acquisitions) to be assessed in terms of contribution to growth in shareholder value.

    Independent risk experts are often essential for consistency and integrity. They can also add value to the process by sharing risk and risk management knowledge gained both externally and elsewhere in the organization. This is not just a measurement exercise, but an investment in risk management culture.

    Forecasting

    All business planning is built on forecasts of market size, market share, prices and costs. These are usually given as low, mean and high scenarios, without specifying the relationship between the variables. It is easy to show that when you combine such forecasts you can end up very wrong ((https://www.strategy-at-risk.com/2009/05/04/the-fallacies-of-scenario-analysis/)). However, the 5%, 50% and 95% values from the scenarios can be used to produce a probability distribution for each variable, and the simultaneous effect of these distributions can be calculated using Monte Carlo simulation – giving, for instance, the probability distribution for profit or cash flow from that market. This can in turn be used to consolidate the company's cash flow or profit.
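    A minimal sketch of turning scenario values into a distribution, assuming (for illustration) a lognormal shape and hypothetical 5%, 50% and 95% values. With only two free parameters the lognormal cannot in general match all three values, so comparing the implied 5% value with the stated one becomes a useful consistency check on the scenarios themselves:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical 5 %, 50 % and 95 % scenario values for, say, market size.
    low, mid, high = 80.0, 100.0, 140.0

    # Fit a lognormal: the median fixes mu, the 95 % value fixes sigma.
    z95 = stats.norm.ppf(0.95)            # ~1.645
    mu = np.log(mid)
    sigma = np.log(high / mid) / z95

    fitted = stats.lognorm(s=sigma, scale=np.exp(mu))
    print("implied 5 % value:", fitted.ppf(0.05))  # compare with the stated low

    # Draws for the variable, ready for the Monte Carlo simulation:
    market_size = fitted.rvs(size=100_000, random_state=1)
    ```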

    Controls and Mitigation

    Controls and mitigation play a significant part in reducing the likelihood of a risk event, or the amount of loss should one occur. They do, however, have a material cost. One of the drivers of measuring risk is to support a more rational analysis of the costs and benefits of controls and mitigation.
    The result after controls and mitigation becomes the final, or residual, risk distribution for the company.

    Distributing Diversification Benefits

    At each level of aggregation within a business diversification benefits accrue, representing the capacity to leverage the risk capital against a larger range of non-perfectly correlated risks. How should these diversification benefits be distributed to the various businesses?

    This is not an academic matter, as the residual risk capital ((Bodoff, N. M. (2009). Capital Allocation by Percentile Layer. Variance, 3(1), Casualty Actuarial Society, pp. 13–30, http://www.variancejournal.org/issues/03-01/13.pdf; Erel, Isil, Myers, Stewart C. & Read, James (2009). Capital Allocation. Fisher College of Business Working Paper No. 2009-03-010. Available at SSRN: http://ssrn.com/abstract=1411190 or http://dx.doi.org/10.2139/ssrn.1411190.)) attributed to each business segment is critical in determining its shareholder value creation, and thus its strategic worth to the enterprise. Getting this wrong could lead the organization to discourage its better value-creating segments and encourage ones that dissipate shareholder value.

    The simplest is the pro-rata approach which distributes the diversification benefits on a pro-rata basis down the various segment hierarchies (organizational unit, product, customer segment etc.).

    A better approach, which can be built into the Monte Carlo simulation, is the contributory method. It takes into account the extent to which a segment of the organization's business is correlated with, or contrary to, the major risks that make up the company's overall risk. This rewards counter-cyclical businesses and others that diversify the company's risk profile.
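    A sketch of the contributory idea on hypothetical segment P&L distributions: each segment's share of total risk is taken as its covariance with the total, so a counter-cyclical segment receives a smaller share of the risk capital:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    # Simulated P&L outcomes for three hypothetical business segments;
    # segment 3 is counter-cyclical (negatively correlated with segment 1).
    corr = np.array([[ 1.0, 0.6, -0.3],
                     [ 0.6, 1.0,  0.1],
                     [-0.3, 0.1,  1.0]])
    vols = np.array([4.0, 3.0, 2.0])
    cov = corr * np.outer(vols, vols)
    pnl = rng.multivariate_normal([10.0, 8.0, 5.0], cov, size=100_000)
    total = pnl.sum(axis=1)

    # Contributory allocation: segment i's share of total variance is its
    # covariance with the total; the shares sum to one by construction.
    shares = np.array([np.cov(pnl[:, i], total, ddof=1)[0, 1]
                       for i in range(3)]) / total.var(ddof=1)
    print(shares, shares.sum())
    ```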

    Aggregation with market & credit risk

    For many parts of an organization there may be no market or credit risk – for areas such as sales and manufacturing, operational and business risk cover all of their risks.

    But at the company level the operational and business risk needs to be integrated with market and credit risk to establish the overall measure of risk being run by the company. And it is this combined risk capital measure that needs to be apportioned out to the various businesses or segments to form the basis for risk adjusted performance measures.

    It is not enough just to add the operational, credit and market risks together. This would overcount the risk, since the risk domains are by no means perfectly correlated – which a simple addition would imply. A sharp hit in one risk domain does not imply equally sharp hits in the others.

    Yet they are not independent either. A sharp economic downturn will affect credit and many operational risks and probably a number of market risks as well.

    The combination of these domains can be handled in a similar way to correlations within operational risk, provided aggregate risk distributions and correlation factors can be estimated for both credit and market risk.

    Correlation risk

    Markets that are part of the same sector or group are usually highly correlated, or move together. Correlation risk is the risk associated with having positions in too many similar markets. By using Monte Carlo simulation as described above, this risk can be calculated and added to the company's risk distribution, which will take part in forming the company's yearly profit or equity value distribution. And this is the information that management and the board will need.

    Decision making

    The distribution for equity value (see above) can then be used for decision purposes. By making changes to the assumptions about the variables' distributions (low, medium and high values), production capacities, etc., the new equity distribution can be compared with the old to find the effects created by the changes in assumptions:

    A versatile tool

    This is not only a tool for C-level decision-making but also for controllers, treasury, budgeting etc.:

    The results from these analyses can be presented in the form of B/S and P&L, looking at the coming one to five years (short-term) or five to fifteen years (long-term), showing the impact on e.g. equity value, company value and operating income, with the purpose of:

    • Improve predictability in operating earnings and their expected volatility
    • Improve budgeting processes, predicting budget deviations
    • Evaluate alternative strategic investment options
    • Identify and benchmark investment portfolios and their uncertainty
    • Identify and benchmark individual business units’ risk profiles
    • Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

    If you always have a picture of what really can happen, you are forewarned – and thus forearmed against adverse events and better prepared to take advantage of favorable events.

    From Indexed: Go on – look behind the curtain ((http://thisisindexed.com/2012/02/go-on-look-behind-the-curtain/))


  • Be prepared for a bumpy ride


    Imagine you're nicely settled down in your airline seat on a transatlantic flight – comfortable, with a great feeling. Then the captain comes on, welcomes everybody on board and continues: “It's the first time I fly this type of machine, so wish me luck!” Still feeling great? ((Inspired by an article from BTS: http://www.bts.com/news-insights/strategy-execution-blog/Why_are_Business_Simulations_so_Effective.aspx))

    Running a company in today's interconnected and volatile world has become extremely complicated – surely far more so than flying an airliner. You probably don't have all the indicators, dashboard systems and controls found on a flight deck. And business conditions are likely to change far more than flight conditions ever will. Today we live with information overload, data streaming at us almost everywhere we turn. How can we cope? How do we make smart decisions?

    Pilots train over and over again. They spend hour after hour in flight simulators before being allowed to sit as co-pilots on a real passenger flight. Fortunately for us passengers, flight hours normally pass by, day after day, without much excitement. Then it's time to hit the simulator again and train for engine fires, damaged landing gear, landing on water, passenger evacuation etc. – becoming both mentally and practically prepared to manage the worst.

    Why aren't we running business simulations to the same extent? Accounting, financial models and budgeting are more art than science, often founded on theories from the last century. (Not to mention Pacioli's Italian accounting from 1494.) While the theory of behavioural economics progresses, we must use the best tools we can get to better understand financial risks and opportunities and how to improve and refine value creation – the true job we're set to do.

    How is it done? Like Einstein – seeking simplicity, as far as it goes; finding out which pieces of information are most crucial to the success and survival of the business. For major corporations these can be drawn down from the hundreds to some twenty key variables. (These variables are not set in stone once and for all, but need to be redefined in accordance with the business situation we foresee in the near future.)

    At Allevo our focal point is on Risk Governance at large, helping organisations implement Enterprise Risk Management (ERM) frameworks and processes, and specifically assisting boards and executive management in exercising their risk oversight duties. Fundamental to good risk management practice is to understand and articulate the organisation's (i.e. the board's) appetite for risk. Without understanding the appetite and tolerance levels for various risks it is hard to measure, aggregate and prioritize them. How much are we willing to spend on new ventures and opportunities? How much can we afford to lose? How do we calculate the trade-offs?

    There are two essential elements of Risk Appetite: risk capacity and risk capability.

    By risk capacity we mean the financial ability to take on new opportunities with their inherent risks (i.e. the availability of cash and funding across the strategy period). By risk capability we mean the non-financial resources of the organisation: do we have the knowledge and resources to take on new ventures? Cash and funding are fundamental and come first.

    Do executive management and the board really understand the strengths and vulnerabilities hiding in the balance sheet or in the P&L account? Many may have a gut feeling, mostly the CFO and the treasury department. But shouldn't the executive team and the board (including the audit committee, and the risk committee if there is one) also really know?

    At Allevo we have aligned with Strategy@Risk Ltd to do business simulations. They have experience from all kinds of industries, especially process industries, where they have even helped optimize manufacturing processes. They have simulated airports and flight patterns for a whole country. For companies with high levels of raw material and commodity risk they simulate optimal hedging strategies. But their main contribution, in our opinion, is their ability to simulate your organisation's balance sheet and P&L accounts. They have created a simulation tool that can be applied to a whole corporation. It only needs to be adjusted to your specific operations and business environments, which is done through interviews and a few workshops with your own people – those who have the best knowledge of your business (operations, finances, markets, strategy etc.).

    When the key variables have been identified, it's time to run the first Monte Carlo simulations to find out if the model fits with recent actual experience and otherwise feels reliable.

    No model can ever predict the future. What we want to do is to find the key strengths and weaknesses in your operations and in your balance sheet. By running sensitivity analyses we can first of all understand which the key variables are. We want to focus on what's important, and leave alone those variables that have little effect on outcomes.

    Now, it's time for the most important part: considering how the selected variables can vary and interact over time. The future contains an inconceivable number of different outcomes ((There are probably more different futures than ways of dealing 52 playing cards. Don't you think? Well, there are “only” 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 ways to shuffle a deck of 52 cards (8.1 × 10^67). What does that say about budgeting with discrete numbers?)). The question is how we can achieve the outcomes that we desire and avoid the ones that we dread the most.

    Running 10,000 simulations (i.e. closing each and every annual account over 10,000 years), we can stop the simulation when a desired level of outcome is reached and investigate the position of the key variables. Likewise, when nasty results appear, we stop again and record the underlying position of each variable.

    The simulations generate an 80-page standard report (which, once again, can feel like information overload). But once you've got a feeling for the sensitivity of the business, you could instead do specific “what if?” analyses of scenarios of special interest to yourself, the executive team or the board.

    Finally, the model estimates the probability distribution of the organisation's Enterprise Value going forward. The key for any business is to grow Enterprise Value.

    Simulations show how the likelihood of increasing or losing value varies with different strategies. This part of the simulation tool could be extremely important in strategy selection.

    If you wish to go into more depth on how simulations can support you and your organisation, please visit

    www.allevo.se or www.strategy-at-risk.com

    There you'll find a wealth of material to choose from; or call us directly and we'll schedule a quick on-site presentation.

    Have a good flight, and …

    Happy landing!

  • M&A: When two plus two is five or three or …


    When two plus two is five (Orwell, 1949)

    Introduction

    Mergers & acquisitions (M&A) are a way for companies to expand rapidly – much faster than organic growth, that is, growth from existing businesses, would allow. M&A has for decades been a trillion-dollar business, but empirical studies report that a significant proportion of deals must be considered failures.

    The conventional wisdom is that the majority of deals fail to add shareholder value to the acquiring company. According to this research, only 30–50% of deals are considered successful (see Bruner, 2002).

    If most deals fail, why do companies keep doing them? Is it because they think the odds won't apply to them, or are executives more concerned with extending their influence and growing the company (empire building) than with increasing shareholder value?

    Many writers argue that these are the main drivers of M&A activity, with the implication that executives are basically greedy (because their compensation is often tied to the size of the company) – or incompetent.

    To be able to create shareholder value, the M&A must give rise to some form of synergy. Synergy is the ability of the merged companies to generate higher shareholder value (wealth) than the standalone entities; that is, the whole will be greater than the sum of its parts.

    For many of the observed M&A's, however, the opposite has been true – value has been destroyed; the whole has turned out to be less than the sum of its parts (dysergy).

    “When asked to name just one big merger that had lived up to expectations, Leon Cooperman, former co-chairman of Goldman Sachs’ Investment Policy Committee, answered: I’m sure there are success stories out there, but at this moment I draw a blank.” (Sirower, 1997)

    The “apparent” M&A failures have also been attributed to methodological and measurement problems, the claim being that evidence – such as cost savings or revenue enhancements brought by the M&A – is difficult to obtain after the fact. This might also apply to some of the success stories.

    What is surprising in most (all?) of the studies of M&A successes and failures is the lack of understanding of the stochastic nature of business activities. For any company it is impossible to estimate its equity value with certainty; the best we can do is to estimate a range of values and the probability that the true value will fall inside this range. The merger of two companies amplifies this, and the discussion of possible synergies or dysergies can only be understood in the context of randomness (stochasticity) ((See the IFA.com Probability Machine, Galton Board, Randomness and Fair Price Simulator, Quincunx at http://www.youtube.com/watch?v=AUSKTk9ENzg)).


    The M&A cases

    Let's assume that we have two companies, A and B, that are proposed merged. We have the distribution of equity value (shareholder value) for both companies, and we can calculate the equity distribution for the merged company. Company A's value is estimated to be in the range of 0 to 150M, with expected value 90M. Company B's value is estimated to be in the range of −40 to 200M, with expected value 140M (see figure below).

    If we merge the two companies, assuming no synergy or dysergy, we get the value (shareholder) distribution shown by the green curve in the figure. The merged company will have a value in the range of 65 to 321M, with an expected value of 230M. Since there is no synergy/dysergy, no value has been created or destroyed by the merger.

    For company B's shareholders no value would be added by the merger if A was bought at a price equal to or higher than A's expected value. If A was bought at a price below its expected value, there is a probability – but no certainty – that the wealth of company B's shareholders will increase. Any increase in wealth for company B's shareholders will be at the expense of company A's shareholders, and vice versa.
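    A sketch of the no-synergy case, using beta-shaped stand-ins for the two equity distributions (the shapes are assumptions; only the ranges and expected values are taken from the text):

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    n = 100_000

    # Stand-in equity value distributions, scaled to the stated ranges/means:
    a = 150 * rng.beta(3, 2, n)          # company A: 0 to 150M, mean 90M
    b = -40 + 240 * rng.beta(3, 1, n)    # company B: -40 to 200M, mean 140M

    merged = a + b                       # no synergy/dysergy: values simply add
    print("expected merged value:", merged.mean())        # ~230M, as above
    print("1%/99% range:", np.percentile(merged, [1, 99]))
    ```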

    Case 1

    If we assume that there is a “connection” between the companies, such that an increase in one company's revenues will also increase the revenues of the other, we have a synergy that can be exploited.

    This situation is depicted in the figure below. The green curve gives the case with no synergy and the blue curve the case described above. The difference between them is the synergy created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are high – and even negative (dysergy) if revenues are low.

    If we produce a frequency diagram of the sizes of the possible synergies, it will look like the diagram below. Keep in mind that the average synergy value is not the value we would expect to find, but the average over all possible synergy values.

    Case 2

    If we assume that the “connection” between the companies is such that a reduction in one of the company's revenue streams will reduce total production costs, we again have a synergy that can be exploited.

    This situation is depicted in the figure below. The green curve gives the case with no synergy and the red curve the case described above. The difference between them is again the synergy created by the merger. The synergy at the dotted line is the synergy we can expect, but it might turn out to be higher if revenues are low – and even negative (dysergy) if revenues are high.

    In this case, the merger acts as a hedge against revenue losses, at the cost of part of the upside created by the merger. This should not deter the participants from a merger, since there is only a 30% probability that this will happen.

    The graph above again gives the frequency diagram for the sizes of the possible synergies. Keep in mind that the average synergy value is not the value we would expect to find, but the average over all possible synergy values.

    Conclusion

    The elusiveness of synergies in many M&A cases can be explained by the natural randomness of business activities. The fact that a merger can give rise to large synergies does not guarantee that they will occur – only that there is a probability that they will. Spreadsheet exercises in valuation can lead to disaster if the stochastic nature of the companies involved is not taken into account. AND basing the price of the M&A candidate on expected synergies is pure foolishness.

    References

    Bruner, Robert F. (2002), Does M&A Pay? A Survey of Evidence for the Decision-Maker. Journal of Applied Finance, Vol. 12, No. 1. Available at SSRN: http://ssrn.com/abstract=485884

    Orwell, George (1949). Nineteen Eighty-Four. A novel. London: Secker & Warburg.

    Sirower, M. (1997). The Synergy Trap: How Companies Lose the Acquisition Game. New York: The Free Press.

    “The whole is more than the sum of its parts.” – Aristotle, Metaphysica.

  • Corn and ethanol futures hedge ratios


    This entry is part 2 of 2 in the series The Bio-ethanol crush margin

     

    A large literature discusses hedging techniques, with a number of different hedging models and statistical refinements of the OLS model that we will use in the following. For a comprehensive review, see “Futures hedge ratios: a review” (Chen et al., 2003).

    We are here looking for hedge models and hedge-ratio estimation techniques that are “good enough” and that can fit into valuation models using Monte Carlo simulation.

    The ultimate purpose is to study hedging strategies using P&L and balance sheet simulation to forecast the probability distribution of the company's equity value. By comparing the distributions for the different strategies, we will be able to select the hedging strategy that best fits the board's risk appetite/risk aversion and that at the same time “maximizes” company value.

    Everything should be made as simple as possible, but not simpler. – Einstein, Reader’s Digest. Oct. 1977.

    To use futures contracts for hedging we have to understand the objective: a futures contract serves as a price-fixing mechanism. In their simplest form, futures prices are prices set today to be paid in the future for goods. If properly designed and implemented, hedge profits will offset the loss from an adverse price move. In like fashion, hedge losses will also eliminate the effects of a favorable price change. Ultimately, the success of any hedge program rests on the implementation of a correctly sized futures position.

    The minimum variation hedge

    This is often referred to as the volatility-minimizing hedge for one unit of exposure. It can be found by minimizing the variance of the hedge payoff at maturity.

    For an ideal hedge, we would like the change in the futures price (ΔF) to match as exactly as possible the change in the value of the asset (ΔS) we wish to hedge, i.e.:

    ΔS = ΔF

    The expected payoff from the hedge will be equal to the value of the cash position at maturity plus the payoff of the hedge (Johnson, 1960), or:

    E(H) = X_S * [E(S_2) − S_1] + X_F * [E(F_2) − F_1]

    with spot position X_S, a short futures position X_F, current spot price S_1, expected spot price at maturity E(S_2), current futures price F_1 and expected futures price E(F_2) – excluding transaction costs.

    What we want is to find the value of the futures position that reduces the variability of price changes to the lowest possible level.

    The minimum-variance hedge ratio is then defined as the number of futures per unit of the spot asset that will minimize the variance of the hedged portfolio returns.

    The variance of the hedged portfolio return is ((The variance of the un-hedged position is: Var(U) = X_S^2 * Var(ΔS).)):

    Var(H) = X_S^2 * Var(ΔS) + X_F^2 * Var(ΔF) + 2 * X_S * X_F * Covar(ΔS, ΔF)

    where Var(ΔS) is the variance of the change in the spot price, Var(ΔF) the variance of the change in the futures price, and Covar(ΔS, ΔF) the covariance between the spot and futures price changes. Letting h = X_F/X_S represent the proportion of the spot position hedged, the minimum of Var(H) can then be found ((By minimizing Var(H) as a function of h.)) as:

    h* = Covar(ΔS, ΔF)/Var(ΔF), or equivalently: h* = Corr(ΔS, ΔF) * SD(ΔS)/SD(ΔF)

    where Corr(ΔS, ΔF) is the correlation between the spot and futures price changes and SD denotes the standard deviation, assuming that X_S is exogenously determined (fixed).

    Estimating the hedge coefficient

    It is also possible to estimate the optimal hedge ratio (h*) using regression analysis. The basic equation is:

    ΔS = a + h * ΔF + ε

    with ε as the change in the spot price not explained by the regression model. Since the basic OLS regression for this equation estimates the value of h* as:

    h* = Covar(ΔS, ΔF)/Var(ΔF)

    we can use this regression to find the hedge ratio that minimizes the variance of the hedged position. This is one of the reasons that this objective function is so appealing. ((Note that other, very different, objective functions could have been chosen.))

    We can then use the coefficient of determination, R², as an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position – the hedge effectiveness (Ederington, 1979) ((Not taking into account variation margins etc.)).
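    A minimal sketch of this estimation, on hypothetical daily price-change series standing in for the spliced spot and futures data used later in the post:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    # Hypothetical daily price changes ($/kg): the spot tracks the futures.
    dF = rng.normal(0.0, 0.004, 250)                # futures price changes
    dS = 0.95 * dF + rng.normal(0.0, 0.001, 250)    # spot price changes

    model = sm.OLS(dS, sm.add_constant(dF)).fit()
    h_star = model.params[1]        # h* = Covar(dS, dF) / Var(dF)
    effectiveness = model.rsquared  # Ederington's hedge effectiveness

    print(f"h* = {h_star:.3f}, hedge effectiveness R^2 = {effectiveness:.2f}")
    ```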

    The basis

    The basis is defined as the difference between the spot price (S) and the futures price (F). When the expected change in the futures price equals the expected change in the spot price, the optimal variance-minimizing strategy is to set h* = 1. However, in most futures markets the futures price does not perfectly parallel the spot price, causing an element of basis risk to directly affect the hedging decision.

    A negative basis is called contango and a positive basis backwardation:

    1. When the spot price increases by more than the futures price, the basis increases; this is said to “strengthen the basis” (when unexpected, this is favorable for a short hedge and unfavorable for a long hedge).
    2. When the futures price increases by more than the spot price, the basis declines; this is said to “weaken the basis” (when unexpected, this is favorable for a long hedge and unfavorable for a short hedge).

    There will usually be a different basis for each contract.

    The number of futures contracts

    The variance-minimizing number of futures contracts, N*, will be:

    N* = h* * X_S/Q_F

    where Q_F is the size of one futures contract. Since futures contracts are marked to market every day, daily losses are debited and daily gains credited to the parties' accounts – settlement variation – i.e. the contracts are settled every day. The account will have to be replenished if it falls below the maintenance margin (a margin call). If the account is above the initial margin, withdrawals can be made from it.

    Ignoring the incremental income effects from investing variation margin gains (or borrowing to cover variation margin losses), we want the hedge to generate h* * ΔF. Recognizing that there is an incremental effect, we want to accrue interest on a “tailed” hedge such that (Kawaller, 1997):

    h* * ΔF = ĥ * ΔF * (1 + r)^n, or
    ĥ = h*/(1 + r)^n, or ĥ = h*/(1 + r * n/365) if the time to maturity is less than one year,

    Where:
    r = interest rate and
    n = number of days remaining to maturity of the futures contract.

    This amounts to adjusting the hedge by a present value factor. Tailing converts the futures position into a forward position. It negates the effect of daily resettlement, in which profits and losses are realized before the day the hedge is lifted.

    For constant interest rates, the tailed hedge (for h* < 1) rises over time to reach the exposure at the maturity of the hedge. Un-tailed, the hedge will over-hedge the exposure and increase the hedger's risk. Tailing the hedge is especially important when the interest rate is high and the time to maturity long.

    An appropriate interest rate would be one that reflects the firm's cost of capital (WACC) and the rate it would earn on its investments (ROIC), both of which will be stochastic variables in the simulation. The first is relevant when the futures contracts generate losses, the second when they generate gains. In practice some average of these rates is used. ((See FAS 133 and later amendments.))
    There are traditionally two approaches to tailing:

    1. Re-balance the tail each day. The tailed hedge ratio is adjusted each day to maturity of the futures contract; the adjustment declines each day, until at expiration there is no adjustment.
    2. Use a constant (average) tail: ĥ = h*/(1 + 0.5 * r * N/365), where N is the original number of days remaining to maturity. In this shortcut the adjustment is made when the hedge is put on and is not changed. The hedge starts out too big and ends up too small, but will on average be correct.

    For active traders the first approach is more convenient; for inactive traders the second is often used. Both are sketched below.
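    A sketch of both tailing approaches and the resulting contract count; h*, the interest rate, the days to maturity, the exposure and the contract size are all hypothetical numbers:

    ```python
    def tailed_hedge(h_star, r, n_days):
        """Approach 1: re-balanced tail, recomputed each day to maturity."""
        return h_star / (1 + r * n_days / 365)

    def constant_tail(h_star, r, n_days):
        """Approach 2: constant (average) tail, fixed when the hedge is put on."""
        return h_star / (1 + 0.5 * r * n_days / 365)

    def n_contracts(h, x_s, q_f):
        """Variance-minimizing number of contracts: N* = h * X_S / Q_F."""
        return h * x_s / q_f

    # Hypothetical numbers: h* = 0.95, 6 % annual rate, 90 days to maturity,
    # 500 000 kg exposure, 127 000 kg per contract.
    h = tailed_hedge(0.95, 0.06, 90)
    print(f"tailed ratio {h:.4f} -> {n_contracts(h, 500_000, 127_000):.2f} contracts")
    ```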

    Since our models always incorporate stochastic interest rates, hedges discounted with the appropriate rates are calculated. This amounts to solving the set of stochastic simultaneous equations created by the hedge and the WACC/ROIC calculations, since the hedges will change the probability distributions. Note that the tailed hedge ratio will be a stochastic variable, and that minimizing the variance of the hedge will not necessarily maximize company value. The value of ĥ that maximizes company value can only be found by simulation, given the board's risk appetite/risk aversion.

    The Spot and Futures Price movements

    At any time a number of futures contracts for the same commodity are being priced simultaneously; the only difference between them is the delivery month. A continuous contract takes the individual contracts in the futures market and splices them together. The resulting continuous series ((The simplest method of splicing is to tack successive delivery months onto each other. Although the prices in the history are real, the chart will also preserve the price gaps that are present between expiring deliveries and those that replace them.)) allows us to study the price history in the market from a single chart. The following graphs show the price movements ((To avoid price-gap problems, many prefer to base the analysis on adjusted contracts that eliminate roll-over gaps. There are two basic ways to adjust a series. Forward-adjusting begins with the true price for the first delivery and then adjusts each successive set of prices up or down depending on whether the roll-over gap is positive or negative. Back-adjusting reverses the process: current prices are always real, but historical prices are adjusted up or down. This is often the preferred method, since the series will always show the latest actual price. There is, however, no perfect method producing a continuous price series satisfying all needs.)) for the spliced corn contracts C-2010U to 2011N and the spliced ethanol contracts EH-2010U to 2011Q.

    In the graphs the spot price is given by the blue line and the corresponding futures price by the red line.

    For the corn futures we can see that there is a difference between the spot and the futures price – the basis ((The reasons for the price difference are transportation costs between delivery locations, storage costs and availability, and variations between local and worldwide supply and demand of a given commodity. In any event, this difference plays an important part in what is actually being paid for the commodity when you hedge.)) – but that the price movements of the futures follow the spot price closely, or vice versa.

The spliced contracts for bioethanol are a little different from the corn contracts. The delivery location is the same and the curves lie very close to each other. There are, however, other differences.
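As a side note on the spliced series used above, here is a minimal sketch of the back-adjusting described in the footnote, assuming each contract’s prices sit in a pandas Series indexed by date, the list is ordered oldest contract first, and successive contracts both trade on the roll dates (all names and values are hypothetical):

```python
import pandas as pd

def back_adjust(contracts):
    """Splice a list of price Series (oldest contract first) into one
    continuous series. The latest prices stay real; each earlier
    contract is shifted by the cumulative roll-over gap."""
    adjusted = contracts[-1].copy()          # current contract: real prices
    offset = 0.0
    for prev, nxt in zip(reversed(contracts[:-1]), reversed(contracts)):
        roll_date = prev.index[-1]           # assume we roll on prev's last day
        offset += nxt.loc[roll_date] - prev.loc[roll_date]
        shifted = prev + offset              # close the gap at the roll
        adjusted = pd.concat([shifted.iloc[:-1], adjusted])
    return adjusted

# Example with two overlapping toy contracts
c1 = pd.Series([4.00, 4.05, 4.10],
               index=pd.to_datetime(["2010-08-02", "2010-08-03", "2010-08-04"]))
c2 = pd.Series([4.20, 4.25, 4.30],
               index=pd.to_datetime(["2010-08-04", "2010-08-05", "2010-08-06"]))
print(back_adjust([c1, c2]))   # continuous series without the 0.10 roll gap
```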

    The regression – the futures assay

The selected futures contracts give us five parallel samples for the relation between the corn spot and futures price, and twelve for the relation between the ethanol spot and ethanol futures price. For every day in the period 8/2/2010 to 7/14/2011 we have from one to five observations of the corn relation (five replications), and from 8/5/2010 to 8/3/2011 we have from one to twelve observations of the ethanol relation. Since we follow a set of contracts, the number of daily observations of the corn futures prices starts at five (twelve for the ethanol futures) and ends at only one as the contracts mature. We could of course also have selected a sample giving an equal number of observations every day.

    There are three likely models which could be fit:

    1. Simple regression on the individual data points,
2. Simple regression on the daily means, and
    3. Weighted regression on the daily means using the number of observations as the weight.

When the number of daily observations is equal, all three models will give the same parameter estimates. The weighted and individual regressions will always give the same parameter estimates, but when the sample sizes are unequal these will differ from those of the unweighted means regression. Whether the weighted or unweighted model should be used when the number of daily observations is unequal will depend on the situation.
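A sketch of the three models using statsmodels, run on synthetic stand-in data since the real series are not reproduced here; the DataFrame layout – one row per contract per day, with columns date, dS (spot price change) and dF (futures price change) – is our assumption:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one spot change per day, replicated by
# one to five contracts whose futures changes carry extra noise
rng = np.random.default_rng(1)
days = pd.date_range("2010-08-02", periods=200, freq="B")
rows = [(d, dS, dS + rng.normal(0, 0.001))
        for d in days
        for dS in [rng.normal(0, 0.005)]
        for _ in range(rng.integers(1, 6))]
df = pd.DataFrame(rows, columns=["date", "dS", "dF"])

# Model 1: simple regression on the individual data points
m1 = smf.ols("dS ~ dF", data=df).fit()

# Daily means and the number of observations per day
daily = (df.groupby("date")
           .agg(dS=("dS", "mean"), dF=("dF", "mean"), n=("dF", "size"))
           .reset_index())

# Model 2: simple (unweighted) regression on the daily means
m2 = smf.ols("dS ~ dF", data=daily).fit()

# Model 3: weighted regression on the daily means,
# using the number of observations as the weight
m3 = smf.wls("dS ~ dF", data=daily, weights=daily["n"]).fit()

print(m1.params, m2.params, m3.params, sep="\n")
```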

Since we now have replications of the relation between the spot and the futures price, we have the opportunity to test for lack of fit of the straight-line model.

In our case this approach has a small drawback. We are looking for the regression of the spot price changes on the price changes in the futures contract; this model, however, gives us the inverse: the regression of the price changes in the futures contract on the changes in the spot price. The inverse of the slope of this regression, which is what we are looking for, will in general not give the correct answer (Thonnard, 2006). We will therefore use this approach (model#3) to test for linearity, and model#1, with all data, to estimate the slope.
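With replications, the residual sum of squares can be split into pure error (variation between contracts within a day) and lack of fit. A minimal sketch of that F-test, reusing the synthetic df from the sketch above and the model#3 direction, where the spot price change is common to all contracts on a given day:

```python
import statsmodels.formula.api as smf
from scipy import stats

# Model#3 direction: futures changes regressed on spot changes
fit = smf.ols("dF ~ dS", data=df).fit()

# Pure error: variation of dF between replicate contracts within a day
within = df["dF"] - df.groupby("date")["dF"].transform("mean")
sse_pure = (within ** 2).sum()
df_pure = len(df) - df["date"].nunique()

# Lack of fit: the remainder of the residual sum of squares
sse_lof = fit.ssr - sse_pure
df_lof = df["date"].nunique() - 2        # days minus the two parameters

F = (sse_lof / df_lof) / (sse_pure / df_pure)
print(F, stats.f.sf(F, df_lof, df_pure))  # F statistic and its p-value
```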

    Ideally we would like to find stable (efficient) hedge ratios in the sense that they can be used for more than one hedge and over a longer period of time, thus greatly simplifying the workload for ethanol producing companies.

All prices in the following, both spot and futures, have been converted from $/gallon (ethanol) or $/bushel (corn) to $/kg.
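The conversion itself is plain arithmetic; a small sketch, using the standard 56 lb corn bushel and an assumed ethanol density of roughly 0.789 kg/l (the density figure is our assumption, not taken from the exchange specifications):

```python
KG_PER_LB = 0.45359237
KG_PER_BUSHEL_CORN = 56 * KG_PER_LB          # a corn bushel is 56 lb ~ 25.40 kg
LITRES_PER_GALLON = 3.785411784
ETHANOL_DENSITY = 0.789                      # kg/l, approximate at 20 C
KG_PER_GALLON_ETHANOL = LITRES_PER_GALLON * ETHANOL_DENSITY   # ~2.99 kg

def corn_usd_per_kg(usd_per_bushel):
    return usd_per_bushel / KG_PER_BUSHEL_CORN

def ethanol_usd_per_kg(usd_per_gallon):
    return usd_per_gallon / KG_PER_GALLON_ETHANOL
```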

    The Corn hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the corn futures prices on the changes in corn spot prices (model#3):

The analysis of variance cautions us that the lack of fit to a linear model is significant for all contracts. However, the sum of squares due to lack of fit is very small compared to the sum of squares due to linearity, so we will regard the changes in the futures prices as generated by a linear function of the changes in the spot prices, and the hedge ratios found as efficient. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    Nevertheless, this linear model will have to be monitored closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:

$\Delta S = 0.0001 + 1.0073\,\Delta F + \varepsilon$

Giving the un-tailed corn hedge ratio $h^{*} = 1.0073$.

    First, since the adjusted  R-square value (0.9838) is an estimate of the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, a hedge based on this regression coefficient (slope) should be highly effective.

The ratio of the variance of the hedged position to that of the un-hedged position is equal to $1-R^2$; for the standard deviations the ratio is $\sqrt{1-R^2}$. The standard deviation of a hedged position based on this hedge ratio will thus be 12.7 % of that of the unhedged position.

We have thus eliminated 87.3 % of the standard deviation – or 98.4 % of the variance – of the unhedged position. For a simple model like this, that can be considered a good result.
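The arithmetic as a quick check (the R-square value is the one from the table above):

```python
r2 = 0.9838                    # adjusted R-square from the corn regression
var_ratio = 1 - r2             # hedged/unhedged variance ratio  -> 0.0162
sd_ratio = var_ratio ** 0.5    # hedged/unhedged std. dev. ratio -> 0.127
print(var_ratio, sd_ratio)     # 98.4% of the variance and 87.3% of the
                               # standard deviation eliminated
```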

In the figure the thick black line gives the 95% confidence limits and the yellow area the 95% prediction limits. As we can see, the relationship between the daily price changes is quite tight, promising the possibility of effective hedges.

Second, due to the differencing, the basis caused by the difference in delivery location has disappeared, and even though the constant term is statistically significant, it is so small that it can, with little loss, be set to zero.

The R-square values would have been higher for the regressions on the means than for the regression above, because the total variability in the data is reduced by using means (note that the total degrees of freedom is also reduced for the regressions on means). A regression on the means will thus always suggest greater predictive ability than a regression on individual data, because it predicts mean values, not individual values.

    The Ethanol hedge ratio

    The analysis of variance table (ANOVA) for the weighted regression of the changes in the ethanol futures prices on the changes in ethanol spot prices (model#3):

The analysis of variance again cautions us that the lack of fit to a linear model is significant for all contracts. In this case it is approximately ten times larger than for the corn contracts.

However, the sum of squares due to lack of fit is small compared to the sum of squares due to linearity, so we will regard the changes in the futures prices as generated by a close to linear function of the changes in the spot prices, and the hedge ratios found as “good enough”. In the figure below the circles give the daily means of the contracts and the line the weighted regression on these means:

    In this graph we can clearly see the deviation from a strictly linear model. The assumption of a linear model for the changes in ethanol spot and futures prices will have to be monitored very closely as further data becomes available.

    The result from the parameter estimation using simple regression (model#1) is given in the table below:

    The relation is:
$\Delta S = 1.0135\,\Delta F + \varepsilon$

Giving the un-tailed ethanol hedge ratio $h^{*} = 1.0135$.

The adjusted R-square value (0.8105), estimating the percentage reduction in the variability of changes in the value of the cash position from holding the hedged position, is high even with the “lack of linearity”. A hedge based on this regression coefficient (slope) should therefore still be highly effective.

The standard deviation of a hedged position based on this hedge ratio will be 43.7 % of that of the unhedged position (the variance about 19 %). This is not as good as for the corn contracts, but will still give a healthy reduction in the ethanol price risk facing the company.

As it turned out, we can use both of these estimation methods for the hedge ratio as a basis for strategy simulations, but one question remains unanswered: will this minimize the variance of the crush ratio?

    References

Chicago Board of Trade (2004). Understanding Basis. http://www.gofutures.com/pdfs/Understanding-Basis.pdf

CME Group: http://www.cmegroup.com/trading/agricultural/files/AC-406_DDG_CornCrush_042010.pdf

Bond, Gary E. (1984). “The Effects of Supply and Interest Rate Shocks in Commodity Futures Markets,” American Journal of Agricultural Economics, 66, pp. 294-301.

Chen, S., Lee, C.F. and Shrestha, K. (2003). “Futures hedge ratios: a review,” The Quarterly Review of Economics and Finance, 43, pp. 433-465.

Ederington, Louis H. (1979). “The Hedging Performance of the New Futures Markets,” Journal of Finance, 34, pp. 157-170.

Einstein, Albert (1923). Sidelights on Relativity (Geometry and Experience). New York: E. P. Dutton & Co.

Figlewski, S., Lanskroner, Y. and Silber, W. L. (1991). “Tailing the Hedge: Why and How,” Journal of Futures Markets, 11, pp. 201-212.

Johnson, Leland L. (1960). “The Theory of Hedging and Speculation in Commodity Futures,” Review of Economic Studies, 27, pp. 139-151.

Kawaller, I. G. (1997). “Tailing Futures Hedges/Tailing Spreads,” The Journal of Derivatives, Vol. 5, No. 2, pp. 62-70.

Li, A. and Lien, D. D. (2003). “Futures Hedging Under Mark-to-Market Risk,” Journal of Futures Markets, Vol. 23, No. 4.

Myers, Robert J. and Thompson, Stanley R. (1989). “Generalized Optimal Hedge Ratio Estimation,” American Journal of Agricultural Economics, Vol. 71, No. 4, pp. 858-868.

Stein, Jerome L. (1961). “The Simultaneous Determination of Spot and Futures Prices,” American Economic Review, 51, pp. 1012-1025.

Thonnard, M. (2006). Confidence Intervals in Inverse Regression. Diss. Technische Universiteit Eindhoven, Department of Mathematics and Computer Science. http://alexandria.tue.nl/extra1/afstversl/wsk-i/thonnard2006.pdf
