Corporate Strategy – Strategy @ Risk

Category: Corporate Strategy

  • Risk Appetite and the Virtues of the Board


    This entry is part 1 of 1 in the series Risk Appetite and the Virtues of the Board


    This article consists of two parts: Risk Appetite, and The Virtues of the Board (upcoming). The first part can be read as a standalone article; the second will build on concepts developed here.

    Risk Appetite

    Multiple sources of risk are a fact of life. Only rarely will decisions concerning various risks be neatly separable. Intuitively, even when risks are statistically independent, bearing one risk should make an agent less willing to bear another. (Kimball, 1993)

    Risk appetite – the board’s willingness to bear risk – will depend both on the degree to which it dislikes uncertainty and on the level of that uncertainty. It is also likely to shift as the board responds to emerging market and macroeconomic uncertainty and to events of financial distress.

    The following graph of the “price of risk[1]” index developed at the Bank of England shows this (Gai & Vause, 2004)[2]. The estimated series fluctuates close to the average “price of risk” most of the time, but has sharp downward spikes in times of financial crisis. Risk appetite is apparently highly affected by exogenous shocks:

    [Figure: Estimated risk appetite, Bank of England]

    In adverse circumstances, it follows that the board and the investors will require a higher expected equity value of the firm to hold shares – an enhanced risk premium – and that their appetite for increased risk will be low.

    Risk Management and Risk Appetite

    Despite widespread use in risk management[3] and corporate governance literature, the term ‘risk appetite’[i] lacks clarity in how it is defined and understood:

    • The degree of uncertainty that an investor is willing to accept in respect of negative changes to its business or assets. (Generic)
    • The degree of risk, on a broad-based level, that a company or other entity is willing to accept in the pursuit of its goals. (COSO)
    • The amount of risk that an organisation is prepared to accept, tolerate, or be exposed to at any point in time. (The Orange Book, October 2004)

    The same applies to a number of other terms describing risk and the board’s attitudes to risk, as for the term “risk tolerance”:

    • The degree of uncertainty that an investor can handle in regard to a negative change in the value of his or her portfolio.
    • An investor’s ability to handle declines in the value of his/her portfolio.
    • Capacity to accept or absorb risk.
    • The willingness of an investor to tolerate risk in making investments, etc.

    It thus comes as no surprise that risk appetite and other terms describing risk are not understood with a level of clarity that can provide a reference point for decision making[4]. Some take the position that risk appetite can never be reduced to a single figure or ratio, or to a one-sentence statement. However, to move forward we have to operationalize the term in such a way that it can be:

    1. used to commensurate risk with reward, or to decide what level of risk is commensurate with a particular reward, and
    2. measured and used to set risk level(s) that, in the board’s view, are appropriate for the firm.

    It thus defines the boundaries of the activities the board intends for the firm, to the management and to the rest of the organization, by setting limits to risk taking and defining what acceptable risk means. This can be augmented by a formal ‘risk appetite statement’ defining the types and levels of risk the organization is prepared to accept in pursuit of increased value.

    However, in view of the “price of risk” series above, such formal statements cannot be carved in stone; they must either contain rules for how they are to be applied in adverse circumstances or be subject to change as the business and macroeconomic climate changes.

    Deloitte’s Global Risk Management Survey, 6th ed. (Deloitte, 2009) found that sixty-three percent of the institutions had a formal, approved statement of their risk appetite (see Exhibit 4 below). Roughly one quarter of the institutions said they relied on quantitatively defined statements, while about one third used both quantitative and qualitative approaches:

    [Figure: Risk appetite statements, Deloitte survey Exhibit 4]

    Using a formal ‘risk appetite statement’ is the best way for the board to communicate its visions and the level and nature of the risks the board will consider acceptable to the firm. The statement has to be quantitatively defined, be based on some opinion of the board’s utility function, and use metrics that can fully capture all risks facing the company.

    We will in the following use the firm’s equity value as the metric, as it captures all risks – those impacting the balance sheet, income statement, required capital, WACC etc.

    We will assume that the board’s utility function[5] has diminishing marginal utility for an increase in the company’s equity value. From this it follows that the board’s utility will decrease more with a loss of $1 than it will increase with a gain of $1. The board is thus risk averse[ii].
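    This property is easy to verify numerically. The sketch below assumes a hypothetical logarithmic utility of equity value – purely illustrative, since the board’s actual utility function is unknown:

```python
import math

def utility(v):
    # Hypothetical concave (log) utility of equity value; purely
    # illustrative - the board's actual utility function is unknown.
    return math.log(v)

v0 = 100.0  # current equity value, illustrative units
gain = utility(v0 + 1) - utility(v0)  # utility gained from +1
loss = utility(v0) - utility(v0 - 1)  # utility lost from -1

# Concavity implies the utility lost from -1 exceeds the utility
# gained from +1, i.e. the decision maker is risk averse
print(loss > gain)  # True
```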

    The upside-potential ratio

    To do this we will use the upside-potential ratio[6] (UPR), a measure of risk-adjusted returns (Sortino et al., 1999). The UPR measures the potential return on an asset relative to a preset benchmark return, per unit of downside risk. It is a special case of the more general one-sided variability ratio Phi:

    Phi_{p,q}(X) := E^{1/p}[((X − b)^+)^p] / E^{1/q}[((X − b)^−)^q],

    where X is total return, (X − b) is the excess return over the benchmark b[7], and the plus and minus signs denote the right-sided moment (upper partial moment) of order p and the left-sided moment (lower partial moment) of order q.
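    With simulated outcomes the Phi ratio can be estimated directly from sample partial moments. The sketch below is only an illustration, using normally distributed returns as a stand-in for real data; it also confirms that, for a given asset, Phi decreases in the benchmark b:

```python
import numpy as np

def phi_ratio(x, b, p=1.0, q=2.0):
    # One-sided variability ratio Phi_{p,q}: the 1/p power of the p-th
    # upper partial moment over the 1/q power of the q-th lower partial
    # moment, both relative to the benchmark b.
    # p=1, q=2 gives the upside-potential ratio; p=q=1 the Omega index.
    x = np.asarray(x, dtype=float)
    upm = np.mean(np.maximum(x - b, 0.0) ** p) ** (1.0 / p)
    lpm = np.mean(np.maximum(b - x, 0.0) ** q) ** (1.0 / q)
    return upm / lpm

rng = np.random.default_rng(7)
x = rng.normal(100.0, 20.0, 100_000)  # stand-in return sample

r = [phi_ratio(x, b) for b in (90.0, 100.0, 110.0)]
print(r[0] > r[1] > r[2])  # Phi falls as the benchmark b rises
```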

    The lower partial moment[8] is a measure of the “distance[9]” between risky situations and the corresponding benchmark when only unfavorable differences contribute to the “risk”. The upper partial moment on the other hand measures the “distance” between favorable situations and the benchmark.

    The Phi ratio is thus the ratio of “distances” between favorable and unfavorable events – when properly weighted (Tibiletti & Farinelli, 2002).

    For a fixed benchmark b, the higher Phi the more ‘profitable’ is the risky asset. Phi can therefore be used to rank risky assets. For a given asset, Phi will be a decreasing function of the benchmark b.

    The choice of values for p and q depends on the relevance given to the magnitude of the deviations from the benchmark b. The higher the values, the more emphasis is put on that tail. For p=q=1 we have the Omega index (Shadwick & Keating, 2002).

    The choice of p=1 and q=2 is assumed to fit a conservative investor, while values of p>>1 and q<<1 will be more in line with an aggressive investor (Caporin & Lisi, 2009).

    We will in the following use p=1 and q=2 for calculation of the upside-potential ratio (UPR) thus assuming that the board consists of conservative investors. For very aggressive boards other choices of p and q should be considered.

    [Figure: LPM versus UPM]

    The UPR for the firm can thus be expressed as a ratio of partial moments, that is, as the ratio of the first order upper partial moment (UPM1)[10] and the second order lower partial moment (LPM2) (Nawrocki, 1999) and (Breitmeyer, Hakenes & Pfingsten, 2001), or the over-performance divided by the root-mean-square of under-performance, both calculated at successive points on the probability distribution for the firm’s equity value.

    As we successively calculate the UPR starting at the left tail, the lower partial moment (LPM2) will increase and the upper partial moment (UPM1) will decrease:

    [Figure: UPM and LPM as functions of the benchmark]

    The upside-potential ratio will consequently decrease as we move from the lower left tail to the upper right tail, as shown in the figure below:

    [Figure: Cumulative distribution and UPR]

    The upside-potential ratio has many interesting uses; one is shown in the table below. The table gives the upside-potential ratio at budgeted value, that is, the expected return above budget value per unit of downside risk, given the uncertainty the management of the individual subsidiaries have expressed. Most of the countries have budget values above expected value, exposing downward risk. Only Turkey and Denmark have a ratio larger than one; all others have larger downward risk than upward potential. The extremes are Poland and Bulgaria.

    Country/Subsidiary    Upside Potential Ratio
    Turkey                2.38
    Denmark               1.58
    Italy                 0.77
    Serbia                0.58
    Switzerland           0.23
    Norway                0.22
    UK                    0.17
    Bulgaria              0.08

    We will in the following use five different equity distributions, each representing a different strategy for the firm. The distributions (strategies) have approximately the same mean, but exhibit increasing variance as we move to successively darker curves. That is, an increase in the upside will also increase the possibility of a downside:

    [Figure: Five equity value distributions]

    By calculating the UPR for successive points (benchmarks) on the different probability distributions for the firm’s equity value (strategies), we can find the accompanying curves described by the UPRs in the UPR and LPM2/UPM1 space[11] (Cumova & Nawrocki, 2003):

    [Figure: UPR curves for the five strategies]

    The colors of the curves correspond to the equity value distributions shown above. We can see that the equity distribution with the longest upper and lower tails corresponds to the right curve for the UPR, and that the equity distribution with the shortest tails corresponds to the left (lowest upside-potential) curve.

    In the graph below, in the LPM2/UPM1 space, the curves for the UPRs are shown for each of the different equity value distributions (or strategies). Each curve gives the rate at which the firm will have to exchange downside risk for upside potential as we move along it, given the selected strategy. The circles on the curves represent points with the same value of the UPR as we move from one distribution to another:

    [Figure: Points of equal UPR in the LPM2/UPM1 space]

    By connecting the points with equal value of the UPR we find the iso-UPR curves: the curves that give the same value of the UPR across the strategies in the LPM2/UPM1 space:
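    Since the UPR is UPM1 divided by the square root of LPM2, each iso-UPR curve in the LPM2/UPM1 space is simply the locus UPM1 = UPR · √LPM2. A minimal sketch of this relationship:

```python
import numpy as np

def iso_upr_curve(lpm2, upr):
    # Points (LPM2, UPM1) sharing the same upside-potential ratio
    # satisfy UPM1 = upr * sqrt(LPM2)
    return upr * np.sqrt(lpm2)

lpm2 = np.linspace(0.5, 4.0, 8)
for upr in (0.5, 1.0, 2.0):  # three illustrative iso-UPR levels
    upm1 = iso_upr_curve(lpm2, upr)
    # Recovering the ratio from any point on the curve gives upr back
    assert np.allclose(upm1 / np.sqrt(lpm2), upr)
print("ok")
```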

    [Figure: Iso-UPR curves across the strategies]

    We have limited the number of UPR values to eight, but could of course have selected a larger number, both inside and outside the limits we have set.

    The board now has the option of selecting the strategy it finds most opportune, or the one that best fits its “disposition” to risk, by deciding the appropriate values of LPM2 and UPM1 or of the upside-potential ratio. This is what we will pursue further in the next part: “The Virtues of the Board”.

    References

    Breitmeyer, C., Hakenes, H. and Pfingsten, A., (2001). The Properties of Downside Risk Measures. Available at SSRN: http://ssrn.com/abstract=812850 or http://dx.doi.org/10.2139/ssrn.812850.

    Caporin, M. & Lisi,F. (2009). Comparing and Selecting Performance Measures for Ranking Assets. Available at SSRN: http://ssrn.com/abstract=1393163 or http://dx.doi.org/10.2139/ssrn.1393163

    CRMPG III. (2008). The Report of the CRMPG III – Containing Systemic Risk: The Road to Reform. Counterparty Risk Management Policy Group. Available at: http://www.crmpolicygroup.org/index.html

    Cumova, D. & Nawrocki, D. (2003). Portfolio Optimization in an Upside Potential and Downside Risk Framework. Available at: http://www90.homepage.villanova.edu/michael.pagano/DN%20upm%20lpm%20measures.pdf

    Deloitte. (2009). Global Risk Management Survey: Risk management in the spotlight. Deloitte, Item #9067. Available at: http://www.deloitte.com/assets/Dcom-UnitedStates/Local%20Assets/Documents/us_fsi_GlobalRskMgmtSrvy_June09.pdf

    Ekern, S. (1980). Increasing N-th degree risk. Economics Letters, 6: 329-333.

    Gai, P.  & Vause, N. (2004), Risk appetite: concept and measurement. Financial Stability Review, Bank of England. Available at: http://www.bankofengland.co.uk/publications/Documents/fsr/2004/fsr17art12.pdf

    Illing, M., & Aaron, M. (2005). A brief survey of risk-appetite indexes. Bank of Canada, Financial System Review, 37-43.

    Kimball, M.S. (1993). Standard risk aversion.  Econometrica 61, 589-611.

    Menezes, C., Geiss, C., & Tressler, J. (1980). Increasing downside risk. American Economic Review 70: 921-932.

    Nawrocki, D. N. (1999), A Brief History of Downside Risk Measures, The Journal of Investing, Vol. 8, No. 3: pp. 9-

    Sortino, F. A., van der Meer, R., & Plantinga, A. (1999). The upside potential ratio. The Journal of Performance Measurement, 4(1), 10-15.

    Shadwick, W. & Keating, C. (2002). A universal performance measure. The Journal of Performance Measurement, 59–84.

    Tibiletti, L. &  Farinelli, S.,(2002). Sharpe Thinking with Asymmetrical Preferences. Available at SSRN: http://ssrn.com/abstract=338380 or http://dx.doi.org/10.2139/ssrn.338380

    Unser, M., (2000), Lower partial moments as measures of perceived risk: An experimental study, Journal of Economic Psychology, Elsevier, vol. 21(3): 253-280.

    Viole, F. & Nawrocki, D. N. (2010). The Utility of Wealth in an Upper and Lower Partial Moment Fabric. Forthcoming, Journal of Investing 2011. Available at SSRN: http://ssrn.com/abstract=1543603

    Notes

    [1] In the graph risk appetite is found as the inverse of the market’s price of risk, estimated from two probability density functions over future returns – one risk-neutral distribution and one subjective distribution – on the S&P 500 index.

    [2] For a good overview of risk appetite indexes, see “A brief survey of risk-appetite indexes”. (Illing & Aaron, 2005)

    [3] Risk Management: all the processes involved in identifying, assessing and judging risks, assigning ownership, taking actions to mitigate or anticipate them, and monitoring and reviewing progress.

    [4] The Policy Group recommends that each institution ensure that the risk tolerance of the firm is established or approved by the highest levels of management and shared with the board. The Policy Group further recommends that each institution ensure that periodic exercises aimed at estimation of risk tolerance should be shared with the highest levels of management, the board of directors and the institution’s primary supervisor in line with Core Precept III. Recommendation IV-2b (CRMPG III, 2008).

    For an extensive list of Risk Tolerance articles, see: http://www.planipedia.org/index.php/Risk_Tolerance_(Research_Category)

    [5] See: http://en.wikipedia.org/wiki/Utility, http://en.wikipedia.org/wiki/Ordinal_utility and http://en.wikipedia.org/wiki/Expected_utility_theory.

    [6] The ratio was created by Brian M. Rom in 1986 as an element of Investment Technologies’ Post-Modern Portfolio theory portfolio optimization software.

    [7] ‘b’ is usually the target or required rate of return for the strategy under consideration, (‘b’ was originally known as the minimum acceptable return, or MAR). We will in the following calculate the UPR for successive benchmarks (points) covering the complete probability distribution for the firm’s equity value.

    [8] The Lower partial moments will uniquely determine the probability distribution.

    [9] The use of the term distance is not unwarranted; the Phi ratio is very similar to the ratio of two Minkowski distances of order p and q.

    [10] The upper partial-moment is equivalent to the full moment minus the lower partial-moment.

    [11] Since we don’t know the closed form of the equity distributions (strategies), the figures above have been calculated from a limited, but large, number of partial moments.

    Endnotes

    [i] Even if they are not the same, the terms ‘‘risk appetite’’ and ‘‘risk aversion’’ are often used interchangeably. Note that the statement: “increasing risk appetite means declining risk aversion; decreasing risk appetite indicates increasing risk aversion” is not necessarily true.

    [ii] In the following we assume that the board is non-satiated and risk-averse, and has a non-decreasing and concave utility function – U(C) – with derivatives at least to degree five and of alternating signs, i.e. having all odd derivatives positive and all even derivatives negative. This is satisfied by most utility functions commonly used in mathematical economics, including all completely monotone utility functions, such as the logarithmic, exponential and power utility functions.

    More generally, a decision maker is said to be nth-degree risk averse if sign(u^(n)) = (−1)^(n+1) (Ekern, 1980).

     

  • Risk tolerance

    [Image: Amber dice on paper]

    One of the most important concepts within risk management is risk tolerance. Without clearly defining risk tolerance it is virtually impossible to implement good risk management, since we do not know what to measure risk against.

    Defining risk tolerance means defining how much risk the business can live with. Risk tolerance is vitally important in the choice of strategy and in the implementation of the chosen strategy. It may well be that the business is unable to take strategic opportunities because it does not have the ability to carry the risk inherent in the desired strategy.

    The risk carrying ability must therefore be mapped, and preferably quantified, at the beginning of the strategy process, and throughout the process possible strategic choices must be measured against the risk carrying ability of the business. For example, if the financing ability puts a stop to further expansion, it limits the strategic choices the business may make.

    Risk tolerance must be measured against the key figures for which the business is most vulnerable. Assessing risk tolerance as a more or less random number (say, 1 million) makes it close to impossible to understand risk tolerance in an appropriate way. Hence, the business needs a good understanding of what drives its value creation, and also of what sets limits on strategic choices. If the most vulnerable key figure for a business is its equity ratio, then risk tolerance needs to be measured against this ratio.

    The fact that risk tolerance needs to be measured against something means that it is a great advantage for a business to have models that can estimate risk in a quantitative manner, showing clearly which variables and relationships have the biggest impact on the key figures most at risk.

    Originally published in Norwegian.

  • Inventory management – Stochastic supply


    This entry is part 4 of 4 in the series Predictive Analytics

     

    Introduction

    We will now return to the newsvendor who was facing a onetime purchasing decision: where to set the inventory level to maximize expected profit, given his knowledge of the demand distribution. It turned out that even if we did not know the closed form (( In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution, we could find the inventory level that maximized profit and how this affected the vendor’s risk – assuming that his supply could with certainty be fixed at that level. But what if that is not the case? What if his supply is uncertain? Can we still optimize his inventory level?

    We will look at two slightly different cases:

    1. one where supply is uniformly distributed, with actual delivery from 80% to 100% of the ordered volume, and
    2. another where supply has a triangular distribution, with actual delivery from 80% to 105% of the ordered volume, but with most likely delivery at 100%.
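    Both cases can be explored with a straightforward Monte Carlo sketch. The demand distribution, price and cost below are stand-ins, not the post’s actual figures; the point is only to show the mechanics of optimizing the order point under uncertain delivery:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
# Stand-in demand distribution (illustrative, not the post's data)
demand = rng.lognormal(mean=7.8, sigma=0.3, size=n)
price, cost = 2.5, 1.0  # illustrative unit price and unit cost

def expected_profit(order, supply_factor):
    delivered = order * supply_factor  # actual delivery
    sold = np.minimum(delivered, demand)
    return np.mean(price * sold - cost * delivered)

orders = np.arange(1500, 4001, 25)

# Fixed supply: delivery equals the order
fixed = [expected_profit(o, 1.0) for o in orders]
# Case 1: delivery uniform on 80%-100% of the ordered volume
uni = [expected_profit(o, rng.uniform(0.80, 1.00, n)) for o in orders]
# Case 2: delivery triangular on 80%-105%, most likely 100%
tri = [expected_profit(o, rng.triangular(0.80, 1.00, 1.05, n)) for o in orders]

best_fixed = orders[int(np.argmax(fixed))]
best_uni = orders[int(np.argmax(uni))]
best_tri = orders[int(np.argmax(tri))]
# Uncertain supply pushes the optimal order point further out
print(best_fixed, best_uni, best_tri)
```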

    The demand distribution is as shown below (as before):

    Maximizing profit – uniformly distributed supply

    The figure below indicates what happens as we change the inventory level – given fixed supply (blue line). We can see as we successively move to higher inventory levels (from left to right on the x-axis) that expected profit will increase to a point of maximum.

    If we let the actual delivery follow the uniform distribution described above and successively change the order point, expected profit will follow the red line in the graph below. We can see that the new order point lies further to the right on the inventory axis (order point). The vendor is forced to order more newspapers to ‘outweigh’ the supply uncertainty:

    At the point of maximum profit the actual deliveries span from 2300 to 2900 units, with a mean close to the inventory level giving maximum profit in the fixed supply case:

    The realized profits are as shown in the frequency graph below:

    Average profit has been somewhat reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by almost 13%, mainly caused by an increased negative skewness – the downside has been raised.
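    The post does not spell out the formula behind “quartile variation”; a common choice is the quartile coefficient of dispersion, (Q3 − Q1)/(Q3 + Q1), which is straightforward to compute from simulated profits:

```python
import numpy as np

def quartile_variation(x):
    # Quartile coefficient of dispersion: (Q3 - Q1) / (Q3 + Q1).
    # One common definition; the post's exact formula is not stated.
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / (q3 + q1)

rng = np.random.default_rng(0)
profit_fixed = rng.normal(5000, 400, 10_000)  # illustrative profits
profit_stoch = rng.normal(4900, 450, 10_000)  # wider spread

increase = quartile_variation(profit_stoch) / quartile_variation(profit_fixed) - 1
print(increase > 0)  # the stochastic-supply case is more variable
```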

    Maximizing profit – triangular distributed supply

    Again we compare the expected profit with delivery following the triangular distribution described above (red line) with the expected profit created by known and fixed supply (blue line). We can see as we successively move to higher inventory levels (from left to right on the x-axis) that expected profit will increase to a point of maximum. However, the order point for the stochastic supply lies further to the right on the inventory axis than for the non-stochastic case:

    The uncertain supply again forces the vendor to order more newspapers to ‘outweigh’ the supply uncertainty:

    At the point of maximum profit the actual deliveries span from 2250 to 2900 units, with a mean again close to the inventory level giving maximum profit in the fixed supply case ((This is not necessarily true for other combinations of demand and supply distributions.)).

    The realized profits are as shown in the frequency graph below:

    Average profit has been somewhat reduced compared with the non-stochastic supply case, but more important is the increase in profit variability. Measured by the quartile variation, this variability has increased by 10%, again mainly caused by an increased negative skewness – again the downside has been raised.

    The introduction of uncertain supply has shown that profit can still be maximized; however, profit will be reduced by increased costs, both from lost sales and from excess inventory. Most important, profit variability will increase, raising the question of possible alternative strategies.

    Summary

    We have shown through Monte Carlo simulation that the ‘order point’, when the actual delivered amount is uncertain, can be calculated without knowing the closed form of the demand distribution. We actually do not need the closed form of the distribution describing delivery either, only historic data for the supplier’s performance (reliability).

    Since we do not need the closed form of the demand or supply distribution, we are not limited to such distributions, but can use historic data to describe the uncertainty as frequency distributions. Expanding the scope of analysis to include supply disruptions, localization of inventory etc. is thus a natural extension of this method.

    This opens for use of robust and efficient methods and techniques for solving problems in inventory management unrestricted by the form of the demand distribution and best of all, the results given as graphs will be more easily communicated to all parties than pure mathematical descriptions of the solutions.


  • Inventory management – Some effects of risk pooling


    This entry is part 3 of 4 in the series Predictive Analytics

    Introduction

    The newsvendor described in the previous post has decided to branch out, placing newsboys at strategic corners in the neighborhood. He will first consider three locations, but has six in his sights.

    The question to be pondered is how many newspapers he should order for these three locations, and the possible effects on profit and risk (Eppen, 1979) and (Chang & Lin, 1991).

    He assumes that the demand distribution he experienced at the first location will also apply to the two others, and that all locations (points of sale) can be served from a centralized inventory. For the sake of simplicity he further assumes that all points of sale can be restocked instantly (i.e. zero lead time) at zero cost, if necessary or advantageous, by shipment from one of the other locations, and that demand at the different locations will be uncorrelated. The individual points of sale will initially have a working stock, but will have no need of safety stock.

    In short, this is equivalent to having one inventory serve newspaper sales generated by three (or six) copies of the original demand distribution:

    The aggregated demand distribution for the three locations is still positively skewed (0.32) but much less than the original (0.78) and has a lower coefficient of variation – 27% – against 45% for the original ((The quartile variation has been reduced by 37%.)):

    The demand variability has thus been substantially reduced by this risk pooling ((We distinguish between ten main types of risk pooling that may reduce total demand and/or lead time variability (uncertainty): capacity pooling, central ordering, component commonality, inventory pooling, order splitting, postponement, product pooling, product substitution, transshipments, and virtual pooling. (Oeser, 2011)))  and the question now is how this will influence the vendor’s profit.
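    The size of this effect is easy to reproduce by simulation. The sketch below uses a lognormal stand-in for the single-location demand, with parameters chosen only to mimic the positive skew and roughly 45% coefficient of variation described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n, locations = 100_000, 3
# Positively skewed stand-in for the single-location demand
single = rng.lognormal(mean=7.8, sigma=0.42, size=(n, locations))
pooled = single.sum(axis=1)  # uncorrelated demands, one inventory

def cv(x):
    # Coefficient of variation: standard deviation over mean
    return x.std() / x.mean()

def skew(x):
    # Sample skewness: third standardized moment
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

# Pooling uncorrelated demands cuts the coefficient of variation by
# roughly 1/sqrt(3) and also reduces the skewness of aggregate demand
print(cv(single[:, 0]), cv(pooled))
print(skew(single[:, 0]), skew(pooled))
```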

    Profit and Inventory level with Risk Pooling

    As in the previous post we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold
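    With hypothetical price and cost figures, the two definitions translate directly into code:

```python
import numpy as np

def profit_and_loss(demand, stocked, price, cost):
    # Profit: sales revenue less production cost of all stocked items
    # (sold and unsold). Loss: value of lost sales (stock-out) plus
    # the cost of items produced and stocked beyond what was sold.
    sold = np.minimum(demand, stocked)
    profit = price * sold - cost * stocked
    loss = price * np.maximum(demand - stocked, 0.0) \
         + cost * np.maximum(stocked - demand, 0.0)
    return profit, loss

# Illustrative numbers only: stock 100 units at cost 1.0, price 2.5,
# against two demand outcomes (120 and 80 units)
profit, loss = profit_and_loss(np.array([120.0, 80.0]), 100.0, 2.5, 1.0)
print(profit, loss)
```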

    The figure below indicates what will happen as we change the inventory level. We can see as we successively move to higher levels (from left to right on the x-axis) that expected profit (blue line) will increase to a point of maximum – ¤16541 at a level of 7149 units:

    Compared to the point of maximum profit for a single warehouse (profit ¤4963 at a level of 2729 units, see previous post), this risk pooling has increased the vendor’s profit by 11.1% while reducing his inventory by 12.7%. Centralization of the three inventories has thus been a successful operational hedge ((Risk pooling can be considered as a form of operational hedging. Operational hedging is risk mitigation using operational instruments.)) for our newsvendor, mitigating some, but not all, of the demand uncertainty.

    Since this risk mitigation was a success the newsvendor wants to calculate the possible benefits from serving six newsboys at different locations from the same inventory.

    Under the same assumptions, it turns out that this gives an even better result, with an increase in profit of almost 16% and at the same time reducing the inventory by 15%:

    The inventory ‘centralization’ has then both increased profit and reduced inventory level compared to a strategy with inventories held at each location.

    Centralizing inventory (inventory pooling) in a two-echelon supply chain may thus reduce costs and increase profits for the newsvendor carrying the inventory, but the individual newsboys may lose profits due to the pooling. On the other hand, the newsvendor will certainly lose profit if he lets the newsboys decide the levels of both their own inventories and the centralized inventory.

    One of the reasons behind this conflict of interest is that both the newsvendor and the newsboys will benefit one-sidedly from shifting the demand risk to the other party, even though overall performance may suffer as a result (Kemahlioğlu-Ziya, 2004) and (Anupindi & Bassok, 1999).

    In real life, the actual risk pooling effects would depend on the correlations between the locations’ demands. A positive correlation would reduce the effects, while a negative correlation would increase them. If all locations were perfectly positively correlated the effect would be zero, and a correlation coefficient of minus one would maximize the effects.
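    This correlation effect can be checked with a small simulation. The bivariate normal below is only a stand-in for two locations’ demand; the pooled coefficient of variation falls as the correlation falls:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200_000

def pooled_cv(rho):
    # Two locations' demands with correlation rho (normal stand-in,
    # mean 1000 and standard deviation 200 at each location)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    demand = 1000.0 + 200.0 * z
    pooled = demand.sum(axis=1)
    return pooled.std() / pooled.mean()

# Pooling benefit grows as correlation falls: no benefit at rho=+1,
# maximum benefit as rho approaches -1
print(pooled_cv(0.9) > pooled_cv(0.0) > pooled_cv(-0.9))
```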

    The third effect

    The third direct effect of risk pooling is the reduced variability of expected profit. We can see this by plotting the profit variability, measured by its coefficient of variation (( The coefficient of variation is defined as the ratio of the standard deviation to the mean – also known as unitized risk.)) (CV), for the three sets of strategies discussed above: one single inventory (warehouse), three single inventories versus all three centralized, and six single inventories versus all six centralized.

    The graph below depicts the situation. The three curves show the CV for corporate profit given the three alternatives, and the vertical lines the point of maximum profit for each alternative.

    The angle of inclination of each curve shows the profit’s sensitivity to changes in the inventory level, and the location of each curve shows the strategy’s impact on the predictability of realized profit.

    A single warehouse strategy (blue) clearly gives much less ability to predict future profit than the ‘six centralized warehouses’ strategy (purple), while ‘three centralized warehouses’ (green) falls somewhere in between:

    So, in addition to reduced costs and increased profits, centralization also gives a more predictable result and lower sensitivity to inventory level, and hence greater leeway in the practical application of different policies for inventory planning.

    Summary

    We have thus shown through Monte Carlo simulation that the benefits of pooling will increase with the number of locations, and that the benefits of risk pooling can be calculated without knowing the closed form ((In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution.

    Since we do not need the closed form of the demand distribution, we are not limited to low demand variability or to distributions allowing negative demand (Normal distributions etc.). Expanding the scope of analysis to include stochastic supply, supply disruptions, information sharing, localization of inventory etc. is a natural extension of this method ((We will return to some of these issues in later posts.)).

    This opens for use of robust and efficient methods and techniques for solving problems in inventory management unrestricted by the form of the demand distribution and best of all, the results given as graphs will be more easily communicated to all parties than pure mathematical descriptions of the solutions.

    References

    Anupindi, R. & Bassok, Y. (1999). Centralization of stocks: Retailers vs. manufacturer. Management Science, 45(2), 178–191. doi: 10.1287/mnsc.45.2.178, accessed 09/12/2012.

    Chang, P.-L. & Lin, C.-T. (1991). Centralized effect on expected costs in a multi-location newsboy problem. Journal of the Operational Research Society of Japan, 34(1), 87–92.

    Eppen, G. D. (1979). Effects of centralization on expected costs in a multi-location newsboy problem. Management Science, 25(5), 498–501.

    Kemahlioğlu-Ziya, E. (2004). Formal methods of value sharing in supply chains. PhD thesis, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, July 2004. http://smartech.gatech.edu/bitstream/1853/4965/1/kemahlioglu ziya_eda_200407_phd.pdf, accessed 09/12/2012.

    Oeser, G. (2011). Methods of risk pooling in business logistics and their application. Europa-Universität Viadrina Frankfurt (Oder). URL: http://opus.kobv.de/euv/volltexte/2011/45, accessed 09/12/2012.


  • Inventory Management: Is profit maximization right for you?

    Inventory Management: Is profit maximization right for you?

    This entry is part 2 of 4 in the series Predictive Analytics

     

    Introduction

    In the following we will exemplify how sales forecasts can be used to set inventory levels in single or in multilevel warehousing. By inventory we will mean a stock or store of goods: finished goods, raw materials, purchased parts, and retail items. Since the problem discussed is the same for both production and inventory, the two terms will be used interchangeably.

    Good inventory management is essential to the successful operation of most organizations, both because of the amount of money the inventory represents and because of the impact that inventories have on daily operations.

    An inventory can serve many purposes, among them the ability:

    1. to support independence of operations,
    2. to meet both anticipated demand and variation in demand,
    3. to decouple components of production and allow flexibility in production scheduling, and
    4. to hedge against price increases, or to take advantage of quantity discounts.

    The many advantages of stock keeping must, however, be weighed against the costs of keeping the inventory. This can best be described as the “too much/too little problem”: order too much and inventory is left over, or order too little and sales are lost.

    This can be framed as a single-period (a one-time purchasing decision) or a multi-period problem, involving a single warehouse or geographically dispersed multilevel warehousing. The task can then be to minimize the organization’s total cost, maximize the level of customer service, minimize ‘loss’, maximize profit, etc.

    Whatever the purpose, the calculation will have to be based on knowledge of the sales distribution. In addition, sales will usually have a seasonal variance, creating a balancing act between production, logistics and warehousing costs. In the example given below, the sales forecast will have to be viewed as a periodic forecast (month, quarter, etc.).

    We have intentionally selected a ‘simple’ problem to highlight the optimization process and the properties of the optimal solution. The latter is seldom described in the standard texts.

    The News-vendor problem

    The news-vendor faces a one-time purchasing decision: to maximize expected profit, the order quantity Q must be set so that the expected loss on the Qth unit equals the expected gain on the Qth unit:

    I.  Co * F(Q) = Cu * (1 - F(Q)), where

    Co = The cost of ordering one more unit than what would have been ordered if demand had been known – or the increase in profit enjoyed by having ordered one fewer unit,

    Cu = The cost of ordering one fewer unit than what would have been ordered if demand had been known  – or the increase in profit enjoyed by having ordered one more unit, and

    F(Q) = the probability that demand q is less than or equal to Q. By rearranging terms in the above equation, we find:

    II.  F(Q) = Cu/(Co + Cu)

    This ratio is often called the critical ratio (CR). The usual way of solving this is to assume that demand is normally distributed, giving Q as:

    III.  Q = m + z * s, where z = (Q - m)/s is normally distributed with zero mean and variance equal to one.

    Demand, unfortunately, rarely has a normal distribution, and to make things worse we usually do not know the exact distribution at all. We can only ‘find’ it by Monte Carlo simulation, and thus have to find the Q satisfying equation (I) by numerical methods.
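A minimal sketch of this numerical approach: with only a simulated demand sample in hand, the Q solving equation (I) is simply the empirical quantile of the sample at the critical ratio. The lognormal demand sample below is a hypothetical stand-in for a Monte Carlo forecast, not the post's actual data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a Monte Carlo demand forecast: any simulated sample will do;
# here a right-skewed lognormal with mean near the example's 2096 units.
demand_sample = rng.lognormal(mean=7.55, sigma=0.45, size=100_000)

def optimal_q(demand, co, cu):
    """Solve Co*F(Q) = Cu*(1 - F(Q)) numerically: Q is the empirical
    quantile of the simulated demand at the critical ratio Cu/(Co + Cu)."""
    critical_ratio = cu / (co + cu)
    return float(np.quantile(demand, critical_ratio))

q = optimal_q(demand_sample, co=1.0, cu=4.0)   # critical ratio 0.8
```

No distributional assumption enters the calculation; only the simulated sample does.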

    For the news-vendor the inventory level should be set to maximize profit given the sales distribution. This implies that the cost of lost sales will have to be weighed against the cost of adding more to the stock.

    If we for the moment assume that all these costs can be regarded as fixed and independent of the inventory level, then the product markup (% of cost) will determine the optimal inventory level:

    IV.  Cu = Co * (1 + Markup/100)

    In the example given here the critical ratio is approx. 0.8. The question, then, is whether the inventory level indicated by that critical ratio will always be the best for the organization.

    Expected demand

    The following graph indicates the news-vendor’s demand distribution. Expected demand is 2096 units ((Median demand is 1819 units and the demand lies most typically in the range of 1500 to 2000 units)), but the distribution is heavily skewed to the right ((The demand distribution has a skewness of 0.78, with a coefficient of variation of 0.45, a lower quartile of 1432 units and an upper quartile of 2720 units.)), so there is a real possibility of demand far exceeding the expected demand:

    By setting the product markup – in the example below it is set to 300% – we can calculate profit and loss based on the demand forecast.

    Profit and Loss (of opportunity)

    In the following we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold

    The figure below indicates what will happen as we change the inventory level. We can see, as we successively move to higher levels (from left to right on the x-axis), that expected profit (blue line) will increase to a maximum of ¤4963 at a level of 2729 units:

    At that point we can expect to have some excess stock and in some cases also lost sales. But regardless, it is at this point that expected profit is maximized, so this gives the optimal stock level.

    Since we include the costs of both sold and unsold items, the point giving maximum expected profit will be below the point minimizing expected loss – ¤1460 at a production level of 2910 units.
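The two curves can be traced by simple grid search over candidate inventory levels. In the sketch below the demand sample and the exact loss definition (lost sales valued at full price, overstock at cost) are assumptions for illustration, not the post's exact model, but they reproduce the key property that the loss-minimizing level sits above the profit-maximizing one:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical right-skewed demand sample standing in for the forecast.
demand = rng.lognormal(mean=7.55, sigma=0.45, size=50_000)

cost = 1.0                              # production cost per unit
price = cost * (1.0 + 300.0 / 100.0)    # 300% markup

def expected_profit_and_loss(q):
    """Profit: revenue on sold units less production cost of all q units.
    Loss: lost sales (valued here at full price) plus cost of unsold units."""
    sold = np.minimum(demand, q)
    profit = price * sold - cost * q
    loss = price * np.maximum(demand - q, 0) + cost * np.maximum(q - demand, 0)
    return profit.mean(), loss.mean()

levels = np.arange(500, 5001, 50)
profits, losses = np.array([expected_profit_and_loss(q) for q in levels]).T

q_max_profit = levels[profits.argmax()]   # expected-profit-maximizing level
q_min_loss = levels[losses.argmin()]      # expected-loss-minimizing level (higher)
```

Because lost sales carry a higher per-unit penalty in the loss measure than in the profit measure, minimizing expected loss always pushes the stock level further out than maximizing expected profit.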

    Given the optimal inventory level (2729 units) we find the actual sales frequency distribution as shown in the graph below. At this level we expect an average sale of 1920 units – ranging from 262 to 2729 units ((Having a lower quartile of 1430 units and an upper quartile of 2714 units.)).

    The graph shows that the distribution possesses two different modes ((The most common value in a set of observations.)) or two local maxima. This bimodality is created by the fact that the demand distribution is heavily skewed to the right so that demand exceeding 2729 units will imply 2729 units sold with the rest as lost sales.

    This bimodality will of course be reflected in the distribution of realized profits. Keep in mind that the line (blue) giving maximum profit is an average of all realized profits during the Monte Carlo simulation, given the demand distribution and the selected inventory level. We can therefore expect realized profit both below and above this average (¤4963) – as shown in the frequency graph below:

    Expected (average) profit is ¤4963, with a minimum of -¤1681 and a maximum of ¤8186; the range of realized profits, ¤9867, is therefore very large ((Having a lower quartile of ¤2991 and an upper quartile of ¤8129.)).

    So even if we maximize expected profit, we must expect large variation in realized profits; there is no way that the original uncertainty in the demand distribution can be reduced or removed.

    Risk and Reward

    Increased profit comes at a price: increased risk. The graph below describes the situation; the blue curve shows how expected profit increases with the production or inventory (service) level. The spread between the green and red curves indicates the band within which actual profit will fall with 80% probability. As is clear from the graph, this band widens as we move to the right, indicating an increased upside (area up to the green line) but also an increased probability of a substantial downside (area down to the red line):
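The band itself is just a pair of percentiles of the simulated profit at each level. A sketch under the same hypothetical demand and markup assumptions as above:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical demand sample standing in for the Monte Carlo forecast.
demand = rng.lognormal(mean=7.55, sigma=0.45, size=50_000)
cost, price = 1.0, 4.0   # 300% markup

def profit_band(q, lo=0.10, hi=0.90):
    """Expected profit and the 80% band (10th to 90th percentile) at level q."""
    profit = price * np.minimum(demand, q) - cost * q
    return (profit.mean(),
            float(np.quantile(profit, lo)),
            float(np.quantile(profit, hi)))

levels = [1500, 2000, 2500, 3000]
bands = {q: profit_band(q) for q in levels}
# Width of the 80% band at each level: it widens with the inventory level.
widths = {q: bands[q][2] - bands[q][1] for q in levels}
```

Plotting the three components of `bands` against the level reproduces the blue, red and green curves of the figure.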

    For some companies – depending on the shape of the demand distribution – concerns other than profit maximization might therefore be of more importance, such as predictability of results (profit). The act of setting inventory or production levels should accordingly be viewed as an element of the board’s risk assessments.

    On the other hand, the uncertainty band around the loss will decrease as the service level increases. This lies, of course, in the fact that the loss due to lost sales diminishes as the service level increases, and that the high markup easily covers the cost of over-production.

    Thus a strategy of ‘loss’ minimization will falsely give a sense of ‘risk minimization’, while in reality it increases the uncertainty of future realized profit.

    Product markup

    The optimal stock or production level will be a function of the product markup. A high markup will give room for a higher level of unsold items, while a low markup will necessitate a focus on cost reduction and the acceptance of stock-outs:

    The relation between markup (%) and the production level is quadratic ((Markup (%) = 757.5 – 0.78*production level + 0.00023*(production level)^2)), implying that the markup will have to be increasingly higher the further out on the right tail we fix the production level.
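The fitted relation in the footnote can be inverted to read off the production level implied by a given markup, by solving the quadratic for its larger root (the right-hand, rising branch of the parabola):

```python
import math

def production_level(markup_pct):
    """Invert the post's fitted relation
    Markup(%) = 757.5 - 0.78*Q + 0.00023*Q^2
    for Q, taking the larger root of the quadratic."""
    a, b, c = 0.00023, -0.78, 757.5 - markup_pct
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("markup below the minimum of the fitted curve")
    return (-b + math.sqrt(disc)) / (2 * a)

q300 = production_level(300)   # close to the post's optimum of ~2729 units
```

The larger root grows with the markup, confirming that ever-higher markups are needed to justify stock levels further out on the right tail.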

    The Optimal inventory (production) level

    If we put it all together we get the chart below. Here the green curve is the cumulative sales distribution, giving the probability of each level of sales, and the brown curve is the optimal stock or production level given the markup.

    The optimal stock level is then found by drawing a line from the markup axis (right y-axis) to the curve (red) for the optimal stock level, and down to the x-axis, giving the stock level. By continuing the line from the markup axis to the probability axis (left y-axis), we find the probability of stock-out (1 - the cumulative probability) and the probability of having a stock level in excess of demand:

    By using the sales distribution we can find the optimal stock/production level given the markup. This would not have been possible with single-point sales forecasts, which could have ended up almost anywhere on the curve of forecasted sales.

    Even if a single-point forecast managed to hit expected sales – as mean, mode or median – it would still have given the wrong answer about the optimal stock/production level, since the shape of the sales distribution would have been unknown.

    In this case, with a right-skewed sales distribution, the level would have been too low – or, with a low markup, too high. With a left-skewed sales distribution the result would have been the other way around: the level would have been too high, and with a low markup probably too low.

    In the case of multilevel warehousing, the above analyses have to be performed on all levels and solved as a simultaneous system.

    The state of affairs at the point of maximum

    To get the full picture of the state of affairs at the point of maximum, we have to look at what we can expect of over- and under-production. At the level giving maximum expected profit we will on average have an underproduction of 168 units, ranging from zero to nearly 3000 ((Having a coefficient of variation of almost 250%.)). On the face of it this could easily be interpreted as having set the level too low, but as we shall see, that is not the case.

    Since we have a high markup, lost sales weigh heavily in the profit maximization, and as a result we can expect to have unsold items in our stock at the end of the period. On average we will have a little over 800 units left in stock, ranging from zero to nearly 2500. The lower quartile is 14 units and the upper is 1300 units, so in 75% of the cases we will have an overproduction of less than 1300 units; in the remaining 25% of the cases the overproduction will be in the range from 1300 to 2500 units.

    Even with the possibility of ending the period with a large number of unsold units, the strategy of profit maximization will on average give the highest profit – however, as we have seen, with a very high level of uncertainty about the profit actually realized.

    Now, since a lower inventory level in this case will reduce expected profit only by a small amount while substantially improving the lower confidence limit, other strategies giving a more predictable result should be considered.

  • Be prepared for a bumpy ride

    Be prepared for a bumpy ride

    Imagine you’re nicely settled down in your airline seat on a transatlantic flight – comfort-able, with a great feeling. Then the captain comes on and welcomes everybody on board and continues, “It’s the first time I fly this type of machine, so wish me luck!” Still feeling great? ((Inspired by an article from BTS: http://www.bts.com/news-insights/strategy-execution-blog/Why_are_Business_Simulations_so_Effective.aspx))

    Running a company in today’s interconnected and volatile world has become extremely complicated; surely far more so than flying an airliner. You probably don’t have all the indicators, dashboard systems and controls of a flight deck. And business conditions are likely to change far more than flight conditions ever will. Today we live with an information overload, data streaming at us almost everywhere we turn. How can we cope? How do we make smart decisions?

    Pilots train over and over again. They spend hour after hour in flight simulators before being allowed to sit as co-pilots on a real passenger flight. Fortunately for us passengers, flight hours normally pass by, day after day, without much excitement. Then it’s time to hit the simulator again and train for engine fires, damaged landing gear, landing on water, passenger evacuation, etc., becoming both mentally and practically prepared to manage the worst.

    Why aren’t we running business simulations to the same extent? Accounting, financial models and budgeting is more an art than science, many times founded on theories from the last century. (Not to mention Pacioli’s Italian accounting from 1491.) While the theory of behavioural economics progresses we must use the best tools we can get to better understand financial risks and opportunities and how to improve and refine value creation. The true job we’re set to do.

    How is it done? Like Einstein – seeking simplicity, as far as it goes. Finding out which pieces of information are most crucial to the success and survival of the business. For major corporations these can be drawn down from the hundreds to some twenty key variables. (These variables are not set in stone once and for all, but need to be redefined in accordance with the business situation we foresee in the near future.)

    At Allevo our focal point is on Risk Governance at large and helping organisations implement Enterprise Risk Management (ERM) frameworks and processes, specifically assisting boards and executive management to exercise their Risk Oversight duties. Fundamental to good risk management practice is to understand and articulate the organisation’s (i.e. the Board’s) appetite for risk. Without understanding the appetite and tolerance levels for various risks it’s hard to measure, aggregate and prioritize them. How much are we willing to spend on new ventures and opportunities? How much can we afford to lose? How do we calculate the trade-offs?

    There are two essential elements of Risk Appetite: risk capacity and risk capability.

    By risk capacity we mean the financial ability to take on new opportunities with their inherent risks (i.e. availability of cash and funding across the strategy period). By risk capability is meant the non-financial resources of the organisation: do we have the knowledge and resources to take on new ventures? Cash and funding are fundamental and come first.

    Do executive management and the board really understand the strengths and vulnerabilities hiding in the balance sheet or in the P&L account? Many may have a gut feeling, mostly the CFO and the treasury department. But shouldn’t the executive team and the board (including the Audit Committee, and the Risk Committee if there is one) also really know?

    At Allevo we have aligned with Strategy@Risk Ltd to do business simulations. They have experience from all kinds of industries, especially process industries, where they have even helped optimize manufacturing processes. They have simulated airports and flight patterns for a whole country. For companies with high levels of raw material and commodity risk they simulate optimum hedging strategies. But their main contribution, in our opinion, is their ability to simulate your organisation’s balance sheet and P&L accounts. They have created a simulation tool that can be applied to a whole corporation. It only needs to be adjusted to your specific operations and business environments, which is done through interviews and a few workshops with your own people who have the best knowledge of your business (operations, finances, markets, strategy etc.).

    When the key variables have been identified, it’s time to run the first Monte Carlo simulations to find out if the model fits with recent actual experiences and otherwise feels reliable.

    No model can ever predict the future. What we want to do is to find the key strengths and weaknesses in your operations and in your balance sheet. By running sensitivity analyses we can first of all understand what the key variables are. We want to focus on what’s important, and leave alone those variables that have little effect on outcomes.

    Now, it’s time for the most important part. Considering how the selected variables can vary and interact over time. The future contains an inconceivable amount of different outcomes ((There are probably more different futures than ways of dealing 52 playing cards. Don’t you think? Well there are only 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 ways to shuffle a deck of 52 cards (8.1 x 1067 ))). What does that say about budgeting with discrete numbers?)). The question is how can we achieve the outcomes that we desire and avoid the ones that we dread the most?

    Running 10,000 simulations (i.e. closing each and every annual account over 10,000 years), we can stop the simulation when reaching a desired level of outcome and investigate the position of the key variables. Likewise, when nasty results appear, we stop again and record the underlying position of each variable.

    The simulations generate an 80-page standard report (which, once again, can feel like information overload). But once you’ve got a feeling for the sensitivity of the business, you could instead do specific “what if?” analyses of scenarios of special interest to yourself, the executive team or the board.

    Finally, the model estimates the probability distribution of the organisation’s Enterprise Value going forward. The key for any business is to grow Enterprise Value.

    Simulations show how the likelihood of increasing or losing value varies with different strategies. This part of the simulation tool could be extremely important in strategy selection.

    If you wish to go into more depth on how simulations can support you and your organisation, please visit

    www.allevo.se or www.strategy-at-risk.com

    There you’ll find a great depth of material to chose from; or call us direct and we’ll schedule a quick on-site presentation.

    Have a good flight, and …

    Happy landing!