Strategy – Strategy @ Risk

Tag: Strategy

  • Risk tolerance

    One of the most important concepts within risk management is risk tolerance.  Without clearly defining risk tolerance it is virtually impossible to implement good risk management, since we do not know what to measure risk against.

    Defining risk tolerance means defining how much risk the business can live with. Risk tolerance is vitally important in the choice of strategy and in the implementation of the chosen strategy. It may well be that the business is unable to take strategic opportunities because it does not have the ability to carry the risk inherent in the desired strategy.

    The risk carrying ability must therefore be mapped and preferably quantified at the beginning of the strategy process, and throughout the process possible strategic choices must be measured against the risk carrying ability of the business. For example, if the financing ability puts a stop to further expansion, it limits the strategic choices the business may make.

    Risk tolerance must be measured against the key figures for which the business is most vulnerable. Assessing risk tolerance as a more or less arbitrary number (say, 1 million) makes it close to impossible to understand risk tolerance in an appropriate way. Hence, the business needs a good understanding of what drives its value creation, and of what sets limits on its strategic choices. If the most vulnerable key figure for a business is its equity ratio, then risk tolerance needs to be measured against this ratio.

    The fact that risk tolerance needs to be measured against something means that it is a great advantage for a business to have models that can estimate risk in a quantitative manner, showing clearly which variables and relationships have the biggest impact on the key figures most at risk.

    Originally published in Norwegian.

  • Inventory management – Some effects of risk pooling


    This entry is part 3 of 4 in the series Predictive Analytics

    Introduction

    The newsvendor described in the previous post has decided to branch out, placing newsboys at strategic corners in the neighborhood. He will first consider three locations, but has six in his sights.

    The question to be pondered is how many newspapers he should order for these three locations, and the possible effects on profit and risk (Eppen, 1979; Chang & Lin, 1991).

    He assumes that the demand distribution he experienced at the first location will also apply to the two others, and that all locations (points of sale) can be served from a centralized inventory. For the sake of simplicity he further assumes that all points of sale can be restocked instantly (i.e. zero lead time) at zero cost – if necessary or advantageous, by shipment from one of the other locations – and that demand at the different locations will be uncorrelated. The individual points of sale will initially have a working stock, but will have no need of safety stock.

    In short, this is equivalent to having one inventory serve newspaper sales generated by three (or six) copies of the original demand distribution.

    The aggregated demand distribution for the three locations is still positively skewed (0.32), but much less so than the original (0.78), and has a lower coefficient of variation – 27% against 45% for the original ((The quartile variation has been reduced by 37%.)).
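
    A minimal Monte Carlo sketch of this pooling effect is given below. The post's actual demand distribution is not restated here, so a positively skewed log-normal with a CV of 45% is assumed as a stand-in; the roughly 1/√3 reduction in CV matches the post, while the exact skewness values depend on the assumed distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 100_000

    # Stand-in demand distribution: log-normal with assumed mean 2,500 and CV 45%
    mean, cv = 2500.0, 0.45
    s2 = np.log(1 + cv**2)                      # sigma^2 of the underlying normal
    demand = rng.lognormal(np.log(mean) - s2 / 2, np.sqrt(s2), size=(N, 3))

    def cv_and_skew(x):
        m, s = x.mean(), x.std()
        return s / m, ((x - m) ** 3).mean() / s**3

    print("single location (CV, skew):", cv_and_skew(demand[:, 0]))
    print("pooled, 3 sites (CV, skew):", cv_and_skew(demand.sum(axis=1)))
    ```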

    The demand variability has thus been substantially reduced by this risk pooling ((We distinguish between ten main types of risk pooling that may reduce total demand and/or lead time variability (uncertainty): capacity pooling, central ordering, component commonality, inventory pooling, order splitting, postponement, product pooling, product substitution, transshipments, and virtual pooling. (Oeser, 2011))), and the question now is how this will influence the vendor’s profit.

    Profit and Inventory level with Risk Pooling

    As in the previous post we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold

    The figure below indicates what will happen as we change the inventory level. We can see, as we successively move to higher levels (from left to right on the x-axis), that expected profit (blue line) increases to a point of maximum – ¤16541 at a level of 7149 units.
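
    A sketch of how such a sweep can be computed by simulation; the unit price, cost and demand parameters below are assumptions for illustration only, not the figures behind the post's numbers.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 100_000

    price, cost = 5.0, 3.0                      # assumed unit price and unit cost
    cv = 0.27                                   # pooled demand CV, three locations
    s2 = np.log(1 + cv**2)
    demand = rng.lognormal(np.log(7000) - s2 / 2, np.sqrt(s2), N)  # assumed mean 7,000

    def profit_and_loss(q):
        sold = np.minimum(demand, q)
        profit = price * sold - cost * q                 # sales less cost of all q produced
        lost_sales = price * np.maximum(demand - q, 0)   # value of lost sales (stock-out)
        unsold = cost * np.maximum(q - demand, 0)        # cost of overproduction
        return profit.mean(), (lost_sales + unsold).mean()

    levels = np.arange(5000, 10001, 50)
    expected = [profit_and_loss(q)[0] for q in levels]
    best = levels[int(np.argmax(expected))]
    print(f"profit-maximizing level: {best} units, expected profit: {max(expected):.0f}")
    ```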

    Compared to the point of maximum profit for a single warehouse (profit of ¤4963 at a level of 2729 units, see previous post), this risk pooling has increased the vendor’s profit by 11.1% while reducing his inventory by 12.7%. Centralization of the three inventories has thus been a successful operational hedge ((Risk pooling can be considered a form of operational hedging – risk mitigation using operational instruments.)) for our newsvendor, mitigating some, but not all, of the demand uncertainty.

    Since this risk mitigation was a success, the newsvendor wants to calculate the possible benefits of serving six newsboys at different locations from the same inventory.

    Under the same assumptions, it turns out that this gives an even better result: an increase in profit of almost 16%, while at the same time reducing the inventory by 15%.

    Inventory ‘centralization’ has thus both increased profit and reduced the inventory level, compared to a strategy with inventories held at each location.

    Centralizing inventory (inventory pooling) in a two-echelon supply chain may thus reduce costs and increase profits for the newsvendor carrying the inventory, but the individual newsboys may lose profit due to the pooling. On the other hand, the newsvendor will certainly lose profit if he allows the newsboys to decide the levels of both their own and the centralized inventory.

    One of the reasons behind this conflict of interests is that both the newsvendor and the newsboys can benefit one-sidedly from shifting the demand risk to the other party, even though overall performance may suffer as a result (Kemahlioğlu-Ziya, 2004; Anupindi & Bassok, 1999).

    In real life, the actual risk pooling effects will depend on the correlations between the locations’ demands. Positive correlation reduces the effect, while negative correlation increases it. If all locations were perfectly positively correlated the effect would be zero, while a correlation coefficient of minus one (attainable between two locations) would maximize the effect.
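
    For identical locations with a common pairwise correlation, the pooled coefficient of variation follows directly from the variance of a sum. A small sketch, carrying over the 45% single-location CV assumed above:

    ```python
    import numpy as np

    def pooled_cv(cv_single, n, rho):
        """CV of total demand over n identical locations with common pairwise
        correlation rho (feasible only for rho >= -1/(n - 1))."""
        return cv_single * np.sqrt((1 + (n - 1) * rho) / n)

    for rho in (-0.5, 0.0, 0.5, 1.0):
        print(f"rho = {rho:+.1f}: pooled CV = {pooled_cv(0.45, 3, rho):.2f}")
    ```

    At rho = 1 the pooled CV equals the single-location CV (no pooling effect), and at the most negative feasible common correlation for three locations, rho = -0.5, the pooled variance vanishes.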

    The third effect

    The third direct effect of risk pooling is the reduced variability of expected profit. We plot the profit variability, measured by its coefficient of variation ((The coefficient of variation is defined as the ratio of the standard deviation to the mean – also known as unitized risk.)) (CV), for the three sets of strategies discussed above: one single inventory (warehouse); three single inventories versus all three centralized; and six single inventories versus all six centralized.

    The graph below depicts the situation. The three curves show the CV for corporate profit under each alternative, and the vertical lines the point of maximum profit for each alternative.

    The angle of inclination of each curve shows the profit’s sensitivity to changes in the inventory level, and the location of each curve shows the strategy’s impact on the predictability of realized profit.

    A single warehouse strategy (blue) clearly gives much less ability to predict future profit than the ‘six centralized warehouses’ strategy (purple), while the ‘three centralized warehouses’ strategy (green) falls somewhere in between:

    So, in addition to reduced costs and increased profits, centralization also gives a more predictable result and lower sensitivity to the inventory level, and hence greater leeway in the practical application of different policies for inventory planning.

    Summary

    We have thus shown, through Monte Carlo simulation, that the benefits of pooling increase with the number of locations, and that the benefits of risk pooling can be calculated without knowing the closed form ((In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution.

    Since we do not need the closed form of the demand distribution, we are not limited to assuming low demand variability or accepting the possibility of negative demand (as with normal distributions, etc.). Expanding the scope of analysis to include stochastic supply, supply disruptions, information sharing, localization of inventory, etc. is a natural extension of this method ((We will return to some of these issues in later posts.)).

    This opens the way for robust and efficient methods and techniques for solving problems in inventory management, unrestricted by the form of the demand distribution – and, best of all, results given as graphs are more easily communicated to all parties than purely mathematical descriptions of the solutions.

    References

    Anupindi, R. & Bassok, Y. (1999). Centralization of stocks: Retailers vs. manufacturer.  Management Science 45(2), 178-191. doi: 10.1287/mnsc.45.2.178, accessed 09/12/2012.

    Chang, Pao-Long & Lin, C.-T. (1991). Centralized Effect on Expected Costs in a Multi-Location Newsboy Problem. Journal of the Operational Research Society of Japan, 34(1), 87–92.

    Eppen,G.D. (1979). Effects of centralization on expected costs in a multi-location newsboy problem. Management Science, 25(5), 498–501.

    Kemahlioğlu-Ziya, E. (2004). Formal methods of value sharing in supply chains. PhD thesis, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, July 2004. http://smartech.gatech.edu/bitstream/1853/4965/1/kemahlioglu ziya_eda_200407_phd.pdf, accessed 09/12/2012.

    Oeser, G. (2011). Methods of Risk Pooling in Business Logistics and Their Application. Europa-Universität Viadrina Frankfurt (Oder). URL: http://opus.kobv.de/euv/volltexte/2011/45, accessed 09/12/2012.


  • Uncertainty modeling


    This entry is part 2 of 3 in the series What We Do

    Prediction is very difficult, especially about the future.
    – Niels Bohr, Danish physicist (1885-1962)

    Strategy @ Risk’s models provide the possibility to study risks and uncertainties related to operational activities (costs, prices, suppliers, markets, sales channels, etc.), financial issues (interest rate risk, exchange rate risk, translation risk, taxes, etc.), strategic issues (investments in new or existing activities, valuation, M&A, etc.), and a wide range of budgeting purposes.

    All economic activity has an inherent volatility that is an integral part of its operations. This means that whatever you do, some uncertainty will always remain.

    The aim is to estimate the economic impact that such critical uncertainty may have on corporate earnings at risk. This adds a third dimension – probability – to all forecasts and gives new insight: the ability to deal with uncertainties in an informed way, with benefits beyond ordinary spreadsheet exercises.

    The results from these analyses can be presented in the form of B/S and P&L statements looking one to five years ahead (short term) or five to fifteen years ahead (long term), showing the impact on e.g. equity value, company value and operating income. The purpose is to:

    • Improve predictability of operating earnings and their expected volatility
    • Improve budgeting processes, predicting budget deviations and their probabilities
    • Evaluate alternative strategic investment options at risk
    • Identify and benchmark investment portfolios and their uncertainty
    • Identify and benchmark individual business units’ risk profiles
    • Evaluate equity values and enterprise values and their uncertainty in M&A processes, etc.

    Methods

    To be able to add uncertainty to financial models, we also have to add more complexity. This complexity is inevitable, but in our case it is desirable, and it will be well managed inside our models.

    People say they want models that are simple, but what they really want is models with the necessary features – that are easy to use. If something is complex but well designed, it will be easy to use – and this holds for our models.

    Most companies have some sort of model describing the company’s operations. These are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data: sales, costs, interest and currency rates, etc.

    We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that conveys both the different risks facing the company and the opportunities they bring forth.

    S@R has set out to create models that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to holistic balance simulation, taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Both the deterministic and the stochastic balance simulation can be carried out in two different ways:

    1. using an EBITDA model to describe the company’s operations, or
    2. using coefficients of fabrication (e.g. kg of flour per 1000 loaves of bread) as direct input to the balance model – the ‘short cut’ method.

    The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about markets, capacity-driven investments, operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R. This is a tool for long term planning and strategy development.

    The second (‘the short cut’) uses coefficients of fabrication and their variations, and is a low effort (cost) alternative, usually using the internal accounting as basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. It can be based on existing investment and market plans. The data needed for the company’s economic environment (taxes, interest rates, etc.) will be the same in both alternatives.

    The ‘short cut’ approach is especially suited for quick appraisals of M&A cases where time and data are limited and where one wishes to limit the effort in an initial stage. Later, the data and assumptions can be augmented for much more sophisticated analyses within the same ‘short cut’ framework. In this way, analyses can be built successively in the direction the previous studies suggest.

    This also makes it a good tool for short-term (3-5 years) analysis and even for budget assessment. Since it uses a limited number of variables describing the operations – usually fewer than twenty – it is easy to maintain and operate. The variables describing financial strategy and the economic environment come in addition, but are easy to obtain.

    Used in budgeting, it will give the opportunity to evaluate budget targets, their probable deviation from the expected result, and the probable upside or downside given the budget target (the upside/downside ratio).

    Done this way, analyses can be run for subsidiaries across countries, translating the P&L and balance sheet to any currency for benchmarking, investment appraisals, risk and opportunity assessments, etc. The final uncertainty distributions can then be aggregated to show global risk for the parent company.

    An interesting feature is the model’s ability to start simulations with an empty opening balance. This can be used to assess divisions that do not have an independent balance sheet, since the model will call for equity/debt etc. based on a target ratio, according to the simulated production and sales and the necessary investments. Questions about further investment in divisions or product lines can be studied this way.

    Since every run (500 to 1000) in the simulation produces a complete P&L and balance sheet, the uncertainty curve (distribution) for any financial metric – ‘yearly result’, ‘free cash flow’, ‘economic profit’, ‘equity value’, ‘IRR’, ‘translation gain/loss’, etc. – can be produced.
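
    A minimal sketch of that mechanism; `sample_assumptions` and `simulate_company` are hypothetical stand-ins for the real input sampling and balance model (they are not part of any published S@R interface), and all figures are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical stand-ins: one draws a set of uncertain inputs, the other
    # runs a (here trivially simplified) deterministic P&L calculation on them.
    def sample_assumptions(rng):
        return {"sales": rng.lognormal(np.log(100), 0.2),
                "interest_rate": rng.normal(0.04, 0.01)}

    def simulate_company(a):
        ebitda = 0.15 * a["sales"]
        pretax = ebitda - a["interest_rate"] * 50          # interest on assumed debt of 50
        return {"yearly_result": pretax * (1 - 0.28),      # crude 28% tax line
                "free_cash_flow": ebitda - 5}              # assumed maintenance capex

    runs = [simulate_company(sample_assumptions(rng)) for _ in range(1000)]
    result = np.array([r["yearly_result"] for r in runs])
    print("mean:", result.mean().round(2),
          "5%/95% quantiles:", np.quantile(result, [0.05, 0.95]).round(2))
    ```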

    In some cases we have used both approaches for the same client, using the ‘short cut’ approach for smaller subsidiaries with production structures differing from the main company’s.
    The second approach can also be considered an introduction and stepping stone to a more holistic EBITDA model.

    Time and effort

    The workload for the client is usually limited to a small team (1 to 3 persons) acting as project leaders and principal contacts, ensuring that all necessary information describing the value and risks of the client’s operations can be collected as a basis for modeling and calculations. The type of data needed will, however, have to be agreed upon depending on the scope of the analysis.

    Very often, key people from the controller group will be adequate for this work, and if they don’t have the direct knowledge they usually know whom to ask. The work for this team, depending on the scope and the choice of method (see above), can vary in effective time from a few days to a couple of weeks, but this can be stretched over three to four weeks up to the same number of months.

    For S@R, the time frame will depend on the availability of key personnel from the client and the availability of data. The second alternative can take from one to three weeks of normal work; the first alternative, for more complex models, three to six months. The total time will also depend on the number of analyses that need to be run and the type of reports that have to be delivered.

    [Figure: S@R ValueSim]

    Selecting strategy

    Models like this are excellent for the selection and assessment of strategies. Since we can find the probability distribution for equity value, changes in it brought about by different strategies will form a basis for selection or adjustment of the current strategy. Models including real option strategies are a natural extension of these simulation models.

    If one strategy’s curve lies to the right of, and below, all other feasible strategies’ curves, it is the stochastically dominant one. If the curves cross, further calculations need to be done before a stochastically dominant or preferable strategy can be found.

    Types of problems we aim to address:

    The effects of uncertainties on the P&L and balance sheet, and the effects of the Board’s strategies (market, hedging, etc.) on future P&L and balance sheets, evaluating:

    • Market position and potential for growth
    • Effects of tax and capital cost
    • Strategies
    • Business units, country units or product lines –  capital allocation – compare risk, opportunity and expected profitability
    • Valuations, capital cost and debt requirements, individually and effect on company
    • The future cash-flow volatility of company and the individual BU’s
    • Investments, M&A actions, their individual value, necessary commitments and impact on company
    • Etc.

    The aim, regardless of approach, is to quantify not only the company’s single and aggregated risks, but also its potential, making the company capable of performing detailed planning and of executing earlier and more apt actions against uncertain factors.

    Used in budgeting, this will improve budget stability through greater insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to better target realistic budgets – with better stability and increased company value as a result.

    This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies through analysis of the individual strategies’ risks and potential, selecting the alternative that is stochastically dominant given the company’s chosen risk profile.

    A severe depression like that of 1920-1921 is outside the range of probability. – The Harvard Economic Society, 16 November 1929

  • Perception of Risk


    Google Trends and Google Insights for Search give us the opportunity to gain information on a subject’s popularity. A paper by Google Inc. and the Centers for Disease Control and Prevention (USA) has shown how search queries can be used to estimate the current level of influenza activity in the United States (Ginsberg, Mohebbi, Patel, Brammer, Smolinski, & Brilliant, 2009).

    It is tempting to use these Google tools to see how searches for terms connected to risk and strategy have developed over recent years. Using Google Trends to search for the terms economic risk and financial strategy, we find the relative and normalized search frequencies shown in the graphs below:

    [Figure: search volume index for ‘economic risk’ and ‘financial strategy’]

    The weekly observations start in January 2004, but due to missing data we have started the economic risk series in September 2004. As is evident from the time series, the search terms are highly correlated (approx. 0.80) and there is a consistent seasonal variation, with heightened activity in spring and fall. The average value of the normalized search volume index is 1.0 for the term economic risk and 1.58 for financial strategy; the term financial strategy has thus on average been searched for 58% more often than economic risk.

    The numbers …. on the y-axis of the Search Volume Index aren’t absolute search traffic numbers. Instead, Trends scales the first term you’ve entered so that its average search traffic in the chosen time period is 1.0; subsequent terms are then scaled relative to the first term. Note that all numbers are relative to total traffic. (About Google Trends, 2009)
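
    A minimal sketch of this scaling, on made-up series: the first term’s series is divided by its own mean (so it averages 1.0), and every other term is divided by that same mean, preserving relative magnitudes.

    ```python
    import numpy as np

    # Made-up weekly share-of-traffic series for the two terms
    economic_risk = np.array([12.0, 10.0, 9.0, 11.0, 8.0])
    financial_strategy = np.array([18.0, 16.0, 15.0, 17.0, 13.0])

    scale = economic_risk.mean()                 # first term's average becomes 1.0
    print("economic risk index:     ", economic_risk / scale)
    print("financial strategy index:", financial_strategy / scale)
    ```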

    Both series show a falling trend from early 2004 to mid 2006, indicating the terms’ declining relative shares of all Google searches. From then on, however, the relative shares have been maintained, indicating that interest in the terms has kept pace with the growth in total Internet search activity.

    It is also possible to rank the different regions’ interest in the subjects:

    Region Ranking

    Region          Risk   Strategy
    Singapore       1.00   0.80
    South Africa    0.86   1.43
    Hong Kong       0.74   0.83
    Malaysia        0.70   1.06
    India           0.50   1.10
    South Korea     0.44   0.46
    Philippines     0.41   0.58
    Australia       0.36   0.50
    Indonesia       0.35   0.35
    New Zealand     0.26   0.38

    Singapore is the region with the highest share of searches including the term ‘risk’, and South Africa the region with the highest share of searches including ‘strategy’. In India the term ‘financial strategy’ is important, but ‘risk’ less so.

    The most striking feature of the table, however, is the absence of American and European regions. Is there less interest in these subjects in the West than in the East?

    References

    Ginsberg, J., Mohebbi, M., Patel, R., Brammer, L., Smolinski, M., & Brilliant, L. (2009). Detecting influenza epidemics using search engine query data. Nature, 457, 1012-1014.

    About Google Trends. (n.d.). Retrieved from http://www.google.com/intl/en/trends/about.html#7

  • Selecting Strategy


    This entry is part 2 of 2 in the series Valuation

     

    This is an example of how S@R can define, analyze, visualize and help in selecting strategies for a broad range of issues: financial, operational and strategic.

    Assume that we have performed a simulation of corporate equity value (see: Corporate-risk-analysis) for two different strategies (A and B). The cumulative distributions are given in the figure below.

    Since the calculation is based on a full simulation of both P&L and balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can use the distributions directly as a basis for selecting the best strategy.

    [Figure: cumulative distributions of equity value for strategies A and B]

    In this rather simple case we intuitively find strategy B to be the best, its curve lying to the right of strategy A’s for all probable values of equity. However, to be able to select the best strategy from larger and more complicated sets of feasible strategies, we need a more well-grounded method than mere intuition.

    The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern’s expected utility paradigm (von Neumann & Morgenstern, 1953), is such a method.

    When there is no uncertainty, the maximum return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm’s expected utility.

    To specify a utility function (U) we must have a measure that uniquely identifies each strategy (business) outcome, and a function that maps each outcome to its corresponding utility. However, utility is purely an ordinal measure. In other words, utility can be used to establish the rank ordering of strategies, but cannot be used to determine the degree to which one is preferred over the other.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
    $E_A[U(X)] \ge E_B[U(X)]$, with at least one strict inequality.

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

    Assuming that $U' \ge 0$, i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted $A D_1 B$, i.e. A dominates B by 1st order stochastic dominance) if:

    $E_A[U(X)] \ge E_B[U(X)] \;\Leftrightarrow\; S_A(x) \le S_B(x)$ for all x

    where $S(x)$ is the strategy’s distribution function and there is at least one strict inequality.

    If $A D_1 B$, then for all values x, the probability of obtaining x or a value higher than x is at least as large under A as under B.

    Sufficient rule 1: A dominates B if $\mathrm{Min}_A \ge \mathrm{Max}_B$   (non-overlapping distributions)

    Sufficient rule 2: A dominates B if $S_A(x) \le S_B(x)$ for all x   ($S_A$ ‘below’ $S_B$)

    The most important necessary rules:

    Necessary rule 1: $A D_1 B \Rightarrow \mathrm{Mean}_A > \mathrm{Mean}_B$

    Necessary rule 2: $A D_1 B \Rightarrow \mathrm{Geometric\ mean}_A > \mathrm{Geometric\ mean}_B$

    Necessary rule 3: $A D_1 B \Rightarrow \mathrm{Min}_A \ge \mathrm{Min}_B$
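
    Given Monte Carlo samples of outcomes under two strategies, sufficient rule 2 can be checked directly on the empirical distribution functions. A minimal sketch, with hypothetical equity-value samples that differ only in mean:

    ```python
    import numpy as np

    def first_order_dominates(a, b, grid_size=512):
        """Sufficient rule 2: A dominates B if S_A(x) <= S_B(x) for all x,
        with at least one strict inequality (empirical CDFs on a common grid)."""
        grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
        s_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
        s_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
        return bool(np.all(s_a <= s_b) and np.any(s_a < s_b))

    rng = np.random.default_rng(3)
    equity_a = rng.normal(110, 15, 10_000)   # hypothetical equity-value samples
    equity_b = rng.normal(100, 15, 10_000)
    print("A D1 B:", first_order_dominates(equity_a, equity_b))
    ```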

    For the case above we find that strategy B dominates strategy A – $B D_1 A$ – since sufficient rule 2 for first order dominance is satisfied:

    [Figure: distribution functions for strategies A and B]

    And of course, since one of the sufficient conditions is satisfied, all of the necessary conditions are satisfied. So our intuition about B being the best strategy is confirmed. However, there are cases where intuition will not work:

    [Figure: cumulative distributions for strategies #1 and #2]

    In this case the distributions cross and there is no first order stochastic dominance:

    [Figure: distribution functions for strategies #1 and #2]

    To be able to determine the dominant strategy we have to make further assumptions about the utility function – $U'' \le 0$ (risk aversion), etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a larger class of strategies. N-th order dominance is defined by the n-th order distribution function:

    $S^1(x) = S(x), \qquad S^n(x) = \int_{-\infty}^{x} S^{n-1}(u)\,du$

    where S(x) is the strategy’s distribution function.

    Then strategy A dominates strategy B in the sense of n-th order stochastic dominance – $A D_n B$ – if:

    $S^n_A(x) \le S^n_B(x)$ for all x, with at least one strict inequality, and

    $E_A[U(X)] \ge E_B[U(X)]$, with at least one strict inequality,

    for all U satisfying $(-1)^k U^{(k)}(x) \le 0$ for $k = 1, 2, \ldots, n$.

    The last assumption implies that U has positive odd derivatives and negative even derivatives:

    $U' \ge 0$ → increasing wealth preference

    $U'' \le 0$ → risk aversion

    $U''' \ge 0$ → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

    Calculating the n-th order distribution function when you only have observations of the first order distribution from Monte Carlo simulation can be difficult. We will instead use the lower partial moments (LPM) since (Ingersoll, 1987):

    $S^n_A(x) \equiv \mathrm{LPM}^A_{n-1}(x)/(n-1)!$

    Thus strategy A dominates strategy B in the sense of n-th order stochastic dominance – $A D_n B$ – if:

    $\mathrm{LPM}^A_{n-1}(x) \le \mathrm{LPM}^B_{n-1}(x)$ for all x, with at least one strict inequality.
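
    Here $\mathrm{LPM}_n(x) = E[\max(x - X, 0)^n]$, which is straightforward to estimate from simulated outcomes. A minimal sketch of the resulting test; the two samples below are rough stand-ins for strategies #1 and #2 (not the post’s actual simulations), so the order at which dominance first appears may differ:

    ```python
    import numpy as np

    def lpm(samples, targets, n):
        """n-th order lower partial moment E[max(t - X, 0)^n] on a grid of targets."""
        diff = targets[:, None] - samples[None, :]
        if n == 0:
            return (diff >= 0).mean(axis=1)      # LPM_0 is the distribution function
        return (np.maximum(diff, 0.0) ** n).mean(axis=1)

    def dominates(a, b, order, grid_size=200):
        """A D_n B if LPM_{n-1}^A <= LPM_{n-1}^B everywhere, with one strict inequality."""
        grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
        la, lb = lpm(a, grid, order - 1), lpm(b, grid, order - 1)
        return bool(np.all(la <= lb) and np.any(la < lb))

    rng = np.random.default_rng(5)
    s1 = rng.normal(571, 110, 10_000)            # rough stand-in for strategy #1
    s2 = rng.lognormal(np.log(648) - 0.5 * np.log(1.2),
                       np.sqrt(np.log(1.2)), 10_000)   # stand-in for #2 (CV ~ 45%)
    for n in range(1, 6):
        print(f"order {n}: #1 dominates #2 ->", dominates(s1, s2, n))
    ```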

    Now we have the necessary tools for selecting the dominant strategy of strategies #1 and #2. To see if we have 2nd order dominance, we calculate the first order lower partial moments – as shown in the graph below.

    [Figure: 1st order lower partial moments for strategies #1 and #2]

    Since the curves of the lower partial moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd order LPMs to investigate the possibility of 3rd order dominance:

    [Figure: 2nd order lower partial moments for strategies #1 and #2]

    However, it is only when we calculate the 4th order LPMs that we can conclude that strategy #1 dominates strategy #2 by 5th order stochastic dominance:

    [Figure: 4th order lower partial moments for strategies #1 and #2]

    We then have $S_1 D_5 S_2$, and we need not look further since Yamai and Yoshiba (2002) have shown that:

    if $S_1 D_n S_2$, then $S_1 D_{n+1} S_2$.

    So we end up with strategy #1 as the preferred strategy for a risk-averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160 versus 25) and a higher median value (600 versus 561). But it was not these facts alone that made strategy #1 stochastically dominant – it also has negative skewness (-0.73) against positive skewness (0.80) for strategy #2, and a lower expected value (571 versus 648) – rather, it was the ‘sum’ of all these characteristics.

    A digression

    It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk-averse firms (with $U'' < 0$), strategy #2 must be stochastically dominant for risk-seeking firms (with $U'' > 0$), but this is not necessarily the case.

    However, even if strategy #2 has a larger upside than strategy #1, the graphs of the two strategies’ upside potential ratio (Sortino et al., 1999) show that if we believe the outcome will fall below a minimal acceptable return (MAR) of 400, then strategy #1 has the higher minimum value and upside potential, and vice versa above 400.

    [Figure: upside potential ratio for strategies #1 and #2]

    Rational firms should be risk-averse below the benchmark MAR and risk-neutral above it, i.e. they should have an aversion to outcomes that fall below the MAR, while the higher the outcomes are above the MAR, the more they should like them (Fishburn, 1977). In other words, firms seek upside potential with downside protection.
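
    The upside potential ratio behind the graphs above divides the expected gain above the MAR by the downside deviation below it. A minimal sketch, reusing the stand-in samples from the dominance example:

    ```python
    import numpy as np

    def upside_potential_ratio(x, mar):
        """Expected gain above MAR divided by downside deviation below MAR."""
        upside = np.maximum(x - mar, 0.0).mean()
        downside = np.sqrt((np.maximum(mar - x, 0.0) ** 2).mean())
        return upside / downside

    rng = np.random.default_rng(5)
    s1 = rng.normal(571, 110, 10_000)            # same stand-ins as above
    s2 = rng.lognormal(np.log(648) - 0.5 * np.log(1.2),
                       np.sqrt(np.log(1.2)), 10_000)
    for mar in (300, 400, 500):
        print(f"MAR {mar}: UPR #1 = {upside_potential_ratio(s1, mar):.2f}, "
              f"#2 = {upside_potential_ratio(s2, mar):.2f}")
    ```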

    We will return later in this series to how the firm’s risk and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P.C. (1977). Mean-Risk analysis with Risk Associated with Below Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H., (2006). Stochastic Dominance. Berlin: Springer.

    von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

    Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

    Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.

  • Valuation as a strategic tool


    This entry is part 1 of 2 in the series Valuation

     

    Valuation is usually done only when selling or buying a company (see: probability of gain and loss). However, it is a versatile tool for assessing issues such as risk and strategy, both in operations and finance.

    The risk and strategy element is often not evident unless the valuation is executed as a Monte Carlo simulation giving the probability distribution for equity value (or entity value). We will in a new series of posts take a look at how this distribution can be used.

    By strategy we will in the following mean a plan of action designed to achieve a particular goal. The plan may involve issues across the finance and operations of the company: debt, equity, taxes, currency, markets, sales, production, etc. The goal is usually to move the value distribution to the right (increasing value), but it may well be to shorten the left tail – reducing risk – or to increase the upside by lengthening the right tail.

    There are a variety of definitions of risk. In general, risk can be described as “uncertainty of loss” (Denenberg, 1964), “uncertainty about loss” (Mehr & Cammack, 1961) or “uncertainty concerning loss” (Rabel, 1968). Greene defines financial risk as the “uncertainty as to the occurrence of an economic loss” (Greene, 1962).

    Risk can also be described as “measurable uncertainty”, where the probability of an outcome can be calculated (is knowable), as opposed to uncertainty, where the probability of an outcome cannot be determined (is unknowable) (Knight, 1921). Thus risk can be calculated, but uncertainty only reduced.

    In our context some uncertainty is objectively measurable, like downtime, error rates, operating rates, production time, seat factor, turnaround time, etc. For others, like sales, interest rates, inflation rates, etc., the uncertainty can only be measured subjectively.

    “[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed.” (John Maynard Keynes, 1937)

    On this basis we will proceed, using managers’ best guesses about the range of possible values and the most likely value for production-related variables, and market consensus etc. for possible outcomes of variables like inflation and interest rates. We will use this to generate appropriate distributions (log-normal) for sales, prices, etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate or do not exist, we will use rectangular (uniform) distributions.
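
    A sketch of how such input distributions can be generated; the parameter values are arbitrary illustrations:

    ```python
    import numpy as np

    rng = np.random.default_rng(2024)
    N = 10_000

    # Log-normal for e.g. sales, parametrized from a best-guess mean and CV
    mean, cv = 120.0, 0.25
    s2 = np.log(1 + cv**2)
    sales = rng.lognormal(np.log(mean) - s2 / 2, np.sqrt(s2), N)

    # Triangular for investments: (low, most likely, high) keeps the tails bounded
    investment = rng.triangular(80, 100, 140, N)

    # Rectangular (uniform) when no most-likely value can be guesstimated
    salvage_value = rng.uniform(10, 30, N)

    print(sales.mean(), investment.mean(), salvage_value.mean())
    ```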

    Benoit Mandelbrot (Mandelbrot & Hudson, 2006) and Nassim Taleb (Taleb, 2007) have rightly criticized the economics profession for overuse of the normal distribution – the bell curve. The argument is that it has too thin and short tails, and will thus underestimate the possibility of far-out extremes – that is, low-probability events with high impact (Black Swans).

    Since we use Monte Carlo simulation, we can use any distribution to represent the possible outcomes of a variable, so using the normal distribution for its statistical niceties is not necessary. We can even construct distributions that have the features we look for, without having to describe them mathematically.

    However, using normal distributions for some variables and log-normal for others etc. in a value simulation will not give you a normally or log-normally distributed equity value. A number of things can happen in the forecast period: adverse sales, interest or currency rates, incurred losses, new equity calls, etc. Together with tax, legal and IFRS rules, this makes the system non-linear and much more complex to calculate than mere addition, subtraction or multiplication of probability distributions.
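
    A toy illustration of this non-linearity: even with normal and log-normal inputs, a single kink such as taxing only positive results leaves the output distribution neither normal nor log-normal. All figures are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    N = 100_000

    revenue = rng.lognormal(np.log(100) - 0.02, 0.2, N)   # log-normal sales
    rate = rng.normal(0.05, 0.015, N)                     # normal interest rate
    pretax = revenue - 80.0 - rate * 400.0                # fixed costs, interest on debt of 400
    tax = np.where(pretax > 0, 0.28 * pretax, 0.0)        # tax only on positive results
    result = pretax - tax

    m, s = result.mean(), result.std()
    print("skewness of after-tax result:", round(float(((result - m) ** 3).mean() / s**3), 2))
    ```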

    We will in the following adhere to uncertainty and loss, where loss is an event in which the calculated equity value is less than the book value of equity or, in the case of M&A, less than the price paid.

    Assume that we have calculated the value distribution (cumulative) for two different strategies. The distribution for current operations (blue curve) has a shape showing considerable downside risk (left tail) and limited upside potential, giving a mean equity value of $92M, with a minimum of $-28M and a maximum of $150M. This span of possible outcomes, and the fact that the value can be negative, compelled the board to look for new strategies reducing downside risk.

    [Figure: cumulative value distributions for current operations and strategies #1 and #2]

    They come up with strategy #1 (green curve), which to a risk-averse board is a good proposition: it reduces downside risk by substantially shortening the left tail, increases the expected value of equity by moving the distribution to the right, and reduces the overall uncertainty by producing a more vertical curve. In numbers: the minimum value was raised to $68M, the mean value of equity was increased to $112M, and the coefficient of variation was reduced from 30% to 14%. The upside potential increased somewhat, but not much.

    To a risk-seeking board, strategy #2 (red curve) would be a better proposition: the right tail has been stretched out, giving a maximum value of $241M; however, so has the left tail, giving a minimum value of $-163M, increasing the event space and raising the coefficient of variation to 57%. The mean value of equity has been slightly reduced, to $106M.

    So how could the strategies have been brought about? Strategy #1 could involve the introduction of long-term energy contracts, taking advantage of today’s low energy costs. Strategy #2 introduces a new product with high initial investments and considerable uncertainty about market acceptance.

    As we now can see, the shape of the value distribution gives a lot of information about the company’s risk and opportunities. Given the board’s risk appetite, it should be fairly simple to select between strategies just by looking at the curves. But what if the best choice is not obvious? We will return later in this series to that question, and to how the company’s risk and opportunities can be calculated.

    References

    Denenberg, H., et al. (1964). Risk and Insurance. Englewood Cliffs, NJ: Prentice-Hall, Inc.
    Greene, M. R. (1962). Risk and Insurance. Cincinnati, OH: South-Western Publishing Co.
    Keynes, J. M. (1937). The General Theory of Employment. Quarterly Journal of Economics.
    Knight, F. H. (1921). Risk, Uncertainty and Profit. Boston, MA: Houghton Mifflin Co.
    Mandelbrot, B., & Hudson, R. (2006). The (Mis)Behavior of Markets. Cambridge: Perseus Books Group.
    Mehr, R. I., & Cammack, E. (1961). Principles of Insurance (3rd ed.). Richard D. Irwin, Inc.
    Rabel, W. H. (1968). Further comment. Journal of Risk and Insurance, 35(4), 611-612.
    Taleb, N. (2007). The Black Swan. New York: Random House.