Upside potential ratio – Strategy @ Risk


  • Risk Appetite and the Virtues of the Board


    This entry is part 1 of 1 in the series Risk Appetite and the Virtues of the Board


    This article consists of two parts: Risk Appetite, and The Virtues of the Board (upcoming). The first part can be read as a standalone article; the second will build on concepts developed in this part.

    Risk Appetite

    Multiple sources of risk are a fact of life. Only rarely will decisions concerning various risks be neatly separable. Intuitively, even when risks are statistically independent, bearing one risk should make an agent less willing to bear another. (Kimball, 1993)

    Risk appetite – the board’s willingness to bear risk – will depend both on the degree to which it dislikes uncertainty and on the level of that uncertainty. It is also likely to shift as the board responds to emerging market and macroeconomic uncertainty and to events of financial distress.

    The following graph of the “price of risk[1]” index developed at the Bank of England shows this (Gai & Vause, 2004)[2]. The estimated series fluctuates close to the average “price of risk” most of the time, but shows sharp downward spikes in times of financial crisis. Risk appetite is apparently highly affected by exogenous shocks:

    [Figure: Estimated risk appetite, Bank of England]

    In adverse circumstances, it follows that the board and the investors will require a higher expected equity value of the firm to hold its shares – an enhanced risk premium – and that their appetite for increased risk will be low.

    Risk Management and Risk Appetite

    Despite widespread use in risk management[3] and corporate governance literature, the term ‘risk appetite’[i] lacks clarity in how it is defined and understood:

    • The degree of uncertainty that an investor is willing to accept in respect of negative changes to its business or assets. (Generic)
    • Risk appetite is the degree of risk, on a broad-based level, that a company or other entity is willing to accept in the pursuit of its goals. (COSO)
    • Risk appetite: the amount of risk that an organisation is prepared to accept, tolerate, or be exposed to at any point in time. (The Orange Book, October 2004)

    The same applies to a number of other terms describing risk and the board’s attitudes to risk, as for the term “risk tolerance”:

    • The degree of uncertainty that an investor can handle in regard to a negative change in the value of his or her portfolio.
    • An investor’s ability to handle declines in the value of his/her portfolio.
    • Capacity to accept or absorb risk.
    • The willingness of an investor to tolerate risk in making investments, etc.

    It thus comes as no surprise that risk appetite and the other terms describing risk are not understood to a level of clarity that can provide a reference point for decision making[4]. Some take the position that risk appetite can never be reduced to a sole figure or ratio, or to a single-sentence statement. However, to be able to move forward we have to try to operationalize the term in such a way that it can be:

    1. Used to weigh risk against reward, or to decide what level of risk is commensurate with a particular reward, and
    2. Measured and used to set risk level(s) that, in the board’s view, are appropriate for the firm.

    It thus defines the boundaries of the activities the board intends for the firm – both to management and to the rest of the organization – by setting limits to risk taking and defining what acceptable risk means. This can be augmented by a formal ‘risk appetite statement’ defining the types and levels of risk the organization is prepared to accept in pursuit of increased value.

    However, in view of the “price of risk” series above, such formal statements cannot be carved in stone – or they will have to contain rules for how they are to be applied in adverse circumstances – since they must be subject to change as the business and macroeconomic climate changes.

    Deloitte’s Global Risk Management Survey, 6th edition (Deloitte, 2009), found that sixty-three percent of the institutions had a formal, approved statement of their risk appetite (see the exhibit below). Roughly one quarter of the institutions said they relied on quantitatively defined statements, while about one third used both quantitative and qualitative approaches:

    [Figure: Risk appetite statements, Deloitte Global Risk Management Survey 2009]

    Using a formal ‘risk appetite statement’ is the best way for the board to communicate its visions, and the level and nature of the risks the board will consider acceptable to the firm. The statement has to be quantitatively defined, be based on some opinion of the board’s utility function, and use metrics that can fully capture all risks facing the company.

    We will in the following use the firm’s equity value as this metric, since it will capture all risks – those impacting the balance sheet, the income statement, required capital, WACC, etc.

    We will assume that the board’s utility function[5] has diminishing marginal utility for an increase in the company’s equity value. From this it follows that the board’s utility will decrease more with a loss of $1 than it will increase with a gain of $1. Thus the board is risk averse[ii].

    The upside-potential ratio

    To do this we will use the upside-potential ratio[6] (UPR), a measure of risk-adjusted returns (Sortino et al., 1999). The UPR measures the potential return on an asset relative to a preset benchmark return, per unit of downside risk. It is a special case of the more general one-sided variability ratio Phi_b:

    Phi_b(p,q)(X) := E^(1/p)[((X − b)^+)^p] / E^(1/q)[((X − b)^−)^q],

    where X is the total return, (X − b) the excess return over the benchmark b[7], and the plus and minus superscripts denote the right-sided moment (upper partial moment) of order p and the left-sided moment (lower partial moment) of order q.

    The lower partial moment[8] is a measure of the “distance[9]” between risky situations and the corresponding benchmark, when only unfavorable differences contribute to the “risk”. The upper partial moment, on the other hand, measures the “distance” between favorable situations and the benchmark.

    The Phi ratio is thus the ratio of “distances” between favorable and unfavorable events – when properly weighted (Tibiletti & Farinelli, 2002).
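    The Phi ratio can be estimated directly from Monte Carlo samples. The sketch below is a minimal illustration, assuming a normally distributed return series and an arbitrary benchmark – none of the numbers come from the article:

```python
import numpy as np

def phi_ratio(x, b, p=1.0, q=2.0):
    """One-sided variability ratio Phi_b(p,q): the 1/p-power of the upper
    partial moment of order p over the 1/q-power of the lower partial
    moment of order q, both taken relative to the benchmark b."""
    x = np.asarray(x, dtype=float)
    upm = np.mean(np.maximum(x - b, 0.0) ** p)   # E[((X - b)+)^p]
    lpm = np.mean(np.maximum(b - x, 0.0) ** q)   # E[((X - b)-)^q]
    return np.inf if lpm == 0.0 else upm ** (1.0 / p) / lpm ** (1.0 / q)

# Illustrative sample of total returns X
rng = np.random.default_rng(7)
x = rng.normal(0.05, 0.15, 100_000)

omega = phi_ratio(x, b=0.0, p=1.0, q=1.0)  # p = q = 1: the Omega index
upr = phi_ratio(x, b=0.0, p=1.0, q=2.0)    # p = 1, q = 2: the upside-potential ratio
```

    Raising the benchmark b lowers the ratio, in line with Phi being a decreasing function of b for a given asset.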

    For a fixed benchmark b, the higher Phi the more ‘profitable’ is the risky asset. Phi can therefore be used to rank risky assets. For a given asset, Phi will be a decreasing function of the benchmark b.

    The choice of values for p and q depends on the relevance given to the magnitude of the deviations from the benchmark b. The higher the values, the more emphasis is put on that tail. For p=q=1 we have the Omega index (Shadwick & Keating, 2002).

    The choice of p=1 and q=2 is assumed to fit a conservative investor, while values of p>>1 and q<<1 will be more in line with an aggressive investor (Caporin & Lisi, 2009).

    We will in the following use p=1 and q=2 for calculation of the upside-potential ratio (UPR) thus assuming that the board consists of conservative investors. For very aggressive boards other choices of p and q should be considered.

    The UPR for the firm can thus be expressed as a ratio of partial moments; that is, as the ratio of the first-order upper partial moment (UPM1)[10] and the second-order lower partial moment (LPM2) (Nawrocki, 1999; Breitmeyer, Hakenes & Pfingsten, 2001) – or the over-performance divided by the root-mean-square of under-performance – both calculated at successive points on the probability distribution for the firm’s equity value.
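    A short sketch of this calculation, using an assumed lognormal distribution as a stand-in for the firm’s simulated equity values (the parameters are illustrative only):

```python
import numpy as np

def upr(x, b):
    """Upside-potential ratio at benchmark b: UPM1 / sqrt(LPM2) -
    over-performance per unit root-mean-square of under-performance."""
    upm1 = np.mean(np.maximum(x - b, 0.0))       # first-order upper partial moment
    lpm2 = np.mean(np.maximum(b - x, 0.0) ** 2)  # second-order lower partial moment
    return np.inf if lpm2 == 0.0 else upm1 / np.sqrt(lpm2)

# Stand-in for the simulated equity-value distribution
rng = np.random.default_rng(1)
equity = rng.lognormal(mean=4.0, sigma=0.3, size=50_000)

# The UPR at successive benchmarks, from the left tail towards the right
benchmarks = np.percentile(equity, np.arange(5, 100, 5))
ratios = [upr(equity, b) for b in benchmarks]
```

    As the benchmark moves to the right, UPM1 falls and LPM2 grows, so the computed ratios decrease monotonically.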

    As we successively calculate the UPR starting at the left tail, the lower partial moment (LPM2) will increase and the upper partial moment (UPM1) decrease:

    [Figure: UPM1 and LPM2 as functions of the benchmark]

    The upside potential ratio will consequently decrease as we move from the lower left tail to the upper right tail – as shown in the figure below:

    [Figure: cumulative equity value distribution and the UPR]

    The upside potential ratio has many interesting uses; one is shown in the table below. The table gives the upside potential ratio at budgeted value, that is, the expected return above budget value per unit of downside risk – given the uncertainty the management of the individual subsidiaries have expressed. Most of the countries have budget values above expected value, exposing downward risk. Only Turkey and Denmark have a ratio larger than one – all others have larger downward risk than upward potential. The extremes are Turkey and Bulgaria.

    Country/Subsidiary    Upside Potential Ratio
    Turkey                2.38
    Denmark               1.58
    Italy                 0.77
    Serbia                0.58
    Switzerland           0.23
    Norway                0.22
    UK                    0.17
    Bulgaria              0.08

    We will in the following use five different equity distributions, each representing a different strategy for the firm. The distributions (strategies) have approximately the same mean, but exhibit increasing variance as we move to the successively darker curves. That is, an increase in the upside will also increase the possibility of a downside:

    [Figure: the five equity value distributions]

    By calculating the UPR for successive points (benchmarks) on the different probability distributions for the firm’s equity value (strategies), we can find the accompanying curves described by the UPRs in the UPR and LPM2/UPM1 space[11] (Cumova & Nawrocki, 2003):

    [Figure: UPR curves for the five strategies]

    The colors of the curves correspond to the equity value distributions shown above. We can see that the equity distribution with the longest upper and lower tails corresponds to the rightmost curve for the UPR, and that the equity distribution with the shortest tails corresponds to the left (lowest upside-potential) curve.

    In the graph below, in the LPM2/UPM1 space, the curves for the UPR’s are shown for each of the different equity value distributions (or strategies). Each will give the rate the firm will have to exchange downside risk for upside potential as we move along the curve, given the selected strategy. The circles on the curves represent points with the same value of the UPR, as we move from one distribution to another:

    [Figure: UPR curves in the LPM2/UPM1 space]

    By connecting the points with equal value of the UPR we find the iso-UPR curves: the curves that give the same value of the UPR across the strategies in the LPM2/UPM1 space:

    [Figure: iso-UPR curves]

    We have limited the number of UPR values to eight, but could of course have selected a larger number, both inside and outside the limits we have set.

    The board now has the option of selecting the strategy it finds most opportune, or the one that best fits its “disposition” to risk, by deciding the appropriate values of LPM2 and UPM1 or of the upside-potential ratio. This is what we will pursue further in the next part: “The Virtues of the Board”.

    References

    Breitmeyer, C., Hakenes, H. and Pfingsten, A., (2001). The Properties of Downside Risk Measures. Available at SSRN: http://ssrn.com/abstract=812850 or http://dx.doi.org/10.2139/ssrn.812850.

    Caporin, M. & Lisi,F. (2009). Comparing and Selecting Performance Measures for Ranking Assets. Available at SSRN: http://ssrn.com/abstract=1393163 or http://dx.doi.org/10.2139/ssrn.1393163

    CRMPG III. (2008). The Report of the CRMPG III – Containing Systemic Risk: The Road to Reform. Counterparty Risk Management Policy Group. Available at: http://www.crmpolicygroup.org/index.html

    Cumova, D. & Nawrocki, D. (2003). Portfolio Optimization in an Upside Potential and Downside Risk Framework. Available at: http://www90.homepage.villanova.edu/michael.pagano/DN%20upm%20lpm%20measures.pdf

    Deloitte. (2009). Global Risk Management Survey: Risk management in the spotlight. Deloitte, Item #9067. Available at: http://www.deloitte.com/assets/Dcom-UnitedStates/Local%20Assets/Documents/us_fsi_GlobalRskMgmtSrvy_June09.pdf

    Ekern, S. (1980). Increasing N-th degree risk. Economics Letters, 6: 329-333.

    Gai, P.  & Vause, N. (2004), Risk appetite: concept and measurement. Financial Stability Review, Bank of England. Available at: http://www.bankofengland.co.uk/publications/Documents/fsr/2004/fsr17art12.pdf

    Illing, M., & Aaron, M. (2005). A brief survey of risk-appetite indexes. Bank of Canada, Financial System Review, 37-43.

    Kimball, M.S. (1993). Standard risk aversion.  Econometrica 61, 589-611.

    Menezes, C., Geiss, C., & Tressler, J. (1980). Increasing downside risk. American Economic Review 70: 921-932.

    Nawrocki, D. N. (1999), A Brief History of Downside Risk Measures, The Journal of Investing, Vol. 8, No. 3: pp. 9-

    Sortino, F. A., van der Meer, R., & Plantinga, A. (1999). The upside potential ratio. The Journal of Performance Measurement, 4(1), 10-15.

    Shadwick, W. and Keating, C., (2002). A universal performance measure, J. Performance Measurement. pp. 59–84.

    Tibiletti, L. &  Farinelli, S.,(2002). Sharpe Thinking with Asymmetrical Preferences. Available at SSRN: http://ssrn.com/abstract=338380 or http://dx.doi.org/10.2139/ssrn.338380

    Unser, M., (2000), Lower partial moments as measures of perceived risk: An experimental study, Journal of Economic Psychology, Elsevier, vol. 21(3): 253-280.

    Viole, F. & Nawrocki, D. N., (2010). The Utility of Wealth in an Upper and Lower Partial Moment Fabric. Forthcoming, Journal of Investing 2011. Available at SSRN: http://ssrn.com/abstract=1543603

    Notes

    [1] In the graph, risk appetite is found as the inverse of the market’s price of risk, estimated from two probability density functions over future returns – one risk-neutral distribution and one subjective distribution – on the S&P 500 index.

    [2] For a good overview of risk appetite indexes, see “A brief survey of risk-appetite indexes”. (Illing & Aaron, 2005)

    [3] Risk management: all the processes involved in identifying, assessing and judging risks, assigning ownership, taking actions to mitigate or anticipate them, and monitoring and reviewing progress.

    [4] The Policy Group recommends that each institution ensure that the risk tolerance of the firm is established or approved by the highest levels of management and shared with the board. The Policy Group further recommends that each institution ensure that periodic exercises aimed at estimation of risk tolerance should be shared with the highest levels of management, the board of directors and the institution’s primary supervisor in line with Core Precept III. Recommendation IV-2b (CRMPG III, 2008).

    For an extensive list of Risk Tolerance articles, see: http://www.planipedia.org/index.php/Risk_Tolerance_(Research_Category)

    [5] See: http://en.wikipedia.org/wiki/Utility, http://en.wikipedia.org/wiki/Ordinal_utility and http://en.wikipedia.org/wiki/Expected_utility_theory.

    [6] The ratio was created by Brian M. Rom in 1986 as an element of Investment Technologies’ Post-Modern Portfolio theory portfolio optimization software.

    [7] ‘b’ is usually the target or required rate of return for the strategy under consideration, (‘b’ was originally known as the minimum acceptable return, or MAR). We will in the following calculate the UPR for successive benchmarks (points) covering the complete probability distribution for the firm’s equity value.

    [8] The Lower partial moments will uniquely determine the probability distribution.

    [9] The use of the term distance is not unwarranted; the Phi ratio is very similar to the ratio of two Minkowski distances of order p and q.

    [10] The upper partial-moment is equivalent to the full moment minus the lower partial-moment.

    [11] Since we don’t know the closed form of the equity distributions (strategies), the figures above have been calculated from a limited, but large, number of partial moments.

    Endnotes

    [i] Even if they are not the same, the terms ‘‘risk appetite’’ and ‘‘risk aversion’’ are often used interchangeably. Note that the statement: “increasing risk appetite means declining risk aversion; decreasing risk appetite indicates increasing risk aversion” is not necessarily true.

    [ii] In the following we assume that the board is non-satiated and risk-averse, and has a non-decreasing and concave utility function – U(C) – with derivatives at least up to degree five and of alternating signs, i.e. having all odd derivatives positive and all even derivatives negative. This is satisfied by most utility functions commonly used in mathematical economics, including all completely monotone utility functions such as the logarithmic, exponential and power utility functions.

    More generally, a decision maker can be said to be nth-degree risk averse if sign(u^(n)) = (−1)^(n+1) (Ekern, 1980).
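    This sign condition is easy to check numerically for, say, logarithmic utility. The sketch below approximates the first five derivatives of ln(c) by forward finite differences; the evaluation point c = 2 and the step size are arbitrary choices:

```python
import math

def nth_derivative_sign(f, x, n, h=0.05):
    """Sign of the n-th derivative of f at x, using an n-th order
    forward finite difference (adequate here for n up to 5)."""
    vals = [f(x + k * h) for k in range(n + 1)]
    for _ in range(n):  # difference the sampled values n times
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return 1 if vals[0] > 0 else -1

# U(c) = ln(c): all odd derivatives positive, all even derivatives negative,
# i.e. sign(U^(n)) = (-1)^(n+1) for n = 1..5
signs = [nth_derivative_sign(math.log, 2.0, n) for n in range(1, 6)]
```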

     

  • Budgeting Revisited


    This entry is part 2 of 2 in the series Budgeting

     

    Introduction

    Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables – sales, prices, wages, downtime, error rates, exchange rates, etc. – variables that describe the nature of the business.

    Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described in this way, and even more seldom is uncertainty actually calculated as an integrated part of the budget.

    Good budgeting practices are structured to minimize errors and inconsistencies, drawing in all the necessary participants to contribute their business experience and the perspective of each department. Best practice in budgeting entails a mixture of top-down guidelines and standards, combined with bottom-up individual knowledge and experience.

    Excel, the de facto budgeting tool, is a powerful personal productivity application. Its current capabilities, however, are often inadequate to support the critical nature of budgeting and forecasting. There will come a point when a company’s reliance on spreadsheets for budgeting leads to severely ineffective decision-making, lost productivity and lost opportunities.

    Spreadsheets can accommodate many tasks – but, over time, some of the models running in Excel may grow too big for the spreadsheet application. Programming a spreadsheet model often requires embedded assumptions and complex macros, creating opportunities for formula errors and broken links between workbooks.

    It is common for spreadsheet budget models and their intricacies to be known and maintained by a single person who becomes a vulnerability point with no backup. And there are other maintenance and usage issues:

    A.    Spreadsheet budget models are difficult to distribute and even more difficult to collect and consolidate.
    B.    Data confidentiality is almost impossible to maintain in spreadsheets, which are not designed to hide or expose data based upon each user’s role.
    C.    Financial statements are usually not fully integrated leaving little basis for decision making.

    These are serious drawbacks for corporate governance and make the audit process more difficult.

    These are a few of the many reasons why we use a dedicated simulation language for our models – one that specifically does not mix data and code.

    The budget model

    In practice budgeting can be performed on different levels:
    1.    Cash Flow
    2.    EBITDA
    3.    EBIT
    4.    Profit or
    5.    Company value.

    The most efficient level is EBITDA, since taxes, depreciation and amortization are mostly given in the short term. This is also the level where consolidation of daughter companies is most easily achieved. An EBITDA model describing the firm’s operations can in turn be used as a subroutine for more detailed and encompassing analysis through P&L and balance simulation.

    The aim will then be to estimate the firm’s equity value and its probability distribution. This can in turn be used for strategy selection etc.

    Forecasting

    In today’s fast-moving and highly uncertain markets, forecasting has become the single most important element of the budget process.

    Forecasting, or predictive analytics, can best be described as statistical modeling enabling the prediction of future events or results, using present and past information and data.

    1. Forecasts must integrate both external and internal cost and value drivers of the business.
    2. Absolute forecast accuracy (i.e. small confidence intervals) is less important than the insight about how current decisions and likely future events will interact to form the result.
    3. Detail does not equal accuracy with respect to forecasts.
    4. The forecast is often less important than the assumptions and variables that underpin it – those are the things that should be traced to provide advance warning.
    5. Never rely on single-point or scenario forecasting.

    All uncertainty about market sizes, market shares, costs and prices, interest rates, exchange rates, taxes etc. – and their correlations – will finally end up contributing to the uncertainty in the firm’s budget forecasts.

    The EBITDA model

    The EBITDA model has to be detailed enough to capture all important cost and value drivers, but simple enough to be easy to update with new data and assumptions.

    Input to the model can come from different sources: any internal reporting system or spreadsheet. The easiest way to communicate with the model is by using Excel spreadsheet templates.

    Such templates will be pre-defined, in the sense that the information the model needs is in a pre-determined place in the workbook. This makes it easy if the budgets for daughter companies are reported (and consolidated) in a common system (e.g. SAP) and can be ‘dumped’ onto an Excel spreadsheet. If the budgets are communicated directly to the head office or the mother company, they can be read directly by the model.

    Standalone models and dedicated subroutines

    We usually construct our EBITDA models so that they can be used both as standalone models and as subroutines for balance simulation. The model can then be used for short-term budgeting, for long-term EBITDA forecasting and simulation, and for short- and long-term balance forecasting and simulation. This means that the same model can be efficiently reused in different contexts.

    Rolling budgets and forecasts

    The EBITDA model can be constructed to give rolling forecasts based on updated monthly or quarterly values, taking into consideration the seasonality of the operations. This will give new forecasts (a new budget) for the remainder of the year and/or the next twelve months. By forecasts we again mean the probability distributions for the budget variables.

    Even if the variables have not changed, the fact that we move towards the end of the year will reduce the uncertainty of the end-year results, and also of the forecast for the next twelve months.

    Uncertainty

    The most important part of budgeting with Monte Carlo simulation is assessment of the uncertainty in the budgeted (forecasted) cost and value drivers. This uncertainty is given as the most likely value (usually the budget figure) and the interval where it is assessed with a high degree of confidence (approx. 95%) to fall.

    We will then use these lower and upper limits (5% and 95%) for sales, prices and other budget items, together with the budget values, as indicators of the shape of the probability distributions for the individual budget items. Together they describe the range and uncertainty in the EBITDA forecasts.

    This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – by a large number of runs of the model, usually 1000 – for net revenue, operating expenses and finally EBITDA. This in turn will give us their probability distributions.
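    A minimal sketch of such a simulation, with three hypothetical budget items given as (5% limit, most likely value, 95% limit). As a simplification the assessed limits are treated here as the bounds of triangular distributions, rather than as percentiles of fitted distributions:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000  # number of simulation runs, as in the text

# Hypothetical (low, most likely / budget, high) assessments per item
sales = rng.triangular(90.0, 100.0, 120.0, N)   # units sold
price = rng.triangular(9.0, 10.0, 10.5, N)      # price per unit
opex = rng.triangular(700.0, 750.0, 850.0, N)   # operating expenses

ebitda = sales * price - opex  # net revenue minus operating expenses, per run

expected_ebitda = ebitda.mean()     # probability-weighted average
p_negative = np.mean(ebitda < 0.0)  # probability of a negative EBITDA
```

    Sorting the simulated EBITDA values and accumulating their frequencies gives exactly the s-shaped cumulative curve discussed in the next section.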

    Most managers and their staff have, based on experience, a good grasp of the range in which the values of their variables will fall. It is not based on any precise computation, but is a reasonable assessment by knowledgeable persons. Selecting the budget value, however, is more difficult. Should it be the “mean” or the “most likely value”, or should the manager just delegate the fixing of the values to the responsible departments?

    Now we know that the budget values might be biased for a number of reasons – most simply by bonus schemes etc. – and that budgets based on average assumptions are wrong on average.

    This is therefore where the individual manager’s intent and culture will be manifested, and it is here that the greatest learning effect for both the managers and the mother company will be found, as under-budgeting and overconfidence will stand out as excessively large deviations from the model-calculated expected value (the probability-weighted average over the interval).

    Output

    The output from the Monte Carlo simulation will be in the form of graphs that put all runs in the simulation together to form the cumulative distribution for the operating expenses (red line):

    In the figure we have computed the frequencies of observed (simulated) values for operating expenses (blue frequency plot) – the x-axis gives the operating expenses and the left y-axis the frequency. By summing up from left to right we can compute the cumulative probability curve. The s-shaped curve (red) gives, for every point, the probability (on the right y-axis) of having operating expenses less than the corresponding point on the x-axis. The shape of this curve and its range on the x-axis give us the uncertainty in the forecasts.

    A steep curve indicates little uncertainty and a flat curve indicates greater uncertainty.  The curve is calculated from the uncertainties reported in the reporting package or templates.

    Large uncertainties in the reported variables will contribute to the overall uncertainty in the EBITDA forecast and thus to a flatter curve and contrariwise. If the reported uncertainty in sales and prices has a marked downside and the costs a marked upside the resulting EBITDA distribution might very well have a portion on the negative side on the x-axis – that is, with some probability the EBITDA might end up negative.

    In the figure below the lines give the expected EBITDA and the budget value. The expected EBITDA can be found by drawing a horizontal line from the 0.5 (50%) point on the y-axis to the curve, and a vertical line from this point on the curve to the x-axis. This point gives us the expected EBITDA value (strictly speaking, the median) – the point where there is a 50% probability of having a value of EBITDA below it, and 100% − 50% = 50% of having it above.

    The second set of lines gives the budget figure and the probability that the result will end up lower than budget. In this case there is almost a 100% probability that it will be much lower than the management has expected.
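    From the simulated runs, the probability of ending up below any given figure is just an empirical frequency – the height of the cumulative (red) curve at that point. A small sketch with an assumed distribution and an over-optimistic budget figure:

```python
import numpy as np

def prob_below(simulated, value):
    """Empirical probability that the outcome ends up below `value`,
    i.e. the height of the cumulative curve at `value`."""
    return float(np.mean(np.asarray(simulated) < value))

# Stand-in for the simulated EBITDA runs; the budget is set far above the mean
rng = np.random.default_rng(0)
ebitda_runs = rng.normal(200.0, 40.0, 10_000)
budget = 320.0

p_miss = prob_below(ebitda_runs, budget)  # close to 1: budget almost surely missed
```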

    This distribution’s location on the EBITDA axis (x-axis), and its shape, give a large amount of information about what we can expect of possible results and their probability.

    The following figure, which gives the EBIT distributions for a number of subsidiaries, exemplifies this. One will most probably never earn money (grey), three are cash cows (blue, green and brown) and the last (red) can earn a lot of money:

    Budget revisions and follow up

    Normally – if something extraordinary does not happen – we would expect both the budget and the actual EBITDA to fall somewhere in the region of the expected value. We have, however, to expect some deviation from both budget and expected value due to the nature of the industry. Bearing in mind the possibility of unanticipated events, or events “outside” the subsidiary’s budget responsibilities but affecting the outcome, this implies that:

    • Having the actual result deviate from budget is not necessarily a sign of bad budgeting.
    • Having the result close to or on budget is not necessarily a sign of good budgeting.

    However:

    • Large deviations between budget and actual result need looking into – especially if the deviation from expected value is also large.
    • Large deviations between budget and expected value can imply either that the limits are set “wrong”, or that the budget EBITDA does not reflect the downside risk or upside opportunity expressed by the limits.

    Another way of looking at the distributions is through the probability of having the actual result below budget – that is, how far off the budget ended up. In the graph below, country #1’s budget came out with a 72% probability of having the actual result below budget. It turned out that the actual figure would have been lower with only 36% probability. The length of the bars thus indicates the budget discrepancies.

    For country #2 it is the other way around: the probability of having had a result lower than the final result is 88%, while the budgeted figure had a 63% probability of being too low. In this case the market was seriously misjudged.

    In the following we have measured the deviation of the actual result both from the budget values and from the expected values. In the figures the left axis gives the deviation from expected value and the bottom axis the deviation from budget value.

    1. If the deviation for a country falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the country is overachieving.
    2. If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the country is underachieving.
    3. If the deviation falls in the upper left quadrant, the deviation is negative for budget and positive for expected value – and the country is overachieving, but has had too high a budget.

    With a left-skewed EBITDA distribution there should not be any observations in the lower right quadrant; that will only happen when the distribution is skewed to the right – and then there will not be any observations in the upper left quadrant:

    As the managers get more experienced in assessing the uncertainty they face, we see that the budget figures are more in line with the expected values, and that the intervals given are shorter and better oriented.

    If the budget is in line with the expected value given the described uncertainty, the upside potential ratio should be approximately one. A higher value should indicate a potential for higher EBITDA, and vice versa. Using this measure we can numerically describe the management’s budgeting behavior:

    Rolling budgets

    If the model is set up to give rolling forecasts of the budget EBITDA as new – in this case monthly – data are received, we will get successive forecasts as in the figure below:

    As data for a new month are received, the curve gets steeper, since the uncertainty is reduced. From the squares on the lines indicating expected value, we see that the value is moving slowly to the right, towards higher EBITDA values.

    We can of course also use this for long term forecasting as in the figure below:

    As should now be evident, the EBITDA Monte Carlo model has multiple fields of use, and all of them will increase the management’s possibilities of control and foresight, giving ample opportunity for prudent planning for the future.


  • Selecting Strategy


    This entry is part 2 of 2 in the series Valuation

     

    This is an example of how S&R can define, analyze, visualize and help in selecting strategies, for a broad range of issues; financial, operational and strategic.

    Assume that we have performed (see: Corporate-risk-analysis) simulation of corporate equity value for two different strategies (A and B). The cumulative distributions are given in the figure below.

    Since the calculation is based on a full simulation of both P&L and balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can use the distributions directly as a basis for selecting the best strategy.

    cum-distr-a-and-b_strategy

    In this rather simple case we intuitively find strategy B to be the best, lying to the right of strategy A for all probable values of equity. However, to be able to select the best strategy from larger and more complicated sets of feasible strategies, we need a better-grounded method than mere intuition.

    The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern’s expected utility paradigm (von Neumann & Morgenstern, 1953), is such a method.

    When there is no uncertainty, the maximum return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm’s expected utility.

    To specify a utility function (U) we must have a measure that uniquely identifies each strategy (business) outcome and a function that maps each outcome to its corresponding utility. However, utility is purely an ordinal measure. In other words, utility can be used to establish the rank ordering of strategies, but cannot be used to determine the degree to which one is preferred over the other.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
    E_A U(X) ≥ E_B U(X), with at least one strict inequality.

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

    Assuming that U’ ≥ 0, i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted A D1 B, i.e. A dominates B by 1st order stochastic dominance) if:

    E_A U(X) ≥ E_B U(X)  ↔  S_A(x) ≤ S_B(x)

    where S(x) is the strategy’s distribution function and there is at least one strict inequality.

    If A D1 B, then for all values x, the probability of obtaining x or a value higher than x is larger under A than under B.

    Sufficient rule 1:   A dominates B if Min S_A(x) ≥ Max S_B(x)   (non-overlapping distributions)

    Sufficient rule 2:   A dominates B if S_A(x) ≤ S_B(x) for all x   (S_A ‘below’ S_B)

    The most important necessary rules:

    Necessary rule 1:  A D1 B → Mean S_A > Mean S_B

    Necessary rule 2:  A D1 B → Geometric mean S_A > Geometric mean S_B

    Necessary rule 3:  A D1 B → Min S_A(x) ≥ Min S_B(x)
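As a hedged sketch, sufficient rule 2 can be tested directly on Monte Carlo output by comparing the empirical distribution functions of the two samples. The function name and sample data are illustrative, not from the original simulation:

```python
from bisect import bisect_right
import random

def first_order_dominates(a, b):
    """Sufficient rule 2: A D1 B if the empirical CDF of A lies at or
    below that of B everywhere, and strictly below somewhere."""
    sa, sb = sorted(a), sorted(b)
    grid = sorted(set(sa) | set(sb))

    def ecdf(sorted_sample, x):
        # Fraction of observations less than or equal to x
        return bisect_right(sorted_sample, x) / len(sorted_sample)

    ca = [ecdf(sa, x) for x in grid]
    cb = [ecdf(sb, x) for x in grid]
    return all(p <= q for p, q in zip(ca, cb)) and any(p < q for p, q in zip(ca, cb))

# Illustrative samples: B is A shifted to the right, as in the example above
random.seed(1)
a = [random.gauss(100.0, 10.0) for _ in range(10_000)]
b = [v + 30.0 for v in a]

print(first_order_dominates(b, a))  # True: B dominates A
print(first_order_dominates(a, b))  # False
```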

    For the case above we find that strategy B dominates strategy A – B D1 A – since sufficient rule 2 for first order dominance is satisfied:

    strategy-a-and-b_strategy1

    And of course, since one of the sufficient conditions is satisfied, all of the necessary conditions are satisfied. So our intuition about B being the best strategy is confirmed. However, there are cases where intuition will not work:

    cum-distr_strategy

    In this case the distributions cross and there is no first order stochastic dominance:

    strategy-1-and-2_strategy

    To be able to determine the dominant strategy we have to make further assumptions about the utility function – U” ≤ 0 (risk aversion) etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a large class of strategies. N-th order dominance is defined by the n-th order distribution function:

    S^1(x)=S(x),  S^n(x)=int{-infty}{x}{S^(n-1)(u) du}

    where S(x) is the strategy’s distribution function.

    Then strategy A dominates strategy B in the sense of n-th order stochastic dominance – A Dn B – if:

    S^n_A(x) ≤ S^n_B(x), with at least one strict inequality, and

    E_A U(X) ≥ E_B U(X), with at least one strict inequality,

    for all U satisfying (−1)^k U^(k) ≤ 0 for k = 1, 2, …, n.

    The last assumption implies that U has non-negative odd derivatives and non-positive even derivatives:

    U’  ≥0 → increasing wealth preference

    U”  ≤0 → risk aversion

    U’’’ ≥0 → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

    Calculating the n-th order distribution function when you only have observations of the first order distribution from a Monte Carlo simulation can be difficult. We will instead use lower partial moments (LPM), since (Ingersoll, 1987):

    S^n_A(x) ≡ LPM^{n−1}_A(x) / (n−1)!

    Thus strategy A dominates strategy B in the sense of n-th order stochastic dominance – A Dn B – if:

    LPM^{n−1}_A(x) ≤ LPM^{n−1}_B(x) for all x, with at least one strict inequality.
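This LPM test can be sketched for simulated outcomes as below. The function names and the tiny example samples are illustrative assumptions; in practice the grid would be the pooled simulated values:

```python
def lpm(sample, x, order):
    """Lower partial moment of degree `order` at threshold x:
    LPM_n(x) = E[max(x - X, 0)^n]; degree 0 is the empirical CDF."""
    if order == 0:
        return sum(1 for v in sample if v <= x) / len(sample)
    return sum(max(x - v, 0.0) ** order for v in sample) / len(sample)

def dominates_nth_order(a, b, n, grid):
    """A Dn B if LPM^{n-1}_A(x) <= LPM^{n-1}_B(x) on the whole grid,
    with strict inequality for at least one x."""
    la = [lpm(a, x, n - 1) for x in grid]
    lb = [lpm(b, x, n - 1) for x in grid]
    return all(p <= q for p, q in zip(la, lb)) and any(p < q for p, q in zip(la, lb))

# Tiny illustrative samples: same mean, but `safe` is less spread out
safe, spread = [90, 100, 110], [60, 100, 140]
grid = sorted(set(safe) | set(spread))

print(dominates_nth_order(safe, spread, 1, grid))  # False: the CDFs cross
print(dominates_nth_order(safe, spread, 2, grid))  # True: 2nd order dominance
```

The example mirrors the crossing-distributions case: no first order dominance, but the less dispersed strategy dominates at second order.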

    Now we have the necessary tools for selecting the dominant strategy of strategies #1 and #2. To see if we have 2nd order dominance, we calculate the first order lower partial moments – as shown in the graph below.

    2nd-order_strategy

    Since the curves of the lower partial moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd order LPMs to investigate the possibility of 3rd order dominance:

    3rd-order_strategy

    However, it is only when we calculate the 4th order LPMs that we can conclude that strategy #1 dominates strategy #2 by 5th order stochastic dominance:

    5th-order_strategy

    We then have S1 D5 S2, and we need not look further, since Yamai and Yoshiba (2002) have shown that:

    if S1 Dn S2, then S1 Dn+1 S2.

    So we end up with strategy #1 as the preferred strategy for a risk averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160 vs. 25) and a higher median value (600 vs. 561). But it was not these facts alone that made strategy #1 stochastically dominant – it also has negative skewness (-0.73) against positive skewness (0.80) for strategy #2, and a lower expected value (571) than strategy #2 (648). It was the ‘sum’ of all these characteristics.

    A digression

    It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk averse firms (with U” < 0), strategy #2 must be stochastically dominant for risk seeking firms (with U” > 0), but this is not necessarily the case.

    However, even if strategy #2 has a larger upside than strategy #1, it can be seen from the graphs of the two strategies’ upside potential ratios (Sortino, 1999):

    upside-ratio_strategy

    that if we believe that the outcome will be below a minimal acceptable return (MAR) of 400, then strategy #1 has a higher minimum value and upside potential than #2, and vice versa above 400.

    Rational firms should be risk averse below the benchmark MAR and risk neutral above it, i.e. they should have an aversion to outcomes that fall below the MAR, while the further the outcomes are above the MAR, the more they should like them (Fishburn, 1977). In other words, firms seek upside potential with downside protection.
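This preference structure can be illustrated with a Fishburn-style mean-risk utility: linear (risk neutral) above the MAR and penalised for shortfalls below it. The penalty weight and exponent below are illustrative assumptions, not values from the source:

```python
def fishburn_utility(x, mar=400.0, penalty=2.0, alpha=2.0):
    """Piecewise utility in the spirit of Fishburn (1977): risk
    neutral above the MAR, averse to shortfalls below it.
    `penalty` and `alpha` are illustrative parameters."""
    shortfall = max(mar - x, 0.0)
    return x - penalty * shortfall ** alpha

# Outcomes above the MAR are valued linearly ...
print(fishburn_utility(500.0))  # 500.0
# ... while shortfalls below the MAR are punished progressively
print(fishburn_utility(390.0))  # 390 - 2 * 10^2 = 190.0
```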

    We will return later in this series to how the firm’s risks and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P. C. (1977). Mean-Risk Analysis with Risk Associated with Below-Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H., (2006). Stochastic Dominance. Berlin: Springer.

    von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

    Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

    Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.