
Tag: ERM

  • A short presentation of S@R

    A short presentation of S@R

    This entry is part 1 of 4 in the series A short presentation of S@R

     

    My general view would be that you should not take your intuitions at face value; overconfidence is a powerful source of illusions. Daniel Kahneman (“Strategic decisions: when,” 2010)

    Most companies have some sort of model describing the company’s operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data: sales, costs, interest and currency rates, etc. We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they produce.

    S@R has set out to create models (see PDF: Short presentation of S@R) that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to a holistic balance simulation that takes into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Generic Simulation_model

    Both the deterministic and the stochastic balance simulation can be set up in two different ways:

    1. by using an EBITDA model to describe the company’s operations, or
    2. by using coefficients of fabrication as direct input to the balance model.

    The first approach implies setting up a dedicated EBITDA subroutine in the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

    The use of coefficients of fabrication and their variations is a low-effort (low-cost) alternative, using the internal accounts as its basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed for the company’s economic environment (taxes, interest rates, etc.) will be the same in both alternatives.

    EBITDA_model

    In some cases we have used both approaches for the same client, using the second approach for smaller daughter companies with production structures differing from the main company’s.
    The second approach can also be considered an introduction and stepping stone to a more holistic EBITDA model.

    What problems do we solve?

    • The aim, regardless of approach, is to quantify not only the company’s individual and aggregated risks but also its potential, making the company capable of detailed planning and of taking earlier and better-targeted action against risk factors.
    • This will improve budget stability through better insight into cost-side risks and income-side potential. It is achieved by an active budget-forecast process; the control-adjustment cycle teaches the company to set more realistic budgets – with better stability and increased company value as a result.
    • Experience shows that the mere act of quantifying uncertainty throughout the company – and of describing, through modelling, the interactions and their effects on profit – in itself reduces total risk and increases profitability over time.
    • This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is to compare strategies by analysing each strategy’s risks and potential, and to select the alternative that is stochastically dominant given the company’s chosen risk profile.
    • Our aim is therefore to transform enterprise risk management from merely safeguarding enterprise value to contributing to the increase and maximization of the firm’s value within its feasible set of possibilities.

    References

    Strategic decisions: When can you trust your gut? (2010, March). McKinsey Quarterly.

  • The Value of Information

    The Value of Information

    This entry is part 4 of 4 in the series A short presentation of S@R

     

    Enterprise risk management (ERM) only has value to those who know that the future is uncertain

    Businesses have three key needs:

    First, they need to have a product or service that people will buy. They need revenues.

    Second, they need to have the ability to provide that product or service at a cost less than what their customers will pay. They need profits. Once they have revenues and profits, their business is a valuable asset.

    So third, they need to have a system to avoid losing that asset because of unforeseen adverse experience. They need risk management.

    The top CFO concern is the firm’s ability to forecast results, and the first stepping stone in forecasting results is forecasting demand – this is where ERM starts.

    The main risk any firm faces is the variability (uncertainty) of demand. Since all production activities – procurement of raw materials, sizing of the workforce, investment in machinery, etc. – are based on expected demand, the task of forecasting future demand is crucial. It is of course difficult, and in most cases impossible, to forecast demand perfectly, but it is always possible to make forecasts that give better results than mere educated guesses.

    We will attempt in the following to show the value of making good forecasts by estimating the daily probability distribution for demand. We will do this using a very simple model, assuming that:

    1. Daily demand is normally distributed with expected sales of 100 units and a standard deviation of 12 units,
    2. the product cannot be stocked,
    3. it sells at $4 per unit, has a variable production cost of $2 per unit and a fixed production cost of $50.

    Now we need to forecast the daily sales. If we had perfect information about the demand, we would have a probability distribution for daily profit as given by the red histogram and line in the graphs below.

    • One form of forecast (average) is the educated guess using the average daily sales (blue histogram). As we can see from the graphs, this forecast method gives a large downside (too high production) and no upside (too low production).
    • A better method (limited information) would be to forecast demand by its relation to some other observable variable. Let us assume that we have a forecast method that gives us a near-perfect forecast in 50% of the cases, and for the rest a forecast that is normally distributed with the same expected value as demand but with a standard deviation of six units (green histogram). A sketch of this simulation is given below.
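
    The following is a minimal sketch of the simulation behind these comparisons, assuming the reading that the ‘limited information’ forecast equals actual demand in half the cases and is otherwise drawn from N(100, 6). The variable names and the 100 000 simulated days are illustrative choices, not the original model.

```python
# Sketch: daily profit under three forecast strategies (assumptions as in the text).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                  # simulated days
price, var_cost, fixed_cost = 4.0, 2.0, 50.0
demand = rng.normal(100, 12, n)              # daily demand ~ N(100, 12)

def profit(production, demand):
    # The product cannot be stocked: sales are capped by production and by demand.
    sales = np.minimum(production, demand)
    return price * sales - var_cost * production - fixed_cost

p_perfect = profit(demand, demand)               # perfect information: produce exactly demand
p_average = profit(np.full(n, 100.0), demand)    # educated guess: always produce the average

hit = rng.random(n) < 0.5                        # 50% of days the forecast hits demand
forecast = np.where(hit, demand, rng.normal(100, 6, n))
p_limited = profit(forecast, demand)             # limited-information forecast

for name, p in [("perfect", p_perfect), ("average", p_average), ("limited", p_limited)]:
    print(f"{name:8s} mean daily profit: {p.mean():6.1f}")
print(f"limited vs. average forecast: {p_limited.mean() / p_average.mean() - 1:.1%} higher profit")
```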

    Profit-histogram

    With the knowledge we have from the post Selecting Strategy, we clearly see that the last forecast strategy is stochastically dominant over the use of average demand as forecast.

    Profit

    So, what is the value to the company of more informed forecasts than the mere use of expected sales? The graph below gives the distribution of the differences in profit (in percent) between the two methods. Over time, the second method will on average give an 8% higher profit than just using the average demand as forecast.

    Diff-in-profit

    However, there is still another seven to eight percent room for further improvement in the forecasting procedure.

    If the company could be reasonably sure that a better forecast model than the average exists, it would be a good strategy to invest in such an improvement. In fact, it could spend up to 8% of all future profit if it knew that a method as good as or better than our second method existed.

  • Credit Risk

    Credit Risk

    This entry is part 4 of 4 in the series Risk of Bankruptcy

    Other Methods

    A number of other statistical methods have also been used to predict future company failure and credit risk; see Atiya (2001), Chandra, Ravi and Bose (2009) and Bastos (2008). A recent study (Boguslauskas, Mileris, 2009) analyzed 30 scientific publications comprising 77 models:

    1. 63% used artificial neural networks (ANN)
    2. 53% used logistic regression (LR)
    3. 37% used discriminant analysis (DA)
    4. 23% used decision trees (DT), and
    5. 33% used various other methods

    The general accuracy of the different models was evaluated as the proportion of companies correctly classified (figure 3 in the article):

    Classification-error

    The box-and-whisker plot above shows that logistic regression (87%) and artificial neural networks (87%) give almost the same accuracy, while decision trees (83%) and discriminant analysis (77%) seem to be less reliable methods.

    However, from the boxes it is evident that decision trees have a much larger variance in classification accuracy than the other methods, and that artificial neural networks have the lowest variance. For logistic regression and discriminant analysis the variance is approximately the same.

    Comparing methods based on different data sets can easily be misleading. Accurate parameter estimation relies heavily on available data and their usability for that particular method.

    References

    Atiya, A. F. (2001). Bankruptcy prediction for credit risk using neural networks: a survey and new results. IEEE Transactions on Neural Networks, 12(4). Retrieved from http://ieee-cis.org/pubs/tnn/

    Bastos, J. (2008, April). Credit scoring with boosted decision trees. Retrieved from http://mpra.ub.uni-muenchen.de/8156/

    Boguslauskas, V., & Mileris, R. (2009). Estimation of credit risk by artificial neural networks models. Economics of Engineering Decisions, 4(64). Retrieved from http://internet.ktu.lt/en/science/journals/econo/inzek064.html

    Chandra, D. K., Ravi, V., & Bose, I. (2009). Failure prediction of dotcom companies using hybrid intelligent techniques. Expert Systems with Applications, (36), 4830–4837.

  • Concession Revenue Modelling and Forecasting

    Concession Revenue Modelling and Forecasting

    This entry is part 2 of 4 in the series Airports

     

    Concessions are an important source of revenue for all airports. An airport simulation model should therefore be able to give a good forecast of revenue from different types of concessions, given a small set of assumptions about future local price levels and income development for its international Pax. Since we already have a good forecast model for the expected number of international Pax (and its variation), we will attempt to forecast the airport’s revenue per Pax from one type of concession and use both forecasts to estimate the airport’s revenue from that concession.

    The theory behind this is simple: the concessionaires’ sales are a function of product price and the customers’ (Pax) income level. Some other airport-specific variables also enter the equation, but they will not be discussed here. As a proxy for the change in Pax income we will use the individual countries’ change in GDP. The price movement is represented by the corresponding movements of a price index.

    We assume that changes in the trend of the airport’s revenue are a function of changes in the general income level, and that the seasonal variance is caused by seasonal changes in the passenger mix (business/leisure travel).

    It is of course impossible to forecast the exact level of revenue, but that is, as we shall see, where Monte Carlo simulation proves its worth.

    The first step is a time series analysis of the observed revenue per Pax, decomposing the series into trend and seasonal factors:

    Concession-revenue

    The time series fit turns out to be very good, explaining more than 90% of the series’ variation. At this point, however, our only interest is the trend movement and its relation to changes in prices, income and a few other airport-specific variables. Here we will only look at income – the most important of these variables.

    Step two is a time series analysis of income (a weighted average of GDP development in the countries supplying the majority of Pax), separating trend and seasonal factors. This trend is what we are looking for; we want to use it to explain the trend movements in the revenue.

    Step three is a regression of the revenue trend on the income trend, as shown in the graph below. The revenue trend was estimated assuming a quadratic relation over time, and we can see that the fit is good. In fact, 98% of the variance in the revenue trend can be explained by the change in the income trend:

    Concession-trend
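
    A minimal sketch of steps one to three is given below, assuming monthly data, an additive decomposition and, for simplicity, a linear regression of the revenue trend on the income trend. The series names, the synthetic data and the use of statsmodels are illustrative assumptions, not the production model.

```python
# Sketch: decompose revenue per Pax and the income proxy, then regress trend on trend.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly observations standing in for the real series.
idx = pd.date_range("2004-01-01", periods=72, freq="MS")
t = np.arange(72)
revenue_per_pax = pd.Series(50 + 0.002 * t**2 + 5 * np.sin(2 * np.pi * t / 12)
                            + np.random.normal(0, 1, 72), index=idx)
income_proxy = pd.Series(100 + 0.5 * t + np.random.normal(0, 0.5, 72), index=idx)

# Steps one and two: split both series into trend and seasonal factors.
rev_dec = seasonal_decompose(revenue_per_pax, model="additive", period=12)
inc_dec = seasonal_decompose(income_proxy, model="additive", period=12)
trend = pd.concat([rev_dec.trend, inc_dec.trend], axis=1, keys=["rev", "inc"]).dropna()

# Step three: regress the revenue trend on the income trend.
fit = sm.OLS(trend["rev"], sm.add_constant(trend["inc"])).fit()
print(fit.params, f"R^2 = {fit.rsquared:.2f}", sep="\n")
```

    In the full model, new data are appended monthly and the decomposition and regression are redone, as noted at the end of this post.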

    Now the model will be as follows – step four:

    1. We will collect the central banks’ GDP forecasts (baseline scenario) and use these to forecast the most likely change in the income trend.
    2. More and more central banks now produce fan charts giving the possible event space (with probabilities) for their forecasts. We will use these to establish a probability distribution for our income proxy.

    Below is given an example of a fan chart taken from the Bank of England’s inflation report November 2009. (Bank of England, 2009) ((The fan chart depicts the probability of various outcomes for GDP growth.  It has been conditioned on the assumption that the stock of purchased assets financed by the issuance of central bank reserves reaches £200 billion and remains there throughout the forecast period.  To the left of the first vertical dashed line, the distribution reflects the likelihood of revisions to the data over the past; to the right, it reflects uncertainty over the evolution of GDP growth in the future.  If economic circumstances identical to today’s were to prevail on 100 occasions, the MPC’s best collective judgement is that the mature estimate of GDP growth would lie within the darkest central band on only 10 of those occasions.  The fan chart is constructed so that outturns are also expected to lie within each pair of the lighter green areas on 10 occasions.  In any particular quarter of the forecast period, GDP is therefore expected to lie somewhere within the fan on 90 out of 100 occasions.  The bands widen as the time horizon is extended, indicating the increasing uncertainty about outcomes.  See the box on page 39 of the November 2007 Inflation Report for a fuller description of the fan chart and what it represents.  The second dashed line is drawn at the two-year point of the projection.))

    Bilde1

    3. We will then use the relation between historic revenue and income trend to forecast the revenue trend
    4. Adding the seasonal variation, using the estimated seasonal factors, gives us a forecast of the periodic revenue.

    For our historic data the result is shown in the graph below:

    Concession-revenue-estimate

    The calculated revenue series has a very high correlation with the observed revenue series (R = 0.95), explaining approximately 90% of the series’ variation.

    Step five: now we can forecast the concession revenue per Pax for the next periods (months, quarters or years) using Monte Carlo simulation:

    1. From the income proxy distribution we draw a possible change in yearly income and calculate the new trend.
    2. Using the estimated relation between the historic revenue and income trends, we forecast the most likely revenue trend and calculate the 95% confidence interval. We then use this to establish a probability distribution for the period’s trend level and draw a value. This value is adjusted with the period’s seasonal factor and becomes our forecasted value for the airport’s revenue from the concession for this period (see the sketch below).
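
    A minimal, self-contained sketch of this simulation loop is given below. The regression coefficients, the residual spread, the seasonal factors and the income-change distribution are illustrative placeholders, not the estimates behind the graphs above or the Bank of England fan chart.

```python
# Sketch: Monte Carlo forecast of one period's concession revenue per Pax.
import numpy as np

rng = np.random.default_rng(7)
n_sim = 1_000

beta0, beta1, resid_std = 5.0, 0.6, 1.2          # revenue trend = beta0 + beta1 * income trend
seasonal = np.array([-4, -3, -1, 0, 2, 4, 6, 5, 2, 0, -3, -8], dtype=float)
last_income_trend = 130.0                        # last estimated level of the income proxy trend
income_growth = rng.normal(0.02, 0.015, n_sim)   # stands in for the GDP fan-chart distribution
month = 0                                        # forecasting the first month ahead

revenue = np.empty(n_sim)
for i in range(n_sim):
    income_trend = last_income_trend * (1 + income_growth[i])          # step 1: draw income change
    trend_level = rng.normal(beta0 + beta1 * income_trend, resid_std)  # step 2: draw trend level
    revenue[i] = trend_level + seasonal[month]                         # add the seasonal factor

print(f"mean {revenue.mean():.1f}, 5%-95% interval "
      f"[{np.percentile(revenue, 5):.1f}, {np.percentile(revenue, 95):.1f}]")
```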

    Running through this a thousand times, we get a distribution like the one given below:

    Concession-revenue-distribu

    In the airport EBITDA model this is only a small, but important, part of forecasting future airport revenue. As the model’s data are updated (monthly), all the time series analyses and regressions are redone dynamically to capture changes in trends and seasonal factors.

    The level of monthly revenue from the concession is obviously more complex than can be described with a small set of variables and assumptions. Our model very likely has specification errors, and we may or may not have violated some of the statistical methods’ assumptions (the model produces output to monitor this). But we feel that we are far better off than if we had put all our money on a single figure as a forecast. At least we know something about the forecast’s uncertainty.

    References

    Bank of England. (2009, November). Inflation Report November 2009. Retrieved from http://www.bankofengland.co.uk/publications/inflationreport/ir09nov5.ppt

  • Where do you go from risk mapping?

    Where do you go from risk mapping?

    You can’t control what you can’t measure. (DeMarco, 1982)

    Risk mapping is a much-advocated and often-used tool. Numerous articles, books, guidelines and standards have been written on the subject, and software has been developed to facilitate the process (e.g., AS/NZS 4360, 2004). It is the first stepping stone in risk management: the logical and systematic method of identifying, analyzing, treating and monitoring the risks and opportunities involved in any activity or process. Risk management is now becoming an integral part of any organization’s planning, regardless of the type of business, activity or function.

    Risk Mapping

    The risk mapping process is usually divided into seven ordered activities. The sequence can be as shown below, but the process may require revisiting earlier activities as a result of later appraisals of risky events:

    Risk-mapping-process

    The objective is to separate the acceptable risks from the unacceptable risks, and to provide data to assist in the evaluation and control of risks and opportunities.

    The Risk Events List

    The risk list is the result of the risk identification activities. It consists of a list of all risks and opportunities, grouped by an agreed-upon classification. It is put together by the risk identification group, led by the risk officer – the key person responsible for risk management. The risk list is the basis for the risk database, containing information about each project, each risk and the persons involved in risk management. The main output table is the risk register.

    Risk Register

    The Risk Register is a form containing a large set of fields for each risky event being analyzed and controlled. The form contains data about the event, its computational aspects and all risk response information. This register is the basis for a number of cross tables visualizing types of risk, likelihood, impact, response, responsibility, etc. Of these, one is of special interest to us: the risk probability and impact matrix.

    The Risk Level Matrix

    The risk level matrix is based on two tables established during the third activity in the risk mapping process; the likelihood and the impact table.

    The Likelihood table

    During the risk analysis the potential likelihood that a given risk will occur is assessed, and an appropriate risk probability is selected from the table below:

    Probability-table_risk-mapp

    The Impact Table

    At the same time the potential impact of each risk is analyzed, and an appropriate impact level is selected from the table below:

    Impact-table_risk-mapping

    The Risk Matrix

    The risk level matrix shows the combination (product) of risk impact and probability, and is utilized to decide the relative priority of risks.  Risks that fall into the upper right triangle of the matrix are the highest priority, and should receive the majority of risk management resources during response planning and risk monitoring/control.  Risks that fall on the diagonal of the matrix are the next highest priority, followed by risks that fall into the lower left triangle of the matrix:

    Risk-matrix_risk-mapping

    In practice it can look like this, with impact in four groups (the numbers refer to the risk descriptions in the risk register):

    Impact-vs-likelihood

    From the graph we can see that there are no risks with both high probability and high impact, and that we have at least four clusters of risks (centroid method). Each individual risk’s location determines the actions needed:

    risk_map2

    We can multiply impact by likelihood to calculate something like an expected effect, and use this to rank-order the risks, but this is as far as we can get with this method. A sketch of such a ranking is given below.
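
    The following is a minimal sketch of that likelihood-times-impact ranking; the register entries, scale values and field names are hypothetical placeholders, not an actual risk register.

```python
# Sketch: rank risks in a register by the product of likelihood and impact.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: int
    description: str
    likelihood: float   # e.g. 0.1 (rare) .. 0.9 (almost certain)
    impact: int         # e.g. 1 (insignificant) .. 5 (catastrophic)

    @property
    def score(self) -> float:
        # 'Expected effect': the product of likelihood and impact.
        return self.likelihood * self.impact

register = [
    RiskEntry(1, "Key supplier failure", 0.3, 4),
    RiskEntry(2, "FX exposure on exports", 0.7, 3),
    RiskEntry(3, "Paper machine breakdown", 0.5, 2),
    RiskEntry(4, "New environmental regulation", 0.1, 5),
]

for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"#{r.risk_id:>2} {r.description:<30} score = {r.score:.2f}")
```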

    However, it is a great tool for introducing risk management in any organization: it is easy to communicate, it places responsibilities, it creates awareness and, most of all, it lists all known hazards and risks that face the organization.

    But it has all the limitations of qualitative analysis. Word-form or descriptive scales are used to describe the magnitude of potential consequences and their likelihood. No relations between the risks are captured, and their individual or combined effect on the P&L and balance sheet is at best difficult to understand.

    Most risks are attributable to one or more observable variables. They can be continuous or have discrete values, but they are all stochastic variables.

    Now, even a “qualitative“ variable like political risk is measurable. Political risk is usually manifested as uncertainty about taxes, repatriation of funds, nationalization etc. Such risks can mostly be modeled and analyzed with decision-tree techniques, giving project value distributions for the different scenarios. Approaches like that give better control than just applying some general qualitative country risk measure.

    Risk Breakdown Structure (RBS)

    A first step in the direction of quantitative risk analysis can be to perform a risk breakdown analysis to orient the individual risks by source. This is usually done in descending levels, increasing the detail in the definition of the sources of risk. It will give a better, and often new, understanding of the types of risk, their dependencies, their roots and their possible covariation. (Zacharias, Panopoulos, Askounis, 2008)

    RBS can be further developed using Bayesian network techniques to describe and simulate discrete types of risk, usually types of hazard, failures or fault prediction in operations. (Fenton, Neil, 2007)

    But have we measured the risks, and what is the organization’s total risk? Is it the sum of all risks, or some average?

    You can’t measure what you can’t define. (Kagan, 1993)

    Can we really manage the risks and exploit the opportunities with the tool (risk model) we now have? A model is a way of representing some feature of reality. Models are not true or false. They are simply useful or not useful for some purpose.

    Risk mapping is – apart from its qualities as an introduction to risk management – not useful for serious corporate risk analysis. It does not define total corporate risk, nor does it measure it. Its focus on risk (hazard) also makes one forget about the opportunities, which have to be treated separately and not as what they really are: the other side of the probability distribution.

    The road ahead

    We need to move to quantitative analysis with variables that describe the operations, and where numerical values are calculated for both consequences and likelihood – combining risk and opportunity.

    This implies modeling the operations in sufficient detail to describe numerically what’s going on. In paper production this means modeling the market (demand and prices), competitor behavior (market shares and sales), fx rates for input materials and possible exports, production (wood, chemicals, recycled paper, filler, pulp, water, etc., cost, machine speeds, trim width, basis weight, total efficiency, max days of production, electricity consumption, heat cost and recovery, packaging, manning level, hazards, etc.), labor cost, distribution cost, rebates, commissions, fixed costs, maintenance and reinvestment, interest rates, taxes, etc. All of these are stochastic variables.

    These variables – their shape and location – are the basis for all the uncertainty the firm faces, whether risk or opportunity. The act of measuring their behavior and interrelationships helps improve precision and reduce uncertainty about the firm’s operations. (Hubbard, 2007)

    To us, short-term risk is about the location and shape of the EBITDA distribution for the next one to three years, and long-term risk about the location and shape of today’s equity value distribution, calculated by simulating the company’s operations over a ten- to fifteen-year horizon. Risk is then the location and left tail of the distribution, while the possible opportunities (upside) are in the right tail of the same distribution. With these distributions, all kinds of tools can be used to measure risk and opportunities.
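
    As a minimal sketch of what such tools can look like, the snippet below computes a few left- and right-tail measures from a simulated equity value distribution; the lognormal sample and the 5% tail level are illustrative assumptions standing in for real simulation output.

```python
# Sketch: simple tail measures on a simulated equity value distribution.
import numpy as np

rng = np.random.default_rng(1)
equity = rng.lognormal(mean=6.3, sigma=0.35, size=10_000)   # stand-in for simulation output

expected_value = equity.mean()
p5 = np.percentile(equity, 5)                    # left tail: 5% percentile (risk)
expected_shortfall = equity[equity <= p5].mean() # average outcome in the worst 5% of cases
p95 = np.percentile(equity, 95)                  # right tail: opportunity side

print(f"expected equity value  : {expected_value:8.1f}")
print(f"5% percentile          : {p5:8.1f}")
print(f"expected shortfall (5%): {expected_shortfall:8.1f}")
print(f"95% percentile (upside): {p95:8.1f}")
```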

    Risk mapping is in this context a little like treating a disease’s symptoms rather than the disease itself.

    References

    AS/NZS 4360:2004 http://www.saiglobal.com/shop/script/Details.asp?DocN=AS0733759041AT

    DeMarco, T. (1982). Controlling Software Projects. Englewood Cliffs: Yourdon Press.

    Fenton, N., & Neil, M. (2007, November). Managing Risk in the Modern World. Retrieved from http://www.lms.ac.uk/activities/comp_sci_com/KTR/apps_bayesian_networks.pdf

    Hubbard, D., (2007). How to Measure Anything. Chichester: John Wiley & Sons.

    Kagan, S. L. (1993). Defining, assessing and implementing readiness: Challenges and opportunities.

    Zacharias O., Panopoulos D., Askounis D.  (2008). Large Scale Program Risk Analysis Using a Risk Breakdown Structure. European Journal of Economics, Finance and Administrative Sciences, (12), 170-181.

  • Selecting Strategy

    Selecting Strategy

    This entry is part 2 of 2 in the series Valuation

     

    This is an example of how S@R can define, analyze, visualize and help in selecting strategies for a broad range of issues: financial, operational and strategic.

    Assume that we have performed (see: Corporate-risk-analysis) simulation of corporate equity value for two different strategies (A and B). The cumulative distributions are given in the figure below.

    Since the calculation is based on a full simulation of both P&L and balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can use the distributions directly as the basis for selecting the best strategy.

    cum-distr-a-and-b_strategy

    In this rather simple case, we intuitively find strategy B to be the best, lying further to the right of strategy A for all probable values of equity. However, to be able to select the best strategy from larger and more complicated sets of feasible strategies, we need a better-grounded method than mere intuition.

    The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern’s expected utility paradigm (Neumann, Morgenstern, 1953) is such a method.

    When there is no uncertainty, the maximum return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm’s expected utility.

    To specify a utility function (U) we must have a measure that uniquely identifies each strategy (business) outcome and a function that maps each outcome to its corresponding utility. However utility is purely an ordinal measure. In other words, utility can be used to establish the rank ordering of strategies, but cannot be used to determine the degree to which one is preferred over the other.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
    E_A[U(X)] ≥ E_B[U(X)], with at least one strict inequality.

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

    Assuming that U′ ≥ 0, i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted A D1 B, i.e. A dominates B by 1st-order stochastic dominance) if:

    E_A[U(X)] ≥ E_B[U(X)]  ↔  S_A(x) ≤ S_B(x)

    where S(x) is the strategy’s distribution function and there is at least one strict inequality.

    If A D1 B, then for all values x, the probability of obtaining x or a value higher than x is at least as large under A as under B.

    Sufficient rule 1:   A dominates B if Min S_A(x) ≥ Max S_B(x)   (non-overlapping distributions)

    Sufficient rule 2:   A dominates B if S_A(x) ≤ S_B(x) for all x   (S_A ‘below’ S_B)

    The most important necessary rules:

    Necessary rule 1:  A D1 B → Mean S_A > Mean S_B

    Necessary rule 2:  A D1 B → Geometric mean S_A > Geometric mean S_B

    Necessary rule 3:  A D1 B → Min S_A(x) ≥ Min S_B(x)
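
    A minimal sketch of how sufficient rule 2 and necessary rule 1 can be checked on simulated equity values is given below; the normal samples are placeholders for the simulation output behind the cumulative distributions above.

```python
# Sketch: test 1st-order stochastic dominance on two sets of simulated equity values.
import numpy as np

rng = np.random.default_rng(3)
equity_a = rng.normal(500, 150, 10_000)   # stand-in for strategy A outcomes
equity_b = rng.normal(600, 150, 10_000)   # stand-in for strategy B outcomes

def empirical_cdf(sample, grid):
    # Fraction of outcomes at or below each grid point.
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

grid = np.linspace(min(equity_a.min(), equity_b.min()),
                   max(equity_a.max(), equity_b.max()), 500)
cdf_a, cdf_b = empirical_cdf(equity_a, grid), empirical_cdf(equity_b, grid)

# Sufficient rule 2: B dominates A by 1st order if S_B(x) <= S_A(x) for all x.
print("B D1 A (sufficient rule 2):", bool(np.all(cdf_b <= cdf_a)))
print("Mean_B > Mean_A (necessary rule 1):", equity_b.mean() > equity_a.mean())
```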

    For the case above we find that strategy B dominates strategy A (B D1 A), since sufficient rule 2 for first-order dominance is satisfied:

    strategy-a-and-b_strategy1

    And of course, since one of the sufficient conditions is satisfied, all of the necessary conditions are also satisfied. So our intuition about B being the best strategy is confirmed. However, there are cases where intuition will not work:

    cum-distr_strategy

    In this case the distributions cross and there is no first order stochastic dominance:

    strategy-1-and-2_strategy

    To be able to determine the dominant strategy we have to make further assumptions about the utility function – U″ ≤ 0 (risk aversion), etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a large class of strategies. N-th order dominance is defined by the n-th order distribution function:

    S^1(x) = S(x),   S^n(x) = ∫_{−∞}^{x} S^{n−1}(u) du

    where S(x) is the strategy’s distribution function.

    Then strategy A dominates strategy B in the sense of n-th order stochastic dominance (A Dn B) if:

    S^n_A(x) ≤ S^n_B(x), with at least one strict inequality, and

    E_A[U(X)] ≥ E_B[U(X)], with at least one strict inequality,

    for all U satisfying (−1)^k U^(k) ≤ 0 for k = 1, 2, …, n.

    The last assumption implies that U has positive odd derivatives and negative even derivatives:

    U′ ≥ 0 → increasing wealth preference

    U″ ≤ 0 → risk aversion

    U‴ ≥ 0 → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

    Calculating the n-th order distribution function when you only have observations of the first order distribution from Monte Carlo simulation can be difficult. We will instead use the lower partial moments (LPM) since (Ingersoll, 1987):

    S^n_A(x) ≡ LPM^A_{n−1}(x) / (n−1)!

    Thus strategy A dominates strategy B in the sense of n-th order stochastic dominance (A Dn B) if:

    LPM^A_{n−1}(x) ≤ LPM^B_{n−1}(x) for all x, with at least one strict inequality.
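
    A minimal sketch of this LPM-based test is given below; the normal samples stand in for the simulated outcomes of strategies #1 and #2, and the target grid is an illustrative choice.

```python
# Sketch: find the lowest order of stochastic dominance using lower partial moments.
import numpy as np

rng = np.random.default_rng(11)
s1 = rng.normal(571, 110, 10_000)            # stand-in for strategy #1 outcomes
s2 = rng.normal(648, 290, 10_000)            # stand-in for strategy #2 outcomes

def lpm(sample, targets, order):
    # Lower partial moment of the given order: E[max(t - X, 0)^order] for each target t.
    shortfall = np.maximum(targets[:, None] - sample[None, :], 0.0)
    return (shortfall ** order).mean(axis=1)

targets = np.linspace(min(s1.min(), s2.min()), max(s1.max(), s2.max()), 400)

for order in range(1, 6):                    # LPM of order n-1 corresponds to n-th order dominance
    l1, l2 = lpm(s1, targets, order), lpm(s2, targets, order)
    if np.all(l1 <= l2) and np.any(l1 < l2):
        print(f"strategy #1 dominates #2 at order {order + 1}")
        break
    if np.all(l2 <= l1) and np.any(l2 < l1):
        print(f"strategy #2 dominates #1 at order {order + 1}")
        break
else:
    print("no dominance found up to order 6 on this grid")
```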

    Now we have the necessary tools for selecting the dominant strategy among strategies #1 and #2. To see if we have 2nd-order dominance, we calculate the first-order lower partial moments, as shown in the graph below.

    2nd-order_strategy

    Since the curves of the lower partial moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd-order LPMs to investigate the possibility of 3rd-order dominance:

    3rd-order_strategy

    However, it is only when we calculate the 4th-order LPMs that we can conclude that strategy #1 dominates strategy #2 by 5th-order stochastic dominance:

    5th-order_strategy

    We then have S1 D5 S2, and we need not look further, since Yamai and Yoshiba (2002) have shown that:

    if S1 Dn S2, then S1 D(n+1) S2.

    So we end up with strategy #1 as the preferred strategy for a risk-averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160 vs. 25) and a higher median value (600 vs. 561). But it was not these facts alone that made strategy #1 stochastically dominant – it also has negative skewness (-0.73) against positive skewness (0.80) for strategy #2, and a lower expected value (571) than strategy #2 (648). It was the ‘sum’ of all these characteristics.

    A digression

    It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk-averse firms (with U″ < 0), strategy #2 must be stochastically dominant for risk-seeking firms (with U″ > 0), but this is not necessarily the case.

    However, even if strategy #2 has a larger upside than strategy #1, it can be seen from the graphs of the two strategies’ upside potential ratios (Sortino, 1999):

    upside-ratio_strategy

    that if we believe the outcome will be below a minimal acceptable return (MAR) of 400, then strategy #1 has a higher minimum value and upside potential than #2, and vice versa above 400.
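
    A minimal sketch of how the upside potential ratio can be computed from simulated outcomes for different MAR levels is given below; the samples are placeholders for the strategy #1 and #2 distributions.

```python
# Sketch: upside potential ratio (expected gain above MAR over downside deviation below it).
import numpy as np

rng = np.random.default_rng(5)
s1 = rng.normal(571, 110, 10_000)   # stand-in for strategy #1 outcomes
s2 = rng.normal(648, 290, 10_000)   # stand-in for strategy #2 outcomes

def upside_potential_ratio(sample, mar):
    upside = np.maximum(sample - mar, 0.0).mean()                    # expected gain above MAR
    downside = np.sqrt((np.maximum(mar - sample, 0.0) ** 2).mean())  # downside deviation below MAR
    return upside / downside

for mar in (300, 400, 500):
    print(f"MAR = {mar}: UPR #1 = {upside_potential_ratio(s1, mar):.2f}, "
          f"UPR #2 = {upside_potential_ratio(s2, mar):.2f}")
```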

    Rational firms should be risk-averse below the benchmark MAR and risk-neutral above it, i.e., they should have an aversion to outcomes that fall below the MAR. On the other hand, the higher the outcomes are above the MAR, the more they should like them (Fishburn, 1977). In other words, firms seek upside potential with downside protection.

    We will return later in this series to how the firm’s risk and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P.C. (1977). Mean-Risk analysis with Risk Associated with Below Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H., (2006). Stochastic Dominance. Berlin: Springer.

    von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

    Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

    Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.