
Author: S@R

  • Concession Revenue Modelling and Forecasting

    Concession Revenue Modelling and Forecasting

    This entry is part 2 of 4 in the series Airports

     

Concessions are an important source of revenue for all airports. An airport simulation model should therefore be able to give a good forecast of revenue from different types of concessions, given a small set of assumptions about future local price levels and income development for its international Pax. Since we already have a good forecast model for the expected number of international Pax (and its variation), we will attempt to forecast the airport's revenue per Pax from one type of concession and use both forecasts to estimate the airport's revenue from that concession.

The theory behind it is simple: the concessionaire's sales are a function of product price and the customers' (Pax) income level. Some other airport-specific variables also enter the equation, but they will not be discussed here. As a proxy for change in Pax income we will use the individual countries' change in GDP. The price movement is represented by the corresponding movements of a price index.

We assume that changes in the trend of the airport's revenue are a function of changes in the general income level, and that the seasonal variance is caused by seasonal changes in the passenger mix (business/leisure travel).

It is of course impossible to forecast the exact level of revenue, but that, as we shall see, is where Monte Carlo simulation proves its worth.

The first step is a time series analysis of the observed revenue per Pax, decomposing the series into trend and seasonal factors:

[Figure: Concession revenue]

The time series fit turns out to be very good, explaining more than 90% of the series' variation. At this point, however, our only interest is the trend movement and its relation to changes in prices, income and a few other airport-specific variables. Here we will only look at income, the most important of these variables.

Step two is a time series analysis of income (a weighted average of GDP development in the countries contributing the majority of Pax), separating trend and seasonal factors. This trend is what we are looking for; we want to use it to explain the trend movements in the revenue.

Step three is a regression of the revenue trend on the income trend, as shown in the graph below. The revenue trend was estimated assuming a quadratic relation over time, and we can see that the fit is good. In fact, 98% of the variance in the revenue trend can be explained by the (positive) change in the income trend:

[Figure: Concession trend]

    Now the model will be as follows – step four:

1. We will collect the central banks' GDP forecasts (baseline scenario) and use them to forecast the most likely change in the income trend.
2. More and more central banks now produce fan charts giving the possible event space (with probabilities) for their forecasts. We will use these to establish a probability distribution for our income proxy.

    Below is given an example of a fan chart taken from the Bank of England’s inflation report November 2009. (Bank of England, 2009) ((The fan chart depicts the probability of various outcomes for GDP growth.  It has been conditioned on the assumption that the stock of purchased assets financed by the issuance of central bank reserves reaches £200 billion and remains there throughout the forecast period.  To the left of the first vertical dashed line, the distribution reflects the likelihood of revisions to the data over the past; to the right, it reflects uncertainty over the evolution of GDP growth in the future.  If economic circumstances identical to today’s were to prevail on 100 occasions, the MPC’s best collective judgement is that the mature estimate of GDP growth would lie within the darkest central band on only 10 of those occasions.  The fan chart is constructed so that outturns are also expected to lie within each pair of the lighter green areas on 10 occasions.  In any particular quarter of the forecast period, GDP is therefore expected to lie somewhere within the fan on 90 out of 100 occasions.  The bands widen as the time horizon is extended, indicating the increasing uncertainty about outcomes.  See the box on page 39 of the November 2007 Inflation Report for a fuller description of the fan chart and what it represents.  The second dashed line is drawn at the two-year point of the projection.))

[Figure: Bank of England GDP fan chart, Inflation Report November 2009]

3. We will then use the relation between the historic revenue and income trends to forecast the revenue trend.
4. Adding the seasonal variation, using the estimated seasonal factors, gives us a forecast of the periodic revenue (see the sketch below).
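A minimal sketch of steps one to four, assuming the historic revenue per Pax, the income proxy and the income forecast are available as monthly pandas Series. The names, the multiplicative decomposition and the simple linear trend regression are simplifying assumptions for illustration, not the exact model used above:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

def forecast_revenue_per_pax(revenue, income_proxy, income_forecast):
    """revenue, income_proxy: historic monthly pd.Series (DatetimeIndex);
    income_forecast: pd.Series of forecasted income-proxy levels."""
    # Steps 1-2: decompose both series into trend and seasonal factors
    rev = seasonal_decompose(revenue, model="multiplicative", period=12)
    inc = seasonal_decompose(income_proxy, model="multiplicative", period=12)

    # Step 3: regress the revenue trend on the income trend (here a simple linear fit)
    df = pd.concat([rev.trend, inc.trend], axis=1, keys=["rev", "inc"]).dropna()
    slope, intercept = np.polyfit(df["inc"], df["rev"], deg=1)

    # Step 4: apply the fitted relation to the income forecast and
    # re-apply the estimated seasonal factors to get the periodic revenue
    trend_fc = intercept + slope * income_forecast.to_numpy()
    seasonal = rev.seasonal.groupby(rev.seasonal.index.month).mean()
    factors = income_forecast.index.month.map(seasonal).to_numpy()
    return pd.Series(trend_fc * factors, index=income_forecast.index)
```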

    For our historic data the result is shown in the graph below:

[Figure: Concession revenue estimate]

The calculated revenue series has a very high correlation with the observed revenue series (R = 0.95), explaining approximately 90% of the series' variation.

Step five: we can now forecast the revenue from the concession per Pax for the next periods (months, quarters or years), using Monte Carlo simulation:

1. From the income proxy distribution we draw a possible change in yearly income and calculate the new trend.
2. Using the estimated relation between the historic revenue and income trends, we forecast the most likely revenue trend and calculate the 95% confidence interval. We then use this to establish a probability distribution for the period's trend level and draw a value. This value is adjusted with the period's seasonal factor and becomes our forecasted value for the airport's revenue from the concession for this period (a sketch of this procedure is given below).
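A minimal sketch of this simulation loop, with purely illustrative parameters standing in for the fan-chart distribution, the regression estimates and the seasonal factor:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 1000

slope, intercept, resid_se = 0.85, 12.0, 1.5   # hypothetical regression estimates
income_trend_now = 100.0
seasonal_factor = 1.08                          # this period's estimated factor

revenue = np.empty(n_sims)
for i in range(n_sims):
    # 1. draw a yearly income change (stand-in for the fan-chart distribution)
    income_change = rng.normal(loc=0.02, scale=0.015)
    income_trend = income_trend_now * (1 + income_change)

    # 2. most likely revenue trend from the regression, plus its estimation
    #    uncertainty, adjusted with the period's seasonal factor
    trend_level = rng.normal(intercept + slope * income_trend, resid_se)
    revenue[i] = trend_level * seasonal_factor

print(np.percentile(revenue, [5, 50, 95]))      # summary of the forecast distribution
```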

Running through this a thousand times, we get a distribution as given below:

[Figure: Concession revenue distribution]

In the airport EBITDA model this is only a small but important part of forecasting future airport revenue. As the model's data are updated (monthly), all the time series analyses and regressions are redone dynamically to capture changes in trends and seasonal factors.

The level of monthly revenue from the concession is obviously more complex than can be described with a small set of variables and assumptions. Our model has, with high probability, specification errors, and we may or may not have violated some of the statistical methods' assumptions (the model produces output to monitor this). But we feel that we are far better off than if we had put all our money on a single figure as a forecast. At least we know something about the forecast's uncertainty.

    References

Bank of England. (2009, November). Inflation Report November 2009. Retrieved from http://www.bankofengland.co.uk/publications/inflationreport/ir09nov5.ppt

  • Perception of Risk

    Perception of Risk

Google Trends and Google Insights for Search give us the opportunity to gain information on a subject's popularity. A paper by Google Inc. and the Centers for Disease Control and Prevention (USA) has shown how search queries can be used to estimate the current level of influenza activity in the United States (Ginsberg, Mohebbi, Patel, Brammer, Smolinski, & Brilliant, 2009).

It is tempting to use these Google tools to see how searches for terms connected to risk and strategy have developed over the last years. Using Google Trends to search for the terms economic risk and financial strategy, we find the relative and normalized search frequencies shown in the graphs below:

[Figure: Search volume index]

The weekly observations start in January 2004, but due to missing data (?) we have started the economic risk search series in September 2004. As is evident from the time series, the search terms are highly correlated (approx. 0.80) and there is a consistent seasonal variation, with heightened activity in spring and fall. The average value of the normalized search volume index is 1.0 for the term economic risk and 1.58 for financial strategy; the term financial strategy has thus on average been used 0.58 times more than economic risk.

    The numbers …. on the y-axis of the Search Volume Index aren’t absolute search traffic numbers. Instead, Trends scales the first term you’ve entered so that its average search traffic in the chosen time period is 1.0; subsequent terms are then scaled relative to the first term. Note that all numbers are relative to total traffic. (About Google Trends, 2009)
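A minimal sketch of this scaling, using two short hypothetical series of each term's share of total search traffic:

```python
import numpy as np

# hypothetical weekly shares of total search traffic for each term
economic_risk      = np.array([0.8, 1.1, 0.9, 1.2])
financial_strategy = np.array([1.3, 1.6, 1.5, 1.9])

scale = 1.0 / economic_risk.mean()              # first term's average is scaled to 1.0
index_risk     = economic_risk * scale
index_strategy = financial_strategy * scale     # subsequent terms use the same scale

print(index_risk.mean(), index_strategy.mean()) # 1.0 and the relative average
```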

Both series show a falling trend from early 2004 to mid 2006, indicating the terms' falling relative shares of all Google searches. From then on, however, the relative shares have been maintained, indicating increased interest in the terms in line with increased Internet search activity.

It is also possible to rank the different regions' interest in the subject:

    Region Ranking

Region          Risk    Strategy
Singapore       1.00    0.80
South Africa    0.86    1.43
Hong Kong       0.74    0.83
Malaysia        0.70    1.06
India           0.50    1.10
South Korea     0.44    0.46
Philippines     0.41    0.58
Australia       0.36    0.50
Indonesia       0.35    0.35
New Zealand     0.26    0.38

Singapore is the region with the highest share of searches including the term 'risk', and South Africa is the region with the highest share of searches including 'strategy'. In India the term 'financial strategy' is important, but 'risk' is less so.

The most striking feature of the table, however, is the lack of American and European regions. Is there less interest in these subjects in the West than in the East?

    References

Ginsberg, J., Mohebbi, M., Patel, R., Brammer, L., Smolinski, M., & Brilliant, L. (2009). Detecting influenza epidemics using search engine query data. Nature, 457, 1012-1014.

About Google Trends. (n.d.). Retrieved from http://www.google.com/intl/en/trends/about.html#7

  • Where do you go from risk mapping?

    Where do you go from risk mapping?

    You can’t control what you can’t measure. (DeMarco 1998)

Risk mapping is a much advocated and often used tool. Numerous articles, books, guidelines and standards have been written on the subject, and software has been developed to facilitate the process (e.g., AS/NZS 4360, 2004). It is the first stepping stone in risk management: the logical and systematic method of identifying, analyzing, treating and monitoring the risks and opportunities involved in any activity or process. Risk management is now becoming an integral part of any organization's planning, regardless of the type of business, activity or function.

    Risk Mapping

The risk mapping process is usually divided into seven ordered activities. The sequence can be as shown below, but the process may require revisiting earlier activities as later appraisals of risky events bring new information:

[Figure: The risk mapping process]

    The objective is to separate the acceptable risks from the unacceptable risks, and to provide data to assist in the evaluation and control of risks and opportunities.

    The Risk Events List

The risk list is the result of the risk identification activities. It consists of a list of all risks and opportunities, grouped by an agreed-upon classification. It is put together by the risk identification group, led by the risk officer, the key person responsible for risk management. The risk list is the basis for the risk database containing information about each project, risk and person involved in risk management. The main output table is the risk register.

    Risk Register

The Risk Register is a form containing a large set of fields for each risky event being analyzed and controlled. The form contains data about the event, its computational aspects and all risk response information. This register is the basis for a number of cross tables visualizing types of risk, likelihood, impact, response, responsibility etc. Of these, one is of special interest to us: the risk probability and impact matrix.

    The Risk Level Matrix

The risk level matrix is based on two tables established during the third activity in the risk mapping process: the likelihood table and the impact table.

    The Likelihood table

    During the risk analysis the potential likelihood that a given risk will occur is assessed, and an appropriate risk probability is selected from the table below:

[Figure: Probability table]

    The Impact Table

    At the same time the potential impact of each risk is analyzed, and an appropriate impact level is selected from the table below:

[Figure: Impact table]

    The Risk Matrix

    The risk level matrix shows the combination (product) of risk impact and probability, and is utilized to decide the relative priority of risks.  Risks that fall into the upper right triangle of the matrix are the highest priority, and should receive the majority of risk management resources during response planning and risk monitoring/control.  Risks that fall on the diagonal of the matrix are the next highest priority, followed by risks that fall into the lower left triangle of the matrix:

[Figure: Risk level matrix]

In practice it can look like this, with impact in four groups (the numbers refer to the risk descriptions in the risk register):

[Figure: Impact vs likelihood]

From the graph we can see that there are no risks with both high probability and high impact, and that we have at least four clusters of risks (centroid method). The individual risk's location determines the actions needed:

[Figure: Risk map]

We can multiply impact by likelihood to calculate something like an expected effect and use this to rank order the risks, but this is as far as we can get with this method.
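A minimal sketch of that ranking, with hypothetical register entries (the field names and numbers are made up for illustration):

```python
# each risk gets an "expected effect" = likelihood x impact and is sorted on it
risk_register = [
    {"id": 12, "risk": "Key supplier default", "likelihood": 0.3, "impact": 4},
    {"id": 7,  "risk": "Regulatory delay",     "likelihood": 0.6, "impact": 2},
    {"id": 23, "risk": "FX loss on contract",  "likelihood": 0.2, "impact": 5},
]

for r in risk_register:
    r["expected_effect"] = r["likelihood"] * r["impact"]

for r in sorted(risk_register, key=lambda r: r["expected_effect"], reverse=True):
    print(f'{r["id"]:>3}  {r["risk"]:<22} {r["expected_effect"]:.2f}')
```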

However, it is a great tool for the introduction of risk management in any organization; it is easy to communicate, places responsibilities, creates awareness and, most of all, lists all known hazards and risks that face the organization.

But it has all the limitations of qualitative analysis. Word forms or descriptive scales are used to describe the magnitude of potential consequences and their likelihood. No relations between the risks are established, and their individual or combined effect on the P&L and balance sheet is at best difficult to understand.

    Most risks are attributable to one or more observable variables. They can be continuous or have discrete values, but they are all stochastic variables.

    Now, even a “qualitative“ variable like political risk is measurable. Political risk is usually manifested as uncertainty about taxes, repatriation of funds, nationalization etc. Such risks can mostly be modeled and analyzed with decision-tree techniques, giving project value distributions for the different scenarios. Approaches like that give better control than just applying some general qualitative country risk measure.

    Risk Breakdown Structure (RBS)

A first step in the direction of quantitative risk analysis can be to perform a risk breakdown analysis to source-orient the individual risks. This is usually done in descending levels, increasing the detail in the definition of the sources of risk. This will give a better and often new understanding of the types of risk, their dependencies, roots and possible covariation (Zacharias, Panopoulos, Askounis, 2008).

The RBS can be further developed using Bayesian network techniques to describe and simulate discrete types of risk, usually types of hazard, failure or fault prediction in operations (Fenton, Neil, 2007).

But have we measured the risks, and what is the organization's total risk? Is it the sum of all risks, or some average?

    You can’t measure what you can’t define. (Kagan, 1993)

    Can we really manage the risks and exploit the opportunities with the tool (risk model) we now have? A model is a way of representing some feature of reality. Models are not true or false. They are simply useful or not useful for some purpose.

Risk mapping is, apart from its introductory qualities to risk management, not useful for serious corporate risk analysis. It does not define total corporate risk, neither does it measure it. Its focus on risk (hazard) also makes one forget about the opportunities, which have to be treated separately and not as what they really are: the other side of the probability distribution.

    The road ahead

We need to move to quantitative analysis with variables that describe the operations, and where numerical values are calculated for both consequences and likelihood, combining risk and opportunity.

This implies modeling the operations in sufficient detail to describe numerically what's going on. In paper production this means modeling the market (demand and prices), competitor behavior (market shares and sales), fx rates for input materials and possible exports, production (wood, chemicals, recycled paper, filler, pulp, water etc., cost, machine speeds, trim width, basis weight, total efficiency, max days of production, electricity consumption, heat cost and recovery, packaging, manning level, hazards etc.), labor cost, distribution cost, rebates, commissions, fixed costs, maintenance and reinvestment, interest rates, taxes etc. All of these are stochastic variables.

    These variables, their shape and location are the basis for all uncertainty the firm faces whether it be risk or opportunities. The act of measuring their behavior and interrelationship helps improve precision and reduce uncertainty about the firm’s operations. (Hubbard, 2007)

To us, short term risk is about the location and shape of the EBITDA distribution for the next one to three years, and long term risk about the location and shape of today's equity value distribution for the company, calculated by estimating the company's operations over a ten to fifteen year horizon. Risk is then the location and left tail of the distribution, while the possible opportunities (upside) are in the right tail of the same distribution. And now all kinds of tools can be used to measure risk and opportunities.
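A minimal sketch, assuming simulated equity values from such a model are at hand (here replaced by a hypothetical skewed sample): downside risk is read from the left tail and upside from the right tail of the same distribution:

```python
import numpy as np

rng = np.random.default_rng(7)
equity = rng.lognormal(mean=6.0, sigma=0.4, size=10_000)  # placeholder for model output

var_5  = np.percentile(equity, 5)            # lower 5% quantile of equity value
es_5   = equity[equity <= var_5].mean()      # expected value given the worst 5% of outcomes
upside = np.percentile(equity, 95)           # upper 5% quantile (the opportunity side)

print(f"5% quantile: {var_5:.0f}, expected shortfall: {es_5:.0f}, 95% quantile: {upside:.0f}")
```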

    Risk mapping is in this context a little like treating a disease’s symptoms rather than the disease itself.

    References

    AS/NZS 4360:2004 http://www.saiglobal.com/shop/script/Details.asp?DocN=AS0733759041AT

DeMarco, T. (1982). Controlling Software Projects. Englewood Cliffs: Yourdon Press.

Fenton, N., Neil, M. (2007, November). Managing Risk in the Modern World. Retrieved from http://www.lms.ac.uk/activities/comp_sci_com/KTR/apps_bayesian_networks.pdf

    Hubbard, D., (2007). How to Measure Anything. Chichester: John Wiley & Sons.

    Kagan, S. L. (1993). Defining, assessing and implementing readiness: Challenges and opportunities.

    Zacharias O., Panopoulos D., Askounis D.  (2008). Large Scale Program Risk Analysis Using a Risk Breakdown Structure. European Journal of Economics, Finance and Administrative Sciences, (12), 170-181.

  • Public Works Projects

    Public Works Projects

    This entry is part 2 of 4 in the series The fallacies of scenario analysis

     

    It always takes longer than you expect, even when you take into account Hofstadter’s Law. (Hofstadter,1999)

    In public works and large scale construction or engineering projects – where uncertainty mostly (only) concerns cost, a simplified scenario analysis is often used.

    Costing Errors

An excellent study carried out by Flyvbjerg, Holm and Buhl (Flyvbjerg, Holm, Buhl, 2002) addresses the serious questions surrounding the chronic costing errors in public works projects. The purpose was to identify typical deviations from budget and the specifics of the major causes of these deviations.

The main findings from the study reported in their article, all highly significant and most likely conservative, are as follows:

In 9 out of 10 transportation infrastructure projects, costs are underestimated. For a randomly selected project, the probability of actual costs being larger than estimated costs is 0.86. The probability of actual costs being lower than or equal to estimated costs is only 0.14. For all project types, actual costs are on average 28% higher than estimated costs.

    Cost underestimation:

– exists across 20 nations and 5 continents: it appears to be a global phenomenon.
– has not decreased over the past 70 years: no improvement in cost estimate accuracy.
– cannot be excused by error: it seems best explained by strategic misrepresentation, i.e. the planned, systematic distortion or misstatement of facts in the budget process (Jones, Euske, 1991).

    Demand Forecast Errors

The demand forecasts only add more errors to the final equation (Flyvbjerg, Holm, Buhl, 2005):

    • 84 percent of rail passenger forecasts are wrong by more than ±20 percent.
    • 50 percent of road traffic forecasts are wrong by more than ±20 percent.
    • Errors in traffic forecasts are found in the 14 nations and 5 continents covered by the study.
    • Inaccuracy is constant for the 30-year period covered: no improvement over time.

The Machiavellian Formula

    Adding the cost and demand errors to other uncertain effects, we get :

Machiavelli's Formula:
    Overestimated revenues + Overvalued development effects – Underestimated cost – Undervalued environmental impact = Project Approval (Flyvbjerg, 2007)

    Cost Projections

Transportation infrastructure projects do not appear to be more prone to cost underestimation than other types of large projects, such as power plants, dams, water distribution, oil and gas extraction, information technology systems, aerospace systems, and weapons systems.

All of the findings above should be considered forms of risk. As has been shown in cost engineering research, poor risk analysis accounts for many project cost overruns.
Two components of error in the cost estimate can easily be identified (Bertisen, 2008):

• Economic components: these errors are the result of incorrectly forecasted exchange rates, inflation rates of unit prices, fuel prices, or other economic variables affecting the realized nominal cost. Many of these variables have positively skewed distributions. This will then feed through to positive skewness in the total cost distribution.
• Engineering components: these relate to errors both in estimating unit prices and in the required quantities. There may also be an over- or underestimation of the contingency needed to capture excluded items. Cost and quantity errors are limited on the downside, but there is no limit to costs and quantities on the upside. For many cost and quantity items there is also a small probability of a "catastrophic event" that would dramatically increase costs or quantities.

When combining these factors, the result is likely to be a positively skewed cost distribution, with many small and large underrun and overrun deviations (from the most likely value) joined by a few very large or catastrophic overrun deviations.

Since the total cost distribution is positively skewed, the expected cost can be considerably higher than the calculated most likely cost.

We will keep these findings as a backdrop when we examine the Norwegian Ministry of Finance's guidelines for assessing risk in public works (Ministry of Finance, 2008, p. 3), where total uncertainty is set equal to the sum of systematic and unsystematic uncertainty:

Interpreting the guidelines, we find the following assumptions and advice:

1. Unsystematic risk cancels out when looking at large portfolios of projects.
2. All systematic risk is perfectly correlated with the business cycle.
3. Total cost is approximately normally distributed.

Since total risk is equal to the sum of systematic and unsystematic risk, the 2nd assumption implies that unsystematic risk comprises all uncertainty not explained by the business cycle. That is, it comprises all uncertainty in the planning, quantity calculations etc. and production of the project.

It is usually in these tasks that the project's inherent risks are later revealed. Based on the studies above it is reasonable to believe that the unsystematic risk has a skewed distribution located in its entirety on the positive part of the cost axis, i.e. it will not cancel out even in a portfolio of projects.

The 2nd assumption, that all systematic risk is perfectly correlated with the business cycle, is a convenient one. It allows a simple summation of percentiles (10%/90%) over all cost variables to arrive at the total cost percentiles (see the previous post in this series).

    The effect of this assumption is that the risk model becomes a perverted one, with only one stochastic variable. All the rest can be calculated from the outcomes of the “business cycle” distribution.

Now, we know that delivery times, quality and prices for all equipment, machinery and raw materials depend on the activity level in all countries demanding or producing the same items. So even if there existed a "business cycle" for every item (and a measure for it), these cycles would not necessarily be perfectly synchronised, thus proving the assumption false.
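A minimal sketch, using two hypothetical lognormal cost items, of why the percentile shortcut needs the perfect-correlation assumption: only when the items move in lockstep does the sum of the items' 90% percentiles equal the 90% percentile of total cost:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
a = rng.lognormal(mean=3.0, sigma=0.3, size=n)   # cost item 1
b = rng.lognormal(mean=3.5, sigma=0.5, size=n)   # cost item 2

p90_sum_of_items = np.percentile(a, 90) + np.percentile(b, 90)
p90_independent  = np.percentile(a + b, 90)                    # items independent
p90_comonotonic  = np.percentile(np.sort(a) + np.sort(b), 90)  # perfectly correlated

print(p90_sum_of_items, p90_independent, p90_comonotonic)
# the first and last agree (comonotonic quantiles add); here the independent case is lower
```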

The 3rd assumption implies either that all individual cost distributions are "near normal", or that they are independent and identically distributed with finite variance, so that the central limit theorem can be applied.

However, the individual cost items will be the product of unit price, exchange rate and quantity, so even if the elements in the multiplication have normal distributions, the product will not have a normal distribution (see the sketch below).
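A minimal sketch, with hypothetical means and standard deviations, showing that the product of normally distributed unit price, exchange rate and quantity comes out right-skewed rather than normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100_000
price    = rng.normal(100, 15, n)     # unit price
fx       = rng.normal(1.2, 0.15, n)   # exchange rate
quantity = rng.normal(5_000, 900, n)  # required quantity

cost = price * fx * quantity
print(stats.skew(cost))               # positive: the product is right-skewed, not normal
```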

Invoking the central limit theorem is also a non-starter: since the cost elements are, by the 2nd assumption, perfectly correlated, they cannot be independent.

All experience and every study conclude that the total cost distribution does not have a normal distribution. The cost distribution is evidently positively skewed with fat tails, whereas the normal distribution is symmetric with thin tails.

Our concerns about the wisdom of the 3rd assumption were confirmed in 2014; see The implementation of the Norwegian Governmental Project Risk Assessment Scheme and the following articles.

The solution to all this is to establish a proper simulation model for every large project, do the Monte Carlo simulation necessary to establish the total cost distribution, and then calculate the risks involved.

    “If we arrive, as our forefathers did, at the scene of battle inadequately equipped, incorrectly trained and mentally unprepared, then this failure will be a criminal one because there has been ample warning” — (Elliot-Bateman, 1967)

    References

Bertisen, J., Davis, G. A. (2008). Bias and error in mine project capital cost estimation. Engineering Economist, 01-APR-08.

    Elliott-Bateman, M. (1967). Defeat in the East: the mark of Mao Tse-tung on war. London: Oxford University Press.

    Flyvbjerg Bent (2007), Truth and Lies about Megaprojects, Inaugural speech, Delft University of Technology, September 26.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2002), “Underestimating Costs in Public Works Projects: Error or Lie?” Journal of the American Planning Association, vol. 68, no. 3, 279-295.

    Flyvbjerg, Bent, Mette K. Skamris Holm, and Søren L. Buhl (2005), “How (In)accurate Are Demand Forecasts in Public Works Projects?” Journal of the American Planning Association, vol. 71, no. 2, 131-146.

    Hofstadter, D., (1999). Gödel, Escher, Bach. New York: Basic Books

Jones, L.R., Euske, K.J. (1991). Strategic Misrepresentation in Budgeting. Journal of Public Administration Research and Theory, 1(4), 437-460.

Ministry of Finance (Norway). (2008). Systematisk usikkerhet [Systematic uncertainty]. Retrieved July 3, 2009, from the Concept research programme Web site: http://www.ivt.ntnu.no/bat/pa/forskning/Concept/KS-ordningen/Dokumenter/Veileder%20nr%204%20Systematisk%20usikkerhet%2011_3_2008.pdf

  • Selecting Strategy

    Selecting Strategy

    This entry is part 2 of 2 in the series Valuation

     

This is an example of how S@R can define, analyze, visualize and help in selecting strategies for a broad range of issues: financial, operational and strategic.

Assume that we have performed a simulation of corporate equity value (see: Corporate-risk-analysis) for two different strategies (A and B). The cumulative distributions are given in the figure below.

Since the calculation is based on a full simulation of both P&L and balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can use the distributions directly as a basis for selecting the best strategy.

[Figure: Cumulative distributions of equity value, strategies A and B]

In this rather simple case we intuitively find strategy B to be the best, lying to the right of strategy A for all probable values of equity. However, to be able to select the best strategy from more complicated and larger sets of feasible strategies, we need a better grounded method than mere intuition.

The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern's expected utility paradigm (Neumann, Morgenstern, 1953), is such a method.

When there is no uncertainty, the maximum return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm's expected utility.

To specify a utility function (U) we must have a measure that uniquely identifies each strategy (business) outcome, and a function that maps each outcome to its corresponding utility. However, utility is purely an ordinal measure. In other words, utility can be used to establish the rank ordering of strategies, but cannot be used to determine the degree to which one is preferred over the other.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
    EAU(X) ≥ EBU(X) , with at least one strict inequality.

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

    Assuming that U’≥ 0 i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted as AD1B i.e. A dominates B by 1st order stochastic dominance) if:

    EAU(X) ≥ EBU(X)  ↔  SA(x) ≤ SB(x)

where S(x) is the strategy's distribution function and there is at least one strict inequality.

If AD1B, then for all values x, the probability of obtaining x or a value higher than x is at least as large under A as under B.

    Sufficient rule 1:   A dominates B if Min SA(x) ≥ Max SB(x)   (non overlapping)

    Sufficient rule 2:   A dominates B if SA(x) ≤ SB(x)  for all x   (SA ‘below’ SB)

    Most important Necessary rules:

    Necessary rule 1:  AD1B → Mean SA > Mean SB

    Necessary rule 2:  AD1B → Geometric Mean SA > Geometric Mean SB

    Necessary rule 3:  AD1B → Min SA(x) ≥  Min SB(x)
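A minimal sketch of how sufficient rule 2 and necessary rules 1 and 3 can be checked from simulated outcomes, assuming a and b are NumPy arrays of simulated equity values for strategies A and B:

```python
import numpy as np

def first_order_dominates(a, b):
    """Check AD1B on simulated samples: sufficient rule 2 plus necessary rules 1 and 3."""
    grid = np.union1d(a, b)                                           # all observed outcomes
    S_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)    # empirical CDF of A
    S_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)    # empirical CDF of B
    sufficient = np.all(S_a <= S_b) and np.any(S_a < S_b)             # S_A 'below' S_B everywhere
    necessary = (a.mean() > b.mean()) and (a.min() >= b.min())
    return sufficient and necessary
```

Comparing the empirical distribution functions on the union of all observed values keeps the check exact for the simulated samples; with sampling noise, dominance far out in the tails should of course be interpreted with care.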

    For the case above we find that strategy B dominates strategy A – BD1A  – since the sufficient rule 2 for first order dominance is satisfied:

[Figure: Strategies A and B]

And of course, since one of the sufficient conditions is satisfied, all of the necessary conditions are satisfied. So our intuition about B being the best strategy is confirmed. However, there are cases where intuition will not work:

[Figure: Cumulative distributions, strategies #1 and #2]

    In this case the distributions cross and there is no first order stochastic dominance:

[Figure: Strategies #1 and #2]

To be able to determine the dominant strategy we have to make further assumptions about the utility function: U'' ≤ 0 (risk aversion) etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a large class of strategies. N-th order dominance is defined by the n-th order distribution function:

S^1(x) = S(x),   S^n(x) = ∫_{-∞}^{x} S^{n-1}(u) du

    where S(x) is the strategy’s distribution function.

    Then strategy A dominates strategy B in the sense of n-order stochastic dominance – ADnB  – if:

    SnA(x) ≤ SnB(x) , with at least one strict inequality and

    EAU(X) ≥ EBU(X) , with at least one strict inequality,

for all U satisfying (-1)^k U^(k) ≤ 0 for k = 1, 2, …, n.

    The last assumption implies that U has positive odd derivatives and negative even derivatives:

    U’  ≥0 → increasing wealth preference

    U”  ≤0 → risk aversion

    U’’’ ≥0 → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

Calculating the n-th order distribution function when you only have observations of the first order distribution from a Monte Carlo simulation can be difficult. We will instead use the lower partial moments (LPM), since (Ingersoll, 1987):

SnA(x) ≡ LPMn-1,A(x)/(n-1)!

    Thus strategy A dominates strategy B in the sense of n-order stochastic dominance – ADnB  – if:

LPMn-1,A(x) ≤ LPMn-1,B(x)
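A minimal sketch of this criterion on simulated outcomes: the (n-1)-th order lower partial moments of the two strategies are compared on a common grid of targets (the grid size and the sample inputs are assumptions):

```python
import numpy as np

def lpm(x, targets, order):
    """Lower partial moment of given order at each target: E[max(t - X, 0)**order]."""
    if order == 0:                                     # the 0th LPM is just the CDF
        return (x[None, :] <= targets[:, None]).mean(axis=1)
    shortfall = np.maximum(targets[:, None] - x[None, :], 0.0)
    return (shortfall ** order).mean(axis=1)

def nth_order_dominates(a, b, n, n_grid=500):
    """ADnB on simulated samples a, b via LPMs of order n-1;
    the common (n-1)! scaling cancels in the comparison."""
    targets = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), n_grid)
    lpm_a, lpm_b = lpm(a, targets, n - 1), lpm(b, targets, n - 1)
    return np.all(lpm_a <= lpm_b) and np.any(lpm_a < lpm_b)
```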

Now we have the necessary tools for selecting the dominant strategy of strategies #1 and #2. To see if we have 2nd order dominance, we calculate the first order lower partial moments, as shown in the graph below.

[Figure: 1st order lower partial moments]

Since the curves of the lower moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd order LPMs to investigate the possibility of 3rd order dominance:

[Figure: 2nd order lower partial moments]

However, it is only when we calculate the 4th order LPMs that we can conclude with 5th order stochastic dominance of strategy #1 over strategy #2:

[Figure: 4th order lower partial moments]

We then have S1D5S2, and we need not look further since Yamai and Yoshiba (2002) have shown that:

    If: S1DnS2 then S1Dn+1S2

So we end up with strategy #1 as the preferred strategy for a risk averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160) than strategy #2 (25) and a higher median value (600) than strategy #2 (561). But it was not these facts alone that gave us strategy #1 as stochastically dominant, since it also has negative skewness (-0.73) against positive skewness (0.80) for strategy #2 and a lower expected value (571) than strategy #2 (648); it was the 'sum' of all these characteristics.

    A digression

It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk averse firms (with U'' < 0), strategy #2 must be stochastically dominant for risk seeking firms (with U'' > 0), but this is not necessarily the case.

However, even if strategy #2 has a larger upside than strategy #1, it can be seen from the graphs of the two strategies' upside potential ratio (Sortino, 1999):

[Figure: Upside potential ratio]

that if we believe that the outcome will be below a minimal acceptable return (MAR) of 400, then strategy #1 has a higher minimum value and upside potential than #2, and vice versa above 400.

Rational firms should be risk averse below the benchmark MAR and risk neutral above it, i.e. they should have an aversion to outcomes that fall below the MAR, while the further outcomes are above the MAR the more they should like them (Fishburn, 1977). In other words, firms seek upside potential with downside protection.
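A minimal sketch of the upside potential ratio used above, assuming outcomes is an array of simulated values for a strategy and using the common definition of expected gain above the MAR over the downside deviation below it:

```python
import numpy as np

def upside_potential_ratio(outcomes, mar):
    """Sortino-style ratio: expected excess over MAR / downside deviation below MAR."""
    upside = np.maximum(outcomes - mar, 0.0).mean()
    downside = np.sqrt((np.minimum(outcomes - mar, 0.0) ** 2).mean())
    return upside / downside

# e.g. upside_potential_ratio(strategy_1_outcomes, mar=400)  (hypothetical array)
```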

We will return later in this series to how the firm's risk and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P.C. (1977). Mean-Risk analysis with Risk Associated with Below Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H., (2006). Stochastic Dominance. Berlin: Springer.

    Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.

  • Two letters

    Two letters

    Dear S@R,

I am not interested in the use of stochastic models, and particularly Monte Carlo simulations. I believe that these approaches too often lead to underestimating the risks of extreme events, by failing to identify correlated variables, first order or second order variables, and correlations in sample populations. I believe that the use of these models carries an important responsibility for the way banks failed to address risks correctly.
    Best regards,
    NN

    Dear NN,

We wholeheartedly agree on the errors you point out, especially for the banking sector. However, this is per se not the fault of Monte Carlo simulation as a technique, but of the way some models have been implemented and later misused.

    We also have read the stories about bank risk managers (and modellers) forced by higher management to change important risk parameters to make further loans possible.

We simply do not rely only on normal variables with short, slim tails and simple VaR calculations. For risk calculations we alternatively use shortfall and spectral risk measures, the latter to give progressively larger weights to losses that can be disastrous. This will be a topic in a future post on our Web site.

However, I beg to differ with you on the question of correlations. In my experience, large correlation matrices are a part of the problem you describe. Such correlation matrices will undoubtedly contain spurious correlations, giving false estimates of important relations. This is why we model all important relations, using the unexplained variance as a part of the uncertainty describing the problem under study: the company's operations.

Many claim that what killed Wall Street was the uncritical use of David X. Li's copula formula, where errors massively increase the risk of the whole equation blowing up (Salmon, 2009). We have therefore never used his work, relying more on the views of B. Mandelbrot and Nassim Taleb.

As we see it, the copula formula was used to avoid serious statistical analysis and simulation work, which is what we do.

    If you should reconsider, we will be happy to meet with you to explain the nature of our work. To us nothing is better than a demanding customer.

    Best regards

    S@R

    References

Salmon, Felix (2009, February 23). Recipe for Disaster: The Formula That Killed Wall Street. Wired Magazine. Retrieved 07.02.2009, from http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all