FRM – Page 3 – Strategy @ Risk

Tag: FRM

  • The Case of Enterprise Risk Management


    This entry is part 2 of 4 in the series A short presentation of S@R

     

    The underlying premise of enterprise risk management is that every entity exists to provide value for its stakeholders. All entities face uncertainty and the challenge for management is to determine how much uncertainty to accept as it strives to grow stakeholder value. Uncertainty presents both risk and opportunity, with the potential to erode or enhance value. Enterprise risk management enables management to effectively deal with uncertainty and associated risk and opportunity, enhancing the capacity to build value. (COSO, 2004)

    The evils of a single point estimate

    Enterprise risk management is a process, effected by an entity’s board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives. (COSO, 2004)

Traditionally, when estimating costs, project value or equity value, or when budgeting, one number is generated – a single point estimate. There are many problems with this approach. In budget work this point is too often given as the best the management can expect, but in some cases budgets are set artificially low, generating bonuses for later performance beyond budget. The following graph depicts the first case.

    Budget_actual_expected

Here we have – based on the production and market structure and on management's assumptions about the variability of all relevant input and output variables – simulated the probability distribution for next year's EBITDA. The graph gives the budgeted value, the actual result and the expected value. Both the budget and the actual value are above the expected value, but the budgeted value was far too high, giving with more than 80% probability a realized EBITDA lower than budget. In this case the board will be misled with regard to the company's ability to earn money, and all subsequent decisions based on the budgeted EBITDA can endanger the company.

    The organization’s ERM system should function to bring to the board’s attention the most significant risks affecting entity objectives and allow the board to understand and evaluate how these risks may be correlated, the manner in which they may affect the enterprise, and management’s mitigation or response strategies. (COSO, 2009)

It would have been much preferable for the board to be given both the budget value and the accompanying probability distribution, allowing it to make an independent judgment about the possible size of next year's EBITDA. Only then – from the shape of the distribution, its location and the point estimate of budgeted EBITDA – will the board be able to assess the risk and opportunity facing the company.
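The kind of analysis behind the graph can be sketched with a small Monte Carlo simulation. All figures below are hypothetical (the article does not give the underlying numbers); the point is only how an optimistic point budget almost guarantees a shortfall against the simulated EBITDA distribution:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical drivers of next year's EBITDA (all figures illustrative)
price = rng.normal(10.0, 1.0, n)        # sales price per unit
volume = rng.normal(1_000, 150, n)      # units sold
unit_cost = rng.normal(6.0, 0.8, n)     # variable cost per unit
fixed_cost = 1_500                      # fixed operating cost

ebitda = volume * (price - unit_cost) - fixed_cost

expected = ebitda.mean()
budget = np.percentile(ebitda, 80)      # an optimistic budget, set at the 80th percentile

# With such a budget, the realized EBITDA falls short of it 80% of the time
p_below_budget = (ebitda < budget).mean()
print(f"expected EBITDA:    {expected:,.0f}")
print(f"budgeted EBITDA:    {budget:,.0f}")
print(f"P(actual < budget): {p_below_budget:.0%}")
```

Handing the board the whole `ebitda` distribution, rather than the single `budget` number, is exactly the point made above.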

    Will point estimates cancel out errors?

In the following we measure the deviation of the actual result from both the budget value and the expected value. The blue dots represent daughter companies located in different countries. For each company we have the deviation (in percent) of the budgeted EBITDA (bottom axis) and of the expected value (left axis) from the actual EBITDA observed 1½ years later.

If the deviation for a company falls in the upper right quadrant, the deviations are positive for both budget and expected value – and the company is overachieving.

If the deviation falls in the lower left quadrant, the deviations are negative for both budget and expected value – and the company is underachieving.

If the deviation falls in the upper left quadrant, the deviation is negative for the budget and positive for the expected value – the company is overachieving but has had a too high budget.

With left-skewed EBITDA distributions there should not be any observations in the lower right quadrant; that will only happen when the distribution is skewed to the right – and then there will not be any observations in the upper left quadrant.

The graph below shows that two companies have seriously underperformed and that the budget process did not catch the risk they were facing. The rest of the companies have done very well, though some have seriously underestimated the opportunities manifested in the actual result. From an economic point of view, the mother company would of course have preferred all companies (blue dots) to lie above the x-axis, but due to the stochastic nature of EBITDA it has to accept that some always will fall below. Risk-wise, it would have preferred the companies to fall to the right of the y-axis, but due to budget uncertainties it has to accept that some always will fall to the left. However, large deviations both below the x-axis and to the left of the y-axis add to the company risk.

    Budget_actual_expected#1

A situation like the one given in the graph below is much preferable from the board's point of view.

    Budget_actual_expected#2

The graphs above, taken from real life, show that budgeting errors will not cancel out even across similar daughter companies. Consolidating the companies will give the mother company a left-skewed EBITDA distribution. They also show that you need to be prepared for deviations, both positive and negative – you need a plan. So how do you get a plan? You make a simulation model! (See Pdf: Short-presentation-of-S@R#2)

    Simulation

The Latin verb simulare means “to make like”, “to create an exact representation” or to imitate. The purpose of a simulation model is to imitate the company and its environment, so that its functioning can be studied. The model can serve as a test bed for assumptions and decisions about the company. By creating a representation of the company, a modeler can perform experiments that are impossible or prohibitively expensive in the real world. (Sterman, 1991)

    There are many different simulation techniques, including stochastic modeling, system dynamics, discrete simulation, etc. Despite the differences among them, all simulation techniques share a common approach to modeling.

    Key issues in simulation include acquisition of valid source information about the company, selection of key characteristics and behaviors, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.

Optimization models are prescriptive, while simulation models are descriptive. A simulation model does not calculate what should be done to reach a particular goal, but clarifies what could happen in a given situation. The purpose of simulations may be foresight (predicting how systems might behave in the future under assumed conditions) or policy design (designing new decision-making strategies or organizational structures and evaluating their effects on the behavior of the system). In other words, simulation models are “what if” tools. Often such “what if” information is more important than knowledge of the optimal decision.

    However, even with simulation models it is possible to mismanage risk by (Stulz, 2009):

    • Over-reliance on historical data
• Using too narrow risk metrics – such as value at risk, probably the single most important measure in financial services, which has underestimated risks
    • Overlooking knowable risks
    • Overlooking concealed risks
    • Failure to communicate effectively – failing to appreciate the complexity of the risks being managed.
• Not managing risks in real time – you have to be able to monitor changing markets and respond appropriately. You need a plan

Being fully aware of the possible pitfalls, we have methods and techniques that can overcome these issues, and since we estimate the full probability distributions we can deploy a number of risk metrics, not having to rely on simple measures like value at risk – which we actually never use.

    References

    COSO, (2004, September). Enterprise risk management — integrated framework. Retrieved from http://www.coso.org/documents/COSO_ERM_ExecutiveSummary.pdf

    COSO, (2009, October). Strengthening enterprise risk management for strategic advantage. Retrieved from http://www.coso.org/documents/COSO_09_board_position_final102309PRINTandWEBFINAL_000.pdf

    Sterman, J. D. (1991). A Skeptic’s Guide to Computer Models. In Barney, G. O. et al. (eds.),
    Managing a Nation: The Microcomputer Software Catalog. Boulder, CO: Westview Press, 209-229.

    Stulz, R.M. (2009, March). Six ways companies mismanage risk. Harvard Business Review (The Magazine), Retrieved from http://hbr.org/2009/03/six-ways-companies-mismanage-risk/ar/1


  • A short presentation of S@R


    This entry is part 1 of 4 in the series A short presentation of S@R

     

    My general view would be that you should not take your intuitions at face value; overconfidence is a powerful source of illusions. Daniel Kahneman (“Strategic decisions: when,” 2010)

Most companies have some sort of model describing the company's operations. These are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of input data: sales, costs, interest and currency rates, etc. We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they produce.

    S@R has set out to create models (See Pdf: Short presentation of S@R) that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

    Generic Simulation_model

Both the deterministic and the stochastic balance simulation can be set up in two different ways:

1. by using an EBITDA model to describe the company's operations, or
2. by using coefficients of fabrication as direct input to the balance model.

The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

The use of coefficients of fabrication and their variations is a low-effort (cost) alternative, using the internal accounting as a basis. This will in many cases give a ‘good enough’ description of the company – its risks and opportunities. The data needed for the company's economic environment (taxes, interest rates etc.) will be the same in both alternatives.

    EBITDA_model

In some cases we have used both approaches for the same client, using the latter approach for smaller daughter companies with production structures differing from the main companies'.
The second approach can also be considered an introduction and stepping stone to a more holistic EBITDA model.

    What problems do we solve?

• The aim, regardless of approach, is to quantify not only the company's single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
• This will improve the stability of budgets through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to target more realistic budgets – with better stability and increased company value as a result.
• Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modelling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
• This is most clearly seen when effort is put into correctly evaluating the effects of strategies, projects and investments on the enterprise. The best way to do this is by comparing and choosing strategies by analysing the individual strategies' risks and potential – and selecting the alternative that is (stochastically) dominant given the company's chosen risk profile.
• Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm's value within the firm's feasible set of possibilities.

    References

    Strategic decisions: when can you trust your gut?. (2010). McKinsey Quarterly, (March)

  • The Value of Information


    This entry is part 4 of 4 in the series A short presentation of S@R

     

    Enterprise risk management (ERM) only has value to those who know that the future is uncertain

    Businesses have three key needs:

    First, they need to have a product or service that people will buy. They need revenues.

    Second, they need to have the ability to provide that product or service at a cost less than what their customers will pay. They need profits. Once they have revenues and profits, their business is a valuable asset.

    So third, they need to have a system to avoid losing that asset because of unforeseen adverse experience. They need risk management.

The top CFO concern is the firm's ability to forecast results, and the first stepping-stone in the process of forecasting results is to forecast demand – and this is where ERM starts.

The main risk any firm faces is the variability (uncertainty) of demand. Since all production activities – procurement of raw materials, sizing of the work force, investment in machinery etc. – are based on expected demand, the task of forecasting future demand is crucial. It is of course difficult and in most cases impossible to perfectly forecast demand, but it is always possible to make forecasts that give better results than mere educated guesses.

    We will attempt in the following to show the value of making good forecasts by estimating the daily probability distribution for demand. We will do this using a very simple model, assuming that:

1. daily demand is normally distributed with expected sales of 100 units and a standard deviation of 12 units,
2. the product cannot be stocked,
3. it sells at $4 per unit, has a variable production cost of $2 per unit and a fixed production cost of $50.

    Now we need to forecast the daily sales. If we had perfect information about the demand, we would have a probability distribution for daily profit as given by the red histogram and line in the graphs below.

• One form of forecast (average) is the educated guess using the average daily sales (blue histogram). As we can see from the graphs, this forecast method gives a large downside (too high production) and no upside (too low production).
• A better method (limited information) would be to forecast demand by its relation to some other observable variable. Let us assume that we have a forecast method that gives a near perfect forecast in 50% of the cases and, for the rest, a probability distribution that is normally distributed with the same expected sales as demand but with a standard deviation of six units (green histogram).
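The three forecast strategies can be compared in a small simulation of the stated model. The exact form of the 'limited information' forecast is not fully specified; the sketch below reads it as demand plus a normally distributed forecast error with a six-unit standard deviation in half of the cases:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

price, var_cost, fixed_cost = 4.0, 2.0, 50.0
demand = rng.normal(100, 12, n)         # daily demand, N(100, 12)

def profit(produced, demand):
    # The product cannot be stocked: unsold units are lost
    sold = np.minimum(produced, demand)
    return price * sold - var_cost * produced - fixed_cost

# 1) Perfect information: produce exactly what will be demanded
p_perfect = profit(demand, demand)

# 2) Average forecast: always produce the expected 100 units
p_average = profit(np.full(n, 100.0), demand)

# 3) Limited information: a near perfect forecast in 50% of the cases,
#    otherwise demand plus a N(0, 6) forecast error (one reading of the
#    assumption in the text)
hit = rng.random(n) < 0.5
forecast = np.where(hit, demand, demand + rng.normal(0, 6, n))
p_limited = profit(forecast, demand)

print(f"mean profit, perfect information: {p_perfect.mean():6.1f}")
print(f"mean profit, limited information: {p_limited.mean():6.1f}")
print(f"mean profit, average forecast:    {p_average.mean():6.1f}")
```

Under this reading, the limited-information forecast recovers most of the gap between the average forecast and perfect information, consistent with the ordering described below.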

Profit-histogram

With the knowledge we have from Selecting Strategy, we clearly see that the last forecast strategy is stochastically dominant over the use of average demand as forecast.

Profit

So, what is the value to the company of more informed forecasts than the mere use of expected sales? The graph below gives the distribution of the differences in profit (in percent) between the two methods. Over time, the second method will on average give an 8% higher profit than just using the average demand as forecast.

Diff-in-profit

However, there is still another seven to eight percent of room for further improvement in the forecasting procedure.

If the company could be reasonably sure that a better forecast model than using the average exists, it would be a good strategy to put money into such an improvement. In fact, it could spend up to 8% of all future profit if it knew that a method as good as or better than our second method existed.

  • WACC, Uncertainty and Infrastructure Regulation


    This entry is part 2 of 2 in the series The Weighted Average Cost of Capital

     

There is a growing consensus that the successful development of infrastructure – electricity, natural gas, telecommunications, water, and transportation – depends in no small part on the adoption of appropriate public policies and the effective implementation of these policies. Central to these policies is the development of a regulatory apparatus that provides stability, protects consumers from the abuse of market power, guards consumers and operators against political opportunism, and provides incentives for service providers to operate efficiently and make the needed capital investments (Jamison & Berg, 2008, Overview).

    There are four primary approaches to regulating the overall price level – rate of return regulation (or cost of service), price cap regulation, revenue cap regulation, and benchmarking (or yardstick) regulation. Rate of return regulation adjusts overall price levels according to the operator’s accounting costs and cost of capital. In most cases, the regulator reviews the operator’s overall price level in response to a claim by the operator that the rate of return that it is receiving is less than its cost of capital, or in response to a suspicion of the regulator or claim by a consumer group that the actual rate of return is greater than the cost of capital (Jamison, & Berg, 2008, Price Level Regulation).

    We will in the following look at cost of service models (cost-based pricing); however some of the reasoning will also apply to the other approaches.  A number of different models exist:

    •    Long Run Average Total Cost – LRATC
    •    Long Run Incremental Cost – LRIC
    •    Long Run Marginal cost – LRMC
    •    Forward Looking Long Run Average Incremental Costs – FL-LRAIC
    •    Long Run Average Interconnection Costs – LRAIC
    •    Total Element Long Run Incremental Cost – TELRIC
    •    Total Service Long Run Incremental Cost – TSLRIC
    •    Etc.

    Where:
    Long run: The period over which all factors of production, including capital, are variable.
    Long Run Incremental Costs: The incremental costs that would arise in the long run with a defined increment to demand.
    Marginal cost: The increase in the forward-looking cost of a firm caused by an increase in its output of one unit.
    Long Run Average Interconnection Costs: The term used by the European Commission to describe LRIC with the increment defined as the total service.

We will not discuss the merits and use of the individual methods, only direct attention to the fact that an essential ingredient in all of them is their treatment of capital and the calculation of the capital cost – the Wacc.

Calculating Wacc in a World without Uncertainty

Calculating the Wacc for the current year is a straightforward task: we know for certain the interest rates (risk-free rate and credit risk premium) and tax rates, the budget values for debt and equity, the market premium and the company's beta, etc.

There is however a small snag: should we use the book value of equity, or should we calculate the market value of equity and use this in the Wacc calculation? The latter approach is the recommended one (Copeland, Koller, & Murrin, 1994, pp. 248-250), but it implies a company valuation with a calculation of the Wacc for every year in the forecast period. The difference between the two approaches can be large – only when book value equals market value for every year in the future will they give the same Wacc.

In the example below the market value of equity is lower than the book value, hence the market-value Wacc is lower than the book-value Wacc. Since this company has a low and declining ROIC, the value of equity is decreasing and hence so is the Wacc.

    Wacc-and-Wacc-weights

Calculating the Wacc for a specific company for a number of years into the future (for some telecom cases, up to 50 years) is not a straightforward task. The Wacc is no longer a single value, but a time series with values varying from year to year.

Using the average value of the Wacc can quickly lead you astray. Using an average in e.g. an LRIC model for telecommunications regulation, to determine the price paid by competitors for services provided by an operator with significant market power (the incumbent), will when the series is decreasing give a too low price in the first years and a too high price in the later years – and vice versa. So the use of an average value for the Wacc can either add to the incumbent's problems or give him a windfall income.

The same applies to the use of book-value equity vs. market-value equity. If, for the incumbent, the market value of equity is lower than the book value, the price paid by the competitors when the book-value Wacc is used will be too high and the incumbent will have a windfall gain – and vice versa.

Some advocate the use of a target capital structure (Copeland, Koller, & Murrin, 1994, p. 250) to avoid the computational difficulties (solving implicit equations) of using market-value weights in the Wacc calculation. But in real life it can be very difficult to reach and maintain a fixed structure, and it does not solve the problem of the market value of equity deviating from the book value.
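The implicit equations can be illustrated with a simple fixed-point iteration: the Wacc needs market-value weights, but the market value of equity in turn depends on the Wacc. All figures below are illustrative assumptions, not from the article:

```python
# Solving the Wacc / market-value-of-equity circularity by fixed-point
# iteration. All figures are illustrative assumptions.
fcf = 100.0          # perpetual free cash flow to the firm
debt = 400.0         # market value of debt
cost_debt = 0.05     # pre-tax cost of debt
cost_equity = 0.10
tax = 0.28

equity = 600.0       # starting guess, e.g. the book value of equity
for _ in range(100):
    value = equity + debt
    wacc = (equity / value) * cost_equity + (debt / value) * cost_debt * (1 - tax)
    # Enterprise value as a perpetuity discounted at the Wacc, then back out equity
    new_equity = fcf / wacc - debt
    if abs(new_equity - equity) < 1e-9:
        break
    equity = new_equity

print(f"market value of equity: {equity:.1f}, Wacc: {wacc:.4f}")
```

With these numbers the iteration converges to an equity value of 856 and a Wacc just under 8%, regardless of the starting guess – which is exactly why starting from a book-value weight can misprice the capital charge.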

Calculating Wacc in a World with Uncertainty

The future values of most, if not all, variables will in the real world be highly uncertain – in the long run even the tax rates will vary.

The ‘long run’ aspect of the methods therefore implies an ex-ante (before the fact) treatment of a number of variables – inflation, interest and tax rates, demand, investments etc. – that have to be treated as stochastic variables.
This is underlined by the fact that more and more central banks are presenting their forecasts of macroeconomic variables as density tables/charts (e.g. Federal Reserve Bank of Philadelphia, 2009) or as fan charts (Nakamura & Shinichiro, 2008), like the one below from the Swedish central bank (Sveriges Riksbank, 2009):

    Riksbank_dec09

Fan charts like this visualise the region of uncertainty, or the possible yearly event space, for central variables. These variables will also be important exogenous variables in any corporate valuation, as value or cost drivers. Add to this all the other variables that have to be taken into account to describe the corporate operations.

Now, for every possible outcome of any of these variables we will have a different value of the company and its equity, and hence of its Wacc. So we will not have one time series of the Wacc, but a large number of different time series, all equally probable. Actually, the probability of having forecast a single series correctly is approximately zero.

Then there is the question of how far ahead it is feasible to forecast macro variables without having to use just the unconditional mean (Galbraith & Tkacz, 2007). In the charts above the ‘content horizon’ is set to approximately 30 months; in others the horizon can be 40 months or more (Adolfson, Andersson, Linde, Villani, & Vredin, 2007).

As is evident from the charts, the fan width increases as we lengthen the horizon. This is an effect of the forecast methods, as the band of forecast uncertainty increases the farther we go into the future.

The future nominal values of GDP, costs, etc. will show even greater variation, since these values will depend on the growth-rate paths up to that point in time.

Monte Carlo Simulation

    A possible solution to the problems discussed above is to use Monte Carlo techniques to forecast the company’s equity value distribution – coupled with market value weights calculation to forecast the corresponding yearly Wacc distributions:

    Wacc-2012

This is the approach we have implemented in our models – it will not give a single value for the Wacc but its distribution. If you need a single value, the mean or mode from the yearly distributions is better than the Wacc found by using average values of the exogenous variables – cf. Jensen's inequality (Savage & Danziger, 2009).
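Jensen's inequality is easy to demonstrate numerically. Since value as a function of the discount rate is convex, the value computed at the average Wacc systematically understates the average of the values. The sketch below uses an illustrative perpetuity and a hypothetical Wacc distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# The value of a perpetuity, V = cf / wacc, is convex in the Wacc.
# By Jensen's inequality E[f(X)] >= f(E[X]) for convex f, so the value at
# the average Wacc understates the average of the values.
cf = 100.0
wacc = rng.normal(0.08, 0.015, n)       # hypothetical Wacc distribution
wacc = np.clip(wacc, 0.02, None)        # keep the discount rate positive

value_at_mean_wacc = cf / wacc.mean()
mean_value = (cf / wacc).mean()

print(f"value at the mean Wacc: {value_at_mean_wacc:,.0f}")
print(f"mean of the values:     {mean_value:,.0f}")
```

The gap between the two numbers is exactly the error introduced by collapsing the exogenous variables to their averages before valuing the company.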

    References

    Adolfson, A., Andersson, M.K., Linde, J., Villani, M., & Vredin, A. (2007). Modern forecasting models in action: improving macroeconomic analyses at central banks. International Journal of Central Banking, (December), 111-144.

    Copeland, T., Koller, T., & Murrin, J. (1994). Valuation. New York: Wiley.

Copenhagen Economics. (2007, February 2). Cost of capital for broadcasting transmission. Retrieved from http://www.pts.se/upload/Documents/SE/WACCforBroadcasting.pdf

Federal Reserve Bank of Philadelphia. (2009, November 16). Fourth quarter 2009 survey of professional forecasters. Retrieved from http://www.phil.frb.org/research-and-data/real-time-center/survey-of-professional-forecasters/2009/survq409.cfm

Galbraith, J. W., & Tkacz, G. (2007, August). Forecast content and content horizons for some important macroeconomic time series. Canadian Journal of Economics, 40(3), 935-953. Available at SSRN: http://ssrn.com/abstract=1001798 or doi:10.1111/j.1365-2966.2007.00437.x

    Jamison, Mark A., & Berg, Sanford V. (2008, August 15). Annotated reading list for a body of knowledge on infrastructure regulation (Developed for the World Bank). Retrieved from http://www.regulationbodyofknowledge.org/

    Nakamura, K., & Shinichiro, N. (2008). The Uncertainty of the economic outlook and central banks’ communications. Bank of Japan Review, (June 2008), Retrieved from http://www.boj.or.jp/en/type/ronbun/rev/data/rev08e01.pdf

Savage, S. L., & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

Sveriges Riksbank. (2009). The economic outlook and inflation prospects. Monetary Policy Report, (October), p. 7. Retrieved from http://www.riksbank.com/upload/Dokument_riksbank/Kat_publicerat/Rapporter/2009/mpr_3_09oct.pdf

  • Selecting Strategy


    This entry is part 2 of 2 in the series Valuation

     

This is an example of how S@R can define, analyze, visualize and help in selecting strategies for a broad range of issues: financial, operational and strategic.

    Assume that we have performed (see: Corporate-risk-analysis) simulation of corporate equity value for two different strategies (A and B). The cumulative distributions are given in the figure below.

Since the calculation is based on a full simulation of both P&L and balance sheet, the cost of implementing the different strategies is included in the calculation; hence we can use the distributions directly as a basis for selecting the best strategy.

    cum-distr-a-and-b_strategy

In this rather simple case, we intuitively find strategy B to be the best, lying to the right of strategy A for all probable values of equity. However, to be able to select the best strategy from more complicated and larger sets of feasible strategies we need a better-grounded method than mere intuition.

The stochastic dominance approach, developed on the foundation of von Neumann and Morgenstern's expected utility paradigm (von Neumann & Morgenstern, 1953), is such a method.

When there is no uncertainty, the maximum-return criterion can be used both to rank and to select strategies. With uncertainty, however, we have to look for the strategy that maximizes the firm's expected utility.

To specify a utility function (U) we must have a measure that uniquely identifies each strategy (business) outcome and a function that maps each outcome to its corresponding utility. However, utility is a purely ordinal measure. In other words, utility can be used to establish the rank ordering of strategies, but cannot be used to determine the degree to which one is preferred over another.

    A utility function thus measures the relative value that a firm places on a strategy outcome. Here lies a significant limitation of utility theory: we can compare competing strategies, but we cannot assess the absolute value of any of those strategies. In other words, there is no objective, absolute scale for the firm’s utility of a strategy outcome.

    Classical utility theory assumes that rational firms seek to maximize their expected utility and to choose among their strategic alternatives accordingly. Mathematically, this is expressed as:

    Strategy A is preferred to strategy B if and only if:
E_AU(X) ≥ E_BU(X), with at least one strict inequality.
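Given simulated outcome distributions, this criterion can be evaluated directly once a utility function is assumed. The sketch below uses an illustrative exponential (constant absolute risk aversion) utility and two hypothetical strategy distributions; only the ranking, not the utility values themselves, carries meaning:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical strategy outcome distributions (equity values)
outcome_a = rng.normal(100, 30, 100_000)   # higher mean, much higher risk
outcome_b = rng.normal(95, 5, 100_000)     # slightly lower mean, low risk

def expected_utility(x, risk_aversion=0.05):
    # Illustrative exponential utility: increasing (U' > 0) and concave (U'' < 0)
    return np.mean(1.0 - np.exp(-risk_aversion * x))

# The ranking, not the absolute utility level, is what matters
print(expected_utility(outcome_a), expected_utility(outcome_b))
```

With this degree of risk aversion the low-risk strategy is preferred despite its lower mean; as `risk_aversion` approaches zero the ranking flips, illustrating how the choice depends on the assumed utility function – the dependence that stochastic dominance analysis removes.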

    The features of the utility function reflect the risk/reward attitudes of the firm. These same features also determine what stochastic characteristics the strategy distributions must possess if one alternative is to be preferred over another. Evaluation of these characteristics is the basis of stochastic dominance analysis (Levy, 2006).

    Stochastic dominance as a generalization of utility theory eliminates the need to explicitly specify a firm’s utility function. Rather, general mathematical statements about wealth preference, risk aversion, etc. are used to develop decision rules for selecting between strategic alternatives.

    First order stochastic dominance.

Assuming that U’ ≥ 0, i.e. the firm has increasing wealth preference, strategy A is preferred to strategy B (denoted AD1B, i.e. A dominates B by 1st order stochastic dominance) if:

E_AU(X) ≥ E_BU(X)  ↔  S_A(x) ≤ S_B(x)

    Where S(x) is the strategy’s  distribution function and there is at least one strict inequality.

If AD1B, then for all values x, the probability of obtaining x or a value higher than x is larger under A than under B.

Sufficient rule 1:   A dominates B if Min S_A(x) ≥ Max S_B(x)   (non-overlapping)

Sufficient rule 2:   A dominates B if S_A(x) ≤ S_B(x) for all x   (S_A ‘below’ S_B)

Most important necessary rules:

Necessary rule 1:  AD1B → Mean S_A > Mean S_B

Necessary rule 2:  AD1B → Geometric mean S_A > Geometric mean S_B

Necessary rule 3:  AD1B → Min S_A(x) ≥ Min S_B(x)
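Sufficient rule 2 can be checked directly on simulated outcomes by comparing empirical distribution functions on a common grid. The data below are illustrative, not the article's strategies:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative equity-value simulations: B is A shifted up by 15, so B should
# dominate A by first order (sufficient rule 2: S_B(x) <= S_A(x) for all x)
a = rng.normal(100, 20, 50_000)
b = a + 15.0

def dominates_fsd(x, y):
    """True if y dominates x by 1st order stochastic dominance (empirically)."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), 1_000)
    s_x = np.searchsorted(np.sort(x), grid) / x.size   # empirical CDF of x
    s_y = np.searchsorted(np.sort(y), grid) / y.size   # empirical CDF of y
    return bool(np.all(s_y <= s_x) and np.any(s_y < s_x))

print(dominates_fsd(a, b))    # B lies to the right of A everywhere
print(dominates_fsd(b, a))
```

With simulated (rather than analytical) distributions, sampling noise in the far tails can mask or fake a crossing, so in practice the check is often restricted to the range where both empirical CDFs are well estimated.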

For the case above we find that strategy B dominates strategy A – BD1A – since sufficient rule 2 for first-order dominance is satisfied:

    strategy-a-and-b_strategy1

And of course, since one of the sufficient conditions is satisfied, all of the necessary conditions are satisfied. So our intuition about B being the best strategy is confirmed. However, there are cases where intuition will not work:

    cum-distr_strategy

    In this case the distributions cross and there is no first order stochastic dominance:

    strategy-1-and-2_strategy

To be able to determine the dominant strategy we have to make further assumptions about the utility function – U” ≤ 0 (risk aversion) etc.

    N-th Order Stochastic Dominance.

    With n-th order stochastic dominance we are able to rank a large class of strategies. N-th order dominance is defined by the n-th order distribution function:

S^1(x) = S(x),   S^n(x) = ∫_{-∞}^{x} S^{n-1}(u) du

    where S(x) is the strategy’s distribution function.

    Then strategy A dominates strategy B in the sense of n-order stochastic dominance – ADnB  – if:

    SnA(x) ≤ SnB(x) , with at least one strict inequality and

    EAU(X) ≥ EBU(X) , with at least one strict inequality,

for all U satisfying (-1)^k U^(k) ≤ 0 for k = 1, 2, …, n.

The last assumption implies that U has non-negative odd derivatives and non-positive even derivatives:

    U’  ≥0 → increasing wealth preference

    U”  ≤0 → risk aversion

    U’’’ ≥0 → ruin aversion (skewness preference)

    For higher derivatives the economic interpretation is more difficult.

    Calculating the n-th order distribution function when you only have observations of the first order distribution from Monte Carlo simulation can be difficult. We will instead use the lower partial moments (LPM) since (Ingersoll, 1987):

SnA(x) ≡ LPMAn-1(x)/(n-1)!

    Thus strategy A dominates strategy B in the sense of n-order stochastic dominance – ADnB  – if:

LPMAn-1(x) ≤ LPMBn-1(x) for all x, with at least one strict inequality
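From simulated outcomes, the LPM of order n at target t can be estimated as the sample mean of max(t − X, 0)^n. A minimal sketch of the resulting dominance test, assuming the strategies are compared on a common outcome range (the function names, the 200-point grid and the two-point example are our own; the test applies for n ≥ 2, so that the LPM order n − 1 is at least 1):

```python
import numpy as np

def lpm(sample, targets, order):
    """Lower partial moment of the given order at each target t:
    LPM_n(t) = E[ max(t - X, 0)^n ], estimated as a sample mean."""
    shortfall = np.maximum(targets[:, None] - sample[None, :], 0.0)
    return (shortfall ** order).mean(axis=1)

def nth_order_dominates(a, b, n):
    """A D_n B if LPM_{n-1} of A lies at or below that of B over the
    whole outcome range, with at least one strict inequality (n >= 2)."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), 200)
    lpm_a = lpm(a, grid, n - 1)
    lpm_b = lpm(b, grid, n - 1)
    return bool(np.all(lpm_a <= lpm_b) and np.any(lpm_a < lpm_b))

# A sure outcome vs. a mean-preserving spread: 2nd order dominance
safe = np.array([600.0, 600.0])
risky = np.array([400.0, 800.0])
print(nth_order_dominates(safe, risky, 2))   # True: safe D2 risky
print(nth_order_dominates(risky, safe, 2))   # False
```

In practice the grid resolution and sampling error matter: two LPM curves that touch within simulation noise should not be read as a strict inequality.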

Now we have the necessary tools for selecting the dominant strategy of strategies #1 and #2. To see if we have 2nd order dominance, we calculate the first order lower partial moments – as shown in the graph below.

    2nd-order_strategy

Since the curves of the lower partial moments still cross, both strategies are efficient, i.e. neither dominates the other. We therefore have to look further, using the 2nd order LPMs to investigate the possibility of 3rd order dominance:

    3rd-order_strategy

However, it is only when we calculate the 4th order LPMs that we can conclude with 5th order stochastic dominance of strategy #1 over strategy #2:

    5th-order_strategy

We then have S1D5S2, and we need not look further, since Yamai and Yoshiba (2002) have shown that:

    If: S1DnS2 then S1Dn+1S2

So we end up with strategy #1 as the preferred strategy for a risk-averse firm. It is characterized by a lower coefficient of variation (0.19) than strategy #2 (0.45), a higher minimum value (160 against 25) and a higher median value (600 against 561). But it was not these facts alone that made strategy #1 stochastically dominant; it also has negative skewness (-0.73) against positive skewness (0.80) for strategy #2, and a lower expected value (571) than strategy #2 (648). It is the ‘sum’ of all these characteristics that decides.

    A digression

It is tempting to assume that since strategy #1 stochastically dominates strategy #2 for risk-averse firms (with U” < 0), strategy #2 must be stochastically dominant for risk-seeking firms (with U” > 0), but this is not necessarily the case.

However, even if strategy #2 has a larger upside than strategy #1, it can be seen from the graph of the two strategies’ upside potential ratios (Sortino, 1999):

upside-ratio_strategy

that if we believe that the outcome will be below a minimal acceptable return (MAR) of 400, then strategy #1 has a higher minimum value and upside potential than #2, and vice versa above 400.

Rational firms should be risk averse below the benchmark MAR and risk neutral above it, i.e. they should have an aversion to outcomes that fall below the MAR. On the other hand, the higher the outcomes are above the MAR, the more they should like them (Fishburn, 1977). That is, firms seek upside potential with downside protection.
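The upside potential ratio plotted above divides the expected outcome above the MAR by the downside deviation below it (Sortino, 1999). A minimal sketch of the estimate from simulated outcomes, with hypothetical numbers of our own:

```python
import numpy as np

def upside_potential_ratio(sample, mar):
    """Sortino's upside potential ratio: expected gain above the
    minimal acceptable return (MAR), per unit of downside deviation."""
    upside = np.maximum(sample - mar, 0.0).mean()
    downside = np.sqrt((np.maximum(mar - sample, 0.0) ** 2).mean())
    return upside / downside

# Hypothetical simulated outcomes around a MAR of 400
outcomes = np.array([300.0, 500.0, 700.0])
print(round(upside_potential_ratio(outcomes, 400.0), 3))   # 2.309
```

Because both numerator and denominator are one-sided, the ratio rewards upside potential while penalizing only shortfalls below the MAR, which is exactly the Fishburn-type preference described above.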

We will return later in this series to how the firm’s risk and opportunities can be calculated given the selected strategy.

    References

    Fishburn, P.C. (1977). Mean-Risk analysis with Risk Associated with Below Target Returns. American Economic Review, 67(2), 121-126.

    Ingersoll, J. E., Jr. (1987). Theory of Financial Decision Making. Rowman & Littlefield Publishers.

    Levy, H., (2006). Stochastic Dominance. Berlin: Springer.

von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton: Princeton University Press.

Sortino, F., van der Meer, R., & Plantinga, A. (1999). The Dutch Triangle. The Journal of Portfolio Management, 26(1).

Yamai, Y., & Yoshiba, T. (2002). Comparative Analysis of Expected Shortfall and Value-at-Risk (2): Expected Utility Maximization and Tail Risk. Monetary and Economic Studies, April, 95-115.

  • Top Ten Concerns of CFO’s – May 2009

A poll of more than 1,200 senior finance executives by CFO Europe, together with Tilburg and Duke University, ranks the ten top external and internal concerns in Europe, Asia and America (Karaian, 2009).

    cfo_europe_top_ten1

High on the list in all regions we find the external concerns: consumer demand, interest rates, currency volatility and competition.

Among the internal concerns, the ability to forecast results ranked highest, together with working capital management and balance sheet weakness. These are concerns that balance sheet simulation addresses, with the purpose of calculating the effects of different strategies. Add the uncertainty of future currency and interest rates, demand and competition, and you have all the ingredients implying the necessity of a stochastic simulation model.

The risk that “now” has surfaced should compel more managers to look into the risk inherent in their operations. Even if you can’t plan for an uncertain future, you can prepare for what it might bring.

    References

    Karaian, Jason (2009, May). Top Ten Concerns of CFO’s. CFO Europe, 12(1), 10-11.