
Author: S@R

  • The Risk of Bankruptcy


    This entry is part 1 of 4 in the series Risk of Bankruptcy

     

Investors should be skeptical of history-based models. Constructed by a nerdy-sounding priesthood using esoteric terms such as beta, gamma, sigma and the like, these models tend to look impressive. Too often, though, investors forget to examine the assumptions behind the symbols. Our advice: Beware of geeks bearing formulas. – Warren E. Buffett ((Buffett, Warren E. "Shareholder Letters." Berkshire Hathaway Inc., 27 February 2009. 13 Mar 2009 <http://www.berkshirehathaway.com/letters/letters.html>.))

Historic growth is usually a risky estimate of future growth. To forecast a company's future performance you have to make assumptions about the most likely future values and the event space of a large number of variables, and then calculate both the probability that future cash infusions will be needed and – if they do not materialize – the risk of bankruptcy.

The following calculations are carried out using the Strategy @ Risk simulation model. Such simulations can be carried out on all types of enterprises, including the financial sector. There are several models in use for predicting bankruptcy, and we have implemented two of them in our balance simulation model: Altman's Z-score model and the risk index Z developed by Hannan and Hanweck.

Altman's Z-score model is based on financial ratios and their relation to bankruptcy, found from discriminant analysis. ((Altman, E. I. "Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy." The Journal of Finance 23 (1968): 589-609.)) The coefficients in the discriminant function have in later studies been revised – the Z'-score and Z''-score models.

Hannan and Hanweck's probability of insolvency is based on the likelihood of the return on assets being negative and larger in absolute value than the capital-asset ratio. ((Hannan, Timothy H., Gerald A. Hanweck. "Bank Insolvency Risk and the Market for Large Certificates of Deposit." Journal of Money, Credit and Banking 20 (1988): 203-211.)) The Z index has been used to forecast bank insolvency ((Kimball, Ralph C. "Economic Profit and Performance Measurement in Banking." New England Economic Review July/August (1998): 35-53.)) ((Jordan, John S. "Problem Loans at New England Banks, 1989 to 1992: Evidence of Aggressive Loan Policies." New England Economic Review January/February (1998): 23-38.)), but can profitably be used to study large private companies with a low return on assets.

    We will here take a look at the Z-index and in a later post use the same data to calculate the Z-scores.

    The following calculations are based on forecasts, EBITDA and balance simulations – not on historic balance sheet data. The Z-index is defined as:

Z = (ROA + K) / σ

where ROA is the pre-tax return on assets, K the ratio of equity to assets, and σ the standard deviation of pre-tax ROA. The Z-index gives, per unit of standard deviation of ROA, the decline in ROA the company can absorb before equity is exhausted and it becomes insolvent.

In the simulation (250 runs) we will, for every year in the 15-year forecast period, forecast the yearly ROA and K, and use the variance in ROA to estimate σ. For every value of Z – assuming a symmetric distribution – we can calculate the perceived probability (upper bound) of insolvency (p) from:

p = (1/2) · σ² / (E(ROA) + K)²

where the multiplication by (1/2) reflects the fact that insolvency occurs only in the left tail of the distribution. The relation of p to Z is an inverse one, with higher Z-ratios indicating a lower probability of insolvency.

[Figure: the Z-index and the corresponding probability of insolvency]
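As a minimal sketch of how the Z-index and the insolvency bound can be computed from one year's simulation output (our own illustration, not the S@R model itself; all numbers below are made up):

```python
import numpy as np

def z_index_and_insolvency_bound(roa: np.ndarray, k: np.ndarray) -> tuple[float, float]:
    """Z-index and upper bound on the probability of insolvency.

    roa : simulated pre-tax return on assets, one value per simulation run
    k   : simulated equity-to-assets ratio, one value per simulation run
    """
    sigma = roa.std(ddof=1)                            # standard deviation of pre-tax ROA
    z = (roa.mean() + k.mean()) / sigma                # Z = (ROA + K) / sigma
    p = 0.5 * sigma**2 / (roa.mean() + k.mean())**2    # p = (1/2) * sigma^2 / (E(ROA) + K)^2
    return z, p

# Made-up figures resembling year five below: 250 runs, mean ROA 1.3%, mean K 37%
rng = np.random.default_rng(7)
roa = rng.normal(0.013, 0.015, size=250)
k = rng.normal(0.37, 0.012, size=250)
z, p = z_index_and_insolvency_bound(roa, k)
print(f"Z = {z:.1f}, upper bound on probability of insolvency = {p:.3%}")
```

Note that p = 1/(2·Z²), so the two numbers carry the same information.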

Since our simulation covers a 15-year period, it is fully possible that multi-period losses, through a decline in K, can wipe out the equity and cause a failure of the company.

In year five of the simulation the situation is as follows: the pre-tax return on assets is low – on average only 1.3% – and in 20% of the cases it is zero or negative.

[Figure: distribution of pre-tax ROA in year five]

However, the ratio of equity to assets is high – on average 37%, with a standard deviation of only 1.2.

[Figure: distribution of the ratio of equity to assets]

The distribution of the corresponding Z-index values is given in the chart below. It is skewed with a long right tail; the mean is 32 with a minimum value of 16.

[Figure: distribution of the Z-index]

From the graph giving the relation between the Z-index and the probability of insolvency, it is clear that the company's economic situation is far from being threatened. If we look at the distribution of the probability of insolvency calculated from the estimated Z-index values, this is confirmed, with values in the range from 0.1 to 0.3.

[Figure: distribution of the probability of insolvency]

Having the probability of insolvency per year gives us the opportunity to calculate the probability of failure over the forecast period for any chosen strategy.
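One simple way to do this – a sketch assuming that the yearly insolvency events can be treated as independent – is to combine the yearly probabilities p_1, …, p_T over the T-year forecast horizon:

P(failure during the T years) = 1 - (1 - p_1)*(1 - p_2)* … *(1 - p_T)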

    If it can’t be expressed in figures, it is not science; it is opinion. It has long been known that one horse can run faster than another — but which one? Differences are crucial. ((Heinlein, Robert. Time Enough for Love. New York: Putnam, 1973))


  • The Risk of Spreadsheet Errors


    This entry is part 1 of 2 in the series Spreadsheet Errors

     

Spreadsheets create an illusion of orderliness, accuracy, and integrity. The tidy rows and columns of data, instant calculations, eerily invisible updating, and other features of these ubiquitous instruments contribute to this soothing impression. The quote is taken from Ivars Peterson's MathTrek column, written back in 2005, but it still applies today. ((Peterson, Ivars. "The Risky Business of Spreadsheet Errors." MAA Online, December 19, 2005. 26 Feb 2009.))

Over the years we have learned a good deal about spreadsheet errors; there is even a spreadsheet risk interest group, EuSpRIG ((EuSpRIG: http://www.eusprig.org/index.htm)).

Audits show that nearly 90% of spreadsheets contain serious errors. Code inspection experiments also show that even experienced users have a hard time finding errors, on average finding only 54% of them.

Panko (2009) summarized the results of seven field audits in which operational spreadsheets were examined, typically by an outsider to the organization. His results show that 94% of spreadsheets have errors and that the average cell error rate (the ratio of cells with errors to all cells with formulas) is 5.2%. ((Panko, Raymond R. "What We Know About Spreadsheet Errors." Spreadsheet Research (SSR), 16 Feb 2009. University of Hawai'i. 27 Feb 2009.))

Some of the problems stem from the fact that a cell can contain any of the following: operational values, document properties, file names, sheet names, file paths, external links, formulas, hidden cells, nested IFs, macros etc., and that the workbook can contain hidden sheets and very hidden sheets.

Add to this the reuse and recirculation of workbooks and code; after cutting and pasting information, the spreadsheet might not work the way it did before – formulas can be damaged, links can be broken, or cells can be overwritten. How many use version control and change logs? In addition, the spreadsheet is a perfect environment for perpetrating fraud due to the mixture of formulae and data.

End-users and organizations that rely on spreadsheets generally do not fully recognize the risks of spreadsheet errors: It is completely within the realms of possibility that a single, large, complex but erroneous spreadsheet could directly cause the accidental loss of a corporation or institution (Croll 2005). ((Croll, Grenville J. "The Importance and Criticality of Spreadsheets in the City of London." Notes from the EuSpRIG 2005 Conference. 2005. EuSpRIG. 2 Mar 2009.))

    A very comprehensive literature review on empirical evidence of spreadsheet errors is given in the article Spreadsheet Accuracy Theory.  ((Kruck, S. E., Steven D. Sheetz. “Spreadsheet Accuracy Theory.” Journal of Information Systems Education 12(2007): 93-106.))

EuSpRIG also publishes verified public reports with a quantified error or documented impact of spreadsheet errors. (("Spreadsheet mistakes – news stories." EuSpRIG. 2 Mar 2009.))

In the following we will use published data from a well-documented study of spreadsheet errors. The data are the result of an audit of 50 completed and operational spreadsheets from a wide variety of sources. ((Powell, Stephen G., Kenneth R. Baker, Barry Lawson. "Errors in Operational Spreadsheets." Tuck School of Business, Dartmouth College, November 15, 2007. 2 Mar 2009.))

Powell et al. settled on six error types:

    1. Hard-coding in a formula – one or more numbers appear in formulas
    2. Reference error – a formula contains one or more incorrect references to other cells
    3. Logic error – a formula is used incorrectly, leading to an incorrect result
    4. Copy/Paste error – a formula is wrong due to inaccurate use of copy/paste
    5. Omission error – a formula is wrong because one or more of its input cells is blank
    6. Data input error – an incorrect data input is used

These were then grouped as Wrong Result or Poor Practice, depending on the error's effect on the calculation.

Only three workbooks were without errors, giving a spreadsheet error rate of 94%. In the remaining 47 workbooks they found 483 instances ((An error instance is a single occurrence of one of the six errors in their taxonomy.)) of errors; 281 giving a wrong result and 202 involving poor practice.

[Table: error instances by error type]

The distribution over the different types of error is given in the instances table. It is worth noting that among poor practice errors hard-coding was the most common, while incorrect references and incorrectly used formulas were the most numerous among wrong result errors.

[Table: error cells by error type]

The 483 instances involved 4,855 error cells, which, with 270,722 cells audited, gives a cell error rate of 1.79%. The corresponding distribution of errors is given in the cells table. The Cell Error Rate (CER) for wrong result is 0.87%, while the CER for poor practice is 1.79%.

In the following graph we have plotted the cell error rates against the proportion of spreadsheets having that error rate (zero CER is excluded). We can see that most spreadsheets have a low CER and only a few a high CER. This is more evident for wrong result than for poor practice.

[Figure: frequency distribution of cell error rates]

If we accumulate the above frequencies and include the spreadsheets with zero errors, we get the "probability distributions" below. We find that 60% of the spreadsheets have a wrong-result CER of 1% or more and that only 10% have a CER of 5% or more.

[Figure: cumulative distribution of cell error rates]

The high percentage of spreadsheets having errors is due to the fact that bottom-line values are computed through long cascades of formula cells. Because error rates multiply along cascades of subtasks in tasks that contain many sequential operations, the fundamental equation for the bottom-line error rate is based on a memoryless geometric distribution over cell errors ((Lorge, Irving, Herbert Solomon. "Two Models of Group Behavior in the Solution of Eureka-Type Problems." Psychometrika 20 (1955): 139-148.)):

E = 1 - (1 - e)^n

Here, E is the bottom-line error rate, e is the cell error rate and n is the number of cells in the cascade. E indicates the probability of an incorrect result in the last cell of the cascade, given that the probability of an error in each cascade cell is equal to the cell error rate. ((Bregar, Andrej. "Complexity Metrics for Spreadsheet Models." Proceedings of EuSpRIG 2004. http://www.eusprig.org/. 1 Mar 2009.))

In the figure below we have used the CER for wrong result (0.87%) and for poor practice (1.79%) to calculate the probability of a corresponding worksheet error, given the cascade length. For poor practice, a calculation cascade of 100 cells gives a probability of 84% of at least one error, and at about 166 cells it is 95%. For wrong result, 100 cells give a probability of 58% of an error, and at 343 cells it is 95%.

[Figure: probability of a worksheet error as a function of cascade length]
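The arithmetic behind these figures is easy to reproduce. The following is a small sketch (our own illustration) using the formula and the two cell error rates quoted above:

```python
import numpy as np

def bottom_line_error_rate(cer: float, n_cells: int) -> float:
    """E = 1 - (1 - e)^n: probability of at least one error in a cascade of n cells."""
    return 1.0 - (1.0 - cer) ** n_cells

def cells_for_error_prob(cer: float, target: float = 0.95) -> float:
    """Cascade length at which the bottom-line error probability reaches `target`."""
    return np.log(1.0 - target) / np.log(1.0 - cer)

for label, cer in [("wrong result", 0.0087), ("poor practice", 0.0179)]:
    print(f"{label}: E(100 cells) = {bottom_line_error_rate(cer, 100):.0%}, "
          f"95% reached at about {cells_for_error_prob(cer):.0f} cells")
```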

Now, if we consider a net present value calculation over a 10-year forecast period in a valuation problem, it will easily involve more than 343 cells and thus, with high probability, contain an error.

This is why S@R uses programming languages for its simulation models. Of course such models will also have errors, but they do not mix data and code, quality control is easier, they have columnar consistency, are protected by being compiled, and have numerous intrinsic error checks, data entry controls and validation checks (see: Who we are).

Efficient computing tools are essential for statistical research, consulting, and teaching. Generic packages such as Excel are not sufficient even for the teaching of statistics, let alone for research and consulting (American Statistical Association).


  • Airport Simulation


    This entry is part 1 of 4 in the series Airports

     

The basic building block in airport simulation is the passenger (Pax) forecast. This is the basis for the subsequent estimation of aircraft movements (ATM), investment in terminal buildings and airside installations, all traffic charges, tax-free sales etc. In short, it is the basic determinant of the airport's economics.

    The forecast model is usually based on a logarithmic relation between Pax, GDP and airfare price movement. ((Manual on Air Traffic Forecasting. ICAO, 2006)), ((Howard, George P. et al. Airport Economic Planning. Cambridge: MIT Press, 1974.))
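A typical specification – our assumption about the functional form, not necessarily the exact model used here – is a log-log regression in which the coefficients can be read directly as elasticities:

ln(Pax_t) = a + b·ln(GDP_t) + c·ln(P_t) + e_t

where P_t is the real air fare level, b the income (GDP) elasticity and c the (normally negative) price elasticity of air travel demand.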

There have been a large number of studies over time and across the world on air travel demand elasticities; a good survey is given in a Canadian study ((Gillen, David W., William G. Morrison, Christopher Stewart. "Air Travel Demand Elasticities: Concepts, Issues and Measurement." 24 Feb 2009 http://www.fin.gc.ca/consultresp/Airtravel/airtravStdy_-eng.asp)).

In a recent project for a European airport – aimed at establishing an EBITDA model capable of simulating the risk in its economic operations – we embedded the Pax forecast models in the EBITDA model. Since the seasonal variations in traffic are very pronounced, and since the cycles are reversed for domestic and international traffic, a good forecast model should attempt to forecast the seasonal variations for the different groups of travellers.

[Figure: seasonal pattern of international and domestic Pax]

In the following graph we have done just that, by adding seasonal factors to the forecast model based on the relation between Pax and changes in GDP and air fare cost. We have, however, accepted that the model specification is not complete and that the seasonal factors are neither fixed nor constant. We therefore apply Monte Carlo simulation, using the estimation and forecast errors as the stochastic parts. In the figure the green lines indicate the 95% limit, the blue the mean value and the red the 5% limit. Thus, with 90% probability, the number of monthly Pax will fall within these limits.

[Figure: simulated monthly Pax with 5% and 95% limits]

From the graph we can clearly see the effects of the estimation and forecast "errors" and the fact that it is international travel that increases most as GDP increases (the summer effect).
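A stripped-down sketch of this kind of simulation (our own illustration with assumed parameter values, not the client model): a multiplicative seasonal profile is applied to a log-log trend forecast, and both the estimation error in the elasticities and the residual forecast error are redrawn in every Monte Carlo run before the 5%, mean and 95% traces are read off.

```python
import numpy as np

rng = np.random.default_rng(42)

months, runs = 36, 1000
season = 1.0 + 0.25 * np.sin(2 * np.pi * (np.arange(months) % 12) / 12)  # assumed seasonal profile

gdp_growth = 0.02 / 12              # assumed monthly GDP growth
fare_growth = -0.01 / 12            # assumed monthly change in real air fares
b_gdp, b_fare = 1.4, -0.6           # assumed demand elasticities
se_b_gdp, se_b_fare = 0.2, 0.15     # assumed estimation errors of the elasticities
sigma_resid = 0.03                  # assumed forecast (residual) error on log scale

pax0 = 100_000                      # passengers in the last observed month
t = np.arange(1, months + 1)

paths = np.empty((runs, months))
for r in range(runs):
    b1 = rng.normal(b_gdp, se_b_gdp)        # estimation error: redraw the elasticities
    b2 = rng.normal(b_fare, se_b_fare)
    log_trend = np.log(pax0) + (b1 * gdp_growth + b2 * fare_growth) * t
    noise = rng.normal(0.0, sigma_resid, size=months)   # forecast error
    paths[r] = np.exp(log_trend + noise) * season

low = np.percentile(paths, 5, axis=0)       # 5% limit (red line in the figure above)
mean = paths.mean(axis=0)                   # mean (blue line)
high = np.percentile(paths, 95, axis=0)     # 95% limit (green line)
print(low[:3].round(), mean[:3].round(), high[:3].round())
```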

As an increase in GDP is not exactly imminent at this point in time, we supply the following graph, displaying the effects of different scenarios for growth in GDP and air fare cost.

[Figure: Pax under different GDP and air fare scenarios]


  • Budgeting


    This entry is part 1 of 2 in the series Budgeting

     

Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables – sales, prices, wages, downtime, error rates, exchange rates etc. – variables that describe the nature of the business.

Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described this way, and even more seldom is the uncertainty actually calculated as an integrated part of the budget.

Admittedly, a number of large public building projects are calculated this way, but more often than not the aim is only to calculate some percentile (usually the 85th) as the expected budget cost.

Most managers and their staff have, based on experience, a good grasp of the range in which the values of their variables will fall. A manager's subjective probability describes his personal judgement about how likely a particular event is to occur. It is not based on any precise computation but is a reasonable assessment by a knowledgeable person. Selecting the budget value, however, is more difficult. Should it be the "mean" or the "most likely value", or should the manager just delegate the fixing of the values to the responsible departments?

Now we know that the budget values might be biased for a number of reasons – most simply by bonus schemes etc. – and that budgets based on average assumptions are wrong on average. ((Savage, Sam L. "The Flaw of Averages." Harvard Business Review, November (2002): 20-21.))

    When judging probability, people can locate the source of the uncertainty either in their environment or in their own imperfect knowledge ((Kahneman D, Tversky A . ” On the psychology of prediction.” Psychological Review 80(1973): 237-251)). When assessing uncertainty, people tend to underestimate it – often called overconfidence and hindsight bias.

    Overconfidence bias concerns the fact that people overestimate how much they actually know: when they are p percent sure that they have predicted correctly, they are in fact right on average less than p percent of the time ((Keren G.  “Calibration and probability judgments: Conceptual and methodological issues”. Acta Psychologica 77(1991): 217-273.)).

    Hindsight bias concerns the fact that people overestimate how much they would have known had they not possessed the correct answer: events which are given an average probability of p percent before they have occurred, are given, in hindsight, probabilities higher than p percent ((Fischhoff B.  “Hindsight=foresight: The effect of outcome knowledge on judgment under uncertainty”. Journal of Experimental Psychology: Human Perception and Performance 1(1975) 288-299.)).

We will, however, not ask for the managers' full subjective probability distributions, only for the range of possible values (the 5%–95% interval) and their best guess of the most likely value. We will then use this to generate an appropriate log-normal distribution for sales, prices etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate, we will use rectangular distributions.
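As a sketch of how such a distribution can be derived (assuming the manager's lower and upper values are treated as the 5% and 95% quantiles of a log-normal distribution; the figures are invented):

```python
import numpy as np

def lognormal_from_5_95(low: float, high: float) -> tuple[float, float]:
    """Parameters (mu, sigma) of ln(X) ~ N(mu, sigma) such that
    P(X < low) = 5% and P(X < high) = 95%."""
    z95 = 1.6449                     # 95th percentile of the standard normal
    mu = 0.5 * (np.log(low) + np.log(high))
    sigma = (np.log(high) - np.log(low)) / (2 * z95)
    return mu, sigma

# Example: a manager's 5%-95% range for sales of 80 to 130
mu, sigma = lognormal_from_5_95(80.0, 130.0)
sales = np.random.default_rng(1).lognormal(mu, sigma, size=1000)
print(f"implied mode {np.exp(mu - sigma**2):.1f}, simulated mean {sales.mean():.1f}")
```

The implied mode can then be compared with the manager's most likely value as a rough consistency check.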

We will then proceed as if the distributions were known (Keynes):

[Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed. ((Keynes, John Maynard. "The General Theory of Employment." Quarterly Journal of Economics (1937).))

[Figure: budget, actual and expected EBIT]

The data collection can easily be embedded in the ordinary budget process, by asking the managers to set the lower and upper 5% values for all variables determining the budget, and assuming that the budget figures are the most likely values.

    This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – usually 1000 – of net revenue, operating expenses and finally EBIT (DA).
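A minimal sketch of that step (all figures invented for illustration; in practice the distributions are derived from the managers' ranges as described above):

```python
import numpy as np

rng = np.random.default_rng(2024)
runs = 1000

# Assumed distributions for the two aggregates (monetary units are arbitrary)
net_revenue = rng.lognormal(mean=np.log(500), sigma=0.10, size=runs)
operating_expenses = rng.lognormal(mean=np.log(420), sigma=0.05, size=runs)

ebitda = net_revenue - operating_expenses
budget_ebitda = 90.0                                  # the figure in the budget

print(f"expected EBITDA         : {ebitda.mean():.1f}")
print(f"P(outcome below budget) : {(ebitda < budget_ebitda).mean():.0%}")
```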

In this case the budget was optimistic, with ca 84% probability of an outcome below it and only about 16% probability of an outcome above. The accounts also proved it to be too high, with the final (actual) EBIT falling closer to the expected value. In our experience the expected value is a better estimator of the final result than the budget EBIT.

However, the most important part of this exercise is the shape of the cumulative distribution curve for EBIT. The shape gives a good picture of the uncertainty the company faces in the year to come; a flat curve indicates more uncertainty, both in the budget forecast and in the final result, than a steeper curve.

Wisely used, the curve (distribution) can serve both to inform stakeholders about the risk being faced and to make contingency plans foreseeing adverse events.

[Figure: perceived uncertainty in net revenue and operating expenses]

Having the probability distributions for net revenue and operating expenses, we can calculate and plot the managers' perceived uncertainty by using coefficients of variation.

In our material we find, on average, twice as much uncertainty in the forecasts for net revenue as in those for operating expenses.

As many companies set budget values above the expected value, they are exposed to a downside risk. We can measure this risk by the Upside Potential Ratio, which is the expected return above the budget value per unit of downside risk. It can be found using the upper and lower partial moments calculated at the budget value, as sketched below.
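A sketch of that calculation (our formulation, using the first upper and second lower partial moments at the budget value):

```python
import numpy as np

def upside_potential_ratio(outcomes: np.ndarray, budget: float) -> float:
    """Expected outcome above budget per unit of downside risk below budget.

    Numerator  : first upper partial moment at the budget value
    Denominator: square root of the second lower partial moment (downside deviation)
    """
    upside = np.maximum(outcomes - budget, 0.0).mean()
    downside = np.sqrt(np.mean(np.maximum(budget - outcomes, 0.0) ** 2))
    return upside / downside

# Example: simulated EBIT outcomes against a budget value of 90 (invented figures)
rng = np.random.default_rng(5)
ebit = rng.normal(82.0, 15.0, size=1000)
print(f"Upside Potential Ratio: {upside_potential_ratio(ebit, 90.0):.2f}")
```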


  • Fish farming


When we were asked in 2002 to look into the risk of cod fish farming, we had to start with the basics: how do cod feed and grow at different locations, and what is the mortality at those locations?

The first building block was Björn Björnsson's paper: Björnsson, B., Steinarsson, A., Oddgeirsson, M. (2001). Optimal temperature for growth and feed conversion of immature cod. ICES Journal of Marine Science, 58: 29-38.

Together with Björn Björnsson (Marine Research Institute, Iceland) and Nils Henrik Risebro (University of Oslo, Norway) we did the study presented in the attached paper – Growth, mortality, feed conversion and optimal temperature for maximum rate of increase in biomass and earnings in cod fish farming.

    This formed the basis for a stochastic simulation model used to calculate the risk in investing in cod fish farming at different locations in Norway.

[Figure: simulation model for fish farming]

The stochastic part was taken from the "estimation errors" in the relations between growth, feed conversion, mortality etc. as functions of the deviation from optimal temperature.

As the optimal temperature varies with cod size, the temperature at a fixed location will, over the year and over the production cycle, deviate from the optimal temperature. Locations with temperature profiles close to the optimal temperature profile for growth in biomass will, other parameters held constant, be more favorable.

The results, which came out favorably for certain locations, were subsequently used as the basis for an IPO to finance the investment.

The use of the model was presented in an article in Norsk Fiskeoppdrett 2002, #4 and 5. It can be downloaded here (see: Cod fish farming); even though it is in Norwegian, some of the graphs might be of interest.

The following graph sums up the project. It is based on the local yield in biomass relative to the yield at the optimal temperature profile for growth in biomass. The farming operation is simulated at different locations along the coast of Norway, and the local yield and its coefficient of variation (standard deviation divided by mean) are plotted against the location's position north. As we can see, the yield increases as the location moves north, while the coefficient of variation falls, indicating less risk in an investment.

[Figure: yield and its coefficient of variation as a function of position north]

The temperature profiles for the locations were taken from the Institute of Marine Research publication: Hydrographic normals and long-term variations at fixed surface layer stations along the Norwegian coast from 1936 to 2000, Jan Aure and Øyvin Strand, Fisken og Havet, #13, 2001.

[Figure: locations of the fixed thermographic stations along the coast of Norway]

    The study gives the monthly mean and standard deviation of temperature (and salinity) in the surface layer at the coastal stations between Sognesjøen and Vardø, for the period 1936 – 1989.

[Figure: monthly mean temperature in the surface layer at all stations]

By employing a specific temperature profile in the simulation model we were able to estimate the probability distribution for one-cycle biomass at that location, as given in the figure below.

[Figure: probability distribution for one-cycle biomass at the location (position-n7024)]

Having the probability distribution for production, we added forecasts for costs and prices as well as for their variance. The probability distributions for production also give the probability distribution for the necessary investment, so that in the end we were able to calculate the probability distribution for the value of the entity (equity).

[Figure: probability distribution for the value of the fish farm operation]

  • What is the correct company value?


Nobel Prize winner in Economics, Milton Friedman, has said: "the only concept/theory which has gained universal acceptance by economists is that the value of an asset is determined by the expected benefits it will generate".

Value is not the same as price. Price is what the market is willing to pay. Even if the value is high, most buyers want to pay as little as possible. One basic relationship will be the investor's demand for return on capital – the investor's required rate of return. There will always be alternative investments, and in a free market the investor will compare the attractiveness of the investment alternatives against his demand for return on invested capital. If the required return on invested capital exceeds the investment's future capital proceeds, the investment is considered less attractive.

[Table: value versus price]

    One critical issue is therefore to estimate and fix the correct company value that reflects the real values in the company. In its simplest form this can be achieved through:

• Budgeting a simple cash flow for the forecast period, with a fixed interest cost throughout the period, and adding the value to the booked balance.

This evaluation will be an indicator, but it implies a series of simplifications that can distort reality considerably. For instance, real balance sheet values generally differ from book values. Proceeds/dividends are paid out according to legislation, and the level of debt will normally vary throughout the forecast period. These are some of the factors suggesting that the premises mentioned open up the possibility of substantial deviation compared to an integrated and detailed evaluation of the company's real values.

    A more correct value can be provided through:

• Correcting the opening balance, forecasting and budgeting operations, and estimating complete income statements and balance sheets for the whole forecast period, incorporating the market-weighted average cost of capital when discounting.

The last method is considerably more demanding, but will give a valuation result that can be tested and that can also take into consideration qualitative values that are implicitly part of the forecast.
The result is then used as input to a risk analysis, such that the probability distribution for the value under the chosen valuation method appears. With this method a more correct picture emerges of what the expected value is, given the set of assumptions and inputs.
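A compressed sketch of that last step (illustrative figures only; in practice the cash flows come out of the full simulated income statements and balance sheets):

```python
import numpy as np

rng = np.random.default_rng(11)
runs, years = 1000, 10
wacc, terminal_growth = 0.09, 0.02   # assumed market-weighted cost of capital and long-run growth

# Assumed free-cash-flow model: uncertain cumulative growth around a base of 100
base_fcf = 100.0
growth = rng.normal(0.03, 0.02, size=(runs, years)).cumsum(axis=1)
fcf = base_fcf * np.exp(growth)

discount = (1.0 + wacc) ** -np.arange(1, years + 1)
terminal = fcf[:, -1] * (1 + terminal_growth) / (wacc - terminal_growth) * discount[-1]
value = fcf @ discount + terminal    # simulated company value, one figure per run

for q in (5, 50, 95):
    print(f"{q:>2}% percentile of company value: {np.percentile(value, q):,.0f}")
```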

    The better the value is explained, the more likely it is that the price will be “right”.

    The chart below illustrates the method.

[Chart: value versus price]