
Category: Balance sheet simulation

  • Stochastic Balance Simulation


    This entry is part 1 of 6 in the series Balance simulation

    Introduction

Most companies have some sort of model describing the company's operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on single-value forecasts: the expected or average values of the input data – sales, costs, interest and currency rates, etc. We know, however, that forecasts based on average values are on average wrong (Savage, 2002). In addition, deterministic models will miss the important dimension of uncertainty – the dimension that reveals both the different risks facing the company and the opportunities they produce.

In contrast, a stochastic model is calculated a large number of times, each time with values for the input variables drawn from their full ranges of possible values. Each run then gives one probable realization of future cash flow, of the company's equity value, etc. With thousands of runs we can plot the relative frequencies of the calculated values:
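A minimal sketch of the technique in Python – the distributions and the crude perpetuity valuation below are purely illustrative assumptions, not the S@R model:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
N = 10_000  # number of simulation runs

# Illustrative input distributions - placeholders, not actual model inputs
sales = rng.normal(1_000, 150, N)                # yearly sales
margin = rng.triangular(0.05, 0.12, 0.20, N)     # EBITDA margin
wacc = rng.normal(0.08, 0.01, N)                 # discount rate

# One crude perpetuity valuation per run: equity = EBITDA / Wacc - net debt
net_debt = 400.0
equity_value = sales * margin / wacc - net_debt

# Plot the relative frequencies of the calculated equity values
plt.hist(equity_value, bins=100, density=True)
plt.xlabel("Equity value")
plt.ylabel("Relative frequency")
plt.show()
```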

In this way we generate the probability distribution for the company's equity value. In insurance, this type of technique is often called Dynamic Financial Analysis (DFA), which is actually a fitting name.

    The Balance Simulation Model

The main tool in the S@R toolbox is the balance model. The starting point is the company's balance sheet, which is treated as the simulation's opening balance. In the case of a greenfield project – new factories, power plants, airports, etc. built from scratch – the opening balance is empty.

The successive balances are then built from the profit and loss account, by simulating the company's operations through an EBITDA model that mimics the real-life operations. Investments can be driven by demand (capacity calculations) or by investment programs giving the necessary or planned production capacity. Throughout the simulation the model will raise debt (short and/or long term) or equity (domestic or foreign) according to the financial strategy set out by the company and the difference between cash outflow and inflow, adjusted for the minimum cash level.

Since this is a dynamic model, it will raise equity when losses occur and/or the maximum debt/equity ratio has been exceeded. On the other hand, it will repay loans, pay dividends, repurchase shares or purchase marketable securities with excess cash (cash above the operational need) – all in line with the board's shareholder strategy.

    The ledger and Double-entry Bookkeeping

The activity described in the EBITDA model – investments, purchase of raw materials, production, payment of wages, income from sales, payment of special taxes on investments, etc. – is registered as transactions in the ledger, following a standard chart of accounts with double-entry bookkeeping. All financial transactions – loan repayments, cash movements, taxes paid and deferred, agio and disagio, etc. – are posted in the ledger in the same fashion. Currently, approximately 400 accounts are in use.

    The Trial Balance and the Financial Statements

The trial balance (post-closing) is compiled and checked for balance between total debits and total credits. The income statement is then prepared using the revenue and expense accounts from the trial balance, and the balance sheet is prepared from the asset and liability accounts by including net income with the other equity accounts – using the International Financial Reporting Standards (IFRS).

The general purpose of producing the trial balance is to ensure that the entries in the ledger are mathematically correct. Bear in mind that every run in a simulation will produce a number of entries in the ledger, and that they might differ not only in size but also in type, depending on the realized states of the company's operations (see above). We therefore need to be sure that the final financial statements – for every run – are correctly produced, since they will be the basis for all further financial analysis of the company.
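The principle can be sketched in a few lines; the accounts and transactions below are illustrative only, not the model's actual chart of accounts:

```python
from collections import defaultdict

class Ledger:
    """Minimal double-entry ledger: every transaction must balance."""

    def __init__(self):
        # account name -> net balance (debits positive, credits negative)
        self.accounts = defaultdict(float)

    def post(self, entries):
        """Post one transaction given as (account, debit, credit) tuples."""
        if abs(sum(d - c for _, d, c in entries)) > 1e-9:
            raise ValueError("transaction does not balance")
        for account, debit, credit in entries:
            self.accounts[account] += debit - credit

    def trial_balance(self):
        """Sums to zero when the ledger is mathematically correct."""
        return sum(self.accounts.values())

ledger = Ledger()
ledger.post([("Bank", 500.0, 0.0), ("Equity", 0.0, 500.0)])         # equity raised
ledger.post([("Raw materials", 120.0, 0.0), ("Bank", 0.0, 120.0)])  # purchase
assert abs(ledger.trial_balance()) < 1e-9  # the check performed for every run
```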

There are of course other sources of errors in bookkeeping – compensating errors, errors of omission, errors of principle, etc. – but after many years of use, with millions of runs, we feel confident that the ledger and financial statements are produced correctly. The point is that serious problems need serious models.

However, there are more benefits to be had from simulating the ledger and trial balance:

1. It increases the model's transparency: the trial balance can be printed out and audited. Together with the model's extensive reporting and error/consistency controls, it is no longer a 'black box' to the user.
2. It makes it easy to plug in new EBITDA models for other types of industry, giving an automated check for consistency with the main balance simulation model.
3. It is used to ensure correct solving of all implicit equations in the model. The most obvious is of course the interest and bank balance equation (interest depends on the bank balance and the bank balance depends on the interest), but others, like translation hedging and limits set by the company's financial strategy, create large and complicated systems of simultaneous equations (see the sketch below this list).
4. The trial balance changes from year to year are also used to ensure correct year-to-year balance transitions.
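As an illustration of the third point, here is a sketch of how the circular interest/bank-balance equation can be solved by fixed-point iteration; the interest convention used is an assumption for the example:

```python
def interest_and_closing_balance(opening, net_cash_flow, rate,
                                 tol=1e-10, max_iter=100):
    """Solve the circular interest/bank-balance equation by iteration.

    Assumed convention (for illustration only): interest accrues on the
    average of the opening and closing balance, and the closing balance
    includes the interest received.
    """
    closing = opening + net_cash_flow          # first guess: no interest
    for _ in range(max_iter):
        interest = rate * (opening + closing) / 2.0
        new_closing = opening + net_cash_flow + interest
        if abs(new_closing - closing) < tol:   # fixed point reached
            return new_closing, interest
        closing = new_closing
    raise RuntimeError("no convergence")

closing, interest = interest_and_closing_balance(100.0, 20.0, 0.05)
```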

    Financial Analysis, Financial Measures and Valuation

Given the framework described above, financial analysis can be performed, and the expected values, variability and probability distributions for the different types of ratios – profitability, liquidity, activity, debt and equity, etc. – can be calculated and given as graphs. All important measures are calculated at least twice, from different starting points, to ensure consistency and correct solving of implicit equations.

The following table shows the reconciliation of Economic Profit, initially calculated as (ROIC – WACC) multiplied by Invested Capital:
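The identity behind the reconciliation is straightforward: since ROIC = NOPLAT / Invested Capital, Economic Profit computed from the spread and from the capital charge must agree. A small check with illustrative figures:

```python
invested_capital = 1_000.0
noplat = 90.0        # net operating profit less adjusted taxes
wacc = 0.07

roic = noplat / invested_capital                    # = 9.0%
ep_from_spread = (roic - wacc) * invested_capital   # (ROIC - WACC) x IC
ep_from_charge = noplat - wacc * invested_capital   # NOPLAT - capital charge

assert abs(ep_from_spread - ep_from_charge) < 1e-9  # both equal 20.0
```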

The motivation for doing all these consistency controls – in all nearly one hundred – lies in previous experience with cash flow/valuation models written in Excel. Their level of detail is more often than not so low that there is no way to establish whether they are right or wrong.

More interesting than the ratios are the yearly distributions for EBITDA, EBIT, NOPLAT, profit (loss) for the period, free cash flow, economic profit, ROIC, Wacc, debt, equity and equity value, etc., giving a visual picture of the uncertainties and risks the company faces:

Financial analysis is the conversion of financial data into useful information for decision making. Virtually any use of financial statements or other financial data for some purpose is therefore financial analysis, and it is the primary focus of accounting and finance. Financial analysis can be internal (e.g., decision analysis by a company using internal data to understand or improve management and operating results) or external (e.g., comprehensive analysis for the purposes of commercial lending, mergers and acquisitions or investment activities). The key is how to analyze the available data to make correct decisions.


    Input

As input, the model needs parameter values and operational data. The parameter values fall into several groups:

1. Parameters describing investors' preferences: Market risk premium etc.
2. Parameters describing the company's financial strategy: Leverage, Long/Short-term Debt ratio, Expected Foreign/Domestic Debt Ratio, Economic Depreciation, Maximum Dividend Pay-out Ratio, Translation Hedging Strategy etc.
3. Parameters describing the economic regime under which it operates: Taxes, Depreciation Scheme etc.
4. The Opening Balance etc.

Since the model has to produce stochastic forecasts of interest and exchange rates, it will need, for every currency involved (including lower and upper 5% probability limits):

1. The yield curves,
2. The expected yearly inflation, and
3. Depending on the forecast method(s) chosen for the exchange rates: the different currencies' expected risk premiums, real exchange rates, etc.

Since there is a large number of parameters, they are usually read from an Excel template, but the program will, if necessary, ask for missing parameter values or report inconsistent ones.
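A minimal sketch of such a parameter check – the parameter names and consistency rules are invented for the example:

```python
REQUIRED = ["market_risk_premium", "leverage",
            "max_dividend_payout", "tax_rate"]   # a tiny invented subset

def check_parameters(params):
    """Return a list of problems: missing or inconsistent parameter values."""
    problems = [f"missing: {name}" for name in REQUIRED if name not in params]
    if not 0 <= params.get("tax_rate", 0) <= 1:
        problems.append("inconsistent: tax_rate must lie between 0 and 1")
    if params.get("max_dividend_payout", 0) < 0:
        problems.append("inconsistent: max_dividend_payout cannot be negative")
    return problems

print(check_parameters({"leverage": 0.6, "tax_rate": 1.4}))
# ['missing: market_risk_premium', 'missing: max_dividend_payout',
#  'inconsistent: tax_rate must lie between 0 and 1']
```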

The company's operations are best described through an EBITDA model, even if prices, costs, production coefficients and their variability can also be read from an Excel template. A dedicated EBITDA model will always give the opportunity for a more detailed, and in some cases more complex, description of the operations, including forecast and demand models, 'exotic' taxes, real options strategies, etc.

    Output

S@R has set out to create models that can give answers to both deterministic and stochastic questions. The tables will answer most deterministic issues, while graphs must be used to answer the risk and uncertainty related questions:

    [TABLE=6]

1. In all, 27 different reports with more than 70 pages describing operations and the economics of operations.
2. In addition, the probability distributions for all input and output variables are produced.

    Use

The models work by linking dedicated EBITDA models to a holistic balance simulation, taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

Both the deterministic and stochastic balance simulation can be set up in two different ways:
1. by using an EBITDA model to describe the company's operations, or
2. by using coefficients of fabrication (e.g. kg of flour per 1,000 loaves of bread) as direct input to the balance model.

The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

The use of coefficients of fabrication and their variations is a low-effort (low-cost) alternative, using the internal accounting as its basis. This will in many cases give a 'good enough' description of the company – its risks and opportunities. The data needed for the company's economic environment (taxes, interest rates, etc.) will be the same in both alternatives.
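A minimal sketch of the idea, with invented coefficients for a bakery; in a stochastic run the coefficients and prices would be drawn from their distributions:

```python
# Invented coefficients of fabrication: input consumed per 1,000 loaves
coefficients = {"flour_kg": 550.0, "yeast_kg": 8.0, "energy_kwh": 900.0}
unit_prices = {"flour_kg": 0.60, "yeast_kg": 4.00, "energy_kwh": 0.12}

loaves = 2_500_000        # yearly production
price_per_loaf = 1.40     # sales price
fixed_cost = 800_000.0    # wages, rent, etc.

cost_per_1000 = sum(coefficients[k] * unit_prices[k] for k in coefficients)
variable_cost = cost_per_1000 * loaves / 1_000
revenue = loaves * price_per_loaf

ebitda = revenue - variable_cost - fixed_cost
```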

In some cases we have used both approaches for the same client, using the second approach for smaller daughter companies with production structures differing from the main company's. The second approach can also be considered an introduction and stepping stone to a more holistic EBITDA model.

What problems do we solve?

• The aim, regardless of approach, is to quantify not only the company's single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
• This will improve budget stability through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to target more realistic budgets – with better stability and increased company value as a result.
• Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modeling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
• This is most clearly seen when effort is put into correctly evaluating strategies', projects' and investments' effects on the enterprise. The best way to do this is by comparing and choosing strategies through analyzing the individual strategies' risks and potential – and selecting the alternative that is stochastically dominant, given the company's chosen risk profile.
• Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm's value within the firm's feasible set of possibilities.

Strategy@Risk takes advantage of a programming language developed for and used in financial risk simulation. We have used this language for over 25 years, and have developed a series of simulation models for industry, banks and financial institutions.

One of the language's strengths is its ability to solve implicit equations in multiple dimensions. For the specific problems we seek to solve this is a necessity, since it provides the degrees of freedom needed to formulate the problems properly.

The Strategy@Risk tools have highly advanced properties:

• Using models written in a dedicated financial simulation language (with code and data separated; see The risk of spreadsheet errors).
• Solving implicit systems of equations, giving a unique WACC calculated for every period and ensuring that the "Free Cash Flow" valuation always equals the "Economic Profit" valuation.
• Programs and models in "Windows end-user" style.
• Extended tests for consistency of input, calculations and results.
• Transparent reporting of assumptions and results.

    References

    Savage, Sam L. “The Flaw of Averages”, Harvard Business Review, November 2002, pp. 20-21

Mukherjee, M. (2003). Financial Accounting. New York: Harper Perennial. ISBN 9780070581555.

  • The Case of Enterprise Risk Management


    This entry is part 2 of 4 in the series A short presentation of S@R


    The underlying premise of enterprise risk management is that every entity exists to provide value for its stakeholders. All entities face uncertainty and the challenge for management is to determine how much uncertainty to accept as it strives to grow stakeholder value. Uncertainty presents both risk and opportunity, with the potential to erode or enhance value. Enterprise risk management enables management to effectively deal with uncertainty and associated risk and opportunity, enhancing the capacity to build value. (COSO, 2004)

    The evils of a single point estimate

    Enterprise risk management is a process, effected by an entity’s board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives. (COSO, 2004)

Traditionally, when estimating costs, project value or equity value, or when budgeting, one number is generated – a single point estimate. There are many problems with this approach. In budget work this point is too often given as the best the management can expect, but in some cases budgets are set artificially low, generating bonuses for later performance beyond budget. The following graph depicts the first case.

[Figure: Budgeted, actual and expected EBITDA]

Here we have simulated the probability distribution for next year's EBITDA, based on the production and market structure and on management's assumptions about the variability of all relevant input and output variables. The graph gives the budgeted value, the actual result and the expected value. Both budget and actual value are above the expected value, but the budgeted value was far too high, giving a realized EBITDA below budget with more than 80% probability. In such a case the board will be misled with regard to the company's ability to earn money, and all subsequent decisions based on the budgeted EBITDA can endanger the company.
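With the simulated outcomes at hand, the probability of falling short of budget is a one-liner; the lognormal distribution below is only a stand-in for the simulated one:

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for the simulated EBITDA distribution (illustrative lognormal)
ebitda = rng.lognormal(mean=4.0, sigma=0.35, size=10_000)

budget = 80.0   # the (too optimistic) budgeted EBITDA

print(f"Expected EBITDA    : {ebitda.mean():.1f}")
print(f"P(EBITDA < budget) : {np.mean(ebitda < budget):.0%}")   # about 86%
```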

    The organization’s ERM system should function to bring to the board’s attention the most significant risks affecting entity objectives and allow the board to understand and evaluate how these risks may be correlated, the manner in which they may affect the enterprise, and management’s mitigation or response strategies. (COSO, 2009)

It would have been far preferable for the board to be given both the budget value and the accompanying probability distribution, allowing it to make an independent judgment about the possible size of next year's EBITDA. Only then will the board – from the shape of the distribution, its location and the point estimate of the budgeted EBITDA – be able to assess the risk and opportunity facing the company.

    Will point estimates cancel out errors?

In the following we measure the deviation of the actual result both from the budget value and from the expected value. The blue dots represent daughter companies located in different countries. For each company we have the deviation (in percent) of the budgeted EBITDA (bottom axis) and of the expected value (left axis) from the actual EBITDA observed 1½ years later.

If the deviations for a company fall in the upper right quadrant, they are positive for both budget and expected value – the company is overachieving.

If they fall in the lower left quadrant, they are negative for both budget and expected value – the company is underachieving.

If they fall in the upper left quadrant, the deviation is negative for budget and positive for expected value – the company is overachieving, but had too high a budget.

With left-skewed EBITDA distributions there should not be any observations in the lower right quadrant; that will only happen when the distribution is skewed to the right – and then there will not be any observations in the upper left quadrant.
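The quadrant logic can be made explicit in a few lines; the deviation definitions are assumptions for the example:

```python
def quadrant(budget_dev, expected_dev):
    """Classify a company by the signs of its two EBITDA deviations.

    budget_dev:   deviation of actual from budget (percent, assumed definition)
    expected_dev: deviation of actual from expected value (percent)
    """
    if budget_dev >= 0 and expected_dev >= 0:
        return "upper right: overachieving"
    if budget_dev < 0 and expected_dev < 0:
        return "lower left: underachieving"
    if budget_dev < 0 and expected_dev >= 0:
        return "upper left: overachieving, but the budget was set too high"
    return "lower right: only possible with a right-skewed EBITDA distribution"

print(quadrant(-5.0, 12.0))   # -> upper left
```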

The graph below shows that two companies have seriously underperformed, and that the budget process did not catch the risk they were facing. The rest of the companies have done very well, though some have seriously underestimated the opportunities manifested in the actual results. From an economic point of view, the mother company would of course have preferred all companies (blue dots) above the x-axis, but due to the stochastic nature of EBITDA it has to accept that some will always fall below. Risk-wise, it would have preferred the companies to fall to the right of the y-axis, but due to budget uncertainties it has to accept that some will always fall to the left. However, large deviations both below the x-axis and to the left of the y-axis add to the company's risk.

[Figure: Deviations of budgeted and expected EBITDA from actual results]

    A situation like the one given in the graph below is much to be preferred from the board’s point of view.

[Figure: Preferred pattern of deviations from budget and expected value]

The graphs above, taken from real life, show that budgeting errors will not cancel out even across similar daughter companies. Consolidating the companies will give the mother company a left-skewed EBITDA distribution. They also show that you need to be prepared for deviations, both positive and negative – you need a plan. So how do you get a plan? You make a simulation model! (See Pdf: Short-presentation-of-S@R#2)

    Simulation

The Latin verb simulare means "to make like" or "to create an exact representation" – to imitate. The purpose of a simulation model is to imitate the company and its environment, so that its functioning can be studied. The model can be a test bed for assumptions and decisions about the company. By creating a representation of the company, a modeler can perform experiments that are impossible or prohibitively expensive in the real world. (Sterman, 1991)

    There are many different simulation techniques, including stochastic modeling, system dynamics, discrete simulation, etc. Despite the differences among them, all simulation techniques share a common approach to modeling.

    Key issues in simulation include acquisition of valid source information about the company, selection of key characteristics and behaviors, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.

Optimization models are prescriptive, while simulation models are descriptive. A simulation model does not calculate what should be done to reach a particular goal, but clarifies what could happen in a given situation. The purpose of simulations may be foresight (predicting how systems might behave in the future under assumed conditions) or policy design (designing new decision-making strategies or organizational structures and evaluating their effects on the behavior of the system). In other words, simulation models are "what if" tools. Often such "what if" information is more important than knowledge of the optimal decision.

    However, even with simulation models it is possible to mismanage risk by (Stulz, 2009):

• Over-reliance on historical data
• Using too narrow risk metrics – measures such as value at risk, probably the single most important measure in financial services, have underestimated risks
• Overlooking knowable risks
• Overlooking concealed risks
• Failure to communicate effectively – failing to appreciate the complexity of the risks being managed
• Not managing risks in real time – you have to be able to monitor changing markets and respond appropriately; you need a plan

Being fully aware of the possible pitfalls, we have methods and techniques that can overcome these issues, and since we estimate the full probability distributions we can deploy a number of risk metrics, without having to rely on simple measures like value at risk – which we actually never use.

    References

    COSO, (2004, September). Enterprise risk management — integrated framework. Retrieved from http://www.coso.org/documents/COSO_ERM_ExecutiveSummary.pdf

    COSO, (2009, October). Strengthening enterprise risk management for strategic advantage. Retrieved from http://www.coso.org/documents/COSO_09_board_position_final102309PRINTandWEBFINAL_000.pdf

    Sterman, J. D. (1991). A Skeptic’s Guide to Computer Models. In Barney, G. O. et al. (eds.),
    Managing a Nation: The Microcomputer Software Catalog. Boulder, CO: Westview Press, 209-229.

Stulz, R.M. (2009, March). Six ways companies mismanage risk. Harvard Business Review (The Magazine). Retrieved from http://hbr.org/2009/03/six-ways-companies-mismanage-risk/ar/1


  • A short presentation of S@R


    This entry is part 1 of 4 in the series A short presentation of S@R


    My general view would be that you should not take your intuitions at face value; overconfidence is a powerful source of illusions. Daniel Kahneman (“Strategic decisions: when,” 2010)

Most companies have some sort of model describing the company's operations. They are mostly used for budgeting, but in some cases also for forecasting cash flow and other important performance measures. Almost all are deterministic models based on expected or average values of the input data: sales, costs, interest and currency rates, etc. We know, however, that forecasts based on average values are on average wrong. In addition, deterministic models will miss the important uncertainty dimension that reveals both the different risks facing the company and the opportunities they produce.

    S@R has set out to create models (See Pdf: Short presentation of S@R) that can give answers to both deterministic and stochastic questions, by linking dedicated EBITDA models to holistic balance simulation taking into account all important factors describing the company. The basis is a real balance simulation model – not a simple cash flow forecast model.

[Figure: Generic simulation model]

Both the deterministic and stochastic balance simulation can be set up in two different ways:

1. by using an EBITDA model to describe the company's operations, or
2. by using coefficients of fabrication as direct input to the balance model.

The first approach implies setting up a dedicated EBITDA subroutine to the balance model. This will give detailed answers to a broad range of questions about operational performance and uncertainty, but entails a higher degree of effort from both the company and S@R.

The use of coefficients of fabrication and their variations is a low-effort (low-cost) alternative, using the internal accounting as its basis. This will in many cases give a 'good enough' description of the company – its risks and opportunities. The data needed for the company's economic environment (taxes, interest rates, etc.) will be the same in both alternatives.

[Figure: EBITDA model]

In some cases we have used both approaches for the same client, using the second approach for smaller daughter companies with production structures differing from the main company's. The second approach can also be considered an introduction and stepping stone to a more holistic EBITDA model.

    What problems do we solve?

• The aim, regardless of approach, is to quantify not only the company's single and aggregated risks, but also its potential, thus making the company capable of performing detailed planning and of executing earlier and more apt actions against risk factors.
• This will improve budget stability through higher insight into cost-side risks and income-side potentials. This is achieved by an active budget-forecast process; the control-adjustment cycle will teach the company to target more realistic budgets – with better stability and increased company value as a result.
• Experience shows that the mere act of quantifying uncertainty throughout the company – and, through modelling, describing the interactions and their effects on profit – in itself over time reduces total risk and increases profitability.
• This is most clearly seen when effort is put into correctly evaluating strategies', projects' and investments' effects on the enterprise. The best way to do this is by comparing and choosing strategies through analysing the individual strategies' risks and potential – and selecting the alternative that is stochastically dominant, given the company's chosen risk profile.
• Our aim is therefore to transform enterprise risk management from only safeguarding enterprise value to contributing to the increase and maximization of the firm's value within the firm's feasible set of possibilities.

    References

Strategic decisions: When can you trust your gut? (2010, March). McKinsey Quarterly.

  • WACC, Uncertainty and Infrastructure Regulation


    This entry is part 2 of 2 in the series The Weighted Average Cost of Capital


There is a growing consensus that the successful development of infrastructure – electricity, natural gas, telecommunications, water, and transportation – depends in no small part on the adoption of appropriate public policies and the effective implementation of these policies. Central to these policies is the development of a regulatory apparatus that provides stability, protects consumers from the abuse of market power, guards consumers and operators against political opportunism, and provides incentives for service providers to operate efficiently and make the needed capital investments (Jamison & Berg, 2008, Overview).

    There are four primary approaches to regulating the overall price level – rate of return regulation (or cost of service), price cap regulation, revenue cap regulation, and benchmarking (or yardstick) regulation. Rate of return regulation adjusts overall price levels according to the operator’s accounting costs and cost of capital. In most cases, the regulator reviews the operator’s overall price level in response to a claim by the operator that the rate of return that it is receiving is less than its cost of capital, or in response to a suspicion of the regulator or claim by a consumer group that the actual rate of return is greater than the cost of capital (Jamison, & Berg, 2008, Price Level Regulation).

We will in the following look at cost-of-service models (cost-based pricing); however, some of the reasoning will also apply to the other approaches. A number of different models exist:

    •    Long Run Average Total Cost – LRATC
    •    Long Run Incremental Cost – LRIC
•    Long Run Marginal Cost – LRMC
    •    Forward Looking Long Run Average Incremental Costs – FL-LRAIC
    •    Long Run Average Interconnection Costs – LRAIC
    •    Total Element Long Run Incremental Cost – TELRIC
    •    Total Service Long Run Incremental Cost – TSLRIC
    •    Etc.

    Where:
    Long run: The period over which all factors of production, including capital, are variable.
    Long Run Incremental Costs: The incremental costs that would arise in the long run with a defined increment to demand.
    Marginal cost: The increase in the forward-looking cost of a firm caused by an increase in its output of one unit.
    Long Run Average Interconnection Costs: The term used by the European Commission to describe LRIC with the increment defined as the total service.

We will not discuss the merits and use of the individual methods, only direct attention to the fact that an essential ingredient in all of them is their treatment of capital and the calculation of the cost of capital – the Wacc.

Calculating Wacc in a World without Uncertainty

Calculating the Wacc for the current year is a straightforward task: we know for certain the interest rates (risk-free rate and credit risk premium) and tax rates, the budget values for debt and equity, the market premium, the company's beta, etc.

There is however a small snag: should we use the book value of equity, or should we calculate the market value of equity and use this in the Wacc calculation? The latter approach is the recommended one (Copeland, Koller, & Murrin, 1994, pp. 248-250), but it implies a company valuation with calculation of the Wacc for every year in the forecast period. The difference between the two approaches can be large – only when book value equals market value for every year in the future will they give the same Wacc.

In the example below the market value of equity is lower than the book value, hence the market value Wacc is lower than the book value Wacc. Since this company has a low and declining ROIC, the value of equity is decreasing and hence so is the Wacc.

[Figure: Wacc and Wacc weights]

Calculating the Wacc for a specific company for a number of years into the future ((For some telecom cases, up to 50 years.)) is not a straightforward task. The Wacc is no longer a single value, but a time series with values varying from year to year.

Using the average value of the Wacc can quickly lead you astray. Using an average in e.g. an LRIC model for telecommunications regulation, to determine the price paid by competitors for services provided by an operator with significant market power (the incumbent), will give too low a price in the first years and too high a price in the later years when the series is decreasing, and vice versa. So the use of an average value for the Wacc can either add to the incumbent's problems or give him a windfall income.

The same applies to the use of book value equity vs. market value equity. If for the incumbent the market value of equity is lower than the book value, the price paid by the competitors when book value Wacc is used will be too high, the incumbent will have a windfall gain, and vice versa.

Some advocate the use of a target capital structure (Copeland, Koller, & Murrin, 1994, p. 250) to avoid the computational difficulties (solving implicit equations) of using market value weights in the Wacc calculation. But in real life it can be very difficult to reach and maintain a fixed structure, and it does not solve the problem of the market value of equity deviating from the book value.
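For a single year and a perpetuity valuation, the circular market-value problem can be sketched as a fixed-point iteration; the valuation formula is deliberately simplified, the actual model solves the full multi-year system:

```python
def market_value_wacc(fcff, debt, cost_equity, cost_debt, tax,
                      tol=1e-9, max_iter=200):
    """Solve the circular Wacc / market-value-of-equity problem.

    Simplifying assumptions for illustration: firm value is a perpetuity
    FCFF / Wacc, and the Wacc weights use market values.
    """
    equity = debt   # arbitrary starting guess for market value of equity
    for _ in range(max_iter):
        v = equity + debt
        wacc = (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax)
        new_equity = fcff / wacc - debt    # perpetuity firm value, less debt
        if abs(new_equity - equity) < tol:
            return wacc, new_equity
        equity = new_equity
    raise RuntimeError("no convergence")

wacc, equity = market_value_wacc(fcff=100.0, debt=400.0,
                                 cost_equity=0.10, cost_debt=0.05, tax=0.25)
```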

Calculating Wacc in a World with Uncertainty

In the real world the future values of most, if not all, variables will be highly uncertain – in the long run even the tax rates will vary.

The 'long run' aspect of the methods therefore implies an ex-ante (before the fact) treatment of a number of variables – inflation, interest and tax rates, demand, investments, etc. – that have to be treated as stochastic variables.
This is underlined by the fact that more and more central banks are presenting their forecasts of macroeconomic variables as density tables/charts (e.g. Federal Reserve Bank of Philadelphia, 2009) or as fan charts (Nakamura & Shinichiro, 2008), like the one below from the Swedish central bank (Sveriges Riksbank, 2009):

[Figure: Fan chart from Sveriges Riksbank]

Fan charts like this visualise the region of uncertainty, or the possible yearly event space, for central variables. These variables will also be important exogenous variables in any corporate valuation, as value or cost drivers. Add to this all the other variables that have to be taken into account to describe the corporate operation.

Now, for every possible outcome of any of these variables we will have a different value of the company and its equity, and hence a different Wacc. So we will not have one time series of the Wacc, but a large number of different time series, all equally probable. Actually, the probability of having forecast any single series correctly is approximately zero.

Then there is the question of how far ahead it is feasible to forecast macro variables without having to use just the unconditional mean (Galbraith & Tkacz, 2007). In the charts above the 'content horizon' is set to approximately 30 months; in other cases the horizon can be 40 months or more (Adolfson, Andersson, Linde, Villani, & Vredin, 2007).

As is evident from the charts, the fan width increases as we lengthen the horizon. This is an effect of the forecast methods, as the band of forecast uncertainty widens the farther we go into the future.

The future nominal values of GDP, costs, etc. will show even greater variation, since these values will depend on the growth rates' paths to that point in time.

Monte Carlo Simulation

    A possible solution to the problems discussed above is to use Monte Carlo techniques to forecast the company’s equity value distribution – coupled with market value weights calculation to forecast the corresponding yearly Wacc distributions:

[Figure: Simulated Wacc distribution for 2012]

This is the approach we have implemented in our models – it will not give a single value for the Wacc, but its distribution. If you need a single value, the mean or mode from the yearly distributions is better than the Wacc found by using average values of the exogenous variables – cf. Jensen's inequality (Savage & Danziger, 2009).
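A small experiment illustrates the point: discounting with the average rate is not the same as averaging the discounted values, because 1/Wacc is convex. The distribution here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
wacc = rng.uniform(0.05, 0.11, 1_000_000)   # uncertain rate, mean 8%
cash_flow = 100.0                           # a perpetual cash flow

value_at_mean_wacc = cash_flow / wacc.mean()   # = 1250.0
mean_of_values = (cash_flow / wacc).mean()     # about 1314, systematically higher

# E[CF / Wacc] > CF / E[Wacc] because 1/x is convex: Jensen's inequality
```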

    References

    Adolfson, A., Andersson, M.K., Linde, J., Villani, M., & Vredin, A. (2007). Modern forecasting models in action: improving macroeconomic analyses at central banks. International Journal of Central Banking, (December), 111-144.

    Copeland, T., Koller, T., & Murrin, J. (1994). Valuation. New York: Wiley.

Copenhagen Economics. (2007, February 02). Cost of capital for broadcasting transmission. Retrieved from http://www.pts.se/upload/Documents/SE/WACCforBroadcasting.pdf

Federal Reserve Bank of Philadelphia. (2009, November 16). Fourth quarter 2009 survey of professional forecasters. Retrieved from http://www.phil.frb.org/research-and-data/real-time-center/survey-of-professional-forecasters/2009/survq409.cfm

Galbraith, J. W., & Tkacz, G. (2007, August). Forecast content and content horizons for some important macroeconomic time series. Canadian Journal of Economics, 40(3), 935-953. Available at SSRN: http://ssrn.com/abstract=1001798 or doi:10.1111/j.1365-2966.2007.00437.x

    Jamison, Mark A., & Berg, Sanford V. (2008, August 15). Annotated reading list for a body of knowledge on infrastructure regulation (Developed for the World Bank). Retrieved from http://www.regulationbodyofknowledge.org/

    Nakamura, K., & Shinichiro, N. (2008). The Uncertainty of the economic outlook and central banks’ communications. Bank of Japan Review, (June 2008), Retrieved from http://www.boj.or.jp/en/type/ronbun/rev/data/rev08e01.pdf

Savage, S. L., & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

Sveriges Riksbank. (2009). The economic outlook and inflation prospects. Monetary Policy Report, (October), p. 7. Retrieved from http://www.riksbank.com/upload/Dokument_riksbank/Kat_publicerat/Rapporter/2009/mpr_3_09oct.pdf

  • The Probability of Bankruptcy


    This entry is part 3 of 4 in the series Risk of Bankruptcy


In the simulation we have calculated all four metrics for every year, and over the 250 runs their mean and standard deviation. All metrics are thus based on the same data set. During the forecast period the company invested heavily, financed partly by equity and partly by loans. The operations admittedly give a low but fairly stable return on assets. The company was, however, never at any time in need of a capital infusion to avoid insolvency. Since we now "know" the future, we can judge the metrics' ability to predict bankruptcy.

A good metric should have a low probability of rejecting a true hypothesis of bankruptcy (giving few false positives) and a high probability of rejecting a false hypothesis of bankruptcy (giving few false negatives).

In the figures below the more or less horizontal curve gives the most likely value of the metric, while the vertical red lines indicate the 90% event space. By visual inspection of the area covered by the red lines we can get an indication of the false negative and false positive rates.
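With simulated outcomes, the same rates can be computed directly instead of read off the chart; a sketch with a stand-in distribution and the original Z-score's distress cut-off:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for one year's simulated Z-score outcomes of the (healthy) company
z_scores = rng.normal(1.5, 0.6, 10_000)

# In this post's terminology, classifying the healthy company as distressed
# is a false negative; 1.81 is the original Z-score's distress cut-off
false_negative_rate = np.mean(z_scores < 1.81)
print(f"false negative rate: {false_negative_rate:.0%}")   # about 70%
```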

The Z-index shows an increase over time in the probability of insolvency, but the probability is very low for all years in the forecast period. The most striking effect is the increase in variance as we move towards the end of the simulated period, caused by the fact that uncertainty is "accumulated" over the forecast period. However, according to the Z-index, this company will not be endangered within the 15-year horizon.

[Figure: Z-index time series]

    In our case the Z-Index correctly identifies the probability of insolvency as small. By inspecting the yearly outcomes represented by the vertical lines we also find an almost zero false negative rate.

The Z-score metrics tell a different story. The Z''-score starts in the grey area and eventually ends up in the distress zone. The two others put the company in the distress zone for the whole forecast period.

[Figure: Z-score metrics time series]

Since the distress zone for the Z-score is below 1.8, a visual inspection of the area covered by the red lines indicates that most of the outcomes fall in the distress zone. The Z-score metric in this case commits type II errors, giving false negative judgements. However, it is not clear what this means – only that the company in some respects is similar to companies that have gone bankrupt.

[Figure: Z-score time series]

If we look at the Z metrics for the individual years, we find that the Z-score takes values from minus two to plus three; in fact it has a coefficient of variation ranging from 300% to 500%. In addition, there is very little evidence of the expected cumulative effect.

[Figure: Z-score coefficient of variation]

The other two metrics (Z' and Z'') show much less variation and the expected cumulative effect. The Z'-score outcomes fall entirely in the distress zone, giving a 100% false negative rate.

[Figure: Z'-score time series]

The Z''-score outcomes fall mostly in the distress zone below 1.1, but more and more fall in the grey area as we move forward in time. If we combine the safe zone with the grey zone, we get a much lower false negative rate than for both the Z and the Z' scores.

[Figure: Z''-score time series]

It is difficult to draw conclusions from this exercise, but it points to the possibility of high false negative rates for the Z metrics. The use of ratios in assessing a company's performance is often questionable, and a linear metric based on a few such ratios will obviously have limitations. The fact that the original sample consisted of equal numbers of healthy and bankrupt companies might also have biased the discriminant coefficients. In real life the failure rate is much lower than 50%!

  • Predicting Bankruptcy


    This entry is part 2 of 4 in the series Risk of Bankruptcy


    The Z-score formula for predicting bankruptcy was developed in 1968 by Edward I. Altman. The Z-score is not intended to predict when a firm will file a formal declaration of bankruptcy in a district court. It is instead a measure of how closely a firm resembles other firms that have filed for bankruptcy.

The Z-score is a classification method using a multivariate discriminant function that measures corporate financial distress and predicts the likelihood of bankruptcy within two years. ((Altman, Edward I., "Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy". Journal of Finance, (September 1968): pp. 589-609.))

Others, like Springate ((Springate, Gordon L.V., "Predicting the Possibility of Failure in a Canadian Firm". Unpublished M.B.A. Research Project, Simon Fraser University, January 1978.)), Fulmer ((Fulmer, John G. Jr., Moon, James E., Gavin, Thomas A., Erwin, Michael J., "A Bankruptcy Classification Model For Small Firms". Journal of Commercial Bank Lending (July 1984): pp. 25-37.)) and the CA-SCORE model (("C.A. – Score, A Warning System for Small Business Failures", Bilanas (June 1987): pp. 29-31.)), have later followed in Altman's track, using step-wise multiple discriminant analysis to evaluate a large number of financial ratios' ability to discriminate between future corporate failures and successes.

Since Altman's discriminant function is only linear in the explanatory variables, there have been a number of attempts to capture non-linear relations through other types of models ((Berg, Daniel. "Bankruptcy Prediction by Generalized Additive Models." Statistical Research Report. January 2005. Dept. of Math. University of Oslo. 20 Mar 2009 <http://www.math.uio.no/eprint/stat_report/2005/01-05.pdf>.)) ((Dakovic, Rada, Claudia Czado, Daniel Berg. Bankruptcy prediction in Norway: a comparison study. June 2007. Dept. of Math. University of Oslo. 20 Mar 2009 <http://www.math.uio.no/eprint/stat_report/2007/04-07.pdf>.)). Even if some of these models show a somewhat better predictive ability, we will use the better known Z-score model in the following.

Studies measuring the effectiveness of the Z-score claim the model to be accurate with >70% reliability. Altman found that about 95% of the bankrupt firms were correctly classified as bankrupt, and roughly 80% of the sick, non-bankrupt firms were correctly classified as non-bankrupt ((Altman, Edward I. "Revisiting Credit Scoring Models in a Basel 2 Environment." Finance Working Paper Series. May 2002. Stern School of Business. 20 Mar 2009 <http://w4.stern.nyu.edu/finance/docs/WP/2002/html/wpa02041.html>.)). However, others find that the Z-score tends to misclassify the non-bankrupt firms ((Ricci, Cecilia Wagner. "Bankruptcy Prediction: The Case of the CLECS." Mid-American Journal of Business 18 (2003): 71-81.)).

    The Z-score combines four or five common business ratios using a linear discriminant function to determine the regions with high likelihood of bankruptcy. The discriminant coefficients (ratio value weights) were originally based on data from publicly held manufacturers, but have since been modified for private manufacturing, non-manufacturing and service companies.

The original data sample consisted of 66 firms, half of which had filed for bankruptcy under Chapter 7. All businesses in the database were manufacturers, and small firms with assets of <$1 million were eliminated.

    The advantage of discriminant analysis is that many characteristics can be combined into a single score. A low score implies membership in one group, a high score implies membership in the other group, and a middling score causes uncertainty as to which group the subject belongs.

    The original score was as follows:

Z = 1.2 WC/TA + 1.4 RE/TA + 3.3 EBIT/TA + 0.6 ME/BL + 0.999 S/TA

where:

WC/TA = Working Capital / Total Assets
RE/TA = Retained Earnings / Total Assets
EBIT/TA = EBIT / Total Assets
S/TA = Sales / Total Assets
ME/BL = Market Value of Equity / Book Value of Total Liabilities
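For reference, a sketch of the original Z-score calculation with the commonly cited zone cut-offs (1.81 and 2.99); the input figures are invented:

```python
def altman_z(wc, re, ebit, me, sales, ta, bl):
    """Original Altman Z-score for publicly held manufacturers."""
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * me / bl + 0.999 * sales / ta)

def zone(z):
    """Commonly cited risk zones for the original Z-score."""
    if z < 1.81:
        return "distress"
    if z < 2.99:
        return "grey"
    return "safe"

z = altman_z(wc=50, re=200, ebit=90, me=600, sales=1_100, ta=1_000, bl=500)
print(round(z, 2), zone(z))   # 2.46 grey
```

In the simulation, this calculation is repeated for every run, so z becomes a distribution over the three zones rather than a single number.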

From about 1985 onwards, the Z-score has gained acceptance by auditors, management accountants, courts, and in database systems used for loan evaluation. It has been used in a variety of contexts and countries, but was designed originally for publicly held manufacturing companies with assets of more than $1 million. Later revisions take into account the book value of privately held shares, and the fact that turnover ratios vary widely in non-manufacturing industries:

    1. Z-score for publicly held Manufacturers
    2. Z’-score for private Firms
    3. Z’’-score for Manufacturers, Non-Manufacturer Industrials & Emerging Market Credits

The estimated discriminant coefficients for the different models are given in the following table: [Table=3]

The accompanying borders of the different regions – the risk zones – are given in the table below: [Table=4]

In the following calculations we will use the estimated value of equity as a proxy for market capitalization. Actually it is the other way around, since market capitalization is a guesstimate of the intrinsic equity value.

In our calculations the Z-score metrics become stochastic variables, with distributions derived both from the operational input distributions for sales, prices, costs, etc., and from the distributions for the financial variables like the risk-free interest rate, inflation, etc. The figures below are taken from the fifth year in the simulation, to be comparable with the previous Z-index calculation that gave a very low probability of insolvency.

We have in the following calculated all three Z metrics, even though only the Z-score fits the company description.

[Figure: Z-score distribution]

Using the Z-score metric we find that the company will, with high probability, be found in the distress area – it can even have a negative Z-score. This is due partly to the company having negative working capital – being partly financed by its suppliers – and partly to the use of the calculated value of equity, which can be negative.

The Z'-score is even more sombre, giving no possibility of values outside the distress area:

[Figure: Z'-score distribution]

The Z''-score, however, puts most of the observations in the grey area:

[Figure: Z''-score distribution]

Before drawing any conclusions, we will in the next post look at the time series for both the Z-index and the Z-scores. Nevertheless, one observation can already be made: the Z metric is a stochastic variable with an event space that can easily encompass all three risk zones – we therefore need the probability distribution over the zones to forecast the risk of bankruptcy.
