Financial services – Strategy @ Risk

Tag: Financial services

  • Budgeting



     

    Budgeting is one area that is well suited for Monte Carlo simulation. Budgeting involves personal judgments about the future values of a large number of variables – sales, prices, wages, downtime, error rates, exchange rates, etc. – variables that describe the nature of the business.

    Everyone who has been involved in a budgeting process knows that it is an exercise in uncertainty; however, it is seldom described this way, and even more seldom is the uncertainty actually calculated as an integrated part of the budget.

    Admittedly, a number of large public building projects are calculated this way, but more often than not the aim is only to calculate some percentile (usually the 85th) to use as the expected budget cost.

    Most managers and their staff have, based on experience, a good grasp of the range within which the values of their variables will fall. A manager’s subjective probability describes his personal judgement about how likely a particular event is to occur. It is not based on any precise computation, but is a reasonable assessment by a knowledgeable person. Selecting the budget value, however, is more difficult. Should it be the “mean” or the “most likely value”, or should the manager just delegate the setting of the values to the responsible departments?

    Now we know that the budget values might be biased for a number of reasons – most simply by bonus schemes and the like – and that budgets based on average assumptions are wrong on average ((Savage, Sam L. “The Flaw of Averages.” Harvard Business Review, November 2002: 20-21.)).

    When judging probability, people can locate the source of the uncertainty either in their environment or in their own imperfect knowledge ((Kahneman, D. and Tversky, A. “On the psychology of prediction.” Psychological Review 80 (1973): 237-251.)). When assessing uncertainty, people tend to underestimate it – often called overconfidence and hindsight bias.

    Overconfidence bias concerns the fact that people overestimate how much they actually know: when they are p percent sure that they have predicted correctly, they are in fact right on average less than p percent of the time ((Keren, G. “Calibration and probability judgments: Conceptual and methodological issues.” Acta Psychologica 77 (1991): 217-273.)).

    Hindsight bias concerns the fact that people overestimate how much they would have known had they not possessed the correct answer: events which are given an average probability of p percent before they have occurred are given, in hindsight, probabilities higher than p percent ((Fischhoff, B. “Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty.” Journal of Experimental Psychology: Human Perception and Performance 1 (1975): 288-299.)).

    We will, however, not ask for the managers’ full subjective probabilities, only for the range of possible values (the 5-95% interval) and their best guess of the most likely value. We will then use this to generate an appropriate log-normal distribution for sales, prices, etc. For investments we will use triangular distributions to avoid long tails. Where most likely values are hard to guesstimate, we will use rectangular distributions.
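
    As an illustration, here is a minimal sketch in Python of how a manager’s range can be turned into a log-normal sampling distribution. It fits the distribution from the 5% and 95% bounds alone; in practice the most likely value would also be used to shift or validate the fit, and the figures below are assumptions.

        import numpy as np

        def lognormal_from_quantiles(q05, q95, size=1000, rng=None):
            """Fit a log-normal to a manager's 5-95% range and draw samples.

            The two quantiles pin down mu and sigma of ln(X):
                ln(q05) = mu - z * sigma,  ln(q95) = mu + z * sigma
            with z = 1.6449, the standard normal 95th percentile.
            """
            rng = rng or np.random.default_rng()
            z = 1.6449
            mu = (np.log(q05) + np.log(q95)) / 2
            sigma = (np.log(q95) - np.log(q05)) / (2 * z)
            return rng.lognormal(mean=mu, sigma=sigma, size=size)

        # Example: sales judged to fall between 90 and 130 (5-95% range).
        sales = lognormal_from_quantiles(90, 130)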

    We will then proceed as if the distributions were known (Keynes):

    [Under uncertainty] there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability waiting to be summed. ((Keynes, John Maynard. “The General Theory of Employment.” Quarterly Journal of Economics (1937).))

    [Figure: budget vs. actual vs. expected EBIT]

    The data collection can easily be embedded in the ordinary budget process, by asking the managers to set the lower and upper 5% values for all variables determining the budget, and by assuming that the budget figures are the most likely values.

    This gives us the opportunity to simulate (Monte Carlo) a number of possible outcomes – usually 1000 – of net revenue, operating expenses and finally EBIT(DA).
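
    Continuing the sketch above with purely hypothetical figures, the simulation step reduces to drawing the outcomes and taking their difference:

        # Hypothetical 5-95% ranges supplied by the managers.
        revenue = lognormal_from_quantiles(900, 1300)   # net revenue
        opex = lognormal_from_quantiles(700, 900)       # operating expenses
        ebitda = revenue - opex

        budget_ebitda = 250.0                           # assumed budget figure
        print(f"P(EBITDA below budget) = {(ebitda < budget_ebitda).mean():.0%}")
        print(f"Expected EBITDA = {ebitda.mean():.0f}")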

    In this case the budget was optimistic, with approximately 84% probability of the outcome falling below it and correspondingly only about 16% probability of the outcome falling above. The accounts also proved the budget to be high, with the actual final EBIT falling closer to the expected value. In our experience the expected value is a better estimator of the final result than the budget EBIT.

    However, the most important part of this exercise is the shape of the cumulative distribution curve for EBIT. The shape gives a good picture of the uncertainty the company faces in the year to come: a flat curve indicates more uncertainty, both in the budget forecast and in the final result, than a steeper curve.

    Wisely used, the curve (distribution) can serve both to inform stakeholders about the risk being faced and to make contingency plans foreseeing adverse events.

    [Figure: perceived uncertainty in net revenue and operating expenses]

    Having the probability distributions for net revenue and operating expenses, we can calculate and plot the managers’ perceived uncertainty using coefficients of variation.

    In our material we find, on average, twice as much uncertainty in the forecasts for net revenue as in those for operating expenses.
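
    As a sketch of that calculation, reusing the simulated arrays from above (the coefficient of variation is simply the standard deviation divided by the mean):

        # A unit-free measure of the perceived uncertainty in each item.
        cv_revenue = revenue.std() / revenue.mean()
        cv_opex = opex.std() / opex.mean()
        print(f"CV net revenue: {cv_revenue:.1%}, CV operating expenses: {cv_opex:.1%}")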

    As budget values often lie above the expected value, managers expose themselves to a downside risk. We can measure this risk by the Upside Potential Ratio, which is the expected return above the budget value per unit of downside risk. It can be found using the upper and lower partial moments calculated at the budget value.
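
    A minimal sketch of that calculation, taking the first-order upper partial moment over the square root of the second-order lower partial moment at the budget value (one common definition of the ratio):

        import numpy as np

        def upside_potential_ratio(outcomes, target):
            """Expected return above target per unit of downside risk below it."""
            diff = np.asarray(outcomes, dtype=float) - target
            upside = np.maximum(diff, 0.0).mean()                    # upper partial moment, order 1
            downside = np.sqrt((np.minimum(diff, 0.0) ** 2).mean())  # downside deviation
            return upside / downside

        # Continuing the sketch above: downside risk measured at the budget value.
        upr = upside_potential_ratio(ebitda, budget_ebitda)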


  • What we do; Predictive and Prescriptive Analytics



     

    Analytics is the discovery and communication of meaningful patterns in data. It is especially valuable in areas rich with recorded information – as in all economic activities. Analytics relies on the simultaneous application of statistical methods, simulation modeling and operations research to quantify performance.

    Prescriptive analytics goes beyond descriptive, diagnostic and predictive analytics by being able to recommend specific courses of action and show the likely outcome of each decision.

    Predictive analytics will tell you what will probably happen, but will leave it up to the client to figure out what to do with it.

    Prescriptive analytics will also tell you what will probably happen, but in addition when it will probably happen and why, and thus how to take advantage of this predicted future. Since there is always more than one possible course of action, prescriptive analytics has to include: the predicted consequences of each action, an assessment of the value of those consequences, and a suggestion of the actions giving the highest equity value for the company.

    By employing simulation modeling (Monte Carlo methods) we can give answers – as probability statements – to the critical question at the top of the value staircase.

     

    [Figure: Prescriptive analytics – the value staircase]

     

    This feature is a basic element of the S@R balance simulation model, where the Monte Carlo simulation can be stopped at any point on the probability distribution for company value (e.g. a very high or very low value of the company), giving a full set of reports – P&L, balance sheet, etc. – and enabling a full post-mortem analysis: what it was that happened, and why it happened.
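
    As a minimal illustration of the idea – not the S@R model itself, and with assumed inputs and a toy valuation – one can keep the full input scenario for every trial and then inspect the trials around any chosen point on the value distribution:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1000
        scenarios = {
            "sales": rng.lognormal(mean=7.0, sigma=0.10, size=n),
            "margin": rng.normal(loc=0.12, scale=0.02, size=n),
        }
        company_value = scenarios["sales"] * scenarios["margin"] * 10  # toy valuation

        # "Stop" at the 5th percentile: which input conditions drove the worst trials?
        cutoff = np.quantile(company_value, 0.05)
        worst = company_value <= cutoff
        print("mean sales in worst 5% of trials:", scenarios["sales"][worst].mean())
        print("mean margin in worst 5% of trials:", scenarios["margin"][worst].mean())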

    Different courses of action – to repeat or avoid the result with high probability – can then be researched and assessed. The client-specific EBITDA model will capture the relationships among many factors, allowing simultaneous assessment of the risk or potential associated with a particular set of conditions and guiding decision making for candidate transactions. Even the language we use to write the models is specially developed for building decision support systems.

    Our methods also include data and information visualization, to communicate both information and acquired knowledge clearly and effectively – reinforcing comprehension and cognition.

    Firms may thus fruitfully apply analytics to their business data to describe, predict, and improve their business performance.

     

  • Who we are


    Strategy@Risk is operated by partners with long experience as CFOs, CEOs and board members in a range of businesses. As former university employees we can draw on academia when a project demands state-of-the-art knowledge in a field.

    Strategy@Risk takes advantage of a programming language developed and used for financial risk simulation. We have used this language for over 25 years, and have developed a series of simulation models for industry, banks and financial institutions.

    One of the language’s strengths is its ability to solve implicit equations in multiple dimensions. For the specific problems we seek to solve this is a necessity, as it provides the degrees of freedom needed to formulate our approach to the problems.

    The Strategy@Risk tool has highly advanced properties:

    • State of the art in financial and international trade theory.
    • Double-entry bookkeeping, using account balancing as the tool for solving the simultaneous equations that generate the P&L and balance sheet.
    • Solving implicit systems of equations, giving a unique WACC calculated for every period and ensuring that the “Free Cash Flow” value always equals the “Economic Profit” value (see the sketch after this list).
    • Programs and models in “Windows end-user” style.
    • Extended tests for consistency in input, calculations and results.
    • Transparent reporting of assumptions and results.
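
    The circularity behind the WACC point can be illustrated with a hypothetical single-period sketch (assumed figures, a perpetuity valuation, and not the S@R implementation): the value of equity depends on the WACC, while the WACC’s weights depend on the value of equity, so a consistent pair must be found iteratively.

        # Fixed-point iteration on the WACC / equity-value circularity.
        debt, fcf = 400.0, 100.0              # assumed debt and perpetual free cash flow
        cost_equity, cost_debt = 0.09, 0.05   # assumed costs of capital

        equity = 1000.0                       # initial guess for the value of equity
        for _ in range(100):
            total = equity + debt
            wacc = (equity / total) * cost_equity + (debt / total) * cost_debt
            new_equity = fcf / wacc - debt    # firm value as a perpetuity, less debt
            if abs(new_equity - equity) < 1e-9:
                break
            equity = new_equity
        print(f"consistent WACC = {wacc:.4f}, equity value = {equity:.1f}")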

    In the Strategy@Risk framework all items, whether from the profit and loss account or from the balance sheet, have individual probability distributions. These distributions are generated by combining the distributions of the factors of production that define each item.

    Variance will increase as we move down the items in the profit and loss account. The message is that even if there is low variance in the input variables (sales, prices, exchange rates, costs, etc.), metrics like NOPLAT, free cash flow, economic profit and ultimately the value of equity will have a much higher variance.
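
    A toy example of this amplification (assumed figures): subtracting a nearly fixed cost base leaves roughly the same absolute spread on a much smaller mean, so the relative variance grows sharply down the account.

        import numpy as np

        rng = np.random.default_rng(1)
        revenue = rng.normal(1000, 50, size=10_000)  # 5% relative spread in sales
        costs = rng.normal(900, 10, size=10_000)     # nearly fixed cost base
        ebit = revenue - costs                       # mean ~100, spread ~51

        cv = lambda x: x.std() / x.mean()            # coefficient of variation
        print(f"CV revenue: {cv(revenue):.0%}, CV EBIT: {cv(ebit):.0%}")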

  • Projects we have done


    Consultancy, in contrast to selling software products, is quite a delicate process. Trust is the most important asset for successfully completing a project, and S@R’s customers consider discretion to be important. That is why we have decided to publish relevant content – which provides insight into our methods of operation – only in anonymised form, often collected from different projects.

    The same applies to naming customers, but large projects have been performed in

    • Finance
    • Banking
    • Pulp & Paper
    • Airport Operations
    • Brewery
    • Aquaculture
    • Mining & Quarrying
    • Car parts
    • Rail coach production

    etc. – all for multinational companies.