Uncertainty – Strategy @ Risk

Tag: Uncertainty

  • We’ve Got Mail! (2)

    We’ve Got Mail! (2)

    This entry is part 2 of 2 in the series Self-applause

    From: slideshare

Congrats – you are in the Top 25% of Most-Viewed on SlideShare:

[Image: SlideShare notification table]

Even though we did not post anything new on SlideShare in 2014, our contributions ended up in the upper quartile of the most viewed articles on SlideShare. We believe this shows the great current interest in uncertainty and risk related to economic issues.

The article The Weighted Average Cost of Capital was originally published in the January/February 2003 issue of Financial Engineering News, and was reprinted many years later as one of “The Best of Financial Engineering News”:

[Image: reprint in “The Best of Financial Engineering News”]

And it is still a relevant and popular article, both on SlideShare and here. Since we put up our home page in 2009, the number of visitors and page loads has increased steadily:

[Image: Visitors and page loads, 2009–2014]

By the end of 2014, almost 80,000 people had visited our home page and read one or more of the articles posted there. The increase in visitors and articles read again shows the growing interest in uncertainty, risk and Monte Carlo simulation – topics we try to cover to the best of our ability.

    Thanks
    S@R

  • The risk of planes crashing due to volcanic ash

    The risk of planes crashing due to volcanic ash

    This entry is part 4 of 4 in the series Airports

[Image: the Eyjafjallajökull volcano]

When the Icelandic volcano Eyjafjallajökull had a large eruption in 2010, it led to closed airspace all over Europe, with correspondingly big losses for the airlines. In addition it led to significant problems for passengers who were stuck at various airports without getting home. In Norway we got a new word: “ash stuck” ((Askefast)) became a part of the Norwegian vocabulary.

The reason the planes were grounded is that mineral particles in volcanic ash may damage a plane’s engines, which in turn may cause it to crash. This happened in 1982, when a British Airways flight almost crashed due to volcanic particles in the engines. The risk of the same happening in 2010 was probably not large, but the consequences of a crash would have been great.

Using simulation software and a simple model, I will show how this risk can be calculated, and hence why the airspace over Europe was closed in 2010 even though the risk was not high. I have not calculated any effects following the closure, since this is neither a big model nor an in-depth analysis. It is merely meant as an example of how different issues can be modeled using Monte Carlo simulation. The variable values are not factual but my own simple estimates. The goal of this article is to show an example of modeling, not to produce a precise estimate of the actual risk.

To model the risk of dangerous ash in the air, a few key questions have to be asked and answered to describe the issue in a quantitative way.

Variable 1. Is the ash dangerous?

We first have to model the risk of the ash being dangerous to plane engines. I do that by using a so-called discrete probability distribution. It has the value 0 if the ash is not dangerous and the value 1 if it is. Then the probabilities for each of the alternatives are set. I set them to:

• 99% probability that the ash IS NOT dangerous
    • 1% probability that the ash IS dangerous

Variable 2. How many planes are in the air?

Secondly, we have to estimate how many planes are in the air when the ash becomes a problem. Around 30,000 planes are in the air over Europe daily. We can assume that if planes start crashing or getting into big trouble, the rest will immediately be grounded. I therefore only use 2/24 of these planes in the calculation (30,000 × 2/24 = 2,500).

• 2,500 planes are in the air when the problem occurs

I use a normal distribution and set the standard deviation for planes in the air in a two-hour period to 250 planes. I have no views on whether the curve is skewed one way or the other. I assume it may well be, since there probably are different numbers of planes in the air depending on the weekday, whether it’s a holiday season and so on, but I’ll leave that estimate to the air authority staff.

Variable 3. How many people are there in each plane?

Thirdly, I need an estimate of how many passengers and crew there are in each plane. I disregard the fact that there are a lot of intercontinental flights over the Eyjafjallajökull volcano, likely with more passengers than the average plane over Europe, so the curve might be more skewed than what I assume:

    • Average number of passengers/crew: 70
    • Lowest number of passengers/crew: 60
    • Highest number of passengers/crew: 95

    The reason I’m using a skewed curve here is that the airline business is constantly under pressure to fill up their planes.  In addition the number of passengers will vary by weekday and so on.  I think it is reasonable to assume that there are likely more passengers per plane rather than fewer.

Variable 4. How many of the planes in the air will crash?

The last variable that needs to be modeled is how many planes will crash should the ash prove dangerous. I assume that maybe no planes actually crash, even though the ash gets into their engines. This is the low end of the curve. In addition I have assumed the following:

• Expected share of planes that crash: 0.01%
• Maximum share of planes that crash: 1.0%

    Now we have what we need to start calculating!

The formula I use is as follows:

If(“Dangerous ash” = 0; 0)

If(“Dangerous ash” = 1; “Number of planes in the air” × “Share of planes crashing” × “Number of passengers/crew per plane”)

If the ash is not dangerous, variable 1 equals 0, no planes crash and nobody dies. If the ash is dangerous, the number of dead is the product of the number of planes in the air, the share of those planes that crash and the number of passengers/crew per plane.
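To make the mechanics concrete, here is a minimal sketch of the whole model in Python. It is not the tool used for the figures below: the post does not name the distribution of the crash share, so I assume a triangular distribution from 0% to the 1.0% maximum with 0.01% as the most likely value, and I read the passenger figures as a triangular distribution with mode 70. The simulated numbers will therefore deviate somewhat from the figures quoted below.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000  # number of Monte Carlo trials

# Variable 1: is the ash dangerous? (1% probability)
dangerous = rng.random(N) < 0.01

# Variable 2: planes in the air during the two-hour window
planes = rng.normal(2_500, 250, N)

# Variable 3: passengers and crew per plane
# (assumed triangular: min 60, most likely 70, max 95)
people = rng.triangular(60, 70, 95, N)

# Variable 4: share of the planes in the air that crash, given
# dangerous ash (assumed triangular: min 0%, mode 0.01%, max 1.0%)
crash_share = rng.triangular(0.0, 0.0001, 0.01, N)

# The formula above: zero deaths if the ash is harmless, otherwise
# planes in the air x share crashing x people per plane
deaths = np.where(dangerous, planes * crash_share * people, 0.0)

print(f"Expected number of dead:  {deaths.mean():.1f}")
print(f"99.9th percentile (tail): {np.percentile(deaths, 99.9):,.0f}")
```

Changing the two probabilities (10% instead of 1% for dangerous ash, and a ten times higher crash share) reproduces the kind of jump in expected deaths discussed further down.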

    Running this model with a simulation tool gives the following result:

[Figure: Expected value – number of dead]

As the graph shows, the expected value is low – 3 people – meaning that the probability of a major loss of planes is very low. But the consequences may be devastatingly high. In this model run there is a 1% probability that the ash is dangerous, and a 0.01% probability that planes actually crash. However, the distribution has a long tail, and a bit out in the tail there is a probability that 1,000 people crash to their death. This is a so-called shortfall risk – or the risk of a black swan, if you wish. The probability is low, but the consequences are very big.

This is one reason for the cautionary steps taken by the air authorities. Another is that the probabilities, both of the ash being dangerous and of planes crashing because of it, are unknown. Thirdly, changes in the variable values will have a big impact.

If the probability of the ash being dangerous is 10% rather than 1%, and the probability of planes crashing is 1% rather than 0.01%, around 200 dead (roughly 3 planes) are expected, while the extreme outcome is close to 6,400 dead.

[Figure: Expected value – number of dead, with higher probabilities]

This is a simplified example of the kind of modeling that is likely behind the decision to close the airspace. I don’t know which probabilities were used, but I’m sure this is how they think.

How we assess risk depends on who we are. Some of us have a high risk appetite, some a low one. I’m glad I’m not the one who has to decide whether to close the airspace. It is not an easy decision.

My model is of course very simple. There are many factors to take into account, like wind direction and strength, the intensity of the eruption and a number of other factors I don’t know about. But as an illustration, both of the factors that need to be estimated in this case and of a generic modeling exercise, this is a good example.

    Originally published in Norwegian.

• We’ve Got Mail!

We’ve Got Mail!

    This entry is part 1 of 2 in the series Self-applause

[Image: SlideShare notification]

Thanks
S@R


  • Inventory management – Some effects of risk pooling

    Inventory management – Some effects of risk pooling

    This entry is part 3 of 4 in the series Predictive Analytics

    Introduction

The newsvendor described in the previous post has decided to branch out, placing newsboys at strategic corners in the neighborhood. He will first consider three locations, but has six in his sights.

The question to be pondered is how many newspapers he should order for these three locations, and the possible effects on profit and risk (Eppen, 1979) and (Chang & Lin, 1991).

He assumes that the demand distribution he experienced at the first location will also apply to the two others, and that all locations (points of sale) can be served from a centralized inventory. For the sake of simplicity he further assumes that all points of sale can be restocked instantly (i.e. with zero lead time) at zero cost – if necessary or advantageous, by shipment from one of the other locations – and that demand at the different locations will be uncorrelated. The individual points of sale will initially have a working stock, but will have no need of safety stock.

In short, this is equivalent to having one inventory serve the newspaper sales generated by three (or six) copies of the original demand distribution:

The aggregated demand distribution for the three locations is still positively skewed (0.32), but much less so than the original (0.78), and it has a lower coefficient of variation – 27% against 45% for the original ((The quartile variation has been reduced by 37%.)):

The demand variability has thus been substantially reduced by this risk pooling ((We distinguish between ten main types of risk pooling that may reduce total demand and/or lead time variability (uncertainty): capacity pooling, central ordering, component commonality, inventory pooling, order splitting, postponement, product pooling, product substitution, transshipments, and virtual pooling. (Oeser, 2011))), and the question now is how this will influence the vendor’s profit.
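This reduction is what theory predicts: for n independent, identically distributed demands, the coefficient of variation of the pooled demand falls by a factor of √n. A minimal simulation sketch of the effect, assuming a right-skewed lognormal demand with roughly the original 45% CV (an assumption; the post works from an empirical distribution, which is part of its point):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

def cv(x):
    """Coefficient of variation: standard deviation over mean."""
    return x.std() / x.mean()

# Six locations with identical, independent, right-skewed demand
# (lognormal is an assumed stand-in for the empirical distribution)
demand = rng.lognormal(mean=7.0, sigma=0.43, size=(N, 6))

print(f"1 location: CV = {cv(demand[:, 0]):.0%}")             # ~45%
print(f"3 pooled:   CV = {cv(demand[:, :3].sum(axis=1)):.0%}")  # ~26%
print(f"6 pooled:   CV = {cv(demand.sum(axis=1)):.0%}")         # ~18%
```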

    Profit and Inventory level with Risk Pooling

    As in the previous post we have calculated profit and loss as:

    Profit = sales less production costs of both sold and unsold items
    Loss = value of lost sales (stock-out) and the cost of having produced and stocked more than can be expected to be sold
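The profit curve discussed below can be traced by simulating demand and evaluating this profit at each candidate inventory level. A sketch under illustrative assumptions – the lognormal demand, unit price and unit cost are mine, not the vendor’s actual figures, so the optimum will not match the numbers quoted from the posts:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
PRICE, COST = 2.5, 1.0  # assumed selling price and production cost per unit

# Pooled demand from three locations with identical, assumed
# right-skewed demand distributions
demand = rng.lognormal(7.0, 0.43, size=(N, 3)).sum(axis=1)

def expected_profit(q):
    """Mean profit at inventory level q: revenue from what is sold,
    less the production cost of everything stocked (sold or unsold)."""
    sold = np.minimum(demand, q)
    return (PRICE * sold - COST * q).mean()

levels = np.arange(3_000, 5_000, 25)
profits = [expected_profit(q) for q in levels]
best = levels[int(np.argmax(profits))]
print(f"Profit-maximizing inventory: {best} units "
      f"(expected profit {max(profits):,.0f})")
```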

The figure below indicates what will happen as we change the inventory level. As we successively move to higher levels (from left to right on the x-axis), expected profit (blue line) increases to a maximum of ¤16,541 at a level of 7,149 units:

Compared with running the three locations from separate warehouses at the single-warehouse optimum (profit ¤4,963 at a level of 2,729 units each, see the previous post), this risk pooling has increased the vendor’s profit by 11.1% (¤16,541 against 3 × ¤4,963 = ¤14,889) while reducing his total inventory by 12.7% (7,149 against 3 × 2,729 = 8,187 units). Centralization of the three inventories has thus been a successful operational hedge ((Risk pooling can be considered as a form of operational hedging. Operational hedging is risk mitigation using operational instruments.)) for our newsvendor, mitigating some, but not all, of the demand uncertainty.

Since this risk mitigation was a success, the newsvendor wants to calculate the possible benefits of serving six newsboys at different locations from the same inventory.

Under the same assumptions, it turns out that this gives an even better result: an increase in profit of almost 16%, while at the same time reducing the inventory by 15%:

The inventory ‘centralization’ has thus both increased profit and reduced the inventory level, compared to a strategy with inventories held at each location.

Centralizing inventory (inventory pooling) in a two-echelon supply chain may thus reduce costs and increase profits for the newsvendor carrying the inventory, but the individual newsboys may lose profit because of the pooling. On the other hand, the newsvendor will certainly lose profit if he lets the newsboys decide the levels of both their own and the centralized inventory.

One of the reasons behind this conflict of interest is that both the newsvendor and the newsboys can benefit one-sidedly from shifting the demand risk to the other party, even though overall performance may suffer as a result (Kemahlioğlu-Ziya, 2004) and (Anupindi & Bassok, 1999).

In real life, the actual risk-pooling effects will depend on the correlations between the locations’ demands. Positive correlation reduces the effect, while negative correlation increases it. If all locations were perfectly positively correlated the effect would be zero, and a correlation coefficient of minus one would maximize it.
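A quick way to see this is to simulate two locations with a controlled demand correlation and watch the pooled coefficient of variation. The bivariate normal below is an assumption chosen only because it makes the correlation easy to set:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
mu, sd = 1_000.0, 450.0  # assumed per-location demand mean and std. dev.

for rho in (-1.0, -0.5, 0.0, 0.5, 1.0):
    cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
    demand = rng.multivariate_normal([mu, mu], cov, size=N)
    pooled = demand.sum(axis=1)
    print(f"rho = {rho:+.1f}: pooled CV = {pooled.std() / pooled.mean():.1%}")
```

At rho = +1 the pooled CV equals the single-location CV (45% here, no pooling benefit); at rho = −1 the variations cancel completely and the pooled CV drops to zero.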

    The third effect

The third direct effect of risk pooling is reduced variability of expected profit. We can see it by plotting the profit variability, measured by the coefficient of variation ((The coefficient of variation is defined as the ratio of the standard deviation to the mean – also known as unitized risk.)) (CV), for the three strategies discussed above: one single inventory (warehouse), three inventories centralized versus three kept separate, and six inventories centralized versus six kept separate.

The graph below depicts the situation. The three curves show the CV of corporate profit under the three alternatives, and the vertical lines mark the point of maximum profit for each alternative.

The angle of inclination of each curve shows the profit’s sensitivity to changes in the inventory level, and the location of each curve shows the strategy’s impact on the predictability of realized profit.

A single-warehouse strategy (blue) clearly gives a much lower ability to predict future profit than the six centralized warehouses (purple), while the three centralized warehouses (green) fall somewhere in between:

So, in addition to reduced costs and increased profits, centralization also gives a more predictable result and a lower sensitivity to the inventory level – and hence greater leeway in the practical application of different policies for inventory planning.

    Summary

We have thus shown, through Monte Carlo simulation, that the benefits of pooling increase with the number of locations and that they can be calculated without knowing the closed form ((In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain “well-known” functions.)) of the demand distribution.

Since we do not need the closed form of the demand distribution, we are not limited to distributions with low demand variability or to those admitting negative demand (the normal distribution etc.). Expanding the scope of the analysis to include stochastic supply, supply disruptions, information sharing, localization of inventory etc. is a natural extension of this method ((We will return to some of these issues in later posts.)).

This opens the way for robust and efficient methods and techniques for solving problems in inventory management, unrestricted by the form of the demand distribution. Best of all, results given as graphs are more easily communicated to all parties than purely mathematical descriptions of the solutions.

    References

Anupindi, R. & Bassok, Y. (1999). Centralization of stocks: Retailers vs. manufacturer. Management Science, 45(2), 178-191. doi: 10.1287/mnsc.45.2.178, accessed 09/12/2012.

Chang, P.-L. & Lin, C.-T. (1991). Centralized Effect on Expected Costs in a Multi-Location Newsboy Problem. Journal of the Operational Research Society of Japan, 34(1), 87-92.

Eppen, G. D. (1979). Effects of centralization on expected costs in a multi-location newsboy problem. Management Science, 25(5), 498-501.

Kemahlioğlu-Ziya, E. (2004). Formal methods of value sharing in supply chains. PhD thesis, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, July 2004. http://smartech.gatech.edu/bitstream/1853/4965/1/kemahlioglu ziya_eda_200407_phd.pdf, accessed 09/12/2012.

Oeser, G. (2011). Methods of Risk Pooling in Business Logistics and Their Application. Europa-Universität Viadrina Frankfurt (Oder). URL: http://opus.kobv.de/euv/volltexte/2011/45, accessed 09/12/2012.

    Endnotes

  • The tool that would improve everybody’s toolkit

    The tool that would improve everybody’s toolkit

Edge, which every year ((http://www.edge.org/questioncenter.html)) invites scientists, philosophers, writers, thinkers and artists to opine on a major question of the moment, asked this year: “What scientific concept would improve everybody’s cognitive toolkit?”

The questions are designed to provoke fascinating, inspiring answers and are typically open-ended, such as: “What will change everything?” (2008), “What are you optimistic about?” (2007), and “How is the internet changing the way you think?” (last year’s question). Often these questions ((Since 1998)) are turned into paperback books.

This year many of the 151 contributors pointed to risk and uncertainty in their answers. In the following we bring excerpts from some of them. We would, however, advise the interested reader to look up the complete answers:

    A Probability Distribution

    The notion of a probability distribution would, I think, be a most useful addition to the intellectual toolkit of most people.

    Most quantities of interest, most projections, most numerical assessments are not point estimates. Rather they are rough distributions — not always normal, sometimes bi-modal, sometimes exponential, sometimes something else.

    Related ideas of mean, median, and variance are also important, of course, but the simple notion of a distribution implicitly suggests these and weans people from the illusion that certainty and precise numerical answers are always attainable.

    JOHN ALLEN PAULOS, Professor of Mathematics, Temple University, Philadelphia.

    Randomness

    The First Law of Randomness: There is such a thing as randomness.
    The Second Law of Randomness: Some events are impossible to predict.
    The Third Law of Randomness: Random events behave predictably in aggregate even if they’re not predictable individually

    CHARLES SEIFE, Professor of Journalism, New York University; formerly journalist, Science magazine; Author, Proofiness: The Dark Arts of Mathematical Deception.

    The Uselessness of Certainty

Every piece of knowledge, even the most solid, carries a margin of uncertainty. (I am very sure about my own name … but what if I just hit my head and got momentarily confused?) Knowledge itself is probabilistic in nature, a notion emphasized by some currents of philosophical pragmatism. Better understanding of the meaning of probability, and especially realizing that we never have, nor need, ‘scientifically proven’ facts, but only a sufficiently high degree of probability, in order to take decisions and act, would improve everybody’s conceptual toolkit.

    CARLO ROVELLI, Physicist, University of Aix-Marseille, France; Author, The First Scientist: Anaximander and the Nature of Science.

    Uncertainty

    Until we can quantify the uncertainty in our statements and our predictions, we have little idea of their power or significance. So too in the public sphere. Public policy performed in the absence of understanding quantitative uncertainties, or even understanding the difficulty of obtaining reliable estimates of uncertainties usually means bad public policy.

    LAWRENCE KRAUSS, Physicist, Foundation Professor & Director, Origins Project, Arizona State University; Author, A Universe from Nothing; Quantum Man: Richard Feynman’s Life in Science.

    Risk Literacy

    Literacy — the ability to read and write — is the precondition for an informed citizenship in a participatory democracy. But knowing how to read and write is no longer enough. The breakneck speed of technological innovation has made risk literacy as indispensable in the 21st century as reading and writing were in the 20th century. Risk literacy is the ability to deal with uncertainties in an informed way.

    GERD GIGERENZER, Psychologist; Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin; Author, Gut Feelings.

    Living is fatal

    The ability to reason clearly in the face of uncertainty. If everybody could learn to deal better with the unknown, then it would improve not only their individual cognitive toolkit (to be placed in a slot right next to the ability to operate a remote control, perhaps), but the chances for humanity as a whole.

    SETH LLOYD, Quantum Mechanical Engineer, MIT; Author, Programming the Universe.

    Uncalculated Risk

    We humans are terrible at dealing with probability. We are not merely bad at it, but seem hardwired to be incompetent, in spite of the fact that we encounter innumerable circumstances every day which depend on accurate probabilistic calculations for our wellbeing. This incompetence is reflected in our language, in which the common words used to convey likelihood are “probably” and “usually” — vaguely implying a 50% to 100% chance. Going beyond crude expression requires awkwardly geeky phrasing, such as “with 70% certainty,” likely only to raise the eyebrow of a casual listener bemused by the unexpected precision. This blind spot in our collective consciousness — the inability to deal with probability — may seem insignificant, but it has dire practical consequences. We are afraid of the wrong things, and we are making bad decisions.

    GARRETT LISI, Independent Theoretical Physicist

    And there is more … much more at the Edge site

  • Plans based on average assumptions ……

    Plans based on average assumptions ……

    This entry is part 3 of 4 in the series The fallacies of scenario analysis


    The Flaw of Averages states that: Plans based on the assumption that average conditions will occur are usually wrong. (Savage, 2002)

    Many economists use what they believe to be most likely ((Most likely estimates are often made in-house based on experience and knowledge about their operations.)) or average values ((Forecasts for many types of variable can be bought from suppliers of ‘consensus forecasts’.))  (Timmermann, 2006) (Gavin & Pande, 2008) as input for the exogenous variables in their spreadsheet calculations.

    We know however that:

1. the probability of any single variable having an outcome equal to any of these values is close to zero, and
2. the probability of all the (exogenous) variables in the spreadsheet model having outcomes equal to their averages is virtually zero.

    So why do they do it? They obviously lack the necessary tools to calculate with uncertainty!

    But if a small deviation from the most likely value is admissible, how often will the use of a single estimate like the most probable value be ‘correct’?

    We can try to answer that by looking at some probability distributions that may represent the ‘mechanism’ generating some of these variables.

Let’s assume that we are entering a market with a new product. We of course know the upper and lower limits of our future possible market share, but not the actual number, so we guess it to be the average value, 0.5. Since we have no prior knowledge, we have to assume that the market share is uniformly distributed between zero and one:

If we then plan sales and production for a market share between 0.4 and 0.5, we would, out of a hundred trials, have guessed the market share correctly only 13 times. In fact we would have overestimated the market share 31 times and underestimated it 56 times.
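The counts are easy to reproduce: draw a hundred uniform market shares and tally them. A minimal sketch (the counts vary from draw to draw around the theoretical 10/40/50 split; the figures above are one such draw):

```python
import numpy as np

rng = np.random.default_rng(0)
share = rng.uniform(0.0, 1.0, 100)  # a hundred possible market shares

correct = ((0.4 <= share) & (share <= 0.5)).sum()
over = (share < 0.4).sum()   # we planned for more than materialized
under = (share > 0.5).sum()  # the market was bigger than we planned for
print(f"correct: {correct}, overestimated: {over}, underestimated: {under}")
```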

Let’s assume a production process where the acceptable deviation from some fixed measurement is 0.5 mm and where the actual deviation has a normal distribution with an expected deviation of zero, but a standard deviation of one:

Using the average deviation to calculate the expected error rate will falsely lead us to believe it to be zero, while in the long run it will in fact be about 62%.
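The long-run rate follows directly from the normal distribution: the probability that an N(0, 1) deviation falls outside the ±0.5 mm tolerance is 2(1 − Φ(0.5)) ≈ 62%. A one-line simulation check:

```python
import numpy as np

rng = np.random.default_rng(0)
deviation = rng.normal(0.0, 1.0, 1_000_000)  # deviations in mm
print((np.abs(deviation) > 0.5).mean())      # ~0.62
```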

Let’s assume that we have a contract for drilling a tunnel, and that the cost will depend on the hardness of the rock to be drilled. The contract states that we will have to pay a minimum of $0.5M and a maximum of $2M, with the most likely cost being $1M. The contract and our imperfect knowledge of the geology make us assume the cost distribution to be triangular:

Using the average ((The bin containing the average in the histogram.)) as an estimate of the expected cost will give a correct answer in only 14 out of 100 trials, with the cost being lower in 45 of them and higher in 41.
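The same tallying works here. The mean of a triangular(0.5, 1, 2) distribution is (0.5 + 1 + 2)/3 ≈ $1.17M, and only the outcomes landing in the histogram bin around that value count as ‘correct’. A sketch assuming a $0.1M bin width (the post does not state the width of its bins, so the shares will differ slightly):

```python
import numpy as np

rng = np.random.default_rng(0)
cost = rng.triangular(0.5, 1.0, 2.0, 1_000_000)  # cost in $M
mean = cost.mean()                               # ~1.17
half_bin = 0.05                                  # assumed $0.1M bin width

print(f"'correct': {(np.abs(cost - mean) <= half_bin).mean():.0%}")
print(f"lower:     {(cost < mean - half_bin).mean():.0%}")
print(f"higher:    {(cost > mean + half_bin).mean():.0%}")
```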

Now, let’s assume that we are performing deep-sea drilling for oil and that we have a single estimate of the cost of $500M. We expect the cost deviation to be distributed as in the figure below, with a typically small negative cost deviation and, on average, a small positive one:

So, for all practical purposes, this would be considered a low-risk operation economically. What they have failed to do is to look at the tails of the cost-deviation distribution, which turns out to be Cauchy distributed with long tails, including the possibility of catastrophic events:

    The event far out on the right tail might be considered a Black Swan (Taleb, 2007), but as we now know they happen from time to time.
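Just how much heavier the Cauchy tails are than the familiar normal tails can be seen by comparing upper percentiles of the two distributions (in standardized units; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
normal = rng.normal(0.0, 1.0, n)
cauchy = rng.standard_cauchy(n)

for q in (99.0, 99.9, 99.99):
    print(f"{q:g}th percentile: normal {np.percentile(normal, q):5.1f}, "
          f"Cauchy {np.percentile(cauchy, q):10.1f}")
```

While the normal distribution’s 99.99th percentile sits below four standard units, the Cauchy’s runs into the thousands – exactly the kind of tail that hides catastrophic events.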

So, not only will using a single estimate prove you wrong most of the time – more importantly, it will also obscure what you do not know: the risk of being wrong.

Don’t worry about the average, worry about how large the variations are, how frequently they occur and why they exist. (Fung, 2010)

    Rather than “Give me a number for my report,” what every executive should be saying is “Give me a distribution for my simulation.”(Savage, 2002)

    References

Fung, K. (2010). Numbers Rule Your World. New York: McGraw-Hill.

Gavin, W. T. & Pande, G. (2008). FOMC Consensus Forecasts. Federal Reserve Bank of St. Louis Review, May/June 2008, 90(3, Part 1), 149-163.

Savage, S. L. (2002). The Flaw of Averages. Harvard Business Review, (November), 20-21.

Savage, S. L. & Danziger, J. (2009). The Flaw of Averages. New York: Wiley.

Taleb, N. (2007). The Black Swan. New York: Random House.

Timmermann, A. (2006). An Evaluation of the World Economic Outlook Forecasts. IMF Working Paper WP/06/59, www.imf.org/external/pubs/ft/wp/2006/wp0659.pdf

    Endnotes