I think Taleb's critique is related to Binmore's. The fat-tail problem can be interpreted as reflecting surprises. Surprises arise from the fact that your model of the world does not account for all possible events (e.g., US intelligence did not consider the 9/11 scenario as a possibility). The solution may be to adopt a Bayesian approach but allocate some weight to the possibility of unknown events. This leads to different decision-making rules, such as the 'Precautionary Principle.' This recommendation aligns with Taleb's criticism. My colleague Quiggin has written on this: https://ageconsearch.umn.edu/record/149847/?v=pdf
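A toy sketch of that idea (my own illustration, not from Quiggin's paper): a forecaster who has only ever seen outcomes A and B assigns zero probability to an unobserved "surprise" outcome, while a hedged forecaster reserves a small weight `eps` for events outside the model. The names and the value of `eps` are assumptions for illustration only.

```python
import math

# Observed counts: the forecaster has seen only A and B, never the surprise C.
observed = {"A": 62, "B": 38}
n = sum(observed.values())

# Naive forecast: pure empirical frequencies, so P(C) = 0.
naive = {k: v / n for k, v in observed.items()}
naive["C"] = 0.0

# Hedged forecast: shrink the observed frequencies and reserve mass eps
# for "unknown unknowns" outside the model.
eps = 0.02  # illustrative choice, not a recommended value
hedged = {k: (1 - eps) * v / n for k, v in observed.items()}
hedged["C"] = eps

def log_loss(forecast, outcome):
    """Penalty paid when `outcome` occurs; infinite if it was ruled out."""
    q = forecast.get(outcome, 0.0)
    return float("inf") if q == 0 else -math.log(q)

# When the surprise C actually occurs, the naive model's loss is unbounded,
# while the hedged model pays only a finite penalty of -log(eps).
print(log_loss(naive, "C"))   # inf
print(log_loss(hedged, "C"))  # about 3.91
```

The point is only qualitative: paying a small premium on ordinary days (slightly worse scores for A and B) buys insurance against an event the model never contemplated.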
This is likely a different discussion; perhaps you can write a post about the Bayes misapplication / ludic fallacy / precautionary principle and we can engage there. Taleb has a nuanced view on the precautionary principle, which he calls the "non-naive" precautionary principle. For any innovation / "surprise", it distinguishes between thin- and fat-tailed domains and between local and systemic risks. It tolerates errors in local, bounded, thin-tailed domains even when the problem is complex or not well understood.
https://arxiv.org/abs/1410.5787
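The thin/fat distinction in that paper can be illustrated with a small simulation (my own toy sketch, not code from the paper): in a thin-tailed domain no single observation matters much, while in a fat-tailed domain one observation can dominate the whole sample. The choice of exponential vs. Pareto with tail index 1.2 is an assumption for illustration.

```python
import random

random.seed(1)
N = 100_000

# Thin-tailed sample: exponential with rate 1.
thin = [random.expovariate(1.0) for _ in range(N)]

# Fat-tailed sample: Pareto with tail index alpha = 1.2 (finite mean,
# infinite variance), a common stand-in for fat-tailed domains.
alpha = 1.2
fat = [random.paretovariate(alpha) for _ in range(N)]

# Share of the total contributed by the single largest observation:
# near zero for thin tails, a sizeable fraction for fat tails.
print(max(thin) / sum(thin))
print(max(fat) / sum(fat))
```

This is why model errors are forgivable in thin-tailed, locally bounded domains but not in fat-tailed, systemic ones: in the latter, the tail event *is* the story.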
Thanks Cip, I read Taleb a long time ago. I'll make a note to possibly write a post on that topic in the future and will keep this as a reference for it.