Great read! Thank you for actually posting it; I imagine it had been sitting in your notes for a long time.
Clicking the subscribe button, thank you ❤️
Enjoyed the read, Devansh, thanks for sharing.
Enjoyed reading this. I believe the ability to judge the confidence level (the confidence of the confidence!) of a decision is very important here. I'd argue that judging the quality of a decision from its outcome really depends on the type of decision. For instance, say you're risk-seeking and join a "next big thing" startup. After a few years, it goes bust. In this case, it's very difficult to judge the decision from the outcome, because so many new facts emerged after the decision was made.
On the other hand, say a lead researcher at a hypothetical company, closedai, is deciding the research direction of a new project. I'd say the quality of this decision can be judged much more accurately from the outcome, because relatively few new facts are generated in between; the outcome depends much more on the research they did to make the decision.
Great observations!
Great article! I also think this is not discussed often enough in everyday settings, especially when we are evaluating our own decisions or forming opinions about the quality of decisions made by people around us.
Taleb also shares this idea of alternative histories in his books, which I find particularly cool; it's somewhat similar to what you discussed. Link: https://coffeeandjunk.com/alternative-histories/
As for the confidence-interval thing, there are a bunch of great related exercises online, like this one. Link: https://acritch.com/media/mphd/calibration-exercises.pdf
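If anyone wants to see how these quizzes get scored, here is a minimal Python sketch; the intervals and answers below are made up for illustration and are not taken from the linked PDF:

```python
# Scoring a 90%-confidence-interval calibration quiz (hypothetical data).
# For each question you give a (low, high) interval you are 90% sure
# contains the true value; calibration = how many intervals actually do.

answers = [
    # ((low, high), true_value) -- all numbers invented for the example
    ((1900, 1920), 1912),   # hit: the truth falls inside the interval
    ((100, 200), 250),      # miss: interval too narrow (overconfidence)
    ((3000, 5000), 4100),   # hit
]

hits = sum(low <= truth <= high for (low, high), truth in answers)
print(f"{hits}/{len(answers)} intervals contained the truth.")
# Well calibrated at 90% means roughly 9 out of 10 intervals should hit;
# most people land far lower, which is exactly the overconfidence bias.
```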
Heyy thanks for sharing!!
I think actually doing the overconfidence quiz you shared is a great way for people to see and realise first-hand how overconfident they are. Even (especially?) smart people are biased towards thinking they are not overconfident, even when they are.