Decisions

How to use a decision tree

A decision tree is a useful tool for visualizing and organizing the different options and outcomes in a complex decision. It helps individuals and organizations make informed decisions by providing a structured, systematic approach to problem-solving.

To use a decision tree, first identify the problem or decision that needs to be made. Next, generate the potential solutions or options and organize them into a tree structure, with each branch representing a different option and its potential outcomes.

Each branch should lead to either a terminal node, which represents a final decision or outcome, or another branch, which represents a further decision that must be made based on the outcome of the previous branch. The branches should be labeled with the options and outcomes that are being considered, and the probabilities or expected values associated with each outcome should be estimated where possible.

Once the decision tree has been constructed, a decision-making process can be applied to determine the optimal decision. One common method is expected value analysis, which involves calculating the expected value of each terminal node and choosing the option that leads to the highest expected value. This approach weighs each potential outcome by its probability, so it accounts for both the risks and the rewards associated with each option.

Expected value is the value of each possible outcome weighted by the probability of it occurring, summed across the outcomes. It is especially useful when you have a future uncertain event involving value. Expected value methods will be tripped up by new or disruptive technologies, which makes them useful as a way to explore hidden vulnerabilities in current technologies. If you find the expected value of a product feature is off-the-charts good, start looking for a way to disrupt yourself. If you don't, someone else will.
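As a quick illustration of the arithmetic, here is a minimal Python sketch; the feature and the figures in it are hypothetical, not taken from the example that follows.

    # Expected value: each outcome's value weighted by its probability, summed.
    # The outcomes and figures below are hypothetical, purely for illustration.
    outcomes = [
        (0.25, 10_000_000),  # 25% chance the feature earns $10m
        (0.75, -1_000_000),  # 75% chance it loses the $1m spent building it
    ]

    expected_value = sum(p * v for p, v in outcomes)
    print(f"Expected value: ${expected_value:,.0f}")  # Expected value: $1,750,000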

You can combine expected value with logic or decision trees, which are excellent tools for structuring muddled information. At the very beginning of a problem, when you may not even have a sense of its complexity, it's worth experimenting with different types of logic trees. You can use a simple tree to visually lay out the components or levers of a problem, as in the sketch below. Levers and sub-levers won't give you any sense of feedback loops, networks, or causal mechanisms, but they will help you identify and structure what may be possible to model as a starting point.
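A first-pass lever tree can be nothing more than a nested outline. Here is a minimal Python sketch; the problem and the levers are hypothetical.

    # A hypothetical lever tree for a "grow beverage revenue" problem,
    # sketched as a nested dict. Levers and sub-levers only; no probabilities yet.
    lever_tree = {
        "Grow revenue": {
            "Sell more units": ["New products", "New channels", "More marketing"],
            "Raise price per unit": ["Premium ingredients", "Better packaging"],
            "Reduce cost per unit": ["Cheaper sourcing", "Process efficiency"],
        }
    }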

For example, you need to decide whether to develop a new beverage using a totally novel ingredient. Your estimate of the development cost is $1m. This is quite speculative because you don't yet know if the FDA will approve the use of the novel ingredient, nor do you know how well it will stand up to shelf-life trials. You estimate there's a 70% chance that it will pass the shelf-life trials. If it doesn't, you get $500k of your budget back because you don't spend money on taking the product to market. Even if you are successful, there's only a 60% chance that the FDA will approve the ingredient for use in beverages. If it does, there's a 10% chance that a competitor will beat you to market and wipe out your advantage. If everything goes to plan (the shelf-life trials go well, the FDA approves, and no competitor beats you to market), the best estimate for a return on your development investment is $50m. How do you think about this decision, and what is your expected value?
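One way to roll these numbers up is sketched below in Python. The description above doesn't pin down every payoff, so this sketch assumes the failed shelf-life trial loses only $0.5m (half the budget is recovered), the FDA-rejection and beaten-to-market branches each lose the full $1m development cost, and the $50m on the success branch is the net return. Change those assumptions and the numbers change; the method doesn't.

    # Expected value of the beverage decision, rolled up from the terminal nodes.
    # Probabilities come from the example above; the payoffs on the failure
    # branches are assumptions, since the example does not pin them all down.
    branches = [
        # (probability of reaching this terminal node, payoff in $m)
        (0.30,               -0.5),  # shelf-life trials fail: $500k of the $1m budget is recovered
        (0.70 * 0.40,        -1.0),  # trials pass, FDA rejects: assume the full $1m is lost
        (0.70 * 0.60 * 0.10, -1.0),  # FDA approves but a competitor beats you: assume $1m is lost
        (0.70 * 0.60 * 0.90, 50.0),  # everything goes to plan: assume a $50m net return
    ]

    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9  # the branches are exhaustive

    expected_value = sum(p * payoff for p, payoff in branches)
    print(f"Expected value: ${expected_value:.1f}m")  # roughly $18.4m on these assumptions

On these assumptions the expected value comes out around $18.4m against a $1m development cost, and the tree below shows the same calculation drawn out branch by branch.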

A tree can help unmuddle this muddle.

A decision tree showing expected payoff calculation

Trees such as these can help organize information when the nature of the problem is a series of cascading junctures that you can put data and analysis against. They are especially useful when you need to make sense of different choices with different probabilities and payoffs, and they make any rationale easier to communicate because they show the sources of uncertainty explicitly.