Humans co-evolved cognition and cooperation under resource constraints. Our intuitions and cognitive biases are “a feature, not a bug.” Thinking that is fast, efficient and draws on previous experience is experienced as gut feel, or as an instinctive conclusion. We cooperate because we rely on others for our knowledge (we are a community of mind), which gives us the feeling that we know more than we do.
Intuition isn’t the opposite of rational or logical reasoning. It is a non-sequential process of obtaining and processing information, one that takes into account both rational and emotional elements. The result is direct knowledge, arrived at without rational inference. We are best at applying our intuitions in situations where our tacit knowledge is relevant and salient, because our intuitions evolved in the physical world of our senses. Our intuition serves us well when we have many years of practice and experience, and a learning process that translates huge numbers of facts into causal relationships and patterns, which then take the form of tacit and explicit knowledge. It fails when the complexity of the world is beyond our intuitive grasp.
Big data, ML and AI now add to the complexity of using our intuition. AI uses training data to look for correlations that can then be used to make predictions. It doesn’t necessarily need a human to guide it with a theory, nor does it need a human to add any intuitions. Powerful AI will find correlations that humans can’t, which is why it’s so valuable and why it’s used by the most powerful companies in the world.
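As an illustration of that mechanism, here is a minimal, hypothetical sketch (synthetic data, plain NumPy, nothing resembling a production system) in which a model extracts a predictive relationship from raw data without being handed any theory:

```python
import numpy as np

# Synthetic "training data": the relationship is hidden in the data, never stated as a theory.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=500)             # observed feature
y = 3.0 * x + rng.normal(0, 2.0, size=500)   # outcome with noise; the factor 3.0 is unknown to the model

# The model simply finds the best-fit correlation in the data.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned relationship: y ≈ {slope:.2f} * x + {intercept:.2f}")

# A prediction made purely from the learned correlation, with no human intuition added.
print("prediction for x = 7:", round(slope * 7 + intercept, 2))
```

At this toy scale a person could spot the pattern by eye; the value of powerful AI is doing the same thing across thousands of interacting features, where no human could.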
So if AI and data can discover things that humans can’t, while humans can bring the benefit of multiple contextual domains to spot new, perhaps counter-intuitive, relationships, how can we cooperate with machines so that we learn even more?
Human learning is a paradox: as we learn more, we become less open to information from the environment. This dilemma can be thought of as an explore/exploit tradeoff. A classic example is choosing a restaurant. You can either go to a place where you know you will get a good meal or try somewhere new. The first gives you good food but no new information; the second gives you new information, and maybe a bad meal, or maybe a great new place to add to your list. How to choose? Your intuitions may not be helpful when it comes to making this particular choice on this particular evening, and even though there are data (reviews, or a recommendation from a friend, perhaps), the data may not feel relevant or be decisive on this particular day. It turns out to be remarkably difficult to design a systematic strategy to solve this problem.
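The standard formalization of this dilemma is the multi-armed bandit, and the sketch below frames the restaurant choice that way. The restaurant names and “true” quality scores are invented, and the epsilon-greedy rule it uses (mostly exploit the best-known option, occasionally explore at random) is a workable heuristic rather than the elusive optimal strategy:

```python
import random

# Hypothetical restaurants with an unknown "true" chance of serving a good meal.
true_quality = {"old favourite": 0.80, "new bistro": 0.60, "food truck": 0.90}

estimates = {name: 0.0 for name in true_quality}   # our running estimate of each place
visits = {name: 0 for name in true_quality}        # how many times we have eaten there

EPSILON = 0.1  # fraction of evenings we spend exploring

def choose():
    if random.random() < EPSILON or all(v == 0 for v in visits.values()):
        return random.choice(list(true_quality))   # explore: try somewhere at random
    return max(estimates, key=estimates.get)       # exploit: go where the food has been best so far

for night in range(1000):
    pick = choose()
    good_meal = random.random() < true_quality[pick]                  # how the evening turned out
    visits[pick] += 1
    estimates[pick] += (good_meal - estimates[pick]) / visits[pick]   # update the running average

print({name: round(est, 2) for name, est in estimates.items()})
```

Run it a few times: it usually settles on the food truck, but only after eating some mediocre meals along the way; that cost is the price of exploration.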
Perhaps the best way to solve this problem can be seen in nature, specifically in human childhood, where children explore, gathering new data through play, curiosity and “getting into everything,” while adults exploit. Alison Gopnik writes about this as a key component of the human life history, and her ideas apply broadly to all human learning.
A learner can conduct a narrow search, only revising current hypotheses when the evidence is particularly strong and making small adjustments to current theories to accommodate new evidence. This strategy is most likely to quickly yield a “good enough” solution that will support immediate effective action. But it also means that the learner may fail to imagine a better alternative that is farther from the current hypothesis, such as a hypothesis about an unusual causal relation.
Alternatively, a learner can conduct a more exploratory search, moving to new hypotheses with only a small amount of evidence, and trying out potential hypotheses that are less like the current hypotheses. This strategy is less efficient if the learner’s starting hypothesis is reasonably good and may mean that the learner wastes time imagining unlikely possibilities. But it may also make the learner more likely to discover genuinely new ideas.
Gopnik
The general rule for resolving this dilemma is “explore first, exploit later.” This is because knowledge accumulates as we learn: it makes sense for each of us, as individuals, to rely on the knowledge we have already built, and it likewise makes sense that we become less motivated to go and find new data. Time plays another role: the longer time goes on, the fewer opportunities remain to take advantage of newly gathered information, which is why it pays to explore early.
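In code, “explore first, exploit later” is often expressed by letting the exploration rate decay as experience accumulates, instead of holding it fixed as in the sketch above. The schedule below is arbitrary, chosen only to show the shape of the idea:

```python
def exploration_rate(step, total_steps, start=0.5, end=0.02):
    """Decay the chance of exploring linearly: high while knowledge is scarce, low once it has accumulated."""
    fraction_done = min(step / total_steps, 1.0)
    return start + (end - start) * fraction_done

# Early evenings are mostly exploration; later evenings are mostly exploitation.
for night in (0, 100, 500, 999):
    print(night, round(exploration_rate(night, 1000), 3))
```

The same logic applies to the time argument: information gathered on night 10 can pay off for another 990 evenings, while information gathered on night 999 barely gets used at all.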
The fact that machine knowledge differs from human knowledge in specific ways offers a hint about the tactics humans can use to update their intuitions with data.
Finally, human knowledge is highly dependent on context. We can get caught in “local minima,” a kind of sinkhole in the problem space. When this happens we need a change of scene: something completely new that allows us to explore again, find new knowledge and build a new perspective. Machines can’t easily go on sabbatical or wander into a new part of hyperspace, exploring for new data for its own sake, without human guidance. Machines aren’t curious, nor do they have any intrinsic motivation to adjust their own models.
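To make the “local minima” picture concrete, here is a toy sketch (the cost landscape is invented) of a hill-climber that only ever moves downhill. It settles into the nearest dip and stays there; the “change of scene” (a random restart somewhere else in the space) has to be designed in by a human, because nothing in the machine is curious enough to ask for one:

```python
import random

def cost(x):
    # A toy landscape with a shallow minimum near x = 2.2 and a deeper one near x = -2.3.
    return 0.1 * x**4 - x**2 + 0.3 * x

def hill_climb(start, steps=2000, step_size=0.05):
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if cost(candidate) < cost(x):   # only downhill moves: the climber never "changes scene" on its own
            x = candidate
    return x

# Starting near x = 4, the climber settles into the nearby, shallower minimum and stays put.
stuck = hill_climb(start=4.0)

# The "sabbatical" is human-designed: random restarts elsewhere in the space.
best = min([stuck] + [hill_climb(start=random.uniform(-5, 5)) for _ in range(5)], key=cost)

print(f"without restarts: x ≈ {stuck:.2f}   with restarts: x ≈ {best:.2f}")
```

The restart itself is trivial to write, but the decision to make it (when, where, how far afield) still comes from outside the model.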
One of the core promises of AI is that it will help us gain insight into how human minds work in the first place. Humans evolved under significant resource constraints (limited energy, limited senses, limited life spans), which made us very good at working within those limits. Now machines, with virtually unlimited computation and speed of connectivity, can help us, as long as we design for accessibility and complementarity.
Perhaps, for humans, an upside of AI will be access to the alien scale of the machine mind in service of our own intuitive discovery.