Air France flight AF447 took off from Rio de Janeiro, Brazil bound for Charles de Gaulle airport in Paris, France in May 2009. After a few hours at cruising altitude, the aircraft approached the intertropical convergence zone, where a line of thunderstorms lay ahead. A few minutes later, hail peppered the outside of the aircraft and ice crystals began to form on the pitot tubes that measure airspeed. All three pitot tubes were quickly blocked by ice, cutting off airspeed data and causing the autopilot to disconnect. The aircraft displays went dark and a rash of alerts sounded.
Forty-four seconds after the autopilot disconnected, the confused pilots, reacting to false information, incorrectly pulled the plane’s nose up. The wings lost their lift and the aircraft entered an aerodynamic stall. Instead of flying, it was falling.
The ground proximity warnings sounded. “We’re going to crash. This can’t be true. But what’s happening?” one of the co-pilots was heard to say.
Confusion overwhelmed the flight crew, and their actions made everything worse. By the time they realized what was happening, it was too late to recover. The plane crashed into the Atlantic Ocean, killing all 228 passengers and crew.
Human cognitive biases were a major factor in the crash. The crew anticipated turbulence and attributed the pitch attitude and shaking to weather rather than a stall. Because this fit what they expected, they didn’t look for an alternative cause. Once the aircraft stalled, the crew discounted conflicting information and clung to the notion that they had lost their instruments. They couldn’t shift their mindset from interpreting ambiguous signals to recognizing an absence of signals.
Their attention was tunneled. The co-pilots were worried about getting the pilot back to the flight deck. Critical information wasn’t put front and center, which made it difficult for the crew to recognize reality.
Cognitive biases are woven deep into our cognition. Even with the most sophisticated technology, we can still be led astray by them. They exist because evolution is a process that is sensitive to energy, time, and other resources. Evolution created brains that are primed for action, not accuracy.
Your brain uses about twenty watts of energy. That’s not very much: less than most incandescent light bulbs. Evolution favors efficiency, so our brains have to make tradeoffs based on what resources are available. According to Tom Griffiths, professor of Information Technology, Consciousness, and Culture at Princeton University, our cognition has evolved to deal with three limitations.
We have a limited amount of time. The world only provides limited opportunities to learn behaviors that help us survive. Our limited life span imposes an upper bound on the amount of available data. A human who needed thousands of examples of a lion hiding in grass was a dead human. So we evolved to learn from a small number of examples. Learning from a small number of examples means that we have to generalize.
We only have a limited amount of computation. Each of us has a single brain with fixed computational capacity, and we can’t directly transfer its contents to someone else. This constraint does mean that we are good at breaking problems into smaller problems and transferring solutions from one domain to another. Our brains are pattern completers.
Our ability to communicate is constrained. Each of us lives inside our own mind, so when we face problems or opportunities that exceed our life span and computational capacity, we rely on language, institutions, and customs to pool our efforts.
These constraints compound. Limited time amplifies the effect of limited computation, and restricted communication makes it harder to draw upon more computation.
For the vast majority of our evolutionary history, cognitive biases were mostly beneficial to humans. Fast, efficient, and almost always good enough, cognitive biases operated in sync with our environment. They have been honed to present the most actionable information for survival, not the most accurate picture of the world.
Shortcuts in our reasoning make us fast decision makers. Unconscious biases can minimize errors that our conscious mind makes. Word and number problems are a job for our conscious mind (remember the cognitive reflection test and how easy it was to make an error). Yet when we plan to catch a ball or pick up a cup, our unconscious mind uses shortcuts that aren’t so error prone. In many ways, this is a central irony of being human in the digital age: our subconscious mind is less error prone yet its strongest capabilities aren’t a match for our modern environment.
In the age of the internet, information abundance, and artificially intelligent machines, cognitive biases are easily triggered or exploited by technology. Over a decade ago, E. O. Wilson summed it up: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.”
Great Human Strength: We can be aware of the limits of our cognition and design technology to amplify our strengths.
Great Human Weakness: Cognitive biases can be psychological vulnerabilities in the modern digital world.
Machine Opportunity: Designs that reduce negative impacts of both conscious and unconscious bias.
Machine Threat: Designs that unhelpfully trigger cognitive biases or operate outside of our conscious awareness when this leads to bad outcomes.