Humans and machines increasingly work alongside each other. As AI gets better at doing tasks faster, cheaper and more reliably than humans, or as it discovers patterns that humans can't, more technology will be deployed to enhance or replace human work. The role of humans in an AI world is often oversimplified: humans will deal with people while machines will deal with the rest. This oversimplification fails to take into account the multi-faceted nature of human reasoning.
We've been writing about the human-machine community for half a decade now, starting with our research as Intelligentsia Research on the rise of Machine Employees, then at Quartz on how to talk about the fourth industrial revolution, and now as Sonder Studio, where we work at the intersection of human behavior and technology. A key conclusion from our early work, one that ran counter to the prevailing view at the time, was that human intuition would thrive in a world of AI because humans are uniquely adept in unpredictable situations: where data do not exist, where complexity is high and where context is critical.
As we learn more about the real-world capabilities of AI, and as human-centered design techniques reveal the true nature of human responses to predictive technologies, the story turns out to be more complicated and paradoxical. Humans have the upper hand in unpredictable environments because we can use our intuitions, creativity and social minds in flexible ways, but we also have to take into account that human intuition is fallible, especially when we apply it in settings it wasn't built for.
Try this example: roll a coin around the rim of another coin of the same size. How many rotations will the rolling coin make before returning to its starting point?
The intuitive answer is one, but the correct answer is two. (For the full explanation, check out Wikipedia.)
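A minimal sketch of the arithmetic: the center of the rolling coin travels a circle of radius R + r around the fixed coin, and each circumference-length 2πr of that path produces one full rotation, so the number of rotations is (R + r) / r, which is 2 when the two coins share the same radius. Our flat-surface intuition only counts the R / r "rolling" part and misses the extra rotation contributed by going around the curve.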
Our intuitions are built from real-world experience, but we often carry those rules and heuristics into new settings where they no longer apply. This is where our intuitions fail. We see circular objects roll along flat surfaces all the time; our intuitions break down on curved surfaces, a far less common occurrence.
When it comes to thinking about AI and humans working together, we find a paradox: the paradox of intuition. Human intuition excels in unpredictable situations, but the success of our intuition is itself unpredictable. Why are humans better at "unpredictable" when human intuition can fail at exactly this point?
The resolution of this paradox lies in our consciousness, social skills and flexibility of mind. We can use social and cultural intuitions to pick up on cues from others that we are wrong, and so re-examine the data or adjust the story we tell. We can be self-reflective, examining our own knowledge to sense when our internal models are outside their intended range and initiating a search for new data. We can think counter-intuitively and flexibly to discover an "unknown known" by applying models of cause and effect to novel situations.
A famous example, cited in Prediction Machines as perhaps the ultimate case of counter-intuitive reasoning from data, comes from WWII, when engineers recognized the need to better armor their bombers. The question was where to put the armor. Planes that returned from bombing raids over Germany brought data: the bullet holes. Were these the obvious places to protect the planes? The engineers asked statistician Abraham Wald to help. He told them to protect the places without the bullet holes. This counter-intuitive recommendation came from his deep understanding (model) of how the data were generated: only the planes that survived made it back to be counted, so the areas with few recorded bullet holes were precisely the areas where hits tended to be fatal. Accounting for this survivor bias is what produced his insight.
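To make the bias concrete, here is a toy simulation. It is not Wald's actual method: the zones, hits per plane and fatality probabilities are all invented for illustration. Planes are hit at random, hits to critical zones are more often fatal, and we then tally bullet holes only on the planes that make it back.

```python
import random

# Toy simulation of survivorship bias. The zones, number of hits per plane and
# fatality probabilities are all invented for illustration; this is not Wald's
# actual data or method.

ZONES = ["engine", "cockpit", "fuselage", "wings", "tail"]
# Assumed probability that a single hit in each zone brings the plane down.
P_FATAL = {"engine": 0.8, "cockpit": 0.6, "fuselage": 0.1, "wings": 0.15, "tail": 0.2}

random.seed(42)

hits_all = {z: 0 for z in ZONES}        # hits across every plane that flew
hits_returning = {z: 0 for z in ZONES}  # holes we can actually see, on planes that came back

for _ in range(10_000):                        # 10,000 sorties
    hit_zones = random.choices(ZONES, k=3)     # each plane takes 3 hits, spread uniformly
    survived = all(random.random() > P_FATAL[z] for z in hit_zones)
    for z in hit_zones:
        hits_all[z] += 1
        if survived:
            hits_returning[z] += 1

print(f"{'zone':<10}{'hits taken':>12}{'holes seen on survivors':>26}")
for z in ZONES:
    print(f"{z:<10}{hits_all[z]:>12}{hits_returning[z]:>26}")
```

In the printout, the engine and cockpit rows show the largest gap between hits taken and holes seen on survivors; a naive reading of the returning planes ("few holes here, so no armor needed") gets it exactly backwards, which is Wald's point.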
In the age of data, it is vital that humans use every opportunity to learn from new data and to use the power of human cause-and-effect reasoning to discover new phenomena that machines can't. Humans can reason about processes beyond what the data initially tell us and discover things that are beyond the reach of AI, because AI can't reason causally.
The power of the human-machine community comes when humans rapidly update their intuitions with new data, enhancing our flexible, creative and counter-intuitive thinking abilities beyond anything a machine can do on its own.