Design more valuable AI by understanding how humans think

May 16, 2019
[Image: a signpost with two signs reading “intuition” and “logic”]

Humans are intuitive thinkers. Daniel Kahneman, winner of the Nobel prize in economics, describes human thinking in his book Thinking, Fast and Slow. In it, he details how people use two different systems of thinking: System 1 is intuitive, off the top of the head, while System 2 is where the cognitive work of analysis is done. System 1 feels easy. It’s supposed to; it evolved for the quick assessment of threats and fast action. System 2 feels hard; it’s where we make longer-term judgments and do the difficult (slow) work of analysis. Humans are cognitively lazy: we use System 2 only when we are forced to.

System 1 is full of shortcuts – heuristics – that make it fast. They often work well, but the tradeoff for that speed can be failure. These cognitive biases are both predictable and numerous. Some of the most widely recognized include:

  • confirmation bias, the tendency to favor information that confirms pre-existing beliefs;
  • the availability effect, the tendency to overestimate the likelihood of events with greater “availability” in memory, whether because of recency or a strong emotional connection to the event; and
  • the endowment effect, the tendency to demand more to give up an object than one would be willing to pay to acquire it.

For AI designers, one cognitive bias is particularly important: the representativeness heuristic. Representativeness shapes how people intuit the likelihood of a relationship; in simple terms, people over-associate like with like. When asked whether a woman, described as quiet by nature and wearing glasses to read, is a librarian or a farmer, most people choose librarian. This, of course, ignores the fact that there are vastly more women farmers in the world than women librarians. The representativeness heuristic goes beyond stereotyping: people are ignoring the base rate – the background frequency of an occurrence, which is essential to calculating the actual probability of an event.

Here’s where it matters in AI: the false positive paradox. This occurs when a positive test result is more likely to be a false positive than a true positive, which happens when the condition is rare in the overall population and its incidence rate is lower than the test’s false positive rate. The probability of a positive test result is determined not only by the accuracy of the test but also by the characteristics of the sampled population.
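In Bayesian terms (a standard formulation, not spelled out in this post), let p be the base rate of the condition, s the test’s true positive rate (its sensitivity), and f its false positive rate. The chance that a positive result is genuine is

$$
P(\text{condition} \mid \text{positive}) = \frac{s\,p}{s\,p + f\,(1 - p)}
$$

and false positives outnumber true positives whenever f(1 − p) > sp. For an accurate test (s ≈ 1) screening for a rare condition, that reduces to the base rate p being smaller than the false positive rate f.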

For an AI application where the base rate is low (say, facial recognition technology used to search for terrorists), the number of false positives (people identified as terrorists when they are not) will actually be higher than the number of true positives (actual terrorists who are accurately identified). Even with facial recognition technology that is 99% accurate, if 99.99% of a population of one million people are non-terrorists, around 10,098 people will be flagged as terrorists, and only about 99 of them will actually be terrorists.
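To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a population of one million (the size implied by the figures above) and reads the 99% accuracy as both the true positive and true negative rate; neither assumption is stated explicitly in the text.

```python
population = 1_000_000    # assumed population size (implied by the article's figures)
base_rate = 0.0001        # 0.01% are terrorists, i.e. 99.99% are not
accuracy = 0.99           # assumed to be both sensitivity and specificity

terrorists = population * base_rate                 # 100 actual terrorists
non_terrorists = population - terrorists            # 999,900 innocent people

true_positives = terrorists * accuracy              # 99 terrorists correctly flagged
false_positives = non_terrorists * (1 - accuracy)   # 9,999 innocent people flagged

total_flagged = true_positives + false_positives    # 10,098 people flagged in total
precision = true_positives / total_flagged          # share of flags that are genuine

print(f"flagged: {total_flagged:,.0f}")             # flagged: 10,098
print(f"actual terrorists: {true_positives:.0f}")   # actual terrorists: 99
print(f"precision: {precision:.2%}")                # precision: 0.98%
```

In other words, fewer than one in a hundred of the people flagged by a 99%-accurate system would actually be terrorists – the accuracy of the test is swamped by the rarity of the condition.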

This is so deeply counter-intuitive that AI designers can’t afford to rely on humans to detect and respond appropriately to false positive paradoxes. And because these systems are deployed in critical settings at such scale, AI designers need to work closely with business, civic and government leaders to ensure that the design anticipates what to do when the AI makes mistakes.