How to design culture in the age of AI

January 7, 2021

Humans are communal thinkers. Our collaborative minds allow us to create intelligence through the people around us. We access knowledge through our community, which makes us reliant on others for our understanding of the world. For this we need a shared sense of purpose and intent: belonging. We experience this as culture. Our social brains constantly scan the world for culture cues. Are we in or are we out? Can we share and deliberate with others to advance our understanding of the world?

Machines are not in our community. Modern AI systems, trained on vast data and operating beyond the comprehension and speed of the human mind, may come to know more than our collective human understanding. We are entering a future where the base unit of knowledge lies deep in data, readable only by machine.

We value AI and big data precisely because they can find patterns and correlations in data that humans can't. Powerful AI weaves and builds relationships among classifiers, variables and desired outcomes in ways that reach far beyond human intuition. Networked systems propagate this knowledge instantly, to machines everywhere.
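
To make that claim concrete, here is a minimal sketch of the kind of exhaustive pairwise sweep a machine performs routinely and a human analyst rarely does by hand. It assumes Python with pandas and NumPy; the data and variable names are synthetic and purely illustrative.

```python
# A minimal sketch of machine-scale pattern-finding: given a table of
# observations, rank every pair of variables by correlation strength.
# The data below is synthetic and purely illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical business variables; "churn_risk" secretly depends on two others.
df = pd.DataFrame({
    "sessions_per_week": rng.poisson(5, n),
    "support_tickets": rng.poisson(2, n),
    "tenure_months": rng.integers(1, 60, n),
})
df["churn_risk"] = (
    0.6 * df["support_tickets"] - 0.3 * df["sessions_per_week"]
    + rng.normal(0, 1, n)
)

# Compute all pairwise correlations and surface the strongest ones --
# an exhaustive sweep no analyst would run by eye at real scale.
corr = df.corr().abs()
pairs = (
    corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    .stack()
    .sort_values(ascending=False)
)
print(pairs.head())
```

At real scale the same idea runs over thousands of variables and far richer models, which is exactly why the resulting knowledge outruns any one person's intuition.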

What does it mean to access knowledge from an AI when data and AI have no culture in themselves: no sense of belonging, no vulnerability, no intent or sense of purpose? How do data-first companies build data-first cultures? Is the very idea of a human-centered culture fundamentally incompatible with data-first design?

Designing culture in the age of AI starts with understanding what it means to belong when the group includes the alien knowledge of machines. Knowledge may exist yet not be accessible to all, which inevitably creates a power shift: from intuitive decision makers to reflective analysts, from traditional leaders to those who command the data, from frontline workers to behind-the-scenes data scientists. But while decision-making ability may shift, accountability does not. Leaders remain accountable for decisions, whether made by humans or by machines. Frontline staff are still expected to handle both the good and the bad of customer experience.

We think it’s imperative that companies design for AI and data-first cultures consciously and deliberately, rather than leaving culture to emerge ad hoc. This means:

  • Consciously considering what mindsets are required. Is curiosity central? How is the bias to experiment and gather more data balanced against efficiently harvesting existing knowledge and intuition?
  • Defining data-first behaviors. Is questioning—of both data and people—rewarded? Both humans and machines can be wrong; do leaders make space for vulnerability, doubt and humility?
  • Ensuring systems give all employees access to machine knowledge and data. Can non-technical people access data in an intuitive and user-friendly way? Are data systems developed as products that support non-technical decision making?
  • Putting in place methodologies to align human and machine knowledge. Are decision-making processes designed to get the best from human intuition and machine learning? Can AI systems be interrogated for their reasoning (see the sketch after this list)? Are processes designed with human cognitive biases in mind?
  • Reinforcing healthy group habits with data. How does data access shift power balances? Do people present data honestly and objectively? Are people trained to debate and challenge in a productive way?
  • Establishing constant feedback. Do people constantly learn from a cycle of data, deliberation and decision? Do data processes match the cadence of human cognition?
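
On interrogating AI systems for their reasoning, one widely used, model-agnostic technique is permutation importance: shuffle each input in turn and measure how much held-out accuracy drops. A minimal sketch, assuming Python with scikit-learn and a synthetic classification task; the model and features here are illustrative, not a recommendation.

```python
# A minimal sketch of "interrogating" a model: permutation importance
# asks how much accuracy drops when each input is shuffled, giving a
# rough, model-agnostic account of what the model leans on.
# Model, data and features are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic task: 8 features, only 3 of which actually matter.
X, y = make_classification(n_samples=2_000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the damage to held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

Outputs like this give non-technical decision makers something concrete to question and debate, which is precisely the behavior the list above asks a data-first culture to reward.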

Shared human knowledge developed in the context of our social evolution. Machines, while increasingly able to discern human intent, aren’t there yet. A data-first culture is designed to keep humans feeling that they belong, have a place to contribute their knowledge and share a common purpose.