How to build AI that humans want

April 24, 2020

Artificial intelligence is a new way to tell computers what people want. This means there are new requirements for AI design. Design has to take into account a whole host of new ideas — from telling computers about the values of individuals and society to allowing individual users to provide feedback to a machine in a personally meaningful way, dynamically and in real time.

It means that machines need to build trust and that humans need to understand how a machine thinks, sees or even feels.

Understanding AI design requires understanding the science behind how it works. While many people from many different fields are involved in AI — computer science, product design, UX, AI research, law, philosophy — there are very few who focus on the specifics of AI design. We have devoted the past half decade to understanding this emerging skill set and developing the tools and techniques that are required for designing machines that humans want — human-centered AI design.

AI is fundamentally probabilistic. This means that it fails. Data can be biased; bias can perpetuate historical discrimination; harm can occur before anyone sees it; and the scale and pervasiveness of AI means that mistakes propagate to vast numbers of users at digital speed. So AI design involves anticipating what harms can happen and to whom. It involves knitting together the online and offline worlds so that people can step in when only a human touch will suffice.

Human biases such as confirmation bias and automation bias can amplify machine bias. Inequality is exacerbated, accountability is lost, and human autonomy is undermined. Many complain of the unintended consequences of AI, yet we know that algorithms propagate along existing seams in the data representations of our societies. This means that many consequences are in fact predictable. But it takes a different process to understand them and design for them.

AI has made human behavior a design material. Because a good portion of AI’s behavior relies on its post-design experience, designers can no longer be concerned only with intent. Designers need to plan for consequences, to foresee how the human-machine system will operate and to broaden the scope of their designs to include human systems response and accountability.

We have customized tried-and-true design-thinking techniques for the design of intelligent machines — adding three entirely new components to the traditional process and extending others.

We add components unique to AI, derived from the science behind it, and we extend the design choices to account for AI’s behavior “in the wild,” once it is beyond the reach of the designer.

  • Discover — how to find insight into the problem by understanding what’s possible with AI.
  • Define — how to decide what to focus on when humans and machines work together in an interactive, learning, adaptive, dynamic system.
  • Determine — how to determine the relationship with AI and what or how much to personalize.
  • Direct — how to make critical decisions on ethical questions around privacy, bias, explainability, equality and agency.
  • Develop — how to identify potential solutions in a probabilistic, learning system that will operate autonomously.
  • Deliver — how to make sure that you get what you intended, without unpleasant surprises.
  • Discipline — how to keep it all on track and know that AI “in the wild” works how you want.

By following this enhanced human-centered design process, designers can include more voices at the table, resolve tradeoffs and reveal opportunities for AI to create value. This is how to make AI that humans want.