Artificial intelligence is a new way to tell computers what people want. This means there are new requirements for AI design. Design has to take into account a whole host of new ideas — from telling computers about the values of individuals and society to allowing individual users to provide feedback to a machine in a personally meaningful way, dynamically and in real time.
It means that machines need to build trust and that humans need to understand how a machine thinks, sees or even feels.
Understanding AI design requires understanding the science behind how it works. While many people from many different fields are involved in AI — computer science, product design, UX, AI research, law, philosophy — there are very few who focus on the specifics of AI design. We have devoted the past half decade to understanding this emerging skill set and developing the tools and techniques that are required for designing machines that humans want — human-centered AI design.
AI is fundamentally probabilistic. This means that it fails. Data can be biased; bias can perpetuate historical discrimination; harm can occur before anyone sees it; and the scale and pervasiveness of AI mean that mistakes propagate across vast numbers of users at digital speed. So AI design involves anticipating what harms can happen and to whom. It involves knitting together the online and offline worlds so that people can step in when only a human touch will suffice.
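To make the point about scale concrete, here is a minimal sketch in Python. The accuracy and traffic figures are assumptions chosen purely for illustration, not measurements from any real system; the arithmetic simply shows how a small error rate becomes a large absolute number of mistakes once a model serves millions of people.

```python
# Illustrative only: both numbers below are assumed values, not data from a real system.
daily_predictions = 10_000_000   # assumed predictions served per day
accuracy = 0.99                  # assumed model accuracy (99% correct)

daily_errors = daily_predictions * (1 - accuracy)
print(f"At {accuracy:.0%} accuracy, {daily_predictions:,} daily predictions "
      f"yield roughly {daily_errors:,.0f} mistakes per day.")
# At 99% accuracy, 10,000,000 daily predictions yield roughly 100,000 mistakes per day.
```

Even a model that is right 99 percent of the time produces errors on the order of a hundred thousand people a day at this assumed volume, which is why anticipating who is harmed, and how, has to be part of the design itself.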
Human biases such as confirmation bias and automation bias can amplify machine bias. Inequality is exacerbated. Accountability is lost. Human autonomy is undermined. Many complain of the unintended consequences of AI, yet we know that algorithmic harms propagate along existing seams in the data representations of our societies. This means that many consequences are indeed predictable. But it takes a different process to understand them and design for them.
AI has made human behavior a design material. Because much of an AI system’s behavior emerges from its experience after design, designers can no longer be concerned only with intent. They need to plan for consequences, to foresee how the human-machine system will operate, and to broaden the scope of their designs to include the human system’s response and accountability.
We have customized tried-and-true design-thinking techniques for the design of intelligent machines — adding three entirely new components to the traditional process and extending others.
We add components that are unique to AI, derived from the science of AI, and we extend the design choices to account for AI’s behavior “in the wild,” once it is beyond the reach of the designer.
By following this enhanced human-centered design process, designers can bring more voices to the table, resolve tradeoffs and reveal opportunities for AI to create value. This is how to make AI that humans want.