
Human-centered AI for product managers and innovators

May 13, 2019

Now, machines learn. AI learns from human reactions, and humans change their behavior as the AI interacts with them. This means that AI creates a new kind of relationship. Building that relationship through an AI-enabled product means design now needs to account for an entirely new agent: the machine intelligence. What kind of relationship do you want to build? Where should friction be eliminated, and where is some friction a satisfying aspect of the human-machine relationship? How should users learn about the intelligence they are interacting with? Which human qualities matter at which stage of the relationship? For example, humor is an important technique people use to test the boundaries of a voice assistant ("Alexa, tell me a joke"), but is it the right way to start a relationship with a healthcare or financial services chatbot?

For product managers and innovators, there are many new things to consider when designing relationships between customers and AI. AI can be unpredictable: computer vision can surface unexpected results when the lighting or context changes, and voice assistants can be frustratingly inaccurate. When people expect more intelligence from an AI than they experience, or when the AI behaves differently from their mental model of it, that model breaks. People then fill the gap with conspiracy theories or hyper-personalized perceptions of the AI.
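The lighting example is easy to reproduce. The sketch below is a minimal illustration, not a claim about any particular product: it assumes torchvision and Pillow are installed, a local image file named "scene.jpg" exists, and uses an off-the-shelf ResNet-18 simply to show that a modest brightness change can alter a model's output in ways a user would find arbitrary.

```python
# Minimal sketch: run a pretrained classifier on the same image at two
# lighting levels and compare predictions. "scene.jpg" and the brightness
# values are illustrative assumptions; the prediction may or may not flip
# for a given image, which is exactly the unpredictability described above.
import torch
from torchvision import models, transforms
from torchvision.transforms import functional as F
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("scene.jpg").convert("RGB")

for brightness in (1.0, 0.4):  # original lighting vs. a darkened version
    adjusted = F.adjust_brightness(image, brightness)
    with torch.no_grad():
        logits = model(preprocess(adjusted).unsqueeze(0))
    print(f"brightness={brightness}: predicted class index "
          f"{logits.argmax(dim=1).item()}")
```

Nothing about the model changed between the two runs; only the input did. To the user, though, it looks like the AI "decided" something different, and their mental model has to absorb that.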

An example of this is Google Flights. Designers discovered that users had a pre-existing mental model based on a very human experience: once a seller knows you are willing to buy something, your negotiating power drops and the seller raises the price of the good you have expressed interest in. When people returned to Google Flights after an earlier search and saw a higher price, they perceived the increase as the algorithm "considering" them to be in a weaker negotiating position. The Google UX team had to find new ways to show users that the AI was not hyper-personalizing based on prior searches. The team had to explain that the algorithm was instead making predictions based on multiple complex variables, such as time until departure, likely weather at the destination, prior patterns of price and demand, and seasonal variation. People oversimplified what the algorithm was doing rather than adjust their mental model to the reality of what the machine decided was important.
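To make that distinction concrete, here is a toy sketch of a feature-based price predictor. It is an illustrative assumption, not Google's actual model: the synthetic data, the feature names, and the choice of a gradient-boosted regressor are all hypothetical. The point is structural: price is predicted from market signals like days until departure and seasonal demand, and no per-user signal, such as search history, is an input at all.

```python
# Illustrative sketch only; Google has not published Flights' actual model.
# A toy regressor predicts price from market-level signals. Note what is
# absent: nothing about the individual user or their search history is a
# feature, so repeated searches cannot raise the quoted price.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: days until departure, month, a demand index.
days_out = rng.integers(1, 180, n)
month = rng.integers(1, 13, n)
demand = rng.random(n)

# Synthetic prices: cheaper far out, pricier in peak months and high demand.
price = (400 - 1.2 * days_out
         + 30 * np.isin(month, [6, 7, 12])
         + 150 * demand
         + rng.normal(0, 20, n))

X = np.column_stack([days_out, month, demand])
model = GradientBoostingRegressor().fit(X, price)

# Two weeks out, in July, with high demand: the same inputs always yield
# the same prediction, regardless of who is asking or how often.
print(model.predict([[14, 7, 0.8]]))
```

A user's "the seller saw me coming" mental model assumes the querying person is a feature of the prediction; the design challenge was communicating that they are not.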

A key test of human-centered AI design is whether you understand the mental models of users and other affected stakeholders well enough to predict how they will respond when the AI does something they don't understand or don't like. Human-centered AI design is a repeatable, reliable process for ensuring those mental models are understood during the design process, which makes it possible to build better AI, faster.