Core to human-centered AI is explainability. If a machine cannot explain its reasoning in a way that humans understand, on human terms, the AI isn’t working for people.
Researchers from the Georgia Institute of Technology, Cornell University and the University of Kentucky recently published the results of teaching a machine to generate conversational explanations of its internal state and action representations in real time. They tested whether people liked the machine telling them how it made decisions, and which characteristics of the explanations drove people’s perceptions of explainability.
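As a rough sketch of what that interface looks like (the study’s system learned to generate these explanations; the state fields, action names and templates below are hypothetical stand-ins, not the researchers’ model), a rationale generator maps the agent’s internal state and chosen action to a conversational explanation in real time:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    # Hypothetical internal-state features an agent might expose for explanation.
    nearest_hazard: str    # e.g. "truck"
    hazard_distance: int   # distance to the hazard, in grid cells
    goal_direction: str    # e.g. "up"

def generate_rationale(state: AgentState, action: str) -> str:
    """Map the agent's internal state plus its chosen action to a conversational
    rationale. Hand-written templates stand in for a learned model here,
    purely to illustrate the state-and-action-to-explanation interface."""
    if action == "wait" and state.hazard_distance <= 1:
        return f"I waited because a {state.nearest_hazard} is right next to me."
    if action == state.goal_direction:
        return f"I moved {action} because that is the way toward my goal."
    return f"I chose to {action} so I could stay clear of the {state.nearest_hazard}."

print(generate_rationale(AgentState("truck", 1, "up"), "wait"))
```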
Relatability
Relatability is key to understandability: when an AI uses natural language to explain itself, people put themselves in the AI’s shoes and judge understandability by whether the AI gives the same reasons they would. People liked a machine talking to them about why it did something, especially if it demonstrated that it recognized the environmental conditions and adapted to them.
People feel confident when the AI thinks long term
Confidence relies on strategic detail (demonstrating long-term reasoning), awareness (knowledge of the environment) and relatability (being perceived as human-like in its expressions).
Overall, confidence hinges most on the long term, because a wider view of the exchange between the human and the AI makes the AI appear more predictive and intelligent. People hold a mental model that the AI plans ahead, so it is important not to disrupt that model. They also assume that AIs are prone to mistakes in the short term, because this most closely resembles human judgment. Stating the obvious erodes confidence.
People like AIs to be human-like, but this depends on the person
Human-likeness is primarily driven by intelligibility (good grammar and making sense), relatability (seeming to “think like I would,” especially if emotions are involved) and strategic detail (demonstrating that the AI can plan for the long term and analyze information). However, these factors depend on the person: if someone has a mental model of humans as fallible, then errors in intelligibility are seen as human-like (the ums and ahs in Google Duplex, or typos in text). Similarly, if a person associates critical thinking and logical planning with computers and algorithmic decision making, and intuition with humans, then that person rates human-likeness lower.
Explainability has to balance long- and short-term objectives
It’s important to strike the right balance between the AI’s long-term and short-term explanations. Long-term explanations rely on strategic detail, awareness and relatability (which tend to make the AI seem human-like). Short-term explanations rely on intelligibility (which tends to make the AI easy to understand).
This means getting the balance right between clarity and level of detail, where the tradeoff is succinctness versus breadth. Being too succinct hampers higher-order goals, while being too broad risks losing focus.
When AI fails, people want detail
Explaining failure relies on detailed rationales and explanatory power, because people want the machine to explain a failure in a way that lets them work out how to fix it.
People want to understand the AI from the AI’s perspective
Unexpected behavior is perceived differently from failure. People want to build a mental model that makes sense of the AI from the AI’s own perspective. There is a balance to strike between succinctness and detail: a person needs enough detail to be convinced, delivered succinctly enough that the explanation doesn’t conflict with their belief that the AI knows its own inner state.
AI design is full of tradeoffs
There are tradeoffs in all AI design. This research points out some key ones, such as how conciseness can improve intelligibility and overall understandability, but at the cost of strategic detail, which in turn can hurt confidence in the AI.
An AI can be configured in a hybrid way: a more focused, short-term model of the interaction takes over when a short, simple rationale is required, while the long-term model is activated when the AI has to communicate a failure or unexpected behavior.
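A minimal sketch of that hybrid setup, assuming two rationale generators already exist (the function names, event types and fields below are hypothetical, not from the research):

```python
from typing import Callable, Dict

# Hypothetical generators: in practice these could be two models, or one model
# run with different settings, producing concise vs. detailed rationales.
def concise_rationale(event: Dict[str, str]) -> str:
    return f"I did '{event['action']}' because {event['reason']}."

def detailed_rationale(event: Dict[str, str]) -> str:
    return (f"I attempted '{event['action']}' because {event['reason']}. "
            f"My longer-term plan was to {event['plan']}. "
            f"What actually happened: {event['outcome']}.")

def explain(event: Dict[str, str]) -> str:
    """Route failures and unexpected behavior to the detailed, long-term model;
    everything else gets a short, simple rationale."""
    needs_depth = event["type"] in {"failure", "unexpected"}
    generator: Callable[[Dict[str, str]], str] = (
        detailed_rationale if needs_depth else concise_rationale)
    return generator(event)

# Routine action: short and simple.
print(explain({"type": "routine", "action": "move up",
               "reason": "the path ahead was clear"}))

# Failure: detailed, long-term rationale.
print(explain({"type": "failure", "action": "cross the road",
               "reason": "the gap looked wide enough",
               "plan": "reach the goal before time ran out",
               "outcome": "a truck closed the gap first"}))
```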
Explainable AI is going to be key for AI product design and adoption. AI developers need to be able to tune the generator’s inputs to produce rationale styles that meet design needs. For example, a companion agent requires high relatability as a key design requirement for user engagement.
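One way to express this at design time is a configuration that weights the explanation characteristics the research identifies; the product types and settings below are purely illustrative assumptions:

```python
# Illustrative (not prescriptive) mapping from product type to the rationale
# characteristics its explanations should emphasize.
RATIONALE_STYLES = {
    "companion_agent": {"relatability": "high",   "strategic_detail": "medium", "succinctness": "medium"},
    "safety_monitor":  {"relatability": "low",    "strategic_detail": "high",   "succinctness": "low"},
    "voice_assistant": {"relatability": "medium", "strategic_detail": "low",    "succinctness": "high"},
}

def rationale_style(product_type: str) -> dict:
    """Return the explanation style a product should request from its generator."""
    default = {"relatability": "medium", "strategic_detail": "medium", "succinctness": "medium"}
    return RATIONALE_STYLES.get(product_type, default)

print(rationale_style("companion_agent"))
```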