People come to AI with a mental model of its intelligence, and these mental models are often consistent across user groups. Understanding them is vital because if an AI performs poorly against a user's expectations, the user stops pushing the limits of what the AI can do, so even when the AI improves, the user never discovers its new capabilities. This is a particular problem with intelligent assistants such as Siri, Alexa and Google Assistant.
In the case of intelligent assistants, users tend to hold one of three distinct mental models of the AI.
When designing a new system, understanding all three models is particularly important because users will bring existing knowledge to bear on the new system. A mental model that is out of step with what the system can do well means the user is led astray, which ultimately leads to frustration.
People come to voice assistants with especially high expectations. When those expectations are not met, they rapidly abandon exploring new functionality and simply stick with whatever succeeded the first time around. This matters for designers because it is far more difficult to re-engage users and have them test new limits than to meet their expectations from the start.
New users are valuable because they are the most likely to try new things, but they are also the most fragile: once they find the AI can't do something, they stop trying and rarely come back later to see whether it has improved. So if you're writing an Alexa skill and user testing shows low success rates for commonly requested tasks, don't release it. Exposing your customer base to a low-performing AI solution will discourage them from trying any improved solution you release in the future.
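As a concrete illustration, here is a minimal sketch of such a release gate in Python. Everything specific in it is an assumption made for illustration: the `(intent, succeeded)` record format, the intent names, and the 90% threshold are hypothetical, not part of the Alexa Skills Kit or any standard tooling.

```python
# Hypothetical sketch: gate a skill release on per-intent success rates
# observed in user testing. Data format and threshold are assumptions,
# not any official Alexa tooling.

from collections import defaultdict

# Each record: (intent_name, succeeded) from a moderated test session.
test_results = [
    ("PlayMusicIntent", True),
    ("PlayMusicIntent", True),
    ("PlayMusicIntent", False),
    ("SetTimerIntent", True),
    ("SetTimerIntent", False),
    ("SetTimerIntent", False),
]

RELEASE_THRESHOLD = 0.90  # assumed bar; tune to your own tolerance


def success_rates(results):
    """Return {intent: fraction of successful attempts}."""
    totals = defaultdict(int)
    successes = defaultdict(int)
    for intent, ok in results:
        totals[intent] += 1
        if ok:
            successes[intent] += 1
    return {intent: successes[intent] / totals[intent] for intent in totals}


rates = success_rates(test_results)
failing = {i: r for i, r in rates.items() if r < RELEASE_THRESHOLD}

if failing:
    # Hold the release: first impressions of a failing intent are hard to undo.
    print("Hold the release; intents below threshold:", failing)
else:
    print("All intents meet the bar; safe to release.")
```

The point of gating is ordering: measuring per-intent success before launch is cheap, while re-engaging users who hit a failing intent on day one, as argued above, may not be possible at all.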