Human-centered design is an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, usability knowledge, and techniques. This approach enhances effectiveness and efficiency, improves human well-being, user satisfaction, accessibility and sustainability; and counteracts possible adverse effects of use on human health, safety and performance. — ISO 9241-210:2010(E)
Human-centered design practices have been used in the design of some of the greatest products and services of our time. The practice of HCD continues to evolve as practitioners discover new ways to connect with people and serve new needs.
AI and machine learning are changing how people think about human-centered design for the simple reason that an intelligent, learning system creates a new kind of relationship. In the same way that humans develop a “theory of mind” about the motivations and perspectives of other humans, people also develop a “theory of machine” about the motivations of an intelligent agent.
Even though AI is ubiquitous in our daily lives – search, maps, voice assistants, recommendation systems – the design of AI is often distinctly not human-centered. While developers have created immensely powerful AI, we are only beginning to understand how complex and far-reaching an AI system can be. It is only in the last few years that the downsides of AI have been appreciated: algorithmic bias, psychological manipulation, addictive tech, and an AI’s ability to select us based on preferences and behavior.
Some AI is so powerful, so cheap and so easy to implement that it is sometimes adopted without full consideration of the impact on the broader group of stakeholders. An example is facial recognition in policing. In late 2017, the Washington County Sheriff’s Office became the first law enforcement agency in the country known to use Amazon’s artificial-intelligence tool Rekognition. As the Washington Post reports, “almost overnight, deputies saw their investigative powers supercharged, allowing them to scan for matches of a suspect’s face across more than 300,000 mug shots taken at the county jail since 2001. A grainy picture of someone’s face — captured by a security camera, a social-media account or a deputy’s smartphone — can quickly become a link to their identity, including their name, family and address.” The technology is cheap and easy to deploy: Rekognition requires no major technical infrastructure. Washington County spent about $700 to upload its first big haul of photos, and pays about $7 a month.
This is incredible technology, something that would have been the stuff of science fiction only a few years ago. The problem is that the users – law enforcement – are a separate group from the stakeholder population. That population is made up of law-abiding citizens correctly identified as such, law-abiding citizens falsely identified as criminals, criminals accurately identified as criminals, and criminals who falsely go unidentified. This matrix – technically called a confusion matrix – of true negative, false positive, true positive and false negative exists for all AI applications and is a vital component of design. Every AI application should have a designated person whose job it is to decide how the AI design will deal with false results. In the case of community policing, the result of not handling this rollout in an accountable and transparent way right from the start is that either law enforcement overreaches or this valuable technology gets outlawed, as is happening in some parts of the USA.
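The four outcomes described above can be made concrete with a short sketch. The following Python snippet is illustrative only, using hypothetical match results (not any real Rekognition output); `True` stands for "is actually the suspect" in `actual` and "flagged as a match" in `predicted`:

```python
def confusion_matrix(actual, predicted):
    """Count the four confusion-matrix outcomes for paired boolean labels."""
    counts = {"tp": 0, "fp": 0, "tn": 0, "fn": 0}
    for a, p in zip(actual, predicted):
        if a and p:
            counts["tp"] += 1   # criminal correctly identified
        elif not a and p:
            counts["fp"] += 1   # law-abiding citizen falsely flagged
        elif not a and not p:
            counts["tn"] += 1   # law-abiding citizen correctly cleared
        else:
            counts["fn"] += 1   # criminal missed by the system
    return counts

# Hypothetical outcomes for five face scans:
actual    = [True, False, False, True, False]
predicted = [True, True,  False, False, False]
print(confusion_matrix(actual, predicted))
# → {'tp': 1, 'fp': 1, 'tn': 2, 'fn': 1}
```

In a policing context, the design question is which of these counts matters most: a system tuned to minimize false negatives (missed criminals) will inevitably produce more false positives (innocent people flagged), and someone accountable must own that trade-off.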
Human-centered AI design goes beyond user interface design; it takes account of the broader implications of AI, including accountability for mistakes, ethics, bias and governance. It treats an AI as an active agent with a distinct intelligence that humans are responsible for designing.