Plastic and metal are no longer the most valuable design materials; AI has made human behavior a design material. And because a good portion of AI’s behavior relies on its post-design experience, designers can no longer be concerned only with intent. Designers need to plan for consequences, to foresee how the human-machine system will operate and to broaden the scope of their designs to include human choices and agency.
Modern AI makes inferences about people, which means that we need to think differently — not only about inputs and what we offer up by way of consent, but also about outputs and what is inferred about us using data from other sources. Data from many sources is combined with the aim of creating the most personal possible view of each individual. The ultimate success of personalization is to understand who we are today and who we will be tomorrow. Understanding our futures — all human futures — is a valuable business.
Tristan Harris calls the data known about us a “digital voodoo doll.” The big platforms have the best voodoo dolls, based on the quantity and quality of information gathered about each of us individually and in aggregate. Any company that has access to these voodoo dolls can decide whether to “push in a pin” and manipulate us into action. This manipulation makes it easier for the human-futures traders to make money. As with any tradable commodity, if you can know the future — if you can create the future — you have the best opportunity to make money. This means that the aim of personalization is not just to make the best predictions but to make the people being predicted more predictable.
This brings us to the paradox of personalization: the best way to personalize a person’s future is to make that future less personalized. To increase the efficiency of personalization, designers need to put individual consumers in a box that will fit the prediction of who they will be tomorrow. This state of over-personalization can feel accurate, but creepy. It can feel like magic, but also like dark sorcery. How did the AI know that I would want that? Or perhaps the question is, would I have wanted it if the AI hadn’t said that I would? Am I making choices for myself or is the AI making choices for me?
While human-futures design aims to create the future-you so that you will accept what is offered, human-centered design aims to understand the future-you and to create an offer you will want.
When AI is designed to predict human behavior, to guide that behavior toward the prediction and then to profit from it, human agency is fundamentally disrupted. We become predictable; we become “unpeople.”
Human-centered AI design must be the way forward. If AI design is untempered by human concerns for fairness, equity, justifiability and human accountability, if it only optimizes for one thing (profit), or if it is solely designed to create a future-you rather than understanding the future-you, then we all lose — especially in our authentic connections with others.
Human connection is a complicated thing. It requires each person to have the agency to act, to choose to connect, to actively reach out. The deepest connections require each person to show up, to be vulnerable, to risk something. And they require each person to have empathy, to try to understand the other.
In some ways, the goal of using AI to enhance human connection seems impossible. AI can easily be misused and remove human agency — not only by intent but also as a by-product of the AI solving a goal. AI cannot be vulnerable — it can fake emotions but it has no concept of risk or that gut fear of opening up. And AI cannot empathize — a non-conscious thing can’t understand what it is to be a conscious thing.
What if AI could understand what would free us and empower us — each of us individually — to give us the agency to connect? What if AI could understand what makes us feel safe, providing the setting to be vulnerable with each other? Could that understanding be individualized and personalized? And what if AI could help us understand each other, breaking down our barriers and biases?
Today we have AI that helps us communicate with each other better. We have AI that can interpret our emotions. We have AI that can help us understand each other better. We even have AI that can help us understand our own internal biases.
Part of the power of AI is that it can find patterns we can’t see; it can understand the world in ways that we can’t. It’s exciting to think about how personalized AI could help us understand each other in ways that we can’t even contemplate today. If it can, perhaps AI could help increase and enhance human connection.
This is our goal: to create a system so that people can design AI that humans want.