The Paradox of Personalization: How AI and Personal Agency Conflict

February 23, 2020
At the heart of personalization lies a natural tension: AI’s strength is predictability, while humanity’s strength is unpredictability. It’s not an easy paradox to resolve.

AI makes predictions. Predictions are valuable – especially when predictions involve human behavior. In User Friendly, Kuang and Fabricant discuss how the most valuable raw material in product design is not glass or steel or plastic, it is human behavior. In Surveillance Capitalism, Zuboff details the perils we face with the capture of human “behavioral surplus,” which is used as the raw material for predictions, and monetized by AI-powered platforms. The trade in human futures is how Google and Facebook make their billions.

AI’s promise is personalization – delivering frictionless experiences that make our lives easier. We can offload cognitive, emotional or moral tasks to AI. For this we need AI that can predict our behavior. AI’s ability to predict human behavior is greater than most people realize. AI can detect signals in our online actions that are beyond our comprehension, then turn those signals into highly predictive patterns.

AI can also influence our behavior at a subconscious level, because our preferences are often formed by associative learning without our conscious awareness. This means that if AI can predict our future preferences, it also has the capacity to manipulate them. Because these digital technologies operate beyond our awareness and beyond our ability to conceptualize the data, their insights are available to a machine but not to us. So when it comes to determining our future selves, AI may have the upper hand.

Here is the fundamental paradox of personalization: AI optimizes so that humans become more predictable, but our very human-ness revolves around our ability to be unpredictable. We even rely on unpredictability to experience connection with others. If everything were predictable, we wouldn’t need to work with each other to build a shared vision or tackle the unplanned.

In other words, AI’s efficiency goal is in conflict with human agency.

In democratic societies, agency is a central value. This brings a lot of messy inefficiency. Society and its institutions don’t actually optimize. It sounds counterintuitive, but the social value of leaving a wide range of opportunities open for the future generally exceeds the value society could realize by trying to optimize in the present. Our default in the US is to leave things undetermined. We need to play, to explore, to be unpredictable and to have unpredictability.

This doesn’t mean we want to be unpredictable all the time, or that we shouldn’t outsource some of our thinking and actions. It doesn’t mean we can’t use technology to help us reach our personal goals. But it does mean we are in fundamental conflict with the commercial incentives behind any AI that can profit, or self-deal, as humans are made more predictable. One of the inequalities AI introduces is the gap between how much we know about ourselves and how much others know about us.

AI’s real advantage is knowing us better than we know ourselves and offering us, in the moment, things that predictably steer us in the directions it has predicted, rather than the ones we may need to discover for ourselves. This is the market in human futures, where machine learning is prioritized over human learning and the real bias is a bias against humans.

This is not the same as old-school advertising or traditional technology. AI is different because it is role-creating. It learns from data, creates its own knowledge that humans struggle to understand, interferes with our agency in ways we can’t detect, and may be working toward an objective that is contrary to ours.

That doesn’t mean AI isn’t useful and helpful, or that we can’t use it to better ourselves and our societies. But it does mean we need to keep machines biased toward humans – to bias human learning over machine learning.

The Turing Test is a famous concept, usually thought of as a test of whether a machine can pass for a human. But perhaps we should flip it around and ask: what does it mean for a human to behave in such a way that she passes for a machine? And how do we design to avoid it?

Personalization to a user-of-one incentivizes machines to make ever finer predictions of our future selves. But, paradoxically, predicting our future selves reduces our ability to freely find those selves. The younger we are, the more pernicious this effect may be. Perhaps the ultimate protection we can give our kids is the right to figure themselves out before an AI does it for them.