Mind for our Minds: Meaning

April 12, 2023

Mind for our Minds is an insight series on the future of the collective intelligence of humans and machines in a complex world.

Søren Kierkegaard, one of the fathers of existentialism, argued that it is the act of making choices that brings meaning to our lives; that through making choices, we live authentically, forming our own opinions rather than being guided by the opinions of others or of society as a whole. For Kierkegaard, the meaning of our existence emerges through true experiences, when we make choices of our own rather than following those of others.

What would Kierkegaard, who died in Copenhagen in 1855, make of AI's pervasive predictions and filtering of our experiences? Big Tech has promised that personalized experiences are more meaningful; that increasing engagement reflects meaning. Kierkegaard might agree that reducing passive experiences could increase meaning. But that would only be true if the remaining experiences were active and authentic. 

One might argue that AI filtering on our behalf is essential in a modern-day existence. Content floods our attention. Data is beyond the limits of our comprehension. The opportunity for new information is overwhelming. Each notification teases the potential to close a knowledge gap. 

Even in 1846, Kierkegaard argued that the pursuit of knowledge was distracting people from finding meaning, writing, "people in our time, because of so much knowledge, have forgotten what it means to exist." He warned that when presented with unlimited choices, we face a dizzying anxiety. The infinite opportunity to seek knowledge through the internet can feel so overwhelming as to demand filtering.

Indeed, by filtering our experiences and limiting our choices, Big Tech may be saving us from an existential crisis or angst. As Kierkegaard wrote: 

Standing on a cliff, a sense of disorientation and confusion clouds you. Not only are you afraid of falling, you also fear succumbing to the impulse of throwing yourself off. Nothing is holding you back. Dread, anxiety, and anguish rise to the surface.

Perhaps AI's filtering of experiences saves us from this anxiety and anguish as a parent saves a child, guiding us down a safe path and away from the scary cloud of confusion. Perhaps AI’s filtering reduces the choking uncertainty of where to pay attention. Perhaps AI’s filtering removes the paralysis of angst, allowing us to make meaningful choices.

That would be true if an algorithm could understand each of us well enough to help us make the choices we would have made on our own; if it could predict our choices well enough to present a selection that discards the ones we wouldn't make. Perhaps then. But not yet. And perhaps never.

Despite the vast quantity of information AI can have about us, it is limited primarily to our activity online. AI can try to quantify meaning by our shares, comments, and emojis as well as how we cruise around the internet. But that is the limit of its understanding of us.

AI doesn’t know if we have a meaningful conversation offline about something we read online. It doesn’t know if we meditate to decrease our anxiety about a particular news story. Or if we ponder a question from a friend on a long walk. The only thing AI can attempt to predict is whether we will interact with content online. Without understanding the rest of our lives, it isn’t possible for AI to know what will be meaningful.

AI can make inferences about our emotional state, but it can’t truly understand how we feel. Systems can infer that state from mouse clicks, time spent, and plenty of other technical signals. Facial recognition can recognize a smile, but it can’t tell whether that smile is real or fake, whether it indicates true happiness or an attempt to put others at ease. And AI’s empathetic limitations only grow as we look into the future.

Each authentic choice we make involves an implicit prediction of the future. We aren’t just making a decision based on what we think will happen. We are making a decision based on how we will feel. Will we like that new school? Will we enjoy being married? Will we feel fulfilled in our new job? If AI can’t empathize with us in the moment, is there any hope it can empathize with how we will feel in the future? And, if it can’t, how can it help us make those choices for ourselves? 

Perhaps AI’s ability to see patterns across essentially unlimited dimensions in data could benefit us by suggesting ideas we wouldn’t uncover ourselves. Perhaps that would expand the serendipity surface of our lives, raising the odds of finding something surprising and unusual.

Consider large language models, which are designed with a smidge of variety to create text that is more "human-like." Rather than always choosing the most likely next word or phrase in a sentence, the algorithm is tuned to sometimes choose something other than the most likely. Perhaps counterintuitively, it is that choice to use a less-than-most-likely next word that creates the variety in outcome and novelty of speech that sounds more human and less robotic.
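
How might that tuning work? The essay doesn't name a mechanism, but one common approach is temperature sampling: rather than always taking the top-scoring word, the model samples from its probability distribution, with a temperature parameter controlling how often less likely words get through. Below is a minimal sketch, assuming temperature sampling is the mechanism in play; the function, the toy scores, and the temperature value are all illustrative, not taken from any particular model.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick a next-token index from raw model scores (logits).

    Lower temperature sharpens the distribution toward the top token;
    higher temperature flattens it, letting unlikely tokens through.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy scores for four candidate next words.
logits = [3.0, 2.0, 1.0, 0.5]
greedy = int(np.argmax(logits))            # always picks word 0: robotic
sampled = sample_next_token(logits, 0.8)   # usually word 0, sometimes not
print(greedy, sampled)
```

At a temperature near zero the sampler collapses into greedy decoding and the output turns repetitive and robotic; raise it, and the less-than-most-likely words that make text sound human begin to slip through.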

The novelty of speech seems to provide opportunities for serendipity. It's easy to wonder: what will this AI tell me now? What new idea will it share? What wondrous happenstance might I stumble upon? But the novelty creates only an illusion of serendipity. The novelty is more about which words to use than what to say. The boundaries of the answer space are intentionally narrow. The AI may be tuned to be creative enough to sound like a human, but it is also tuned to be narrow enough to provide a useful answer.

The reality is that the multi-dimensional, seemingly endless variety is constrained by the boundaries of the training data. The training data may be vast—potentially including the entire documented history of society—but it is still a data set with limits. LLMs are trained on hundreds of billions of words—a staggering amount that would take a human thousands of years to read. But the data is still only words that have been digitized in some public form.
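
As a rough sanity check on that claim, here is a back-of-envelope calculation; the corpus size, reading speed, and schedule are assumptions chosen for illustration, not figures from the essay.

```python
# Back-of-envelope: how long would a human need to read an LLM-scale corpus?
corpus_words = 300e9        # assume ~300 billion words, an order of magnitude
words_per_minute = 250      # typical adult reading speed
hours_per_day = 8           # reading as a full-time job, every day of the year

words_per_year = words_per_minute * 60 * hours_per_day * 365
years = corpus_words / words_per_year
print(f"~{years:,.0f} years of full-time reading")  # roughly 6,800 years
```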

The models have no concept of words that have been only spoken or only thought. The models have no concept of the images, sounds, relationships, emotions, and dreams that matter to us. The models have no embodied cognition that provides feelings of attraction, disgust, fear, and confidence. If models are providing predictions and recommendations without these core human senses, aren't those predictions and recommendations fundamentally limiting? 

The reality of AI is also that the seemingly endless potential in its data is constrained by the objectives set by the AI’s designers and owners. Generating predictions costs money, and those predictions need to be paid for by the user, either with money or with attention. If the user is paying with attention, then someone else is paying for that attention. And that attention will be directed by whoever is paying, creating an inauthentic set of potential choices for the user. Is it possible to make an authentic choice if the options presented are inauthentic? Are we able to make choices of our own if the AI is directing our attention according to the direction of others?

The core philosophical issue with attempting to make AI meaningful is a conundrum: the very act of choosing meaningful content for us means that consuming that content cannot be meaningful. By filtering our experiences, Big Tech’s AI removes our agency to choose. And by removing our choice, it eliminates our ability to live authentically. An inauthentic life has no meaning.

Read more from the Mind for our Minds series:

Mind for our Minds: Introduction

Mind for our Minds: Meaning

Mind for our Minds: Culture