
How giving yourself up to AI is an existential problem

January 20, 2020

We all want to be “better” versions of ourselves: fitter, healthier, kinder, more empathetic, more productive, better informed. The list is likely endless. We have our stated preferences, preferences we are aware of, and preferences we aren’t conscious of. Our desired future self may or may not align with our expressed preferences in any given moment. Being human is complicated!

Technology is often imposed on us, but when we choose to delegate physical, emotional, cognitive, or ethical work to a machine, we outsource a part of ourselves. Outsourcing to an AI can make us more efficient, but it can have unintended consequences, and there are always trade-offs.

In Re-engineering Humanity, Brett Frischmann and Evan Selinger’s overarching theme is how AI is changing what it means to be human. The book weaves design, philosophy, and technology together and offers clever examples and thought experiments that illuminate just how sharply AI can run counter to human agency.

Outsourcing ourselves to an AI comes with existential consequences:

  • Passivity – by accepting assistance, we make less personal effort. When passivity takes hold, we become spectators rather than active participants.
  • Decreased agency – when we participate in less of a process than we would if we performed it ourselves, we experience less of the action because we’ve expended less effort. Our behavior becomes less intentional and we lose some measure of control.
  • Decreased responsibility – when we abdicate control, we become less culpable, which also leaves us less entitled to feel proud of positive results.
  • Increased ignorance – delegating tasks limits our understanding of how things work. Even if proxies are reliable, we don’t fully know how they act on our behalf. This is a particular problem with AI, because our requests are translated into algorithms that don’t always behave as we would.
  • Detachment – diminished participation leads to disengagement. The deeper the disengagement, the less intimate the experience. Ultimately this leads to alienation.
  • Decreased independence – when outsourcing becomes habitual, we become dependent on the proxy to get things done. We lose skills and motivation.
  • Increased helplessness – we become biased towards thinking we can’t affect big problems.

Human-centered design in the age of AI isn’t only about solving a user need; it’s about solving it in a way that doesn’t erode human agency over the longer term. AI can make us more efficient, more productive, and better able to achieve our goals. In some areas our skills may indeed atrophy, but this may be more than compensated for by growing skills in other domains of life.

The default design practice today is to make experiences “frictionless.” The authors lay out a challenge for technologists and designers: to create AI that rethinks what outsourcing human physical, emotional, cognitive, and ethical work means. “Frictionless” as a goal in itself may set humans on a path of reduced agency and autonomy, where we surrender free will to automated knowledge and action. Ultimately we become machines. We create a world where passing the Turing Test involves humans becoming more machine-like rather than machines passing for humans.

In a techno-social economy, as more human problems become subject to computation, human-centered AI will be AI that ultimately keeps humans in conscious control, even as we surrender more of ourselves to machine assistance.