
Decoding the AI Future: Generative AI blurs traditional boundaries between human creativity and machine tasks

August 9, 2023

Generative AI is in the "extreme headline" stage. Studies and model-based research provide some scientific grounding for understanding its impact on jobs, but anecdotes and dubious productivity-centered studies muddy the waters.

OpenAI’s ChatGPT reopened the job automation debate. In March, the company released a paper titled “GPTs are GPTs: An early look at the labor market impact potential of large language models,” in which researchers concluded that 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of large language models. More recently, researchers at Princeton studied the exposure of professions to generative AI, concluding that highly educated, highly paid, white-collar occupations may be the most exposed.

The traditional approach in AI design has often hinged on the dichotomy of automation versus augmentation. Machines have typically been trusted with predictable, repeatable, high-volume tasks or calculations, while humans have taken on tasks demanding creativity, emotional intelligence, and the capacity to navigate complex, ambiguous situations.

Recent developments prompt us to reevaluate this division of labor between humans and machines. We now know that large language models can be creative, can reason analogically, can think in metaphors, and exhibit personalities, theory of mind, and cognitive empathy, at least to some degree. This means we need to rethink the automation-versus-augmentation divide, especially since advancing AI capabilities will, in turn, deepen our understanding of human capabilities.

As we’ve seen before, this complexity is often glossed over when AI researchers announce reaching a human-level benchmark on a specific measure of intelligence, and that result is then used to justify automating a human process. This can produce "so-so automation," in which the machine, although competent on the benchmark, fails to match human performance in real-world scenarios. Consequently, humans are left to handle the marginal yet important tasks that remain, often viewed as tedious or mundane.

This trap of automation restricts humans to the narrow confines set by the automation design, preventing them from tackling what they excel at—navigating complexity, handling unpredictability, and making context-dependent judgments, decisions, and actions.

Even when we think about augmentation rather than automation, the approach to work is inherently biased: its narrow vision often overlooks the complexity, sociality, and interrelatedness of human work, so practical implementations fall short. Decisions and actions are woven together with predictions and judgments, operating within a multifaceted system where feedback loops connect the system and human cognition, encompassing both conscious and subconscious processes. Much of this is grounded in experiential knowledge and isn't easily quantifiable or observable.

The deployment of technology doesn't simply occur; it's a deliberate choice. By breaking away from the rigid dichotomy of automation versus augmentation, we might discover a more versatile model for envisioning the future of work.

Read the full Decoding the AI Future series: