Ever since the first tool was wielded, humans have wrestled with technology’s double-edged sword: the fear of being usurped by our own creations. While we’re not quite unemployed en masse yet, it's naive to dismiss the angst that new technologies send through our job markets as mere growing pains that will be cured by retraining, resilience, or universal basic income. When it comes to people, the stakes are high. Our adaptability, though profound, is not instantaneous. We need time and resources to evolve alongside our tools.
Today’s generative AI is an unprecedented technology. For the first time, we're not just delegating tasks to lifeless tools, but entrusting decisions to digital entities with their own semblance of “thought”. The quest for automation, at its core, has always prized decision-making as the pinnacle of human utility. Now, that very endeavor threatens our claim to biological “peak cognition”.
Consider the two prevailing narratives for AI’s role in the world of work: automation and augmentation. Automation, in essence, moves work from humans to machines, giving machines duties once solely our domain. It shines in repetitive, mundane, or predictable tasks that are universally shirked by us. On the flip side, augmentation amplifies our abilities, making AI our intellectual sidekick rather than our replacement. It thrives when humans enjoy their role and AI can streamline the process, making it easier, faster, and better.
In the pre-generative AI era, we compartmentalized AI and human abilities neatly. We revered AI for tasks requiring unerring accuracy and repetitiveness, and retained our faith in human superiority for endeavors calling for creativity, empathy, and abstract reasoning.
However, as we'll see, these once-clear lines are blurring. The dawn of generative AI, and the evolution of AI at large, reveals a startling truth: we barely understand our own intelligence. With generative AI, we can now imagine how machines might make us more creative, caring, and connected. As burgeoning tools like advanced language models and image generators become standard in our creative, inventive, and communicative arsenal, we're forced to reevaluate our conception of human intellect and, by extension, our perception of work.
The traditional approach in AI design has often hinged on the dichotomy of automation versus augmentation. Machines have typically been trusted with predictable, repeatable, high-volume tasks or calculations, while humans have taken on tasks demanding creativity, emotional intelligence, and the capacity to navigate complex, ambiguous situations.
However, recent developments prompt us to reevaluate this division of labor between humans and machines. On one hand, AI's growing potential in tasks requiring emotional awareness presents a compelling case for automation in domains where it can reduce bias and bolster privacy.
On the other hand, as AI is integrated into larger labor systems, a fascinating pattern emerges: one individual's automation morphs into another's augmentation. When a rare, scarce, or expert skill is automated, it suddenly opens up to a wider, less skilled demographic. While this may lead to a drop in wages for a select group of experts, it concurrently expands opportunities for many more. A classic example is the transformation of London's taxi industry: once dominated by the small number of drivers who had mastered "The Knowledge," the city's rigorous licensing exam, it is now served by roughly ten times as many Uber drivers as test passers, thanks to Google's and Uber's automation of navigation.
Generative AI introduces a twist to the tale. As a cultural technology, it democratizes access to human expertise in an unprecedented manner, blurring the boundary between 'human' and 'machine' through its creative, empathetic, and reasoning capabilities. Generative AI forces us to reconsider how much we truly understand about phenomena we label as human intelligence—creativity, empathy, reasoning, invention.
This gives rise to a crucial question: how should we approach the decision to complement or replace human labor in an era teeming with cognitive, creative, and cultural AI? If we automate an expert programmer’s skills with AI, are we democratizing the world of coding, potentially reducing the expert's wages but expanding the overall pie? Or are we instead amplifying their skills, turning programmers into a coding elite, and widening the divide between programmers and non-programmers?