Becoming an expert often involves a period of apprenticeship, but AI has the potential to disrupt this process fundamentally. What happens when AI demands high levels of human expertise, yet its use inadvertently hinders the development of that expertise?
The aviation industry offers several case studies. In 2008, on Qantas Flight 72 (QF72) from Singapore to Perth, the flight computer of the Airbus A330 malfunctioned and pitched the plane into a violent nose-dive. The system mistakenly believed the aircraft was stalling.
Captain Kevin Sullivan, a former US Navy fighter pilot, instinctively grabbed the control stick when the plane's nose pitched down. He pulled back, but nothing happened. Then he did something counterintuitive: he released the stick. Drawing on his years of high-pressure flying, Sullivan trusted his intuition and flew the plane manually. Throughout the ordeal, the pilots received no explanation from the computer and had to rely on their own judgment.
Almost an hour after the first dive, Sullivan managed to land the plane safely at a remote airfield in northwestern Australia. While nearly 100 people were injured, some critically, there were no fatalities. Sullivan's experience and expertise enabled him to recognize the computer's erroneous interpretation of the situation. His years of traditional, high-pressure flying had equipped him with the skills necessary to intervene and take control when automation failed.
Picture the stark contrast between the fighter jets Sullivan trained in and today's airliners. In traditional hand flying, a tactile experience, the pilot's control stick connects to the aircraft's control surfaces through cables and pulleys. In the sophisticated fly-by-wire systems of the 21st century, the side-stick instead sends electronic signals to a computer, which decides how the control surfaces move, replacing the old mechanical link.
Fly-by-wire systems and cutting-edge electronics offer increased safety and ease of use. But in the heat of an emergency, they can leave pilots disoriented. Today, regulations forbid hand flying at cruising altitude, so many pilots have never felt an aircraft's natural response at that height.
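To make the architecture concrete, here is a deliberately simplified, hypothetical sketch of the fly-by-wire idea in Python. It is not Airbus's actual control logic; the sensor names, thresholds, and stall-protection rule are all invented for illustration. The structural point it shows is the one that mattered on QF72: the pilot's stick input is just one signal among several, and the computer, not the pilot, has the final say over the control surfaces.

```python
# Toy fly-by-wire control loop (illustrative only; not real avionics logic).
# All names, thresholds, and rules here are invented for this sketch.

from dataclasses import dataclass


@dataclass
class SensorData:
    angle_of_attack: float  # degrees, as reported by an air-data unit
    airspeed: float         # knots


STALL_AOA_LIMIT = 15.0  # hypothetical stall-protection threshold


def elevator_command(stick_input: float, sensors: SensorData) -> float:
    """Map pilot stick input (-1.0 full nose-down .. +1.0 full nose-up)
    to an elevator command. The computer sits between pilot and aircraft."""
    if sensors.angle_of_attack > STALL_AOA_LIMIT:
        # Envelope protection: the computer believes a stall is imminent
        # and commands nose-down, regardless of what the pilot asks for.
        return -0.5
    return stick_input


# In mechanical hand flying, the pilot's input *is* the command.
# In fly-by-wire, a faulty sensor can override a healthy pilot: a bogus
# angle-of-attack spike (the kind of bad data QF72's air-data unit produced)
# yields a nose-down command even while the pilot pulls full back.
bad_reading = SensorData(angle_of_attack=50.0, airspeed=470.0)
print(elevator_command(stick_input=+1.0, sensors=bad_reading))  # -> -0.5
```

Releasing the stick, as Sullivan did, makes sense in exactly this regime: when the computer is acting on bad data, fighting it input-for-input accomplishes nothing.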
QF72's harrowing story raises a crucial question about AI's role in the physical world: how do we balance AI systems designed to replace humans most of the time against the undeniable need for human expertise when those systems falter at critical moments? The real challenge lies in fostering and maintaining high levels of expertise in a world increasingly dependent on AI systems that handle the routine and leave humans only the exceptions.
Consider the conundrum: we're trying to develop expertise, but the very purpose of automation is to eliminate the need for it. It's a mind-bending paradox that leaves us questioning the future of human proficiency in an increasingly automated world, and an urgent debate that demands our attention and forces us to contemplate the consequences.
Take, for instance, the daunting task of integrating self-driving vehicles into our everyday lives. Sure, it's one thing to train a small, elite group like pilots, but it's a whole different ball game when we're talking about someone with only a few hours of driving experience taking the wheel of a self-driving car. The only viable long-term solution for widespread adoption of self-driving cars is to create a system where human intervention is never expected or required.
Now, let's consider the broader implications: What happens when AI empowers non-experts to perform tasks that were once exclusive to experts? The rapid advancements in large language models and generative AI are making this a reality, shaking up the very foundations of careers, jobs, and our overall well-being.
Will the push for automation lead to a world where human expertise is undervalued, or can we strike a delicate balance that allows both human mastery and AI to coexist and complement each other?
Recently, MIT researchers had humans work with a cutting-edge AI system to build a website, using GPT-3 to convert plain-English instructions into HTML code. The results? Programmers using GPT-3 were a whopping 30% faster. Even non-programmers, who would otherwise struggle with HTML, could create websites from scratch using the AI, and just as fast as expert programmers. Talk about a game-changing transfer of value from expert to non-expert. So, in a world where non-experts can perform tasks just as well as the pros, why bother paying for expertise?
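To picture what participants in such a study were actually doing, here is a minimal sketch of the workflow: plain-English description in, HTML out. This is not the MIT team's code; it assumes the legacy (pre-1.0) openai Python SDK and a GPT-3-era model name, and the prompt and file handling are invented for illustration.

```python
# Minimal sketch: turn a plain-English page description into HTML with GPT-3.
# Assumes the legacy (pre-1.0) openai Python SDK; the model name, prompt, and
# everything else here are placeholders, not the study's actual setup.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

description = (
    "A landing page with a centered title 'Garden Club', "
    "a short welcome paragraph, and a signup button."
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-era completion model
    prompt=f"Write a complete HTML page for: {description}\n\n<!DOCTYPE html>",
    max_tokens=512,
    temperature=0,  # keep the output as deterministic as possible
)

html = "<!DOCTYPE html>" + response["choices"][0]["text"]
with open("index.html", "w") as f:
    f.write(html)
```

Notice what the non-programmer never touches: the HTML itself. That is exactly where the roadblocks in the next paragraph tend to appear.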
But things aren't always as straightforward as they appear. Sure, the non-programmer might hit a roadblock and need expert help, at least for now. But expert programmers can harness the power of LLMs to speed up their work, tackle mundane or repetitive tasks, and push the boundaries of what's possible. By doing so, they free up time to focus on the bigger picture, like addressing data bias or bringing different stakeholders to the coding table. As AI changes the landscape, what we value in a programmer may evolve.
AI is shaking up how humans develop expertise and how that expertise is valued. AI can err in unpredictable ways, struggle to adapt to new environments, and create more decisions for humans to handle. It separates predicting an outcome from judging what that outcome means, adds complexity to decision-making, changes what we expect of humans when machines fail, and alters the nature of learning. Paradoxically, even as AI threatens the development of expertise, it amplifies expertise's ultimate value.