How to make ethical AI without stymying technologists

August 19, 2019

Many companies implementing AI have done so by rapidly adopting sophisticated technologies. This has led to technologists—data scientists, AI researchers and engineers—being far ahead of the rest of the business.

But now, business leaders are waking up to the hazards of this hands-off approach. Leaders are having to answer the question “is your AI ethical?” while technologists are wondering what it means for their workflow, metrics and decision making.

“Ethical AI” is a catch-all term for governing AI in a way that is consistent with the law and with the goals and values of a profession, a company or society more broadly. That means it must be done by cross-functional teams focused on the broader goals, timelines and tradeoffs, and it involves many complex, interrelated processes. Ethical AI is far more complex than it seems, because it is akin to designing an ethical artificial person.

Whether it is a board member expressing concern about a legal exposure, a customer worried about how they are categorized or an employee worried about the impact of AI on their job, it’s clear that AI design needs to move beyond the technologists, no matter how sophisticated and well-intentioned they are or how explainable the software appears to be.

But how? AI is an expert area. The details and mechanics of AI are highly specialized and, so far, the learning curve has been far too steep for a non-technical business leader to climb in practice. Many technical leaders are also reluctant to relinquish control of the AI development process because involving others slows it down. Having lawyers, marketers, customer agents and other stakeholders deeply involved in AI development, with translators who are instrumental in developing a shared understanding of benefits, risks and tradeoffs, doesn’t make for moving fast.

Making the transition to cross-functional teams and cross-cutting design processes is the next step. But how can companies do this without simply pushing more decisions onto technical people, or grinding things to a halt because of a lack of training and expertise on the business side? First, people need to be empowered and given access to a set of tools designed to help them cross the expertise chasm.

In our experience, there are three key steps that companies need to take to manage the transition to implementing ethical AI.

Develop business leaders’ confidence in the technology

Most people are not going to become data scientists or UX designers. But it is in these two disciplines that many ethical AI decisions are made: deep in the technology. AI is unlike traditional technology; it is a mathematical optimization and prediction problem across vast data sets. Many decisions, and in fact much of the creativity, are made in the process of tweaking models and sorting data. This is the front line of data and algorithmic bias, fairness settings, proxy discrimination and deciding how much to allow the technology to do. AI can be so powerful that there is always the ethical dilemma of “just because we can do something, doesn’t mean we should.”
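To make this concrete, here is a minimal sketch in Python, using entirely made-up scores and groups, of one such decision point: a single decision threshold. The numbers, the groups and the idea of comparing selection rates are illustrative assumptions, not a prescribed method.

    # A hypothetical sketch of where a "fairness setting" lives in code.
    # The scores and groups are invented; the point is that one small
    # tweak, the threshold, decides how often each group gets selected.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these are model scores for applicants from two groups.
    scores_group_a = rng.normal(loc=0.55, scale=0.15, size=1000)
    scores_group_b = rng.normal(loc=0.45, scale=0.15, size=1000)

    threshold = 0.5  # the "tweak": chosen by a technologist, felt by users

    rate_a = (scores_group_a >= threshold).mean()
    rate_b = (scores_group_b >= threshold).mean()

    print(f"Selection rate, group A: {rate_a:.2%}")
    print(f"Selection rate, group B: {rate_b:.2%}")
    print(f"Ratio of selection rates (B/A): {rate_b / rate_a:.2f}")

The point is not the code itself, but that a one-line choice like the threshold quietly determines how the system treats different groups of people.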

Business leaders who are confident in the mechanisms of AI are better placed to ask the right questions at the right time, earn the respect of the technologists and be able to drive an effective cross-functional team.

Allocate time upfront to setting the goal for the AI

Setting a goal for AI takes skill and an understanding of the nuance of the task. There are many factors to consider that require diverse perspectives:

  • Should the AI automate a human task or augment the humans who already do the work?
  • What is the “truth” in the data? Is that the right “truth” for the AI to have as a goal?
  • What other data sources can or should be considered, and can that data be ethically gathered and managed?
  • How will the AI fail? What happens when it does? Who is accountable when it does?

These questions can take time to answer and can return ambiguous answers. For example, if the goal of the AI is to treat people “fairly,” there are several different definitions of fairness the AI could use, each important to users, and they can turn out to be mathematically mutually exclusive.
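As a rough illustration, here is a toy calculation in Python with hypothetical numbers. It compares two common definitions, demographic parity (equal selection rates across groups) and equal opportunity (equal true positive rates), and shows that when the underlying base rates differ, a single system cannot satisfy both at once.

    # A toy numeric check, with invented numbers, of why two fairness
    # definitions can be mutually exclusive when base rates differ.

    # Hypothetical groups: 30% of group A truly qualify, 60% of group B do.
    base_rate = {"A": 0.30, "B": 0.60}

    # Suppose the model satisfies equal opportunity exactly: it selects 80%
    # of qualified people and 10% of unqualified people in BOTH groups.
    tpr, fpr = 0.80, 0.10

    for group, p in base_rate.items():
        selection_rate = tpr * p + fpr * (1 - p)
        print(f"Group {group}: selection rate = {selection_rate:.0%}")

    # Prints roughly 31% for group A and 52% for group B: equal error
    # behaviour within groups forces unequal selection rates between them.

Someone, and not only the data scientist, has to decide which definition the AI is held to.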

Sometimes an AI design process can highlight a gap in company policy, but leaving it to the AI project to decide on policy-by-proxy is not a good idea. While a situation like this may mean new people have to develop new policy and the project slows down, one of the things people value in AI is its knack for highlighting what humans don’t yet know. Committing to designing ethical AI is committing to discovering more about your own ethics.

Focus on how AI fails

When an AI system is being designed, there is an unavoidable step: deciding how the system should be rewarded. At some point it will fail, delivering an unhelpful suggestion or reacting in an incorrect way. Ethical design takes account of the outcomes that are undesirable and maps them against key technical decisions. Cross-functional teams should be able to take an outcome in one part of the business, or for one user group, and trace how it affects ethics, bias, equality, privacy, accountability or competition in the system for other users and for the company.
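One lightweight way to start that mapping is a simple table tracing each failure mode to who bears the harm and which technical decision controls it. The sketch below, in Python, uses a hypothetical fraud-detection system; every entry is illustrative and would need to be rewritten for a real product.

    # A hypothetical scaffold for the mapping exercise described above:
    # each failure mode is traced to who bears the harm and which
    # technical decision controls how often it happens.
    failure_map = [
        {
            "failure": "false positive (flags a legitimate customer as fraud)",
            "who_is_harmed": "customer, support team",
            "technical_lever": "decision threshold, class weights in training",
        },
        {
            "failure": "false negative (misses actual fraud)",
            "who_is_harmed": "company, other customers",
            "technical_lever": "decision threshold, choice of reward or loss function",
        },
        {
            "failure": "errors concentrated in one user group",
            "who_is_harmed": "that group",
            "technical_lever": "training data coverage, per-group evaluation",
        },
    ]

    for row in failure_map:
        print(f"- {row['failure']}\n    harmed: {row['who_is_harmed']}\n"
              f"    lever:  {row['technical_lever']}")

The value of the exercise is in the conversation it forces: technologists name the levers, and business stakeholders name the harms.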

Perhaps the biggest challenge right now is that most teams lack the tools to bridge the gap between business leaders and technologists. Workshops and templates specifically designed for AI projects can scaffold the conversation, allowing business leaders to gain the trust of technologists and technologists to have the full assistance of their organizations in making decisions about ethical AI.