
Case study: Helping Company X build more responsible AI

February 28, 2023
[Image: Two robots pushing baby strollers responsibly]

When Company X, a Fortune 500 global retailer, started to train its employees in data science and AI, its leaders were acutely aware of what was at stake. Adopting AI comes with risks around privacy, workforce displacement, reputation, and equity and fairness.

Few companies take these risks seriously enough. In 2020, McKinsey published research concluding that 70% of companies take no steps to mitigate AI risks beyond cybersecurity and regulatory compliance. So Company X asked us to help raise awareness and set the stage for responsible AI action across its data science team.

Sonder Studio started by asking their data scientists a basic question: Do you consider yourself responsible for the uses imagined for the applications and algorithms you create?

The answers were broadly in line with our expectations: an almost 50:50 split of opinion. A little over half of the data scientists considered themselves responsible, while around 45% said they were not.

This matters. It is a clear signal that companies must decide what they expect of their data scientists. It can't be left ambiguous, or the quality of responsible AI development will effectively be a coin toss.

Our approach was to bring the data scientists to a shared starting point. Together we developed a common understanding of key concepts, including (TL;DR):

  • Dignity: All humans have the same amount. Dignity can’t be traded, bought, or sold.
  • Privacy: Instead of thinking about inputs—what we offer up—we need to think about outputs—what they infer about us.
  • Explainability, justifiability, and accountability: Just because something is explained doesn’t mean it’s justified or that a person is accountable.
  • New inequalities and dilemmas: AI encourages many problems to be framed as computationally optimizable, which shifts power from the users of a social system to the designers of the techno-social system that replaces it.
  • Bias, prejudice, and fairness: The tension between accuracy and fairness requires us to reason precisely about how different cohorts in the data are treated.

These are complex issues with no easy or clear answers. But ignoring responsible AI is no longer an option. We helped Company X transition to a different way of working with data and AI, one where equity and fairness are top of mind.

Where AI can be used for good.

Subscribe to our newsletter for a closer look at the new responsibilities that come with AI.