When Company X, a Fortune 500 global retailer, began training its employees in data science and AI, it was acutely aware of what was at stake. Adopting AI comes with risks, including privacy, workforce displacement, reputational harm, and equity and fairness.
Few companies take these risks seriously. In 2020, McKinsey published research concluding that 70% of companies take no steps to mitigate AI risks other than cybersecurity and regulatory compliance. So Company X asked us to help raise awareness and set the stage for responsible AI action within its data science team.
Sonder Studio started by asking their data scientists a basic question: Do you consider yourself responsible for the uses imagined for the applications and algorithms you create?
The answers were broadly in line with our expectations: an almost 50:50 split of opinion. A little over half of the data scientists considered themselves responsible, and around 45% said they were not.
This matters. It is a clear signal that companies must decide what they expect of their data scientists. It can’t be left ambiguous, or the quality of responsible AI development will effectively be a coin toss.
Our approach was to bring the data scientists to a shared starting point. Together we developed a common understanding of the important concepts, such as (TL;DR):
These are complex issues with no easy or clear answers. But ignoring responsible AI is no longer an option. We helped Company X transition to a different way of working with data and AI, one where equity and fairness are top of mind and AI can be used for good.
Subscribe to our newsletter for a closer look at the new responsibilities that come with AI. Subscribe here.