AI is frequently applied by people in positions of power to people who have less power. And as AI diffuses across more and more users, this power imbalance may concentrate even further. An effective way to understand how power is enacted in an AI system is to start with a power-mapping exercise – and to practice “studying up”: looking up at those who have the most agency and autonomy.
Not only is this good ethical practice, but it can also improve the accuracy of an AI system’s predictions. This follows from what researchers call a “theory of agency,” which holds that prediction accuracy is a by-product of agency: the more agency an actor has over an outcome, the more predictable that outcome is from their decisions.
Take performance-management scoring as an example. Any proxy for performance is most directly a by-product of a manager’s decision rather than of the employee’s actual performance: it is the manager’s decision that gets datafied first, before any other measure of the employee’s work. This means that an AI that predicts how a manager will score an employee will be more accurate than an AI that predicts how the employee will actually perform.
Somewhat counter-intuitively, this requires no new data. The same data speaks differently depending on who is asking and what questions are asked of it. The exact same company data that management uses to score and predict employee performance could also be used by employees to understand which managers promote in a biased way or fail to offer equal opportunities to the individuals on their teams.
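As a minimal sketch of this point, the same records can answer both management’s question (what score will each employee receive?) and the employees’ question (which managers promote at unusually low rates?). Every record, name, and field below is invented purely for illustration – no real dataset or system is implied:

```python
from collections import defaultdict

# Hypothetical review records: one shared dataset, two different questions.
reviews = [
    {"employee": "ana",   "manager": "kim", "score": 4, "promoted": True},
    {"employee": "ben",   "manager": "kim", "score": 4, "promoted": False},
    {"employee": "carla", "manager": "raj", "score": 5, "promoted": True},
    {"employee": "dev",   "manager": "raj", "score": 3, "promoted": False},
    {"employee": "eli",   "manager": "kim", "score": 5, "promoted": True},
]

def average_score_by_employee(rows):
    """Management's question: what score is each employee likely to get?"""
    scores = defaultdict(list)
    for r in rows:
        scores[r["employee"]].append(r["score"])
    return {emp: sum(s) / len(s) for emp, s in scores.items()}

def promotion_rate_by_manager(rows):
    """Employees' question: how often does each manager promote their reports?"""
    counts = defaultdict(lambda: [0, 0])  # manager -> [promotions, reports]
    for r in rows:
        counts[r["manager"]][0] += int(r["promoted"])
        counts[r["manager"]][1] += 1
    return {mgr: promos / total for mgr, (promos, total) in counts.items()}
```

Nothing about the data changed between the two functions; only the direction of the question did. A real audit of manager behavior would of course need far more care (sample sizes, confounders, protected attributes), but the structural point stands: the dataset itself does not dictate who gets studied.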
While it may not always be realistic to flip power and turn decision subjects into decision makers, it is often valuable to change how we think about agency. This can lead to important insights around control and feedback, as well as usability and explainability. In the case of performance management, the process of “studying up” could surface new opportunities for employees to find mentors, or to understand how the culture and values of the organization affect them personally.
Some questions to answer: