Artificial intelligence isn’t yet conscious, but algorithms can still discriminate, sometimes subtly expressing the hidden biases of the programmers who created them. It’s a big, complicated problem as AI systems become more enmeshed in our everyday lives.
But there may be a fix, or at least a way to monitor algorithms and tell whether they’ve inappropriately discriminated against a demographic.
Proposed by a team of computer scientists from Google, the University of Chicago, and the University of Texas at Austin, the Equality of Opportunity in Supervised Learning approach analyzes the decisions that machine learning programs make, rather than the decision-making processes themselves, to detect discrimination. The very nature of these algorithms is to make decisions on their own, with their own logic, in a black box hidden from human review. As such, the researchers see trying to pry open those black boxes as practically futile.
“Learned prediction rules are often too complex to understand,” University of Chicago computer scientist and co-author Nathan Srebro told Digital Trends. “Indeed, the whole point of machine learning is to automatically learn a [statistically] good rule…not one whose description necessarily makes sense to humans. With this view of learning in mind, we also wanted to be able to ensure a sense of non-discrimination while still treating learned rules as black boxes.”
Srebro and co-authors Moritz Hardt of Google and Eric Price of UT Austin developed an approach to analyze an algorithm’s decisions and make sure it didn’t discriminate in the decision-making process. To do this, they started from the anti-discrimination principle that a decision about a particular person should not be based solely on that person’s demographics. In the case of an AI program, the algorithm’s decision about a person should not reveal anything about that person’s gender or race in a way that would be inappropriately discriminatory.
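A rough way to picture that kind of outcome-only check (a simplified sketch, not the authors’ exact formulation): among the people who genuinely merit a positive outcome, compare how often the model grants one in each demographic group, using nothing but the model’s outputs and the actual results. The Python below uses made-up loan data and an arbitrary tolerance purely for illustration.

```python
# Sketch of an outcome-based fairness check in the spirit of "equality of
# opportunity": among people who truly deserve a positive outcome
# (y_true == 1), the model should grant one at roughly the same rate in
# every demographic group. All data below is hypothetical.
from collections import defaultdict

def positive_rate_among_qualified(y_true, y_pred, groups):
    """Return {group: share of truly qualified members the model approved}."""
    qualified = defaultdict(int)
    approved = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            qualified[group] += 1
            approved[group] += pred
    return {g: approved[g] / qualified[g] for g in qualified}

# Hypothetical loan decisions: 1 = repaid (y_true) or approved (y_pred).
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = positive_rate_among_qualified(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)  # per-group approval rate among the truly qualified
print("possible discrimination" if gap > 0.1 else "no flag")  # 0.1 is an arbitrary tolerance
```

Notice that the check needs only the algorithm’s predictions and the actual outcomes, never its internal rule, which is what lets the learned model stay a black box.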
It’s a test that doesn’t solve the problem directly but helps flag and prevent discriminatory processes. Because it stops short of a full solution, some researchers are wary.
“Machine learning is great if you’re using it to work out the best way to route an oil pipeline,” Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield, told The Guardian. “Until we know more about how biases work in them, I’d be very concerned about them making predictions that affect people’s lives.”
Srebro recognizes this concern but does not consider it a sweeping critique of his team’s approach. “I agree that in many applications with high-stakes impact on individuals, especially by government and judicial authorities, use of black box statistical predictors is not appropriate and transparency is vital,” he said. “In other situations, when used by commercial entities and when individual stakes are lower, black box statistical predictors might be appropriate and efficient. It might be hard to completely disallow them but still desirable to control for specific protected discrimination.”
The paper on Equality of Opportunity in Supervised Learning was one of a handful presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, that offered approaches to detecting discrimination in algorithms, according to The Guardian.