Researchers design new test to detect discrimination in AI programs

Artificial intelligence isn’t yet conscious, but algorithms can still discriminate, sometimes subtly expressing the hidden biases of the programmers who created them. It’s a big, complicated problem, and it grows as AI systems become more enmeshed in our everyday lives.

But there may be a fix — or at least a way to monitor algorithms and tell whether they’ve inappropriately discriminated against a demographic.

Proposed by a team of computer scientists from Google, the University of Chicago, and the University of Texas at Austin, the Equality of Opportunity in Supervised Learning approach analyzes the decisions that machine learning programs make — rather than the decision-making processes themselves — to detect discrimination. The very nature of these algorithms is to make decisions on their own, with their own logic, in a black box hidden from human review. As such, the researchers see gaining access to the black boxes as practically futile.

“Learned prediction rules are often too complex to understand,” University of Chicago computer scientist and co-author Nathan Srebro told Digital Trends. “Indeed, the whole point of machine learning is to automatically learn a [statistically] good rule…not one whose description necessarily makes sense to humans. With this view of learning in mind, we also wanted to be able to ensure a sense of non-discrimination while still treating learned rules as black boxes.”

Srebro and co-authors Moritz Hardt of Google and Eric Price of UT Austin developed an approach to analyze an algorithm’s decisions and make sure it didn’t discriminate in the decision-making process. To do this, they led with the anti-prejudicial principle that a decision about a particular person should not be solely based on that person’s demographic. In the case of an AI program, the algorithm’s decision about a person should not reveal anything about that person’s gender or race in a way that would be inappropriately discriminatory.
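In practice, that principle can be audited from the algorithm’s outputs alone. The sketch below is a minimal illustration — not the authors’ code — of the kind of check the paper’s “equality of opportunity” condition implies for a binary decision: among people who truly qualify, the predictor should say “yes” at roughly the same rate for every demographic group. The loan-approval data and group labels here are purely hypothetical.

```python
import numpy as np

def true_positive_rates(y_true, y_pred, groups):
    """Compute the predictor's true positive rate for each demographic group.

    Under an equality-of-opportunity-style check, these rates should match
    across groups: among people who truly qualify (y_true == 1), each group
    should receive a positive decision at the same rate.
    """
    rates = {}
    for g in np.unique(groups):
        qualified = (groups == g) & (y_true == 1)
        rates[g] = y_pred[qualified].mean() if qualified.any() else float("nan")
    return rates

# Hypothetical loan example: y_true = actually repaid, y_pred = model's decision.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(true_positive_rates(y_true, y_pred, groups))
# A large gap between groups "a" and "b" would flag a potential violation,
# without ever opening up the model's internal decision-making process.
```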

It’s a test that doesn’t solve the problem directly but helps flag and prevent discriminatory processes. For this reason, some researchers are wary.

“Machine learning is great if you’re using it to work out the best way to route an oil pipeline,” Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield, told The Guardian. “Until we know more about how biases work in them, I’d be very concerned about them making predictions that affect people’s lives.”

Srebro recognizes this concern but does not consider it a sweeping critique of his team’s approach. “I agree that in many applications with high-stakes impact on individuals, especially by government and judicial authorities, use of black box statistical predictors is not appropriate and transparency is vital,” he said. “In other situations, when used by commercial entities and when individual stakes are lower, black box statistical predictors might be appropriate and efficient. It might be hard to completely disallow them but still desirable to control for specific protected discrimination.”

The paper on Equality of Opportunity in Supervised Learning was one of a handful presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, that offered approaches to detecting discrimination in algorithms, according to The Guardian.
