Researchers design new test to detect discrimination in AI programs

Artificial intelligence isn’t yet conscious, but algorithms can still discriminate, sometimes subtly expressing the hidden biases of the programmers who created them. It’s a big, complicated problem as AI systems become more enmeshed in our everyday lives.

But there may be a fix — or at least a way to monitor algorithms and tell whether they’ve inappropriately discriminated against a demographic.

Proposed by a team of computer scientists from Google, the University of Chicago, and the University of Texas at Austin, the Equality of Opportunity in Supervised Learning approach analyzes the decisions that machine learning programs make — rather than the decision-making processes themselves — to detect discrimination. The very nature of these algorithms is to make decisions on their own, with their own logic, in a black box hidden from human review. As such, the researchers see gaining access to the black boxes as practically futile.

“Learned prediction rules are often too complex to understand,” University of Chicago computer scientist and co-author Nathan Srebro told Digital Trends. “Indeed, the whole point of machine learning is to automatically learn a [statistically] good rule…not one whose description necessarily makes sense to humans. With this view of learning in mind, we also wanted to be able to ensure a sense of non-discrimination while still treating learned rules as black boxes.”

Srebro and co-authors Moritz Hardt of Google and Eric Price of UT Austin developed an approach to analyze an algorithm’s decisions and make sure it didn’t discriminate in the decision-making process. To do this, they led with the anti-prejudicial principle that a decision about a particular person should not be based solely on that person’s demographic group. In the case of an AI program, the algorithm’s decision about a person should not reveal anything about that person’s gender or race in a way that would be inappropriately discriminatory.
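
Concretely, the paper’s equality-of-opportunity criterion asks that, among the people who genuinely merit a positive outcome, a model grant one at the same rate in every demographic group. Since that check needs only the model’s inputs and outputs, it can be run on a black box. Below is a minimal sketch in Python of what such an audit might look like; the function names, the single “gap” summary statistic, and the toy loan data are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def true_positive_rates(y_true, y_pred, groups):
    """Per-group true positive rate: of the people who truly merited a positive
    outcome (y_true == 1), what fraction did the black-box model approve (y_pred == 1)?"""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        qualified = (groups == g) & (y_true == 1)
        if qualified.any():
            rates[g] = y_pred[qualified].mean()
    return rates

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest spread in true positive rates across groups. A gap near zero is
    consistent with equality of opportunity; a large gap flags the model for review."""
    rates = true_positive_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan data: the model approves qualified applicants in group "a"
# far more often than equally qualified applicants in group "b".
y_true = [1, 1, 1, 1, 1, 1, 0, 0]              # 1 = actually repaid the loan
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]              # 1 = model approved the loan
groups = ["a", "a", "a", "b", "b", "b", "a", "b"]
print(equal_opportunity_gap(y_true, y_pred, groups))  # roughly 0.67, a large gap
```

The important point is that the audit never inspects the learned rule itself, only its outputs, which is what lets it treat the predictor as a black box.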

It’s a test that doesn’t solve the problem directly but helps flag and prevent discriminatory processes. For this reason, some researchers are wary.

“Machine learning is great if you’re using it to work out the best way to route an oil pipeline,” Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield, told The Guardian. “Until we know more about how biases work in them, I’d be very concerned about them making predictions that affect people’s lives.”

Srebro recognizes this concern but does not consider it a sweeping critique of his team’s approach. “I agree that in many applications with high-stakes impact on individuals, especially by government and judicial authorities, use of black box statistical predictors is not appropriate and transparency is vital,” he said. “In other situations, when used by commercial entities and when individual stakes are lower, black box statistical predictors might be appropriate and efficient. It might be hard to completely disallow them but still desirable to control for specific protected discrimination.”

The paper on Equality of Opportunity in Supervised Learning was one of a handful presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, that offered approaches to detecting discrimination in algorithms, according to The Guardian.

Dyllan Furness
Former Digital Trends Contributor
Dyllan Furness is a freelance writer from Florida. He covers strange science and emerging tech for Digital Trends, focusing…