
Judgmental A.I. mirror rates how trustworthy you are based on your looks

Holding a mirror to artificial intelligence

As the success of the iPhone X’s Face ID confirms, lots of us are thrilled to bits at the idea of a machine that can identify us based on our facial features. But how happy would you be if a computer used your facial features to start making judgments about your age, your gender, your race, your attractiveness, your trustworthiness, or even how kind you are?


Chances are that, somewhere down the line, you’d start to get a bit freaked out, especially if the A.I. in question were using this information in ways that controlled the opportunities or options made available to you.


Exploring this tricky (and somewhat unsettling) side of artificial intelligence is a new project from researchers at the University of Melbourne in Australia. Taking the form of a smart biometric mirror, their device uses facial-recognition technology to analyze users’ faces and then presents an assessment of 14 different characteristics it has “learned” from what it sees.
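To make that description a little more concrete, here is a minimal sketch, in Python, of how a face-to-traits pipeline of this general kind could be wired together. It is not the Melbourne team’s code: the OpenCV face detector is a common stand-in, and the trait list and the `trait_model.predict` step are hypothetical placeholders for whatever classifier such a mirror would actually use.

```python
# Hypothetical sketch only: roughly the kind of face-to-traits pipeline described above.
# Assumes OpenCV (opencv-python) for face detection; the trait model itself is a placeholder.
import cv2

# Illustrative subset of the 14 characteristics mentioned in the article.
TRAITS = ["age", "gender", "ethnicity", "attractiveness", "trustworthiness"]


def detect_face(image_path):
    """Return the largest detected face region as a grayscale crop, or None."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the biggest face
    return gray[y:y + h, x:x + w]


def assess(image_path, trait_model):
    """Map a photo to a dict of trait scores using a supplied (hypothetical) model."""
    face = detect_face(image_path)
    if face is None:
        return {}
    scores = trait_model.predict(face)  # placeholder: one score per trait
    return dict(zip(TRAITS, scores))
```

Even in a toy sketch like this, every score the mirror hands back is only as trustworthy as the labels its trait model was trained on, which is precisely the point the researchers want visitors to grasp.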

“Initially, the system is quite secretive about what to expect,” Dr. Niels Wouters, one of the researchers who worked on the project, told Digital Trends. “Nothing more than, ‘hey, do you want to see what computers know about you?’ is what lures people in. But as they give consent to proceed and their photo is taken, it gradually shows how personal the feedback can get.”


As Wouters points out, problematic elements are present from the beginning, although not all users may immediately realize it. For example, the system only allows binary genders, and can recognize just five ethnicities — meaning that an Asian student might be recognized as Hispanic, or an Indigenous Australian as African. Later assessments, such as a person’s level of responsibility or emotional stability, will likely prompt a response from everyone who uses the device.

The idea is to show the dangers of biased data sets, and the way that problematic or discriminatory behavior can become encoded in machine learning systems. This is something that Dr. Safiya Umoja Noble did a great job of discussing in her recent book Algorithms of Oppression.

“[At present, the discussion surrounding these kind of issues in A.I.] is mostly led by ethicists, academics, and technologists,” Wouters continued. “But with an increasing number of A.I. deployments in society, people need to be made more aware of what A.I. is, what it can do, how it can go wrong, and whether it’s even the next logical step in evolution we want to embrace.”

With artificial intelligence increasingly used to make judgments about everything from whether we’ll make a good employee to our levels of aggression, devices such as the Biometric Mirror will only become more relevant.

Luke Dormehl
Former Digital Trends Contributor