Science fiction has given us many iconic malevolent A.I. characters. However, these are often figures like Terminator’s T-800 or Alien’s Ash, who commit emotionless murder in pursuit of an end goal. Those that exhibit more unhinged, paranoid behavior, like 2001: A Space Odyssey’s HAL 9000, frequently do so because of a fault in their programming rather than by design.
That’s what makes MIT’s “Norman” project so intriguing. Named after Psycho’s Norman Bates, it’s a newly created artificial intelligence billed as the “world’s first psychopath A.I.” Shown randomly generated inkblot tests, it offers disturbing interpretations like “man shot dead in front of his screaming wife” or “man gets pulled into dough machine.” What caused it to have this terrible view of the world? Access to Reddit, of course.
Norman was trained on image captions from the infamous subreddit r/watchpeopledie, dedicated to documenting real instances of death. Due to ethical and technical concerns, as well as the graphic nature of the videos posted there, the A.I. was given only the captions describing the images, not the footage itself. However, since it has observed nothing but horrifying image captions, it sees death in every subsequent picture it looks at. Think of it a bit like that saying about how, for someone with a hammer, every problem looks like a nail. Except that instead of nails, it sees people beaten to death with hammers.
If you’re wondering why on earth this would ever be a good idea, it’s because the project is meant to illustrate a problem with biased data sets. Essentially, machine learning works by analyzing vast troves of data. Feed it biased data, and you get algorithms that spit out the wrong responses — whether that means systemically racist results or, well, this kind of thing.
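The "garbage in, garbage out" dynamic behind Norman can be sketched with a deliberately trivial toy model. This is not MIT's actual system — just a hypothetical majority-label predictor showing that a model trained only on one kind of example can only ever answer with that kind of example:

```python
from collections import Counter

def train(examples):
    """Count how often each label appears in the training data."""
    return Counter(label for _, label in examples)

def predict(label_counts):
    """Predict the most frequent label seen during training."""
    return label_counts.most_common(1)[0][0]

# A biased data set: every caption in the training set carries
# the same grim label, much like Norman's r/watchpeopledie diet.
biased_examples = [
    ("man falls from scaffolding", "violent death"),
    ("crowd flees burning building", "violent death"),
    ("car crosses the median", "violent death"),
]

model = train(biased_examples)

# Whatever you show this model, it can only repeat what its
# data contained -- every inkblot looks like a crime scene.
print(predict(model))  # -> violent death
```

Real systems are vastly more sophisticated, but the failure mode is the same: the model's worldview is bounded by its training data.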
“Our group is currently releasing a new project to fight against machine learning-based bias and discrimination,” the researchers told Digital Trends.
In another possible future research direction, they are interested in expanding the inkblot aspect of the project to use data mining to see if there’s an explanation for why people see different things in inkblot tests. So far, they have collected more than 200,000 user responses. “We are hoping to analyze this data to see what kind of clusters these responses create,” they said. “For example, are there specific groups of people who respond to the inkblots quite differently than others?” (And are those people by any chance regular visitors of r/watchpeopledie, just like Norman?)
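The researchers haven't described how they plan to cluster those 200,000 responses, but the basic idea of grouping free-text answers by similarity can be sketched in a few lines. This is a hypothetical greedy word-overlap clusterer, not their method; the `threshold` value and the sample responses are invented for illustration:

```python
def jaccard(a, b):
    """Word overlap between two sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def cluster(responses, threshold=0.2):
    """Greedily group responses whose word overlap meets the threshold."""
    clusters = []  # each cluster is a list of (text, word set) pairs
    for text in responses:
        words = set(text.lower().split())
        for group in clusters:
            # Compare against the first member of each existing cluster.
            if jaccard(words, group[0][1]) >= threshold:
                group.append((text, words))
                break
        else:
            clusters.append([(text, words)])
    return [[text for text, _ in group] for group in clusters]

responses = [
    "a butterfly with open wings",
    "a butterfly landing on a flower",
    "man shot dead in front of his wife",
    "two people dancing",
]
for group in cluster(responses):
    print(group)
```

Here the two butterfly answers land in one cluster while the darker and the neutral responses each form their own — the kind of separation that might distinguish a typical respondent from a Norman-like one.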
To be honest, we’re just relieved to hear that none of them are planning to apply any of Norman’s lessons to, say, making the next generation of Roomba more efficient. A murder-happy vacuum cleaner sounds like a really bad idea!