
MIT researchers are working to create neural networks that are no longer black boxes

Whether you like it — as companies like Google certainly do — or don’t entirely trust it — logical artificial intelligence proponent Selmer Bringsjord being one outspoken critic — there is no denying that brain-inspired deep learning neural networks have proven capable of making significant advances in a number of AI-related fields over the past decade.

But that is not to say it is perfect by any stretch of the imagination.

“Deep learning has led to some big advances in computer vision, natural language processing, and other areas,” Tommi Jaakkola, a Massachusetts Institute of Technology professor of electrical engineering and computer science, told Digital Trends. “It’s tremendously flexible in terms of learning input/output mappings, but the flexibility and power comes at a cost. That is, it’s very difficult to work out why it is performing a certain prediction in a particular context.”

This black-box lack of transparency would be one thing if deep learning systems were still confined to lab experiments, but they are not. Today, AI systems are increasingly rolling out into the real world, and that means they need to be open to scrutiny by humans.

“This becomes a real issue in any situation where there are consequences to making a prediction, or actions that are taken on the basis of that prediction,” Jaakkola said.

Fortunately, that is where a new project from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) comes into play. Researchers there have produced preliminary work showing that it is possible to train neural networks in such a way that they do not just offer predictions and classifications, but also rationalize their decisions.

For the study, the researchers examined neural nets trained on textual data. The network was divided into two modules: one that extracted segments of text and scored them on their length and coherence, and a second that performed the actual prediction or classification.

One data set the researchers tested their system on was a collection of reviews from a website where users rated beers. Each entry included both a text review and a corresponding star rating out of five. With these inputs and outputs, the researchers were able to fine-tune a system that “thought” along the same lines as human reviewers, thereby making its decisions more understandable.
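The two-module split described above can be sketched in miniature. The snippet below is a toy illustration, not the researchers’ actual model: where the real system learns both modules as neural networks end to end, this sketch uses a hand-set keyword table. A hypothetical generator selects a short, contiguous span of review text as a rationale, and a hypothetical predictor turns that span alone into a star rating. All names, weights, and word lists here are invented for illustration.

```python
# Toy sketch of a two-module "rationale" pipeline. Illustrative only:
# the real system trains neural networks; here a fixed keyword table
# stands in for learned parameters.

ASPECT_WEIGHTS = {  # hypothetical per-word contributions to an "appearance" score
    "golden": 1.0, "clear": 0.8, "hazy": 0.5, "murky": -0.5, "flat": -1.0,
}

def generator(tokens, span_len=3):
    """Pick the contiguous span carrying the most aspect signal.
    Restricting to short, contiguous spans is a crude stand-in for the
    'length and coherence' scoring described in the article."""
    best_start, best_score = 0, float("-inf")
    for start in range(len(tokens) - span_len + 1):
        span = tokens[start:start + span_len]
        score = sum(abs(ASPECT_WEIGHTS.get(w, 0.0)) for w in span)
        if score > best_score:
            best_start, best_score = start, score
    return tokens[best_start:best_start + span_len]

def predictor(rationale):
    """Map the extracted rationale alone to a 1-5 star rating."""
    raw = sum(ASPECT_WEIGHTS.get(w, 0.0) for w in rationale)
    return max(1.0, min(5.0, 3.0 + raw))  # shift a neutral 3-star baseline, then clamp

review = "pours a clear golden color with a hazy thin head".split()
rationale = generator(review)   # -> ['a', 'clear', 'golden']
stars = predictor(rationale)    # -> 4.8
```

Because the rating is computed only from the selected span, a human can read the rationale and see exactly which words drove the prediction, which is the core idea behind the CSAIL work.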

Ultimately, the system’s agreement with human annotations was 96 percent and 95 percent, respectively, when predicting ratings of beer appearance and aroma, and 80 percent when predicting palate.

The research is still in its early stages, but it is an intriguing advance in developing AI systems which make sense to human creators and can justify decisions accordingly.

“The question of justifying predictions will be a prevalent issue across complex AI systems,” Jaakkola said. “They need to be able to communicate with people. Whether the solution is this particular architecture or not remains to be seen. Right now, we’re in the process of revising this work and making it more sophisticated. But it absolutely opens up an area of research that is very important.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…