
Algorithm outperforms humans at spotting fake news

An artificial intelligence system that can tell the difference between real and fake news — often with better success rates than its human counterparts — has been developed by researchers at the University of Michigan. Such a system may help social media platforms, search engines, and news aggregators filter out articles meant to misinform.

“As anyone else, we have been disturbed by the negative effect that fake news can have in major political events [and] daily life,” Rada Mihalcea, a UM computer science professor who developed the system, told Digital Trends. “My group has done a significant amount of work on deception detection for nearly ten years. We saw an opportunity to address a major societal problem through the expertise we accumulated over the years.”


Mihalcea and her team developed a linguistic algorithm that analyzes written speech and looks for cues such as grammatical structure, punctuation, and complexity, which may offer telltale signs of fake news. Since many of today’s news aggregators and social media sites rely on human editors to spot misinformation, assistance from an automated system could help streamline the process.

To train their system, the researchers represented linguistic features like punctuation and word choice as data, then fed that data into an algorithm.

“Interestingly, what algorithms look for is not always intuitive for people to look for,” Mihalcea said. “In this and other research we have done on deception, we have found for instance that the use of the word ‘I’ is associated with truth. It is easy for an algorithm to count the number of times ‘I’ is said, and find the difference. People however do not do such counting naturally, and while it may be easy, it would distract them from the actual understanding of the text.”

The system demonstrated a 76-percent success rate at spotting fake news articles, compared to around 70 percent for humans. Mihalcea envisions such a system helping both news aggregators and end users distinguish between true and intentionally false stories.

The system can’t completely replace humans, however. For one thing, it doesn’t fact-check, so well-meaning (but ultimately false) content will still slip through.

The researchers presented a paper detailing the system at the International Conference on Computational Linguistics in Santa Fe, New Mexico, on August 24.

Dyllan Furness
Former Contributor
Dyllan Furness is a freelance writer from Florida. He covers strange science and emerging tech for Digital Trends, focusing…
TikTok wants to tweak its algorithm to avoid problematic content

TikTok today announced it would be rolling out additional customization features for its "For You" feed, the endlessly scrolling video page that made the app so popular. The company acknowledged in a release that its predictive algorithm could reinforce negative experiences by repeating videos about emotionally volatile topics -- like breakups.

TikTok came to popularity on the strength of its algorithm. Bordering on prescience, the app would recommend videos that adhered so closely to users' tastes that it was even claimed to predict the sexuality of people who were not yet aware of it themselves. At the same time, the algorithm's tendency to give you more of what you're interested in can lead to negative outcomes if what you are interested in is having a negative effect on your life.

Facial recognition tech for bears aims to keep humans safe

If bears could talk, they might voice privacy concerns. But their current inability to articulate thoughts means there isn’t much they can do about plans in Japan to use facial recognition to identify so-called "troublemakers" among the local bear population.

With bears increasingly venturing into urban areas across Japan, and the number of bear attacks on the rise, the town of Shibetsu in the country’s northern prefecture of Hokkaido is hoping that artificial intelligence will help it to better manage the situation and keep people safe, the Mainichi Shimbun reported.

Can A.I. beat human engineers at designing microchips? Google thinks so

Could artificial intelligence be better at designing chips than human experts? A group of researchers from Google's Brain Team attempted to answer this question and came back with interesting findings. It turns out that a well-trained A.I. is capable of designing computer microchips -- and with great results. So great, in fact, that Google's next generation of A.I. computer systems will include microchips created with the help of this experiment.

Azalia Mirhoseini, a computer scientist on Google Research's Brain Team, explained the approach in Nature together with several colleagues. Artificial intelligence usually has an easy time beating a human mind when it comes to games such as chess. Some might say that A.I. can't think like a human -- but in the case of microchips, that proved to be the key to finding some out-of-the-box solutions.
