
Fake news? A.I. algorithm reveals political bias in the stories you read

Here in 2020, internet users have ready access to more news media than at any other point in history. But things aren’t perfect. Click-driven ad models, online filter bubbles, and the competition for readers’ attention mean that political bias has become more entrenched than ever. In worst-case scenarios, this can tip over into fake news. Other times, it simply means readers receive a slanted version of events without necessarily realizing it.

What if artificial intelligence could be used to accurately analyze political bias and help readers better understand the skew of whatever source they are reading? Such a tool could conceivably work like a spellcheck or grammar-check function: instead of letting you know when a word or sentence isn’t right, it would do the same for the neutrality of news media, whether reporting or opinion pieces.

That is what the creators of a new algorithm developed by The Bipartisan Press claim to be able to do. They have built an A.I. that, they say, can predict whether text leans left or right with more than 96% accuracy.

“The A.I., which is based on state-of-the-art technology like BERT and XLNet, takes in text input and outputs a numerical value to denote the bias,” Winston Wang, managing editor at The Bipartisan Press, told Digital Trends. “We used a scale of -1 to 1 for simplicity, where a negative value shows left bias, while a positive value shows right bias. The absolute value of the result shows the degree of bias. For example, an article with a score of 0.8 has a considerable amount of pro-right bias.”
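To make that scale concrete, here is a minimal Python sketch of how a score on the -1-to-1 scale could be turned into a readable label. This is purely illustrative: the cutoff values and label wording are our own assumptions, not thresholds published by The Bipartisan Press.

```python
def describe_bias(score: float) -> str:
    """Turn a bias score in [-1, 1] into a human-readable label.

    Negative values lean left, positive values lean right, and the
    absolute value indicates the degree of bias. The cutoffs below
    are illustrative guesses, not The Bipartisan Press's thresholds.
    """
    if not -1.0 <= score <= 1.0:
        raise ValueError("score must be between -1 and 1")
    direction = "left" if score < 0 else "right"
    magnitude = abs(score)
    if magnitude < 0.1:
        return "minimal bias"
    if magnitude < 0.5:
        return f"moderate {direction} bias"
    return f"strong {direction} bias"

print(describe_bias(0.8))   # strong right bias
print(describe_bias(-0.3))  # moderate left bias
```

With these made-up cutoffs, a score of 0.8 comes back as “strong right bias,” which lines up with Wang’s example of a considerably pro-right article.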

The system uses a variety of A.I. approaches, some of which are described here. What makes A.I. exciting for a task like this is that it opens up the possibility of assessing news coverage and other content at a scale that would be impossible for human raters. (And, provided it works as well as described, potentially with more consistent accuracy, too.)
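For readers curious what that kind of pipeline looks like in practice, here is a rough sketch of the general technique: fine-tuning a BERT-style transformer to emit a single regression value on the -1-to-1 scale. It assumes the Hugging Face transformers library and the generic bert-base-uncased checkpoint; none of this is The Bipartisan Press’s actual code, model, or training data.

```python
# Sketch of a transformer-based bias scorer, assuming the Hugging Face
# "transformers" library. Illustrates the general technique only, not
# The Bipartisan Press's actual model or training pipeline.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # stand-in; the real system reportedly builds on BERT/XLNet

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=1 configures the classification head for single-value regression.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def bias_score(text: str) -> float:
    """Return a score in [-1, 1]; negative leans left, positive leans right."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logit = model(**inputs).logits.squeeze()
    # tanh squashes the unbounded regression output onto the [-1, 1] scale.
    return torch.tanh(logit).item()

# With an untrained head this number is meaningless until fine-tuned.
print(bias_score("The senator's common-sense plan will finally rein in wasteful spending."))
```

An off-the-shelf checkpoint like this produces arbitrary numbers until it has been fine-tuned on articles labeled with bias scores; the sketch only shows the shape of the pipeline: tokenize, run the transformer, squash the single regression output onto the bias scale.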

Wang said that the team is currently working on a Chrome browser plugin that could help educate readers about the biases of the news articles they read. The A.I. could also be used to power better content-recommendation systems. Finally, it could make news writers aware of biases they may not even realize they have, prompting them toward greater objectivity.
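As a hypothetical illustration of the recommendation idea, the sketch below greedily assembles a reading list whose running bias score stays close to neutral. The greedy strategy, function names, and sample scores are all our own assumptions; the article does not describe how such a system would actually work.

```python
def balanced_slate(candidates: list[tuple[str, float]], k: int = 5) -> list[str]:
    """Pick k article titles from (title, bias score in [-1, 1]) pairs,
    greedily keeping the slate's cumulative bias as close to zero as possible.
    Hypothetical sketch, not a described feature of the real system."""
    slate, total = [], 0.0
    remaining = list(candidates)
    for _ in range(min(k, len(remaining))):
        # Choose the article that keeps the running bias nearest to neutral.
        best = min(remaining, key=lambda c: abs(total + c[1]))
        slate.append(best[0])
        total += best[1]
        remaining.remove(best)
    return slate

articles = [("Tax plan analysis", 0.6), ("Climate report", -0.4),
            ("Budget explainer", 0.1), ("Healthcare op-ed", -0.7)]
print(balanced_slate(articles, k=3))
# ['Budget explainer', 'Climate report', 'Tax plan analysis']
```

A real recommender would balance bias against relevance and freshness, of course; the point is only that a per-article bias score gives a recommendation system a new signal to optimize.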

You can check out a demo of the A.I. here. For what it’s worth, this story has “minimal” bias.
