
A.I. predicts how you vote by looking at where you live on Google Street View

Google Maps’ Street View feature is a great way to explore the world around you, but could it be revealing more about your neighborhood than you think? That’s quite possible, suggests new research coming out of Stanford University. Computer science researchers there have demonstrated how deep learning artificial intelligence can scour the images on Google Street View and draw conclusions about issues like the political leaning of a particular area, just by looking at the cars parked out on the street.

“We wanted to show that useful insight can be gained from images, the same way people do this for social networks or other textual-based data,” Timnit Gebru, one of the lead researchers on the paper, told Digital Trends. “Some of the car-politics or car-race associations were intuitive, but still surprising that we could capture from our data.”


The deep learning neural network was trained on a dataset of more than 50 million Google Street View images from a variety of cities. This data was then compared against ground-truth census data to help the algorithm draw the right connections between race, education, income, and voter preference on one hand, and the make, model, and year of every car produced since 1990 on the other. The artificial intelligence uncovered a number of intriguing tidbits, such as the fact that if the number of sedans in a neighborhood is greater than the number of pickups, there is an 88 percent chance the precinct votes Democrat. More pickups than sedans on your street? That means there’s an 82 percent chance you’re in Republican territory.
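To make that sedan-versus-pickup relationship concrete, here is a minimal Python sketch of the heuristic as reported above. The precinct names, vehicle lists, and the predict_leaning helper are hypothetical illustrations, not the researchers’ actual pipeline, which used a deep neural network to classify cars by make, model, and year before matching the counts against census and precinct-level voting data.

```python
from collections import Counter

# Hypothetical per-precinct lists of vehicle types detected in Street View
# imagery. In the Stanford study these labels came from a deep model that
# classified individual cars; here they are hard-coded for illustration.
detected_vehicles = {
    "precinct_12": ["sedan", "sedan", "pickup", "sedan", "suv"],
    "precinct_34": ["pickup", "pickup", "sedan", "pickup"],
}

def predict_leaning(vehicles):
    """Toy version of the sedan-vs-pickup heuristic reported in the paper:
    more sedans than pickups suggests a Democratic-voting precinct (~88%
    in the study), more pickups than sedans a Republican one (~82%)."""
    counts = Counter(vehicles)
    sedans, pickups = counts["sedan"], counts["pickup"]
    if sedans > pickups:
        return "likely Democratic (about 88 percent in the study)"
    if pickups > sedans:
        return "likely Republican (about 82 percent in the study)"
    return "no clear signal from this heuristic"

for precinct, vehicles in detected_vehicles.items():
    print(precinct, "->", predict_leaning(vehicles))
```

The real system, of course, did not stop at a single count; it aggregated millions of classified cars and combined them with demographic data to estimate much finer-grained neighborhood characteristics.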

While Google is unlikely to add “likely voter demographic” as a tool on Street View anytime soon, the research demonstrates how impressive modern A.I. is, not just at identifying objects but also at drawing actionable conclusions from that information. As Gebru points out, similar research could be used to explore things like the links between neighborhoods and health or pollution levels.

A paper describing the work, “Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States,” was recently published in the journal Proceedings of the National Academy of Sciences.
