Crime-predicting A.I. isn’t science fiction. It’s about to roll out in India

Artificial intelligence programs promise to do everything, from predicting the weather to piloting autonomous cars. Now AI is being applied to video surveillance systems, promising to thwart criminal activity not by detecting crimes in progress but by identifying a crime before it happens. The goal is to prevent violence such as sexual assaults, but could such admirable intentions turn into Minority Report-style pre-crime nightmares?

Such a possibility may seem like a plot line from an episode of Black Mirror, but it’s no longer the stuff of science fiction. Cortica, an Israeli company with deep roots in security and AI research, recently formed a partnership in India with Best Group to analyze the terabytes of data streaming from CCTV cameras in public areas. One of the goals is to improve safety in public places, such as city streets, bus stops, and train stations.

It’s already common for law enforcement in cities like London and New York to employ facial recognition and license plate matching as part of their video camera surveillance. But Cortica’s AI promises to take it much further by looking for “behavioral anomalies” that signal someone is about to commit a violent crime.


The software is based on the type of military and government security screening systems that try to identify terrorists by monitoring people in real time, looking for so-called micro-expressions: minuscule twitches or mannerisms that can betray a person’s nefarious intentions. Such telltale signs are so small they can elude an experienced detective but not the unblinking eye of AI.

At a meeting in Tel Aviv before the deal was announced, co-founder and COO Karina Odinaev explained that Cortica’s software is intended to address the challenge of identifying objects that don’t fit neatly into conventional, predefined categories. One example Odinaev described involved driving corner cases, such as a bed falling off a truck on the highway: precisely the sort of unique events that programs controlling autonomous cars will have to be able to handle in the future.

“For that, you need unsupervised learning,” Odinaev said. In other words, the software has to learn in the same way that humans learn.
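Cortica hasn’t published any code, but the unsupervised-learning idea can be made concrete with a minimal, hypothetical sketch: learn what “normal” frame features look like from unlabeled footage, then flag anything that doesn’t resemble them. The feature vectors, prototype count, threshold, and the fit_prototypes/anomaly_score helpers below are all invented for illustration, not Cortica’s actual method.

```python
import numpy as np

def fit_prototypes(features, k=8, iters=20, seed=0):
    """Toy k-means: learn k prototype vectors from unlabeled frame features."""
    rng = np.random.default_rng(seed)
    protos = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every frame to its nearest prototype
        dists = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each prototype to the mean of the frames assigned to it
        for j in range(k):
            if np.any(labels == j):
                protos[j] = features[labels == j].mean(axis=0)
    return protos

def anomaly_score(frame_feature, protos):
    """Distance to the nearest learned prototype: high means 'unlike anything seen'."""
    return np.linalg.norm(protos - frame_feature, axis=1).min()

# Illustrative usage with made-up 64-dimensional frame features.
normal_frames = np.random.default_rng(1).normal(size=(500, 64))
protos = fit_prototypes(normal_frames)
odd_frame = np.full(64, 5.0)  # something far from the learned norm
print(anomaly_score(odd_frame, protos) > anomaly_score(normal_frames[0], protos))  # True
```

In a surveillance setting, frames scoring above some threshold would be the “behavioral anomalies” handed to a human operator; everything below it would pass unremarked.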

Going directly to the brain


To create such a program, Cortica did not go the neural network route (which despite its name is based on probabilities and computing models rather than how actual brains work). Instead, Cortica went to the source, in this case a cortical segment of a rat’s brain. By keeping a piece of brain alive ex vivo (outside the body) and connecting it to a microelectrode array, Cortica was able to study how the cortex reacted to particular stimuli. By monitoring the electrical signals, the researchers were able to identify specific groups of neurons called cliques that processed specific concepts. From there, the company built signature files and mathematical models to simulate the original processes in the brain.
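Cortica hasn’t disclosed its signature-file format, but the basic notion of matching a small set of co-active units (echoing a neuronal clique) against stored concept signatures can be sketched roughly as follows. The SIGNATURES names, unit indices, and threshold are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical "signature files": each concept is tied to a small set of feature
# units that tend to fire together, loosely echoing a neuronal clique.
SIGNATURES = {
    "person_running": np.array([3, 17, 42, 88]),
    "object_dropped_on_road": np.array([5, 17, 60, 91]),
}

def match_signatures(active_units, signatures=SIGNATURES, threshold=0.75):
    """Return every concept whose signature units are (mostly) active in the input."""
    active = set(active_units)
    matches = {}
    for concept, units in signatures.items():
        overlap = len(active.intersection(units.tolist())) / len(units)
        if overlap >= threshold:
            matches[concept] = overlap
    return matches

# Example: unit indices reported by an upstream feature extractor for one frame.
print(match_signatures([3, 17, 42, 88, 120]))  # {'person_running': 1.0}
```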

The result, according to Cortica, is an approach to AI that allows for advanced learning while remaining transparent. In other words, if the system makes a mistake — say, it falsely anticipates that a riot is about to break out or that a car ahead is about to pull out of a driveway — programmers can easily trace the problem back to the process or signature file responsible for the erroneous judgment. (Contrast this with so-called deep learning neural networks, which are essentially black boxes and may have to be completely re-trained if they make a mistake.)
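That transparency claim boils down to being able to ask, for any judgment, which stored signature produced it. Continuing the toy signature idea from the sketch above (again with invented names and numbers), the audit trail might look something like this:

```python
# Toy, hypothetical signatures, as in the previous sketch.
SIGNATURES = {
    "riot_forming": [12, 34, 56, 78],
    "car_pulling_out": [9, 21, 44],
}

def explain_prediction(active_units, signatures=SIGNATURES):
    """For each concept, show exactly which signature units fired: the trail a
    programmer could follow when a judgment turns out to be wrong."""
    active = set(active_units)
    return {
        concept: {"fired": sorted(active.intersection(units)), "needed": len(units)}
        for concept, units in signatures.items()
    }

# If "riot_forming" was predicted falsely, inspect (and then fix or delete) the
# single signature responsible, rather than retraining an opaque model.
print(explain_prediction([12, 34, 78, 100]))
```

The contrast with a deep network is that the offending signature is an editable artifact, not a weight buried among millions of others.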

Initially, Cortica’s Autonomous AI will be used by Best Group in India to analyze the massive amounts of data generated by cameras in public places to improve safety and efficiency. Best Group is a diversified company involved in infrastructure development and a major supplier to government and construction clients, so it wants to learn how to tell when things are running smoothly, and when they’re not.

A display showing a facial recognition system for law enforcement during the NVIDIA GPU Technology Conference, which showcases AI, deep learning, virtual reality and autonomous machines. Saul Loeb/AFP/Getty Images

But it is hoped that Cortica’s software will do considerably more in the future. It could be used in future robotaxis to monitor passenger behavior and prevent sexual assaults, for example. Cortica’s software can also combine data not just from video cameras, but also from drones and satellites. And it can learn to judge behavioral differences, not just between law-abiding citizens and would-be criminals, but also between a peaceful crowded market and a political demonstration that’s about to turn violent.

Such predictive information would allow a city to deploy law enforcement to a potentially dangerous situation before lives are lost. However, in the wrong hands, it could also be abused. A despotic regime, for example, might use such information to suppress dissent and arrest people before they even had a chance to organize a protest.


In New York City, during a demonstration of how Cortica’s Autonomous AI is being applied to autonomous cars, Cortica’s vice president, Patrick Flynn, explained that the company is focused on making the software efficient and reliable to deliver the most accurate classification data possible. What clients do with that information — stop a car or make it speed up to avoid an accident, for example — is up to them. The same would apply to how a city or government might allocate police resources.

“The policy decisions are strictly outside of Cortica’s area,” Flynn said.

Would we give up privacy for improved security?

Nevertheless, the marriage of AI to ubiquitous networks of webcams is starting to generate more anxiety about privacy and personal liberty. And it’s not just foreign despotic governments that people are worried about.

In New Orleans, Mayor Mitch Landrieu has proposed a $40 million crime-fighting surveillance plan, which includes networking together municipal cameras with the live feeds from private webcams operated by businesses and individuals. The proposal has already drawn public protests from immigrant workers concerned that federal immigration officials will use the cameras to hunt down undocumented workers and deport them.

Meanwhile, like subjects trapped in a Black Mirror world, consumers may already be unwittingly submitting themselves to such AI-powered surveillance. Google’s $249 Clips camera, for example, uses a rudimentary form of AI to automatically take pictures when it sees something it deems significant. Amazon, whose Alexa is already the subject of eavesdropping paranoia, has purchased popular video doorbell company Ring. GE Appliances is also planning to debut a camera-equipped hub for kitchens later this year. In Europe, Electrolux will debut a steam oven this year with a built-in webcam.

While these technologies raise the specter of Big Brother monitoring our every move, there’s still the laudable hope that sophisticated AI like Cortica’s program could improve safety and efficiency, and save lives. One can’t help wondering, for example, what would have happened if such technology had been available and used in the Uber that 19-year-old Nikolas Cruz took on his way to murder 17 people at Marjory Stoneman Douglas High School. The Uber driver didn’t notice anything amiss with Cruz, but could an AI-equipped camera have detected micro-expressions revealing his intentions and alerted the police? In the future, we may find out.
