
Crime-predicting A.I. isn’t science fiction. It’s about to roll out in India

Artificial intelligence programs promise to do everything from predicting the weather to piloting autonomous cars. Now AI is being applied to video surveillance systems, promising to thwart criminal activity not by detecting crimes in progress, but by identifying a crime before it happens. The goal is to prevent violence such as sexual assaults, but could such admirable intentions turn into Minority Report-style pre-crime nightmares?

Such a possibility may seem like a plot line from an episode of Black Mirror, but it’s no longer the stuff of science fiction. Cortica, an Israeli company with deep roots in security and AI research, recently formed a partnership in India with Best Group to analyze the terabytes of data streaming from CCTV cameras in public areas. One of the goals is to improve safety in public places, such as city streets, bus stops, and train stations.

It’s already common for law enforcement in cities like London and New York to employ facial recognition and license plate matching as part of their video camera surveillance. But Cortica’s AI promises to take it much further by looking for “behavioral anomalies” that signal someone is about to commit a violent crime.


The software is based on the type of military and government security screening systems that try to identify terrorists by monitoring people in real time, looking for so-called micro-expressions — minuscule twitches or mannerisms that can betray a person’s nefarious intentions. Such telltale signs are so small they can elude an experienced detective but not the unblinking eye of AI.

At a meeting in Tel Aviv before the deal was announced, co-founder and COO Karina Odinaev explained that Cortica’s software is intended to address challenges in identifying objects that aren’t easily classified according to traditional stereotypes. One example Odinaev described involved corner cases (such as a bed falling off a truck on the highway) that are encountered in driving situations, precisely the sort of unique events that programs controlling autonomous cars will have to be able to handle in the future.

“For that, you need unsupervised learning,” Odinaev said. In other words, the software has to learn in the same way that humans learn.
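Unsupervised learning, in this context, means finding structure in unlabeled data rather than training on examples already tagged as “crime” or “no crime.” As a rough illustration of the general idea only (not Cortica’s actual pipeline), frame-level feature vectors from surveillance video could be clustered without labels, and frames that sit far from every learned cluster flagged as behavioral anomalies. The feature shapes, cluster count, and threshold below are invented for the sketch.

```python
# Hypothetical sketch: unsupervised clustering of video-frame feature vectors.
# An illustration of the general idea, not Cortica's implementation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend each row is a feature vector extracted from one video frame
# (motion, pose, or appearance features); no labels are provided.
features = rng.normal(size=(1000, 16))

# Learn clusters of "normal" behavior directly from the unlabeled data.
model = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)

# A frame far from every learned cluster center is a candidate anomaly.
distances = model.transform(features).min(axis=1)
threshold = np.percentile(distances, 99)
anomalous_frames = np.where(distances > threshold)[0]
print(f"{len(anomalous_frames)} frames flagged for human review")
```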

Going directly to the brain


To create such a program, Cortica did not go the neural network route (which despite its name is based on probabilities and computing models rather than how actual brains work). Instead, Cortica went to the source, in this case a cortical segment of a rat’s brain. By keeping a piece of brain alive ex vivo (outside the body) and connecting it to a microelectrode array, Cortica was able to study how the cortex reacted to particular stimuli. By monitoring the electrical signals, the researchers were able to identify specific groups of neurons called cliques that processed specific concepts. From there, the company built signature files and mathematical models to simulate the original processes in the brain.
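Based only on that description, the pipeline amounts to finding groups of neurons that respond together to a given stimulus and storing each co-activation pattern as a compact, named signature. The sketch below is a speculative simplification of that idea; the recordings, thresholds, and file format are invented for illustration and are not Cortica’s actual method.

```python
# Speculative sketch of "clique" extraction and signature files, based only on
# the description above; data shapes, names, and thresholds are invented.
import json
import numpy as np

rng = np.random.default_rng(1)

# Simulated recordings: for each stimulus, a (trials x neurons) matrix of
# firing rates measured from the microelectrode array.
recordings = {
    "stimulus_A": rng.poisson(lam=2.0, size=(50, 128)),
    "stimulus_B": rng.poisson(lam=2.0, size=(50, 128)),
}

signatures = {}
for stimulus, rates in recordings.items():
    mean_rate = rates.mean(axis=0)
    # "Clique": the set of neurons whose mean response to this stimulus is
    # well above the average response across all neurons.
    baseline = mean_rate.mean()
    clique = np.where(mean_rate > baseline + 2 * mean_rate.std())[0]
    signatures[stimulus] = {
        "neurons": clique.tolist(),
        "mean_response": mean_rate[clique].round(3).tolist(),
    }

# Persist the signatures so each learned concept stays individually inspectable.
with open("signatures.json", "w") as f:
    json.dump(signatures, f, indent=2)
```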

The result, according to Cortica, is an approach to AI that allows for advanced learning while remaining transparent. In other words, if the system makes a mistake — say, it falsely anticipates that a riot is about to break out or that a car ahead is about to pull out of a driveway — programmers can easily trace the problem back to the process or signature file responsible for the erroneous judgment. (Contrast this with so-called deep learning neural networks, which are essentially black boxes and may have to be completely re-trained if they make a mistake.)
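The traceability claim is easier to picture with a toy matcher that returns not just a verdict but the name and score of the signature that produced it, so a bad call can be followed back to one specific file. The signature names, feature vectors, and similarity measure here are assumptions made for the example, not details from Cortica.

```python
# Hypothetical sketch of signature-based matching with a traceable verdict.
# Signature names and the similarity measure are assumptions for illustration.
import numpy as np


def match_signatures(observation, signatures):
    """Return the best-matching signature name, its score, and the full trace."""
    scores = {}
    for name, signature in signatures.items():
        # Cosine similarity between the observed feature vector and each signature.
        scores[name] = float(
            observation @ signature
            / (np.linalg.norm(observation) * np.linalg.norm(signature))
        )
    best = max(scores, key=scores.get)
    return best, scores[best], scores  # full score table kept for auditing


signatures = {
    "riot_forming": np.array([0.9, 0.1, 0.8]),
    "normal_crowd": np.array([0.2, 0.9, 0.3]),
}
verdict, score, trace = match_signatures(np.array([0.3, 0.8, 0.4]), signatures)
print(verdict, round(score, 2), trace)  # a wrong call points at one named signature
```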

Initially, Cortica’s Autonomous AI will be used by Best Group in India to analyze the massive amounts of data generated by cameras in public places to improve safety and efficiency. Best Group is a diversified company involved in infrastructure development and a major supplier to government and construction clients. So it wants to learn how to tell when things are running smoothly — and when they’re not.

A display showing a facial recognition system for law enforcement during the NVIDIA GPU Technology Conference, which showcases AI, deep learning, virtual reality and autonomous machines. Saul Loeb/AFP/Getty Images

But it is hoped that Cortica’s software will do considerably more in the future. It could be used in future robotaxis to monitor passenger behavior and prevent sexual assaults, for example. Cortica’s software can also combine data not just from video cameras, but also from drones and satellites. And it can learn to judge behavioral differences, not just between law-abiding citizens and would-be criminals, but also between a peaceful crowded market and a political demonstration that’s about to turn violent.

Such predictive information would allow a city to deploy law enforcement to a potentially dangerous situation before lives are lost. However, in the wrong hands, it could also be abused. A despotic regime, for example, might use such information to suppress dissent and arrest people before they even had a chance to organize a protest.


In New York City, during a demonstration of how Cortica’s Autonomous AI is being applied to autonomous cars, Cortica’s vice president, Patrick Flynn, explained that the company is focused on making the software efficient and reliable to deliver the most accurate classification data possible. What clients do with that information — stop a car or make it speed up to avoid an accident, for example — is up to them. The same would apply to how a city or government might allocate police resources.

“The policy decisions are strictly outside of Cortica’s area,” Flynn said.

Would we give up privacy for improved security?

Nevertheless, the marriage of AI to ubiquitous networks of webcams is starting to generate more anxiety about privacy and personal liberty. And it’s not just foreign despotic governments that people are worried about.

In New Orleans, Mayor Mitch Landrieu has proposed a $40 million crime-fighting surveillance plan, which includes networking together municipal cameras with the live feeds from private webcams operated by businesses and individuals. The proposal has already drawn public protests from immigrant workers concerned that federal immigration officials will use the cameras to hunt down undocumented workers and deport them.


Meanwhile, like subjects trapped in a Black Mirror world, consumers may already be unwittingly submitting themselves to such AI-powered surveillance. Google’s $249 Clips camera, for example, uses a rudimentary form of AI to automatically take pictures when it sees something it deems significant. Amazon, whose Alexa is already the subject of eavesdropping paranoia, has purchased popular video doorbell company Ring. GE Appliances is also planning to debut a video-camera-equipped hub for kitchens later this year. In Europe, Electrolux will debut a steam oven this year with a built-in webcam.

While these technologies raise the specter of Big Brother monitoring our every move, there’s still the laudable hope that using sophisticated AI like Cortica’s program could improve safety and efficiency, and save lives. One can’t help wondering, for example, what would have happened if such technology had been available and used in the Uber that 19-year-old Nikolas Cruz took on his way to murder 17 people at Marjory Stoneman Douglas High School. The Uber driver didn’t notice anything amiss with Cruz, but could an AI-equipped camera have detected micro-expressions revealing his intentions and alerted the police? In the future, we may find out.

John R. Quain