
Like Google for CCTV, software could help cops scour surveillance video quickly

Surveillance footage can be a great security tool, but it works best when you know the exact moment you’re looking for. Say you know there was a break-in at your offices between 1 a.m. and 1:15 a.m. on Tuesday morning: provided you’ve got cameras in the right place, closed-circuit television (CCTV) footage can deliver exactly the evidence you’re after. But surveillance footage isn’t always quite so useful. If you’re monitoring a large number of cameras and searching for something more open-ended, such as a sighting of a missing person, you can be left scouring, eagle-eyed, through hundreds or even thousands of hours of video.

Things could be about to change, however, thanks to researchers from India’s Ahmedabad University and Lalbhai Dalpatbhai College of Engineering. They have developed what they hope could become the Google of surveillance video systems. It would allow people to enter a text-based description of a person of interest, and then have artificial intelligence (A.I.) scour the footage for a sign of them.

“[Our] technology asks only the description [of a person] — for example, 180cm tall man with a white T-shirt and blue jeans — to search,” Hiren Galiyawala, one of the researchers on the project, told Digital Trends.

The technology is not yet perfect, and given some of the other technical limitations of surveillance footage, it may not be any time soon. For instance, Galiyawala notes that surveillance footage is usually of such low resolution that making out faces is difficult. (And don’t for a second imagine that the CSI-style tech that allows police to enhance blurry images actually exists!) That means you’re limited to searching attributes like a person’s height, gender, and clothing. Unless someone is wearing particularly outlandish attire, those attributes alone are unlikely to pinpoint one specific person in a large collection of surveillance footage. However, Galiyawala said this technique “can be used to reduce the search space in hours of surveillance footage.”
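The idea of filtering detections by coarse attributes to shrink the search space can be sketched roughly as follows. This is an illustrative toy, not the researchers’ actual system: the `Detection` schema, the attribute names, and the height tolerance are all assumptions, and a real pipeline would first run a person detector and attribute classifiers over each frame to produce such records.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One person detected in one frame (hypothetical schema)."""
    camera_id: str
    timestamp: float       # seconds into the footage
    est_height_cm: float   # height estimate, e.g. from camera calibration
    torso_color: str       # dominant upper-body clothing color
    legs_color: str        # dominant lower-body clothing color

def matches_query(d: Detection, height_cm: float, torso: str, legs: str,
                  height_tol_cm: float = 10.0) -> bool:
    """Soft attribute match: height within a tolerance, colors exact."""
    return (abs(d.est_height_cm - height_cm) <= height_tol_cm
            and d.torso_color == torso
            and d.legs_color == legs)

# Query: "180 cm tall person with a white T-shirt and blue jeans"
detections = [
    Detection("cam1", 12.0, 178.0, "white", "blue"),
    Detection("cam1", 15.5, 165.0, "red", "black"),
    Detection("cam2", 40.2, 183.0, "white", "blue"),
]
hits = [d for d in detections if matches_query(d, 180.0, "white", "blue")]
```

Even with loose matching like this, an investigator is handed a short list of candidate frames to review instead of the full footage, which is exactly the "reduce the search space" benefit Galiyawala describes.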

In tests designed to prove its efficacy, the technology accurately found 28 out of 41 people. The researchers now plan to develop it further by adding more search signals, such as the ability to search for particular body builds and more detailed information about clothing styles.

“Research is ongoing in this project,” Galiyawala said. “Future work will be focused on improving the accuracy of the system.” A paper describing the work is available to read online, and the work will be presented at next month’s International Conference on Advanced Video and Signal Based Surveillance in New Zealand.

Luke Dormehl