
MIT’s new AI² platform can detect 85 percent of cyberattacks

AI²: an AI-driven predictive cybersecurity platform

When it comes to cybersecurity, it would appear that the best offense is a good defense, and by and large we have neither. Following a damning report of the U.S. government’s capabilities when it comes to online security and the emergence of yet another dangerous piece of malware that has already stolen some $4 million from dozens of banks and financial institutions, our digital defenses look to be down and out. But a new solution from the Massachusetts Institute of Technology may be our saving grace.

In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveil an artificial-intelligence platform called AI² that they say predicts cyberattacks “significantly better than existing systems by continuously incorporating input from human experts.”

Combining the capabilities of humans and machines, AI² detected an impressive 85 percent of attacks, roughly a threefold improvement over previous benchmarks. To flag potential attacks, the system first applies unsupervised machine learning, sifting through activity data and grouping it into patterns to surface anomalies. Those patterns are then presented to human analysts, who supply the more nuanced judgment of whether the anomalies are actually attacks. The AI incorporates that feedback into its next cycle of analysis, constantly refining its models and its predictive abilities.
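The paper itself isn't reproduced here, but the loop described above — unsupervised flagging, analyst labeling, supervised retraining — can be sketched in a few lines of Python with scikit-learn. The synthetic data, model choices, and query sizes below are illustrative assumptions, not AI²'s actual components:

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic log features: mostly benign events plus a small cluster of attacks.
benign = rng.normal(0.0, 1.0, size=(950, 4))
attacks = rng.normal(4.0, 1.0, size=(50, 4))
X = np.vstack([benign, attacks])
truth = np.array([0] * 950 + [1] * 50)  # stands in for the analyst's verdicts

labeled, labels = [], []
clf = None
for cycle in range(3):  # three human-feedback cycles
    # Step 1: an unsupervised model scores every event (higher = more anomalous).
    scores = -IsolationForest(random_state=cycle).fit(X).score_samples(X)
    seen = set(labeled)
    unlabeled = [i for i in np.argsort(scores)[::-1] if i not in seen]
    # Step 2: the "analyst" labels the top-ranked anomalies plus a random sample.
    queried = unlabeled[:30] + list(rng.choice(unlabeled[30:], 30, replace=False))
    for i in queried:
        labeled.append(i)
        labels.append(truth[i])
    # Step 3: retrain a supervised detector on all analyst feedback so far.
    clf = RandomForestClassifier(random_state=0).fit(X[labeled], labels)

recall = (clf.predict(X)[truth == 1] == 1).mean()
print(f"attack recall after 3 feedback cycles: {recall:.2f}")
```

The key design point mirrors the article: the analyst never reviews all 1,000 events, only the handful the unsupervised model ranks as most suspicious, and each round of labels makes the next supervised model sharper.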

“You can think about the system as a virtual analyst,” says CSAIL research scientist Kalyan Veeramachaneni, who developed AI² with former CSAIL postdoc Ignacio Arnaldo. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”

This new system could be a game changer in the security industry, where human capital is precious. Cybersecurity experts don’t have the time to review every piece of data that looks suspicious, so the help of a highly trained AI could significantly lighten their workload. CSAIL’s work “brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” says Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame. “This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”

With AI²’s ability to examine enormous volumes of data on a daily basis, Veeramachaneni says the system has the potential to reshape the industry’s entire security landscape.

“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni says. “That human-machine interaction creates a beautiful, cascading effect.”

Lulu Chang
Former Digital Trends Contributor