When it comes to cybersecurity, the best offense is a good defense, and by and large we have neither. Following a damning report on the U.S. government’s online security capabilities and the emergence of yet another dangerous piece of malware that has already stolen some $4 million from dozens of banks and financial institutions, our digital defenses look to be down and out. But a new solution from the Massachusetts Institute of Technology may be our saving grace.
In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveil an artificial-intelligence platform called AI2 (pronounced “AI squared”) that they say predicts cyberattacks “significantly better than existing systems by continuously incorporating input from human experts.”
Combining the capabilities of humans and machines, AI2 was shown to detect an impressive 85 percent of attacks, a roughly threefold improvement over previous benchmarks. To predict attacks, the system uses unsupervised machine learning to comb through data and flag suspicious activity, grouping anomalous events into patterns. These patterns are then presented to human analysts, who offer a more nuanced judgment on whether the anomalies are actual attacks. The AI then folds its human partner’s feedback into its next cycle of analysis, continually improving its predictive abilities.
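The cycle described above can be sketched in code. The following is a minimal, illustrative toy, not the actual AI2 system: an unsupervised detector scores events by how anomalous they look, a simulated “analyst” labels only the most suspicious ones, and a simple supervised rule is fit to those labels for the next round. All names, the one-feature data, and the z-score/threshold choices are assumptions made for the sketch.

```python
import random
import statistics

random.seed(0)

def make_events(n=200, attack_rate=0.05):
    """Toy log events with one numeric feature (say, failed logins per hour).
    Attacks tend to produce higher values than benign activity."""
    events = []
    for _ in range(n):
        is_attack = random.random() < attack_rate
        value = random.gauss(20, 3) if is_attack else random.gauss(5, 2)
        events.append({"value": value, "is_attack": is_attack})
    return events

def unsupervised_scores(events):
    """Unsupervised step: rank events by z-score, i.e. how far each one
    sits from the overall mean, with no labels involved."""
    values = [e["value"] for e in events]
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    return [abs(e["value"] - mu) / sigma for e in events]

def analyst_labels(events, scores, budget=25):
    """Show the analyst only the top-`budget` most anomalous events;
    here the 'analyst' is simulated by the ground-truth flag."""
    ranked = sorted(range(len(events)), key=lambda i: scores[i], reverse=True)
    return {i: events[i]["is_attack"] for i in ranked[:budget]}

def fit_threshold(events, labels):
    """Supervised step: place a cutoff between the labeled attacks and
    the labeled benign events, to be used on the next batch of data."""
    attacks = [events[i]["value"] for i, y in labels.items() if y]
    benign = [events[i]["value"] for i, y in labels.items() if not y]
    if not attacks or not benign:
        return None
    return (min(attacks) + max(benign)) / 2

# One iteration of the loop: flag, label, refine.
events = make_events()
scores = unsupervised_scores(events)
labels = analyst_labels(events, scores)
threshold = fit_threshold(events, labels)
if threshold is not None:
    detected = sum(1 for e in events if e["value"] > threshold and e["is_attack"])
    total = sum(1 for e in events if e["is_attack"])
    print(f"detected {detected}/{total} attacks with learned threshold {threshold:.1f}")
```

In a real deployment each iteration would run on a new day of data, so the analyst’s limited attention is spent only on the handful of events the machine cannot confidently classify — which is the labor-saving point the researchers emphasize.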
“You can think about the system as a virtual analyst,” says CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with former CSAIL postdoc Ignacio Arnaldo. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”
This new system could be a game changer in the security industry, where human capital is precious. Cybersecurity experts don’t have time to review every piece of seemingly suspicious data, so a highly trained AI could significantly lighten their workload. CSAIL’s work “brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” says Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame. “This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”
With AI2’s ability to examine enormous volumes of data on a daily basis, Veeramachaneni says the system has the potential to transform the industry’s entire security landscape.
“The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions,” Veeramachaneni says. “That human-machine interaction creates a beautiful, cascading effect.”