Cybersecurity pros shouldn’t rely on artificial intelligence and machine learning just yet, according to a new report.
The report, from security firm Carbon Black, surveyed 410 cybersecurity researchers; 74 percent said that AI-driven security solutions are flawed, citing “high false-positive rates,” while 70 percent claimed attackers can bypass machine-learning techniques.
The respondents did not write off AI or machine learning as unhelpful; rather, they said the technologies just aren’t mature enough to be solely relied on for big security decisions. AI and machine learning should be used “primarily to assist and augment human decision making,” said the report.
Eighty-seven percent of those surveyed said it will be more than three years before they really feel comfortable trusting AI to carry out any significant cybersecurity decisions.
AI and machine learning have become more prominent in cybersecurity research and commercial products as a way to keep up with an ever-evolving threat landscape.
Among these new threats are non-malware, or fileless, attacks. As the names suggest, these attacks do not use any malicious file or program. Rather, they abuse legitimate software already present on a system, making them largely invisible to traditional antivirus programs that rely on detecting suspicious-looking files before acting.
Sixty-four percent of Carbon Black’s respondents said that they had seen an increase in such tactics since early 2016.
“Non-malware attacks will become so widespread and target even the smallest business that users will become familiar with them,” one respondent said. “Most users seem to be familiar with the idea that their computer or network may have accidentally become infected with a virus, but rarely consider a person who is actually attacking them in a more proactive and targeted manner.”
Non-malware attacks will be the scourge of organizations over the next year, said the report, and countering them will continue to require human judgment.
Perhaps AI is overpromising what it can do for security. The professionals surveyed see a future where cybersecurity will be a battle of “machine versus machine,” but for now, it very much remains “human versus human.”