How we view a weapon is always determined by who wields it, and where they’re pointing it. A gun can represent a tool of order, chaos, war, or peace, depending on the context. Cyber-weapons are no different. While westerners worry about Chinese and Russian hackers, we give less thought to the cyber-weapons our own military has used, with success, for years.
This shadowy game of international cyber-intrigue lurked out of sight for years until, finally, it was exposed by a major attack that’s become known as Stuxnet. The deployment of Stuxnet, and the investigations that tracked it, gave us a window into how global super-powers use cyber-weaponry to great effect.
Yet it’s not clear that Stuxnet’s lessons have taken hold. Though five years have passed since its discovery, most know little of its legacy, and even politicians campaigning on the promise of improved cyber-security seem ignorant of its impact. Here’s a primer on Stuxnet — and how its discovery pulled the veil from a previously unknown realm of high-stakes hacking.
Lifting the veil
The world at large first learned about Stuxnet in 2010 through a blog post by security expert Brian Krebs, reporting on the research of a Belarus-based security firm called VirusBlokAda. In the post, Krebs explained the attack could infect even a fully patched Windows 7 PC. That itself was news, but stranger still was its apparent target. The Windows OS was not the goal; instead, the malware was searching for supervisory control and data acquisition (SCADA) systems built by Siemens — systems often used to control public infrastructure.
Stuxnet ceased operation years ago, but its farthest reaching effects could well come from the concepts and ideas it introduced to malicious entities.
Elinor Mills, an internet security beat reporter working for CNET, was instantly grabbed by the headline. “Stuxnet was a moving target,” said Mills, who is now VP of content and media strategy with the Bateman Group. “Small bits of information would come out every couple of days or weeks, and everyone was trying to put the puzzle together.”
“The motivation wasn’t clear initially,” Mills wrote in email correspondence with Digital Trends. “When it became clear that many of the targets were in Iran and the victim systems were identified, the speculation began that it was the U.S. and Israel behind it.”
Slowly, evidence emerged that Stuxnet was a computer worm developed by the United States and Israel. Its target was Iranian nuclear facilities, and specifically, the programmable logic controllers tasked with operating centrifuges that carried out uranium enrichment.
“What made Stuxnet a mainstream story was the political significance,” said Mills. “The fact that it was the U.S. and Israel against Iran, playing into the geopolitical tensions of the time, that was what put it on the cover of every newspaper.”
It’s impossible to discuss the legacy of Stuxnet without looking at both its political ramifications and its technical consequences — the former being inextricably linked to the latter. To properly engage with the subject, it’s important to understand how malware is developed.
How weapons are made
After thirty years working with companies like Symantec and Microsoft, and developing anti-virus software for various computer systems, Peter Ferrie has witnessed some dramatic changes to how malware is created. When he was first starting out, the work was done largely for the sake of curiosity.
“I have great nostalgia for the old days of amateur developers and the proof-of-concept releases, because they were interesting to analyze, and everything was new,” Ferrie told me via email. He’s keen to stress that despite those fond memories, he’s steadfast in his belief that the field as a whole is inherently problematic.
“I don’t want to appear to be encouraging development of new malware with new techniques,” he continued. “I’d prefer that all of the malware went away, or even never existed in the first place. An alternative timeline might have me being a painter, or something.”
Because of his tenure working with anti-virus software, Ferrie could comment on general trends in malware development. According to him, progress tends to come in “fits and starts.”
“Once a technique is developed, we tend to see many examples of it being used, before another technique appears,” he explained. “Then we see many examples of the new one being used exclusively. When the new technique complements the existing one, then we see many examples that carry both of them.”
Stuxnet can serve as a blueprint for similar attacks, and its ideas will be adopted by future cyber criminals.
“In some cases, techniques fall out of favor for various reasons, and sometimes they reappear,” he continued. “The detection of virtual machines is a good example of one that lost favor and then returned. As corporations moved more assets into virtual environments, the detection of virtual machines almost disappeared because corporations (i.e. rich information targets) would have been excluded as a result. Now we see malware that behaves differently depending on whether the environment is virtualized or not.”
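Ferrie’s example of virtual-machine detection can be made concrete. The sketch below is a minimal, hypothetical illustration of one common heuristic on Linux: reading the DMI/SMBIOS vendor strings the firmware exposes under `/sys` and matching them against known hypervisor names. The vendor list and file paths are illustrative assumptions, not a survey of real malware.

```python
import pathlib

# Hypervisor vendor strings commonly reported via DMI/SMBIOS on Linux.
# This list is illustrative, not exhaustive.
VM_VENDORS = ("vmware", "virtualbox", "qemu", "kvm", "xen", "innotek")

def looks_virtualized() -> bool:
    """Heuristic VM check: inspect DMI vendor/product strings.

    Returns False when the files are missing (e.g. on non-Linux hosts),
    treating absence of evidence as 'probably bare metal'.
    """
    for name in ("sys_vendor", "product_name"):
        path = pathlib.Path("/sys/class/dmi/id") / name
        try:
            value = path.read_text().strip().lower()
        except OSError:
            continue
        if any(vendor in value for vendor in VM_VENDORS):
            return True
    return False

# Malware of the kind Ferrie describes would branch on a check like this,
# for instance staying dormant inside an analyst's sandbox.
print(looks_virtualized())
```

The point of the example is the branch, not the detection itself: as Ferrie notes, attackers reversed this logic once corporate workloads moved into virtual environments, because refusing to run in a VM meant skipping the richest targets.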
While it’s often possible for pieces of code to be re-used, both malware developers and security professionals are largely more concerned with techniques. Though Stuxnet ceased operation several years ago, its farthest reaching effects could well come from the concepts and ideas it introduced to malicious entities, thanks to its notoriety.
Sons of Stuxnet
When an organization detects signs of a malware attack, security experts are called in to pick apart the fallout and determine what has taken place. This is what happened in the wake of Stuxnet, and it happened again in 2011, when a related piece of malware was unearthed.
“We were asked to help in an incident response activity at a company, which had already discovered itself that they had been compromised,” said Levente Buttyán, an associate professor at the Budapest University of Technology and Economics, and leader of the institution’s Laboratory of Cryptography and Systems Security, otherwise known as CrySyS.
“Mainly by static analysis—looking into the binary—we discovered similarities to Stuxnet, which we knew and even had lectures on at our university,” Buttyán noted via email. “The similarities manifested themselves at different levels; modules used, loading sequence, configuration files and the way they are stored and used, code injection targets, hooked system functions, encryption routines used, constants used, kernel drivers digitally signed with compromised keys, and so on.”
The malware Buttyán and his team were working with is now referred to as Duqu, and is one of several threats thought to be closely linked with Stuxnet. A report published by Symantec, which included the CrySyS report as an appendix, stated that Duqu was written either by the creators of Stuxnet or by a team with access to its source code.
“The U.S., Russia, China, Israel, the U.K., and other countries already have backdoors in many systems.”
That said, given how difficult it is to obtain that source code, this kind of re-purposing isn’t the greatest threat Stuxnet poses. Buttyán suggested that it would likely be “easier to implement something from scratch” than to attempt to grapple with a binary.
He and his team have other concerns. “One can of course use the tricks and ideas from Stuxnet in his or her own implementation. We see this as a bigger problem, in fact,” he told me. “Stuxnet can serve as a blueprint for similar attacks, and ideas will be adopted by cyber criminals, and lower-profile attackers.”
It’s a relatively simple task for a skilled security expert to plug a particular hole in an organization’s defenses, or for a software company to remedy individual exploits present in its product.
Conversely, it’s much more difficult to squash a particularly innovative idea. As more techniques are added to the malware developer’s playbook, there are more potential threats for experts on the other side of the issue to take into consideration. That makes cyber-security more difficult with each passing day.
Standing in Stuxnet’s Shadow
We might see concepts introduced by Stuxnet in malware produced years from now, but the project set precedents politically as well as technologically. It was the first known example of the United States deploying a cyber-weapon intended to cause physical damage, something that’s more common now on a global scale than it was in 2010.
Of course, it’s easy to produce the counter-argument that the march of technology would have brought about this sort of malware sooner rather than later. The greater issue for the individual is how their government is making use of these digital armaments.
This week, technology and government found themselves in a head-on collision as the FBI requested that Apple open a backdoor into an iPhone belonging to a suspect in the San Bernardino terrorist attack. Digital Trends spoke about the relationship between these two stories with Kim Zetter, a senior staff writer with Wired and the author of Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon.
“The FBI is saying it needs to come to Apple for assistance, because iPhones will only accept firmware updates that are signed by Apple,” she said when we spoke via telephone earlier this month. “While it’s comforting to think that only Apple can install malicious firmware on its iPhones, I don’t think we should believe that there aren’t people trying to obtain Apple’s keys—and possibly already being successful at it.”
In her detailed and exhaustive book, Zetter explains how stolen Realtek signing certificates were used to facilitate attacks using Stuxnet. In theory, no one should’ve had access to those keys — yet they leaked. She also told us of a case where Adobe fell foul of similar tactics used by criminal hackers, which she covered for Wired. In that case, attackers first stole Adobe’s certificate credentials, then used them to sign malware. With the credentials in hand, the malware easily posed as authentic software.
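Why a stolen signing key is so damaging can be shown in a few lines. The sketch below is a deliberately simplified stand-in: real code signing uses asymmetric keys (RSA or ECDSA) wrapped in X.509 certificates, not the symmetric HMAC used here, and the key name is hypothetical. The trust model it demonstrates is the same, though: verification only proves the signer held the key, so an attacker holding a stolen key mints signatures indistinguishable from the vendor’s own.

```python
import hashlib
import hmac

# Hypothetical vendor signing key. In reality this would be an
# asymmetric private key guarded in a hardware security module.
SIGNING_KEY = b"vendor-private-key"

def sign(binary: bytes, key: bytes) -> str:
    """Produce a signature over a binary (HMAC-SHA256 as a stand-in)."""
    return hmac.new(key, binary, hashlib.sha256).hexdigest()

def verify(binary: bytes, signature: str, key: bytes) -> bool:
    """Check a signature the way an OS loader checks a signed driver."""
    return hmac.compare_digest(sign(binary, key), signature)

driver = b"legitimate kernel driver"
genuine_sig = sign(driver, SIGNING_KEY)
assert verify(driver, genuine_sig, SIGNING_KEY)      # genuine release passes

malware = b"malicious payload"
stolen_sig = sign(malware, SIGNING_KEY)              # attacker with the stolen key
assert verify(malware, stolen_sig, SIGNING_KEY)      # passes just as cleanly
```

This is exactly the position the Realtek and Adobe incidents left verifiers in: the check cannot distinguish a vendor’s driver from malware signed with the same stolen credentials, which is why revoking compromised certificates is the only remedy.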
While the FBI has asked Apple directly to comply, there may be other ways and means of gaining access to the necessary information. That includes physically infiltrating Apple. Last year, the Department of Defense published a document outlining its cyber strategy, building on the ‘five pillars’ previously in place to summarize military strategy pertaining to cyber-warfare.
The fourth of five strategic goals is summarized as follows:
Build and maintain viable cyber options and plan to use those options to control conflict escalation and to shape the conflict environment at all stages.
This carefully worded sentence is the latest evolution of a project that’s been active since the early 1990s. “U.S. Military offensive operations began as defensive programs,” Zetter told me. “In learning how to defend U.S. Military networks, they were also studying techniques for launching offensive attacks.”
Apple is aware of this possibility, which is why it announced on February 24, coinciding with Tim Cook’s interview on ABC, that it intends to build an iPhone even it cannot crack through any means. This specifically refers to the “firmware updates signed by Apple,” previously mentioned. Apple wants to build a system where even it cannot produce a new firmware signature on command, making it impossible for an attacker to circumvent iPhone encryption even if she had physical access to the company’s facilities.
That’s as complex a task as it sounds. But in the post-Stuxnet world, it’s the only way to guarantee security. Even a key held by a company’s most trusted employees, with access to its most secure database, cannot be assumed safe.
State-of-the-Art Cyberweapons and You
We know that Stuxnet was developed by the United States and Israel. We don’t know for sure whether any other nations were involved. We know even less about what projects are being developed under cover of darkness in countries around the world.
“I have no illusions that the U.S., Russia, China, Israel, the U.K., and other countries already have backdoors in many systems,” said Zetter. “They’re not necessarily ready to launch an attack, or even have intentions of launching an attack, but they definitely all want to be prepared, in the same way we have satellites over our heads watching troop movements and weapon movements. You don’t want to be surprised by anything.”
There may be other ways and means of gaining access to key information. That includes physically infiltrating the target.
With these practices seemingly so widespread, it’s strange to think that Stuxnet has fallen out of the spotlight, while a topic like Hillary Clinton’s inbox — technically simple, and limited in scope, by comparison — has drawn headlines for over a year.
“The media in general chooses topics that are easy to cover,” Zetter told me. “Hillary’s email server is easy to cover. It’s easy for readers to comprehend, it’s easy for non-tech writers to write about. Stuxnet is a complicated, technical issue. And, it’s not just the technical issue that’s complicated; it has all these legal issues and ethical issues that are wrapped around it.”
The ethics of using a weapon like Stuxnet are indeed complex. From one angle, it’s a destructive tool that already seems to be inspiring advances in malware development. From another, it might be seen as a last resort in a situation with globally significant ramifications.
“There are definite advantages to digital weapons like this. If you can avoid all-out war and you can buy time—and that’s what Stuxnet was designed to do, essentially. It wasn’t designed to wipe out Iran’s nuclear program, or it would have been much more destructive. It was designed to buy diplomacy more time, and it was successful in that sense.”
Whatever your position on the Iran nuclear deal, it’s impossible to ignore the debt those talks owed to the effective use of Stuxnet. If the weapon hadn’t been deployed to run interference, things could have played out very differently. Still, there are questions to be asked about whether the ends justified the means, and whether we can expect the same level of care to be upheld going forward.
“I think there are uses for this kind of activity,” said Zetter. “I’m not naive. We don’t question other types of covert activity, and this is just another method of covert activity. But it’s different in the sense that, with digital systems, you don’t know what the repercussions and the wider effects will be.”
“Stuxnet was very precise. I have no doubt that we’ll see attacks in the future that aren’t precise, and that do cause collateral damage. Then, perhaps—depending on what that collateral damage is—we, as a society, will have open discussions about this.”
Obviously, those discussions aren’t taking place right now. The use of cyberweapons isn’t something we see discussed on television news, and it’s not something that’s brought up as a talking point during presidential debates. There’s a distinct lack of public interest, or at least a lack of public awareness.
“The average American doesn’t know what Stuxnet is,” Zetter said. “You have small swaths of people on the East and the West coast who keep up on these things, and care about these things, but the vast majority of Middle America is more concerned about making their car payments, and keeping their home heated, and whether or not their sports team won.”
Stuxnet should have a larger place in the public consciousness—not to service fearmongering about technological boogeymen, but to act as a necessary reminder of the real state of cyber-weaponry. Government sanctioned malware might sound like airport novel fare, but it’s far from theoretical—and that’s been true for several years, at least.
A cyber-weaponry arms race is underway, and the nature of international espionage means we’re not privy to the rate at which it’s accelerating. Stuxnet’s greatest legacy should perhaps be to illustrate one of the roles that technology plays in a modern military, and how powerful these weapons already are. At the same time, it’s crucial to acknowledge that in the opinions of many—and based on several different metrics—the project was a resounding success.
“There is a role for covert operations,” replied Zetter when I asked for her opinion on whether the use of Stuxnet was ethically sound. “But I think that, as we discovered with the Snowden revelations, the oversight in Washington D.C. is inadequate. There’s no way that we can trust places like the NSA or the U.S. Cyber Command to make proper decisions on their own.”