About Me

Bay Area, CA, United States
I'm a computer security professional, most interested in cybercrime and computer forensics. I'm also on Twitter @bond_alexander. All opinions are my own unless explicitly stated.

Monday, July 11, 2011

Sabotage, Stuxnet and the future of cyber attacks

Last year, before LulzSec and Sony's epic fail, the big topic in computer security was the uniquely sophisticated and targeted malware known as Stuxnet. I blogged about it back in September. Now, Kim Zetter of Wired Magazine's Threat Level blog just posted a great overview of the effort to reverse engineer Stuxnet. If you haven't read it yet, you should now. Not only does it present a lot of great info on Stuxnet, it also gives some good insight into malware reverse engineering in general. The rest of this post will presume you've read it.

Most of the article is excellent, well researched and well written. However, I take serious issue with one of Ralph Langner's quotes towards the end of the article. Here's the excerpt:
They will likely have no second chance to unleash their weapon now. Langner has called Stuxnet a one-shot weapon. Once it was discovered, the attackers would never be able to use it or a similar ploy again without Iran growing immediately suspicious of malfunctioning equipment.

“The attackers had to bet on the assumption that the victim had no clue about cybersecurity, and that no independent third party would successfully analyze the weapon and make results public early, thereby giving the victim a chance to defuse the weapon in time,” Langner said.

In an ideal world, Langner would be completely correct, but in practical terms he's wrong. I have great respect for Langner, his expertise and his work, but it seems that almost daily I'm reading about people falling for the same attacks over and over again. As just one example, Stuxnet spread from network to network through infected USB drives. This isn't a new attack: back in 2008 the Department of Defense was hit by a major attack that spread through USB drives, and that attack succeeded even though it reused a virus from 2007. One would hope that the US government takes information security seriously, but just this year DHS tested how many employees would pick up an infected USB drive and plug it into a secure system. The result: 60%. If there was a company or government logo on the drive, the number went up to 90%. Old and well-known attacks work, even on high-value targets that really ought to know better. Similarly, although 0-day exploits are highly prized for malware and hacking attempts, the majority of malware in the wild succeeds using exploits for which patches are already available.
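
To make that older USB vector concrete, here's roughly what it looked like. The snippet below is purely illustrative -- the file path is made up, and it isn't taken from Stuxnet or the 2008 attack. It's the classic autorun.inf trick, where a removable drive tells an unpatched Windows machine with AutoRun enabled to launch an attacker-supplied program as soon as the drive is mounted:

    [autorun]
    ; Picked up automatically (or offered as the default AutoPlay choice)
    ; on older Windows configurations with AutoRun enabled for USB drives.
    open=docs\update.exe
    icon=docs\update.exe,0
    action=Open folder to view files

Microsoft has since turned AutoRun off for USB drives by default, and Stuxnet itself abused a Windows shortcut (.LNK) parsing flaw rather than AutoRun, but the larger point stands: vectors defenders have known about for years keep working.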

However, let's give Iran's Atomic Energy Organization the benefit of the doubt. Let's presume that since Stuxnet they've kept up with every critical security patch for every piece of software they run -- an impressive feat! Even that can't keep them safe; new exploits are discovered daily. To get a sense of the scale of the problem, take a look at the Exploit Database, and remember that those are only the exploits disclosed by responsible security researchers, not the ones criminals keep to themselves. To further complicate matters, Stuxnet has a compartmentalized structure. From the article: "[Stuxnet] contained multiple components, all compartmentalized into different locations to make it easy to swap out functions and modify the malware as needed." It seems apparent that the authors of Stuxnet could simply swap in new 0-day attacks and continue as before. In fact, earlier this year a security researcher discovered a serious bug in Siemens' industrial control software and wrote proof-of-concept malware to exploit it. He claims that Siemens didn't act aggressively enough to patch the flaw.
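
To see what that compartmentalized design buys an attacker, here's a toy sketch in Python. It's entirely hypothetical -- not Stuxnet's code, and Stuxnet was native Windows code, not Python -- but it shows the architectural idea: propagation modules sit behind a small common interface, so retiring a burned vector and dropping in a new one doesn't require touching the rest of the framework.

    # Toy illustration of a compartmentalized, swappable-module design.
    # Hypothetical names throughout; nothing here is real exploit code --
    # the modules just print what they would do.

    from abc import ABC, abstractmethod


    class PropagationModule(ABC):
        """Interface every delivery component must implement."""

        name: str = "base"

        @abstractmethod
        def applies_to(self, target: dict) -> bool:
            """Return True if the target still looks susceptible."""

        @abstractmethod
        def deliver(self, target: dict, payload: bytes) -> bool:
            """Attempt delivery; return True on success."""


    class OldRemovableMediaVector(PropagationModule):
        """Stand-in for a vector that has since been patched and publicized."""
        name = "removable-media-2010"

        def applies_to(self, target: dict) -> bool:
            return not target.get("patched", False)

        def deliver(self, target: dict, payload: bytes) -> bool:
            print(f"[{self.name}] would deliver {len(payload)} bytes to {target['host']}")
            return True


    class SwappedInVector(PropagationModule):
        """The replacement module: new vector, same interface."""
        name = "hypothetical-new-vector"

        def applies_to(self, target: dict) -> bool:
            return True  # no patch exists for an undisclosed flaw

        def deliver(self, target: dict, payload: bytes) -> bool:
            print(f"[{self.name}] would deliver {len(payload)} bytes to {target['host']}")
            return True


    def propagate(modules, target: dict, payload: bytes) -> bool:
        """Framework code: try each registered module in turn.
        This part never changes when a module is swapped out."""
        return any(m.applies_to(target) and m.deliver(target, payload) for m in modules)


    if __name__ == "__main__":
        target = {"host": "engineering-workstation", "patched": True}
        payload = b"placeholder"  # stand-in bytes, not a real payload

        # Once the old vector is burned, only the module list changes:
        propagate([OldRemovableMediaVector(), SwappedInVector()], target, payload)

The defensive takeaway is the same one Zetter's article makes: identifying and patching one vector doesn't retire the framework behind it.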

Frankly, the recent hacking of Lockheed, Sony, Oak Ridge National Labs, Sony, InfraGard, Sony, RSA, Sony, HBGary, Sony, and assorted government contractors proves that any network can be penetrated. The only difference now is that people are more aware that attacks on the sophistication level of Stuxnet are possible. This gives incident responders a better chance to identify and react to malware and breaches. This is what Zetter referred to when she wrote that "the attackers would never be able to use it or a similar ploy again without Iran growing immediately suspicious of malfunctioning equipment." The difficulty is that equipment malfunctions anyway. Software has bugs and hardware fails, particularly when you're a country dealing with jury-rigged equipment smuggled in under trade embargoes. For any given failure, a cyber attack is the least likely cause; that's why Iran's centrifuge failure rate could increase dramatically for months before a cause was found.

To make matters worse, I find it highly unlikely that Iran has enough personnel with the skills needed for incident response and advanced malware reverse-engineering. Even the US government is having trouble recruiting and retaining people with those skills; it's hard to imagine that Iran has an easier time.

In my opinion, the only limiting factor on cyber attacks against physical infrastructure is the will and the resources to carry one out. It's only a matter of time before another powerful and skilled group decides to execute a similar attack.