I spoke with Jake Williams, an incident responder extraordinaire, who teaches SANS' FOR610: Reverse-Engineering Malware course. In the second part of the interview, Jake shared advice on acting upon the findings produced by the malware analyst. He also clarified the role of indicators of compromise (IOCs) in the incident response effort. (See Part 1 if you missed it.)
Last week we were talking about reporting in the context of malware reverse-engineering: how to create a useful report and whether people actually read such reports. Can you tell me how someone might act on information in a malware analysis report you provide?
Sure. That's a great question. We talked about how I like to create three different sections in the report: an executive summary, a section with high-level indicators of compromise and findings for incident response personnel, and a very detailed technical section with all other findings. I'll only address how to act on the first two, since there are too many variables for the highly technical portion.
In my experience, executives and management like to know about capabilities. "What did the malware have the capability to do?" Well, actually, they want to know what the malware did. In other words, what did it steal from their network? Unfortunately, as you are well aware, we often lack enough information when responding to an incident to say conclusively what the malware did. We are normally limited to saying things like "there was no indication of data exfiltration," with the caveat that we are basing this assertion on incomplete information. However, understanding the malware's capabilities can help us infer its intent (and how much management should care). Can I give you an example of where that might be relevant?
Please, go ahead.
I once responded to an incident where we found a piece of malware on a corporate file server. It was performing process injection and reading the contents of process memory using some non-standard techniques. At first glance, this looked really nasty. Upon further investigation, I discovered that the malware was looking to steal account information for an online multiplayer role-playing game popular in South Korea. We eventually figured out how it got on the server in the first place, but that is less relevant to our current discussion.
Imagine two possibilities for the report. The first is one where the malware analyst tries to "wow" management with all sorts of technical jargon, eventually concluding that the malware was probably not a threat since it is related to online gaming. Management has some pretty tense moments reading that report (if they finish it at all). If, on the other hand, you let everyone know up front, in the executive summary, that the malware was related to online gaming, your customers can appropriately act on the information. Let me be clear: any malware infection, especially on a server, is a big deal. However, based on the analysis, this incident might be less important than another incident the customer is currently working.
Sure, that makes sense. Most, if not all, organizations have limited resources and have to prioritize their efforts. Knowing the capabilities of the specimen helps with the triage process. Can you also talk about the importance of discussing the signs of the malware infection (indicators of compromise) in the report?
Right. So I also make it a point to create another section that lists indicators of compromise, related filenames, and anything else that IR personnel can act on. For instance, a malware dropper may pick from 20 different names for the file it writes to disk. The customer is already infected with this malware in one location, so it is likely they are compromised elsewhere as well. If the IR team is only looking for one filename, they'll only detect 5% of the infections. Registry keys, default folders, and service names are all things that also tend to stay more or less static across samples in a malware family. I usually place these in tables, grouping similar objects together to call them out for the IR team. If the malware propagates using a particular exploit, I call out the exploit, as well as the software versions impacted, in this section. If the exploit involves MS Windows, I include the hotfix number that patches the underlying vulnerability. Both of these help IT and IR personnel determine their exposure in the rest of the network.
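The filename point lends itself to a short illustration. Below is a minimal Python sketch of what a file-based IOC sweep might look like; the IOC filenames here are invented for the example, not drawn from any real malware family:

```python
import os

# Hypothetical IOC filename set: the dropper picks one of several names,
# so the sweep must check the whole set, not just the name observed on
# the first infected host.
IOC_FILENAMES = {"svch0st.exe", "winupd32.dll", "lsasss.exe"}  # illustrative only

def scan_for_ioc_files(root, ioc_names):
    """Walk a directory tree and return paths whose filename matches an IOC."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower() in ioc_names:
                hits.append(os.path.join(dirpath, name))
    return hits
```

In practice the same pattern extends to registry keys and service names; the key idea is that the scan covers the entire set of artifacts the malware family can produce.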
How do you suggest organizations use the IOCs you provide?
Ideally, IOCs are used to scan other hosts on the network for signs of the same infection (or infection by a malware variant). This may lead to the discovery of variants of the malware in the network. Or you may have found an older version first, and the IOCs you extract help you find a newer, previously unknown version. The question then is how to use IOCs to scan the network. If you are lucky enough to be working at a site that has deployed Mandiant's MIR or HBGary's Active Defense, then scanning is an easy task. If your organization has yet to deploy one of these tools, then Active Directory can be used to push custom shell, .vbs, or PowerShell scripts to scan for these indicators. When new malware is discovered, it should be analyzed to extract unique IOCs. These are then fed back into the system, and scans are repeated until no new malware is found. It's sort of like the old shampoo commercial where they suggest you "lather, rinse, repeat."
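The feed-back-and-rescan loop described above can be sketched in a few lines of Python. Everything here is hypothetical scaffolding: `scan_host` and `extract_iocs` are stand-ins for whatever scanning mechanism (an enterprise IR platform or scripts pushed via Active Directory) and analysis workflow the responder actually uses:

```python
def ioc_sweep(hosts, initial_iocs, scan_host, extract_iocs):
    """Repeatedly scan hosts with a growing IOC set until a full pass
    finds nothing new ("lather, rinse, repeat").

    scan_host(host, iocs) -> set of samples found on that host
    extract_iocs(sample)  -> set of IOCs derived from analyzing the sample
    """
    iocs = set(initial_iocs)
    infected = set()
    while True:
        found_samples = set()
        for host in hosts:
            found = scan_host(host, iocs)
            if found:
                infected.add(host)
                found_samples |= found
        new_iocs = set()
        for sample in found_samples:
            new_iocs |= extract_iocs(sample)
        new_iocs -= iocs
        if not new_iocs:
            break  # a full pass produced no new indicators; the sweep converges
        iocs |= new_iocs
    return infected, iocs
```

The loop converges because the IOC set only grows; the sweep stops as soon as a complete pass over the network yields no indicators that weren't already known.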
In your experience, how effective is this? How much time does it take?
The technique is extremely effective. I use it every time I run an IR (provided the customer puts the network in scope for the incident). The time expenditure grows quickly, so this isn't for the faint of heart. Some customers just want you to analyze one piece of malware, provide them some results, and get out of the way. Others want to do everything possible to ensure that they kick an adversary out of their network completely, the first time. This method is particularly well suited to the latter group.
Well Jake, thanks for your time and insights! As a follow up, take a look at the final installment of this interview series, where Jake will discuss his perspective on the various types of malware analysis approaches. Those who missed Part 1 of this interview can read it here.
Jake Williams and Lenny Zeltser will be co-teaching the FOR610: Reverse-Engineering Malware course on-line live March 28-April 29, 2013. Get a choice of a MacBook Air, Toshiba Portege Ultrabook or $850 discount when you register for this class before March 13, 2013.