One of the biggest complaints in the DFIR community is the lack of realistic data to learn from. Starting a year ago, I set out to change that by creating a realistic scenario based on the experiences of the entire cadre of SANS instructors, with additional experts reviewing and advising on the attack "script". We created an incredibly rich and realistic attack scenario across multiple Windows-based systems in an enterprise environment. The scenario was created for the new FOR508: Advanced Forensics and Incident Response course. Our main goal was to place the student in the middle of a real attack that they have to respond to.
The purpose is to give attendees of the new FOR508 real filesystem and memory images that they will examine in class to detect, identify, and forensicate APT-based activity across these systems. The goal was to give students "real world" attack data to analyze so they could get a direct feel for what it is like to investigate advanced adversaries.
This past week, we ran through the exercise. I had a team of attackers mimic the actions of an advanced adversary similar to the APT. Having seen APT tactics first hand, I scripted the exercise, but I also wanted to create a realistic environment that would mimic many organizations' enterprise networks. My attack team (John Strand and Tim Tomes) quickly learned the difference between a penetration test and a test that duplicated typical APT actions.
Over the week, I learned some very valuable lessons by being able to observe the attack team first-hand. More in future blog articles, but the first question I had on my list was: "Is A/V really dead?"
Is A/V really dead?
Over the years I knew that A/V could be circumvented, but until I helped plan and execute this exercise I had not been exposed to the truth first hand. In many incidents over the years (including many APT ones), we and other IR teams have found that A/V detected signs of intrusions, but those signs were often ignored. I expected at least some of those signs to exist this past week while running through the exercises we were creating. I had hoped differently, but after a week of exploiting a network using the same techniques we have seen our adversaries use, I think it paints a very dark picture of how useful A/V is in stopping advanced and capable adversaries. This isn't an anti-A/V or anti-HIDS write-up; it is meant to give you something to think about when it comes to what we blindly rely on. I would never recommend going without A/V, as it still stops very basic and simple attacks, but it is clear that in order to find and defend against advanced adversaries we need to do more than rely on it. Claiming to defeat advanced adversaries such as the APT has always been a stretch for A/V vendors, and the anti-virus industry has not been very open about this very clear fact.
To be honest, I actually had some hope that some of the enterprise-level A/V and HIDS products would catch some of the more basic techniques we used (I wanted the artifacts to be discovered by attendees), but A/V proved easy for my team to circumvent. While I'm sure many of these products stop low-hanging-fruit attacks, we found that we could do basically whatever we wanted without our enterprise-managed host-based A/V and security suite sending up a flare.
What is bundled into this suite? Anti-virus, anti-spyware, safe surfing, anti-spam, device control, onsite management, and a Host Intrusion Prevention System (HIPS), all bundled in McAfee Endpoint Protection Suite (http://shop.mcafee.com/Catalog.aspx). I also separately purchased their desktop host intrusion prevention product, built it into McAfee ePO, and deployed it across the environment as well.
To help understand how this might have happened, many have asked for the details of the network and the attack.
The Windows Based Enterprise Network:
The network was set up to mimic a standard "protected" enterprise network built from standard compliance checklists. We did not include any additional security measures that are usually implemented post-APT incident; this was supposed to mimic a network at a "Day 0" compromise, without active hunting, threat intelligence, whitelisting, and the like. However, we did have a substantial firewall, A/V, host-based IDS, automatic patching, and more. We also had fairly restrictive users, but included some bad habits found in most enterprise settings (poor admin password policy, local admin accounts with the same password, an XP user with local admin rights).
- Full auditing turned on per recommended guidelines
- Users are restricted to being standard users (they cannot even install a program if they wanted to)
- Windows DC set up and configured by Jason Fossen (author of our Windows Security course); he didn't tighten down the network more than what is expected in real enterprise networks
- Systems installed with real software that is actually used (Office, Adobe, Skype, TweetDeck, email, Dropbox, Firefox, Chrome)
- Fully patched (Patches are automatically installed)
- Enterprise Incident Response agents (F-Response Enterprise)
- Enterprise A/V and On-Scan capability (McAfee Endpoint Protection — Anti-virus, Anti-spyware, Safe surfing, Anti-spam, Device Control, Onsite Management, Host Intrusion Prevention (HIPS) )
- Firewall allowed inbound 25 and outbound 25, 80, and 443 only.
- The "APT actors" hit 4 of the systems in this enterprise network (Win2008R2 Domain Controller, Win7 64-bit, Win7 32-bit, WinXP).
- Users have been "using" this network for over a year prior to the attack, so it has the look and feel of something real. These users have set up social media (yes, they are on Twitter... you might be friends with them), email, Skype, etc. Each user has a backstory and a reason to be there working.
Bad habits we included and commonly see in most enterprise networks:
- Local Admin User (SRL-Helpdesk) found on each system w/same password
- A regular user with local admin rights on an XP machine.
Malware Used (non-public):
- C2 Beacon — Port 80 C2 channel encoded in XML-RPC traffic, Meterpreter backend. This malware beaconed out every 32 seconds to a specific IP address over port 80, looking like traditional web traffic, with the command and control channel embedded in the XML-RPC requests. For the command and control shell we used a Meterpreter backend, as developing a new one was too costly and we found it was still "good enough" for our requirements. The malware was detected by Microsoft Security Essentials due to the payload, but not by McAfee's products (I know — odd!).
- C2 Channel — Custom Meterpreter-backed executable. It connects out over port 80 but has no persistence or beacon interval; it must be started manually to connect.
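To make the beacon's traffic shape concrete, here is a benign, minimal sketch of the pattern described above: a fixed-interval check-in whose contents are wrapped in XML-RPC so they blend into ordinary port-80 web traffic. The C2 address, hostname, and method name are illustrative placeholders I made up; the actual implant's code is not public, and the network call itself is deliberately omitted.

```python
import time
import xmlrpc.client

# Hypothetical values for illustration only; not the exercise's real infrastructure.
C2_URL = "http://192.0.2.10:80/RPC2"
BEACON_INTERVAL = 32  # seconds, matching the interval described above

def encode_checkin(hostname, user):
    """Wrap a check-in message in XML-RPC so it looks like normal web traffic."""
    return xmlrpc.client.dumps((hostname, user), methodname="status.update")

def beacon_loop():
    """Sleep-and-check-in loop; a real implant would POST the body to C2_URL
    and parse the XML-RPC response for tasking (omitted here on purpose)."""
    while True:
        request_body = encode_checkin("WIN7-WS01", "jdoe")
        time.sleep(BEACON_INTERVAL)
```

Nothing in the on-wire bytes looks unusual to a firewall that only sees "HTTP POST with XML body", which is exactly why a port-80-only egress policy did not help here.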
Malware Used (Public):
The evasion technique is pretty simple: wrap the executable in a Python script (you can also use Perl or Ruby), then insert it into a good executable or export it to a new one.
- Poison Ivy - Straight export to Python Array. Pretty sad that it worked actually. This is where I had hoped to create some alerts that I would have had to suppress.
- Psexec - Not malware
- Radmin - No encoding needed. Apparently this backdoor is OK?
- mimikatz - No encoding. Again another place hoping to suppress some alerts so we could find them in the "system forensics" piece of the exercise.
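The "export to a Python array" wrapping idea above can be sketched in a simplified, benign form: the binary's bytes become plain source text, so a signature-based scanner looking at the script sees only a text file. The function and file names here are my own illustrations, not the actual tooling used in the exercise, and the sketch only round-trips bytes rather than executing anything.

```python
def wrap_to_python(exe_bytes):
    """Emit Python source whose only content is the binary as a byte array.
    A dropper script like this carries no PE signature for A/V to match."""
    array = ", ".join(str(b) for b in exe_bytes)
    return (
        "data = bytes([%s])\n"
        "with open('rebuilt.exe', 'wb') as f:\n"
        "    f.write(data)\n" % array
    )

def unwrap(script_source):
    """Recover the embedded bytes from the generated source without running it."""
    start = script_source.index("[") + 1
    end = script_source.index("]")
    return bytes(int(x) for x in script_source[start:end].split(", "))
```

That the straight Poison Ivy export in this style sailed past an enterprise A/V suite is the "pretty sad that it worked" part: there is no obfuscation here at all, just a change of container.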
APT Attack Phases
This exercise and challenge will be used to show real adversary traces in network captures, host systems, memory, hibernation/pagefiles, etc. Hopefully the malware will be used in FOR610 (malware analysis) for additional exercises, and the network captures in FOR558 (network forensics).
Throughout the week, none of the defenses we had put in place mattered whatsoever. It was quite simple to evade any detection. Our APT "team" consisted of John Strand and Tim Tomes.
- Phase 1 — Spearphishing attack (w/signed Java applet attack — public) and malware C2 beacon installation (custom malware - encapsulated port 80 HTTP traffic and Poison Ivy)
- Phase 2 — Lateral movement to other systems, malware utilities download, install additional beacons, and obtain domain admin credentials
- Phase 3 — Search for intellectual property, profile network, dump email, dump enterprise hashes
- Phase 4 — Collect data to exfiltrate and copy it to a staging system; RAR up the data using a complex passphrase
- Phase 5 — Exfiltrate rar files from staging server, perform cleanup on staging server
In the end, we will have created authentic memory captures on each box, network captures, malware samples, in addition to full disk images w/Restore Points (XP) and VSS for (Win7 and Win2008) machines.
Why did we choose McAfee's product?
I have seen a lot of enterprise-managed A/V and HIPS suites, and none of them have fared well against APT actors and malware. It is too easy to obscure malware to avoid detection, so any A/V choice here (McAfee, Symantec, etc.) would have yielded similar results. That matches what I and many others have witnessed at locations where APT adversaries roamed freely for months before detection. In the end, it really would not have mattered to choose product X over Y. We wanted to select a product where most attendees of FOR508 would feel at home when performing incident response.
In order to set up a realistic environment, I wanted to go with one of the products implemented in the most environments, so that attendees could easily "identify" with their own enterprise networks through the lens of this exercise. I asked the SANS advisory board for their recommendations in late August 2011 and found that most seemed to lean toward McAfee ePO. This made sense to me, as the DoD Host Based Security System (HBSS) also implements functionality similar to what is found in the A/V products we ended up using.
Furthermore, when we installed the product we did not tighten the configuration options beyond the default settings. I literally mimicked an admin purchasing a product, installing it, and crossing my fingers, hoping it worked correctly. In most environments we have investigated, this was typical (standard out-of-box settings). We also came to find out that the .dat files were not automatically updated; the system apparently needed a bit of care and feeding. Having said that, the attack team verified during the test that the malware used (public and proprietary) evaded detection using the latest .dat files as of the week of 2 April 2012.
We uploaded all the log files for their team to review to get a better sense of what we had installed, if anything was incorrectly applied, and for additional feedback. We used their analysis tool called WebMer found here: http://mer.mcafee.com/enduser/downloadlatestmer.aspx to collect all the log files, operational parameters, and more from each system. McAfee verified via a conference call that the system was installed correctly and operational during the attack, but that some settings could have been implemented that would have slowed the speed of the advance, but not stopped it.
Their team determined we could have implemented several key settings to help slow the attack:
From McAfee — snip —
- Prevent Windows Process spoofing — enable for BLOCK/REPORT
- Prevent all programs from running files from the Temp folder — enable for REPORT
- Prevent creation of new executable files in the Windows folder — enable for REPORT
End From McAfee —-snip—-
If we had enabled these protections, they would not have stopped the attack, but they would have created logs. I actually regret not having implemented them prior to the exercise, as it would have been great for attendees to see those logs during the investigation.
Overall, the point of the exercise was not to embarrass anyone. I wanted to come as close to "real" as I could get. As a result, we knew we had to include real-world implementations of some of the best tools money can buy. In the end, this isn't about trying to shame anyone or pick on an A/V vendor. It is about reporting "What happened?" and "What did we notice?" Hopefully everyone learns something from the exercise and benefits from it.
Some IR/Forensic Results from the Attack
Here are some results from quick forensic analysis of these machines:
Timeline Analysis of Spearphishing Attack
Memory Analysis (Quick hits using Redline without analysis - Yes... for in-depth analysis I would also use Volatility)
We used a combination of custom-crafted malware and well-known malware such as Poison Ivy, Metasploit, and more. We used simple A/V evasion to get around detection, and we NEVER turned A/V off. RESULT -> NOT A PEEP from A/V. Yes, it was installed correctly: it quickly detected and killed an un-armored Metasploit payload (a test to make sure it DID in fact work, as I became worried it really didn't work or was set up wrong). I would gladly let anyone from McAfee look at our setup to make sure we didn't make a mistake, but I followed their guide to the letter and used recommended settings when installing the product (they took us up on that, and we sent in the logs from all 4 systems). I have also found a lot of clients with incorrectly installed enterprise products, so it is clearly possible I munged something up during the install. If we are wrong, then we are wrong, and we can run through it again after we apply their suggestions, since we have it all snapshotted inside an ESX server. I was actually anticipating it would find at least ONE thing we did. Nothing was found.
If anyone needs just a little proof that A/V products mainly defend against low-skilled attackers, there it is. I asked that the attack team use skills learned in most "Penetration Testing" courses. They didn't use anything really advanced, which is one of the reasons many argue that even the "Advanced Persistent Threat" isn't really that advanced. We even made many mistakes during the attack. Even then... nothing was found and nothing was automatically blocked. If this were a real compromise, we could have been on this network for months or years prior to anyone finding us. Just like in the real world.