SANS Digital Forensics and Incident Response Blog

Favoring Frameworks for Intrusion Detection and Prevention

Revealing, maturing, and utilizing indicators through their lifecycle is the analytical engine behind Security Intelligence (or, if you prefer, Intel-driven CND). Each of these actions can be enhanced with custom, FOSS, and COTS tools, but perhaps no aspect relies on tools more heavily than the act of leveraging intelligence. The data rates and volumes of today's computers and networks mean that intelligence can be leveraged only through automation; manual searching and correlation by analysts at that scale is clearly impossible. Thus, the ability to codify intelligence into network and host security tools establishes the practical limits of an organization's ability to use that intelligence.

In the days when authentication, authorization, and anti-virus defined the tools of the security trade, the universe of malicious activity significantly impacting organizational risk was far more constrained than it is today. Early intrusion detection systems were built by vendors on the same assumption. Perhaps without realizing it, organizations using these products were outsourcing risk management to the software vendors who built and provided updates to those tools. As network defenders at individual organizations began to appreciate the broader array of adversaries, activities, and vulnerabilities composing their risk profiles, the need grew for greater flexibility. Marty Roesch's Snort IDS, and perhaps Stephen Northcutt's Shadow IDS before it, were early tools that provided increased flexibility in the form of an open signature language. The flexibility of this language, combined with the failure of Snort's competitors to evolve their own, was one of the reasons for its rapid adoption as a key CND tool. Signature languages, while an improvement, are still fundamentally limited by their syntax: they meet the needs of atomic and some basic computed indicators, but fail for more complex computed and behavioral indicators. This is particularly true when contextual information outside a single file or packet must be considered to fully define an indicator, or when brute-force analysis of files and protocols is needed. To fully leverage intelligence on adversaries, analysts today require frameworks that expose the structure and corresponding data of network protocols and computer systems for additional inspection.
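
To make the distinction concrete, here is a minimal Python sketch (all names and thresholds hypothetical, not drawn from any particular product) contrasting an atomic indicator, which a signature engine handles easily, with a behavioral indicator that requires state held across an entire event stream:

```python
# Hypothetical sketch: atomic indicators match in isolation;
# behavioral indicators require state across many events.

# Atomic indicator: a single observable decides the match.
BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def match_atomic(packet_src_ip):
    """One packet is enough to decide."""
    return packet_src_ip in BAD_IPS

# Behavioral indicator: regular "beaconing" to one destination.
# No single packet matches; the detector must correlate timestamps
# across the whole stream -- beyond what a per-packet signature sees.
def detect_beaconing(events, min_count=4, jitter=2.0):
    """events: iterable of (timestamp, dst_ip) pairs. Flags hosts
    contacted at near-regular intervals at least min_count times."""
    by_dst = {}
    for ts, dst in events:
        by_dst.setdefault(dst, []).append(ts)
    flagged = set()
    for dst, times in by_dst.items():
        if len(times) < min_count:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        mean = sum(gaps) / len(gaps)
        if all(abs(g - mean) <= jitter for g in gaps):
            flagged.add(dst)
    return flagged
```

A signature language can express the first check in one line; the second requires exactly the kind of stateful, programmable analysis a framework must expose.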

There will always be a place for third-party-supplied signatures addressing ubiquitous threat actors, malicious tools, and vulnerabilities. Those signatures, however, fulfill only the most basic security risk management needs: the lowest common denominator. Certain industries face distinct threat landscapes, and in some cases run industry-specific software presenting its own limited-scope vulnerabilities. Finally, of course, many organizations have individual network defense needs that must be met. For a good example of industry- and company-specific risks, one needs to look no further than the recent data breach at Renault. Although it appears to have been an inside job, this story reminds us that the dynamics of vulnerabilities and threats must be addressed down to the level of an individual company. If vendors cannot empower analysts to use their technologies for leveraging niche market intelligence, their tools do not meet the modern needs of network defenders. The only way to empower analysts in this fashion is to present network and host data structured as it will be used, for arbitrary analysis limited only by the capabilities of the underlying O/S and hardware.

So what, then, do I mean by a "framework?" For the purposes of this article, I use the term to refer to software that collects data from hosts or network communications and provides meaningful structure representative of how that data is used, along with associated metadata and hooks into analytical processes on the data. As examples, consider protocols at different OSI layers, portable executable attributes and sections, volatile memory objects, etc. What sort of capabilities should these frameworks bring to bear? My experience is that, irrespective of network or host context, these solutions need to support:

  • Bulk surveillance or data collection
  • Offline analysis, with latency tolerance in detection mode
  • Inline analysis, with latency intolerance in prevention mode
  • Data structured according to its context (network/protocol, host/file/process), and corresponding metadata, available for custom analysis
  • Integration of custom analytical code taking any executable form recognized by the underlying O/S
  • Classic signature-based analysis
  • A modular architecture for analysis procedures
  • Scalability in the form of a tiered or distributed architecture
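
The modular-architecture and custom-code requirements above can be sketched in a few lines of Python. This is an illustrative toy (all class and function names hypothetical), showing the core idea: the framework structures each observation and fans it out to registered analysis modules, whether classic signatures or arbitrary custom logic.

```python
# Hypothetical sketch of a modular detection framework: the framework
# provides structured protocol data, and registered analyzer modules
# (signature-based or custom) run against that structure.
from dataclasses import dataclass, field

@dataclass
class HttpRequest:
    # Structured view of one protocol event, plus metadata.
    method: str
    host: str
    uri: str
    headers: dict = field(default_factory=dict)

class Framework:
    def __init__(self):
        self.analyzers = []           # pluggable analysis modules

    def register(self, fn):
        """Decorator: add an analysis module to the pipeline."""
        self.analyzers.append(fn)
        return fn

    def process(self, event):
        # Fan each structured event out to every module; collect alerts.
        alerts = []
        for analyze in self.analyzers:
            alerts.extend(analyze(event))
        return alerts

fw = Framework()

@fw.register
def signature_uri(event):
    # Classic signature: a fixed substring in one field.
    if "cmd.exe" in event.uri:
        yield ("signature", event.host)

@fw.register
def anomalous_post(event):
    # Custom logic using protocol context a raw-byte signature lacks.
    if event.method == "POST" and "Referer" not in event.headers:
        yield ("suspicious-post", event.host)
```

The point of the sketch is the interface, not the detections: because events arrive already parsed into fields, an analyst can drop in new modules without touching the capture or parsing layers.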

Engineers must also take certain considerations into account when implementing these tools, namely:

  • Ease of setup
  • Out of box capabilities (no additional work)
  • Vendor / community support
  • Adoption by industry
  • Costs (purchase, O&M effort including infrastructure management, licensing)

So where should one turn to find these capabilities? Sadly, the options are quite thin at this point. The canonical example on the network IDS/IPS side, Snort (or the COTS products Sourcefire has based on it), is a signature-based tool with all the aforementioned limitations, though it has solved many of the engineering challenges. Sourcefire's first effort to move toward a framework, Shared Object rules, never seemed to be fully integrated into their product offering. Razorback, a proof-of-concept tool from some of the motivated smart folks at Sourcefire's VRT, shows great promise in many of the framework provisions. The latest version I've seen, however, is still in the earliest phases of development, and leaves open significant issues such as data capture and management. My colleague Charles Smutz's Vortex IDS addresses bulk surveillance and latency-tolerant detection with custom tool integration, but falls short in many other areas, such as signature-based analysis and integrated OSI layer 4+ protocol parsing. The folks who have built Bro similarly solve a subset of these challenges, but not enough to make the tool enterprise-ready for many environments.

The story is even worse on the host side. Long dominated by the AV industry, which still considers even its signatures to be proprietary information, these COTS tools provide only the most limited set of capabilities (try detecting, but not deleting, encrypted archives such as ZIP across your enterprise). Mandiant's MIR is a great point solution, but it is a remote tool not designed for real-time host monitoring, falls short on bulk data collection, and is not yet scalable. Yara and ClamAV (another Sourcefire tool), though signature-based, show the most promise for flexible host detection, but there have been no efforts to make these tools enterprise-ready. As an aside, I'm still trying to demystify Sourcefire's purchase of Immunet; I'm sure they'd say it is a move in the direction I endorse here, but it's just too early, with too many variables, to say for sure.

Today's detection and prevention tools are built by vendors focused on common threats & vulnerabilities using often-closed signature languages, limiting the ability of analysts to leverage intelligence applicable to their threat landscape. Tools that operate on more flexible frameworks are too immature for adoption by all but the most highly funded and trained teams. By evolving detection frameworks, vendors can enable analysts to be more effective at defending their networks, and carve out new business models in the form of the highly customized defenses that are becoming a business necessity in more and more industries.

Michael is a senior member of Lockheed Martin's Computer Incident Response Team. He has lectured for various audiences including SANS, IEEE, the annual DC3 CyberCrime Convention, and teaches an introductory class on cryptography. His current work consists of security intelligence analysis and development of new tools and techniques for incident response. Michael holds a BS in computer engineering, an MS in computer science, has earned GCIA (#592) and GCFA (#711) gold certifications alongside various others, and is a professional member of ACM and IEEE.

4 Comments

Posted January 10, 2011 at 7:28 PM

Joel Esler

I think you may be confused about what Shared Object rules were intended to do (and actually do). They are part of the product and used quite heavily. Maybe I can help you understand?

Posted January 10, 2011 at 10:03 PM

Matt Watchinski

ClamAV: You might want to check out the 3.0 beta. OnAccess and OnDemand scanning using custom signatures. Includes support for managing those signatures across all deployed end-points. (No event management yet). Additionally, the byte-code engine allows for much more than just signatures.
Also, from an enterprise perspective, ClamAV is deployed in thousands of enterprises/SMEs/just about every ISP on the planet, so I'm guessing you are referring to end-point when it comes to the enterprise comment?
On Razorback, I'm not sure what you mean by missing data capture. The modules for plugging into snort support easy ways to get at different types of file data from the network. Do you mean from other types of devices?
Very much agree with everything else though. Great post.

Posted January 11, 2011 at 5:44 AM

Seth Hall

Which challenges do you not see Bro supporting?

Posted January 13, 2011 at 5:48 PM

Mike Cloppert

Ladies, ladies, you're all very pretty ;-)
In all seriousness, my intention here was not to highlight deficiencies in specific projects. Much the opposite: I mentioned the tools I did because they are closer than most to meeting the needs of network defenders (note how I didn't mention a single AV vendor by name). My comments on shortcomings were only to emphasize that, in my opinion, no tool is there yet.
I want to also make it clear that this was in no way an effort to be a comprehensive survey of open-source tools which provide framework-like capabilities. I didn't even touch on the SIM space, for example, for which many (if not all) of the same framework properties apply.
Since you asked, I'll lend a little more insight into my thoughts on the three tools mentioned in the comments. If any of you would like to discuss further offline, I'd be happy to (in person, if the opportunity presents itself).
Joel, Sourcefire definitely uses SO rules frequently, and as far as I can tell to great effect. My comment about integration with the product line was more about the level of exposure and support end-users (aka analysts) get. While custom rules are supported through the Sourcefire UI, custom SO rules are not. Documentation seems to exist primarily in the blogosphere. If the intention of SO rules was never to open that capability up to Snort users, well, that just reinforces my broader point about frameworks and that we've still got a ways to go.
Matt, ClamAV is a fantastic product. I and my team use it regularly for point solutions and threat-focused scanning. Admittedly, though, ClamAV is "designed especially for e-mail scanning on mail gateways." (http://www.clamav.net/lang/en/about/) It's a product that could make a big impact on security at endpoints, too. Yes, there are clients available, but support for Windows OSes is unavailable (http://www.sourcefire.com/security-technologies/ClamAV/support), a major barrier to adoption. Also, if version 3 supports central management capabilities, I was unable to find much about it, even after you'd mentioned it here. My google-fu may be weak, or it may not be well-advertised (likely the former).
Seth, Bro is one of the most flexible open-source IDSs that's seen any sort of meaningful adoption since Snort. The team has done some great work, particularly with the HTTP protocol. The tool has a pretty steep learning curve, though, and extending it requires detailed under-the-hood knowledge, making the task rather difficult (especially during heated incident response engagements). Analytical latency tolerance, mainly due to a lack of multithreading, was the biggest gap I saw in the last version of Bro I looked at.
I want to emphasize again that all of these are great tools, moving in the right direction. I wrote this blog entry merely to break down detection frameworks to their fundamental requirements, as seen by someone defending networks against sophisticated adversaries, in the hopes it would encourage self-reflection and development in a helpful direction. On that note, I'd like to extend my thanks to the developers of these and other open source projects that push the envelope. Keep on!
Mike