SANS Digital Forensics and Incident Response Blog

First forensics work - Part 1: Organized chaos and panic

You've taken the plunge. You want to work in digital forensics. Congratulations. You've told your boss of this interest, managed to get some forensics training (SANS FOR508, of course!) and hyped up the kinds of things you would be able to accomplish. You feel good about yourself.

Until now.

Two months after your course.

And you haven't had time to practice anything, let alone review the material.

The situation: You were called in and asked to use all of these new skills to help solve a problem. And the pressure is on, as they want some answers by the end of the day. Now you are wondering: why did I tell them I wanted to do this again?

Don't panic.

You can do this. We've all been there. All you need is a little help from your friends.

The goal of this series is to help guide you through a case and provide suggestions on how to attack the problems you will face. So let's get down to business.

Part 1: Acquiring images from local systems

Probably one of the easiest tasks is acquiring images from computers you can walk up to and physically touch. The easiest way I have found to do this is with the Helix 3 PRO program that now comes with the Forensics 508 class. Pop the DVD into the computer in question, attach a USB drive and launch Helix.

A few small tips on the settings I use with Helix:

  • Make sure you capture the volatile information first. You don't want to lose this information, and capturing it after taking the disk image is, well, let's just say nowhere near as useful.
  • Ensure the output type is set to RAW.
  • I select the entire disk as the source, not just the partitions. This gives me everything to work with, and helps ensure I don't miss anything.
  • I change the segmentation setting from the default 2GB to a single file, since I only want to work with one image for each system.
  • Make sure you choose a hash type you want to calculate. This is just best practice and helps to show that the image hasn't been tampered with.
  • Set the destination file name as something that has meaning and not just "disk_image.img". When you have 28 images with the same file name, it gets confusing. Use something that is easy to reference and that will work for you. I tend to use the hostname of the system I am imaging and the date, as it makes it easy for me to find what I am looking for later and my work is all within one organization.

Helix will image the hard disk, create the hashes and fill out the chain of custody forms for you, all with the click of a button. The latter two are little things that you might otherwise forget to do right away, or at all.
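For the curious, here is roughly what an imaging tool is doing under the hood when it images and hashes in one pass. This is a minimal Python sketch, not Helix's actual code: the device path and output file name are hypothetical, and on a real case you would let the tool do this for you.

    import hashlib

    # Hypothetical paths: on Linux the entire disk (not a partition) might be /dev/sda
    SOURCE = "/dev/sda"
    DEST = "hostname-2010-06-01.img"   # meaningful name: hostname plus date
    CHUNK = 4 * 1024 * 1024            # read 4 MB at a time

    md5 = hashlib.md5()                # whichever hash type you chose to calculate
    with open(SOURCE, "rb") as src, open(DEST, "wb") as dst:
        while True:
            block = src.read(CHUNK)
            if not block:
                break
            dst.write(block)           # one monolithic RAW file, no segmentation
            md5.update(block)          # hash computed in the same pass

    print("MD5:", md5.hexdigest())     # record this in your notes and chain of custody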

One final thing you need to remember at all times is to take notes of what you are doing, where you are doing it and when. This will save you time later when, for example, you need to remember whether or not you typed a command. Even those fat-finger mistakes: make sure you note them. It makes it so much easier when you need to dig deep later.
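Even a tiny script can help keep those notes consistent. Here is a minimal sketch, assuming a flat text log is good enough for you; the file name and entries are made up for illustration.

    from datetime import datetime

    LOG = "case-notes.txt"  # hypothetical log file, one per engagement

    def note(text):
        """Append a timestamped entry: what you did, where and when."""
        stamp = datetime.now().isoformat(timespec="seconds")
        with open(LOG, "a") as f:
            f.write(f"{stamp}  {text}\n")

    note("Attached USB drive to HOSTNAME and launched Helix")
    note("Fat-fingered a command on the console before imaging; noting it anyway")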

Next up, Part 2: Imaging those remote systems.

Jonathan works as a Senior Technical Specialist in IT Security for the Canadian federal government. He is a SANS mentor, a GIAC question writer and he holds numerous certifications including GCFA and GAWN. When not working, his spare time is filled by his 3 young daughters.


Posted June 1, 2010 at 8:35 PM


You say you prefer to save your image as a single very large monolithic file, instead of splitting it into 2GB or smaller segments. What file system are you saving to? Using Helix (or any Linux), your choices are:
1. Save to FAT32, which doesn't support files of 4GB or larger
2. Save to NTFS, which has slow write speed under NTFS-3G
3. Save to Ext3 or another Linux filesystem, which cannot be (easily) read from a Windows examination system.

Posted June 5, 2010 at 5:02 PM

Jonathan Risto

Thanks for the comment.
Most of the work I do is on XP for the desktop systems, or a flavour of Windows for the server side. I normally connect a USB hard drive for my images, and those are saved out in NTFS format.
I normally end up doing my examination work in a Linux-based system image within a VM workstation (currently the latest SIFT Workstation). The toolset is great, and is the primary reason for me doing this.
I agree the image files can be large for some systems (we have 80 or 160 GB drives in our office, but home users with 1 TB or more get nasty), but given how Windows scatters files all over the disk when it writes them out, I have come across instances in the past where, working with segmented images, I was flipping between image files to gather all the fragments of a single file.
If I have it as a single image file, most of the time a file can be auto-carved out by an application for me quickly and easily. While I haven't tried in a while, I was previously unable to get an application to carve across multiple sub-images and put the complete file together for me. Doing it manually tended to be tedious at best.
For me at least, the approximately 3 hours it takes to image a system over the network (maybe 1.5 to 2 hours when on the console) normally isn't an issue for the single image file.
Hopefully this helps clarify, but if you still have questions, feel free to post back and I'll be happy to answer.
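One workaround, if you do end up with a segmented image and your carver can't span the pieces, is to rejoin the segments into a monolithic file first. A minimal Python sketch, assuming the segment files are named so that they sort in order; the names here are hypothetical:

    import glob

    # Hypothetical segment names: hostname-2010-06-01.img.001, .002, ...
    segments = sorted(glob.glob("hostname-2010-06-01.img.*"))

    with open("hostname-2010-06-01-joined.img", "wb") as out:
        for seg in segments:
            with open(seg, "rb") as f:
                while True:
                    block = f.read(4 * 1024 * 1024)
                    if not block:
                        break
                    out.write(block)  # RAW segments simply concatenate back into one image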

Posted June 11, 2010 at 12:55 AM


Well, yes and no.
I had assumed that using Helix you were booting into Helix Linux. Now I understand you are using Helix's Windows-side tools.
If you image a system live on that system, aren't you running the risk of capturing some data mid-write and getting a corrupt version of some data? For example, your imaging tool captures some blocks at the beginning of a large Outlook.pst file, then Outlook writes data to the file, both appending to the end and updating information in the header. Since your imaging tool has already passed the header, it will capture the updated end without the matching header. Your Outlook.pst will be internally inconsistent and you may get errors trying to parse and read information out of it. This same problem holds true for any database file, and even, to a lesser extent, for the filesystem itself.
This is an old well known problem with backups, which is why old backup systems only worked when the system was "quiescent" (mostly shut down) and modern backup systems create a "snapshot" filesystem or "volume shadow" filesystem and use that to capture a consistent backup.
This is why I try to avoid capturing an image from a running system if I can. I will boot the system into Helix Linux and capture an image of the internal hard drive in its quiet, unmounted state.

Posted June 11, 2010 at 6:39 PM


Oh man, this is sooo me!! I've RSS'd this one! Waiting expectantly for your next update…!

Posted June 16, 2010 at 12:50 AM

Jonathan Risto

Thanks for helping clarify the Windows vs. Linux boot items. I had thought that I had mentioned it in the posting, but didn't… Note to self: don't write blogs late at night when tired :)
You are also quite correct on the file access items. You will always run the risk of having some data on the system being written out, such as page files, swap space or the like. But if you shut down the system and go from a clean disk, you have lost all of the volatile system data. You also have a huge amount of write activity when you shut down a system, which could be overwriting areas of the disk. Given the amount of valuable information that you capture from the volatile sources, I find it is worth the risk of potentially having some data that you need to work harder to get. I also find there are far fewer changes to the disk itself when you skip the shutdown process.
You won't lose any data from a write-out happening (well, OK, unallocated space will be overwritten), and if there was a large chunk of unallocated space that you had already passed in your disk image, you could miss the new write-out depending on where the drive decides to store the file.
You may lose the ability to quickly carve things out, and that means more work on our side for sure. But figuring out what happened on the disk in these cases (strange and bizarre behaviour) is also half the fun, I find.
So long as you understand what happened, can explain why it happened, and can show what steps you took to mitigate the risk (not accessing programs while imaging the disk, etc.), this would still be valid data from a forensics perspective. Your notes and documentation really matter here.

Posted June 18, 2010 at 2:51 AM


Thanks, Jonathan, those are some excellent points. It's true that shutting down the system will cause a great deal of activity, probably more activity than inserting a CD or USB drive and running a single executable imaging program.
It still makes me nervous, though, the idea of imaging while the system is running. It is theoretically possible for some malware to intercept your imaging program's attempts to access disk blocks and feed it fake data. A rootkit will try various ways to feed fake data to other programs. Imagine a program that gives the user some data space to hide nefarious activity in, and intercepts any system call that would see this space in the file system or disk blocks, feeding back fake or empty data. I haven't seen this in the wild, but it is possible. By imaging a system that isn't running (either powered off, using an imaging device, or booted from a known safe OS like Helix), any malware won't have the chance to interfere with your results.
It's an interesting trade-off. In an imaginary perfect situation you would capture memory, then capture the hard drive from the live system, then capture the hard drive AGAIN from a shut-down system. In the real world I never have the time to do that, and I certainly wouldn't have the time to compare the two hard drive images for differences on top of doing the actual analysis for the case.
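For what it's worth, the comparison itself could be scripted: read both images block by block and report the offsets that differ, then examine only those regions. A minimal Python sketch, assuming both images are the same size; the file names are made up:

    CHUNK = 1024 * 1024  # compare 1 MB blocks

    def differing_offsets(path_a, path_b):
        """Yield the byte offset of each block where the two images differ."""
        offset = 0
        with open(path_a, "rb") as a, open(path_b, "rb") as b:
            while True:
                block_a, block_b = a.read(CHUNK), b.read(CHUNK)
                if not block_a and not block_b:
                    break
                if block_a != block_b:  # direct byte comparison; no hashing needed
                    yield offset
                offset += CHUNK

    for off in differing_offsets("live-image.img", "offline-image.img"):
        print("images differ in the block starting at offset", off)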