Blog: SANS Digital Forensics and Incident Response Blog

Digital Forensic SIFTing - Targeted Timeline Creation and Analysis using log2timeline

Digital Forensic SIFTing is a series of blog articles that utilize the SIFT Workstation. The free SIFT Workstation, which can match any modern forensic tool suite, is featured and taught directly in SANS' Advanced Computer Forensic Analysis and Incident Response course (FOR508). SIFT demonstrates that advanced investigations and intrusion response can be accomplished with cutting-edge open-source tools that are freely available and frequently updated.

The SIFT Workstation is a VMware appliance, pre-configured with the necessary tools to perform detailed digital forensic examination in a variety of settings. It is compatible with Expert Witness Format (E01), Advanced Forensic Format (AFF), and raw (dd) evidence formats.

Targeted-Timelines


Super Timeline analysis can be fairly overwhelming when you first begin. There are so many entry types and fields to master that the result is often a glassy-eyed analyst staring at the screen, trying to make heads or tails of a particular set of data related to an incident. Unless you have a good starting point, or at least a hint of what you are looking for, it is easy to get lost. We go through timeline analysis in depth in the SANS Advanced Forensics and Incident Response course, so I see many analysts experience this frontal onslaught of data quite often.

Fortunately, the same framework can create both kinds of timelines. While log2timeline can be used to create an overall system timeline, called by many a "Super Timeline," it can also be used to create much more focused timelines. Knowing the intricacies of your toolset is valuable in many circumstances, and log2timeline is no different.

The most common way to create an extensive timeline is with the log2timeline-sift command. It is by far the easiest and most inclusive method currently available to mine for timeline gold.

Why are there 2 tools? (log2timeline-sift and log2timeline)

The core difference between log2timeline-sift and the core log2timeline tool is:

  • log2timeline-sift: automated processing of entire disk or partition images
    • (e.g. raw image files, E01 images, AFF images)
  • log2timeline: flexible, targeted processing of individual or multiple artifacts
    • (e.g. LNK files, index.dat, NTUSER.DAT, XP Restore Points)

The Core log2timeline Tool


It would be helpful to review how the structure of the log2timeline command works.

Critical Failure Point Note: There is confusion over what the time zone option (-z) is used for. The -z value is the time zone of the SYSTEM being examined. It is used to correctly convert time data that is stored in "local" time, and by default the output is written in that same time zone.

Why is this needed? Certain artifacts, such as setupapi.log files and index.dat files, store times in local system time instead of UTC. Without being told what the local system time zone is, log2timeline would slurp up the data from those artifacts incorrectly. To handle this, log2timeline converts the data to UTC internally, but then outputs it back in the same -z time zone by default.

The output of your timeline will always be in the same time zone as your -z option unless you specify a different output time zone using the "BIG" -Z option. This allows you to convert a system time zone of EST5EDT to UTC output, which is useful if you want to compare computers from two different time zones in a single timeline.
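As an illustration of this conversion (using GNU date rather than log2timeline itself, and a hypothetical timestamp), the EST5EDT-to-UTC shift that -z EST5EDT with -Z UTC would perform looks like this:

```shell
# A hypothetical local timestamp from a setupapi.log on an EST5EDT system.
# Converting it to UTC shifts it +5 hours in January, when daylight
# savings is not in effect.
TZ=UTC date -d 'TZ="EST5EDT" 2008-01-16 12:00:00' '+%Y-%m-%d %H:%M:%S'
# -> 2008-01-16 17:00:00
```

In July the same conversion would shift only +4 hours, which is exactly why the daylight-savings-aware EST5EDT value matters.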

If your time zone observes daylight savings time, it is important to use a time zone value that accounts for it. For example, on the East Coast the correct value would be EST5EDT; for Mountain Time it would be MST7MDT. log2timeline has autocomplete enabled in the SIFT Workstation, so all you need to do is type -z [tab tab] to see all the time zone options it will recognize. If daylight savings is not accounted for in your -z setting, any timeline data stored in local time will be interpreted incorrectly. To reiterate: using EST as your time zone will treat all timestamps as if they were EST year-round, while EST5EDT (and other similarly named values) will take daylight savings into account.

In summary: It is crucial that the -z option matches the way the system is configured to produce accurate results.

log2timeline LIST-Files


The list-files in log2timeline are vitally important to understand. They can be used to specify exactly which log2timeline modules you would like to run against a given image. Several default list-files are already built into log2timeline, found in the /usr/share/perl5/Log2t/input directory with the extension .lst. A list-file is always invoked with the -f list-file option together with -r (recurse directory), and log2timeline will automatically parse any artifact named in the chosen list-file that it finds in the starting directory or its subdirectories.

Once you understand that the .lst files are just lists of artifacts you would like to examine, it is fairly simple to add your own for any type of situation. For example, for an intrusion investigation against an IIS web server you could add one that uses only the mft, evt, and iis modules. This will save you a lot of time, especially if there are many IIS log files on the system.
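Assuming the list-files really are just one module name per line, a custom list-file for that IIS scenario could be sketched like this (iis-intrusion is a hypothetical name):

```shell
# Create a hypothetical list-file holding only the three modules
# needed for an IIS intrusion case: one module name per line.
cat > iis-intrusion.lst <<'EOF'
mft
evt
iis
EOF

# Copy it into the SIFT input directory so -f can find it by name:
# sudo cp iis-intrusion.lst /usr/share/perl5/Log2t/input/
# Then run, for example:
# log2timeline -z EST5EDT -r -f iis-intrusion /mnt/windows_mount -w intrusion.csv
```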

Intermediate log2timeline LIST-Files usage


If you prefer not to make a list-file, log2timeline can take any number of modules on the command line, as long as they are separated by commas.

An example: -f winxp,-ntuser,syslog. This loads all the modules in the winxp input list-file, then adds the syslog module and removes the ntuser module.

The same can be done here: -f webhist,ntuser,altiris,-chrome. This loads all the modules inside the webhist.lst file, adds ntuser and altiris, and then removes the chrome module from the list.


This is the proper way to form a command using the log2timeline list-files option in the SIFT workstation.


Now that we have an understanding of the basic functionality, let us quickly look at some cases where a targeted timeline could be used.

Case Study 1 — Intrusion Incident (IIS Web Server)


Perhaps you are examining a case in which you have to inspect a web server for possible residue of an attack. You are not sure when the attack took place, but you would like to look at not only the system's MFT but also the IIS log files and the system's event logs. This greatly reduces the amount of clutter in your timeline, since you already know an attack via the web would be found in these three places.

Mount your disk image on the SIFT Workstation at /mnt/windows_mount, then build the commands to create your initial timeline.

Step 1 — Find Partition Starting Sector: # mmls image.dd (then calculate the offset: starting sector × 512)
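For example, if mmls reported the NTFS partition starting at sector 63 (a common XP layout; the sector number here is hypothetical), the byte offset works out as:

```shell
# Offset in bytes = starting sector (from mmls output) * 512 bytes/sector
START_SECTOR=63
OFFSET=$((START_SECTOR * 512))
echo "$OFFSET"
# -> 32256
```

That 32256 value is what you would substitute for ##### in the mount command in Step 2.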

Step 2 — Mount Image for Processing: # mount -o ro,noexec,show_sys_files,loop,offset=##### image.dd /mnt/windows_mount

Step 3 — Add Filesystem Data: # log2timeline -z EST5EDT -f mft,iis,evt /mnt/windows_mount -w intrusion.csv

Once you have run these three commands you will have your timeline built. It is best to sort your timeline next using l2t_process, which is most effective when bounded by two dates so you are not reviewing every timestamp on the system.

Step 4 — Filter Timeline: # l2t_process -b intrusion.csv > filtered-timeline.csv

# l2t_process -b /cases/EXAMPLE-DIR-YYYYMMDD-####/timeline.csv 01-16-2008..02-23-2008 > timeline-sorted.csv


Case Study 2 — Restore Point Examination


In this example, we use log2timeline to parse Windows XP Restore Points, looking only for "evidence of execution." This shows how log2timeline can produce a targeted timeline of just one piece of the drive image instead of the entire system. Here, we can see historically the last execution time of many executables on each day a restore point was created.

# log2timeline -r -f ntuser -z EST5EDT /mnt/windows_mount/System\ Volume\ Information/ -w /cases/forensicchallenge/restore.csv

# kedit keyword.txt (add userassist, runmru, LastVisitedMRU, etc.)

# l2t_process -b restore.csv -k keyword.txt > filtered-timeline.csv
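The keyword file is just a plain-text list, one keyword per line; l2t_process -k then keeps only the timeline rows that match. A minimal sketch, assuming the artifact names mentioned above:

```shell
# Build a keyword file for "evidence of execution" artifacts,
# one keyword per line, matched against each timeline row.
cat > keyword.txt <<'EOF'
userassist
runmru
lastvisitedmru
EOF
wc -l < keyword.txt
# -> 3
```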

What if a key part of our case was determining when Internet Explorer was last executed over time? Using timeline analysis techniques like those shown above, each day's last execution of IE becomes easily visible. You can track the execution of a specific program across multiple days thanks to quick analysis of the restore point data (NTUSER.DAT hives) with log2timeline.


Case Study 3 — Manual Super Timeline Creation


In some cases, you might not want to use the fully automated super timeline. To understand what log2timeline-sift is stepping through automatically, it can be useful to accomplish the same output by hand. The steps are the same three that log2timeline-sift takes care of in a single command: find the partition offset, mount the image, and run log2timeline against the mount point (as in Case Study 1, but with a full list-file such as winxp instead of a targeted module list).


In Summary:


Timeline analysis is hard. Understanding how to use log2timeline will help you engineer better solutions to unique investigative challenges. The tool was built for maximum flexibility to account for the need for both targeted and overall super timeline creation. Create your own preprocessors for targeted timelines. Use log2timeline to collect only the data you need, or use it to collect everything.

In review: The core difference between log2timeline-sift and the core log2timeline is:

  • log2timeline-sift: automated processing of entire disk or partition images
    • (e.g. raw image files, E01 images, AFF images)
  • log2timeline: flexible, targeted processing of individual or multiple artifacts
    • (e.g. LNK files, index.dat, NTUSER.DAT, XP Restore Points)
In the next article we will talk about more efficient ways of analyzing data collected from log2timeline or log2timeline-sift.

Keep Fighting Crime!


Rob Lee has over 15 years of experience in digital forensics, vulnerability discovery, intrusion detection and incident response. Rob is the lead course author and faculty fellow for the computer forensic courses at the SANS Institute and lead author for FOR408 Windows Forensics and FOR508 Advanced Computer Forensics Analysis and Incident Response.

6 Comments

Posted January 20, 2012 at 4:12 PM | Permalink | Reply

DAVID NIDES

AWESOME WRITE UP, I LIKE HOW YOU LISTED THE DIFFERENT USAGE EXAMPLES!! TIMELINE ANALYSIS RULES AND ENCASE/FTK DRULES!

Posted January 20, 2012 at 10:08 PM | Permalink | Reply

Frank McClain

Rob, thanks so much for this writeup. Well done, enjoyed it, very useful on the differences between l2t and l2t-sift. Thanks as well to everyone who has worked so hard to improve l2t and integrate into SIFT.

Posted January 23, 2012 at 6:55 AM | Permalink | Reply

John Foley

Awesome! This is what I needed. Thanks!

Posted June 04, 2012 at 1:44 PM | Permalink | Reply

Ketil Froyn

Rather than using mmls to manually calculate the offset, I prefer to use losetup and partx. Use like this:

losetup -f --show image.dd # outputs device node, let's assume /dev/loop0
partx -a /dev/loop0
mount -o ro,noexec,show_sys_files /dev/loop0p1 /mnt/windows_mount
# create timeline here
umount /mnt/windows_mount
losetup -d /dev/loop0

After running partx -a you can see how many (and which) partitions exist by looking in /proc/partitions or running partx --show /dev/loop0.

Posted June 27, 2012 at 2:33 PM | Permalink | Reply

ano

I still don't understand a timezone value to -z option as you explained. What do you mean by "The (-z) is the timezone of the SYSTEM.". What is the "SYSTEM" ? Does the timezone value depends on a file system of acquired image (FAT, NTFS) or does it depend on the system-time (as opposed to local-time) of the investigated computer system ?

Please explain, I am relatively new to DFIR !

Thanks

Posted July 06, 2012 at 1:52 AM | Permalink | Reply

Rob Lee

SYSTEM = the system time of the computer you are analyzing. There are some artifacts in log files written in local time, and the local system time is needed to convert them to UTC.
