Linkz for SIEM

Sunday, July 13, 2014 Posted by Corey Harrell 2 comments
Security information and event management (SIEM) has been an area where I have spent considerable time researching. My research started out as curiosity about whether the technology could solve some problems, then continued through getting organization buy-in, and finally went all in to architect, implement, and manage a SIEM for my organization. Needless to say, I did my homework to ensure our organization would not follow in the footsteps of others who either botched their SIEM deployments or ended up with a SIEM solution that doesn't meet expectations. In this linkz post I'm sharing my bookmarks for all things SIEM.

We Need a Response Team

In the movie The Avengers, after everything else failed, Nick Fury was there saying "we need a response team." Regardless of what the World Security Council said, Nick kept saying "we need a response team." The Avengers is a great movie with many parallels to incident response (yes, I cut up the movie to use for in-house training and it's on the deck for my next presentation). Deploying a SIEM - as with any detection technology - will result in things being detected. After something is detected, someone will need to respond and investigate it. As a result, my initial focus for my SIEM research was on designing and implementing an enterprise-scale incident response (IR) process. For a bunch of IR linkz see my post Linkz for Incident Response. My initial focus on IR wasn't solely because things will be detected; I also see IR activities merging with security monitoring activities. For more on this thought process, refer to one of the links I shared in that post, Anton Chuvakin's Fusion of Incident Response and Security Monitoring?

General SIEM Information

The first few linkz provide general information about SIEM technology. Securosis did a series about SIEMs, and three of their posts provide an overview of what they are: Understanding and Selecting SIEM/Log Management: Introduction; Understanding and Selecting SIEM/LM: Data Collection; and Understanding and Selecting SIEM/LM: Aggregation, Normalization, and Enrichment.

Hands down, the best content I found about SIEM was written by Anton Chuvakin, a Gartner analyst. This post links to a lot of his material, and the following posts are self-explanatory: SIEM Analytics Histories and Lessons and SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?

Rounding out my general links is another one by Securosis. In their series this post actually came a lot later (after the articles I listed in the Planning the SIEM Project section), but I think the content is more important to have up front. To get anywhere with a SIEM in an organization, someone has to agree to it: someone who has the ability to purchase it. This is where Securosis's next article comes into play, since it provides examples of the justifications one could use. For more see the post Understanding and Selecting SIEM/LM: Business Justification.

Planning the SIEM Project

The next round of links is what I found to be gold when designing my organization's SIEM solution. One thing I didn't want to happen was to follow in the footsteps of so many companies before me: they buy a SIEM and then end up with a solution that doesn't solve any of their problems. Bumbling a SIEM project is not something I wanted in my rear view mirror. To keep this from happening I spent considerable time researching how to be successful in SIEM deployments so I could avoid the pitfalls that others have fallen into. Sitting where I am today and reflecting back, I'm really glad I did my homework up front, as our SIEM project continues along addressing our use cases.

The best reference I found to help architect a SIEM solution is a slide deck by Anton Chuvakin. The presentation is Five Best and Five Worst Practices for SIEM, and it outlines the major areas to include in your SIEM project (16 to be exact). It may not cover everything - such as building rules, alarms, and establishing triage processes - but it does an outstanding job outlining how to avoid the pitfalls others have fallen into. If anyone is considering a SIEM deployment or is in the midst of one, then this is the one link they will want to read.

Continuing on, the following links from Anton provide additional details: On Broken SIEM Deployments, Detailed SIEM Use Case Example, On Large-scale SIEM Architecture, On SIEM Deployment Evolution, and Popular SIEM Starter Use Cases. All of these posts are worth taking the time to read.

Similar to the amount of information Anton makes public, Securosis also has a wealth of great SIEM posts. The following posts are great since they discuss use cases: Understanding and Selecting SIEM/LM: Use Cases, Part 1 and Understanding and Selecting SIEM/LM: Use Cases, Part 2.

Selecting a SIEM

At some point a SIEM may be bought, and it is helpful to know what should be taken into consideration. Again, Anton and Securosis have posts addressing this as well. Anton has two posts, Top 10 Criteria for a SIEM? and On Choosing SIEM, while Securosis has their white paper Understanding and Selecting SIEM/Log Management.

The last reference to use for SIEM selection is the analysis done by Gartner. Regardless of what people may think about how Gartner comes to its conclusions, the company does publish quality research. The SIEM Magic Quadrant analyzes the various SIEM products, ranks them, and discusses their pros and cons. To get the quadrant you need to download it from a SIEM vendor, and yes, that vendor will start contacting you. To find where to download it, just Google "SIEM Magic Quadrant 2014" for this year's report.

Managing a SIEM

Up to this point there are a lot of references one can use to help in exploring SIEM technology, selecting a SIEM, and designing a SIEM solution. However, I started to see a drop in information for things that occur after a SIEM is up and running. There appears to be very little related to SIEM operations and how others are leveraging SIEM to solve security issues. I only have two linkz, both by Anton: On SIEM Processes/Practices and On SIEM Tool and Operation Metrics.

Due to the lack of SIEM operational literature I branched out to look at resources related to security monitoring. To a certain extent this was helpful, but again not exactly what I was looking for. What I've been looking to find is literature focused on detection, intelligence, and response; what I came across was more general information as opposed to operational information. One slide deck I found helpful for identifying operational areas to consider was Brandie Anderson's 2013 SANS DFIR Summit slide deck Building, Maturing & Rocking a Security Operations Center.

Keep Waiting for a Decent SIEM Book

The book Security Information and Event Management (SIEM) Implementation is the only SIEM book on the market. I saw the poor Amazon reviews, but I opted to take a chance on the book; I was designing and implementing a SIEM, so I wanted to read anything I could on the subject. I gave this book two chances, but in the end it was a waste of my money. The book doesn't address how to design and implement a SIEM solution, nor does it properly cover SIEM operational processes. For those looking for a SIEM book, I'd keep waiting and in the meantime read the linkz in this post. I wanted to give others considering this book a heads up.


Improving Your Malware Forensics Skills

Wednesday, June 25, 2014 Posted by Corey Harrell 6 comments
By failing to prepare, you are preparing to fail.

~ Benjamin Franklin

In many ways preparation is key to success. Look at any sporting event and the team that comes out on top is usually the one that is better prepared. I'm not just referring to game day; I'm also talking about the coaching schemes and building a roster. Preparation is a significant factor in one's success in the Digital Forensic and Incident Response field. This applies to the entire field and not just malware forensics, which is the focus of this post. When you are confronted with a system potentially impacted with malware, your ability to investigate the system successfully depends on your knowledge, experience, and toolset. This is where there is a conundrum. There is a tendency for people not to do malware cases (either through being hired or within an organization) due to a lack of knowledge and experience, but people are unable to gain knowledge and experience without working malware cases. The solution is that through careful preparation one can acquire the knowledge, experience, and toolset that can eventually lead to working malware cases. This post outlines the process I used, and currently use, to improve my malware forensics skills.

The process described in this post helped me develop the skills I have today. I even use a similar process to create test systems to help develop malware forensic skills in others (if you read this blog then you have already seen the benefits of doing this). With that said, what I am describing is a very time-consuming process. However, if you decide to replicate the path I took, it will be worth it, and along the way you'll improve your malware forensic skills. This post is specifically directed at those wanting to take this step using their own resources and time. Those wanting to lose themselves in the DFIR music.

Process, Process, Process

Malware forensics is the process of examining a system to find malicious code, determine how it got there, and determine what changes it caused on the system. The first place to start for improving one's skills is by exploring the process one should use. The purpose of starting with the process is twofold. First and foremost, it is to understand the techniques, the examination steps, and what to look for. The second reason is to explore the various tools used to carry out the process. There are only a few resources available specific to this craft, such as Malware Forensics Field Guide for Windows Systems and Windows Forensic Analysis Toolkit, Fourth Edition. In addition, adding one more book to this discussion has been on my radar, but in the meantime my jIIr methodology page outlines my process and links to various posts. My suggestion is to first review the methodology I put together, which is further explained in the posts Overall DF Investigation Process and End to End Digital Investigation. Afterwards, review one or two of the books I mentioned. As you work your way through this material, pay attention to the malware forensic process the author uses or references.

When it is all said and done and you have completed reviewing what you set out to, document the malware forensic process you want to use. If this sounds familiar then you either started reading jIIr from the beginning or you are one of my early followers. This is exactly what I described in my second and third posts, Where to start? and Initial Examination Steps & First Challenge, which I wrote almost four years ago. However, a lot of time has passed since I wrote those posts and I have improved my process, as outlined below:

     - Examine the master boot record
     - Obtain information about the operating system and its configuration
     - Examine the volatile data
     - Examine the files on the system that were identified in volatile data
     - Hash the files on the system
     - Examine the programs run on the system
     - Examine the auto-start locations
     - Examine the host-based logs
     - Examine file system artifacts
     - Malware searches
     - Perform a timeline analysis
     - Examine web browsing history
     - Examine specific artifacts
     - Perform a keyword search
     - Examine suspected malicious files
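Several of the steps above lend themselves to simple scripting. As an illustrative sketch (not the author's tooling), the "hash the files on the system" step could look like this in Python, where the mount point and the known-bad hash set are assumptions you would supply yourself:

```python
import hashlib
from pathlib import Path

def hash_files(root, known_bad):
    """Walk a mounted image and flag files whose SHA-256 is in a known-bad set.

    root is the (hypothetical) mount point of the forensic image;
    known_bad is a set of lowercase hex SHA-256 digests.
    """
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in known_bad:
            hits.append((str(path), digest))
    return hits
```

In practice the digests would be compared against a hash set such as NSRL known-good hashes (to filter files out of scope) or a known-bad list (to flag suspects), rather than the small in-memory set shown here.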

Tools, Tools, Tools

After the process you want to use is documented, the next step is to identify the tools you will use in each examination step. There are numerous tools you can use: the tools mentioned by the authors in the reference material, tools talked about by the bloggers in my blog roll, or tools you already have experience with. To be honest, what tools someone should use depends. It really depends on what you prefer and are comfortable with. The tools I started out with are not the same ones I use today; the important thing is each tool helped me learn and grow. Pick any tools you want as a starting point, and over time you will start to see the pros and cons of various tools.

Testing Environment

With your process and tools selected, now it is finally time to stop the researching/documenting and actually use the process you documented and the tools you selected. To do this you first have to set up a testing environment. There is an inherent risk to using virtualization for the testing environment: the malware may be virtualization aware and behave differently than on a real computer. However, despite this risk I highly recommend using virtualization as your testing environment. It's a lot faster to create multiple test systems (by copying virtual machines), and the snapshot feature makes it easier to revert mistakes. There are various virtualization options available with great documentation, such as VirtualBox and VMware. Pick a virtualization platform and install it using the provided instructions.

Creating your VMs

Another decision you'll need to make is what operating systems to perform your testing on. This not only includes the operating system versions (i.e. Windows 7 vs Windows 8) but also what processor architecture to use (32 bit vs 64 bit). I ended up selecting VMware for the virtualization software and Windows 7 32 bit as the testing platform. You will need to create your first VM by installing the operating system of your choice. After the installation, configure the system so it is easier to compromise.

First, disable security features. This includes the built-in firewall and User Account Control. Next, make sure the account you are using has administrative privileges. Then you will want to make your test system a very juicy target. To do this you'll need to install vulnerable client-side applications, including Adobe Flash, Adobe Reader, Java, Silverlight, Microsoft Office, and Internet Explorer, and leave the operating system unpatched. One place to grab these applications is Old Apps, and to determine what versions to install, pick the ones targeted by exploit kits. At a minimum, make sure you don't patch the OS and do install Java, Silverlight, Adobe Reader, and Adobe Flash. This will make your VM a very juicy target.

After the VM is created and configured then you'll want to make multiple copies of it. Using copies makes things easier during analysis without having to deal with snapshots.
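As a rough sketch of that copying step (the paths here are hypothetical; point them at wherever your virtualization software stores its VM folders), duplicating a powered-off VM's folder is often enough:

```python
import shutil
from pathlib import Path

def clone_vm(base_dir, clones_dir, count):
    """Copy a powered-off base VM folder into numbered working copies.

    base_dir and clones_dir are placeholder paths; a VM folder typically
    holds the config and virtual disk files (e.g. vmx/vmdk for VMware).
    """
    base = Path(base_dir)
    copies = []
    for i in range(1, count + 1):
        dest = Path(clones_dir) / f"{base.name}-copy{i}"
        if not dest.exists():
            shutil.copytree(base, dest)  # duplicate disk and config files
        copies.append(dest)
    return copies
```

One caveat: on first boot of a copied VM, VMware typically asks whether the machine was moved or copied; answering "I copied it" lets it generate a new MAC address so the copies don't conflict on the network.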

Manually Infecting Systems

The first approach to improving your skills is a manual method to help show the basics. The purpose is to familiarize yourself with the artifacts associated with malware executing in the operating system you picked. These artifacts are key to successfully performing malware forensics on a compromised system. The manual method involves infecting your test VM and then analyzing it to identify the artifacts. The manual method consists of two parts: using known and unknown samples.

However, before proceeding there is a very important configuration change. The virtual machine's network configuration needs to be isolated to prevent the malware from calling home or attacking other systems.

Using Known (to you) Samples

Starting out it is better to practice with a sample that is known. By known I mean documented so that you can reference the documentation in order to help determine what the malware did. Again, we are trying to improve our ability to investigate a system potentially impacted with malware and not trying to reverse the malware. The documentation is just to help you account for what the malware did to make it easier to spot the other artifacts associated with the malware running in the operating system.

The way to find known samples really depends. You could find them using information on antivirus websites since they list reports using their malware naming convention. For example, Symantec's Threat Listing, Symantec's Response blog, Microsoft's Threat Reports, or Microsoft's Malware Encyclopedia to name a few. These are only a few but there are a lot more out there; just look at antivirus websites. The key is to find malware with a specific name that you can search on such as Microsoft's Backdoor:Win32/Bergat.B. Once you find one you like then review the technical information to see the changes the malware makes.

I suggested finding known malware by name first because there are more options to do this. A better route, if you can find it, is to use a hash of a known malware sample. Some websites share the hash of the sample they are discussing, but this doesn't occur frequently; a few examples are the Contagio Malware Dump and the MxLab blog. Another option is to look at public sandboxes for samples that people submitted, such as Joe Sandbox or one listed on Lenny Zeltser's automated malware analysis services list.

After you pick a malware name or hash to use, the next step is to actually find the malware. Lenny Zeltser has another great list outlining different malware sample sources for researchers. Any one of these could be used; it just needs the ability to search by detection name or hash. I had great success using VirusShare, Open Malware, and the Contagio Malware Dump, among others.
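Once a candidate file is downloaded, it's worth confirming it really is the sample from the write-up before infecting anything. A minimal sketch, where the file path and expected digest are placeholders you would take from the report:

```python
import hashlib

def verify_sample(path, expected, algorithm="sha256"):
    """Check that a downloaded sample matches the hash published in a report.

    algorithm is any name hashlib supports ("md5", "sha1", "sha256", ...),
    matching whichever hash the source published.
    """
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in chunks so large samples don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest().lower() == expected.lower()
```

A mismatch usually means the source handed you a different variant than the one documented, which defeats the purpose of practicing with a known sample.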

Remember, the purpose of going through all of this is to improve your malware forensic skills and not your malware analysis skills. We are trying to find malware and determine how the infection happened, not reversing malware to determine its functionality. Now that you have your sample, just infect your virtual machine (VM) with it and then power it down. If the VM has any snapshots then delete them to make it easier.

Now that you have an infected image (i.e. the vmdk file) you can analyze it using the process you outlined and the tools you selected. At this point you are making sure the process and tools work for you. You are also looking to explore the artifacts created during the infection. You know the behavior of the known malware, so don't focus on that. Every malware sample is different, and so are its artifacts. Focus on the artifacts created by a program executing in the operating system you selected: artifacts such as program execution, logs, and the file system.

Using Unknown (to you) Samples

Using a known sample is helpful to get your feet wet, but it gets old pretty quick. After you have used a few different known samples it is not as challenging to find the artifacts. This is where you take the next step by using an unknown (to you) sample. Just download a random sample from one of the sources listed at malware sample sources for researchers. Infect your virtual machine (VM) with it and then power it down. If the VM has any snapshots then delete them to make it easier.

Now you can start your examination using the same process and tools you used with a known malware sample. This method makes it a little more challenging because you don't know what the malware did to the operating system.

Automatically Infecting Systems

The manual method is an excellent way to explore the malware forensic process. It allows you to get familiar with an examination process, tools, and the artifacts associated with an infection. One important aspect of performing malware forensics is identifying the initial infection vector that was used to compromise the system in the first place. Manual-method infections always trace back to you executing malware samples, so you need to use a different method: automatically infecting systems to simulate how real infections appear.

Before proceeding there is a very important configuration change. The virtual machine's network configuration needs to be connected to the Internet. This can be done through the NAT or bridged configuration, but you will want to be in a controlled environment (aka not your company's production network). There are some risks with doing this, so you will need to take that into consideration. Personally, I accept this risk since improving my skills to help protect organizations is worth the trade-off.

Using Known Websites Serving Malware

In the automatically infecting systems approach, the first method is to use a known website serving malware. There are different ways to identify these websites. I typically start by referring to the Malc0de database and the Malware Domain List, looking for URLs that point to a malicious binary. Other sources you can use are the ones listed by Lenny Zeltser on his Blocklists of Suspected Malicious IPs and URLs page. Again, you are trying to find a URL to a site hosting a malicious binary. Another source you shouldn't overlook is your email SPAM/Junk folder; I have found some nice emails with either malicious attachments or malicious links. Lastly, if you pay attention to the current trends being used to spread malware, then you can find malicious sites leveraging what you read and Google. This is a bit harder to pull off, but it's worth it since you see a technique currently being used in attacks.

Inside your VM, open a web browser, enter the URL you identified, and if necessary click any required buttons to execute the binary. Wait a minute or two for the malware to run and then power the VM down. If the VM has any snapshots then delete them to make it easier.

Now you can start your examination using the same process and tools you used with the manual approach. The purpose is to find the malware, the artifacts associated with the infection, and the initial infection vector. Infecting a VM in this manner simulates a social engineering attack where a user is tricked into infecting themselves. If a SPAM email was used, then it simulates an email-based attack.

To see how beneficial this method is for creating test images to analyze, you can check out my posts Examining IRS Notification Letter SPAM and Coming To A System Near You. The first post simulates a phishing email while the latter simulates social engineering through Google image search.

Using Potentially Malicious Websites

The second method in the automatically infecting systems approach is to use potentially malicious websites. This method tries to infect the VM through software vulnerabilities present in either the operating system or the installed client-side applications. This is the most time-consuming of all the methods I described in this post. It's pretty hard to infect a VM on purpose, so you will end up going through numerous URLs before hitting one that works. This is where you may need to use the VM snapshot feature.

To find potentially malicious URLs you can look at the Malware Domain List. Look for any URLs from the current or previous day that are described as exploit or exploit kit as shown below. You can ignore the older URLs since they are most likely no longer active.
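Triaging a long feed by hand gets tedious, so the date-and-description filter just described can be scripted. The record layout below is a simplifying assumption (the real Malware Domain List export has more fields), so adapt the parsing to whatever you actually download:

```python
from datetime import datetime, timedelta

def recent_exploit_urls(entries, now=None, max_age_days=1):
    """Filter (date_string, url, description) records down to fresh exploit URLs.

    Keeps entries from the current or previous day whose description
    mentions "exploit" (covering "exploit" and "exploit kit" listings).
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    hits = []
    for date_str, url, description in entries:
        when = datetime.strptime(date_str, "%Y/%m/%d")
        if when >= cutoff and "exploit" in description.lower():
            hits.append(url)
    return hits
```

Older entries are dropped up front, matching the advice above that stale URLs are most likely no longer active.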

Another option I recently discovered but haven't tried yet is a site that doesn't obfuscate the websites it lists, so you may be able to use it to find active websites serving up exploits. The last option I'm sharing is the one I use the most: the URLs others are submitting to URLQuery. Just keep in mind, there are certain things that can't be unseen, and there are some really screwed up people submitting stuff to URLQuery. When reviewing the submitted URLs you want to pay attention to those that have detections as shown below:

After you see a URL with detections, you'll need to examine it more closely by reviewing the URLQuery report. To save yourself time, focus on any URLs whose reports mention malicious iframes, exploits, exploit kits, or the names of exploit kits. These are the better candidates to infect your VM. The pictures below show what I am referring to.

Before proceeding, make sure your VM is powered on and you have created a snapshot. The snapshot comes in handy when you want to start with a clean slate after visiting numerous URLs with no infection. An easy way to determine if an infection occurred is to monitor the program execution artifacts. One way I do this is by opening the C:\Windows\Prefetch folder with the items sorted by last modification time. If an infection occurs then prefetch files are modified, which lets me know. Now you can open a web browser inside your VM, enter the URL you identified, and monitor the program execution artifacts (i.e. prefetch files). If nothing happens then move on to the next URL. Continue going through URLs until one successfully exploits your VM. Upon infection, wait a minute or two for the malware to run and then power the VM down. Make sure you delete any snapshots to make it easier.
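The prefetch-watching trick is easy to script as well. This is an illustrative guest-side sketch (it assumes prefetch is enabled on the VM, and the folder path can be overridden for testing): snapshot the folder's modification times before browsing, then diff to see what ran.

```python
import os

PREFETCH = r"C:\Windows\Prefetch"  # default location on a Windows test VM

def snapshot(folder=PREFETCH):
    """Record the last-modification time of every prefetch (.pf) file."""
    return {name: os.path.getmtime(os.path.join(folder, name))
            for name in os.listdir(folder) if name.lower().endswith(".pf")}

def changed_since(before, folder=PREFETCH):
    """Return prefetch files that are new or modified since the snapshot."""
    after = snapshot(folder)
    return sorted(name for name, mtime in after.items()
                  if name not in before or mtime > before[name])
```

Any name that appears in the diff after visiting a URL is a program that executed, which is exactly the signal that an exploit landed.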

Now you can start your examination using the same process and tools you have been using. The purpose is to find the malware, the artifacts associated with the infection, and the initial infection vector. The initial infection vector will be a bit of a challenge since your VM has various vulnerable programs. Infecting a VM in this manner simulates a drive-by, which is a common attack vector used to load malware onto a system. To see how beneficial this method is for creating test images, you can check out my post Mr Silverlight Drive-by Meet Volatility Timelines (FYI, I suspended the VM to capture the vmem file as well, instead of just powering it down to get the disk image).


Benjamin Franklin said "by failing to prepare, you are preparing to fail." To be successful when confronted with a system potentially impacted with malware, we should be preparing for this moment now: taking the time to improve our malware forensic skills, including our process, tools, and knowledge of artifacts, and making the right preparations so when game day approaches we will come out on top. The process I use to improve my malware forensic skills, and the one I described in this post, is not for everyone. It takes time and a lot of work; I've spent countless days working through it. However, by working your way through this process you will attain something that can't be bought with money. There is no book, training, college course, or workshop that can replicate or replace the skills, knowledge, and experience you gain through careful preparation by training yourself.

Review of Windows Forensic Analysis 4th Edition

Sunday, June 15, 2014 Posted by Corey Harrell 5 comments
About a month ago I finished reading Windows Forensic Analysis 4th Edition by Harlan Carvey. Due to personal obligations I was unable to post my WFA 4/e review until now. All in all, the 4th edition is a good update to the Windows Forensic Analysis series.

It's an Update and Not a Companion Book

I think it is necessary to first address the expectations for WFA 4/e. In my Review of Windows Forensic Analysis 3rd Edition I mentioned "at first I was worried about reading the same information I read in Windows Forensic Analysis 2nd Edition or Windows Registry Forensics but my worries were unfounded. The author has said numerous times WFA 3/e is not a rewrite to his other books and is a companion book." In the WFA series, Syngress has kept the same title and just increased the edition number. In a way, this can have an impact on people's expectations. WFA third edition was a complete rewrite of the second edition. This meant the books were complementary and the third edition didn't contain any of the previous material from the second. I can see how this can create an expectation that each new edition will follow the same path. However, this is not how newer editions are typically done, since they usually contain updated material and are not complete rewrites. WFA 4/e is not a complete rewrite, but it does contain some great updated content. This review is focused on the updated content since I already discussed some of the material in my WFA 3/e review.

Don’t Overlook the Materials Accompanying the Book

There are very few DFIR authors who not only produce content outlining processes and artifacts but also create and release the necessary tools to carry out the process they describe. Most of the DFIR authors I've read (including training content authors) usually point people to tools created by others. They don't provide tools of their own or source code to help you better understand how artifacts are parsed. When reviewing a DFIR book it's necessary to consider the book as a whole, including the materials provided with it. This is one area where I think Harlan excels, and it's something I have always liked about his work. The material for his books contains a wealth of resources, from cheat sheets to open source tools to explanations about how to do something.

Along with WFA 4/e, Harlan provided new and updated material to accompany the book. One of the more notable mentions is the new plug-ins for RegRipper (link to the most recent version at the time of this post). Seriously, there are so many updates that you'll really need to read the updates.txt file he provides. Some plug-ins were updated to support Wow6432Node, others had alerts added, and there are a bunch of new plug-ins. Besides RegRipper, there are tools (and source code) to parse the RecentFileCache.bcf, index.dat, and $UsnJrnl, to name a few.

It's not always about the tools, either. In the Chapter 5 folder there is a file called usbdev.pdf. This document outlines Windows 7 USB device analysis, including which RegRipper plug-ins pull what, how various registry values tie together, and other information needed to perform this analysis. The Chapter 9 folder contains even more documents related to report writing. Hands down, the material provided with the book is outstanding.

Tying Things Together

One of the updates to this edition is two new chapters that tie things together. First is Chapter 8, Correlating Artifacts, while the second is Chapter 9, Reporting. To be successful in DFIR one needs to be able to tie together information from different sources to answer the questions presented to them. This is why I really like these new chapters. Throughout the book Harlan discusses the significance of various Windows artifacts and clearly explains how those artifacts can help a case. However, the artifacts are discussed individually to make them easier to understand. The Correlating Artifacts chapter is where things are tied together: various artifacts are brought together to illustrate how the information they contain can help address certain questions. The sample questions addressed are ones commonly encountered on various types of cases, such as correlating Windows shortcuts to USB devices, detecting system time changes, and determining data exfiltration. Again, the ability to take the information contained in different artifacts and make sense of it is really what we do in DFIR. The information was laid out in a clear manner and followed up with how to communicate your findings in reporting.

Overall Thoughts

As I said before, all in all the 4th edition is a good update to the Windows Forensic Analysis series. There are updates throughout the book, including some Windows 8 artifacts, and on the back end it's completely new content. The book materials are loaded with new goodies. Personally, I tend to shy away from purchasing updated editions that contain the same material as the previous edition with updates. However, I took a chance with WFA 4/e (based on who the author is) and I wasn't disappointed with my purchase.

Malware Root Cause Analysis Don't Be a Bone Head Slide Deck

Tuesday, June 3, 2014 Posted by Corey Harrell 0 comments
Today I gave a presentation titled Malware Root Cause Analysis Don't Be a Bone Head at the New York State Cyber Security Conference. This presentation was a follow-up to the presentation I gave last year titled Finding Malware Like Iron Man. Last year I laid out a triage process to find malware and this year I went into more depth discussing how the malware got there in the first place. This post contains the following for my talk: CFP, slide deck, and video I showed.


Computer users are confronted with a recurring issue every day. It happens regardless of whether the user is an employee doing work for their company or a person shopping online trying to catch the summer sales. The user is using their computer and the next thing you know it is infected with malware. Even Hollywood is not immune to this issue, as illustrated in the TV show Bones. The most common action to address a malware infection is to reimage, rebuild, and redeploy the system back into production. Analysis of the system to understand where the malware came from is not a priority or goal.

Root cause analysis needs to be performed on systems impacted by malware to improve decision making. The most crucial question to answer is how this happened, since the answer determines whether we were targeted and, more importantly, what can be done to prevent it from recurring. Last year, in my presentation Finding Malware Like Iron Man, I explored the first step in root cause analysis, which is locating the malware. The next step in root cause analysis is to identify the malware's source.

In this technical presentation Corey will discuss the root cause analysis process to determine how malware infected a computer running the Windows operating system. The topics will include: why perform root cause analysis, how not to perform root cause analysis, the compromise root cause analysis model, attack vector artifacts, and scenarios.

Slide Deck

Malware Root Cause Analysis Don't Be a Bone Head slides viewable online

Malware Root Cause Analysis Don't Be a Bone Head slides PDF file


I chopped up the Bones TV episode The Crack in the Code (Season 7 episode 6) I purchased through iTunes. However, others have posted the segment I used in my presentation. For your viewing pleasure here is "Malware on Bone".

Mr Silverlight Drive-by Meet Volatility Timelines

Sunday, May 18, 2014 Posted by Corey Harrell 3 comments
I recently had the opportunity to attend the Volatility Windows Malware and Memory Forensics Training. Prior to the training, I had used memory forensics (and thus Volatility) in different capacities, but it wasn't a technique I leveraged when responding to security events and incidents. This was an area I wanted to improve upon going into the training. As the training went on and more material and labs were covered, I kept thinking about how I intended to incorporate memory forensics into my response process: using the technique when triaging live systems remotely over the network. The labs in the training provided numerous scenarios about using memory forensics on compromised systems, but I wanted to further explore it with a simulated security event. This post explores Volatility usage against an infected system's memory image to answer two questions: is the system infected, and if so, how did it become infected in the first place?

Short Thought About the Training

Before diving into the memory image I first wanted to provide a short thought about the training. The training is not just about a single memory forensics tool named Volatility. It goes in-depth into numerous topics including Windows internals, malware reversing, Windows data structures, how those structures are parsed, and bypassing encryption. I was looking for an in-depth course and I found it with Volatility. It walks you through the Windows internals, the structures, how they can be parsed, and then actually doing it in labs. This layout results in knowing not just how to use tools for memory forensics but understanding what they are doing and what they are supposed to be doing. To top it off, the content is put into context as it relates to Digital Forensics and Incident Response (DFIR). All in all, it was a great training and I highly recommend it to anyone looking to gain more memory forensics knowledge and skills. For a more detailed review refer to TekDefense's Review - Malware and Memory Forensics with Volatility (just keep in mind the content has been updated since his review).

Is the System Infected?

To set up the simulation I configured an extremely vulnerable virtual machine and then browsed to potentially malicious websites. Once I saw the first indication the system might be compromised, I suspended the VM to grab the vmem file (memory). This is where the simulation starts: determining if the system is infected.

I first scanned the memory image for any previous networking activity that may be tied to malware by executing the command below:

python vol.py -f Win7.vmem --profile=Win7SP0x86 netscan

The picture below shows partial output from netscan. A few different items jump out. The first is the process 0320.dll (PID 3812) listening on TCP ports 61779 and 8681. These ports are not typically open on Windows systems, which made this a good lead. The other item to note is that Internet Explorer was active at one point in time.

The netscan plug-in provided some information, but additional information was needed for 0320.dll (PID 3812). I first wanted to know the command used to launch the program, which I obtained with the pstree plug-in's -v switch:

python vol.py -f Win7.vmem --profile=Win7SP0x86 pstree -v

The program's location in the temp folder made it even more suspicious as being malicious.

To get a better understanding of how PID 3812 started, I ran the pstree plug-in again without the -v switch to make the output easier to read.

python vol.py -f Win7.vmem --profile=Win7SP0x86 pstree

The Internet Explorer process (PID 572) spawned 0320.dll (PID 3812).

The netscan plug-in showed that this Internet Explorer process was reaching out to an IP address. A search with the Malware Analysis Search provided a few different hits, including one to VirusTotal. The passive DNS showed this IP associated with domains flagged as malicious, as well as a malicious file being downloaded from there.

To get a better idea about any processes that may have started and exited around the time 0320.dll (PID 3812) executed, I ran the psscan plug-in.

python vol.py -f Win7.vmem --profile=Win7SP0x86 psscan

I highlighted the other processes that started around the same time, but they didn't provide any more substantial leads.

0x000000007dd8f510 iexplore.exe       3600   2504 0x7d7a75c0 2014-05-11 01:38:00 UTC+0000
0x000000007e4d1030 0320.dll           3812    572 0x7d7a7540 2014-05-11 01:46:02 UTC+0000
0x000000007dad3d40 msiexec.exe        4036    492 0x7d7a7700 2014-05-11 01:46:38 UTC+0000
0x000000007e243030 dllhost.exe        2292    612 0x7d7a71e0 2014-05-11 01:46:55 UTC+0000   2014-05-11 01:47:00 UTC+0000
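With psscan output saved to a text file, the short-lived processes can be pulled out automatically, since their records carry a second (exit) timestamp. Below is a minimal sketch of that filter; the printf line is a stand-in for redirecting real psscan output to a file and just reuses the dllhost.exe record shown above.

```shell
# The dllhost.exe record from the psscan output above; a record with an
# exit time has 11 whitespace-separated fields instead of 8
printf '0x000000007e243030 dllhost.exe 2292 612 0x7d7a71e0 2014-05-11 01:46:55 UTC+0000 2014-05-11 01:47:00 UTC+0000\n' > psscan.txt

# Print the name and exit time of any process that exited
awk 'NF == 11 {print $2, "exited at", $9, $10}' psscan.txt
# → dllhost.exe exited at 2014-05-11 01:47:00
```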

To see what access the PID 3812 process had on the system the getsids plug-in was used.

python vol.py -f Win7.vmem --profile=Win7SP0x86 getsids -p 3812

The lines below show 0320.dll was running in the context of the lab user account and had administrative rights to the system.

0320.dll (3812): S-1-5-21-2793522790-2301028668-542554750-1000 (lab)
0320.dll (3812): S-1-5-32-544 (Administrators)

I continued my focus on 0320.dll (PID 3812) to get more information by seeing what other items it was interacting with. To list its handles I used the following:

python vol.py -f Win7.vmem --profile=Win7SP0x86 handles -p 3812

The only item of note in the handles output was the process's mutant; maybe it could help in researching the malware.

0x8583b278   3812       0xd0   0x1f0001 Mutant           Y7X-TYAA-X7A

The next item I explored was the process's loaded DLLs.

python vol.py -f Win7.vmem --profile=Win7SP0x86 dlllist -p 3812

Listed among the DLLs was one item located in the lab user profile: C:\Users\lab\AppData\Local\sattech.dll.

The last item I wanted to explore was whether the process leveraged code injection, using the malfind plug-in:

python vol.py -f Win7.vmem --profile=Win7SP0x86 malfind

The output showed the following processes had injected code into them: explorer.exe (PID 300), wmpnetwk.exe (PID 2116), iexplore.exe (PID 2504), iexplore.exe (PID 2504), iexplore.exe (PID 572), iexplore.exe (PID 3600), and msiexec.exe (PID 4036).

Exploring 0320.dll and sattech.dll

Volatility enabled me to quickly identify two suspicious files indicating the system was infected. The next step was to explore these files to actually confirm the infection. Both files were dumped from memory with the following commands (the first dumps 0320.dll while the second dumps sattech.dll):

python vol.py -f Win7.vmem --profile=Win7SP0x86 procdump -p 3812 -D .

python vol.py -f Win7.vmem --profile=Win7SP0x86 dlldump -p 3812 -b 0x10000000 -D .

For brevity I'm not posting everything I did to examine these dumped files. The one item I wanted to note was that the strings in 0320.dll referenced the registry key SOFTWARE\Microsoft\Windows\CurrentVersion\Run, which may be its persistence mechanism. Knowing this was a simulated environment, I ran both files through VirusTotal on 5/11/14 (the detections were low: 1/52 for 0320.dll and 10/52 for sattech.dll). I rescanned them while writing this post and the detections increased, as can be seen in the 0320.dll VT report and sattech.dll VT report.
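A Run key string like that can be spotted quickly before any full static analysis. Here's a minimal sketch of the check; the module.bin file is a synthetic stand-in for the dumped module (the real one comes from procdump above), and grep -a is used in place of the strings utility so the sketch only depends on grep.

```shell
# Synthetic stand-in for the dumped module; the \0 escapes give it
# binary-looking NUL bytes around the string of interest
printf 'MZ\0SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\0rest' > module.bin

# -a processes the binary as text; -o prints only the matching string;
# the dots in the pattern match the backslash path separators
grep -a -o 'SOFTWARE.Microsoft.Windows.CurrentVersion.Run' module.bin
```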

How Did the System Become Infected?

Memory forensics so far was able to confirm the infection, locate malware, provide clues about the malware's purpose, and extract the malware for further analysis. The next question I needed memory forensics to help me answer was how the infection occurred in the first place. An effective technique to address this question is timeline analysis, and the Volatility training really opened my eyes to the additional capabilities memory provides to this technique. To fully explore Volatility timelines I'm performing timeline analysis in three separate stages: first using only the $MFT, then adding the timeliner plug-in (numerous time-related objects), and finally the NTFS change journal ($USNJrnl).

Exploring the memory image to confirm the infection provided a few leads I leveraged in timeline analysis. There were the two files of interest: C:\Users\lab\AppData\Local\Temp\0320.dll and C:\Users\lab\AppData\Local\sattech.dll. Plus, the timeframe of interest was 2014-05-11 01:46:02 UTC+0000 since this was when the 0320.dll process started. I searched on these indicators and looked at the system activity that preceded and followed them. However, for clarity I'm presenting my timeline analysis in sequential order.

$MFT Timeline

The commands below generate a timeline in bodyfile format from the $MFT records found in memory. The timeline is then converted with mactime, grepped for the date of interest, and formatted with awk to remove unneeded columns.

python vol.py -f Win7.vmem --profile=Win7SP0x86 mftparser --output=body --output-file=mft-body

mactime -b mft-body -d -z UTC > mft-timeline.csv

cat mft-timeline.csv | grep -i "Sun May 11 2014" | awk -F, '{print $1, $3, $8}' > mft-timeline_05-11-14.csv
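Before filtering the full timeline, the grep/awk step can be sanity-checked against a single record. In mactime's comma-delimited output the date, activity type, and file name sit in columns 1, 3, and 8, which is exactly what the awk filter keeps. The record below is hypothetical, standing in for a line of real mactime output:

```shell
# One hypothetical mactime -d record: 8 comma-separated columns
# (Date,Size,Type,Mode,UID,GID,Meta,File Name)
rec='Sun May 11 2014 01:46:02,456,macb,r/rrwxrwxrwx,0,0,38917,"[MFT FILE_NAME] Users\lab\AppData\Local\Temp\0320.dll"'

# Keep only the date (1), activity (3), and file name (8) columns
printf '%s\n' "$rec" | grep -i "Sun May 11 2014" | awk -F, '{print $1, $3, $8}'
```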

The image below shows the MFT timeline starting at 05/11/14 01:45:59 UTC.

The first few lines show Internet activity with files being created in Internet Explorer's temporary files cache. The last three lines show activity for the C:\Windows\SoftwareDistribution folder (which is associated with Windows Update) and file creation inside the Silverlight application folder (C:\Users\lab\AppData\LocalLow\Microsoft\Silverlight). The timeline continues as shown in the image below.

The activity involving the SoftwareDistribution and Silverlight folders continues before the 0320.dll file is created on the system at 05/11/14 01:46:02. This activity alone provides a clue about how the system became infected: there was Internet activity, followed by Silverlight activity, then malware appearing on the system. This points to a drive-by that targeted a vulnerability in the Silverlight application. This again shows the significance of exploring attack vector artifacts, an area I've been looking into for some time (it applies to all types of attacks). I had opted to stop blogging about it, but recently posted CVE 2013-0074 & 3896 Silverlight Exploit Artifacts documenting Silverlight exploit artifacts to provide context for the activity in this timeline. The image below continues with the $MFT timeline.

It shows that the sattech.dll file was created on the system one second after 0320.dll. The last portion of the timeline I'm discussing is shown below.

The Silverlight application activity surrounds the malware, further confirming the attack vector used against the system. In addition, at 01:46:10 artifacts associated with program execution started appearing on the system. There are prefetch files showing both the Silverlight application and the malware executed.

Volatility Timeliner and $MFT Timeline

The $MFT timeline created from the memory image enabled me to answer the "how" question. However, adding additional timeline data makes the events surrounding the infection clearer. The commands below generate the timeliner timeline in bodyfile format and combine it with the $MFT bodyfile. The timeline is then converted with mactime, grepped for the date of interest, and formatted with awk to remove unneeded columns. Side note: one item I'm hoping Volatility incorporates into the timeliner plug-in is the ability to specify which timeline artifacts to parse instead of parsing everything (this change should make timeline creation faster and more focused).

python vol.py -f Win7.vmem --profile=Win7SP0x86 timeliner --output=body --output-file=timeline-body

cat mft-body timeline-body > timeliner-mft-body

mactime -b timeliner-mft-body -d -z UTC > timeliner-mft-timeline.csv

cat timeliner-mft-timeline.csv | grep -i "Sun May 11 2014" | awk -F, '{print $1, $3, $8}' > mft-timeliner_05-11-14.csv
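The merge step is just concatenation, which works because every plug-in emits the same pipe-delimited bodyfile layout (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, with epoch timestamps); mactime then sorts everything by time. A quick sanity check of the merge with two hypothetical one-record bodyfiles (the filenames and records below are made up for the sketch):

```shell
# Two hypothetical single-record bodyfiles in the 11-field format
printf '0|[MFT FILE_NAME] Users\\lab\\AppData\\Local\\Temp\\0320.dll|38917|r/rrwxrwxrwx|0|0|456|1399772762|1399772762|1399772762|1399772762\n' > sample-mft-body
printf '0|[PROCESS] 0320.dll PID: 3812|0|---|0|0|0|1399772762|1399772762|1399772762|1399772762\n' > sample-timeline-body

# Merging is plain concatenation; no record should be lost
cat sample-mft-body sample-timeline-body > sample-merged-body
wc -l < sample-merged-body
```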

The image below shows the timeliner and $MFT timeline starting at 05/11/14 01:45:59 UTC.

Right off the bat the other timeline data pulled from the memory image provided more context about what happened. Various Internet Explorer processes were connecting to the freshdekor[dot]com domain. Remember, the Volatility netscan plug-in output showed Internet Explorer connecting to two different IP addresses. A Google search on one of them led to a VirusTotal report containing passive DNS information, and one of the passive domains listed for the IP address was freshdekor[dot]com. It's nice how everything ties together more clearly. The image below continues with the timeline.

The activity is for the files in Internet Explorer's temporary files cache but now I know these came from the freshdekor[dot]com domain. The image below is the next portion of the timeline.

The first line shows something very cool: the Internet Explorer process PID 572 is loading coreclr.dll at 01:46:00. Running grep across the dlllist plug-in output showed the DLL's full path, which was c:\Program Files\Microsoft Silverlight\5.0.61118.0\coreclr.dll. A Google search for the DLL indicated it is a Silverlight plug-in. This line shows iexplore.exe loading the Silverlight plug-in within a second of visiting the freshdekor[dot]com domain. The third line from the top is cut off in the screenshot, but it shows Internet Explorer PID 572 accessing the registry key SOFTWARE\APPDATALOW\SOFTWARE\MICROSOFT\SILVERLIGHT\PERMISSIONS for the lab user account. The rest of this timeline portion is more IE history artifacts for the domain in question. Continuing on with the timeline in the image below.
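The dlllist grep mentioned above is easy to reproduce against saved plug-in output. The printf line below mimics a single line of dlllist output (base address, size, load count, path) rather than coming from a real image:

```shell
# A sample line mimicking dlllist output for PID 572 (synthetic data)
printf '0x6b240000 0x98a000 0xffff c:\\Program Files\\Microsoft Silverlight\\5.0.61118.0\\coreclr.dll\n' > dlllist-572.txt

# Case-insensitive search for the Silverlight plug-in module
grep -i 'coreclr\.dll' dlllist-572.txt
```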

This activity is from the $MFT and was shown previously. However, now it's clear Internet Explorer loaded the Silverlight plug-in for a Silverlight application on a website, and the Silverlight activity in this timeline portion was the result of that application, which was the exploit. The image below continues with the timeline.

Besides the threads, it is the same activity shown previously involving Silverlight and the SoftwareDistribution folder. Continuing on with the timeline below.

This brings us to 01:46:02, which is when the 0320.dll process started. The activity shows the DLLs being loaded (minus sattech.dll), the process being created, and the first thread starting for 0320.dll (PID 3812). This confirmed that 0320.dll executed as soon as it was dropped onto the system. The image below shows what happened next.

The sattech.dll file was created on the system and loaded by 0320.dll (PID 3812) one second after the process started. The last section I'm highlighting in the timeliner/$MFT timeline is below.

The activity is from $MFT records and consists of the program execution artifacts mentioned previously.

NTFS Change Journal, Volatility Timeliner and $MFT Timeline

The timeliner plug-in combined with the mftparser plug-in provided a wealth of information about how the system became infected. It even provided additional information about the attack that would never be found on the host. There is still yet another source of timeline data that can be added to the timeline. During the week I was taking the Volatility training, the instructors gave us the heads up that Tom Spencer released the USNParser plug-in to parse the NTFS change journal. The $USNJrnl is an excellent source to reference, as I illustrated in my post Re-Introducing $UsnJrnl. To further explore memory timelines, I downloaded the plug-in and ran the commands below. They first generate the $USNJrnl records in bodyfile format and then combine them with the timeliner/$MFT bodyfile. The timeline is then converted with mactime, grepped for the date of interest, and formatted with awk to remove unneeded columns.

python vol.py -f Win7.vmem --profile=Win7SP0x86 usnparser --output=body --output-file=usn-body

cat usn-body timeliner-mft-body > timeline-all-body

mactime -b timeline-all-body -d -z UTC > timeline-all-timeline.csv

cat timeline-all-timeline.csv | grep -i "Sun May 11 2014" | awk -F, '{print $1, $3, $8}' > timeline-all_05-11-14.csv

The image below shows the $USNJrnl, $MFT, and timeliner timeline starting at 05/11/14 01:45:59 UTC.

The $USNJrnl didn't provide much of use in this portion of the timeline since it only shows the edb.log inside the SoftwareDistribution\DataStore\Logs folder. The timeline continues in the image below.

The first $USNJrnl activity shows the mssl.lck file being referenced. This file was referenced in my CVE 2013-0074 & 3896 Silverlight Exploit Artifacts post. Even if I wasn't aware of these exploit artifacts, mssl.lck can still be tied to Silverlight by grepping for the file in the filescan plug-in output. This shows mssl.lck located in the C:\Users\lab\AppData\LocalLow\Microsoft\Silverlight folder. The image below is the next part of the timeline.
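Tying mssl.lck to Silverlight from saved filescan output is a one-liner. The sample record below mimics filescan's output format (offset, pointer count, handle count, access, path) and stands in for real plug-in output:

```shell
# A sample line mimicking filescan output (synthetic data)
printf '0x000000007e1d2038 8 0 RW-rwd \\Device\\HarddiskVolume1\\Users\\lab\\AppData\\LocalLow\\Microsoft\\Silverlight\\mssl.lck\n' > filescan.txt

# Locating the lock file reveals its Silverlight folder path
grep -i 'mssl\.lck' filescan.txt
```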

The additional information shown in this activity highlights an item that was present in the previous timelines. Prior to Internet Explorer PID 572 loading the Silverlight plug-in (coreclr.dll), a webpage named 8fdhe54wg1[1].htm, stored in the temporary Internet files cache, was visited. This highlights another file to examine closer on the host to determine if it was responsible for serving up the malicious Silverlight application. The image below is the next part of the timeline.

The activity involves the same files and domains already discussed. The timeline continues below.

Just prior to the activity for the BITFC11.tmp file in the Silverlight folder there is more activity involving the C:\Windows\SoftwareDistribution\DataStore\Logs folder. This time around it is for the tmp.edb file. The image below is the next portion of the timeline.

There are no additional items to note, so the timeline continues in the image below.

Again, there isn't any additional activity of interest, but I'm still posting these images for others to see the timeline in its entirety. The image below shows the next part of the timeline.

The $USNJrnl shows activity for the Silverlight application and the timeline continues below.

This portion was shown in the timeliner/$MFT timeline where the 0320.dll PID 3812 process started. The image below shows what happens next.

After the 0320.dll (PID 3812) process started, there was a lot of activity for a tmp file in the Windows\Temp folder, as shown below.

The $USNJrnl also reflects the sattech.dll file being created on the system. The last image of the timeline shows more information about the Silverlight application and program execution artifacts I mentioned previously.

Memory Forensics for the Win

Going into the Volatility Windows Malware and Memory Forensics Training, I wanted to leverage memory forensics more when responding to security events and incidents. The way I intend to use the technique is for analysis of live systems remotely over the network, as a method to investigate security alerts such as a system reaching out to a known malicious domain. Memory forensics could provide a wealth of information for this type of alert: tying the specific network activity to a process and then determining where the process came from in the first place, very similar to the simulation I did for this post. I may have started out wanting to leverage memory forensics more, but I ended up with more knowledge and an improved skillset to help me hold the last line of defense for my organization.

CVE 2013-0074 & 3896 Silverlight Exploit Artifacts

Tuesday, May 13, 2014 Posted by Corey Harrell 0 comments
Artifact Name

Exploit Artifacts for CVE 2013-0074/3896 (Silverlight) Vulnerabilities

Attack Vector Category



Two vulnerabilities present in Microsoft Silverlight 5 in combination enable an attacker to execute arbitrary code.

CVE 2013-3896 affects Microsoft Silverlight 5 before 5.1.20913.0. The vulnerability is due to not properly validating pointers during access to Silverlight elements, which allows remote attackers to obtain sensitive information via a crafted Silverlight application. The significance of this vulnerability is explained in TrendMicro's A Look At A Silverlight Exploit article:

"The exploit uses this vulnerability to leak a pointer address in memory, and then uses this leaked address to compute the base address of, bypassing ASLR. Later, this base address is used to compute the ROP gadgets in order to bypass DEP."  

CVE 2013-0074 affects Microsoft Silverlight 5 before 5.1.20125.0. The vulnerability is due to not properly validating pointers during HTML object rendering, which allows remote attackers to execute arbitrary code via a crafted Silverlight application. The TrendMicro article stated this vulnerability "is used to control the execution flow to jump to the ROP gadget."

Attack Description

The significance of these Silverlight vulnerabilities is their usage in mass attacks through exploit kits. Furthermore, Packetstorm Security posted the exploit code for these vulnerabilities, making them public and thus available to anyone, including exploit kit authors.

This description was obtained from the very detailed Malware don't need Coffee blog post CVE-2013-0074/3896 (Silverlight) integrates Exploit Kits. To truly understand this attack I highly recommend reading this blog post.

     1. User visits a malicious website.

     2. The website serves up a malicious Silverlight application to compromise the system.

Exploits Tested

Metasploit exploit/windows/browser/ms13_022_silverlight_script_object

Target System Information

Windows 7 SP0 x86 Virtual Machine with Silverlight v 5.0.60818.0 (no Silverlight applications were executed on the system prior to test)

Different Artifacts based on Administrator Rights

Not tested

Different Artifacts based on Tested Software Versions

Not tested

Potential Artifacts

The potential artifacts include the 2013-0074/3896 exploit and the changes the exploit causes in the operating system environment. The artifacts can be grouped under the following three areas:

     * Temporary File Creation
     * Indications of the Vulnerable Application Executing
     * Internet Activity

Note: the documenting of the potential artifacts attempted to identify the overall artifacts associated with the vulnerability being exploited as opposed to the specific artifacts unique to Metasploit. As a result, the artifact storage locations and filenames observed in testing are inside brackets in order to distinguish what may be unique to the testing environment.

     Temporary File Creation

- Webpage created in a temporary Internet files storage location on the system within the timeframe of interest. [C:\Users\lab\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5\I87XK24W\nXKoGc[1].htm] The webpage contains the code to load the Silverlight application exploit. The code includes: the data/type variables indicating "application/x-silverlight-2" and the InitParams indicating what to load. The image below shows the webpage code (please note: the InitParams in Metasploit contains the code to execute while on other systems it may point to an actual file)

     Indications of the Vulnerable Application Executing

- Folder activity involving the Silverlight application. [C:\Users\lab\AppData\LocalLow\Microsoft\Silverlight]

- File creation inside the Silverlight application folder. [C:\Users\lab\AppData\LocalLow\Microsoft\Silverlight\BIT65AC.tmp and C:\Users\lab\AppData\LocalLow\Microsoft\Silverlight\mssl.lck]

- Registry modification involving Silverlight in the NTUSER.DAT hive of the user profile the exploit executed under. [HKU\Software\AppDataLow\Software\Microsoft\Silverlight and HKU\Software\AppDataLow\Software\Microsoft\Silverlight\Permissions] (note: this artifact may be due to Silverlight executing for the first time)

- Entries for Silverlight programs that executed for the first time on the system inside the RecentFileCache.bcf file (Windows 7 artifact) [c:\program files\microsoft silverlight\5.0.61118.0\agcp.exe]

- References to Silverlight programs in the CONHOST.EXE's prefetch file handles [\DEVICE\HARDDISKVOLUME1\PROGRAM FILES\MICROSOFT SILVERLIGHT\5.0.61118.0\COREGEN.EXE]

     Internet Activity

- Web browser history of user accessing websites within the timeframe of interest. [lab user account accessed the computer running Metasploit]

- Files located in the Temporary Internet Files folder. [Users\lab\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5]
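Cached pages that loaded a Silverlight application can be flagged in bulk by searching the temporary Internet files for the Silverlight MIME type mentioned above. A minimal sketch of the triage step; the cache directory, page name, and page contents below are synthetic stand-ins for a real Content.IE5 folder:

```shell
# Synthetic cache directory standing in for Content.IE5
mkdir -p cache
printf '<object type="application/x-silverlight-2"></object>\n' > 'cache/nXKoGc[1].htm'

# Recursively list cached pages referencing the Silverlight MIME type
grep -rli 'application/x-silverlight-2' cache
```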

Timeline View of Potential Artifacts

The images below show the above artifacts in a timeline of the file system from the test Windows 7 SP0 system. The timeline only includes the file system metadata. The purpose of the timeline is to help illustrate what the artifacts look like on a compromised system.

A few tidbits about items listed in the timeline that are not discussed above. First, in numerous tests when Silverlight executes it initiates activity in the C:\Windows\SoftwareDistribution\DataStore folder. This folder is associated with Windows updates, and at times it referenced Silverlight activity. Secondly, in numerous tests when Silverlight executes there is activity in the C:\Windows\System32\wdi folder. The files didn't specifically reference Silverlight but the activity was consistent. In both cases, I opted not to include these artifacts until it can be determined that this activity is directly associated with Silverlight being exploited.

     * MFT Timeline

     * Change Journal Timeline