Linkz for Detection and Response

Monday, January 19, 2015 Posted by Corey Harrell 1 comments
The fastest way to reach a destination is to learn from those who have traveled parts of the journey you are on. Others may point you in a direction to avoid the obstacles they faced or show you the path through the woods towards your destination. In Information Security, we tend to gain knowledge from others by what they publish: websites, blogs, books, and even their 140 character tweets. The journey I have been on is exploring enterprise security detection and response; actually implementing enterprise detection and response. Implementing them not as standalone processes but as two complementary processes that feed into each other. This Linkz post shares the works I came across by others whose focus is on detection and response.

Linkz for Incident Response and SIEM


I shared these linkz before but they point to some great resources about detection and response. For linkz related to incident response - including the thought process behind IR - refer to the post Linkz for Incident Response. For linkz related to detection with a focus on SIEM refer to Linkz for SIEM.

Network Security Operations Management Workflow


When it comes to network security there is a wealth of information. How to create a SOC, how to design and deploy IDS sensors, and how to use security monitoring tools are a few of the topics covered. Despite the wealth of information, there is very little about the management workflow for network security monitoring. Securosis released the Network Security Operations Quant Report about five years ago and it outlines one of the best management workflows I have come across. The Manage Process (on page 13) addresses policy review, updating policies & rules, evaluating signatures, deployment, and audit/validation. It's a decent process since it touches on topics I haven't seen addressed elsewhere. At this point I couldn't care less about the metrics portion of the document but the process portion is worth a look.

Network Security Monitoring Books


One thing I tend to look for in detection and response resources is whether they outline a process or a thought process. Tools are great and all but it's pretty easy to learn a tool on your own compared to the process one should use when doing detection and response. The processes are what you can learn from others to reach your destination and you can fill in the gaps by teaching yourself the tools. The next two resources are not free but are well worth the investment: Richard Bejtlich's The Practice of Network Security Monitoring: Understanding Incident Detection and Response and Chris Sanders & Jason Smith's Applied Network Security Monitoring: Collection, Detection, and Analysis. Both books cover open source NSM tools and analyzing network data but the aspects I really enjoyed were their perspectives: how to approach NSM and how to manage certain aspects of it, including responding to security incidents and detecting security events.

Free Book with Strategies for Cybersecurity Operations Center


Carson Zimmerman of The MITRE Corporation released a free book titled Ten Strategies of a World-Class Cybersecurity Operations Center. I have been to a few different conferences where vendors give out free books. Needless to say, based on my past experiences I view free books with skepticism. However, Ten Strategies of a World-Class Cybersecurity Operations Center has restored my faith in free books. I'm surprised it was released for free; the content is great and the quality is top notch. The strategies cover topics ranging from consolidating the detection and response functions under one organization to CSOC size, staff quality, sensor placement, and responding to incidents. It's a great read for anyone looking to improve an organization's detection and response capability. It may even be worth the while of people who have been managing CSOCs to pick up a few different ideas.

Leveraging the Kill Chain for Detection


Sean Mason wrote the article Leveraging The Kill Chain For Awesome discussing the various ways the kill chain can be used. The section I wanted to focus on was the following: "when it comes to enterprise detection, the Kill Chain is useful for understanding what your capabilities are, as well as your gaps in coverage by tools and threat actors." This is one area where I think the kill chain excels. By organizing your detection rules beneath the kill chain phases it is easy to see which detection areas are strong and which are weak. Furthermore, it helps to see where external tools or intelligence can fill in the gaps. The one additional point I suggest is to do this organization for each use case that is implemented.

Questions to Answer During Response


A couple months ago Jack Crook put together a nice post over on his HandlerDiaries blog called Answering those needed questions. He walked through the typical questions he needs to answer when he is responding to an incident. In addition, he addressed "some of the actions needed during response activities." There were two things I really enjoyed about the post. First was how he broke down all of the possible questions one could ask into six categories; this simplifies the thought process making it easier to work through. The second point I liked was how he tied together response and detection. Throughout the post he touched on things to consider to improve detection as you respond.

Triage Questions


Continuing along with what questions should be answered when responding to security incidents is David Bianco's article Triage Any Alert With These Five Weird Questions!. David defines alert triage as "the process of going through all of your alerts, investigating them, and either closing them or escalating them to an incident." By far, this is one of the most common activities for those performing detection and response. The article walks through five questions one should ask while performing alert triage. The article is well worth the read since it highlights things to consider when performing this work.

If Antivirus Fires Triage that System


Speaking of triaging and alerts, Adam over at the Hexacorn blog put together an awesome post (The art of disrespecting AV (and other old-school controls), Part 2) highlighting a critical point. His conclusion was "when you see an AV alert you need to triage the system, because it has been compromised + there may be still some undetected malware present on it." I couldn't agree more with the items he brought to light in his post and his conclusion. Too many times you see organizations become complacent because their antivirus solution detected and removed a piece of malware, without doing any additional work or trying to answer questions. Too many times I've seen antivirus hit on one file but miss numerous others. The line of thinking that things are all good "since antivirus got it" is broken and in the end risks leaving compromised systems on the network. Furthermore, triaging every single antivirus alert provides visibility into the network and the methods being used to try to compromise the organization.

Seeing the Complete Picture


Rounding out this linkz post is the article What It Looks Like: Malware Infection via a Weaponized Document by Harlan Carvey. Harlan obtained a weaponized document, executed it on a test system, and then walked through his examination to identify artifacts on the host. The document he obtained had already been written about from the dynamic analysis perspective but that write-up didn't address artifacts left on the system. The thing I really liked about Harlan's post is it addresses an area very little is written about. There are a ton of articles about dynamic analysis and offensive techniques such as exploiting a vulnerability but there is not as much about attack vector artifacts left on the system: the artifacts showing how it looks when a hacker (or cracker) uses a path or means to gain access to a computer or network server in order to deliver a payload or malicious outcome. Over the past month there have been various articles mentioning the increase in malicious documents being used to compromise systems. The malicious documents vary and so do their payloads. However, the activity left on the system by a malicious document remains the same. This activity is what Harlan addressed in his post and the information helps bring a more complete picture into view. Harlan also provided his thoughts on a few take-aways for detection and response.

Triaging a System Infected with Poweliks

Sunday, January 4, 2015 Posted by Corey Harrell 3 comments
Change is one of the only constants in incident response. In time most things will change; technology, tools, processes, and techniques all eventually change. The change is not only limited to the things we rely on to be the last line of defense for our organizations and/or customers. The threats we are protecting them against change too. One recent example is the Angler exploit kit incorporating fileless malware. Malware that never hits the hard drive is not new but this change is pretty significant. An exploit kit is using the technique so the impact is more far reaching than the previous instances where fileless malware has been used (to my knowledge). In this post I'm walking through the process one can use to triage a system potentially impacted by fileless malware. The post is focused on Poweliks but the process applies to any fileless malware.

Background on Why This Matters


In my RSS feeds, I was following the various articles about how an exploit kit incorporated the use of fileless malware. The malware never gets dropped to the disk and gets loaded directly into memory. A few of the articles I'm referring to are: Poweliks: The file-less little malware that could, Angler EK : now capable of "fileless" infection (memory malware), Fileless Infections from Exploit Kit: An Overview, POWELIKS: Malware Hides In Windows Registry, and POWELIKS Levels Up With New Autostart Mechanism. Reading the articles made one thing clear: one of the most effective tools to deliver malware (exploit kits) is now using malware that stays in memory.

This change has a significant impact on multiple areas. If the malware stays in memory then the typical artifacts we see on the host will not be there. For example, when the malware is loaded into memory it won't create program execution artifacts on the system. This means the triage and examination process needs to adjust. As I mentioned previously, this change was implemented into a widely known exploit kit (the Angler exploit kit). The systems infected with this exploit kit can be far reaching. This means we will encounter this change sooner rather than later, if you haven't faced it already. Case in point: recently the Internet Systems Consortium website was compromised and was redirecting visitors to the Angler exploit kit. The last impact is that if this change provides better results for the people behind it then I can see other exploit kit authors following suit. This means fileless malware may become even more widespread and it's something that is here to stay.

I knew memory forensics is one technique we can use to find the malware in memory (if you need a great reference on how to do this check out the book The Art of Memory Forensics). However, the question remained: what does this look like? I took the short route for a quick answer to my question by reaching out to my Twitter followers. I asked them the following: "Anyone know how Poweliks code looks from memory forensics perspective?"

The first response I got back was from Adam over at the Hexacorn blog (great blog by the way) as shown below.

 
Adam provided some great information: narrow in on the dllhost.exe process and know what strings to look for. Another response I got was from @lstaPee as shown below:


@lstaPee provided a few more tidbits: RunDll32.exe injects code into dllhost.exe and dllhost.exe should have network connections. The responses I got back from Twitter were great but I really needed to address the bigger question: if and when I have to triage a system infected with Poweliks, what is the fastest way to perform the triage to locate the malware and determine the root cause of the infection? It was a question I needed to dig into in order to find the answer.

Testing Environment


As much as I wanted to simulate this attack by finding a live link to an Angler exploit kit, I knew it would be very difficult. Based on various articles I read, Angler is VMware aware and it doesn't always deliver the fileless malware. I opted to use a Poweliks dropper/downloader. I used the sample with MD5 0181850239cd26b8fb8b72afb0e95eac, which I found on Malwr. The test system was a Windows 7 32-bit virtual machine in VMware.

The test conditions were really basic. I executed the sample by clicking it and then waited for about a minute. The VM was suspended and I collected the memory and prefetch files. I then resumed the VM and rebooted the system. After the reboot, I logged onto the VM and then suspended it again to collect the memory and prefetch files.

My tests were designed to analyze the Poweliks infection from two angles: the initial infection prior to a system reboot and a persistent infection after the system reboots. My analysis had one exception. Clicking the Poweliks executable to infect the system created program execution artifacts; I ignored these artifacts since they wouldn't be present if the malware was loaded directly into memory. I followed my typical examination process on the memory images and vmdk files but this post only highlights the activity that directly points to Poweliks. There was other activity of interest but that activity by itself does not indicate anything malicious, so I opted to omit it from the post.

Poweliks' Behavior


Before diving into the triage process and what to look for, it's important I discuss one of Poweliks' behaviors. I won't go into any details about how I first picked up on this but I will show the end result: what the behavior is and how it can help when triaging Poweliks specifically. The screenshot below shows part of Malwr's behavior analysis section illustrating the behavior I'm referring to.


Upon a system's initial infection, the malware calls rundll32.exe, which then calls powershell.exe, which injects code into the dllhost.exe process. In the image above the numbers are the process IDs and this is relevant as we dig deeper into the behavior.

The image below shows activity that occurs shortly after the rundll32.exe process starts. As can be seen, rundll32.exe attempts to load a module into its own address space with the LdrLoadDll function. The module being loaded is actually JavaScript; this behavior is well documented for Poweliks, such as in the article Poweliks – Command Line Confusion. Notice the activity following the LdrLoadDll function call is trying to locate the address for the RunHTMLApplication function. This is the keyword Adam pointed out.


The images below show activity that occurs just prior to the powershell.exe process exiting. Powershell.exe creates the dllhost.exe process in the suspended state. Code gets injected into this suspended dllhost.exe process and then it is resumed. This technique is process hollowing; when the suspended process is resumed it executes the injected code.



Triaging System Infected with Poweliks


Triage is the assessment of a security event to determine if there is a security incident, its priority, and the need for escalation. As it relates to potential malware incidents, the purpose of triaging may vary. In this instance, triage is being used to determine if an event is a security incident or false positive by identifying malware on the system. Confirming the presence of malware allows for a deeper examination to be completed. The triage process I'm outlining is to confirm the presence of the Poweliks fileless malware.

Triaging with Host Artifacts


Normally, triaging a system using artifacts on the host is an effective technique to identify malware. This is especially true when leveraging program execution artifacts. However, loading malware directly into memory has a significant impact on the artifacts available on the host. There are very few artifacts available and if the malware doesn't remain persistent then there will be even less. Triaging a system infected with Poweliks is no different. Most of the typical artifacts are missing but the infection can still be identified using prefetch files and autorun locations.

Prefetch Files


Previously I outlined the Poweliks behavior where the rundll32.exe process runs, which then starts a powershell.exe process that injects code into a suspended dllhost.exe process. This behavior is apparent in the prefetch files at the point of the initial infection. The image below shows the activity.


The prefetch files show the sequence of rundll32.exe executing, followed by powershell.exe, and then dllhost.exe. Furthermore, the dllhost.exe prefetch file is missing the process path. The missing process path indicates process hollowing was used, as I outlined in the post Prefetch File Meet Process Hollowing. Prefetch files contain references to files accessed during the first 10 seconds of application startup and the dllhost.exe prefetch file contains revealing ones. It contains a reference to wininet.dll for interacting with the network and files associated with Internet Explorer, as shown below.


This specific prefetch file sequence only occurs upon the initial infection. Future system restarts where Poweliks is loaded into the dllhost.exe process only produce the dllhost.exe prefetch file. That prefetch file still contains references to files located in the user profile.
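To review collected prefetch files for this pattern, one quick sketch of an approach (assuming the prefetch files were copied off the infected system to a local folder; the path below is only a placeholder) is to parse the whole folder with NirSoft's Winprefetchview, the same tool used elsewhere on this blog:

winprefetchview.exe /folder C:\collected\Prefetch

Sorting the parsed output by last run time makes the rundll32.exe, powershell.exe, and dllhost.exe sequence easy to spot, and a dllhost.exe entry with a missing process path stands out from the legitimate entries.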


Autoruns


The prefetch files contain a distinctive pattern indicating a Poweliks infection. Depending on the sample, autoruns can reveal even more. I say depending on the sample because Poweliks has changed its persistence mechanism: initially it used the Run registry key before moving on to a CLSID registry key. I thought one article mentioned Poweliks may not try to remain persistent at all times. If Poweliks does try to remain persistent then its persistence mechanism can be used to find it. Keep in mind, Poweliks has taken self-protection measures to prevent this mechanism from being located on a live system. The easiest method to bypass these measures is to access the system remotely with a forensic tool like EnCase Enterprise, mount the drive, and then run RegRipper across the hives.
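As a rough sketch of that last step, assuming the remote system's drive is mounted as F: and the lab user's profile is the one of interest (the drive letter, profile name, and report file names are placeholders), RegRipper's command line tool can be pointed at the hives:

rip.exe -r F:\Users\lab\NTUSER.DAT -f ntuser > ntuser-report.txt
rip.exe -r F:\Windows\System32\config\SOFTWARE -f software > software-report.txt

The reports can then be reviewed for the Run key and CLSID persistence mechanisms mentioned above.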

The image below shows the Run key from the user account on my test system. The sample I used was older since it still used the Run key, but it remains a tell-tale sign of a Poweliks infection.

...snip....

Memory Analysis Triage


Fileless malware may leave very few artifacts on the host's hard drive but it still has to reside in memory. The most effective technique to identify a fileless malware infection is memory forensics. A Poweliks infection is not an exception since it stands out in memory whether the memory is examined after the initial infection or after a system reboot.

Network Connections


One area with malware indications is network activity for unusual processes. @lstaPee alluded to this in their tweet about Poweliks. The Volatility netscan plug-in does show network activity for the dllhost.exe process involving the IP address 178.89.159.35 on port 80 for HTTP traffic. dllhost.exe is not a process typically associated with web traffic so this makes it a good indicator pointing to Poweliks.
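A minimal sketch of that check, assuming Volatility 2 on the analysis system (the memory image name and profile below are placeholders for whatever matches the system being examined):

vol.py -f memory.raw --profile=Win7SP1x86 netscan

Reviewing the netscan output, or piping it through findstr or grep for dllhost.exe, quickly shows whether that process owns any sockets or connections it has no business owning.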


Process Listing


Another area with malware indications is the process listing, which can show unusual processes or processes launched with unusual command lines. The Volatility pslist, psscan, and pstree -v plugins did not reveal anything that could definitively be used as an indicator but they did show the dllhost.exe process running. I checked a few clean systems to see if dllhost.exe normally runs and the process was not running by default. This doesn't mean it can be used as an indicator on its own because there could be other reasons for dllhost.exe running besides Poweliks. The screenshot below is from the pstree plug-in showing the command line for launching dllhost.exe (notice there are no other options used in the command).
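The verbose process tree in that screenshot came from a command along these lines (again only a sketch; the image name and profile are placeholders):

vol.py -f memory.raw --profile=Win7SP1x86 pstree -v

The verbose output includes each process's command line, which is what makes the bare dllhost.exe invocation with no arguments visible.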


Injected Code


Looking for processes with injected code is an effective technique to locate malware on a system. This is the one technique that absolutely reveals Poweliks on a system. The Volatility malfind plug-in showed the dllhost.exe process with injected code. This matches up with the articles about the malware and the behavior analysis showing code being injected into the dllhost.exe process. The image below shows partial output from malfind.
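A sketch of the malfind step, including dumping the injected sections to disk for the antivirus scan described next (the image name, profile, dllhost.exe process ID, and output directory are all placeholders):

vol.py -f memory.raw --profile=Win7SP1x86 malfind -p 3964 -D extracted

Dropping the -p option runs malfind across every process, which is the better approach when there isn't already a suspect process in mind.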


Extracting the injected code and scanning it with antivirus confirms it is Poweliks. The image below shows the VirusTotal results for the injected code. Microsoft detected the code as Trojan:Win32/Powessere.A which is their classification for Poweliks.


Strings


The last area containing indicators pointing to Poweliks is the strings in the dllhost.exe process. The method to review the strings is not as straightforward as running a single Volatility plug-in. The strings command reference walks through the process and it's the one I used. The only thing I did differently was to grep for my process ID to make the strings easier to review. The dllhost.exe strings revealed URLs, including one containing the IP address found with the netscan plug-in.
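A rough sketch of that workflow, assuming GNU strings and Volatility 2 on the analysis box (the image name, profile, and the 3964 process ID are placeholders):

strings -a -td memory.raw > strings.txt
strings -a -td -el memory.raw >> strings.txt
vol.py -f memory.raw --profile=Win7SP1x86 strings -s strings.txt --output-file=strings-translated.txt
grep "3964:" strings-translated.txt

The first two commands pull the ASCII and Unicode strings along with their decimal offsets, the strings plug-in maps each offset to the owning process and virtual address, and the final grep narrows the output to the dllhost.exe process ID.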


The most significant string found was the command used to make rundll32.exe inject code into the dllhost.exe process as shown below. The presence of this string alone in the dllhost.exe process indicates the system is infected with Poweliks.


Wrapping Things Up


The change introduced by the Angler exploit kit creator(s) is causing us to make adjustments in our processes. The effective techniques we used in the past may not be as effective against fileless malware. However, that doesn't mean we are prevented from triaging these systems. It only means we need to use other processes, techniques, and tools we have at our disposal. We need to take what artifacts do remain and use them to our advantage. This post was specific to the Poweliks malware but the techniques discussed will apply to other fileless malware. The only difference will be what data is actually found in the artifacts.

The Art of Memory Forensics Book Review

Sunday, December 28, 2014 Posted by Corey Harrell 6 comments
Christmas is in the rear view mirror and you may be left wondering about the gift you didn't find under the tree. The gift loaded with DFIR goodness to bring you into the new year. A gift you can use to improve your knowledge and skills. A gift to help you get to the next level in digital forensics, incident response, or malware analysis. If you didn't get any DFIR goodness then it's a great opportunity to reward yourself with a gift of your own choosing; one to accomplish what the previous sentences alluded to. If you fit this description then the book The Art of Memory Forensics is what you should be looking for. Even if the description doesn't fit, if you don't already own this book you should seriously check it out. This post is my review of the book The Art of Memory Forensics.

Three in One


The book addresses memory forensics on the following three operating systems: Windows, Linux, and Mac. This makes it an outstanding book since it addresses the most commonly faced operating systems. Furthermore, the content not only addresses memory forensic techniques but goes into detail about operating system internals. The majority of the systems I encounter are Windows systems so my focus was on the Windows portion of the book. (I did skim the other sections but I took my time in the Windows section.) The content went deep into the various Windows data structures and function calls. This makes the book an outstanding reference to better understand operating system internals. I easily envision myself using this book as a reference for years to come.

Not Just Words but Hands-on


One thing I tend to look for in a technical security book is how easy it is for the reader to take the content and techniques and apply them elsewhere. This is another area where The Art of Memory Forensics shines. The book's website provides additional materials that accompany the book. The items include lab questions, lab answers, and memory images for each chapter. This gives the reader an opportunity to do the hands-on labs to reinforce that chapter's content. It's a great way to learn since you are actually performing memory forensics on an image after reading about it. Furthermore, to explain concepts the book uses - for the most part - memory images freely available on the Internet. As you read the book you can follow along by performing the same activities on the same memory images. At times the authors don't explicitly say what memory image they are using but the names of the memory images are pretty revealing. For example, in the Detecting Registry Persistence section (Kindle version page 4626) the Volatility handles plug-in is run against a memory image named "laqma.mem". On the SampleMemoryImages webpage you can see a Laqma memory image is available and this is the one used in the book. This occurs frequently in the book, as do cases where the authors specifically mention the memory image they are using.

The one area I thought could make this book even better would be for the authors to explicitly state the memory image being used in the examples. This would make it easier for others (especially people who are not aware of the available memory images) to follow along in the book doing the same examples.

Memory Forensics in Toolbox


The last point I wanted to touch on about why I think so highly of this book is that memory forensics is a process we need to have in our toolbox. This is true regardless of whether the work involves incident response or malware analysis. In incident response, there are times when you need to examine the volatile data on a system to obtain an answer. For example, a system is making network connections to a known malicious domain, which is setting off alerts. Tying the network connections to an actual process on the system requires memory forensics. The need for this in incident response is even greater with the recent increase of an exploit kit leveraging fileless malware. In malware analysis, there are times when memory forensics can provide additional information about a sample under examination. Does it open a socket, make network connections, inject code, hook functions, etc.? Memory forensics is now a process we need available in our toolbox and this book can help put it there.

All in All


If you are looking for a gift loaded with DFIR goodness, looking to improve your knowledge and skills, or looking for help getting to the next level in DFIR then this book is for you. The Art of Memory Forensics is a hefty book loaded with excellent content. It's an outstanding book and those who don't already own it should seriously consider making it their next DFIR purchase. Just make sure to get your money's worth by grabbing the labs and memory images and then putting hands to the keyboard as you read along.

Prefetch File Meet Process Hollowing

Wednesday, December 17, 2014 Posted by Corey Harrell 4 comments
There are times when you are doing research and you notice certain behavior. You may have been aware of the behavior but you never considered the impact it has on other artifacts we depend on during our digital forensic and incident response examinations. After thinking about and researching the behavior and impact, it makes complete sense; so much so that it's pretty obvious after the fact. In this post I'm exploring one such behavior and its impact, which I came across while researching systems impacted by the Poweliks fileless malware. Specifically, how creating a suspended process and injecting code into it impacts the process's prefetch file.

The statement below is the short version describing the impact injecting code into a suspended process has on its prefetch file. For those wanting the details behind it the rest of the post explains it.

If the CreateProcess function creates a process in the suspended state and code gets injected into the process, the prefetch file for that process will contain the trace for the injected code and not the original process. Therefore, the prefetch file can be an indicator showing this technique was used.

Process Hollowing Technique


Malware uses various techniques to covertly execute code on systems. One such technique is process hollowing, which is also known as process replacement. The book Practical Malware Analysis states the following in regards to this technique:

"Process replacement is used when a malware author wants to disguise malware as a legitimate process, without the risk of crashing a process through the use of process injection.

Key to process replacement is creating a process in a suspended state. This means that the process will be loaded into memory, but the primary thread of the process is suspended. The program will not do anything until an external program resumes the primary thread, causing the program to start running"

In addition, the book The Art of Memory Forensics states the following:

"A malicious process starts a new instance of a legitimate process (such as lsass.exe) in suspended mode. Before resuming it, the executable section( s) are freed and reallocated with malicious code."

In essence, process hollowing is when a process is started in the suspended state, code is injected into the process to overwrite the original data, and when the process is resumed the injected code is executed. In a future post I will go into greater detail about this technique but for this post I wanted to keep the description at a high level.

Windows Internals Information


To fully explore the process hollowing behavior on a Windows system and its impact on the prefetch file it is necessary to understand Windows internals. Specifically, when and why prefetch files are created and how processes are created.

Windows Prefetcher Information


Prefetch files are a well known and understood artifact within the DFIR field. These files are a byproduct of the Windows operating system trying to speed up either the boot process or applications startup. The book Windows Internals, Part 2: Covering Windows Server 2008 R2 and Windows 7 states the following:

"The prefetcher tries to speed the boot process and application startup by monitoring the data and code accessed by boot and application startups and using that information at the beginning of a subsequent boot or application startup to read in the code and data."

As it relates to application prefetch files the book continues by saying:

"The prefetcher monitors the first 10 seconds of application startup."

The information the prefetcher monitors is explained as follows:

"The trace assembled in the kernel notes faults taken on the NTFS master file table (MFT) metadata file (if the application accesses files or directories on NTFS volumes), on referenced files, and on referenced directories."

In essence, if the prefetcher is enabled on a Windows system then the first 10 seconds of execution is monitored to determine what files and directories the application accesses. This information (or trace) is then recorded in a prefetch file located inside the C:\Windows\Prefetch directory.

Windows Flow of Process Creation


Knowing about process hollowing and prefetching is not enough. It's necessary to understand the flow of process creation in order to see the various activities that occur when a process is created. The book Windows Internals, Part 1: Covering Windows Server 2008 R2 and Windows 7 goes into great detail about process creation and below are the main stages:

1. Validate parameters; convert Windows subsystem flags and options to their native counterparts; parse, validate, and convert the attribute list to its native counterpart.

2. Open the image file (.exe) to be executed inside the process.

3. Create the Windows executive process object.

4. Create the initial thread (stack, context, and Windows executive thread object).

5. Perform post-creation, Windows-subsystem-specific process initialization.

6. Start execution of the initial thread (unless the CREATE_SUSPENDED flag was specified).

7. In the context of the new process and thread, complete the initialization of the address space (such as load required DLLs) and begin execution of the program.

The picture below illustrates the process creation main stages:


The main stages highlight a very important point as it relates to process hollowing. When a process is created it doesn't start executing until Stage 7. If the process is created in a suspended state then the first 6 stages are completed. The question is when does the prefetcher come into play. The book Windows Internals Part 1 states the following activity occurs in stage 7:

"Otherwise, the routine checks whether application prefetching is enabled on the system and, if so, calls the prefetcher (and Superfetch) to process the prefetch instruction file (if it exists) and prefetch pages referenced during the first 10 seconds the last time the process ran."

This is very important so it is worth repeating. The prefetcher monitoring application startup occurs in stage 7 when the application is executing. However, a process that is created in the suspended state does not execute until it is resumed. This means that if the process hollowing technique is used, then when the process resumes and executes the injected code, the prefetcher is monitoring the files and directories accessed by the injected code and not the original process. In shorter words:

If the CreateProcess function creates a process in the suspended state and code gets injected into the process, the prefetch file for that process will contain the trace for the injected code and not the original process. Therefore, the prefetch file can be an indicator showing this technique was used.

Process Hollowing Prefetch File


It's always helpful to see what is being described in actual data to make it easier to see how it applies to our examinations. To generate a prefetch file I double-clicked the svchost.exe executable located in the C:\Windows\Prefetch directory. Below is the partial output from this prefetch file being parsed with Harlan Carvey's pref.pl script from the WFA 4th edition book materials. (One note about the prefetch file: I added the underscore at the end of the executable's name to force the creation of a new prefetch file.)

File     : C:\Prefetch\SVCHOST_.EXE-3530F672.pf
Exe Path : \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\SVCHOST.EXE
Last Run : Wed Dec 17 20:58:57 2014
Run Count: 1

Module paths:
   \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NTDLL.DLL
   \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\KERNEL32.DLL
   \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\UNICODE.NLS
   \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\LOCALE.NLS
  

As shown above, svchost.exe executed one time and some of the files it accessed are reflected in the module paths section. I asked Harlan where the executable path comes from in prefetch parsers since the prefetch file format does not record the executable path. He confirmed what I was assuming: prefetch parsers search the module paths for the name of the executable referenced at offset 0x0010.

Now let's take a look at the prefetch file for a svchost.exe process that was hollowed out. The Lab12-02.exe executable provided with Practical Malware Analysis performs process hollowing on the svchost.exe process. This can be seen by performing dynamic analysis on the executable. The images below are from the executable being run in Malwr.

Lab12-02.exe first creates the svchost.exe process in the suspended state. Notice the creation flag of 0x00000004. Also, make note of the process handle to the suspended process (0x00000094).


Lab12-02.exe then continues by injecting code into the suspended svchost.exe process.


Lab12-02.exe finishes injecting code into the suspended svchost.exe process with the process handle 0x00000094 before it resumes the suspended process.



Below is the partial output from this prefetch file being parsed with Harlan's pref.pl script.

File     : C:\Prefetch\SVCHOST.EXE-3530F672.pf
Exe Path : SVCHOST.EXE-3530F672.pf
Last Run : Wed Dec 17 20:59:48 2014
Run Count: 1

Module paths:
   \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NTDLL.DLL
   \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\KERNEL32.DLL
   \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\UNICODE.NLS
   \DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\LOCALE.NLS
  

It may not be obvious in the partial output but the files accessed by the injected code are different from those accessed by the normal svchost.exe process. One of the more obvious differences is what is not accessed: the original executable itself. Notice how the executable path contains the name of the prefetch file; this is due to the actual svchost.exe executable not being accessed within 10 seconds of the injected code running.

Circling Back To Poweliks


I mentioned previously that I noticed this behavior when researching the Poweliks fileless malware. This malware doesn't write its binary to disk since it stays in memory. Not only is the malware not on the hard drive but other typical artifacts may not be present as well (such as the normal program execution artifacts). In the near future I'll have a detailed post about how to triage systems impacted by this malware but I wanted to highlight why the information I shared in this post matters. The screenshot below shows parsed prefetch files from a test system infected with a Poweliks dropper. My only clue to you is: one prefetch file is not like the others.


There is one dllhost.exe prefetch file that has the missing process path. A closer inspection of this prefetch file reveals some other interesting file access during application start-up:


During application start-up, files associated with Internet activity were accessed in addition to the WININET.DLL dll for making HTTP requests. When Poweliks (or at least the samples I've reviewed) initially infects a computer it performs the following actions: the rundll32.exe process starts the powershell.exe process, which then starts a suspended dllhost.exe process and injects code into it. Powershell.exe then resumes the dllhost.exe process, which executes the Poweliks malicious code. The screenshots above illustrate the behavior of process hollowing performed by Poweliks and the impact on the dllhost.exe prefetch file.

The above indication of the missing executable path in the dllhost.exe prefetch file is only present for the initial infection when process hollowing is used (the malicious code doesn't access the file it was injected into once it executes). If Poweliks remains persistent and the system reboots, the malicious code is still injected into the dllhost.exe process but it doesn't appear to be process hollowing. These dllhost.exe prefetch files will contain the dllhost.exe executable path, along with the trace for suspicious file access as shown in the last screenshot. This is reflected in the second dllhost.exe prefetch file highlighted in the above screenshot.

Conclusion


Uncovering behaviors during research is helpful to put something you take for granted in a different perspective. Process hollowing behaves in a specific way in the Windows operating system and it can impact prefetch files in a specific way. This impact can be used as an indicator to help explain what occurred on a system but it needs context. A missing process executable in a prefetch file does not mean process hollowing occurred. However, a missing process executable along with suspicious file access during application start-up and an indication a system was compromised means something else entirely.

The Hammer Is Not Broken

Sunday, November 23, 2014 Posted by Corey Harrell 2 comments
I went to my local hardware store to buy one of the latest hammers. I brought the hammer home but it was unable to build the shed in my backyard. I spoke to someone else who said something similar: they bought the hammer and it didn't build what they wanted it to. The expectation was that the hammer should do what it needs to do automatically and if it doesn't then the hammer is broken. It doesn't live up to its expectations, it wasn't the best investment, and the hammer developers need to start over.

Every now and then there are articles about how a certain security technology is broken. The recent focus of criticism is SIEM technology. It's broken because it doesn't do everything automatically and live up to peoples' expectations. The expectation is that the technology should be able to be implemented and then automatically solve problems; similar to the expectation of a shed being built by simply buying a hammer. The hammer is not at fault, just as the security technology is not at fault. Both of them are tools and the issue is with who is wielding the tool.

The hammer is not broken. The security technology is not broken. What is broken is the person (or organization) holding the tool. They need the critical thinking and problem solving skills to make the tool do what it needs to do and meet expectations along the way.

"analysts can be trained to use a tool in a rudimen-tary manner, they cannot be trained in the mind-set or critical thinking skills needed to master the tool"

                                                                               ~ Carson Zimmerman

The issue is not with the tool. The issue is the organization hasn't put the tool in the right peoples' hands and until they do the brokenness won't be fixed.

Triaging with Tr3Secure Script's NTFS Artifacts Only Option

Wednesday, November 5, 2014 Posted by Corey Harrell 1 comments
An alert fires about an end point potentially having a security issue. The end point is not in the cubicle next to you, not down the hall, and not even in the same city. It's miles away in one of your organization's remote locations. Or maybe the end point is not miles away but a few floors from you. However, despite its closeness the system has a one terabyte hard drive in it. The alert fired and the end point needs to be triaged, but what options do you have? Do you spend the time to physically track down the end point, remove the hard drive, image the drive, and then start your analysis? How much time and resources would be spent approaching triage in this manner? How many other alerts would be overlooked while the focus is on just one? How much time and resources would be wasted if the alert was a false positive? In this post I'll demonstrate how the Tr3Secure collection script can be leveraged to triage this type of alert.

Efficiency and Speed Matters


Triaging requires a delicate balance between thoroughness and speed. Too much time spent on one alert means not enough time for the other alerts. Time is not your friend when you are either trying to physically track down a system miles away or imaging a hard drive prior to analysis. Why travel miles when milliseconds will do? Why collect terabytes/gigabytes when megabytes will do? Why spend time doing deep dive analysis until you confirm you have to? The answer to all these questions is to walk the delicate balance between thoroughness and speed: to quickly collect only the data you need to confirm if the alert is a security event or false positive. The tools one selects for triage are directly related to one's ability to be thorough but fast. One crop of tools to help you strike this balance is triage scripts.

Triage scripts automate parts of the triage process; this may include either the data collection or the analysis. The Tr3Secure collection script automates the data collection. In my previous post - Tr3Secure Collection Script Updated - I highlighted the new features I added to the script to collect the NTFS Change Journal ($UsnJrnl) and the new menu option for only collecting NTFS artifacts. Triaging the alert I described previously can easily be done by leveraging the Tr3Secure collection script's new menu option.

To demonstrate this capability I configured a virtual machine in an extremely vulnerable state and then visited a few malicious sites to provide me with an end point to respond to.

Responding to the System


The command prompt was accessed on the target system. A drive was mapped to a network share to assist with the data collection. The network share is not only where the collected data will be stored but also where the Tr3Secure collection script resides. The command below uses the Tr3Secure collection script's NTFS artifacts only option and stores the collected data on the network share mapped to the Y: drive. Disclaimer: leveraging network shares does pose a risk but it allows the tools to be executed without being copied to the target and the data to be stored in a remote location.


The image below shows part of the output from the command above.


The following is the data that was collected using Tr3secure collection script's NTFS artifacts only option:

     - RecentFileCache.bcf
     - Prefetch files
     - Master File Table ($MFT)
     - NTFS Change Journal ($UsnJrnl)
     - NTFS Logfile ($Logfile)


Triaging


Collecting the data is the first activity in triaging. The next step is to actually examine the data to confirm if the alert is an actual security event or a false positive. The triage process will vary based on what the event is. A fast spreading worm varies from a Trojan. Antivirus alerts vary from malicious network traffic (i.e. an end point communicating with a malicious IP). The scenario demonstrated in this post is an alert firing that requires triaging an end point. There are certain indicators to look for when triaging end points for malware, as outlined below:

     - Programs executing from temporary or cache folders
     - Programs executing from user profiles (AppData, Roaming, Local, etc)
     - Programs executing from C:\ProgramData or All Users profile
     - Programs executing from C:\RECYCLER
     - Programs stored as Alternate Data Streams (i.e. C:\Windows\System32:svchost.exe)

     - Programs with random and unusual file names
     - Windows programs located in wrong folders (i.e. C:\Windows\svchost.exe)
     - Other activity on the system around suspicious files


In addition to looking for the above indicators, it is necessary to ask yourself a few important questions.

     1. What was occurring on the system around the time the alert generated?
     2. Are there any indicators in the alert to use when examining the data?
     3. For any suspicious or identified malicious code, did it execute on the system?


The triage steps to use depend on the collected data. In this case, the data collected with the NTFS artifacts only option results in examining only program execution and filesystem activity. These two examination steps are very effective for triaging end points, as I previously illustrated in my post Triaging Malware Incidents.

Examining Program Execution Artifacts


The first artifact to check is the RecentFileCache.bcf to determine if any stand-alone programs executed on the endpoint. The file can be examined using a hex editor or Harlan's rfc.exe tool included in his WFA 4/e book materials. The command below parses this artifact:

rfc.exe C:\Data-22\WIN-556NOJB2SI8-11.04.14-19.36\preserved-files\AppCompat\RecentFileCache.bcf

The image below highlights a suspicious program. The program stands out since it executed from the lab user profile's temporary internet files folder. Right off the bat this artifact provides some useful information: the coffee.exe program executed within the past 24 hours, the program came from the Internet using the Internet Explorer web browser, and the activity is associated with the lab user profile.


The second program execution artifact collected with the NTFS artifacts only option is the set of prefetch files. The command below parses them with Winprefetchview:

winprefetchview.exe /folder C:\Data-22\WIN-556NOJB2SI8-11.04.14-19.36\preserved-files\Prefetch

The parsed prefetch files were first sorted by process path to find any suspicious programs. Then they were sorted by last run time to identify any suspicious programs that executed recently or around the time of the alert. Nothing really suspicious jumped out in the parsed prefetch files. However, there was a svchost.exe prefetch file and this was examined since it may contain interesting file handles related to the suspicious program listed in the RecentFileCache.bcf (to see why, review my RecentFileCache.bcf post and pay close attention to where I discuss the svchost.exe process). This could be something but it could be nothing. The svchost.exe process had a file handle to the ad[2].htm file located in the temporary internet files folder as shown below:


Examining Filesystem Artifacts


The Tr3Secure collection script collects the NTFS $MFT, $Logfile, and $UsnJrnl, all of which can provide a wealth of information. These artifacts can be examined in two ways. The first is to parse them and then look at the activity around the time the alert fired. The second is to parse them and then look at the activity around the time the identified suspicious files appeared on the system. In this post I'm using the latter method since the alert is hypothetical.

The $MFT was parsed with Joakim Schicht's MFT2CSV program. The output was produced in the log2timeline format so it could be made into a timeline.


The MFT2CSV csv file was imported into Excel and a search was performed for the file "coffee.exe". It's important to look at the activity prior to this indicator and afterwards to obtain more context about the event. As illustrated below, there was very little activity - besides Internet activity - around the time the coffee.exe file was created. This may not confirm how the file got there but it does help rule out certain attack avenues such as drive-by downloads.
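If Excel isn't handy, a quick command-line filter accomplishes the same first pass (the csv filename below is only a placeholder for the MFT2CSV output):

findstr /i /c:"coffee.exe" mft2csv-output.csv

The hits still need to be reviewed in the context of the full timeline, but this quickly confirms whether the indicator is present in the collected data.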


The image below shows what occurred on the system next. There is activity of Java executing and this may appear to be related to a Java exploit. However, this activity was due to the Java update program executing.


The last portion of the $MFT timeline is shown below. A file with the .tmp extension was created in the temp directory followed by the creation of a file named SonicMaster.exe in the System32 folder. SonicMaster.exe automatically becomes suspicious due to its creation time being so close to coffee.exe's (guilty by association). If this is related then it means the malware had administrative rights, which are required to make modifications to the System32 folder. The task manager was executed and then there was no more obvious suspicious activity in the rest of the timeline. The remaining activity was the lab user account browsing the Internet.


At this point the alert has been properly triaged and confirmed. The event can be escalated according to the incident response procedures. However, additional context can be obtained by parsing the NTFS Change Journal ($UsnJrnl). Collecting this artifact was the most recent update to the Tr3Secure collection script. The $UsnJrnl was parsed with Joakim Schicht's UsnJrnl2CSV program.


The UsnJrnl2CSV csv file was imported into Excel and a search was performed for the file "coffee.exe". Again, it's important to look at the activity prior to this indicator and afterwards to obtain more context about the event. The $UsnJrnl shows the coffee.exe file being dropped onto the system from the Internet (notice the [1] included in the filename).


The next portion, shown below, shows coffee.exe executing, which resulted in the modification to the RecentFileCache.bcf. Following this, the tmp file is created along with the SonicMaster.exe file. The interesting activity is the data overwrites made to the SonicMaster.exe file after it's created.


Immediately after the SonicMaster.exe file creation the data overwrites continued to other programs as illustrated below. To determine what is happening would require those executables to be examined but it appears they had data written to them.


The executables were initially missed in the $MFT timeline since the files by themselves were not suspicious. They were legitimate programs located in their correct folders. However, the parsed $UsnJrnl provided more context about the executables and circling back around to the $MFT timeline shows the activity involving them. Their $MFT entries were updated.


Conclusion


As I mentioned previously, triaging requires a delicate balance between thoroughness and speed. It's a delicate balance between quickly collecting and analyzing data from the systems involved with an alert to determine if the alert is a security event or false positive. This post highlighted an effective technique for triaging an end point and the tools one could use. The entire technique takes minutes to complete from beginning to end. The triage process did not confirm if the activity is a security event but it did determine that additional time and resources need to be spent digging a bit deeper. Doing so would have revealed that the coffee.exe file is malicious and is actually the Win32/Parite file infector. The malware infects all executables on the local drive and network shares. Digging even deeper by performing analysis on coffee.exe (Malwr report and Anubis report) matches the activity identified on the system and even provides more indicators (such as an IP address) one could use to search for other infections in the environment.

Thanks for Reading and Sharing

Sunday, November 2, 2014 Posted by Corey Harrell 3 comments
It was a little over four years ago that I started a new journey. The timing wasn't the best when I took my first step into the blogging world. My family had welcomed our third son into the world, I was doing challenging work for my employer, and I was still pursuing my Master's degree attending school full time. Needless to say, starting a blog would just compete for the little spare time I had. After talking things over with my wife I took the first step by launching this blog: Journey Into Incident Response (jIIr). I've been reflecting on this journey since jIIr surpassed 500,000 page views.

So much has changed in the past four years. I started out with my sights set on incident response and now I'm building out and managing an enterprise-wide incident response and detection program. I have been sharing my journey, my research, and thoughts on jIIr for others to read and learn from. To those who either read one post or numerous posts I wanted to say thank you. Thank you for taking the time to read what I have written. To those who went one step beyond to leave a comment or send me an email I wanted to thank you as well. Blogging is a challenging endeavor and comments are very encouraging.

jIIr started out as a way for me to share back to the digital forensic and incident response community. Shortly after jIIr's launch my motivations changed. What drove me to maintain the blog was no longer sharing back to the community. It's not necessary to share my motivations because at the end of the day what matters most is my ability to help others by sharing what I do. Each jIIr post is an effort to help anyone who may read it. For them to take the information and apply it in their lives and/or work. To the people and organizations who have shared jIIr content either by linking to my posts, retweeting my tweets, forwarding along links, or sharing your thoughts on your own websites: thank you. Thank you for sharing; jIIr would not be reaching the audience it is today without your support. To those who continue to share jIIr content, I cannot say how grateful I am.

The past four years have been a humbling experience. God has blessed me with a passion for DFIR and the ability to communicate in the written form. For as long as I'm capable, I'll continue to do what I do so that at the end of the day others who stumble upon this site may be helped in some way.