Making Incident Response a Security Program Enabler

Sunday, April 26, 2015 Posted by Corey Harrell
Incident response is frequently viewed as a reactive process. As soon as something bad happens, the incident response process is activated to respond to what occurred. This view is similar to insurance. Every month we spend money buying insurance so it is available when we need it. It doesn’t matter if the insurance gets used once in a year or not at all; money is still spent on a monthly basis to buy it. In a way, it’s easy to see the similarity to the incident response process. Resources - such as staffing and technology - are invested in the incident response process. In some organizations there is a sizable investment while in others very little. The hope is something is available when the organization needs it. How can one change an organization’s view of incident response? How can you take a traditionally reactive process and make it into a proactive process that’s an enabler for the organization’s information security program? This post discusses one approach to making incident response a security enabler by addressing: continuous incident response, incident response metrics, root cause analysis, and data analytics.

 

Continuous Incident Response


The traditional incident response models resemble the incident response lifecycle illustrated below that was obtained from the NIST Computer Security Incident Handling Guide.


Image obtained from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf 

The first phase involves the organization preparing for future incidents. The next phase is when an incident is detected and analyzed. This is followed by the containment, eradication, and recovery activities. At times, when trying to remediate an incident, activities cycle back to detection and analysis to determine if the incident was resolved. After the incident is eradicated and the organization returns to normal operations, a post-incident activity is performed to see what did and didn't work out as planned. The lifecycle represents the traditional approach to incident response: incident detected -> organization responds -> incident eradicated -> organization returns to normal operations. This is the traditional reactive incident response process, where the assumption is that nothing is going on until an incident is detected, and after the incident is resolved it goes back to assuming nothing is going on.

To take a traditional reactive incident response process and make it into a proactive process requires incident response to be seen in a different light. Organizations are under constant attack: daily malware infections, daily probing, daily exploit attempts, and daily potential unauthorized access attempts. The model is no longer linear, where an organization waits to detect an incident and then returns to normal operations. The new normal is being under constant attack and being at different stages of the incident response process concurrently. Richard Bejtlich stated in his book The Practice of Network Security Monitoring on page 188, regarding the model below:

“the workflow in Figure 9-2 appears orderly and linear, but that’s typically not the case in real life. In fact, all phases of the detection and response processes may occur at the same time. Sometimes, multiple incidents are occurring; other times, the same incident occupies all four stages at once.”

Image obtained from http://www.nostarch.com/nsm

Richard expressed that incident response is not a linear process with a start and an end; on the contrary, the process can be at different phases at the same time, dealing with different incidents. Anton Chuvakin also touched on the non-linear incident response process in his post Incident Response: The Death of a Straight Line. Not only did he say that “normal -> incident -> back to normal” is no more, but he also summed up the situation organizations find themselves in:

“While some will try to draw a clear line between monitoring (before/after the incident) and incident response (during the incident), the line is getting much blurrier than many think.  Ongoing indicator scans (based on external and internal sources), malware and artifact reversing, network forensics “hunting”, etc all blur the line and become continuous incident response activities.”

The light incident response needs to be seen in is that of a continuous process instead of a linear one. Incident response is not something that starts and ends but an ongoing, cyclical process where an organization is constantly detecting and responding to incidents. A process similar to David Bianco's Intel-Driven Operations Cycle model, shown below, which was obtained from his The Pyramid of Pain: Intel-Driven Detection and Response to Increase Your Adversary's Cost of Operations presentation.

Image obtained from https://speakerdeck.com/davidjbianco/the-pyramid-of-pain-intel-driven-detection-and-response-to-increase-your-adversarys-cost-of-operations

Seeing incident response as a continuous process is something everyone must come to, from security practitioners to incident responders to management. Changing people’s perspectives on incident response will take time, and every opportunity to sell it will need to be seized (don’t sell FUD but lay out the actual threat environment we find ourselves in). In time the conversation will go from viewing incident response as insurance that may or may not be needed to viewing incident response as continuous, where people are detecting and responding to the daily security incidents. The conversation will go from “do we really need to invest in this since we only had a few incidents last year” to “we continue to see these incidents due to this security weakness, so how can we address it since it’s an area of concern.”

Operationalize Incident Response Information


Changing the view of incident response from a linear process to a continuous one is not enough to make it a security program enabler. To be a security program enabler, incident response needs to contribute to the organization’s security strategy to help influence where security resources are focused. Too often incident response tries to influence the security strategy in a reactive manner. The reactive process resembles the following: incident detected -> organization responds -> incident eradicated -> organization returns to normal operations -> incident response recommendations provided. The attempts to influence the security strategy are based on the most recent incident. In essence, recommendations are being made based on a single event instead of being based on trends from numerous events. Don’t get me wrong, there are times when recommendations from a single event do influence the security strategy, but to make incident response a security program enabler there needs to be more.

To reinforce this point, a story about a local credit union from years ago may help. The credit union happened to be located at a busy intersection; its location was very accessible by bus, car, bike, and foot. One day a person walked into the credit union, handed the teller a note, and then walked out with money. As an outsider looking at this single event, there was nothing drastic implemented from any recommendations based on this single robbery. The next week a similar event occurred, with someone again handing the teller a note and walking out with cash. This occurred a few more times, and each robbery was very similar: a person handing a note to the teller without any visible weapons shown. The credit union looked at all the robberies and must have seen this pattern. In response, the credit union implemented a compensating control, and this control was double doors to trap any individual as they tried to exit the bank. After this control was implemented the robberies stopped. This story shows how incident response can become a security program enabler. The first robbery was a single event, and the recommendation may have been to install trap doors. However, installing trap doors takes essential resources from other areas, and this may not be in the best interest of the organization. As more data is collected from different events, a pattern emerges. Now taking essential resources from other areas is an easier decision, since the data analysis shows installing trap doors is not addressing a single event but a recurring issue.

The continuous incident response process needs to move from only providing reactive recommendations to producing intelligence by operationalizing the information produced by enterprise incident response and detection processes. To accomplish this, data and information needs to be captured from the ongoing detection and response activities. Then this data and information is analyzed to produce intelligence to be used by the security program. Some intelligence is used by the response and detection processes themselves but other intelligence (especially ones developed through trend analysis) is reported to appropriate parties to influence the organization’s security strategy. Operationalizing incident response information results in creating intelligence at various levels in the intelligence pyramid.


The book Building an Intelligence-Led Security Program authored by Allan Liska describes the pyramid levels as follows:

"Strategic intelligence is concerned with long-term trends surrounding threats, or potential threats to an organization. Strategic intelligence is forward thinking and relies heavily on estimation – anticipating future behavior based on past actions or expected capabilities."

"Tactical intelligence is an assessment of the immediate capabilities of an adversary. It focuses on the weaknesses, strengths, and the intentions of an enemy. An honest tactical assessment of an adversary allows those in the field to allocate resources in the most effective manner and engage the adversary at the appropriate time and with the right battle plan."

"Operational intelligence is real time, or near real-time intelligence, often derived from technical means, and delivered to ground troops engaged in activity against the adversary. Operational intelligence is immediate, and has a short time to live (TTL). The immediacy of operational intelligence requires that analysts have instant access to the collection systems and be able to put together FINTEL in a high-pressure environment."

As it relates to making the incident response process a security program enabler, the focus needs to be on making the process contribute to the organization’s security strategy by producing tactical and strategic intelligence. Tactical intelligence can highlight the organization’s weaknesses and strengths and then show where security resources can be used more effectively. Strategic intelligence can influence the direction of the organization’s long-term security strategy. Incident response starts to move from being viewed as a reactive process to a proactive one once it starts adding value to other areas in an organization’s security program.

Improve Root Cause Analysis Capabilities


Before one can start to operationalize incident response information to produce intelligence at various levels in the intelligence pyramid, they must first improve their root cause analysis capabilities. Root cause analysis is trying to determine how an attacker went after another system or network by identifying and understanding the remnants they left on the systems involved during the attack. This is a necessary activity for one to discover information during a security incident that can be operationalized. The Verizon Data Breach Investigations Report is an excellent example of the type of information one can discover by performing root cause analysis. The report highlights trends from “time to incident discovery” to “time to compromise” to exploited vulnerabilities to frequency of attack types to hacking actions. None of this data would have been available for analysis if root cause analysis wasn’t completed on these incidents.

Take the hypothetical scenario of a malware infected system. Root cause analysis discovered the attacker compromised the system using a phishing email containing a malicious Word document. At this point there is various data one can then turn into intelligence. At the operational level, the email’s subject line, content, from address, and Word document attachment name can all be documented and then turned into intelligence for response and detection activities. The same can occur for the URL inside the Word document and the malware it downloads. Doing root cause analysis on all infections then makes data available for trend analysis. Is it a pattern for the organization's employees to be socially engineered through Word documents? Can resources be applied in other areas, such as security awareness training, to combat this threat? In time, more and more data can be collected to reveal other trends to help drive security. Performing root cause analysis on each incident is needed to operationalize incident response information to produce intelligence in this manner. The Compromised Root Cause Analysis Model is one model to use, and it is described in the post Compromised Root Cause Analysis Model Revisited.
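As a rough sketch of what documenting those operational-level indicators could look like, the snippet below records the phishing email's attributes and checks later events against them. Every field name and value here is invented for illustration; it is not tied to any particular SIEM or product.

```python
# Hypothetical sketch: recording operational-level indicators from the
# phishing scenario above so they can feed detection and response.
# All field names and values are illustrative, not from a real incident.

phishing_indicators = {
    "email_subject": "Invoice #4721 past due",       # example subject line
    "email_from": "billing@example.com",             # example from address
    "attachment_name": "invoice_4721.doc",           # example attachment name
    "payload_url": "http://example.net/update.bin",  # example download URL
}

def matches_indicator(event, indicators):
    """Return True if any recorded indicator value appears in the event."""
    return any(value in event.get(field, "")
               for field, value in indicators.items())

# A later email event can be checked against the recorded indicators.
event = {"email_subject": "FW: Invoice #4721 past due",
         "attachment_name": "invoice_4721.doc"}
print(matches_indicator(event, phishing_indicators))  # → True
```

Even something this simple turns a one-off root cause finding into reusable detection data; real deployments would feed the same fields into a SIEM watchlist instead.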


Incident Response Metrics


The outcome from performing root cause analysis on each incident is discoverable information. It’s not enough to consistently do root cause analysis to discover information; the information needs to be documented and analyzed to make it into intelligence. Different options are available to document security incident information but in my opinion the best available schema is the VERIS Framework. The “Vocabulary for Event Recording and Incident Sharing (VERIS) is a set of metrics designed to provide a common language for describing security incidents in a structured and repeatable manner.” The VERIS Framework is open and can be modified to meet an organization’s needs.

The schema is well designed, but to support an internal incident response process some modifications may be needed. This post won’t go into great detail about the needed modifications, but I will mention a few that make the schema better support internal incident response.

      - In the Incident Tracking section, to make it easier to track security incidents the following can be added: Incident Severity (to match the incident response process severity for incidents), Hostname (of the targeted system), IP Address (of the targeted system), Username (involved in the incident), and Source IP Address (of the attacker’s system).
      - In the Victim Demographics section, these fields may or may not apply for an internal incident response process. Personally, I don’t see the need for tracking this information if the incident response process supports the same entity.
      - In the Incident Description section, the biggest change is outlining the expected values for the vectors and vulnerabilities. For example, for the vulnerabilities, list out each possible vulnerable application - such as Java - instead of allowing for specific CVEs. This reduces the amount of work needed for root cause analysis without losing too much on the metrics side.
      - In the Discovery and Response section, make sure to account for the various discovery methods the organization may use to detect incidents, as well as the intelligence sources behind those methods. This slight change enables an organization to measure how it is detecting security incidents and to evaluate the return on investment for different intelligence sources.
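To make the additional tracking fields concrete, a single incident record might look like the sketch below. This is a simplified illustration, not a complete or validated VERIS document, and every value (IDs, hostnames, addresses) is invented.

```python
# Sketch of one incident record carrying the additional Incident
# Tracking fields suggested above. Simplified for illustration; a real
# VERIS record has many more required attributes. All values invented.

incident = {
    "incident_id": "IR-2015-042",        # hypothetical internal tracking ID
    "severity": "medium",                # added: matches IR process severity
    "hostname": "WKSTN-017",             # added: targeted system
    "ip_address": "10.0.5.23",           # added: targeted system
    "username": "jdoe",                  # added: account involved
    "source_ip": "198.51.100.7",         # added: attacker's system
    "discovery_method": "NIDS alert",    # added: how the incident was found
    "intel_source": "open source feed",  # added: intelligence behind detection
}

def tracking_summary(inc):
    """One-line summary suitable for an incident tracking list."""
    return "{incident_id}: {hostname} ({ip_address}), attacker {source_ip}".format(**inc)

print(tracking_summary(incident))
```

Keeping records in a consistent shape like this is what later makes the metrics and trend analysis possible.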

Data Analysis


Information that is documented is only data and does not become intelligence until it is analyzed and refined so it is useful to others. There are different options available for organizations to produce intelligence from the information discovered during root cause analysis. The book Data-Driven Security: Analysis, Visualization and Dashboards goes into detail about how one can do data analysis with free and/or open source tools. The route I took allowed me to focus on the incident response process without getting bogged down trying to create visualizations to identify trends. At my company (this is the only item in this post directly tied to my employer, and I only mention it in hopes it helps my readers) we went with a license for Tableau Desktop, and I bought a personal copy of the book Tableau Your Data!: Fast and Easy Visual Analysis with Tableau Software. The combination of Tableau Desktop and the VERIS Framework is very effective at producing strategic and tactical intelligence that can be consumed by the security program. In minutes, you can create visualizations to highlight which departments in an organization are most susceptible to phishing attacks or to quickly identify the trends explaining how malware is entering the organization. The answers and intelligence one can gain from the incident response data is only limited by one’s creativity and the ability of those consuming the intelligence.
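Tableau is one way to surface these trends; the underlying idea, though, is just aggregation over consistently recorded incident data. A minimal sketch of the "which departments are most susceptible to phishing" question, with records and values invented purely for illustration:

```python
# Minimal sketch of trend analysis over incident records: counting
# phishing incidents per department. The records below are invented
# examples, not real incident data.
from collections import Counter

incidents = [
    {"vector": "phishing", "department": "Finance"},
    {"vector": "phishing", "department": "Finance"},
    {"vector": "web drive-by", "department": "IT"},
    {"vector": "phishing", "department": "HR"},
]

phishing_by_dept = Counter(
    i["department"] for i in incidents if i["vector"] == "phishing"
)

# most_common() orders departments by how often they were targeted
print(phishing_by_dept.most_common())  # → [('Finance', 2), ('HR', 1)]
```

The same counting, done over months of VERIS-style records, is exactly the kind of tactical intelligence that can redirect security awareness resources.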

Making Incident Response a Security Program Enabler


The approach an organization can take to move incident response from a reactive process to a proactive one involves the following steps:

      - Improving an organization's incident response capabilities
      - Improving an organization's root cause analysis capabilities
      - Improving an organization’s security monitoring capabilities
      - Influencing others to see incident response as a continuous process
      - Operationalizing incident response information
      - Collecting and documenting data for the organization’s incident response metrics
      - Analyzing the organization’s incident response metrics to produce intelligence
      - Presenting the intelligence to appropriate stakeholders

Making incident response a security program enabler is a gradual process requiring organization buy-in and resources to make it happen. As DFIR practitioners, we can only be the voice in the wilderness telling others incident response can be more than a reactive process. It can be more than an insurance policy. It can be a continuous process enabling an organization’s security strategy and helping guide how security resources are used. A voice hoping to influence others to make the right decision to better protect their organization.

Python: print “Hello DFIR World!”

Wednesday, April 8, 2015 Posted by Corey Harrell
Coursera's mission is to "provide universal access to the world's best education." Judging by their extensive course listing, it appears as if they are delivering on their mission, since the courses are free for anyone to take. I knew about Coursera for some time, but only recently did I take one of their courses, Programming for Everybody (Python). In this post I'm sharing some thoughts about my Coursera experience, the course I took, and how I immediately used what I learned.

Why Python? Why Coursera?


Python is a language used often in information security and DFIR. Its usage varies from simple scripts to extensive programs. My interest in Python was modest; I wanted to be able to modify (if needed) the Python tools I use and to write automation scripts to make my job easier. Despite the wealth of resources available to learn Python, I wanted a more structured environment to learn the basics: an environment that leverages lectures, weekly readings, and weekly assignments to explore the topic. My plan was to learn the basics and then explore how Python applies to information security using the books Black Hat Python and Violent Python. Browsing through the Coursera offerings, I found the course Programming for Everybody (Python). The course “aims to teach everyone to learn the basics of programming computers using Python. The course has no pre-requisites and avoids all but the simplest mathematics.” Teaching the basics over a span of 10 weeks without the traditional approach of learning to code through mathematics, the course was exactly what I was looking for.

Programming for Everybody (Python)


I’m not providing a full-fledged course review, but I did want to provide some thoughts on this course. The course itself is “designed to be a first programming course using the popular Python programming language.” This is important and worth repeating. The course is designed to be someone’s first programming course. If you already know how to code in a different language, then this course isn’t for you. I didn’t necessarily fit the target audience since I know how to script in both batch and Perl. However, I knew this was a beginner’s course going in, so I expected things would move slowly. I could easily overlook this one aspect since my interest was to build a foundation in Python. The course leveraged some pretty cool technology for an online format. The recorded lectures used a split screen between the professor, his slides, and his ability to write on the slides as he taught. The assignments had an auto grader, where students complete assignments by executing their programs and the grader confirms if the program was written correctly. The textbook is Python for Informatics: Exploring Information, which focuses more on trying to solve data analysis problems instead of the math problems of traditional programming texts. The basics covered include: variables, conditional code, functions, loops/iteration, strings, files, lists, dictionaries, tuples, and regular expressions.

Overall, spending the past 10 weeks completing this course was time well spent. Sure, at times I wished things moved faster, but I did achieve what I wanted: exploring the basics of the Python language so I have a foundation prior to exploring how the language applies to security work. The last thing I want to mention about the course is something I highly respect: the entire course, from the textbook to the lecture videos, is licensed under a Creative Commons Attribution license, making it available for pretty much anyone to use.

Applying What I Learned


The way I tend to judge courses, trainings, and books is by how much of the content can be applied to my work. If the curriculum is not relevant to one’s work, then what is the point in wasting time completing it? It’s just my opinion, but judging courses and trainings in this manner has proven to be effective. To illustrate this point as it applies to the Programming for Everybody course, I’m showing how the basics I learned solved a recent issue. One issue I was facing was how to automate parsing online content and consuming it in a SIEM. This is a typical issue for those wishing to use open source threat intelligence feeds. One approach is to manually parse it into a machine-readable form that your SIEM and tools can use. Another, better approach is to automate as much as possible through scripting. I took the latter approach by creating a simple script to automate this process. Those interested in Python usage in DFIR should check out David Cowen's Automating DFIR series or Tom Yarrish's Year of Python series.

There are various open source threat intelligence feeds one can incorporate into their enterprise detection program. Kyle Maxwell’s presentation Open Source Threat Intelligence touched on some of them. For this post, I’m only discussing one, and it was something I was interested in knowing how to do. Tor is an anonymity service that enables people to hide where they are coming from as they surf the Internet. Tor has a lot of legitimate uses, and just because someone is using it does not mean they are doing something wrong. Being able to flag users connecting to your network from Tor can add context to other activity. Is the SQL injection IDS alert a false positive? Is the SQL injection IDS alert coming from someone who is also using Tor a false positive? See what I mean by adding context. This was an issue that needed a Python solution (or at least a solution where I could apply what I learned).

To add Tor context to activity in my SIEM, I first had to identify the IP addresses of the Tor exit nodes. Users of the service will have the IP address of the exit node they are going through. The Tor Project FAQ provides an answer to the question "I want to ban the Tor network from my service." After trying to discourage people from blocking Tor, it presents two options: using either the Tor exit relay list or a DNS-based list. The Tor exit relay list webpage has a link to the current list of exit addresses. The screenshot below shows how this information is presented:


Now we’ll explore the script I wrote to parse the Tor exit node IP addresses into a form my SIEM can consume, which is a text file with one IP address per line. The first part - as shown in the image below - imports the urllib2 module, which is used to open URLs. This part wasn’t covered in the course but wasn’t too difficult to figure out by Googling. The last line in the image creates a dictionary called urls. A dictionary associates a key with a value; in this case the key is tor-exit and the value is the URL of the Tor exit relay list. Leveraging a dictionary allows the script to be extended to support other feeds without significant changes to the script.


The next portion of the script, shown below, is where the first for loop occurs. The for loop processes each entry (key and value pair) in the urls dictionary. The try and except is a method to account for errors, such as a URL not working. Inside the try section the URL is opened into a variable named file and then read into a variable named data using the urllib2 readlines() option. Lastly, a file is created to store the output, named after the key, and the file handle is named output.


The next part of the script - image below - is specific to each threat feed being parsed. This accounts for the differences in the way threat feeds present data. The if statement checks to see if the key matches “tor-exit,” and if it does, the second for loop executes. This for loop reads each line in the data variable (hence the data listed at the URL). As each line is read, additional actions are performed, such as skipping blank lines and any line that doesn’t start with the string “ExitAddress.” For the lines that do start with this string, the line is broken up into a list named words. Basically, it breaks the line up into different values using the space as a separator. The IP address is the second value, so it is contained in the second index location in the words list (words[1]). The IP address is then written to the output file, and after each line is processed a message is displayed saying processing completed.
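Since the original screenshots are not reproduced here, the following is a reconstruction of the parsing logic just described. The original script used Python 2's urllib2 to fetch the list; this sketch is Python 3 and parses a sample of the exit-list format directly, so it runs without network access. The sample entries are invented (fingerprints zeroed out, documentation IP addresses), but the "ExitAddress" record layout matches the list format discussed above.

```python
# Reconstruction (not the original code) of the exit-list parsing logic
# described above. The sample below mimics the list's record format;
# its fingerprints and IPs are invented placeholders.

SAMPLE = """ExitNode 0000000000000000000000000000000000000000
Published 2015-04-06 12:00:00
LastStatus 2015-04-06 13:02:47
ExitAddress 192.0.2.10 2015-04-06 13:04:07

ExitNode 1111111111111111111111111111111111111111
Published 2015-04-06 09:32:46
LastStatus 2015-04-06 10:02:48
ExitAddress 198.51.100.7 2015-04-06 10:04:07
"""

def parse_exit_addresses(data):
    """Return one IP per ExitAddress line, skipping everything else."""
    ips = []
    for line in data.splitlines():
        line = line.strip()
        if not line.startswith("ExitAddress"):
            continue  # skip blank lines and the other record fields
        words = line.split()  # break the line up on whitespace
        ips.append(words[1])  # the IP address is the second value
    return ips

if __name__ == "__main__":
    # one IP address per line, the form the SIEM consumes
    print("\n".join(parse_exit_addresses(SAMPLE)))
```

In the real script the SAMPLE text would instead come from fetching the exit relay list URL, and the resulting IPs would be written to the per-feed output file.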


The screenshot below shows the script running.


The end result is a text file containing the Tor exit IP addresses with one address per line. This text file can then be automatically consumed by my SIEM or I can use it when analyzing web logs to flag any activity involving Tor.


It’s Basic but Works


Harlan recently said in his Blogging post, “it doesn't matter how new you are to the industry, or if you've been in the industry for 15 years...there's always something new that can be shared, whether it's data, or even just a perspective.” My hope is that this post will be useful to others who are not programmers but want to learn Python. Coursera is a good option that can teach you the basics. Even just learning the basics can extend your DFIR capabilities, as demonstrated by my simple script.

Compromised Root Cause Analysis Model Revisited

Wednesday, March 11, 2015 Posted by Corey Harrell
How? The one question that is easy to ask but can be very difficult to answer. It's the question I kept asking myself over and over as I read article after article discussing publicized breaches and compromises. Each article alluded to the answer about how the breach or compromise occurred in the first place, but each one left something out: every single one omitted the details that influenced its conclusions. As a result, I was left wondering how they figured out how the attack occurred in the first place. It was the question everyone alluded to, and everyone said to perform root cause analysis to determine the answer. They didn't elaborate on how to actually do root cause analysis, though. Most incident response literature echoes the same sentiment: do root cause analysis, while omitting the most critical piece explaining how to do it. I asked my question of a supposed "incident responder" and their response was along the lines of "you will know it when you see it." Their answer, along with every other answer on the topic, was not good enough. What was needed was a repeatable, methodical process one can use to perform root cause analysis. The type of methodical process found in the Compromise Root Cause Analysis Model.

I developed the Compromise Root Cause Analysis Model three years ago to fulfill the need for a repeatable investigative process for doing root cause analysis. In this post I'm revisiting the model and demonstrating its usefulness by outlining the following:

        - Exploring Locard’s Exchange Principle
        - Exploring Temporal Context
        - Exploring Attack Vectors
        - Exploring the Compromise Root Cause Analysis Model
        - The Model is Cyclical
        - Applying the Compromise Root Cause Analysis Model
                * Webserver Compromise

Exploring Locard’s Exchange Principle


The essential principle in the Compromise Root Cause Analysis Model is Locard’s Exchange Principle. This principle states that “when two objects come into contact, something is exchanged from one to the other.” Locard’s Exchange Principle is typically explained using examples from the physical world. When one object – such as someone’s hand – comes into contact with another object – such as a glass – something is exchanged from one to the other. In this example, the glass retains traces of oils from the person’s hand, skin flakes, and even fingerprints.

The principle is not only limited to the physical world; it applies to the digital world as well. Harlan Carvey’s example demonstrated the principle in the digital world as follows: “well, in essence, whenever two computers come "into contact" and interact, they exchange something from each other.” The principle is not only limited to computers; it applies to everything such as routers, switches, firewalls, or mobile devices. The essence of this principle for the Compromise Root Cause Analysis Model is:

When an attacker goes after another system, the exchange will leave remnants of the attack on the systems involved. There is a transfer between the attacker’s system(s), the targeted system(s), and the networking devices connecting them together.

The transfer between the systems and networks involved in the attack will indicate the actual attack used. Identifying and exploring the remnants left by this transfer is what enables the question of “how did the attack occur in the first place” to be answered.

Exploring Temporal Context


The second principle and one that supports the Compromise Root Cause Analysis Model is the psychology principle of proximity. The principle of proximity is one of the Gestalt laws of grouping and states that “when we see stimuli or objects that are in close proximity to each other, we tend to perceive them as being grouped together.” As it relates to the Compromise Root Cause Analysis Model, the grouping is based on the temporal relationship between each object. Temporal proximity impacts the model in two ways.

The first way temporal proximity impacts the Compromise Root Cause Analysis Model is by enabling the grouping of remnants related to an attack. When an attacker goes after another system, remnants are left in various places within the system and the network the system is a part of. Networking device logs showing the network activity, application logs showing what the intruder was doing, and remnants on the system showing what the intruder accomplished are only a few of the places where these artifacts could be located. The attacker’s actions are not the only remnants left within the network and system; the organization and its employees create remnants every day from their activity, as do the information technology devices themselves through their normal operation. Temporal proximity enables grouping the remnants left by an attacker throughout a network and system by their temporal relationship to each other. Remnants that occur within a short timeframe of each other can be grouped together, while remnants outside of this timeframe are excluded. Other factors are involved in identifying the attacker’s remnants among normal activity, but temporal proximity is one of the most significant.
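The grouping idea can be sketched in a few lines of Python: sort the event timestamps, and start a new group whenever the gap to the previous event exceeds a chosen window. The window size and event times below are arbitrary illustrations, not recommendations.

```python
# Sketch of grouping remnants by temporal proximity. Timestamps and the
# five-minute window are arbitrary examples for illustration only.
from datetime import datetime, timedelta

def group_by_proximity(events, window):
    """Group timestamps so consecutive gaps stay within the window."""
    groups = []
    for ts in sorted(events):
        if groups and ts - groups[-1][-1] <= window:
            groups[-1].append(ts)  # close enough: same group
        else:
            groups.append([ts])    # gap too large: start a new group
    return groups

events = [
    datetime(2015, 3, 1, 10, 0, 5),   # cluster of attack-time remnants
    datetime(2015, 3, 1, 10, 1, 30),
    datetime(2015, 3, 1, 10, 2, 10),
    datetime(2015, 3, 1, 14, 45, 0),  # unrelated later activity
]

groups = group_by_proximity(events, timedelta(minutes=5))
print(len(groups))  # → 2: the 14:45 event falls into its own group
```

Real analysis weighs more than time, of course, but this is the temporal-relationship filter the model leans on.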

The second way temporal proximity impacts the Compromise Root Cause Analysis Model is that the time that lapses between when an attacker attacks a system and when an investigation is conducted affects the ability to identify and group the remnants left by the attacker. The reason for this impact is that “time is what permits other forces to have an effect on the persistence of data.” The remnants left by the attacker are in the form of data on information technology devices. The more time that goes by after these remnants are left, the more opportunity there is for them to be changed and/or removed. Logs can be overwritten, files modified, or files deleted through the activities of the organization and its employees along with the normal operation of the information technology devices. The more time that lapses between when the attack occurred and when the investigation begins, the greater the opportunity for remnants to disappear and the harder it becomes to group the remaining remnants together. The Compromise Root Cause Analysis Model can still be used to identify and group these remnants, but it becomes much more difficult as more time lapses between the initial attack and the investigation.
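As an illustration, the grouping by temporal proximity described above can be sketched in a few lines of Python. The events and the five-minute window below are invented for illustration, not taken from any real investigation:

```python
# Hypothetical sketch: grouping remnants by temporal proximity.
# Events within `window` of the previous event join the same group;
# anything further away starts a new group.
from datetime import datetime, timedelta

def group_by_proximity(events, window=timedelta(minutes=5)):
    """Group (timestamp, description) tuples by temporal proximity."""
    events = sorted(events, key=lambda e: e[0])
    groups = []
    for ts, desc in events:
        if groups and ts - groups[-1][-1][0] <= window:
            groups[-1].append((ts, desc))
        else:
            groups.append([(ts, desc)])
    return groups

remnants = [
    (datetime(2015, 3, 10, 22, 8, 30), "login bypass request"),
    (datetime(2015, 3, 10, 22, 9, 10), "scheduled task created"),
    (datetime(2015, 3, 10, 22, 11, 15), "web shell accessed"),
    (datetime(2015, 3, 10, 14, 0, 0), "routine backup job"),  # unrelated noise
]

groups = group_by_proximity(remnants)
# The three attack remnants cluster into one group; the backup job stands alone.
```

In practice other factors (source address, account, process lineage) refine the grouping, but even this crude time-window pass separates attack activity from background noise.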

Exploring Attack Vectors


Root cause analysis is trying to determine how an attacker went after another system or network by identifying and understanding the remnants they left on the systems involved during the attack. In essence, the analysis is identifying the attack vector used to compromise the system. It is crucial to explore what an attack vector is to see how it applies to the Compromise Root Cause Analysis Model.

SearchSecurity defines an attack vector as "a path or means by which a hacker (or cracker) can gain access to a computer or network server in order to deliver a payload or malicious outcome." Based on this definition, the attack vector can be broken down into three separate components. The path or means is the exploit used, the payload is the outcome of the exploit, and the delivery mechanism is what delivers the exploit and/or the payload to the target. The definition combines the delivery mechanism and exploit together but in reality these are separated. The exploit, payload, and delivery mechanism can all leave remnants (or artifacts) on the compromised system and network and these artifacts are used to identify the attack vector used.
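The three-part breakdown above can be captured in a tiny data structure. This is a hypothetical sketch; the field names and the example values are invented, not part of any standard:

```python
# Hypothetical sketch: the three components of an attack vector
# as described above. Example values are invented.
from dataclasses import dataclass

@dataclass
class AttackVector:
    delivery_mechanism: str  # what delivers the exploit and/or payload
    exploit: str             # what takes advantage of the weakness
    payload: str             # the outcome the exploit produces

vector = AttackVector(
    delivery_mechanism="HTTP request to the admin API",
    exploit="ColdFusion administrative login bypass",
    payload="access to the administrator panel",
)
```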

Exploit


An exploit is defined as "a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug, glitch or vulnerability in order to cause unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized)." An exploit takes advantage of a weakness in a system to cause activity on that system desirable to the attacker. Exploits can target vulnerabilities in operating systems, applications, or the people using the system. In accordance with Locard’s Exchange Principle, when the exploit comes in contact with the system containing the weakness, remnants are left by the attacker. Identifying these exploit artifacts left on a system is one piece of the puzzle for identifying the attack vector.

Payload


A payload is defined (in security) as “the cargo of a data transmission.” A payload is the activity on a system desirable to the attacker that was caused by an exploit taking advantage of a weakness. In accordance with Locard’s Exchange Principle, when the payload comes in contact with the system, remnants are left by the attacker. Identifying these payload artifacts left on a system is another piece of the puzzle for identifying the attack vector.

Delivery Mechanism


A delivery mechanism is defined as “a method by which malicious software places a payload into a target computer or computer network.” The delivery mechanism is what delivers the exploit and/or payload to the system to enable the activity desirable to the attacker to occur. Similar to the exploit and payload, when a delivery mechanism comes in contact with the system, remnants are left by the attacker. Identifying these delivery mechanism artifacts left on a system is the final piece of the puzzle for identifying the attack vector.

Exploring the Compromise Root Cause Analysis Model


When an attacker goes after another system, the exchange leaves artifacts of the attack on the systems involved. These artifacts are identified during an investigation and grouped together based on their temporal proximity to one another. Root cause analysis identifies the attack vector used by determining which of the identified artifacts are related to the exploit, payload, and delivery mechanism(s). The Compromise Root Cause Analysis Model is a methodical process for organizing information and identified artifacts during an investigation to make it easier to answer the question of how a compromise occurred. The model is not a replacement for any existing models; rather, it’s a complementary model to help discover information related to a system compromise. The Compromise Root Cause Analysis Model organizes the artifacts left on a network and/or system after being attacked into the following categories: source, delivery mechanism, exploit, payload, and indicators. The relationship between the categories is shown below.



Source


At the core of the model is the source of the attack. The source is where the attack originated from. Attacks can originate from outside or within an organization’s network; it all depends on who the attacker is. An external source is anything residing outside the control of an organization or person. An example is attacks against a web application coming from the Internet. Attacks can also be internal, which is within the network and under the control of an organization or person. An example is an employee who is stealing data from a company file server.

The artifacts left behind by the attacker on the system are used to determine where the attack came from. For example, if the attack originated from the Internet then the data left on the systems will indicate this. Firewall logs, web application logs, proxy server logs, authentication logs, and email logs will all point to the attacker’s location outside of the organization’s network.

Delivery Mechanism


Proceeding to the next layer is the first delivery mechanism. This is the mechanism used to send the exploit to the system. The mechanism used is dependent on the attacker’s location. Attackers external to the organization may use avenues such as email, network services (e.g. HTTP, SSH, FTP), or removable media. Attackers internal to the organization may use avenues such as physical access or file sharing protocols.

The artifacts left behind by the attacker on the system are used to determine how they sent the exploit to the system. Where and what the artifacts are is solely dependent on the method used. If the method was HTTP then web proxy logs, web browser histories, or web application logs will contain the remnants from the attacker. If the method was email then the email gateway logs, client email storage file, or user activity involving email will contain the remnants from the attacker.

Exploit


Continuing outward to the next layer is the exploit. The exploit is what was sent to take advantage of a vulnerability. As mentioned previously, vulnerabilities can be present in a range of items: from operating systems to applications to databases to network services to the person using the computer.
When a vulnerability is exploited it leaves specific artifacts on the system, and these artifacts can identify the weakness targeted by the attacker. Where and what the artifacts are is solely dependent on what weakness was targeted. The Applying the Model section illustrates these artifacts for one vulnerability.

Delivery Mechanism


The next layer is the second delivery mechanism. A successful exploit may result in a payload being sent to the system; this is what the outer delivery mechanism is for. If the payload has to be sent to the system then there may be artifacts showing this activity. This is the one layer that may not always be present: there are times when the payload is bundled with the exploit, or the payload simply provides access to the system. Similar to the exploit, where and what the artifacts are is solely dependent on what the exploit was.

Payload


The next layer outlines the desired end result of any attack: to deliver a payload or malicious outcome to the system. The payload can include a number of actions, ranging from unauthorized access to denial of service to remote code execution to escalation of privileges. The payload artifacts left behind will depend on what action was taken.

Indicators


The last layer in the model is the indicators layer. This layer is not only where the information and artifacts about how the attack was detected go, but it also encompasses all of the artifacts showing the post-compromise activity. The reason for organizing all the other remnants left by the attacker into this layer is to make it easier to identify the attack vector artifacts (exploit, payload, and delivery mechanisms). This results in the layer being broad, since it contains all of the post-compromise artifacts such as downloading files, malware executing, network traversal, or data exfiltration.
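The five layers can be treated as a simple case container when working an incident. This is only a hypothetical sketch of that bookkeeping; the layer names come from the model, while the helper names and example artifacts are invented:

```python
# Hypothetical sketch: filing discovered artifacts into the model's layers.
# Layer names are from the model; example artifacts are invented.
LAYERS = ("source", "delivery_mechanism", "exploit", "payload", "indicators")

def new_case():
    """Start an empty case with one bucket per layer."""
    return {layer: [] for layer in LAYERS}

def file_artifact(case, layer, artifact):
    """Record an artifact under the layer it belongs to."""
    if layer not in case:
        raise ValueError(f"unknown layer: {layer}")
    case[layer].append(artifact)

case = new_case()
file_artifact(case, "indicators", "IDS alert on Zeus C2 traffic")
file_artifact(case, "exploit", "drive-by landing page in browser history")
file_artifact(case, "source", "compromised organization website")
```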

The Model is Cyclical


The Compromise Root Cause Analysis Model is a way to organize information and artifacts to make it easier to answer questions about an attack. More specifically, to answer: how and when did the compromise occur? Information or artifacts about the compromise are discovered by completing examination steps against any relevant systems involved with the attack. The model is cyclical: as each new system is discovered, the model is used to determine how that system was compromised. This ongoing process continues until each system involved with the attack has been examined to confirm whether it truly was a part of the attack.
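The cyclical process can be sketched as a simple worklist: examine a system, and if its source layer points at another internal system, queue that system next. This is a hypothetical sketch; the `findings` mapping stands in for the results of applying the model to each system and is invented data:

```python
# Hypothetical sketch of the cyclical process: investigate each system,
# queueing any newly discovered upstream system until none remain.
# `findings` maps a system to the source its root cause analysis points at
# (None means the source lies outside the organization).
from collections import deque

def run_cycle(start, findings):
    """Return the systems examined, in order, following each discovered source."""
    examined, queue, seen = [], deque([start]), {start}
    while queue:
        system = queue.popleft()
        examined.append(system)            # apply the model to this system
        upstream = findings.get(system)    # where its source layer points
        if upstream and upstream not in seen:
            seen.add(upstream)
            queue.append(upstream)
    return examined

findings = {"employee laptop": "org web server", "org web server": None}
order = run_cycle("employee laptop", findings)
# order == ["employee laptop", "org web server"]
```

This mirrors the scenario below: the laptop's source points at the organization's web server, the web server's source points at the Internet, and the cycle stops.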

To illustrate, take the hypothetical scenario of an IDS alert indicating an organization’s employee laptop is infected with malware. The IDS signature that flagged the network traffic is shown below (the signature was obtained from the Emerging Threats emerging-botcc.rules). As can be seen in the rule, the laptop was flagged for visiting an IP address associated with the Zeus Trojan.

 
The network packet captured in the IDS alert indicates the employee is a remote user connected through the organization’s VPN. The network diagram below shows the organization’s network layout and where this employee’s laptop is located.


The investigation into the employee’s laptop - remotely over the network – located the Zeus Trojan on the laptop. The examination continued by doing root cause analysis to determine how the laptop became infected in the first place. The employee was surfing the Internet prior to connecting to the organization’s network through the VPN. A drive-by attack successfully compromised the laptop when the employee visited the organization’s website. The IDS alerted on the infection once the laptop connected through the VPN. The investigation now uncovered another system involved with the attack (organization’s web server) and its location is shown below.


The organization’s main website is compromised and serving malware to its visitors. The investigation continues by moving to the compromised web server. The Root Cause Analysis Model is applied to the server to determine how it became compromised in the first place. The answer was an attacker found the webserver was running an outdated Joomla plug-in and exploited it. The attacker eventually leveraged the compromised web server to deliver malware to its visitors.

In this hypothetical scenario, the Compromise Root Cause Analysis Model was initially applied to a compromised laptop. The source of the attack pointed to another system under the control of the organization. The investigation continued to the newly discovered system by applying the Compromise Root Cause Analysis Model against it. The attack vector pointed to an attacker from the Internet so at this point all of the systems involved in the attack have been investigated and the root cause identified. If there were more systems involved then the cyclical process continues until all systems are investigated. The Compromise Root Cause Analysis Model enabled the attack vector for each system to be determined and the incident information discovered can then be further organized using other models. For example, the overall attack can be described using the Lockheed Martin's Cyber Kill Chain model.

Applying the Compromise Root Cause Analysis Model


The Compromise Root Cause Analysis Model is a way to organize information and artifacts to make it easier to answer questions about a compromise. The model can be applied to systems to either confirm how they were compromised or to determine if they were compromised. The article Malware Root Cause Analysis goes into detail about how to apply the model for a malicious code incident involving a single system. However, the model is not limited to malicious code incidents. It can be applied to any type of security incident including: unauthorized access, denial of service, malicious network traffic, phishing, and compromised user accounts. To demonstrate the model’s versatility, it will be applied to a hypothetical security incident using data from a published article. The incident is a compromised ColdFusion webserver as described in the An Eye on Forensics article A Cold Day in E-Commerce - Guest Post. The data referenced below was either influenced/borrowed from the previously mentioned article or the Coldfusion for Pentesters presentation, or made up to appear realistic.

Webserver Compromise


An IDS alert flags some suspicious network traffic for an external system trying to connect to an organization’s ColdFusion web server located in their DMZ. The organization monitors for access attempts to the ColdFusion administrator web panel, including access to features such as scheduling tasks. The external system triggered the IDS signature shown below because it accessed ColdFusion’s scheduleedit interface located at hxxp://www.fake_site.com/CFIDE/administrator/scheduler/scheduleedit.cfm on an established session.


The IDS alert is concerning because of what accessing scheduleedit means. One method an attacker can use to upload code onto a compromised ColdFusion server is by leveraging scheduled tasks. The attacker can schedule a task, point it to their program’s location on a different server, and then have the task save the program locally to the ColdFusion server for them to use (see page 85 in this presentation). Accessing the interface to edit scheduled tasks is reflected by “scheduleedit” appearing in the URL. The IDS alert is triaged to determine if the ColdFusion server was successfully compromised and if an attacker was uploading anything to the server using the scheduled tasks feature.

The Coldfusion instance is running on a Windows 2008 server with IIS and its IP address is 192.168.0.1. The IIS log was reviewed for the time in question to see the activity around the time the IDS alert triggered.

2015-03-10 22:09:00 192.168.0.1 GET /CFIDE/Administrator/scheduler/scheduletasks.cfm - 80 – X.X.X.X fake-useragent  200 0 0 5353

2015-03-10 22:09:10 192.168.0.1 GET /CFIDE/Administrator/scheduler/scheduleedit.cfm submit=Schedule+New+Task 80 - X.X.X.X fake-useragent 200 0 0 5432

2015-03-10 22:09:15 192.168.0.1 GET /CFIDE/Administrator/scheduler/scheduletasks.cfm runtask=z&timeout=0 80 – X.X.X.X fake-useragent 200 0 0 1000

2015-03-10 22:11:15 192.168.0.1 GET /CFIDE/shell.cfm - 80 – X.X.X.X fake-useragent 200 0 0 432

The IIS logs showed the activity that tripped the IDS sensor occurred at 2015-03-10 22:09:10 when the external system with IP address X.X.X.X scheduled a new task successfully. Notice the 200 HTTP status code indicating the request completed successfully. This single entry answers one of the questions. The attacker did compromise the Coldfusion server and has administrator rights to the Coldfusion instance because they were able to access the schedule tasks area within the administrator panel. The next log entry shows the scheduled task named “z” executed at 2015-03-10 22:09:15 and shortly thereafter the attacker accessed a file named “shell.cfm”. Applying the Root Cause Analysis Model to this incident results in this activity along with the IDS alert being organized into the indicators layer. The activity is post compromise activity and the model is being used to identify the attack vector. The investigation continues to see what remnants the attacker left in the logs just prior to tripping the sensor while trying to upload their web shell.
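This kind of triage can be partially automated by scanning the IIS log for requests to the administrator scheduler paths. The sketch below is hypothetical: it assumes the W3C field layout shown above (date, time, server IP, method, uri-stem, uri-query, ...), and the log lines are invented to mirror the entries in this scenario:

```python
# Hypothetical sketch: flag ColdFusion admin scheduler/adminapi requests
# in an IIS W3C-format log. Field positions assume the layout shown above.
SUSPICIOUS = ("/cfide/administrator/scheduler/", "/cfide/adminapi/")

def triage(lines):
    """Return (timestamp, uri-stem) for each request hitting a watched path."""
    hits = []
    for line in lines:
        fields = line.split()
        if len(fields) < 5:
            continue                      # skip comments/short lines
        uri = fields[4].lower()           # uri-stem field
        if any(uri.startswith(p) for p in SUSPICIOUS):
            hits.append((fields[0] + " " + fields[1], fields[4]))
    return hits

log = [
    "2015-03-10 22:09:00 192.168.0.1 GET /CFIDE/Administrator/scheduler/scheduletasks.cfm - 80 - X.X.X.X fake-useragent 200 0 0 5353",
    "2015-03-10 22:09:10 192.168.0.1 GET /CFIDE/Administrator/scheduler/scheduleedit.cfm submit=Schedule+New+Task 80 - X.X.X.X fake-useragent 200 0 0 5432",
    "2015-03-10 22:11:15 192.168.0.1 GET /CFIDE/shell.cfm - 80 - X.X.X.X fake-useragent 200 0 0 432",
]
hits = triage(log)
# Flags the two scheduler requests; shell.cfm falls outside the watched paths
# and would be caught by other indicators.
```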

The IIS log was reviewed to see what occurred prior to 2015-03-10 22:09:10 for the attacker’s IP address X.X.X.X. A few records are listed below:

2015-03-10 22:08:30 192.168.0.1 GET /CFIDE/adminapi/administrator.cfc method=login&adminpassword=&rdsPasswordAllowed=true 80 – X.X.X.X fake-useragent 200 0 0 432

2015-03-10 22:08:40 192.168.0.1 GET /CFIDE/administrator/images/icon.jpg 80 – X.X.X.X fake-useragent 200 0 0 432

The prior activity shows the attacker requesting a strange URL followed by successfully accessing the icon.jpg image file. Searching on the strange URL reveals it’s an Adobe ColdFusion Administrative Login Bypass exploit, and when successful it provides access to the admin panel. This remnant is organized into the exploit layer. The payload of this exploit is direct access to the admin panel; there is no delivery mechanism for the payload. When the admin panel is accessed certain files are loaded, such as images. In this scenario one of the images loaded by default is the file icon.jpg. This remnant indicates the attacker successfully accessed the admin panel, which means the exploit worked and the payload was admin access. The access to the icon.jpg file is organized into the payload layer. At this point the following layers in the Root Cause Analysis Model have been completed: indicators, payload, delivery mechanism, and exploit. The remaining layers are the delivery mechanism for the exploit and the source. The attacker used a tool or web browser to attack the server, so the delivery mechanism for the exploit is HTTP and the source of the attack is somewhere on the Internet.

The Compromise Root Cause Analysis Model was applied to the hypothetical web compromise security incident, making it easier to review the remnants left by the attacker and identify the attack vector they used.

Root Cause Analysis Is Easier with a Methodical Process


The Compromise Root Cause Analysis Model is a cyclical, methodical process one can use to perform root cause analysis. The model is a way to organize information and artifacts discovered during investigations for each system involved in the attack. The model is a repeatable investigation process enabling the questions of how and when the compromise occurred to be answered.



References

Carvey, H. (2005). Locard's Exchange Principle in the Digital World. Retrieved from http://windowsir.blogspot.com/2005/01/locards-exchange-principle-in-digital.html

Harrell, C. (2010). Attack Vector Artifacts. Retrieved from http://journeyintoir.blogspot.com/2010/11/attack-vector-artifacts.html

Harrell, C. (2012). Compromise Root Cause Analysis Model. Retrieved from http://journeyintoir.blogspot.com/2012/06/compromise-root-cause-analysis-model.html

Harrell, C. (2012). Malware Root Cause Analysis. Retrieved from http://journeyintoir.blogspot.com/2012/07/malware-root-cause-analysis.html

Harrell, C. (2014). Malware Root Cause Analysis Dont Be a Bone Head Slide Deck. Retrieved from http://journeyintoir.blogspot.com/2014/06/malware-root-cause-analysis-dont-be.html

Hogfly. (2008). Footprints in the snow. Retrieved from http://forensicir.blogspot.com/2008/12/footprints-in-snow.html

The Jock Becomes the Geek

Sunday, February 22, 2015 Posted by Corey Harrell 2 comments
We interrupt the normal DFIR programming on this blog for a different kind of post. A post about a situation I found myself in. It's a story others may find amusing, or one that may cause them to have empathy for me. It's a story about how I evolved from being a jock to walking amongst the DFIR geeks.

In high school I didn't pay any mind to the so-called “cliques.” If I had to be categorized then I guess it would have been a jock. I was a three-sport athlete who enjoyed the social life outside of school. I wasn’t into any of the things people tend to talk about to show their “geek credentials.” I didn’t care about technology (outside of video games), didn’t use computers, and definitely didn’t play any of the so-called “geeky” games like Dungeons and Dragons. Heck, I didn’t even have a drive to learn since I was only going through the motions.

So at this point you may be wondering what the hell happened to me. How did I go from an athlete who didn't care about technology and learning to someone who is passionate about one of the most technical areas within the information security field and spends his free time researching “geeky” things? What happened to me was a life-changing experience.

After I graduated high school I was working for an organization that provided services for people with disabilities. I have the utmost respect for the people whose calling is in this field, caring for one of the most vulnerable populations in our society. Without these organizations, there is a risk of returning to the institutions where this population was mistreated and abused for years. The people working in this field have been some of the most caring people I have ever met. My mom was one of them, and growing up she would bring me to the places where she worked. Now back to my story. I was a floater in the organization I worked for. Floaters worked in the residences where the people lived and did not have a set house they would always work in. We floated from house to house based on where coverage was needed. The houses varied in the functional levels of the people who lived there. Some had high-functioning individuals who had jobs and took care of themselves; my job was more of a mentor than a direct care worker. Other houses had lower-functioning individuals; my job was direct care, taking care of their every need. Going into this job I knew what I was in for and what the nature of the work could involve.

One day I was working at a low-functioning house. During my shift, an individual who was confined to a wheelchair had an accident that involved a bowel movement. I wasn't sure how it played out with my coworker but it was my turn. It was the turn of a 19-year-old kid to clean up an adult who had an accident. I had worked in this position for some time but this was the first time I encountered having to attempt anything like this. As we entered the bathroom I noticed the most awful smell I have ever smelled. Mind you, we had just walked into the bathroom and the bowel smell quickly overcame what little fresh air was left in the room. I started to envision what I had to do next. The images running through my mind along with the smell were making me more and more nauseous. That is when I blew chunks in the direction of the toilet as I dropped to my knees, getting sicker and sicker as tears started rolling down my face. I mumbled and grunted to my coworker; something along the lines of "I can't. I can't. I can’t. I can't stop getting sick." She was one of the people I worked with who had a serving heart, but looking back on this almost 18 years later I think she took pity on me: a 19-year-old kid who looked like a mess and was on his knees throwing up into the toilet. Each time I breathed in what smelled worse than death, the toilet called my name again. I tried to leave. Believe me, I tried to leave to get fresh air. Boy, how many times did I try to leave? The smell; that horrible, horrible smell. At one point, I stood up to leave and I saw my coworker attending to the individual. That provided visuals to go along with the smell and things became worse.

I don't remember how long this went on for or what happened afterwards. All I know was this experience had a significant impact on my life. Again, I have the utmost respect for those who work in this field, but the experience taught me the work was not meant for me and my weak stomach. The experience stuck with me and influenced me when I was joining the Marines a year later. I had the pick of any job I wanted in the Marines. I only wanted a job that kept me far away from going through the experience again. I picked what in my mind was the complete opposite of the field where I had this experience. I picked a technology field where I thought I would never again be responsible for caring for other people. (Please keep in mind, when I made this decision I was 19 and had a lot to learn.) This decision, based on my experience, is what made the jock become a geek. The rest of my story is history, as I had a guiding hand leading me down the path where I eventually found my passion amongst the DFIR geeks. A community where if you have the technical skills and knowledge then you are accepted as one of their own even if you lack the traditional "geek credentials."

Process Hollowing Meets Cuckoo Sandbox

Wednesday, February 4, 2015 Posted by Corey Harrell 2 comments
Growing up I loved to watch horror movies. In hindsight, they scared the crap out of me, probably because I was too young to watch them. One such movie was the 1986 movie Night of the Creeps. Alien slugs enter through people's mouths and eventually take over their bodies. A classic body snatchers style movie that had me worried for a few days when talking too close to people. Process hollowing (aka process replacement) is a technique malware uses to overwrite a running process with malicious code. To me it's the technical equivalent of those alien body snatchers. This post explores process hollowing techniques using the Cuckoo Sandbox.

Process Hollowing (aka Process Replacement)


In my post Prefetch File Meet Process Hollowing I walked through what process hollowing was, but for completeness I’ve copied what I wrote below:

Malware uses various techniques to covertly execute code on systems. One such technique is process hollowing, which is also known as process replacement.

The book Practical Malware Analysis states the following in regards to this technique:

"Process replacement is used when a malware author wants to disguise malware as a legitimate process, without the risk of crashing a process through the use of process injection.

Key to process replacement is creating a process in a suspended state. This means that the process will be loaded into memory, but the primary thread of the process is suspended. The program will not do anything until an external program resumes the primary thread, causing the program to start running"

In addition, the book The Art of Memory Forensics states the following:

"A malicious process starts a new instance of a legitimate process (such as lsass.exe) in suspended mode. Before resuming it, the executable section(s) are freed and reallocated with malicious code."

In essence, process hollowing is when a process is started in the suspended state, code is injected into the process to overwrite the original data, and when the process is resumed the injected code is executed. Everything about the process initially appears to reflect the original process, similar to how everything about the person initially appears to be the original person. Upon closer inspection it is revealed that everything is not what it seems. The process behaves differently (such as network communications) and the code inside the process is not the original code. This is very similar to the person behaving differently (such as trying to eat you) and the biological material inside the person not being the original biological material.

A Common Process Hollowing Technique


Through observation, the characters in Night of the Creeps figured out how people’s bodies were snatched: slugs went from one person’s mouth to another person’s mouth. After observing this method the characters put tape over their mouths and were able to fight the zombies without becoming one themselves. Knowing what technique was used to snatch a body enabled the characters to defend themselves. The same can be said about process hollowing: knowing how the technique looks enables you to spot the zombified processes. One of the more publicized techniques was described in the Practical Malware Analysis book (lab 12-2 solution on page 590) as well as Trustwave SpiderLabs’ article Analyzing Malware Hollow Processes. The sequence of Windows functions, their descriptions, and how they appear during dynamic analysis of the Profoma Invoice.exe sample (md5 ab30c5c81a9b3509d77d83a5d18091de) with the Cuckoo sandbox is as follows:

        - CreateProcessA: creates a new process and the process creation flag 0x00000004 is used to create the process in the suspended state
        - GetThreadContext: retrieves the context of the specified thread for the suspended process
        - ReadProcessMemory: reads the image base of the suspended process
        - GetProcAddress: according to Practical Malware Analysis this function “manually resolves the import UnMapViewofSection using GetProcAddress, the ImageBaseAddress is a parameter of UnMapViewofSection”. This removes the suspended process from memory.
        - VirtualAllocEx: allocates memory within the suspended process’s address space
        - WriteProcessMemory: writes data of the PE file into the memory just allocated within the suspended process
        - SetThreadContext: according to Practical Malware Analysis this function sets the EAX register to the entry point of the executable just written into the suspended process’s memory space. This means the thread of the suspended process is pointing to the injected code so it will execute when the process is resumed
        - ResumeThread: resumes the thread of the suspended process executing the injected code
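The sequence above is exactly what sandbox signatures key on. The sketch below is a hypothetical simplification, loosely modeled on how Cuckoo's community signatures match API call traces; the trace data is invented, and a real signature also checks arguments such as the creation flags and handles:

```python
# Hypothetical sketch: detect the hollowing call sequence described above
# as an ordered subsequence of a process's API call trace (other calls
# may appear in between, as GetProcAddress does here).
HOLLOWING_SEQUENCE = [
    "CreateProcessA", "GetThreadContext", "ReadProcessMemory",
    "VirtualAllocEx", "WriteProcessMemory", "SetThreadContext",
    "ResumeThread",
]

def matches_hollowing(trace):
    """True if HOLLOWING_SEQUENCE occurs, in order, within trace."""
    it = iter(trace)
    # `call in it` advances the iterator, enforcing the ordering.
    return all(call in it for call in HOLLOWING_SEQUENCE)

trace = [
    "CreateProcessA", "GetThreadContext", "ReadProcessMemory",
    "GetProcAddress", "VirtualAllocEx", "WriteProcessMemory",
    "WriteProcessMemory", "SetThreadContext", "ResumeThread",
]
# matches_hollowing(trace) is True; a trace missing the middle calls is not.
```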

Cuckoo Sandbox Showing the Common Process Hollowing Technique


Cuckoo Sandbox is an open source automated malware analysis system. In their own words "it simply means that you can throw any suspicious file at it and in a matter of seconds Cuckoo will provide you back some detailed results outlining what such file did when executed inside an isolated environment." Malwr is a free online malware analysis service that leverages the Cuckoo Sandbox. The Behavioral Analysis section outlines the function calls made during execution. The pictures below show the Profoma Invoice.exe sample’s (md5 ab30c5c81a9b3509d77d83a5d18091de) function calls that perform process hollowing.

The image below shows Profoma Invoice.exe creating a process in the suspended state. The suspended process’ handle is 0x00000088 and thread handle is 0x0000008c.


The next image shows Profoma Invoice.exe retrieving the context of the suspended process since it references the thread handle 0x0000008c.


The image below shows Profoma Invoice.exe reading the image base of the suspended process since it references the process handle 0x00000088.


The image below shows Profoma Invoice.exe getting the addresses of the UnMapViewofSection and VirtualAllocEx function calls.


The images below show Profoma Invoice.exe writing a PE file into the address space of the suspended process since it references the process handle 0x00000088. It takes multiple WriteProcessMemory function calls to write the entire PE file.


The image below shows Profoma Invoice.exe setting the thread context for the suspended process since it references the thread handle 0x0000008c.


The image below shows Profoma Invoice.exe resuming the suspended thread to execute the injected code.


Cuckoo Sandbox Detecting the Common Process Hollowing Technique


Cuckoo Sandbox detects malware functionality using signatures. The image below shows Malwr detecting the common process hollowing technique used by Profoma Invoice.exe (md5 ab30c5c81a9b3509d77d83a5d18091de).


The signature detecting process hollowing reports it as “executed a process and injected code into it, probably while unpacking.” The signature detecting the technique is named injection_runpe.py and is available in the Community Signatures. The signature is open source, so anyone can read it to see how it detects this behavior. The image below shows the portion of the signature that detects the sequence of function calls outlined earlier to perform process hollowing.
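The idea behind the signature can be sketched in a few lines. The following is my own simplified illustration (not the actual injection_runpe.py code): store the documented call order as a list of API names and check a logged trace for an ordered-subsequence match, ignoring unrelated calls in between.

```python
# Illustrative sketch only -- not code from Cuckoo or injection_runpe.py.
# The classic process hollowing call order, expressed as API names:
CLASSIC_HOLLOWING_SEQUENCE = [
    "CreateProcessA",        # process created suspended (flag 0x00000004)
    "NtUnmapViewOfSection",  # hollow out the original image
    "VirtualAllocEx",        # allocate memory for the injected PE
    "WriteProcessMemory",    # write the PE into the remote process
    "SetThreadContext",      # point the thread at the new entry point
    "ResumeThread",          # run the injected code
]

def contains_sequence(trace, sequence):
    """Return True if every API in `sequence` appears in `trace` in order.

    Other calls may appear between the expected ones, mirroring how a
    sandbox signature matches a sequence within a larger call log.
    """
    index = 0
    for api in trace:
        if api == sequence[index]:
            index += 1
            if index == len(sequence):
                return True
    return False
```

A trace from a benign program that never completes the sequence would not match, while a trace containing all six calls in order would, even with extra calls such as GetThreadContext interleaved.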


A Different Process Hollowing Technique


The process hollowing technique outlined above is well publicized and is the technique I normally expect to see. It was as if I had tape on my mouth waiting for a zombified friend to come strolling down the street. There is more than one way to perform an action, just as there is more than one way to snatch a body. In the 1998 movie The Faculty an unknown creature snatched bodies by entering through the ear. Now imagine what would have happened if the characters from the Night of the Creeps movie had encountered these body snatchers. The zombified bodies are harder to spot since they don’t look like zombies, and trying to defend themselves with tape on their mouths and baseball bats in hand would be short-lived. The tape offers no protection since the creatures enter through the ear. It’s a different technique with the same result. Process hollowing is similar, with different techniques ending in the same result.

I was a bit surprised back in December when I saw the behavior in the image below after I ran the sample Kroger_OrderID.exe (md5 1de7834ba959e734ad701dc18ef0edfc) through a sandbox.


The behavior clearly shows that Kroger_OrderID.exe is going to perform process hollowing since it started the svchost.exe process in a suspended state (creation flag 0x00000004). However, the function calls afterwards are not the typical well-publicized ones; this was a different technique. After a bit of searching I found the Lexsi article Overview of the Kronos banking malware rootkit, which breaks down how this technique works. (The article also shows how to use Volatility to analyze this technique.) I summarized below the Windows function sequence and the descriptions as outlined in the article:

        - CreateProcessA: creates a new process and the process creation flag 0x00000004 is used to create the process in the suspended state
        - ReadProcessMemory: reads the image base of the suspended process
        - NtCreateSection: creates two read/write/execute sections
        - ZwMapViewOfSection: maps the read/write/execute sections into the malware’s address space
        - ZwMapViewOfSection: maps the second section into the suspended process’s address space (this section is therefore shared between both processes).
        - ReadProcessMemory: reads the suspended process’s image into section 1
        - ReadProcessMemory: reads the malware’s image into section 2
        - NtMapViewOfSection: overwrites the suspended process's entry point code by mapping section 1 to the new process base address
        - ResumeThread: resumes the thread of the suspended process executing the injected code
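The two sequences differ in which APIs follow the suspended CreateProcessA. As an illustrative sketch (my own, built only from the function names listed above, and not how any sandbox actually classifies traces), a naive classifier could key on those distinguishing calls:

```python
# Illustrative classifier, not Cuckoo code: after a suspended
# CreateProcessA, the classic technique writes the PE with
# VirtualAllocEx/WriteProcessMemory, while the Kronos-style variant
# maps shared sections with NtCreateSection and (Nt|Zw)MapViewOfSection.

def classify_hollowing(trace):
    if "CreateProcessA" not in trace:
        return "none"
    if "VirtualAllocEx" in trace and "WriteProcessMemory" in trace:
        return "classic"
    if "NtCreateSection" in trace and (
        "ZwMapViewOfSection" in trace or "NtMapViewOfSection" in trace
    ):
        return "section-mapping"
    return "unknown"
```

This is deliberately loose; a real signature would also verify the suspended creation flag and the order of the calls, as injection_runpe.py does.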

Cuckoo Sandbox Showing the Different Process Hollowing Technique


The Behavioral Analysis section outlines the function calls made during execution. The pictures below show the sample Kroger_OrderID.exe (md5 1de7834ba959e734ad701dc18ef0edfc) function calls performing the different process hollowing technique.

The image below shows the first three function calls. The sample Kroger_OrderID.exe creates a suspended process with the thread handle 0x00000608 and process handle 0x00000604. Next the ReadProcessMemory function reads the image base of the suspended process due to the reference to process handle 0x00000604. The NtCreateSection function then creates the second read/write/execute section with the section handle 0x000005f8.


The image below shows the next three function calls. The ZwMapViewOfSection function maps the read/write/execute section into the malware’s address space due to the section handle 0x000005f8 being referenced. The next ZwMapViewOfSection maps the second section into the suspended process’s address space due to the section handle 0x000005f8 and the process handle 0x00000604 both being referenced. Then the ReadProcessMemory function reads the malware’s image into the section. Not shown in the image is the ReadProcessMemory function referencing the process handle 0x00000604.


The image below shows the remaining four functions. The NtCreateSection function creates the first read/write/execute section with the section handle 0x000005f4. The ZwMapViewOfSection function maps the read/write/execute section between the malware and the suspended process due to the section handle 0x000005f4 and the process handle 0x00000604 both being referenced. This mapping overwrites the entry point code in the suspended process. Finally, the ResumeThread function resumes the thread of the suspended process, executing the injected code.
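The walkthrough above follows the suspended process by watching which handles each call references. That bookkeeping can be sketched in a few lines; the helper below is my own illustration (the `(api, arguments)` trace shape is an assumption, not Cuckoo’s actual log format):

```python
# Illustrative helper, not Cuckoo code: given logged calls as
# (api_name, {argument_name: value}) pairs, collect the APIs that
# reference a particular handle value, so calls targeting the
# suspended process can be followed through the trace.

def calls_referencing(trace, handle):
    return [api for api, args in trace if handle in args.values()]
```

Filtering a trace on the suspended process handle (0x00000604 in this sample) surfaces exactly the calls that operate on the hollowed process.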


Cuckoo Sandbox Detecting the Different Process Hollowing Technique



***** Updated on 02/04/15 *****

This section of the blog has been edited since it was published earlier today. In the original blog post I highlighted how the injection_runpe.py signature did not detect this injection technique and I shared a signature I put together to detect it.


Brad Spengler sent me an email about what I was seeing. He mentioned that a change did not make it into the updated injection_runpe.py signature. Specifically, the signature looks for NtMapViewOfSection, which he uses in his Cuckoo Sandbox instance, instead of the older ZwMapViewOfSection. I modified the injection_runpe.py signature by renaming NtMapViewOfSection to ZwMapViewOfSection (on lines 45 and 51) and afterwards it did detect this technique. As a result, I updated this section of the blog since this post’s purpose is to explore different injection techniques and how Cuckoo can help explore them.

***** Updated on 02/04/15 *****


Cuckoo Sandbox is able to detect this different process hollowing technique (see the update above about the change made to the injection_runpe.py signature). Executing the sample Kroger_OrderID.exe (md5 1de7834ba959e734ad701dc18ef0edfc) in Cuckoo results in the following behavior detection.
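The signature fix reflects a general gotcha: many native Windows APIs are exported under both an Nt and a Zw name, and different hooking setups may log either spelling. One defensive approach, sketched below as my own illustration (not the actual signature change), is to normalize the prefix before comparing names:

```python
# Illustration of why a signature can miss a ZwMapViewOfSection call
# while looking for NtMapViewOfSection: the two names refer to the
# same native API. Normalizing the Zw prefix to Nt before comparing
# lets a signature match either spelling.

def normalize_api(name):
    """Map Zw-prefixed native API names onto their Nt equivalents."""
    if name.startswith("Zw"):
        return "Nt" + name[2:]
    return name

def api_matches(logged, expected):
    return normalize_api(logged) == normalize_api(expected)
```

With this normalization, a signature written against NtMapViewOfSection would also fire on a trace that logged ZwMapViewOfSection.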

 

Wrapping Things Up


We don’t need to sit at our computers wearing headphones and tape on our mouths to hunt down zombified processes within our environments. Process hollowing is an interesting technique, and it constantly reminds me of the various body snatcher horror movies I’ve seen. Leveraging the Cuckoo Sandbox makes exploring the various process hollowing techniques even more interesting since it allows for following the sequence of Windows function calls.

Happy hunting, and if you come across any zombies in your travels don’t take any chances; just follow Rule #2 from the movie Zombieland, Double Tap: when in doubt, don’t get stingy with your bullets.