So far in this book, you have learned the fundamentals of incident response, how to understand attackers' behaviors using threat intelligence, and how to implement and use different tools to improve your organization's capacity to respond to attacks.
However, in the critical moments when an incident occurs, it is essential to know what you need to look for and where to get relevant information.
There are multiple sources of information from which you can get valuable data about malicious behaviors to define an identification and containment strategy. You can do this by implementing analytics and detection engineering in incident response.
In this chapter, we will cover the following topics:
If you haven't already, you need to download and install VMware Workstation Player from https://www.vmware.com/products/workstation-player/workstation-player-evaluation.html.
You'll also need to download the following from the book's official GitHub repository https://github.com/PacktPublishing/Incident-Response-with-Threat-Intelligence:
Before we start the practical exercises in this chapter, we need to prepare our work environment.
To begin, start up the virtual machines that we will use throughout this chapter. To do this, start VMware Workstation Player. From there, do the following:
Once you have started both virtual machines, you can install and configure the tools that will be required to perform the practical lab exercises.
In the previous chapter, you learned some basic concepts of Security Onion, a platform for monitoring, detection, incident response, and orchestration. As you learned, this platform contains valuable tools for active defense against threats.
In this chapter, you will learn how to install some of these tools individually on your IR-Workstation VM to create a threat hunting platform to work on the practical exercises provided in this chapter, as well as the next.
ELK stands for the integration of three open source tools:
These components of the Elastic stack work together to ingest, process, and display the information so that it can be managed and visualized, as shown in the following screenshot:
The first ELK component that we are going to install is Elasticsearch. We will do so using the Debian/Ubuntu installation package.
From your IR-Workstation VM, follow these steps:
cd Downloads
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.2-amd64.deb
This will result in the following output:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.2-amd64.deb.sha512
shasum -a 512 -c elasticsearch-7.15.2-amd64.deb.sha512
sudo dpkg -i elasticsearch-7.15.2-amd64.deb
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
To start the Elasticsearch service, run the following command:
sudo systemctl start elasticsearch.service
These commands can be seen in the following screenshot:
With that, you have installed and started the first component of ELK. Next, we are going to install and configure Logstash.
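Before doing so, it is worth seeing what the checksum step in the previous procedure actually verifies. The following self-contained sketch reproduces the same pattern with a throwaway file; the file name is a stand-in for the real package, and sha512sum is the coreutils equivalent of shasum -a 512:

```shell
# Create a throwaway file, record its SHA-512 hash, then verify it --
# the same check that validates the elasticsearch .deb download.
printf 'example package contents' > pkg.deb
sha512sum pkg.deb > pkg.deb.sha512
sha512sum -c pkg.deb.sha512    # prints: pkg.deb: OK
```

If the file had been tampered with or corrupted in transit, the last command would report FAILED instead, which is why it is run before installing the package.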
Installing Logstash is similar to what we did for Elasticsearch. Follow these steps:
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.15.2-amd64.deb
sudo dpkg -i logstash-7.15.2-amd64.deb
This will result in the following output:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable logstash.service
sudo systemctl start logstash
The preceding commands can be seen in action in the following screenshot:
Now that you have started the Logstash service, let's install Kibana.
The last of the ELK components is Kibana, and the installation process is similar to what we saw for Elasticsearch and Logstash:
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.15.2-amd64.deb
sudo dpkg -i kibana-7.15.2-amd64.deb
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
In this case, we are going to configure Kibana to allow connections from the other devices on the network before starting the service.
sudo vim /etc/kibana/kibana.yml
Change the server.host parameter to allow external connections and the elasticsearch.hosts parameter to define the URL of the Elasticsearch instance, as shown in the following screenshot:
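In text form, the two changed lines look roughly like this; "0.0.0.0" listens on all interfaces, and the Elasticsearch URL is the local default for this lab setup:

```yaml
# kibana.yml (excerpt)
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
```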
sudo systemctl start kibana.service
You will see a welcome message so that you can start using ELK, as shown in the following screenshot:
To start receiving information in ELK, we need to create a configuration file for Logstash that includes input, filter (transformation), and output parameters.
You can create a Logstash configuration file from scratch or you can use a preconfigured template. In this case, we are going to use a sample template:
cd /etc/logstash
sudo vim logstash-sample.conf
This will result in the following output:
sudo cp logstash-sample.conf conf.d/win.conf
This will result in the following output:
sudo systemctl restart logstash
Now, your ELK is ready to receive and process information from different devices to analyze on the platform.
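For reference, the pipeline you just copied into conf.d/win.conf is roughly of this shape: a Beats listener as input and Elasticsearch as output. The port, host, and index values below are the stack defaults, not necessarily the exact contents of your sample file:

```conf
input {
  beats {
    port => 5044          # Winlogbeat ships events to this port
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
```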
In the previous chapter, you installed and configured Winlogbeat to send Windows logs to ELK on Security Onion.
To receive the logs from your IR-Laptop VM, you need to change the IP address in the Winlogbeat configuration file so that the output.logstash parameter points to the IR-Workstation VM. Follow these steps:
Restart-Service winlogbeat
Your IR-Laptop VM will now send the Windows logs to the ELK instance on the IR-Workstation VM.
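The relevant portion of winlogbeat.yml looks like the following; the address shown is a placeholder for your IR-Workstation VM's IP, and 5044 is the default Logstash Beats port:

```yaml
# winlogbeat.yml (excerpt)
output.logstash:
  # Placeholder IP -- replace with your IR-Workstation VM's address
  hosts: ["192.168.1.50:5044"]
```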
The last part of configuring ELK consists of creating an index to define the way that Kibana will process and show the information on the dashboard. Follow these steps:
winlogbeat*
Now that you've created an index, you can start visualizing the information in Kibana.
To open the Discover dashboard, follow these steps:
Congratulations – you finished installing and configuring the ELK stack on your IR-Workstation VM! Now, we are ready to learn about some concepts and strategies we can use to detect and contain threats.
As you learned in Chapter 2, Concepts of Digital Forensics and Incident Response, according to the SANS Incident Response process, phase 2 – identification, and phase 3 – containment, are essential to reduce the impact of a cyberattack, as shown in the following diagram:
Incident response sometimes starts with the escalation of an alert or a user reporting the disruption of a service or the discovery of a data leak. Once a case has been created regarding an incident, the next step is to follow the playbooks associated with the incident.
The more information you have about the incident, the better you can understand the nature of the attack, especially if you use frameworks such as MITRE ATT&CK and you have reliable threat intelligence sources of information.
However, at this point, you just have information about the incident's symptoms, but not necessarily the root of the problem, which means that you may not know about the attack vector or the scope of the compromise.
Essentially, you can't move efficiently to the containment phase if you don't have enough information and context about the attack.
Between the detection phase and the containment phase, you need to be proactive, assertive, and efficient. Remember that sometimes, the attackers are on your network and every move can accelerate the attacker's actions if they discover that you are trying to catch them.
There are six steps that you can follow to assess the scope of the compromise, look for malicious indicators, and use this information to limit the damage of the attack. These steps are shown in the following diagram:
Let's discuss these steps further:
In the next section, you will learn some concepts surrounding detection engineering and how to use it to hunt threats in incident response.
Detection engineering is the process of improving detection capabilities by using diverse sources of information to analyze potential threats; identifying adversaries' tactics, techniques, and procedures (TTPs); and creating analytics and detection rules that can be implemented in tools such as a Security Information and Event Management (SIEM) system or used for direct searches on devices.
This process should be technology-agnostic and focus on using existing analytics created by other professionals or on developing your own analytics to detect malicious indicators.
To create good detection rules, you will need the following components:
Detection engineering is a core activity for security operations centers (SOCs), and you can work together with the SOC in the preparation phase to develop analytics in case an incident occurs.
The goal is to reduce the time it takes to detect malicious indicators by identifying potentially compromised devices during the incident and to contain threats in time to limit their impact on the organization.
MITRE ATT&CK is a good resource for identifying data sources where you can find malicious indicators – you just need to look for the data components that contain valuable information for a particular technique/sub-technique in the detection section, as shown in the following screenshot:
Once you've mapped the data sources to this technique, you can review the information associated with the data components and create the analytics for detection.
In the next section, you will learn how to develop and test detection engineering so that you can use it in incident response.
So far, you have learned about the principles you can use to identify threats using data analytics and detection engineering. Sometimes, you will need to create analytics at the time of the incident response, but the idea is to do it proactively by creating a repository in advance to use when necessary.
Now, let's learn how to configure a laboratory to create and test analytics, as well as validate their efficiency.
Here, we will select a specific MITRE ATT&CK technique, associate it with an analytic from the MITRE Cyber Analytics Repository (CAR), and create an implementation from its pseudocode.
Subsequently, we will emulate this technique using the Invoke-AtomicRedTeam tool to generate Indicators of Attack (IoAs).
Once that activity has been recorded, we will use the analytics we created previously to detect this behavior through attack indicators, as shown in the preceding screenshot.
MITRE CAR is a repository of analytics developed by MITRE based on the MITRE ATT&CK adversary model. The CAR knowledge base is defined using agnostic pseudocode representations, along with implementations for specific detection technologies such as Event Query Language (EQL) and Splunk.
As described on the official MITRE CAR portal, each CAR analytic includes the following information:
For example, suppose you suspect that an attacker might be using PowerShell scripts to perform malicious activities. According to MITRE CAR, the analytic for detecting this behavior is CAR-2014-04-003. You can find its details at https://car.mitre.org/analytics/CAR-2014-04-003/, as shown in the following screenshot:
In the Implementations section, you will find the pseudocode in a generic format that describes the components you need to look for.
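Paraphrased, the pseudocode for this analytic follows the pattern below; consult the CAR page itself for the authoritative version:

```
processes  = search Process:Create
powershell = filter processes where (exe == "powershell.exe"
                                     and parent_exe != "explorer.exe")
output powershell
```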
You can use this information to create your own rules or searches for specific technologies, such as EQL.
Now, you can start hunting to find this pattern of behavior on computers and servers on the network.
There are other amazing projects that you should explore to create and use data analytics and develop detection engineering, such as Threat Hunter Playbook, https://github.com/OTRF/ThreatHunter-Playbook, and Security-Datasets, https://github.com/OTRF/Security-Datasets. Both projects were developed as part of the Open Threat Research community initiative, led by the brothers Roberto Rodriguez and Jose Luis Rodriguez (you can connect with them on Twitter at @Cyb3rPandaH and @Cyb3rWar0g, respectively).
When you use data analytics and detection engineering in incident response, you will substantially improve the speed at which you identify malicious indicators and the capacity to contain the attack.
To finish configuring our detection lab, we need to install Red Canary's Invoke-AtomicRedTeam from https://github.com/redcanaryco/invoke-atomicredteam. To do this, follow these steps:
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
IEX (IWR 'https://raw.githubusercontent.com/redcanaryco/invoke-atomicredteam/master/install-atomicredteam.ps1' -UseBasicParsing)
Install-AtomicRedTeam -getAtomics
Import-Module "C:\AtomicRedTeam\invoke-atomicredteam\Invoke-AtomicRedTeam.psd1" -Force
You need to run this command every time you open a new PowerShell console. If you want to make this functionality always available, you need to add the import to your PowerShell profile, as described in the respective GitHub repository, by running the following commands:
Import-Module "C:\AtomicRedTeam\invoke-atomicredteam\Invoke-AtomicRedTeam.psd1" -Force
$PSDefaultParameterValues = @{"Invoke-AtomicTest:PathToAtomicsFolder"="C:\AtomicRedTeam\atomics"}
Now that you've installed Red Canary's Invoke-AtomicRedTeam, you can run your tests from a PowerShell console. You can find additional tests in the Chapter-12 folder on GitHub.
Additionally, you can create tests using the Atomic GUI by running the following command:
Start-AtomicGUI
This will open the Atomic Test Creation interface on port 8487, as shown in the following screenshot:
With the Atomic GUI, you can create tests for Windows, Linux, and macOS. You can find a short video demonstration about how to use this tool in this book's Code in Action section.
To start the test in our detection lab, we are going to select one of the MITRE ATT&CK techniques commonly used by attackers that we reviewed previously: Command and Scripting Interpreter: PowerShell (T1059.001) (https://attack.mitre.org/techniques/T1059/001/).
According to the Red Canary 2021 Threat Detection Report (https://redcanary.com/threat-detection-report/) and Kaspersky's Cybercriminals' top LOLBins report (https://usa.kaspersky.com/blog/most-used-lolbins/25456/), Microsoft PowerShell, a legitimate scripting engine and language, was the tool most commonly abused in cyberattacks, so creating detection analytics for PowerShell-related activity will be very useful.
To start emulating this behavior using Red Canary's Invoke-AtomicRedTeam, follow these steps:
cd C:\AtomicRedTeam
Invoke-AtomicTest T1059.001 -ShowDetailsBrief
The output will be as follows:
Invoke-AtomicTest T1059.001 -ShowDetails
The output will be as follows:
Invoke-AtomicTest T1059.001 -CheckPrereqs
You will see the following output:
To run the tests, execute the following command:
Invoke-AtomicTest T1059.001 -TestNumbers 4,11
You will be able to see the results of the tests, as shown in the following screenshot:
Now that you have run various tests on the technique to emulate this malicious behavior, let's create the analytics. According to MITRE ATT&CK, the data components we can use to detect this technique are as follows:
With this information, we can identify the sources of information to detect any malicious PowerShell activity:
In this case, we will use Elasticsearch/Kibana as a hunting platform, so we are going to create the analytics to run it in Kibana Query Language (KQL). Assuming that we have installed Sysmon, we can create a query for detection using the information from the Sysmon documentation at https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon, where the identifier for detecting process creation is Event ID: 1.
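If you need to narrow what Sysmon records, a minimal configuration that logs process creation (Event ID 1) only for PowerShell images might look like the following. This is an illustrative sketch, not the configuration used in this book, and the schema version shown is an assumption:

```xml
<Sysmon schemaversion="4.70">
  <EventFiltering>
    <!-- Log Event ID 1 (process creation) only for PowerShell images -->
    <ProcessCreate onmatch="include">
      <Image condition="contains">powershell</Image>
    </ProcessCreate>
  </EventFiltering>
</Sysmon>
```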
So, we could create the analytics using the following information:
The result of our analytics would be as follows:
winlog.event_id : 1 and winlog.event_data.ParentCommandLine : *Powershell.exe* and not winlog.event_data.ParentImage : *explorer*
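Before running this in Kibana, the boolean logic of the query can be checked offline with a toy dataset. The records and pipe-delimited layout below are invented purely for illustration:

```shell
# Sample Sysmon-style records: event_id|ParentCommandLine|ParentImage
cat > events.txt <<'EOF'
1|powershell.exe -NoProfile|C:\Windows\System32\cmd.exe
1|powershell.exe|C:\Windows\explorer.exe
3|powershell.exe|C:\Windows\System32\cmd.exe
EOF
# Keep event_id 1 where the parent command line mentions powershell and
# the parent image is not explorer -- the same logic as the KQL query.
awk -F'|' '$1 == 1 && tolower($2) ~ /powershell/ && tolower($3) !~ /explorer/' events.txt
```

Only the first record survives all three conditions, which matches what the query should return: process-creation events parented by PowerShell that were not launched interactively from Explorer.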
Follow these steps to test the analytics:
Note
Don't forget to adjust the range of time according to the period when you ran the atomic test.
You will see the records that match the search criteria of your analytics.
Scroll down and click on the Toggle column in table button for the winlog.event_id field, as shown in the following screenshot:
Now, you will see those filtered fields in column format, which will allow you to analyze and search for information, as shown in the following screenshot:
Finally, review and identify the events related to the IoA that was generated when you ran the atomic tests using Red Canary's Invoke-AtomicTest.
As you can see, detection engineering and data analytics are very valuable when you need to identify possible malicious activity on your network in incident response.
In this chapter, you learned about the importance of detection engineering in incident response, how to create a detection lab by installing the ELK stack, and how to use the Invoke-AtomicRedTeam framework to develop and test analytics.
You also learned how to find and contain threats efficiently using the MITRE CAR and MITRE ATT&CK frameworks.
In the next chapter, you will learn how to hunt threats by creating and using detection rules to find Indicators of Compromise (IoCs) and Indicators of Attack (IoAs).