CHAPTER 7

Red Teaming Operations

The concept of red teaming is as old as war itself. The red team is an independent group that assumes an adversarial point of view to perform stealthy attack emulations that can trigger active controls and countermeasures. The goal is to challenge an organization to significantly improve the effectiveness of its security program. Red teaming is exercised in business, technology, and the military, and it can be applied to any situation where offensive and defensive controls are used.

The members of the blue team are the cyberdefenders. We cover blue team operations in other chapters. The blue team, by far, has the hardest job. It guards an organization’s assets and sensitive data from both the red team and actual adversaries. Protecting an organization’s attack surface is a complex task. Blue teams do not sit around passively waiting for an event to occur. They are hunters, actively searching for threats and eradicating them from the environment. Granted, not all blue team activities are as exciting as threat hunting; some blue team activities are focused on detecting malicious activity, hardening, and maintaining an environment’s security posture.

Our goal as ethical hackers is to help mature an organization’s defenses. Ethical hackers must have an understanding of the blue team’s perspective, the other side of the coin, in order to provide the most valuable information possible. This chapter expands on ethical hacking methodologies and describes an enterprise red teaming effort, but it also highlights critical touchpoints with the blue team because, as ethical hackers, providing value to the blue team is our primary focus.

In this chapter, we cover the following topics:

•   Red team operations

•   Red team objectives

•   What can go wrong

•   Communications

•   Understanding threats

•   Attack frameworks

•   The red team testing environment

•   Adaptive testing

•   Lessons learned


Red Team Operations

Red team operations differ from other ethical hacking activities in a couple of significant ways. First, they are unannounced tests that are mostly stealthy in nature. Second, because the tests are unannounced, they allow the blue team to respond to them as if they were an actual security event. Red team operations are intended to reveal where response procedures or security controls fall short. The concept of red teaming, if applied holistically, can help an organization mature at the strategic, operational, and tactical levels.1 The beauty of red teaming is taking war-game exercises out of the abstract and allowing your defenders to practice responding to challenges at a tactical level.

Red teaming has many definitions. Department of Defense Directive (DoDD) 8570.1 defines red teaming as “an independent and focused threat-based effort by an interdisciplinary, simulated adversary to expose and exploit vulnerabilities to improve the security posture of Information Security.”2 The US Military Joint Publication 1-16 defines a red team as “a decision support element that provides independent capability to fully explore alternatives in plans, operations, and intelligence analysis.”3 Both sources stress the fact that a level of independence and objectivity is needed to successfully execute a red team function.

Red team efforts often start with defining a specific goal and the rules of engagement. They can focus on accessing or exfiltrating actual data or even a token with no real value. Red team efforts can also focus on a test or QA environment or can occur in a live production environment. Either way, the goal is to understand how to refine an organization’s detection, response, and recovery activities. Typically, when professionals discuss incident response, the focus is on improving three metrics:

Mean time to detect

Mean time to respond

Mean time to eradicate

The ability to measure and report on the aforementioned metrics and the focus on improving the security team’s agility are the major benefits of conducting red teaming exercises.
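As a sketch of how these metrics can be measured, the snippet below averages the gaps between when each incident occurred, was detected, was responded to, and was eradicated. The record format and sample incidents are illustrative, not taken from any particular ticketing system:

```python
from datetime import datetime
from statistics import mean

def hours_between(start, end):
    """Elapsed hours between two ISO-8601 timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

def response_metrics(incidents):
    """Mean time to detect, respond, and eradicate, in hours."""
    return {
        "mean_time_to_detect":    mean(hours_between(i["occurred"], i["detected"]) for i in incidents),
        "mean_time_to_respond":   mean(hours_between(i["detected"], i["responded"]) for i in incidents),
        "mean_time_to_eradicate": mean(hours_between(i["responded"], i["eradicated"]) for i in incidents),
    }

# Illustrative incident records from two testing cycles
incidents = [
    {"occurred": "2024-03-01T09:00", "detected": "2024-03-01T13:00",
     "responded": "2024-03-01T15:00", "eradicated": "2024-03-02T15:00"},
    {"occurred": "2024-03-10T08:00", "detected": "2024-03-10T10:00",
     "responded": "2024-03-10T11:00", "eradicated": "2024-03-10T23:00"},
]
print(response_metrics(incidents))
```

Tracking these numbers across repeated testing cycles is what turns a red team exercise into a measurable improvement program rather than a one-off event.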

Strategic, Operational, and Tactical Focus

Red teaming should focus on improvements in how an organization responds at the strategic, operational, and tactical levels. Organizations that focus solely on how their technical incident responders react are missing a great opportunity to ensure that all decision makers have the opportunity to participate in war games. An organization’s executive management, technical leadership, legal, public relations, risk management, and compliance teams can all benefit from participating in red team exercises.

Assessment Comparisons

Let’s take some time to discuss how red teaming exercises differ from other technical-based assessments.

Vulnerability Assessment

Vulnerability assessments often use tools to scan for vulnerabilities inside an environment. Vulnerabilities are often validated as a part of the vulnerability assessment process. However, a vulnerability assessment will not show the business impact of what could happen if the vulnerabilities in an environment were combined in a targeted attack. Nor does it show the impact of missing security controls in the environment. Vulnerability assessments are important and should occur regularly, monthly in most circumstances, and should be supplemented with a penetration test or a red or purple team exercise.

Penetration Test

A penetration test can show the business impact of how missing security controls and existing vulnerabilities in the technical environment can be combined and taken advantage of by an attacker. The goal is to gain unauthorized access and demonstrate the business impact of the problems identified. Some penetration tests also have an exfiltration component to demonstrate to the business how easy or hard it is to remove data from its environment. Most penetration tests do not allow the blue team to respond to attacks and only note when the penetration testing team’s actions trigger an alert. Penetration tests are often required for compliance purposes and can give an organization valuable information. They are also ideal for organizations that are just starting to refine their security program and perhaps are not ready for red team or purple team exercises. Penetration tests are often point-in-time assessments and do not feature an ongoing testing component. Enterprise penetration tests often include social engineering and physical security assessments, as described later in this chapter.

Red Teaming

Red teaming can combine all the assessments just mentioned; a stealthy vulnerability assessment, penetration test, social engineering assessment, and physical security assessment can focus on a specific goal or application. Red team exercises vary in scope and focus in a variety of ways. Most significantly, red team exercises are unannounced. The blue team does not know if it is looking at a real-world attack or an attack simulation. The blue team must detect, respond, and recover from the security incident, thereby refining and practicing its incident response skills.

Communication between the blue team and red team is very limited during testing activities. This allows for the red team exercise to closely simulate a real-world attack. The white team is made up of key stakeholders from different business units or technical teams, project managers, business analysts, and so on. The white team provides a layer of abstraction and ensures that communication between the red and blue teams is appropriately limited.

Red team assessments also have a goal and an assertion. Often the assertion is “the network is secure” or “sensitive data cannot be exfiltrated without our knowledge.” Testing activities are then focused on proving whether the assertion is true or false. One of the main goals of a red team assessment is to try to go undetected to truly simulate a determined adversary. The red team should be independent of the blue team. Red teaming is usually performed on organizations with a mature security program. Many organizations use purple teaming, described next, to refine their detection, response, and recovery processes.

Purple Teaming

Purple teaming is covered in depth in the next chapter. A purple team exercise can have all of the components of a red team exercise, but communication and interaction between the blue team and the red team are encouraged, not discouraged. Communication between the two teams can be ongoing, and often many of the testing activities are automated. The red team is still independent of the blue team, but they work hand in hand to refine security controls as the assessment is in progress.

Red Teaming Objectives

Red teaming exercises can be very valuable in getting to the “ground truth” of the effectiveness of the security controls you have in place. The red team’s independence from the blue team minimizes bias and allows for a more accurate assessment. Red team exercises, like penetration tests, can be used for compliance purposes. For example, a red team’s goal can be to determine whether credit card data can be exfiltrated.

The heart of red teaming is centered on identifying a goal for the assessment based on an assertion. The assertion is really an assumption. The organization is often assuming that the controls it has put in place are effective and can't be bypassed. However, new vulnerabilities emerge, and human error or environmental changes occur that impact the effectiveness of security controls such as segmentation, proxies, and firewalls.

Red team engagements are often performed in cycles. Repetitive cycles allow a blue team to go through a red team assessment, create a hypothesis on how to improve its controls and processes, and then test the hypothesis in the next cycle. This process can be repeated until the organization is satisfied with the level of residual risk.

Mitre’s “Cyber Exercise Playbook” has valuable information that can be applied to red team exercises.4 The following testing objective list is adapted from this resource:

•   Determine the effectiveness of the cybereducation provided to the organization’s personnel prior to the start of the exercise.

•   Assess the effectiveness of the organization’s incident reporting and analysis policies and procedures.

•   Assess the ability of the blue team to detect and properly react to hostile activity during the exercise.

•   Assess the organization’s capability to determine operational impacts of cyberattacks and to implement proper recovery procedures for the exercise.

•   Determine the effectiveness of scenario planning and execution, and gauge the effectiveness in communication between the red team, the blue team, and the white team.

•   Understand the implications of losing trust in IT systems, and capture the workarounds for such losses.

•   Expose and correct weaknesses in cybersecurity systems.

•   Expose and correct weaknesses in cyberoperations policies and procedures.

•   Determine what enhancements or capabilities are needed to protect an information system and provide for operations in a hostile environment.

•   Enhance cyber awareness, readiness, and coordination.

•   Develop contingency plans for surviving the loss of some or all IT systems.

What Can Go Wrong

It’s important to understand where a red team engagement can go “off the rails.” There are common challenges that red teams face, and it’s important to be aware of them so that these issues can be addressed ahead of time. Justin Warner’s Common Ground blog series has a wealth of information about red teaming assessments and is a recommended resource.5

Limited Scope

To be successful, a red team must be able to maneuver through an environment just as an adversary would. However, most organizations have assets that they consider invaluable that they are not willing to put at risk in case something goes wrong. This can severely hinder a red teaming effort and limit the benefit of such an engagement.

Limited Time

Many organizations have a hard time differentiating between a penetration test and a red teaming engagement. In order to truly mimic a real-world adversary, the red team must be able to take sufficient time to evaluate and gain access without raising alarms. The bad guys have months or years to prepare and execute, whereas most red teams are expected to accomplish the same goals within a limited time period. It’s too expensive for a lot of organizations to have an ongoing red teaming exercise, which is exactly the scenario most adversaries enjoy. The assessment should be long enough to be beneficial to the organization, but also have a clear-cut end where the team can be debriefed.

Limited Audience

To get the most out of an engagement, an organization will want to include as many key personnel as possible. It would be ideal to have every person in an organization playing a part in the engagement, but at the end of the day, work still needs to be done and people are unlikely to participate unless necessary. Try to get as much involvement as possible, especially from C-level executives, but be cognizant that people are busy.

Overcoming Limitations

Overcoming limitations may take some creativity and collaboration, but several tactics can be used. If your scope is limited and you are not permitted to test specific critical systems, then perhaps a test or QA lab is available where testing could yield similar results to what would have been found in the production environment.

Limitations can be overcome by using a concept called the white card, which is a simulated portion of the assessment designed to help overcome limitations. It is often assumed that at least one user will click a phishing e-mail, so a white card approach would be to simulate a user clicking a phishing e-mail, thereby letting the red team into the environment. Granted, phishing isn’t the only way into an environment; white cards can be used to simulate a malicious insider, collusion, bringing a compromised asset into an organization, backdoor access through a trusted vendor, and so on.

Communications

Red teaming exercises vary greatly in duration. It's important to determine the most appropriate cadence for communication for each exercise. For example, if you are working on a red team assessment that has a 12-month duration, you may want to break the exercise up into 3-month testing and communication cycles. This would allow the red team three months to perform its attack emulations. The blue team would be briefed after the three-month testing cycle and could then begin to research and implement improvements based on what was learned, provided that communication between the red team and the blue team is facilitated by the white team. In most instances, the white team will ensure that interaction between the red and blue teams does not occur and instead will bring the teams together at the end of the testing cycle.

Planning Meetings

The red and blue teams, with the support of the white team, will have to work together during a series of planning meetings. Red team assessment planning meetings initially begin with conceptual discussions that eventually lead to detailed plans that are completed before the assessment begins.

Planning begins with a high-level description of the red team assessment’s goals, assertions, and the rules of engagement. These items will be refined and finalized and should require the signature of the red team lead as well as the leaders from other teams involved in the assessment.

The different components of the red team assessment will be outlined in the planning meetings. Discussion points should include the following:

•   In addition to the technical test, will tabletop exercises be performed?

•   What types of scenarios will be involved?

•   What types of deliverables will be created and at what frequency?

•   What environment will be tested?

Depending on the nature of the assessment, the assessment team may be provided either no technical information or a lot of technical information, such as architecture and network diagrams, data flows, and so on.

Logistical considerations will also need to be accounted for, including the following:

•   Will onsite work be performed?

•   What types of visas, translators, transportation, and travel considerations need to be addressed to support onsite work?

Meetings should result in action items, general assessment timelines, target dates for deliverables, and the identification of a point of contact (POC) for each team.

Defining Measurable Events

For each step in the attack cycle, a set of activities should be measured to determine the following:

•   If the activity was visible to the blue team

•   How long it took the blue team to initially detect the activity

•   How long it took the blue team to begin response activities

•   How long it took to remediate the incident

Both the red team and the blue team will have to keep close track of their efforts. The frequency of communication depends on a variety of factors, but typically information is exchanged at least every three months, and frequently more often, depending on the duration of a testing cycle. Documentation is critical during a red team assessment. Often the red and blue teams are submitting information to the white team on an ongoing basis.

Red Team

Having testing activity logs is critical. Accurately tracking what day and time certain actions were performed allows the organization to determine which red team activities were detected and, more importantly, which were not. Each day of the assessment the red team should be documenting its testing activities, the time they were performed, exactly what was done, and the outcome of the test.

In addition to creating deliverables to report on the red team’s efforts, it is imperative that testing activities be logged. A red team should be able to determine who or what acted on the environment, exactly what was done, and the outcome of every testing action. This means that logs should be maintained from every red team system and tool.
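One lightweight way to meet this logging requirement is an append-only, structured activity log with a UTC timestamp on every entry, one entry per action, so the log can later be correlated against the blue team's detection timeline. The field names and sample entries below are illustrative:

```python
import json
from datetime import datetime, timezone

def log_action(logbook, operator, source, action, outcome):
    """Record one testing action with a UTC timestamp so it can later
    be diffed against the blue team's detections."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": operator,  # who acted on the environment
        "source": source,      # which red team system or tool
        "action": action,      # exactly what was done
        "outcome": outcome,    # the result of the test
    }
    logbook.append(entry)
    return entry

logbook = []
log_action(logbook, "operator1", "10.0.5.12",
           "spear-phish sent to 3 finance users", "1 click, beacon established")
log_action(logbook, "operator1", "10.0.5.12",
           "SMB lateral movement to FILESRV01", "blocked by host firewall")

# One JSON line per action makes the log easy to compare against SIEM data.
print("\n".join(json.dumps(e) for e in logbook))
```

The specific fields matter less than the discipline: every system and tool that acts on the environment should feed a record like this.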

Blue Team

The blue team should always be tracking its response activities. This includes events that were categorized as incidents, events that were categorized as false positives, and events that were categorized as low severity. Once the blue team’s documentation is synced with the red team’s testing activities, an analysis will be performed. The analysis will determine which defensive tactics were effective and which were not, as well as which events were categorized incorrectly—for example, incidents determined to be low or medium severity when they should have been considered high priority. Some organizations only track events that become security incidents. This is a mistake. It’s important to be able to go back in time and understand why something was marked a false positive or categorized inappropriately.

Understanding Threats

As discussed in earlier chapters, knowing your enemy is key to defining your tactics and creating realistic emulations. The goal is to develop an early warning system based on historical context. Knowing who has attacked you in the past and what tools and tactics they’ve used is crucial in understanding how to best protect your organization. Context is often gleaned by looking at the bigger picture and understanding who is attacking your industry and your competitors. Information sharing among companies within the same industry is encouraged now, and industry-specific threat feeds can be a valuable source of information.

Performing an analysis of the adversaries that have attacked your organization in the past is vital. Who is targeting you? What are their motives? How do they normally operate? What malware has been used against you? What other attack vectors have been attempted in the past? An analysis of your adversaries can help you determine the potential impact of likely attacks. Understanding the threat can also help you test for blind spots and determine the best strategy for addressing them. It’s important to understand whether you are being targeted by sophisticated nation-states, your competitors, hacktivists, or organized crime. Your approach to red teaming will be customized by your adversaries’ profiles and their capabilities.

Equally important is to understand what is being targeted specifically. This is where traditional threat modeling can help. Threat modeling helps you apply a structured approach to address the most likely threats. Threat modeling typically begins with the identification of the assets you must protect. What are your business-critical systems? What sensitive information resides within the environment? What are your critical and sensitive data flows?

Next, you need to evaluate the current architecture of the asset you are targeting in your red teaming exercises. If these exercises are enterprise-wide, then the whole environment must be understood, including trust boundaries and connections in and out of the environment. The same applies if your red team exercises are targeting a specific data set or application. In the case of a product or application, all the components and technologies need to be documented.

Decomposing the architecture is key to documentation. What underlying network and infrastructure components are used? Breaking down the environment or application will allow you to spot deficiencies in how it was designed or deployed. What trust relationships are at play? What components interact with secure resources like directory services, event logs, file systems, and DNS servers?

Use a threat template to document all threats identified and the attributes related to them. The Open Web Application Security Project (OWASP) has an excellent threat risk model that uses STRIDE, a classification scheme for characterizing known threats concerning the kind of exploits used or the motivations of the attacker, and DREAD, a classification scheme for quantifying, comparing, and prioritizing the amount of risk presented by each evaluated threat.6 Creating a system to rate the threats will help you refine your testing methodologies.
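As a sketch of how DREAD can drive prioritization, the snippet below averages the five DREAD factors for a few hypothetical threats and ranks them; the threat names and scores are illustrative, not taken from OWASP:

```python
# DREAD scores each threat 1-10 on five factors; the average ranks threats.
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    return (damage + reproducibility + exploitability + affected_users + discoverability) / 5

# Hypothetical threats identified during architecture decomposition
threats = {
    "SQL injection in login form":       dread_score(9, 9, 7, 9, 8),
    "Predictable session tokens":        dread_score(7, 8, 5, 8, 4),
    "Verbose errors leak server paths":  dread_score(3, 10, 9, 2, 9),
}

# Highest-risk threats first, to drive the testing priority list
for name, score in sorted(threats.items(), key=lambda t: t[1], reverse=True):
    print(f"{score:4.1f}  {name}")
```

A ranking like this is what lets the red team spend its limited testing time on the threats the model says matter most.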

Attack Frameworks

Using an attack framework is one of the most comprehensive ways you can plan the attack portion of your red teaming activities. Several attack frameworks and lists are available that can be excellent resources for a red team. One of the most useful ones is the Mitre Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) Matrix.7 The Mitre ATT&CK Matrix has a variety of focuses, including specific matrices for Windows, Mac, and Linux systems, as well as a matrix focused on enterprises. The matrix categories include attacks focused on persistence, privilege escalation, defense evasion, credential access, discovery, lateral movement, execution, collection, exfiltration, and command and control (C2).
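A simple way to plan against the matrix is to tag each planned red team action with its ATT&CK tactic and technique ID, then check tactic coverage before testing begins. In the sketch below, the planned actions and the in-scope tactic list are illustrative, while the technique IDs are real ATT&CK identifiers:

```python
# Map planned red team actions to ATT&CK tactics and technique IDs so
# coverage gaps are visible before the testing cycle starts.
test_plan = [
    {"action": "spear-phishing attachment",  "tactic": "initial-access",      "technique": "T1566.001"},
    {"action": "scheduled task persistence", "tactic": "persistence",         "technique": "T1053.005"},
    {"action": "LSASS memory dump",          "tactic": "credential-access",   "technique": "T1003.001"},
    {"action": "DNS C2 channel",             "tactic": "command-and-control", "technique": "T1071.004"},
]

# Tactics the engagement's rules of engagement put in scope (illustrative)
tactics_in_scope = {"initial-access", "persistence", "privilege-escalation",
                    "credential-access", "lateral-movement", "command-and-control"}

covered = {step["tactic"] for step in test_plan}
print("Covered tactics:  ", sorted(covered))
print("Uncovered tactics:", sorted(tactics_in_scope - covered))
```

The uncovered list becomes an explicit planning question: either add tests for those tactics or document why they are out of scope for this cycle.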

In general, it is always advised that security efforts be based on industry frameworks or standards. There’s no need to re-create the wheel when you can stand on the shoulders of giants. Basing your efforts on a framework lends credibility to your efforts and ensures that your attack list has the input of its many contributors. Another notable source for attack information is the tried-and-true OWASP Attack list.8 The OWASP Attack list contains categories of attacks like resource protocol manipulation, log injection, code injection, blind SQL injection, and so on.

There is rarely a discussion about cyberattacks without the mention of the Cyber Kill Chain framework developed by Lockheed Martin. The framework is based on the fact that cyberattacks often follow similar patterns—reconnaissance, weaponization, delivery, exploitation, installation, command and control (C2), and actions on objectives—the idea being that if you can disrupt the chain, you can disrupt the attacker's attempt. The Cyber Kill Chain framework also has a corresponding countermeasure component. The goal is to detect, deny, disrupt, degrade, or deceive an attacker and break the chain.

Testing Environment

When mimicking a determined adversary, it’s important to defend your testing environment in a variety of ways. Let’s start with the basics. Keep your testing infrastructure updated and patched. The blue team will eventually try to shut you down, but a determined adversary will anticipate this and defend against it using several methods.

Use redirectors to protect your testing infrastructures. Redirectors are typically proxies that look for a specific value and will only redirect traffic that meets a certain criterion. The blue team should have a tough time figuring out what the redirector is looking for, thereby providing a basic layer of abstraction. Redirectors come in many forms. Raphael Mudge, the creator of Cobalt Strike, provides excellent information on redirectors as well as a ton of other useful information in his Infrastructure for Ongoing Red Team Operations blog.9

Be sure to segregate your testing infrastructure assets based on function to minimize overlap. Place redirectors in front of every host—never let targets touch backend infrastructure directly. Maximize redundancy by spreading hosts across providers, regions, and so on. Monitor all relevant logs throughout the entire test. Be vigilant, and document your setup thoroughly!

You can use “dumb pipe” or “smart” redirectors. Dumb pipe redirectors redirect all traffic from point A to point B. Smart redirectors conditionally redirect various traffic to different destinations or drop traffic entirely. Redirectors can be based on HTTP redirection in a variety of ways, such as using iptables, socat, or Apache mod_rewrite. Apache mod_rewrite can be configured to only allow whitelisted URIs through. Invalid URIs will result in redirection to a benign-looking web page, as pictured here.

[Figure: the benign-looking web page served when an invalid URI is requested]
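The decision logic of a smart redirector can be sketched in a few lines. In practice this would live in Apache mod_rewrite rules or iptables rather than Python, and the allowlisted URIs and hostnames below are made up for illustration:

```python
# Minimal sketch of a "smart" redirector: only requests whose URI is on
# the allowlist are forwarded to the C2 backend; everything else is sent
# to a benign decoy page. All URIs and hosts here are hypothetical.
ALLOWED_URIS = {"/news/feed.xml", "/static/jquery-3.3.1.min.js"}
C2_BACKEND   = "https://teamserver.example.internal"
DECOY_SITE   = "https://www.example.com/"

def route(uri):
    """Return the destination this redirector would forward the request to."""
    return f"{C2_BACKEND}{uri}" if uri in ALLOWED_URIS else DECOY_SITE

print(route("/news/feed.xml"))  # beacon traffic reaches the team server
print(route("/admin"))          # a curious analyst sees only the decoy
```

Because the allowlisted values are arbitrary and known only to the red team, a blue team analyst probing the redirector sees nothing but the decoy.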

DNS redirectors can also be set up with socat or iptables. Along the same lines, domain fronting can be used to route traffic through high-trust domains like Google App Engine, Amazon CloudFront, and Microsoft Azure. Traffic can be routed through legitimate domains using domain fronting, including .gov top-level domains (TLDs)!

Adaptive Testing

Although stealth activities are a big part of red team assessments, there’s a lot of value in taking an adaptive testing approach. The stealth activities in a red teaming engagement closely mimic what an advanced adversary would do. However, adaptive testing takes the perspective that there’s value in performing simulations that mimic unsophisticated adversaries too—adversaries that are easier to detect than others.

Because longer-term red team assessments allow for testing cycles, an organization can set a certain cadence to its work to build in an “adaptive testing” perspective and move from clumsy, noisy attacks to testing activities that are stealthy and silent. For example, a three-month testing cycle can be performed in which activities progress from easy to detect to hard to detect. After the three-month cycle, outbrief meetings and a post-mortem analysis can occur, and the blue team can determine at what point testing activities stopped being detected, or stopped “hitting its radar.” The blue team can then use this information to mature its detection capabilities. The next three-month cycle could then begin, giving the blue team the opportunity to test the improvements it has made.

Many different tactics can be used to employ an adaptive approach. You can begin testing by sending out a large phishing campaign to measure how the organization responds and then move to a quieter spear-phishing attack. Scanning activities can begin with aggressive scanning tactics and move to a low-and-slow approach.
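The difference between the two scanning profiles comes down to pacing. A minimal sketch, with illustrative delay values (no actual scanning is performed here; only the timing schedule is computed):

```python
import random

def probe_delays(n_probes, base_delay, jitter, seed=7):
    """Seconds to wait before each probe; jitter keeps the pattern
    from being a detectable fixed interval."""
    rng = random.Random(seed)  # seeded so the schedule is reproducible
    return [base_delay + rng.uniform(0, jitter) for _ in range(n_probes)]

# Aggressive profile: probes fired nearly back to back
aggressive = probe_delays(100, base_delay=0.0, jitter=0.05)

# Low-and-slow profile: probes spread 5 to 15 minutes apart
low_slow = probe_delays(100, base_delay=300.0, jitter=600.0)

print(f"aggressive scan finishes in roughly {sum(aggressive):.1f} seconds")
print(f"low-and-slow scan takes roughly {sum(low_slow) / 3600:.1f} hours")
```

Starting a cycle with the aggressive schedule and ending it with the low-and-slow one gives the blue team a clear threshold: the point in the schedule where its detections went quiet.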

External Assessment

Many people automatically think of a perimeter security assessment when they hear the term penetration test or red team engagement. Although it is not the only component of a red team engagement, performing adversarial emulations on your perimeter is very important. When I think of a red team engagement with an external focus, I think of the importance of understanding what a bad actor anywhere in the world could do with a computer.

Most red teaming activities will combine using tools to scan the environment for information and then using manual testing activities and exploits to take advantage of weaknesses identified. However, this is only one part of an external assessment. It's important to also remember that there can be a “near site” component to a red team exercise, where the red team can show up in person to perform attacks. In addition to Internet-accessible resources, the red team should ensure it is looking for weaknesses in an organization's wireless environment and vulnerabilities related to how mobile technology connects to an organization's technical assets.

External assessments can focus on any IT asset that’s perimeter facing, including e-mail servers, VPNs, websites, firewalls, and proxies. Often an organization will have exposed internal protocols that aren’t intended to be exposed to the Internet, such as the Remote Desktop Protocol (RDP).

Physical Security Assessment

Protecting physical access to an organization’s devices and networks is just as important as any other security control. Many red teaming engagements find problems with the way that locks, doors, camera systems, and badge systems are implemented. Many organizations can’t tell the difference between an easy-to-pick lock and a good door lock and protective plate. Lock picking is a skill that most red teams will have because picking locks is a relatively easy skill to learn and grants unparalleled access to a target.

Motion detectors often open or unlock doors when someone walks past them. This feature is also convenient for attackers attempting to gain physical access to an organization. Many red team assessors have manipulated motion detectors to gain physical access. It can be as easy as taping an envelope to a coat hanger, sliding it between two doors, and wiggling it to trigger the motion detector on the other side of the door. Compressed air can also be used to trigger motion detectors.

Many physical security badges lack encryption. A favorite tactic of red team assessors is to obtain a badge cloner and then go to the local coffee shop or deli and stand in line behind an employee who has a physical security badge. Badge cloners are inexpensive, and all it takes to use one is to stand within three feet of the target to be able to clone their badge and gain the same level of physical access to the organization’s facilities.

Camera systems often have blind spots or resolution that's so poor that a vehicle's license plate isn't legible when captured by the camera. Touchpad locks rarely have their codes changed. Wear and tear often causes fading so that simply looking at the lock can reveal which four numbers are used in the code. All an attacker has to do then is enter the four digits in the right order.
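The arithmetic behind the worn-touchpad attack is worth making explicit. Assuming the four faded digits are distinct, the search space collapses from the 10,000 possible four-digit codes to just the orderings of those four digits, a sketch (the digits shown are illustrative):

```python
from itertools import permutations

# Suppose wear on the touchpad reveals that the code uses these four keys
worn_digits = "2580"

# Only the orderings of the known digits need to be tried:
# 4 factorial = 24 candidates instead of 10,000 possible codes
candidates = ["".join(p) for p in permutations(worn_digits)]

print(len(candidates))  # 24
print(candidates[:3])
```

At a few seconds per attempt, 24 candidates can be exhausted in a couple of minutes, which is why touchpad codes should be changed regularly.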

The possibilities for physical compromise of an environment are endless, and like red teaming activities, they are only limited by your imagination.

Social Engineering

Humans will always be your security program’s weakest link. They are by far a red team’s easiest target. Humans can be targeted via phishing e-mails, USB drives, phone calls, and in person. Consider purchasing inexpensive pens or eyeglasses that contain cameras and replaying video of your in-person social engineering attempts for your client or organization.

Phishing e-mails can be crafted to be very high quality with spoofed e-mail addresses and an impressively accurate look and feel. There's also a benefit to seeing how users respond to poorly crafted e-mails with generic greetings and misspellings. The two components to phishing are delivery and execution. Object Linking and Embedding (OLE), .iso files (ISO images), hyperlinks, and e-mail attachments are common payload delivery mechanisms, and .lnk files, VBScript, JavaScript, .url files, and HTML applications (HTA) are common payloads.

When attempting to gather information about your target, don't underestimate the effectiveness of developing online personas for use in social networking or in other capacities. Catfishing is the practice of creating enticing profiles online and then selectively making connections with your targets. The anonymity of the Internet means that people need to be wary of their new online friends. People also tend to disclose a surprising amount of information via tech forums, for example.

Finally, don’t be afraid to hide in plain sight. Consider performing a somewhat noisy attack with the intention of getting caught as a distraction for a stealthy attack that you are carrying out using a different tactic.

Internal Assessment

To my surprise, organizations sometimes still have to be convinced of the value of an internally focused red team assessment. An internal assessment can mimic a malicious insider, a piece of malware, or an external attacker who has gained physical access. An internal assessment is a great way to gauge how your protections stand up to a person who has made it onto your network.

A person with no credentials but access to a network port can gain a ton of information if the environment is not configured correctly. A variety of man-in-the-middle attacks can prove fruitful when you have access to the wire. SMB relay attacks and Web Proxy Auto-Discovery (WPAD) attacks are consistently effective in leading to credential harvesting, privilege escalation, and frequently the compromise of an enterprise.
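These attacks work because Windows hosts broadcast name-resolution requests (LLMNR on UDP/5355, NBT-NS on UDP/137) for names like "wpad" that any machine on the segment can answer. Tools such as Responder listen for these queries and reply with the attacker's address. As a sketch of the underlying mechanics, the function below parses the queried hostname out of a raw LLMNR query, which uses DNS wire format; the sample packet is hand-built for illustration:

```python
def parse_llmnr_query(packet: bytes) -> str:
    """Extract the queried hostname from a raw LLMNR query (DNS wire format)."""
    # Skip the 12-byte header: transaction ID, flags, and the four count fields
    pos, labels = 12, []
    # The question name is a sequence of length-prefixed labels ending in 0x00
    while packet[pos] != 0:
        length = packet[pos]
        labels.append(packet[pos + 1 : pos + 1 + length].decode("ascii"))
        pos += 1 + length
    return ".".join(labels)

# A victim resolving "wpad" emits a query like this to the LLMNR multicast group
sample = (b"\x12\x34\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00"  # header
          b"\x04wpad\x00"                                      # QNAME: "wpad"
          b"\x00\x01\x00\x01")                                 # QTYPE A, QCLASS IN
print(parse_llmnr_query(sample))  # wpad
```

A rogue responder that answers this query with its own IP then receives the victim's proxy traffic and, frequently, NTLM authentication attempts it can relay or crack.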

Once you have code running in the desktop session of a user, many mechanisms are available to put a keylogger on a machine or to capture screenshots. Using Cobalt Strike's Beacon is an extremely reliable method. The custom-written Start-ClipboardMonitor.ps1 will monitor the clipboard on a specific interval for changes to copied text. KeePass, a popular password safe, has several attack vectors (including KeeThief, a PowerShell version 2.0–compatible toolkit created by @tifkin_ and @harmj0y) that can extract key material out of the memory of unlocked databases. Moreover, KeePass itself contains an event-condition-trigger system, stored in KeePass.config.xml, that can be abused without deploying any malware at all.
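The clipboard-monitoring idea is simple: poll on an interval and record only when the copied text changes. Start-ClipboardMonitor.ps1 does this through the Windows clipboard API; the platform-agnostic sketch below isolates the same polling logic, with the clipboard read injected as a function so the behavior can be demonstrated (and tested) without a real desktop session:

```python
import time

def monitor_clipboard(read_clipboard, report, interval=0.5, iterations=10):
    """Poll the clipboard on a fixed interval; report only when contents change.

    read_clipboard: callable returning the current clipboard text
                    (on Windows this would wrap the Win32 clipboard API)
    report:         callable invoked with each newly observed value
    """
    last = None
    for _ in range(iterations):
        current = read_clipboard()
        if current != last:          # only capture changes, not every poll
            report(current)
            last = current
        time.sleep(interval)

# Simulated session: the user copies a password, idles, then copies a URL
values = iter(["hunter2", "hunter2", "https://intranet/payroll"])
captured = []
monitor_clipboard(lambda: next(values), captured.append,
                  interval=0, iterations=3)
print(captured)  # ['hunter2', 'https://intranet/payroll']
```

Password-safe users are a particularly rich source for this technique, since copying credentials to the clipboard is the normal workflow.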

Once credentials are gained, using a low-tech or human approach can also yield fruitful results for the red team. Simply looking through a company’s file shares can reveal a ton of information due to an overly permissive setting and a lack of data encryption. Although some red teams will be capable of creating their own sophisticated tools, the reality is that in a lot of cases the investment needed to make custom tools is not worth the reward. In fact, blending into the environment by using tools that will not set off red flags is called “living off the land.”10 Living off the land could include using wmic.exe, msbuild.exe, net.exe, nltest.exe, and the ever-useful Sysinternals and PowerShell.

Also consider targeting user groups that are likely to have local admin permissions on their desktops. An organization’s developers are often treated like VIPs and have fewer security controls on their systems. Same goes for an organization’s IT team. Many IT personnel still use their domain admin account for day-to-day use and don’t understand that it should be used sparingly. Also consider targeting groups that are likely to bypass user security awareness training. An organization’s executive leadership is frequently an attacker’s target, and ironically these people are the first to request an exemption from security training.

Privilege escalation methods used to focus on escalating privileges to local admin. However, organizations are getting wise to the risk of allowing everyone to be a local administrator. Tools like PowerUp—a self-contained PowerShell tool that automates the exploitation of a number of common privilege escalation misconfigurations—are perfect for escalating privileges. Many privilege escalation options are available, including manually manipulating a service to modify binPath to trigger a malicious command, taking advantage of misconfigured permissions on the binary associated with a service, %PATH% hijacking, and taking advantage of DLL load order, to name a few.
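To make the %PATH% hijacking case concrete: when a program or service launches a binary by bare name, the directories in %PATH% are searched in order, so a writable directory listed before System32 lets an attacker's copy win. The simulation below models that search order; the directory names and file layout are hypothetical:

```python
def first_match(command, path_dirs, existing_files):
    """Return the path Windows would execute: %PATH% dirs are searched in order."""
    for d in path_dirs:
        candidate = d + "\\" + command
        if candidate in existing_files:
            return candidate  # first hit wins; later directories are never reached
    return None

# Hypothetical misconfiguration: a world-writable tools directory precedes System32
path_dirs = [r"C:\BuildTools\bin", r"C:\Windows\System32"]
existing = {
    r"C:\BuildTools\bin\net.exe",     # attacker-planted binary
    r"C:\Windows\System32\net.exe",   # legitimate binary
}
print(first_match("net.exe", path_dirs, existing))
```

Any privileged process that later runs `net.exe` by bare name executes the planted copy, which is why PowerUp enumerates writable %PATH% entries as one of its standard checks.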

Search for unprotected virtual machine backups. It’s amazing what you can find on a regular file server. Using default credentials is still a tried-and-true approach to gaining access in many organizations.
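Default-credential checks lend themselves to simple automation: keep a short list of factory username/password pairs and try each against the target's login routine. The sketch below illustrates the pattern with a simulated login function standing in for a real management interface (the credential list and the `accepts` stub are illustrative, not from any specific product):

```python
# A few factory pairs commonly left unchanged; real lists are much longer
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("tomcat", "tomcat"),
]

def spray_defaults(try_login, creds=DEFAULT_CREDS):
    """Return every default pair the target accepts."""
    return [(user, pw) for user, pw in creds if try_login(user, pw)]

# Simulated appliance that shipped with, and kept, its factory login
accepts = lambda user, pw: (user, pw) == ("tomcat", "tomcat")
print(spray_defaults(accepts))  # [('tomcat', 'tomcat')]
```

In practice, throttle attempts and stop on the first hit: account lockouts and failed-logon alerts are exactly the detections a stealthy red team wants to avoid tripping.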

When exfiltrating data from an environment, first of all, be sure it is sanctioned via the assessment’s rules of engagement. Then find creative ways to remove the data from the environment. Some red team assessors have masqueraded their data as offsite backup data, for example.

Lessons Learned

Postmortem exercises performed as a part of a red team engagement are often detailed and have a strong emphasis on knowledge transfer. Red team assessments need to have a heavy focus on “documenting as you go,” in order to capture all the information that will allow an organization to perform a detailed analysis of what is working and what needs to be redesigned. This postassessment analysis is often called an after action report (AAR).

An AAR should include lessons learned from different perspectives. It’s also important to document what went right. A detailed understanding of which tools and processes were effective can help an organization mimic that success in future endeavors. Including different perspectives also means capturing information from different teams and sources. “Lessons” can come from unlikely sources, and the more input that goes into the AAR, the less likely an important observation will be lost.

The AAR should be used by the organization’s leadership to inform strategic plans and create remediation plans for specific control gaps that need to be addressed.

Summary

Red team exercises are stealthy ethical hacking exercises that are unannounced to the blue team. They allow the blue team to defend a target and an organization to gauge how its controls and response processes perform in an emulation situation that closely mimics a real-world attack. Red team exercises limit communication and interaction between the red and blue teams. They are most beneficial to organizations that have mature security programs, those that have invested a significant amount of effort in establishing and testing their security controls. Organizations that are still in the process of building a security program and refining their security controls and processes may benefit more from the collaboration and communication inherent to purple team exercises, covered in the next chapter. Purple team exercises are ideal for getting an organization to the point where it is ready for the stealthy nature of a red team exercise.

References

1.  Carl von Clausewitz, On War, 1832. For more information, see https://en.wikipedia.org/wiki/On_War.

2.  Department of Defense Directive (DoDD) 8570.1, August 15, 2004, https://static1.squarespace.com/static/5606c039e4b0392b97642a02/t/57375967ab48de6e3b4d00.15/1463245159237/dodd85701.pdf.

3.  US Military Joint Publication 1-16: “Department of Defense Dictionary of Military and Associated Terms,” Joint Publication 1-02, January 31, 2011, www.people.mil/Portals/56/Documents/rtm/jp1_02.pdf; “Multinational Operations,” Joint Publication 3-16, July 16, 2013, www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_16.pdf.

4.  Jason Kick, “Cyber Exercise Playbook,” The Mitre Corporation, November 2014, https://www.mitre.org/sites/default/files/publications/pr_14-3929-cyber-exercise-playbook.pdf.

5.  Justin Warner, Common Ground blog, https://www.sixdub.net/?p=705.

6.  “Threat Risk Modeling,” OWASP, https://www.owasp.org/index.php/Threat_Risk_Modeling.

7.  Adversarial Tactics, Techniques & Common Knowledge, ATT&CK, Mitre, https://attack.mitre.org/wiki/Main_Page.

8.  Category:Attack, OWASP, https://www.owasp.org/index.php/Category:Attack.

9.  Raphael Mudge, “Infrastructure for Ongoing Red Team Operations,” Cobalt Strike, September 9, 2014, https://blog.cobaltstrike.com/2014/09/09/infrastructure-for-ongoing-red-team-operations/.

10.  Christopher Campbell and Matthew Graeber, “Living Off the Land: A Minimalist Guide to Windows Post-Exploitation,” DerbyCon 2013, www.irongeek.com/i.php?page=videos/derbycon3/1209-living-off-the-land-a-minimalist-s-guide-to-windows-post-exploitation-christopher-campbell-matthew-graeber.
