Adversaries and Attribution

Only a small percentage of malicious activity originates from nation-state attackers, yet these attackers do far more damage than most other threats an organization will face. In fact, nation-states are responsible for some of the most damaging breaches in history, such as the attacks against Anthem, the U.S. Office of Personnel Management, the World Anti-Doping Agency, Google, Sony Entertainment, and many more.1

Breaches cost victims millions of dollars. In 2019, the average cost of a data breach to private-sector companies was nearly $4 million, but some cost organizations far more.2 For example, the 2017 Equifax breach cost the company $275 million, and though it’s still inconclusive, security outlets have suggested that a nation-state attacker was responsible for it.3 The breach was also highly publicized, likely affecting customer confidence in Equifax.

When an organization is under attack, the security team immediately focuses on defending the organization and mitigating the breach to prevent further compromise. But while defending against the initial threat may discourage less sophisticated attackers, this generally isn’t the case when it comes to advanced adversaries. Remember that nation-state attackers conduct long-term and persistent attacks; the first malicious event that a victim identifies may be just one of several stages of a multiphase attack. And if the attacker fails, they may simply regroup and try another method. Understanding who your attacker is, including the tactics and malware they have previously used, will significantly increase your chances of mitigating nation-state attacks.

Threat Group Classification

The first step to attributing a threat is categorizing it. Some advanced attackers may fall into multiple categories, but most fit into one of the following four: hacktivism, cybercrime, cyber espionage, and unknown.

Hacktivism

Hacktivist groups are often motivated by political or religious beliefs. For example, the hacking collective Anonymous often conducts attacks to harm those they deem a threat to human rights and freedom of speech. The group is decentralized, with members located across the world, and it has no formal requirements to join except a belief in the cause. On social media, Anonymous uses the #operations tag to market its efforts, which often involve denial-of-service attacks and website defacements. You may recognize Anonymous by its use of the Guy Fawkes mask, which represents the group's anonymity. Since anyone can claim allegiance to Anonymous regardless of their hacking skills or capabilities, the group's level of success varies greatly. Generally, hacktivist attacks have personal aims, which separates them from other threat categories. Moreover, hacktivists may pose a high level of risk to organizations, since they can have many followers who can themselves participate in attacks. The sophistication of these attacks varies widely, but because hacktivists draw on both human and technical resources, they can achieve medium to high success levels.

DDoS attacks are popular within this category because of their low cost and the comparatively high level of damage they can inflict. Free, open source denial-of-service tools such as Slowloris4 and the Low Orbit Ion Cannon5 have made it easy for hacktivist groups to let their followers participate in attacks. Many of these tools are equipped with graphical interfaces, such as the one for the Low Orbit Ion Cannon shown in Figure 5-1, making them accessible to almost anyone and minimizing the technical sophistication needed to conduct these operations.

Screenshot of a graphical interface with a “Select your target” field that accepts a URL or IP address and an “Attack options” field that accepts a timeout value and TCP/UDP message.

Figure 5-1: Low Orbit Ion Cannon graphical interface6

Another common tactic is to embarrass the targeted organization publicly. To achieve this, the hacktivist group may conduct attacks aimed at compromising the organization's data. Unlike others who use this tactic, hacktivists do not usually steal data for financial gain or intelligence. Instead, they publicly post stolen data, such as sensitive emails, intellectual property, and confidential documents, for anyone to view, often costing individuals their jobs and embarrassing the target organization.

A third tactic for which hacktivists are known is website defacement, which is similar to posting victim information publicly. Hacktivists deface websites to embarrass the victim and to post messages that spread propaganda. Hacktivist groups have used this tactic alongside DDoS attacks with relative success.

Cybercrime

Criminal groups are financially motivated, and thus they generally operate differently than espionage or hacktivist organizations. In past years these groups have achieved high-level compromises within the retail and consumer finance industries, often by relying on social engineering to gain initial access to the victim’s environment. Additionally, criminal organizations can purchase cybercrime services on hacker forums hosted on the internet and the so-called Dark Web.

The Dark Web is an online space for websites unknown to most search engines and inaccessible without special encryption applications or protocols. To access the Dark Web, you must traverse encrypted networks known as the Darknet. (If these terms seem confusing, keep the following in mind: websites make up the Dark Web, whereas networks make up the Darknet; the Darknet provides the infrastructure that the Dark Web lives on.) Together, the Dark Web and the Darknet form a hidden layer of the internet, one designed to keep its websites and communications anonymous, making it attractive for cybercriminals who want to stay under the radar. This also makes it an excellent place for criminals to sell compromised data and to purchase and distribute malware.

The malware that cybercriminal groups use may be custom made or publicly available for sale. Individuals behind the activity often distribute and control who has access to both the malware and its supporting infrastructure. Lower-level criminals tend to use commodity malware, which is publicly available and usually not custom-made or unique to a specific attacker. Some cybercriminals are more sophisticated, of course. They may purchase and modify commodity malware to elude detection and suit their cause. But even then, their malware typically isn’t as advanced as that seen in espionage activity.

Cybercriminals often use malware designed to steal credentials, demand a ransom, or compromise the point-of-sale systems that retailers use. And as I discussed in Chapter 3, these financially driven attacks do not usually target financial institutions themselves. That’s because most banks have robust defenses that make a successful compromise far more difficult than attacks targeting individual consumers. While certain advanced criminal groups, who often share many of the TTPs seen with espionage attackers, do conduct attacks against institutions (and make headlines when they do so), they’re only a small percentage of the cybercrime landscape.

Cybercrime is the largest adversary category and the only one that sells services. These services often appear in online markets, paid for in cryptocurrency. Unfortunately, this makes tracking the money difficult. The following are examples of services that fall within this category.

Hacking as a service

  1. Some hackers try to make a living by posting ads to online markets or the Darknet, offering their skills to the highest bidder. Any consumer can purchase these hacking services. Figure 5-2 is an example of a “hacker for hire” post on a Dark Web marketplace.
Screenshot of an e-commerce page selling a product called “1 highly skilled hackers can help with hacked email, Facebook, websites, social media, mobile devic” for 460 dollars

Figure 5-2: A Dark Web posting for hacking services

Malware as a service

  1. Malware developers may also sell their malware in online markets. This malware is often designed to turn an illegal profit, or else it provides remote access to a target's system or data. Criminal consumers (those who purchase access to this malware) may, in some instances, lease the malware instead of buying it outright. They may also pay for access to its supporting infrastructure, which often includes the servers and software used for command and control, allowing them to conduct attacks, track victims, and even collect funds from victims. Leasing the malware places the responsibility of maintaining the product on the provider.
  2. Criminal service providers have a vested interest in keeping their malware and supporting services up and running. To effectively infect victims, their malware must go undetected, requiring the provider to regularly patch and update it. Most individual cybercriminals don't have the time and resources to keep malware code up-to-date; service providers usually do, as they're not actively engaged in the attacks themselves.

Infrastructure as a service

  1. Infrastructure as a service relies on a similar client/provider model. A provider will own, control, and service the infrastructure, which the client then leases. This infrastructure allows the consumer to stage and distribute their malware, as well as administer command-and-control services. Criminal consumers greatly value this approach, because it provides an additional layer of anonymity between themselves and the victim; the victim can’t easily link the infrastructure back to them.
  2. Usually, the infrastructure provider takes a percentage of the profit as a charge for their services. And like the malware-as-a-service model, the provider must ensure that their infrastructure is available and accessible to their criminal clients. This may involve updating and changing infrastructure to evade law enforcement, as well as using encryption services.
  3. Bulletproof hosting (BPH) is a good example of infrastructure as a service and is popular among cybercriminals. Unlike legitimate infrastructure providers, BPH providers allow malicious activities to take place on their networks and domains. For example, customers can host malware on a BPH provider's servers or use the service for command and control of botnets and other malicious and illegal activities. BPH providers often sell their services in criminal markets, allowing anyone who can pay to take advantage of these capabilities. This also provides a level of anonymity for BPH customers, since they aren't registering infrastructure themselves.

Botnets as a service

  1. Another service that criminal consumers can purchase or lease for use in attacks, a botnet is a network of infected computers (also known as zombies) controlled by one individual. Criminal consumers can purchase access to the botnet to conduct DDoS attacks or spam phishing campaigns. Usually, the owners of the hosts that power the botnet are unaware that criminals are using their systems and resources, which makes it difficult for law enforcement to identify the attackers: the victims who power the botnet usually have no affiliation with the controlling attacker or service provider.

Cyber Espionage

The goal of cyber-espionage attackers is to steal sensitive information—intellectual property, for example, or internal communications such as emails. The stolen data usually gives the attacker, typically a nation-state, a geopolitical advantage: it isn't posted publicly or sold, as with the other threats discussed. Nation-states typically conduct advanced, long-term, and persistent attacks that are difficult to defend against. Because of this, they pose the greatest level of cyber risk to an organization.

Espionage attackers typically have more resources available than other categories of threats due to state funding. This grants espionage groups access to custom-developed, and often very sophisticated, malware. Additionally, they have the ability and resources to frequently change or expand the types of cyber infrastructure and tools they use in their attack campaigns. And as we discussed in Part I of this book, espionage groups often have access to zero-day exploits. True zero-day exploits are rare, but they're extremely effective, since no patch exists to fix the exploited vulnerabilities. Developing solutions to address the issue takes time; the attacker has free rein until vendors create a patch and organizations apply it.

Like some of the other categories, cyber spies frequently use spear-phishing emails to deliver malware and access the target's environment. But before they do so, they conduct reconnaissance to learn about the target. At that point, they use what they have learned to craft targeted spear-phishing emails. Unlike spam and criminal phishing campaigns, espionage attackers usually target a small number of predetermined recipients. Furthermore, these attackers spend much more time profiling the target, perhaps by tracking the target's social media interests and their professional or personal associations.

Whereas criminal phishing emails can be generic, attackers in the espionage category often tailor their emails to the target's interests. Sometimes, they will spoof their email address to masquerade as someone the target knows. In other cases, the attacker will compromise an account belonging to someone familiar to the target and use it to send the spear phish, which adds a level of authenticity. If the victim believes the email is from someone they know, they are more likely to open it, all the while remaining unaware that the sender's account was compromised.

In Chapter 2, we explored how spear-phishing emails serve as the primary attack vector in espionage campaigns. In Chapter 1, we discussed how China conducted watering-hole attacks in an effort to compromise U.S. political and government-affiliated organizations. Watering-hole attacks are popular among nation-states and have proven particularly effective. Recall that a watering-hole attack is when an attacker compromises a legitimate website and uses it to serve malware to unknowing visitors. If an attacker compromises a website within the target base’s industry, they can use it to gain a foothold on an organization.

The most common technique used to weaponize a legitimate website in watering-hole attacks is to gain access and place an HTML iframe in a web page's source code. The iframe silently loads content from attacker-controlled infrastructure, which then covertly downloads malware onto the visitor's system in the background. In past incidents, watering-hole websites have used advanced measures to compromise visitors' systems. For example, some of these websites have used scripts or malware configuration parameters to execute only if the victim has a specific language setting or browser. This filtering technique is likely a response to the large volume of traffic that may traverse the website.
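
Defenders can hunt for this kind of injection by auditing a page's source for iframes that pull content from unfamiliar hosts. The following is a minimal sketch using only Python's standard library; the audit_page function name, the trusted-domain allowlist, and the sample HTML are hypothetical, and real injected iframes are often obfuscated well beyond what a simple parse will catch.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class IframeAuditor(HTMLParser):
    """Collect iframe sources that point outside a set of trusted domains."""
    def __init__(self, trusted_domains):
        super().__init__()
        self.trusted = trusted_domains
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "")
        host = urlparse(src).hostname or ""
        if host and host not in self.trusted:
            # Invisible iframes are a common injection pattern, so note
            # whether the element is styled to be hidden.
            hidden = "display:none" in attrs.get("style", "").replace(" ", "")
            self.suspicious.append((src, hidden))

def audit_page(html, trusted_domains):
    """Return (src, is_hidden) pairs for iframes loading untrusted content."""
    auditor = IframeAuditor(trusted_domains)
    auditor.feed(html)
    return auditor.suspicious
```

A hidden iframe pointing at an unknown host is not proof of compromise on its own, but it is exactly the kind of artifact worth escalating for manual review.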

For example, if an attacker compromised a legitimate website but was interested only in targeting South Korean victims, they would face a problem: by hosting malicious code on the site, they’d wind up with a lot of indirect victims from the rest of the world. Not only would this bring additional attention to the activity, but it would also generate a lot of “noise” from the sheer volume of victims. Let’s further suppose that in this example, the attacker is using malware designed to exploit a vulnerability in Microsoft Internet Explorer. If the infection attempt affected each of the website’s visitors regardless of their browser, all downloads by users on other browsers would be ineffective.

Simply put, the volume of affected users would consume even more of the attacker's resources and draw attention to the attack. To minimize these issues, the attacker could configure the malware to execute only when it identifies an Internet Explorer user agent with Korean set as the default browser language. Now only Korean-language users browsing the site with Internet Explorer would be targeted. While this is just an example, actual espionage attacks have used this tactic. For a description of other common attack vectors, see Table 5-1.

Table 5-1: Common Attack Vectors

Spear phish: A form of email deception involving social engineering, designed to get the user to click a malicious link or open an attachment to deliver malware and infect the victim's computer.
Man-in-the-mailbox: A type of attack in which the attacker intercepts, monitors, and sometimes alters the communication between two email accounts.
Watering hole/strategic web compromise: An attack that infects the targeted domain's visitor base. The attacker finds a vulnerability in a public-facing web server and exploits it to gain access to the domain. Once present, the attacker adds code that redirects visitors or distributes malware to them.
Distributed denial of service: A less sophisticated but highly effective attack for bringing services offline. Attackers do not need a high level of technical sophistication but do require many participants to take part in the attack (with or without their knowledge). Propaganda and activism are often the motivation behind DDoS attacks.
SQL injection: A type of attack frequently used to target web servers, often as the initial vector to compromise and stage watering-hole attacks.
DNS poisoning: A type of attack in which the adversary overrides authentic name resolution to direct the user's name request to attacker-controlled infrastructure. This can take place at the host level or on a DNS server.
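
Host-level DNS poisoning, the last vector in the table, often shows up as rogue entries in a machine's hosts file. The sketch below flags hosts-file lines that override resolution for domains you care about; the file content and domain list in the example are hypothetical, and real investigations would also compare the returned IPs against known-good infrastructure.

```python
def find_hosts_overrides(hosts_text, monitored_domains):
    """Return (ip, hostname) pairs where a hosts file overrides
    resolution for a monitored domain."""
    overrides = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            if name.lower() in monitored_domains:
                overrides.append((ip, name.lower()))
    return overrides
```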

Unknown

Every threat is considered unknown at the beginning of an investigation. However, some threats fall into a gray area even after they've been analyzed. These attacks elude simple classification, perhaps because their TTPs cross multiple categories, or because there isn't enough information about the attack to determine the attacker's motivations. Investigators should place these threats into the “Unknown” bucket. From there, they can analyze the activity and identify indicators of compromise, which may link similar instances together.

Investigators should classify and analyze unknown threats based on the behaviors and tactics they observe in each attack. At this point, they can cluster threats with similar activities and behaviors into “buckets” until they can give them definitive attributions. Monitoring and comparing tactics with other activities often helps bring to light similarities found across multiple attacks. This allows you to map out or link one attack to another.
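
The bucketing approach described above amounts to a simple clustering pass: represent each unclassified incident as a set of observed indicators, and merge incidents into the same bucket whenever they share one. The following is an illustrative sketch; the incident names and indicators are hypothetical, and real clustering would weigh indicator uniqueness rather than treat all overlaps equally.

```python
def cluster_incidents(incidents):
    """Group incidents that share at least one indicator of compromise.

    incidents: dict mapping incident name -> set of indicators
    Returns a list of clusters, each a sorted list of incident names.
    """
    clusters = []  # each cluster: (set_of_names, set_of_indicators)
    for name, iocs in incidents.items():
        iocs = set(iocs)
        # Find every existing cluster whose indicators overlap this incident's.
        overlapping = [c for c in clusters if c[1] & iocs]
        merged_names, merged_iocs = {name}, set(iocs)
        for c in overlapping:
            merged_names |= c[0]
            merged_iocs |= c[1]
            clusters.remove(c)
        clusters.append((merged_names, merged_iocs))
    return [sorted(names) for names, _ in clusters]
```

Two incidents that land in the same bucket are not yet attributed, but the shared indicators give you a concrete starting point for comparing their other tactics.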

Conducting Attribution

Now that we’ve identified the categories threats can fall into, you need to understand how to conduct attribution properly. To do this, you need to use an approach that is both consistent and repeatable. More than one such model exists;7 the one you choose to use may depend on the organization you work for. Figure 5-3 is the model I use when conducting attribution.

Circular diagram divided into four quadrants: Infrastructure, with the sub-bullets “C&C domain / IP,” “Malware hosting domain / IP,” “Spear Phishing originating IP,” and “Domain registrant”; Persona, with the sub-bullets “Similar façade in sending email address,” “Spear phishing alias,” and “Spear phishing theme”; Targeting, with the sub-bullets “Similar target list,” “Same targeter,” and “Similar spear phishing email,” and Malware with the sub-bullets “Similar malware,” “Malware author,” “Exploit,” “Encoding / XOR key,” “Digital certificate.”

Figure 5-3: An attribution model

Attribution is a complex process. Attackers don't want you to know who they are and will go to great lengths to maintain their anonymity, and nation-states intentionally build deception into their operations to point researchers down the wrong path. This is why it's important to conduct attribution in a consistent, repeatable, evidence-based manner, and why you should always work from a model. The model in Figure 5-3 is a modified version of the popular Diamond model,8 and it uses four categories to derive attribution.

Before we walk through the process of attributing an attack, a few words of advice. First, when done appropriately, attribution can help you identify the attacker as well as their motivations. Depending on the attacker, this can be valuable; in other situations, attribution is irrelevant. For example, if your organization receives a templated mass-email phish delivering highly prevalent malware, you likely would not need to conduct attribution, as anyone can purchase and use commodity malware phishing kits. Depending on the size of your organization, you may receive many of these in a day, and attributing this type of attack would take time and resources better spent investigating targeted attacks threatening your organization. When an attacker profiles and selects a target, the threat is much more likely to be advanced and so warrants attribution.

Next, attribution claims should always derive from evidence and facts, never from assumptions. Aside from simply being inaccurate, misattributions provide faulty intelligence. This faulty intelligence often informs future decisions critical to defending against an advanced threat, potentially leading investigators on a wild goose chase. This may leave an organization vulnerable in one area while erroneously dedicating resources to defending another. Furthermore, it causes confusion and may cause other analysts to base their attributions on less-than-valid data. In the end, incorrect attribution will leave you looking at the wrong tactics, malware, and other critical indicators of compromise that you’d want to leverage when battling an advanced attacker.

Lastly, whenever you read an article about a major breach, keep in mind that there are real people behind the attack: people who have habits and preferences, such as the specific tools or passwords they like to use, the aliases or personas behind their fake accounts and domain registrations, and the naming themes in their infrastructure, among many others.

Attribution Confidence

Investigators are rarely 100 percent certain about their attributions. For that reason, you must qualify every attribution with a rating based on your confidence in it. When rating your confidence, it's important to be consistent, which requires organizations to clearly define the requirements for each confidence category. Of course, these categories will likely include a broad set of criteria so as to encompass many situations. Here are some examples:

  1. Low The evidence leading to your attribution is weak or circumstantial. There may be a lack of data, which leaves information and intelligence gaps.
  2. Moderate You have evidence from at least one of the quadrants of the attribution model shown in Figure 5-3. For example, you may identify unique malware associated with a known attacker or a known registrant email previously used to register adversary infrastructure. You may have additional circumstantial evidence or secondhand information from another source that appears valid but does not originate from your own sourced data.
  3. High You have conclusive evidence that supports your attribution assessment. The evidence should be overwhelmingly strong and leave little doubt to anyone who reviews the evidence. Generally, you’ll want to have supporting evidence from multiple quadrants of the attribution model to give your attribution a high confidence rating.

The use of confidence bands also helps to prevent poor or inaccurate attributions. Poor attribution occurs when an organization or security analyst attributes a cyber threat based on weak (or no) evidence or, even worse, on an assumption. Good, strong attributions are those derived from two or more quadrants of the model.
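
One way to keep these bands consistent is to encode them as an explicit rule over the attribution model's quadrants. The sketch below is illustrative only: the thresholds follow the criteria above (multiple quadrants for high, one for moderate), but your organization's definitions may differ.

```python
# The four quadrants of the attribution model in Figure 5-3.
QUADRANTS = {"infrastructure", "malware", "persona", "targeting"}

def rate_confidence(evidence):
    """Map collected evidence to a confidence band.

    evidence: dict mapping quadrant name -> list of supporting observations
    """
    supported = {q for q, obs in evidence.items() if q in QUADRANTS and obs}
    if len(supported) >= 2:
        return "high"      # strong evidence from multiple quadrants
    if len(supported) == 1:
        return "moderate"  # solid evidence from a single quadrant
    return "low"           # weak or circumstantial evidence only
```

Codifying the rule this way makes each rating reviewable: anyone can see exactly which quadrants the assessment rests on.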

The Attribution Process

Having a process or model to follow when conducting attribution is critical to making each attribution consistent and valid. The attribution process can vary from one analyst or organization to the next, but Figure 5-4 shows one such model.

Circular diagram with six nodes: Gather supporting data, assess, hypothesize, challenge/defend, confidence assessment, and document

Figure 5-4: The attribution process

Gather supporting data

  1. Analysts gather and analyze a great deal of data during an investigation, but not all of it is pertinent for the purposes of attribution. An analyst should aim to gather attributable data—that is, any data that can provide supporting evidence toward making a valid attribution. Examples include infrastructure, malware, persona, and targeting data.
  2. You may also want to conduct open source research to bolster the evidence gleaned from the attack itself. Open source information can be as detailed as finding the identity of a malware author on a hacking forum, but it can also include data as circumstantial as a political event that might serve as motivation for a nation-state attack.

Assess

  1. Once you collect the data, you need to process and analyze it. This will allow you to assess the threats and create charts or visualizations based on metrics and analytics. You’ll want to track attacker activities and timeframes by analyzing timestamps on log data associated with the activity. Time-zone analysis—that is, documenting the exact time at which each malicious event took place on your network—can help you track the times when the attacker was active. Often, trends in this data will allow you to determine or narrow down the attacker’s time zone. You can then cross-reference this data against various regions that use those time zones to determine the origin of the attacks.
  2. You’ll also want to look at any malicious binaries for interesting strings or language settings in custom malware. Sometimes you’ll find file paths with operation or malware names written in a specific language or even an adversary’s alias or username.

Hypothesize

  1. In this step, you will generate your hypothesis. Brainstorm ideas and look at the complete analysis you conducted in the Assess step, and then try to examine the big picture. Where does the evidence take you? Are there outliers in the data that may provide motivation hints? You can have several attribution theories; in the next step, you’ll conduct analysis to test your hypotheses.

Challenge/defend

  1. In this step, all parties invested in the attribution process should have a meeting to debate, evaluate, and rank all competing hypotheses. To do this, all stakeholders should attempt to poke holes in each theory. The individual spearheading each hypothesis will then defend it. Once done, you should have enough information to rank each attribution hypothesis from strongest to weakest.

Confidence assessment

  1. Next, take the top-ranked hypothesis and conduct a confidence assessment. Use the bands discussed in the “Attribution Confidence” section earlier in this chapter to accomplish this.

Document results

  1. At this point, you’ve analyzed all your attributable data, identified relevant evidence, and created and challenged each of the competing hypotheses. Of course, all this time and work is worthless unless you document and communicate your analysis results. Record your attribution assessment and confidence rating in the attacker’s threat profile, which we discuss later in the book. Regardless of where you put the information, documenting your work and results is critical and one of the most overlooked steps of the attribution process. When in doubt, write it out.

Identifying Tactics, Techniques, and Procedures

Identifying an attack's tactics, techniques, and procedures can help you profile an attacker. Understanding the attacker's TTPs is especially useful when defending against future attacks: it's helpful to know an attacker's go-to tactics. Some tactics span multiple threat categories, while others are unique to one. Table 5-2 shows examples of popular TTPs seen across cyberattacks.

Table 5-2: Comparison of Common TTPs by Group

Cybercrime Cyber espionage Hacktivism
Phishing email X X
Spam campaign X
Spoofed accounts known by the target used in phishing campaigns X
Strategic web compromise (SWC) X X
Custom malware X
Publicly available malware X X
Use of Dynamic DNS X X
Use of C&C servers X X
“For sale” malware X X
Strong use of malware X
Use of zero-day exploits X X

The TTPs listed here are some of the more common tactics seen in cyber threats. However, these change frequently, and you should evaluate them based on what you observe at the time of the activity. Also notice that some TTPs appear in more than one threat category, while others are unique. For instance, phishing emails appear in both cybercrime and cyber espionage activity.

DDoS attacks frequently appear in hacktivist attacks, but they also show up in money-motivated cybercrime and even, though less commonly, in nation-state attacks. In hacktivist- and cybercrime-motivated attacks, the adversary often notifies the victims directly, telling them to pay up or face an intensified DDoS attack that makes their websites and services unavailable to legitimate customers. Nation-state attackers may use DDoS attacks either as a distraction or as a way of sending a message to the nation where the victim organizations are based. Understanding the attacker's motivations through the TTPs they use can help in qualifying the agent behind the attacks. Hacktivist groups almost always announce their plan of attack before executing it, whereas cybercriminals do not. The primary difference is the attacker's motivation and end goal: cybercrime is financially motivated, whereas hacktivists often look to embarrass their targets or disrupt their operations or services.
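
You can operationalize a comparison like Table 5-2 by encoding each category's typical TTPs as a set and scoring an observed attack against each. The mapping below is illustrative, loosely following the patterns discussed in this chapter rather than reproducing the table exactly; treat the scores as a triage hint, not an attribution.

```python
# Illustrative TTP-to-category mapping (not an exact copy of Table 5-2).
CATEGORY_TTPS = {
    "cybercrime": {"phishing email", "spam campaign", "publicly available malware",
                   "for-sale malware", "dynamic dns"},
    "cyber espionage": {"phishing email", "spoofed known accounts", "custom malware",
                        "strategic web compromise", "zero-day exploit"},
    "hacktivism": {"ddos", "website defacement", "publicly available malware"},
}

def score_categories(observed_ttps):
    """Rank threat categories by how many observed TTPs they share."""
    observed = set(observed_ttps)
    scores = {cat: len(observed & ttps) for cat, ttps in CATEGORY_TTPS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```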

Conducting Time-Zone Analysis

As we previously discussed, timestamp logs from victim data can tell you important information about your attacker. An analyst can use victim timestamp data to plot out the hours, days, and weeks in which the attacks were actively taking place. You can often identify patterns to determine the attacker’s workdays and off days, which is especially relevant when facing a nation-state attacker. Nation-state attacks frequently take place over several months to a year, and because of this, they make good candidates for time-zone analysis.

The first step is to collect and document the attack and the times at which it took place in the victim's environment. You can find this time-based evidence in the victim's system, network, and security device logs. There are two common sources for it: post-compromise activity and compile times. Post-compromise activity is the part of the attack conducted after the attacker acquires initial access. The attacker often spends this phase conducting manual operations, so the post-compromise phase frequently requires human on-keyboard interaction to further exploit the victim network. The following are some examples of post-compromise activities:

  • Credential collection: Many attackers will use password-collection tools like Mimikatz to obtain their victim’s credentials. Though these tools often execute in the victim’s system memory, many security products will timestamp each tool’s usage and the commands that the attacker entered to use them.
  • Network and vulnerability scanning: Often, attackers can gain access to a target’s environment but still have limited access to both system and network resources. Network enclaves and Active Directory rules and permissions will often restrict much of the victim’s environment. Sometimes, an attacker can get around this by using network or vulnerability scanning tools to identify critical infrastructure and any of its weaknesses or vulnerabilities. The use of these scanning tools can tell us when the attacker was live and active on the network.
  • Command line or PowerShell use: Attackers will often obtain remote access during the initial infection. Once in the environment, a common practice is to take advantage of what is already present and available. As previously discussed, PowerShell is a popular choice for attackers, particularly given that it's already present in most current Windows environments. Attackers frequently use PowerShell for a variety of tasks, and many endpoint detection technologies can capture this information. Security products might identify PowerShell activity, but unfortunately they'll rarely block it, because they typically won't identify the activity as malicious. When a user runs commands and PowerShell scripts, the specific commands entered, the resources used, and the time of each use are often logged. All of these are helpful data sources for attacker time-zone analysis.
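As a starting point, you can pull these activity timestamps out of log data programmatically. The following is a minimal Python sketch; the log line format here is hypothetical (real EDR and Windows event logs vary widely), so you would adapt the pattern to your own sources:

```python
import re
from datetime import datetime

# Hypothetical log format for illustration: "YYYY-MM-DD HH:MM:SSZ <message>".
# Real log sources will need their own patterns.
LOG_LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})Z\s+(.*)$")

def extract_event_times(lines):
    """Pull UTC timestamps out of log lines recording attacker activity
    (PowerShell use, scanner execution, credential-tool launches, etc.)."""
    times = []
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            times.append(datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S"))
    return times
```

The resulting list of timestamps becomes the raw input for the plotting and time-zone overlay steps described later in this section.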

The second kind of timestamp you can collect is the time of the malware's creation, known as the compile time. Windows executables, for example, carry a compile timestamp in their file header documenting when the binary was compiled. Keep in mind that an attacker can forge these timestamps, which weakens them as a data source. Still, when the data is valid and you have a lot of it, you can determine valuable information about the attacker. For example, since nation-state attackers are government operators who often work a standard workday, this data can provide meaningful insight. That said, to make these judgments you'll need a large enough grouping of samples to ensure both statistical validity and consistency. Another way to gather compile-time data is by searching public malware repositories for detection names.
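As a rough illustration, you can read this timestamp out of a Windows PE binary with nothing but the Python standard library. The field offsets follow the published PE/COFF layout, but treat the result with the same skepticism as any attacker-controlled data:

```python
import struct
from datetime import datetime, timezone

def pe_compile_time(path):
    """Read the COFF TimeDateStamp (compile time) from a Windows PE file."""
    with open(path, "rb") as f:
        data = f.read(4096)  # the headers fit comfortably in the first few KB
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ header)")
    # Offset 0x3C of the DOS header holds the offset of the PE signature.
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    # The COFF header follows the signature: Machine (2 bytes),
    # NumberOfSections (2 bytes), then TimeDateStamp (4 bytes),
    # stored as seconds since the Unix epoch.
    stamp = struct.unpack_from("<I", data, pe_offset + 8)[0]
    return datetime.fromtimestamp(stamp, tz=timezone.utc)
```

Running this across a corpus of related samples gives you the set of compile times to feed into the same time-zone analysis used for post-compromise activity.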

Recall that malware compile times are useful only if you believe that the malware is unique. If the attacker purchased malware or acquired it from somewhere publicly available, these compile times lose significance.

Make sure to collect the following data points:

  • The first and last times the attacker was active in your environment
  • The date and time at which the attacker used a remote shell to access your network
  • Login/logoff times and dates (assuming the attacker accessed your network using a compromised account)

Next, you’ll need to plot your data on a graph. It’s essential to be thorough here; include the times of activity broken out by the hour, day, week, and, if you have enough data, month. When assessing the attacker’s activity timeline, overlay your graph across various time zones. Start at UTC 0 and walk your data forward hour by hour (UTC +1, +2, +3, and so on) until you have a window of consistent activity that fits within an eight- or nine-hour block of time. Again, this is useful only when you have a large pool of data from the same attacker over time. This may sound like a crude way to conduct time analysis, but it genuinely is a common practice that security vendors use.
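The hour-by-hour walk described above can be sketched as a small Python routine: build a 24-bin histogram of event hours (in UTC), slide a nine-hour window across it to find the busiest block, then derive the implied UTC offset from an assumed local workday start. The workday start hour is an assumption you should vary:

```python
from collections import Counter

def best_workday_window(event_hours_utc, window=9):
    """Find the contiguous block of `window` hours (wrapping past midnight)
    containing the largest share of events, given event hours in UTC.
    Returns (start_hour_utc, fraction_of_events_covered)."""
    hist = Counter(h % 24 for h in event_hours_utc)
    best_start, best_count = 0, -1
    for start in range(24):
        count = sum(hist[(start + i) % 24] for i in range(window))
        if count > best_count:
            best_start, best_count = start, count
    return best_start, best_count / max(len(event_hours_utc), 1)

def candidate_utc_offset(window_start_utc, local_workday_start=8):
    """If the busy window starts at `window_start_utc` and we assume the
    attacker's workday starts at `local_workday_start` local time, return
    the implied UTC offset, normalized to the range -12..+12."""
    offset = (local_workday_start - window_start_utc) % 24
    return offset - 24 if offset > 12 else offset
```

For example, events clustering from 22:00 to 06:00 UTC, under the assumption of a 6 AM local start, imply an offset of UTC+8. This is only as strong as its inputs: you need a large pool of events from the same attacker, and you should test several plausible workday start hours.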

For example, PwC, a professional services firm with a cybersecurity practice, published a blog post in 2017 demonstrating the use of time-zone analysis.12 The post, by Gabriel Currie, used data from a nation-state attacker known as APT10. PwC plotted the data on a graph and then moved through each time zone. As you can see in Figure 5-5, the activity does not fit the typical work hours expected of a government employee in UTC 0.

Graph with two axes, Date and Time of Day (UTC). Shows clusters of activity between 02:00 UTC and 08:00 UTC

Figure 5-5: Time-zone analysis of attacker events overlaid with UTC 0 time zone (Source: PricewaterhouseCoopers LLC and Gabriel Currie)13

PwC compared the data to each time zone until it identified a pattern. As shown in Figure 5-6, UTC+8 fits nicely with a typical workday schedule, showing activity primarily from 0600 hours (6 AM) through 1700 hours (5 PM). Based on this assessment, PwC could hypothesize that the attackers' time zone was UTC+8.

Graph with two axes, Date and Time of Day (UTC+8). Shows clusters of activity between 06:00 and 18:00 UTC+8

Figure 5-6: Time-zone analysis of attacker events overlaid with UTC+8 time zone (Source: PricewaterhouseCoopers LLC and Gabriel Currie)14

An easy way to identify which countries fall under the UTC+8 time zone is to look at the time zones overlaid on a world map. As you can see in Figure 5-7, countries in the UTC+8 time zone include Russia and China. Based on this and other supporting evidence derived from the attacks, PwC attributed the activity to China.

World map with portions of central Russia, Mongolia, China, the Philippines, Indonesia, and western Australia highlighted.

Figure 5-7: UTC+8 time zone overlaid on a world map15

As demonstrated by the PwC example, to further support your analysis, you’ll next want to look at the days on which there is activity to try to estimate the attacker’s work schedule. For example, some countries, such as Iran, work Sunday through Thursday. Looking at weeks or even months of data can reveal patterns of activity. Also compare these dates with various holidays to narrow the search further. Many holidays are specific to a particular region of the world, so identifying any regular intervals where attackers pause their work can significantly narrow your search. Regardless, you can use all of this information to support your various attribution theories.
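The workweek comparison can be scored the same way as the time-zone walk: count activity by weekday (after shifting the timestamps into the candidate local time zone) and see which workweek pattern the active days fit best. A minimal sketch, with two illustrative patterns:

```python
from collections import Counter

# Python's weekday() numbering: Monday = 0 ... Sunday = 6.
WORKWEEKS = {
    "Mon-Fri": {0, 1, 2, 3, 4},   # most of the world
    "Sun-Thu": {6, 0, 1, 2, 3},   # e.g., Iran and several Middle Eastern countries
}

def likely_workweek(event_dates):
    """Score how well the attacker's active days fit each workweek pattern.
    `event_dates` is a list of date/datetime objects already shifted into
    the candidate local time zone."""
    days = Counter(d.weekday() for d in event_dates)
    total = sum(days.values()) or 1
    scores = {
        name: sum(days[d] for d in workdays) / total
        for name, workdays in WORKWEEKS.items()
    }
    return max(scores, key=scores.get), scores
```

You could extend the same idea by subtracting known regional holidays from the expected workdays before scoring, which is the holiday-gap technique described above.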

Data correlation tools such as Splunk, Kibana, and others can even help automate the process for you. If correlation tools aren’t available for time-zone analysis, you can graph the data through Microsoft Excel.

Attribution Mistakes

Certain pitfalls can cause you to incorrectly attribute attack activity. One of the most common is when an analyst bases their conclusions on assumptions instead of verifiable evidence, which is known as analytical bias. When in doubt, make a list of the supporting evidence you’ve identified in the investigation. Does the evidence provide you with information on the attacker’s language, or perhaps the activity’s timestamps and regional time-zone data? Is there malware or infrastructure unique to a specific attacker that complements any other evidence to support attribution?

These details may not reveal much on their own, but as you collect more information and supporting data, you can build out a bigger picture that leads to a stronger attribution assessment. The following sections describe areas of an attack that often lead to misattribution. Avoid these pitfalls when making attribution, as they are frequently misunderstood and misapplied.

Don’t Identify Attacker Infrastructure Based on DDNS

Attackers are constantly looking for ways to evade detection. One method that has become quite popular is the use of Dynamic DNS (DDNS) to host attack infrastructure. DDNS providers use their own domains to host their customers' infrastructure as a subdomain of their root domain. In other words, the attacker controls their specific subdomain but does not own or register the infrastructure itself. Instead, the infrastructure remains part of the DDNS provider's network, making it difficult to trace back to its source. For example, the legitimate Dynamic DNS provider Dyn uses the format yourname.dyndns.org. The root domain, dyndns.org, is owned and controlled by the provider; the subdomain, yourname, is the part the adversary controls.

Dynamic DNS is appealing to attackers because it provides them with an additional level of anonymity and makes attribution more difficult for defenders. In fact, new analysts often make the mistake of using DDNS infrastructure for attribution. This is problematic. For example, bad guy #1 could use the domain bad.dyndns.org for their command-and-control infrastructure, while bad guy #2 could use evil.dyndns.org. If an inexperienced analyst saw this without understanding how attackers use DDNS in attacks, they may think the attacks came from the same attacker due to the shared root domain name dyndns.org. Unfortunately, there is no clear way around the attribution difficulties that Dynamic DNS creates. Do take note of which specific groups use which providers and of any subdomain themes, but that's the extent to which you should use DDNS infrastructure as evidence to support attribution.
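To avoid grouping unrelated attackers by a shared DDNS root, you can reduce each observed domain to its attribution-relevant part before clustering. The sketch below assumes a hand-maintained provider list; the entries shown are examples only, and a real deployment would use a maintained public suffix or DDNS provider list:

```python
# Illustrative set of dynamic DNS provider root domains (examples only;
# maintain a real list in practice).
DDNS_ROOTS = {"dyndns.org", "no-ip.com", "duckdns.org"}

def attribution_safe_domain(domain):
    """Return the part of a domain that is safe to use as attribution
    evidence. For DDNS hosts, only the attacker-chosen subdomain is
    meaningful; the shared provider root is not."""
    parts = domain.lower().rstrip(".").split(".")
    for i in range(len(parts) - 1):
        root = ".".join(parts[i:])
        if root in DDNS_ROOTS:
            sub = ".".join(parts[:i])
            return ("ddns-subdomain", sub, root)
    return ("registered-domain", domain, None)
```

With this in place, bad.dyndns.org and evil.dyndns.org reduce to the distinct subdomain strings "bad" and "evil", and the shared dyndns.org root never enters your clustering logic.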

Don’t Assume Domains Hosted on the Same IP Address Belong to the Same Attacker

After eliminating evidence based on DDNS, the next thing an analyst should do is map out the domains and hosting IP addresses associated with the attack. To do this, look at malware activity and identify any command-and-control servers communicating with the victims. Often, though not always, these servers will be identified by domains as opposed to IP addresses. When you locate a domain name, look up the IP address associated with it at the time of the activity.

This step is important, as it allows you to identify any other domains hosted on the same IP address during the attack’s timeframe. In some cases, these other domains won’t be related, especially if the IP address is associated with a web server that hosts hundreds or even thousands of domains. In other cases, however, there may be only a few domains hosted on the IP address. In cases like these, it’s worth taking the time to research further.

It’s critical to determine whether the domains hosted on the same server are related before drawing any conclusions from the data. To do this, you’ll need to conduct further investigation. Even two bad guy domains sharing the same IP address does not provide a strong enough indicator for attribution. A much stronger link is when an IP address hosts both malicious domains simultaneously and the hosting IP isn’t a provider web server.
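The simultaneity check reduces to an interval-overlap test on passive DNS first-seen/last-seen dates. A minimal sketch:

```python
from datetime import date

def hosting_overlap(first_a, last_a, first_b, last_b):
    """Return True if two passive DNS hosting intervals on the same IP
    address overlap in time. Each argument is a first_seen or last_seen
    date for one of the domains."""
    return max(first_a, first_b) <= min(last_a, last_b)
```

For example, a domain hosted from March through May 2019 and one hosted from April through June 2018 (hypothetical dates) never overlap, so their shared IP address is weak evidence at best. Even a genuine overlap still needs the second condition from the text: the IP must not be a provider web server hosting hundreds of unrelated domains.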

To better illustrate this idea, let’s walk through a scenario where misattribution takes place. Let’s say that you’re investigating a targeted attack by an unknown adversary. The unknown attacker is sending spear-phishing emails with a malicious attachment to target individuals. When targets open the attachment, malware infects their computer, calling out to Bad-domain#1.

You want to map out the adversary's infrastructure, so you query passive DNS for Bad-domain#1. The results indicate that the domain was first seen hosted on an IP address on 2019-03-19. Next, you take the IP address you've identified and perform another passive DNS query. This time, you get two results. The records show that Bad-domain#1 and Bad-domain#2 were both hosted on this IP address, as shown in Figure 5-8.

Diagram showing two attackers, “Unknown Attacker” and “APT Group A,” using different host domains both hosted on the IP address at different dates

Figure 5-8: Misattribution example diagram

You conduct some research on Bad-domain#2 and find a report from a security vendor identifying APT group A as the creator and user of Bad-domain#2 in previous attacks. Once you see the domain on the same IP address, you decide that this must be the same group and attribute the activity from the unknown attacker to APT group A based on the shared infrastructure.

The problem with this scenario's attribution is that the analyst should have realized that the domains associated with the malicious activity were not hosted on the IP address at the same time. If you look closely at Figure 5-8, you'll notice they were actually hosted almost a year apart from one another. It's still possible that they're related, but it's more likely they're two separate attackers. For various reasons, attackers prefer some ISPs to others and use them more often in attacks. This is largely because not all ISPs cooperate with law enforcement, especially if they're located outside of the victim's country. These providers tend to attract adversaries because it's less likely that law enforcement will seize the provider's domain and infrastructure. In other instances, some infrastructure might be popular with attackers because it's vulnerable, making it easy for an adversary to compromise and use. Over time, other adversaries might use the same infrastructure simply because it's accessible.

Keep in mind that advanced nation-state attackers don’t use the same infrastructure often, so you should take care to validate any shared infrastructure. When you come across a situation like the one described here, search for additional evidence and treat the two as separate instances until proven otherwise.

Don’t Use Domains Registered by Brokers in Attribution

Domain brokers are organizations that buy and sell domains on behalf of someone else.16 Like many other services on the internet, not everyone uses them for legitimate purposes. Domains registered through brokers can cause confusion: because domain brokers are associated with many domains, an analyst who does not identify the registrant as a broker may attribute all the broker-associated domains to a single attacker. This would not only be incorrect, it would also cause analysts to incorrectly attribute future attacks if any of the broker's other domains were involved. Once a broker is associated with a domain, the registration information is no longer useful for attribution.

For example, several China-based espionage groups have used infrastructure registered with the email address [email protected]. This infrastructure hosts multiple domains associated with unique malware from several attackers. If you don't understand the concept of domain brokers, you might incorrectly attribute the activity to the same actor, since the domains all share the same registrant email. Yet, as shown in Figure 5-9, further analysis would show that the same address had registered more than 500 domains. It is doubtful that a nation-state would register this many domains under a single registrant address, but a domain broker would.

Often, you can use a simple search query to show the domains registered to an email address. Tools such as https://whoisology.com/ also exist to identify the number of domains registered. We’ll talk more about these resources in Chapter 7, but for now, understand that you need to rule out the use of a domain broker account before making attribution decisions.

Historically, registration information was one of the best ways to identify attacker infrastructure. Over the years, however, it has become far less useful due to changes in privacy laws and the rise of privacy protection services that mask registration details. When no privacy protection hides the registrant information, the first thing an analyst should do is determine whether a broker registered the domain. When in doubt, look up the registrant's email or physical address online and find the associated brokerage-serviced domains in the results. Usually, domains registered through a broker account won't have any privacy protection services: privacy is a lesser concern to brokers, so their contact information is usually visible to the general public. Brokers would also have to pay for privacy protection, and since they own or are associated with many domains, it would add considerable cost to their business.

Graph with two axes, “Domains Count” and dates ranging from 2013 to 2020. The number of domains declines from 550 in 2013 to 0 in 2020.

Figure 5-9: Domain registration data associated with [email protected]

Another clue that a registrant might be a domain broker lies in the number of domains registered. If the registration information is associated with many domains, it may belong to a broker. Most individuals registering domains for their own use will own fewer than 50 domains. If you see more than 50 domains, the account is likely associated with either a domain broker or a legitimate corporate entity that has registered the domains as infrastructure for business purposes. Thus, consider it a red flag if you cannot link a large number of domains to a corporate entity.
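The domain-count heuristic described above is easy to automate once you have registrant-to-domain counts from WHOIS or a service like Whoisology. A sketch, with the 50-domain threshold from the text (the email addresses in the test are hypothetical):

```python
def flag_possible_brokers(registrant_domain_counts, threshold=50):
    """Flag registrant email addresses associated with more domains than
    `threshold`. This is a heuristic hint, not proof: a flagged registrant
    may be a domain broker or a legitimate corporate entity, so follow up
    manually before discarding the registration data."""
    return {
        email
        for email, count in registrant_domain_counts.items()
        if count > threshold
    }
```

Anything this flags should be checked against known corporate registrants; per the text, a large number of domains that cannot be linked to a corporate entity is the real red flag.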

If you look up the registration address and are still unsure, you can also research the domain registrant’s physical address in the registration record. Legitimate domain brokers may use more than one email address to register domains, but the registrant’s physical address will likely be the same across all registration records. Also check whether a registrant’s address is associated with many domains. A search engine query is the fastest way to determine this.

Don’t Attribute Based on Publicly Available Hacktools

One of the most significant trends in recent targeted attacks is for the attacker to live off the land. Living off the land is when an attacker uses the tools already present in a victim’s environment to perform their attack. Part I of this book explored how software like PowerShell can help an attacker gain a further foothold in the victim’s infrastructure. Since the victim regularly sees activity from these tools, the attacker’s use of them often goes undetected. But in many cases, adversaries will still need to perform certain tasks themselves. Simply put, they can’t always do everything they need to do just with the tools and resources already present in the environment.

This doesn't necessarily mean the attacker has to put in the extra effort of creating their own hacktools; instead, targeted attackers will often rely on publicly available ones. Most of these tools have legitimate uses, such as penetration testing. And while they may draw more attention than tools already present in the target environment, they make attribution difficult: anyone can access publicly available tools, so you shouldn't use them for attribution. (Granted, you should still document them as part of the attacker's toolset. It may be useful knowledge for future attacks.)

Despite the prevalence of publicly available tools, you'll still come across custom-made malware, particularly in nation-state-driven attacks. For instance, China is known to share tools and malware across multiple threat groups. Even if a particular tool isn't very prevalent, its shared use makes it a less useful indicator for attribution. It is certainly valuable to note when a tool isn't prevalent, because that can be a unique indicator to consider during attribution. However, it is more important to keep an open mind: every custom tool is initially unique to a single group for some period of time, since hackers have to create a tool before sharing it. If you attribute a particular piece of malware to a specific group and it later shows up in another group's campaign, you will have made an incorrect attribution.

Whenever you’re conducting attribution, keep in mind that you should always look at the larger picture. If the attack uses malware you believe is unique to a specific threat group but the other TTPs and/or targeting are different, this may indicate the activity is not from the same group.

Attribution Tips

It is impossible to think of every scenario, but the following are a few helpful tips to keep your attribution honest:

  • Attribution does not always point to a specific person or group. More often, the best you can do is attribute an attacker to a particular region or country. You may think to yourself, "This is espionage, and it's coming from country X," and so attribute it to the government of country X. Before saying or writing this, however, ask yourself, "Do I have evidence to support the claim against the government I have attributed this attack to?" There is nothing wrong with offering an attribution hypothesis, as long as you qualify it as a hypothesis and make clear it is not your attribution assessment. A hypothesis can be proven or disproven, while an assessment uses hard evidence to make a determination.
  • Solid attribution will always have supporting evidence. If you can’t back it up with data or evidence, then don’t write or state anything officially as of yet. The fact is, attribution without evidence is nothing more than your opinion. The only attribution worse than relying solely on your own opinion is when you rely on someone else’s opinion. Always require attribution theories to have distinct and clear evidence.
  • While sometimes difficult, never go into an investigation thinking you know who is behind it. Creating attribution theories to prove or disprove should be part of your investigation process. However, when making an attribution assessment, keeping an open mind is just as important as following the evidence.
  • If an attribution doesn’t make sense, then question it. Never take someone’s word on attribution. If they can’t back it up, then it’s not worth considering. Hold your peers accountable for doing attribution correctly.
  • Everyone makes mistakes. If you make a mistake in attribution, don’t be shy about it. Make the correction as quickly as possible to alleviate any confusion or additional work for others trying to determine how you attributed the activity to the group.
  • Always follow the activity and identify the behaviors of your attacker. Attackers are human, and they will have tools and tactics they favor and frequently use. They also likely have unique behaviors or methods that they use and reuse from one attack to another.
  • The most important tip is “When in doubt, split it out.” When you are unsure of attribution, don’t make it. Split out or keep the activity separate and track it as an isolated, unattributed attacker. Over time you will continue to grow your data set on the attacker and eventually find evidence to associate or disassociate attribution to another known threat group or create a new one. It’s always easier to merge two groups at a later point than it is to break out a single group into two separate groups.

Building Threat Profiles

Once you’ve attributed an incident to an attacker, you should profile the attacker. A threat profile is a document that provides information about a certain attacker based on their previous activities and behaviors. These profiles should be no longer than a few pages in length; they need to be quick to read and efficient to use. You can think of them as digital fingerprints that point to a particular adversary. Threat profiles help identify an attacker in future incidents and tell defenders how to best defend against their attacks. Using historical information, such as the TTPs associated with a specific adversary, future analysts can even predict attacker behaviors.

Consider the following situation as an example of why profiling is valuable. You are a defender working in a security operations center. While reviewing logs and alerts generated by automated defenses, a signature alerts you to traffic originating from your network and beaconing to a suspicious domain. The signature identifies unique patterns in the uniform resource identifier (URI) associated with malware from a known nation-state attacker. You recognize that the attacker may have already gained access to your network; after all, the malware beacon activity is now calling out to external infrastructure.

Suppose you and your organization do not conduct threat profiling. Now your only course of action is to find and mitigate the malware from which the beacon originated. But remember, persistent attackers won’t go away and stop the attack because you block one of their exploitation attempts. If the attacker is present in the environment, they’re likely working to escalate their privileges and move further into the network. They’re also likely establishing persistence to ensure they do not lose access upon discovery. Without knowing what to look for or where to look for it, tracking the attacker and defending against the threat will be far more challenging, as you’re stuck in a reactive state of defense.

Now let’s imagine you have a detailed threat profile. Great: you can proactively hunt the attacker. You look at the attacker’s tool preference and post-compromise actions. The profile tells you the attacker likes to use Cobalt Strike to increase their foothold and the hacktool Mimikatz to extract credentials from the environment. Additionally, the attacker uses a custom-developed proxy tool to facilitate anonymous communications with their infrastructure. In previous campaigns, they’ve shown an interest in obtaining access to domain controllers with the end game of stealing technology and engineering data. With this information, you can proactively hunt for the attacker on your network. You know what tools and malware to look for and understand where the attacker might be going.

Before creating a threat profile, you need to identify as much information as you can about your adversary. Appendix A provides a list of questions that will help identify the data you should include. Use these questions as a guide when building a profile; combined with good attribution and threat categorization, they ensure that all of your profiles cover the same content and include a consistent level of detail.

Next, determine the threat profile’s structure and content. This is important, as the structure you choose needs to apply across all profiles so that they have a consistent level of detail. You may not always have enough information to create a complete profile. That is okay. As you learn more over time, you can add to what you have. Attackers will change tactics as well as the malware and tools they use. If threat profiles are not up-to-date, they will not be effective. Appendix B provides a template for creating a threat profile.


Attribution is one of the most significant and challenging aspects of the analysis process. When done wrong, it can cause an organization to incorrectly allocate its defensive resources. If this happens, the probability of the attacker’s success increases, as the defender’s time and energy are being used inefficiently.

Correctly attributing an attack to an attacker begins with understanding the attacker’s motivation. You can use the methods discussed in this chapter to make an attribution assessment and apply confidence levels to qualify your evaluation. By capturing attacker TTPs, time-zone information, and other evidence, you can map out the adversary’s behavior and then create a threat profile that defenders can use to become familiar with the adversary.

Using your attribution assessment and threat profile, key stakeholders and decision makers can better understand the risk their organization faces and more effectively dedicate the necessary resources to mitigating the threat. Remember that incident data is invaluable when collected and appropriately analyzed. Use it to your advantage by turning the tables on the adversary trying to breach your organization.
