© Jacob G. Oakley 2019
J. G. Oakley, Waging Cyber War, https://doi.org/10.1007/978-1-4842-4950-5_9

9. Self-Attribution

Jacob G. Oakley
Owens Cross Roads, AL, USA

Earlier we covered enemy attribution and the process of attribution by which indicators of compromise eventually lead to identification of an actor and its potential motivation so that appropriate responses can be directed at strategic targets. Conversely, self-attribution is something that is typically avoided, especially when it is unintentional. Self-attribution happens when any portion of the attribution process yields an indication of perpetrated cyber activity. When a victim attempts to complete attribution of actors conducting cyber warfighting activity within its networks, the focus is on fully attributing that enemy such that responses can be responsible and appropriate. Where self-attribution is concerned, each phase of the attribution process can have huge impacts on the ability of the perpetrating party to continue to carry out warfighting activity in the cyber domain.

It is also important to recall that in the case of Title 10 warfighting actions specifically, there is the expectation of eventual acknowledgment and culpability. This is true for most Title 10–type actions; however, there is also the concept of covert action with its own special rules. In covert action the perpetrating organization is never going to acknowledge its role, even in the face of seemingly factual evidence. Similarly, for intelligence gathering activities there is no expectation or requirement that the activity ever be acknowledged. Even in the case of Title 10 attack actions, acknowledgment of such activity has far-reaching ripple effects with impact on seemingly unrelated efforts. As such, intentional acknowledgment which attributes cyber-attack efforts must be carefully considered and planned. Further, in most cases, unintentional attribution of any fidelity is something to be avoided.

Careful analysis should go into deciding when self-attribution should occur, just as the impacts and issues of unintentional self-attribution must be known and appropriate responses to it prepared. The first consideration is whether self-attribution is ever acceptable. If the answer is never, then every precaution must be taken to avoid it, and implications and response actions must be planned accordingly in case self-attribution does occur anyway. If the answer is that self-attribution at some point becomes acceptable, then the determination of when and how it is most responsibly and beneficially done must be made prior to intentionally self-attributing. The decision to self-attribute cyber domain activity must weigh operational concerns, political ramifications, strategic impacts, and moral dilemmas.

Unintentional Self-Attribution

We will first take a look at those cyber domain activities which at no point support intentional self-attribution. Intelligence gathering, battlefield preparation, and covert actions never reach a point, during or after an operation, where self-attribution by the perpetrating state is beneficial. That is not to say that self-attribution, whenever it occurs, always has an extremely negative effect on the operation; timing matters. As an example, imagine intelligence collection activity that took place on enemy cyber systems over 10 years ago, where several of the tools did not uninstall correctly when the mission was over. If the perpetrating state were attributed this long after operations had ceased, it would not affect the operation itself nor likely prove costly at all to the perpetrator. At worst it would probably lead to some awkward political issues if the two states had since mended relations.

In the case of battlefield preparation, so long as attribution does not happen until the prepared battlefield has been used by an attack effect, the impact of self-attribution would be low. It is when self-attribution of battlefield preparation activities hinders the performance of the follow-on attack effect that self-attribution is dangerous. With regard to covert action, however, there is essentially never a point where self-attribution of such activity is without heavy consequence. The nature of covert actions and their inherent need to avoid being tied to the perpetrating state leads to a continuous effort to avoid any attribution.

Examples of Self-Attribution

We will walk through some examples of self-attribution for both intelligence gathering and battlefield preparation cyber activities. Attribution at each phase of the attribution process has varying impacts on these cyber activities, and we will explore how self-attribution occurs at each point and the issues it creates for the related cyber operations. Covert action will not be covered in these examples, as it is simply avoided at all costs and the impact of attribution at any phase of the attribution process is unacceptable. I will also not cover the motivation phase of the attribution process, where the motivation behind the perpetrator's actions is understood. This is because when the end effect of the cyber activity is not a cyber-attack that has already happened, attribution of motivation is tantamount to guesswork and does not support doctrinal responses.

Indicators of Compromise

This phase of the attribution process relates to the discovery of a clue or clues to potential unauthorized behavior which is not yet tied to another indicator, or at least not tied to enough others to indicate the presence of an actor.

Intelligence Gathering Activity

In the Cold War, and probably in other conflicts, practitioners of espionage utilized dead drops and markers to pass along information without ever meeting. For example, maybe there was a bench at a particular park that, when it had a chalk mark on the side, meant information was waiting in a predetermined spot to be picked up. This is a way intelligence was moved from one individual to another in hopes of avoiding detection. This activity might be considered to have disclosed an indicator of compromise if the mark was not wiped off by the party that picked up the intelligence and was later noticed by someone unrelated to the intelligence gathering and passing activity. By itself, this mark on the bench does not represent the threat of an actual actor in the country gathering and passing intelligence, but it is certainly an indicator of compromise that can eventually lead to that picture of compromise being painted and attribution completed.

Cyber domain intelligence-gathering activities have the same necessities as physical activity in that the intelligence has to make it back to those who act upon it. This means getting intelligence collected on cyber systems out of the network it was found on and back to systems controlled by the perpetrating party for processing and analysis. Taking data out of networks like this is known as exfiltration, and if it is not conducted carefully, there is a chance that the network traffic related to the exfiltration of data and intelligence might stand out against normal network activity. The network administrators of the enemy network might see some of the perpetrator's exfiltration traffic, and it may simply appear that there is a slightly heavier than normal flow from several machines in the network to web sites on the internet. Since this could be due to user activity or malicious activity, it does not on its face betray a cyber compromise, but it is a potential indicator.

The biggest repercussion of self-attributing even a single indicator of compromise is that it has the potential to tip off the victim to the activity. Even if a search for the compromise doesn't begin off a single self-attributed indicator, it does frame future observations by the enemy, which may lead to quicker attribution later. For example, the chalk mark on the bench by itself doesn't raise much alarm; however, if the same type of mark started appearing at regular intervals or appeared in almost the same manner on benches outside several government buildings, it becomes much more concerning. There were no additional types of indicators that led to this deduction, simply the continued observance of the same type of indicator. The danger to operations when even a single indicator is discovered in the cyber domain is similar. Perhaps the exfiltration traffic was not very concerning at first, as it was close to the same volume as normal users generated and seemed to go to a normal web site. If the victim were to search for that same type of traffic across a wide span of time or across multiple hosts and discovered that it happened at regular intervals or only across certain machine types, there is now an elevated threat the victim will perceive and potentially act on. In this scenario too, there was no second type of indicator, only further observation of the initial indicator based on the fact that it was noticed in the first place.
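The deduction described above, that otherwise unremarkable traffic becomes alarming once it is seen recurring at regular intervals, is exactly what automated network defense tooling looks for. A minimal sketch of interval-based beacon detection, using invented flow timestamps and an arbitrary regularity threshold, might look like this:

```python
from statistics import mean, pstdev

def looks_periodic(timestamps, cv_threshold=0.1):
    """Flag a series of outbound-flow timestamps as suspiciously regular.

    A low coefficient of variation in the gaps between flows suggests
    automated beaconing or scheduled exfiltration rather than human browsing.
    """
    if len(timestamps) < 4:
        return False  # too few observations to judge regularity
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(deltas)
    if avg == 0:
        return False
    return pstdev(deltas) / avg < cv_threshold

# Hypothetical flow times in seconds: an implant checking in every ~600s
implant = [0, 600, 1201, 1799, 2400, 3002]
# A human browsing: irregular gaps between connections
human = [0, 45, 400, 410, 1800, 1830]

print(looks_periodic(implant))  # True: near-constant interval
print(looks_periodic(human))    # False: high variance in intervals
```

This is only a toy illustration of the defender's logic; real tooling would also account for jitter deliberately added by the perpetrator, which is one way of "acting within the noise."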

The only real mitigation that can be offered for avoiding self-attribution at the indicator of compromise phase is to not be noticed. As we have just shown, even a seemingly uninteresting indicator by itself can lead to wider attribution of cyber intelligence gathering operations. There are two main ways in which cyber operations tradecraft aids in avoiding detection. First, the perpetrator can simply be more careful and stealthier, placing a priority on non-attribution nearly as high as on accomplishing the mission. Embracing this type of tradecraft may avoid detection but may also lead to more mission failures due to time constraints. There is also the concept of acting within the noise, which, when done correctly, can be a more efficient way of avoiding self-attribution. If artifacts and clues of the perpetrating actor are indistinguishable from normal behavior, they don't indicate anything to the victim.

In the example where the bench was marked to indicate intelligence was placed at a predetermined location for pickup, the actors could have behaved differently and potentially avoided creating indicators of compromise. After all, writing on the sides of benches at a park may be discreet, but it is certainly not normal behavior for park-goers. If instead the individual dropping off the intelligence fed ducks bread, left the bag of bread on the ground under the bench, and walked away with a certain number of slices in the bag, the person picking up the intelligence could simply pick up the bag and throw it out as if they were a concerned citizen and then go pick up the intelligence. Taking it a step further, the dropping individual could even use the number of slices left in the bag to communicate with the picking-up party. Maybe two slices meant "do not get the intelligence, we are being watched," one slice meant the intelligence was placed, and three slices meant break contact permanently.

In the cyber intelligence gathering activity, the slightly abnormal web site visitation traffic cued the organization as an indicator. What if, instead, the exfiltrating party simply used compromised machines to message a Facebook or LinkedIn account and offloaded data and intelligence that way? Now the administrators just see a user doing excessive browsing on social media sites (which the actual user of the box probably also does). Even if the administrators took corrective measures and contacted the user of the machine to tell them to rein in their social media behavior at work, it is unlikely to make anyone suspicious and thus unlikely to prompt further investigation.

Battlefield Preparation

On medieval battlefields, as well as in other times and places, markers indicating range measurements have been used to help dial in fire by archers and siege engines such as catapults. To prepare the battlefield, scouts might stack stones at observable positions at known intervals to help the friendly forces range in their attacks. This type of activity certainly falls within the definition of battlefield preparation and as such is a Title 10–type activity and not a Title 50 one; it does not afford any collection of intelligence and has the sole purpose of benefiting attack activities once they begin. Here the indicator which may lead to self-attribution is the stack of stones. If the enemy forces did not know there was an encampment of soldiers across the battlefield, but their scouts found the stacked stones, it might lead them to further investigation. Finding a single stack of stones may indicate the presence of humans in the area but not indicate that the scout is actually on a battlefield prepared by another force for attack. Similar to the intelligence gathering example, if other stacks of stones were seen by the scout while patrolling the area, it would potentially lead to the deduction that something else was going on. No new indicators were present, but finding the one stack and then identifying it as the same as others in other locations at seemingly regular intervals might allow the enemy to believe there was something malicious going on.

There are actions an attacker within the cyber domain can also take to prepare the battlefield for eventual attack activities. Altering firewall rules slightly so that an attack effect, when executed on a system being used as an attack position, is more effective would fall within this category. A firewall change that was innocuous enough by itself may seem more malicious if it was determined to also have happened across other systems, all of which afford access to sensitive areas of the organization. Here the single rule change which didn't point to much of anything by itself may be interpreted as preparation for something more nefarious if the same change was discovered across the network.

Discovering indicators of compromise related to battlefield preparations has the damaging potential to take away the element of surprise. Even though sole indicators may not highlight the actions of an individual actor, they may be enough to tip off enemy forces that some form of battlefield preparation has been conducted, even if it is not clear that the preparation was specific to them as the enemy. In the medieval range stones example, the enemy may not think there is an opposing force around or that they are in fact on part of a prepared battlefield, but the stacked stones do indicate at least a man-made item. If the enemy force was attempting to avoid detection itself, it may now take greater care to do so or even change its course and route. Any of these things can make a follow-on attack less successful, simply because an indicator was discovered, even though it never pointed to a malicious actor.

The changed firewall rules might also be identified as simply a widespread error within the network and be fixed by the systems administrator because they are viewed as unnecessary. Again, the simple discovery of the indicator can lead to behavior by the target which hampers the attack capability. The enemy administrators may think there is nothing malicious, or even an actor, related to the firewall rule change, but they still acted upon the discovered indicator of compromise. This self-attribution is completely unspecific to the attacking party, and yet it affects the ability to conduct warfighting activity in the cyber domain just as it can in the physical.

The stacked stones used for range finding stood out because, once again, they were abnormal to the surrounding area. If the forces had instead picked natural landmarks, or perhaps made less obviously man-made markers for range finding, the enemy scouts may not have discovered them, and the battlefield would remain prepared, the element of surprise maintained, and the enemy actions unaltered. In the cyber domain example, if instead of adding a new firewall rule to the list of rules the preparation was to expand an existing firewall rule to allow for traffic related to the attack, it may have gone unnoticed. Once again, staying within the noise is a great way for the activity, cyber or otherwise, to remain undetected by intended victims and not impact the ability for attack effects to be deployed.
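The difference between the two preparation approaches can be illustrated with a toy rule audit. A defender diffing the rule list against an approved baseline catches a brand-new entry immediately, while a widened port range on an existing rule survives a naive name-based comparison. The rule representation and names here are invented purely for illustration:

```python
# Hypothetical approved firewall baseline (rule format is invented)
baseline = [
    {"name": "allow-web", "ports": range(80, 81), "action": "accept"},
    {"name": "allow-dns", "ports": range(53, 54), "action": "accept"},
]

def naive_audit(current, approved_rules):
    """Flag rules whose names do not appear in the approved baseline."""
    approved = {r["name"] for r in approved_rules}
    return [r["name"] for r in current if r["name"] not in approved]

# Noisy preparation: a brand-new rule is an instant indicator of compromise.
noisy = baseline + [
    {"name": "allow-c2", "ports": range(4444, 4445), "action": "accept"}
]
# Quiet preparation: the existing web rule is widened to also pass attack traffic.
quiet = [
    {"name": "allow-web", "ports": range(80, 4445), "action": "accept"},
    baseline[1],
]

print(naive_audit(noisy, baseline))  # ['allow-c2']  -> discovered
print(naive_audit(quiet, baseline))  # []            -> within the noise
```

The sketch also shows the defender's counter: auditing the content of each rule, not just its presence, would catch the widened port range as well, which is why staying within the noise is a moving target.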

Actor Association

Actor association is when multiple indicators of compromise are associated together as representing the same actor. As opposed to just an indicator of compromise, self-attribution at this level means the enemy knows that there is an actual unauthorized presence conducting activity.

Intelligence Gathering

Carrying on the initial example of the chalk marks on benches used to indicate and help pass intelligence between individuals, further indicators of compromise can be associated together to indicate the presence of unauthorized actors and activities. The chalk mark itself was innocuous, if odd, and maybe did not set alarm bells ringing immediately. But what if on the ground behind the bench with the chalk mark there was a divot, as if a cone-shaped stake had been there? And what if, in the trash can down the path from the bench, a cone-shaped stake with a cavity big enough for a roll of film was discovered? Any of these three indicators by itself is not likely to be associated with an actor attempting to move intelligence, but together they represent continued self-attribution.

If we look to exemplify the same scenario in the cyber domain, the initial indicator of odd exfiltration traffic must be correlated with other individually benign indicators. In addition to the traffic, let's say those same machines started to experience a slowdown in performance, as if they were working harder than they should be. Also, let's say that abnormal remote user credentials were found on the same and other machines in the network. Alone, a slowdown in performance could be attributed to anything, with malicious activity probably not even near the top of potential responsible candidates. Abnormal user credentials could be explained away by people using other people's machines in the company or even administrative activity. Together with the abnormal exfiltration traffic, though, these indicators associate together to represent a definite unauthorized actor within the network. This level of self-attribution by the perpetrating actor can be much more costly than any of the indicators by themselves.
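Mechanically, this kind of association is a simple join across indicator types on common hosts. A toy correlation, using hypothetical host names and indicator labels, shows how individually benign observations cluster into an actor profile once several distinct types land on the same machines:

```python
from collections import defaultdict

# Hypothetical observations: (host, indicator type)
observations = [
    ("ws-12", "odd-exfil-traffic"),
    ("ws-12", "performance-slowdown"),
    ("ws-12", "abnormal-credentials"),
    ("ws-07", "odd-exfil-traffic"),
    ("ws-07", "abnormal-credentials"),
    ("ws-03", "performance-slowdown"),  # alone, explainable as anything
]

def associate(obs, min_types=2):
    """Hosts showing several distinct indicator types likely share one actor."""
    by_host = defaultdict(set)
    for host, indicator in obs:
        by_host[host].add(indicator)
    return {h: sorted(inds) for h, inds in by_host.items()
            if len(inds) >= min_types}

actor_footprint = associate(observations)
print(sorted(actor_footprint))  # ['ws-07', 'ws-12']: the correlated hosts
```

Note that ws-03, with only one indicator type, stays below the association threshold, which mirrors the chapter's point: a lone indicator is explainable, while overlapping indicators establish a presence.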

The impact of self-attribution at this level can be much more detrimental to the end goal of intelligence gathering activities. When it is only an indicator that is attributed, reactions by the target may affect the operational end goal only as a byproduct. In the case of self-attribution establishing that an actor is present, the reaction by the victim is going to be a specific attempt to thwart that discovered actor. The association of indicators to an actor means a doctrinal change in the victim organization's ability to respond. When only indicators are self-attributed, the victim organization may use information about those indicators to improve actor-agnostic security measures for the organization. On the other hand, when the presence of an actor is established by association of indicators, the victim organization can implement actor-specific defensive measures which are likely to end the actor's ability to exfiltrate or move intelligence.

Mitigation of individual indicators of compromise revolves around avoiding detection via improved tradecraft. Being quieter or living within the noise are good ways for indicators to go unnoticed. When attempting to mitigate the risk to operations posed by indicators being associated with one actor, the perpetrator must do everything possible to appear unrelated to that identified actor. This means changing tactics, techniques, and procedures as well as the resources leveraged to collect intelligence. If multiple indicators have been discovered and associated as a specific actor, the perpetrating party must avoid further association with that established actor profile.

In the case of physical passing of information, this might mean changing from marking benches to marking tables or from using hollowed-out stakes to some other form of information storage. The key is change, and to change techniques and tactics as often as possible, so that even in the face of indicators being associated with one actor, the perpetrating actor prevents its actions and their resulting artifacts from pointing back to the same entity. In the case of the cyber example, the change needed to mitigate actor association can happen any number of ways. Perhaps instead of having abnormal credentials used to gather intelligence on the same machines used to send that intelligence out of the network, the actor takes a more disjointed approach, where intelligence is gathered and aggregated on one group of machines and then passed internally to separate machines responsible for getting that data out of the network. The intelligence gathering actor might also change the methods of exfiltration every so often to appear different and avoid being associated as one singular actor.

Battlefield Preparation

Piled stones placed as range markers may seem unthreatening, but what if other indicators were discovered that seemed related to the same activity of battlefield preparation and thus to a singular actor? Perhaps the scouts this time see the stacks of stones and then also observe trenches in the distance and some trip wires and traps across well-worn paths troops are likely to use. Now the scouts are likely to report that it appears a hostile force of some kind has been preparing the area to hinder troop movements and sway the battle.

In our cyber example of battlefield preparation, maybe the administrators, in addition to noticing the odd firewall rules, also find further artifacts that are easily associated with one another. If the administrator also found that several important security system logs were set to delete every few minutes, and that some unknown executable files had been found on some of the machines with odd firewall rules as well, it would certainly seem like evidence of a singular actor preparing the machines for something, or at least of efforts to hide certain activity.

The impact of self-attribution resulting in actor association for intelligence gathering activity may be the end of a stream of important information. When it happens with battlefield preparation activity, it can endanger the success of attack effect missions as well as the livelihood of those carrying out the attack. Upon seeing the range stones, trenches, and trip wires, the scout now reports that there is a specific hostile force somewhere preparing to do battle. In this case the enemy forces may avoid the prepared battlefield altogether, meaning that if the perpetrator still wanted to do battle, it would be in a less advantageous environment. Worse yet, the enemy scouts may perform counter-attack activities unbeknownst to the preparing forces. If the enemy scouts decided to, say, move the range stones to make attacks less effective, they might change the placement of trip wires and traps to instead affect the preparing party as well. They might also find a way to use the trenches dug in battlefield preparation to their own advantage, turning the work of the preparing party against them. In the case of the cyber example, the discovered binaries on systems may be copied and forensically analyzed. The best-case result of that might be the enemy knowing how and what was targeted by the follow-on attack activities related to the cyber domain battlefield preparation. In the worst-case scenario, those executable binaries reveal publicly unknown attacks and tools which the enemy can now turn against the preparing forces or other targets.

Mitigation of association for battlefield preparation activities within the cyber domain can benefit from routine changes to the methods and tools used, in an effort to avoid being associated with a singular actor profile. Additionally, such activity benefits from being carried out as close in time to the attack effect as possible, so that there is little time available for the battlefield preparations to be discovered and to impact the ability of the preparing forces to conduct their cyber-attacks. Where intelligence gathering activities may go on for years, cyber-attack effects are likely short in duration and likely, or even intended, to be discovered. As such, a more efficient way of avoiding association may be limiting the time that preparation activities and associated artifacts are exposed to potential detection, instead of putting high amounts of work into disassociating those activities and indicators.

Actor Identification

The identification of the actor is a determination using indicators of compromise and related information to not only associate activities with an actor but to identify who the actual individual or organization is behind those activities.

Intelligence Gathering

In our espionage intelligence gathering scenario, actor identification can be relatively straightforward. The individual dropping off the intelligence may be a local source and not indicative of who is collecting and processing the information; if the person who picks up the intelligence is identified, however, it might reveal what organization is behind the effort. At a minimum, witnessing who picks up the information identifies the singular agent behind the activity. Less obviously, if the intelligence were discovered and collected by the victim after it was dropped in the hidden stake but before the person who picks it up got there, identification of the perpetrating party may be possible based on how specific the intelligence contained in the take is. If the target of intelligence gathering is specific enough, it can indicate who the customer of the intelligence is likely to be.

In the cyber example of exfiltrated intelligence, it can be more difficult to determine the identity of the actor gathering the intelligence simply based on where the data goes. If the person who is picking up the espionage take at the park is identified, that person's identity can be easily verified. In the cyber example, even if the external systems where the data is being sent are identified as belonging to a given organization or state, that information does not necessarily indicate who the end customer is. As we covered when discussing attribution in general, in cyber it is very easy to obfuscate and alter; the destination addresses of the intelligence gathering exfiltration might belong to China one minute and to Ireland the next if it is a cloud-hosted environment. Therefore, identification of an actor in the cyber domain relies more heavily on self-attribution through the type of information being gathered and taken out of the network. Even unspecific intelligence can give a range of potential identities for the collecting actor, and the more specific the intelligence, the easier it is to tie to certain potential actor identities.

When self-attribution in intelligence gathering activities leads to the actual identification of the perpetrating party, there are certainly political and perceptual ramifications that may result. There are also operational problems that arise when an enemy is able to identify who is trying to collect information from them. The worst thing that can happen to intelligence gathering activities, in the cyber domain as in the physical domain, is discovery and identification by the enemy without the perpetrator's knowledge. If this happens, then the enemy victim organization can perform misinformation and counterintelligence efforts with extreme efficiency. Passing misinformation and incorrect intelligence can undermine state security at every level of the perpetrating nation. Troops can be sent to the wrong locations, strategic warfighting decisions are made based on enemy-provided facts, and false or inaccurate senses of security can be established by the intended victim.

The best way to avoid self-attribution resulting in identification of the perpetrating party is, first and foremost, to avoid leaving behind indicators of compromise or performing activity in ways which ease association to a singular actor. When that doesn't work and the enemy has determined that an actor is present, it may be appropriate to completely cease operations and/or attempt to remove artifacts and indicators. This tactic only works if the perpetrator knows the enemy has decided there is an actor present. In the physical domains, intelligence gathering activities certainly attempt to adhere to stealth to avoid being identified. In the cyber realm, identification without admission is extremely difficult to do with any reliable level of fidelity. Still, cyber intelligence gathering activities should do everything possible to avoid self-attribution resulting in even cursory attempts at identification by the enemy.

Battlefield Preparation

Self-attribution identifying those conducting intelligence gathering activities means the perpetrator is either denied further intelligence gathering activity or, worse yet, is potentially misled via counterintelligence activities of the enemy. When battlefield preparation reaches the identification phase of the attribution process, self-attribution may result in the perpetrating state failing to secure strategic goals or even becoming the target of hostile actions itself.

Now, upon realizing that there is a potentially hostile actor around the prepared battlefield, the enemy scouts conduct further reconnaissance. In doing so they observe siege weapon technology specific to only a few possible adversaries and even see several shield and banner emblems actually indicative of the specific state those troops belong to. The scouts are now able to return to their forces with an identification of the enemy who prepared the battlefield. In the cyber example for battlefield preparation, upon forensically reversing the executable binaries found on several machines, the enemy was able to determine their intent. The effect of the cyber tools discovered on enemy machines was to turn off sections of the security perimeter between the victim state and the perpetrating one. At this point the perpetrating actor has attributed itself as the neighboring state preparing for invasion through the security perimeter.

This level of self-attribution for battlefield preparations may have grave consequences. Upon identifying who was preparing the battlefield with trenches, range markers, and traps, the enemy forces could simply choose not to engage the perpetrator on that battlefield but instead begin marching toward another exposed portion of the perpetrating forces' territory. Not only does this rob the preparing forces of the planned strategic defeat of their enemy, it allows the enemy to seize the element of surprise and march its forces into an unsuspecting portion of the perpetrator's territory while the perpetrator's forces wait to do battle on a prepared battlefield against an enemy force which will now never show up.

In the cyber example, self-attribution of this fidelity is likely to implicate other warfighting activities. If the enemy now knew that its perimeter was going to be unsecure and that was the location the neighboring enemy forces would invade from, maybe they prepare bombing runs and evacuations from that area. The self-attributed and unwitting perpetrator of battlefield preparations now has its forces crossing at a known location where the invasion attempt will be disastrous. Even on a less severe scale, identification of who is behind battlefield preparation activity allows for preemptive strikes and targeting by the enemy against the perpetrating state.

To lessen the issues that come from self-attribution of who is conducting battlefield preparation, obfuscation and generalization are the likely methods. Limiting the exposure of battlefield preparations to discovery prior to attack is still best practice, but that is not always feasible. In such cases, battlefield preparation should be conducted in a way that identification of the perpetrating forces potentially misleads the enemy into thinking it faces a different enemy than the preparing force, or in a way so general that it could enable an attack from anyone, leaving the enemy forces to discern which among any of its potential enemies has prepared the cyber or physical domain for battle.

Intended Self-Attribution

In the case of Title 10–type warfighting activity, there is the possibility that self-attribution is an intended action. Remember that cyber-attack effects are executed under Title 10–like authorities by uniformed members of the military or their agents and under the command and oversight of the nation’s military apparatus. As such, there is an expectation that at some point the state’s role in the attack effect will be disclosed. This is important to stay within the concepts of just war and abide by international convention, but it is also part of the projection of power. There are two ways a state may seek to self-attribute its attack activities within the cyber domain: the activity may be made so obvious as to indisputably implicate the perpetrating party, or the nation that conducted the attack may openly announce its participation.

In either case, purposeful self-attribution must be done when the acknowledgment of activity does not impact the effectiveness of that action or of other, related actions yet to come. If announcing responsibility for a cyber-attack effect would implicate warfighting actions yet to be executed, that announcement must be delayed until all related actions have taken place. For example, if a state announced it was able to shut off power to a city so its forces could safely move through it, and such information would betray the path an invasion force was taking deeper into enemy territory, the announcement would need to be delayed until after the invasion to avoid impacting ongoing operations. Similarly, if the same cyber-attack which crippled power in that city was going to be used across enemy territory at different times during the conflict, purposely self-attributing the capability may allow the enemy to become more resilient to the attack effect or thwart it altogether in the other locations.

Projecting Force

Other weapons available to the warfighter allow the state using them to project force. Projecting force allows conflict and violence at times to be avoided because other states understand the capabilities involved and do not want to face them. It is easy to see how weapons such as stealth bombers and submarine-launched tactical nuclear weapons act as a deterrent. Enemies know that if they conduct open aggression, a response is going to come, and it may be one they cannot stop or eliminate. Projecting force through the cyber domain is a bit more difficult.

A bomb dropped from an undetected stealth plane is likely to be as effective on its target the hundredth time as it was the first time; therefore, displaying, using, or acknowledging the capability doesn’t necessarily impede its effectiveness as a deterrent to future aggressions. Once a cyber-attack is used and then responsibly and openly admitted as part of warfare by the perpetrating state, there may be an intimidation factor associated with that action, but the capability is then likely lost for future use. Once used, even if not announced, a cyber-attack effect is likely to be noticed; announcing its use makes it certain to be noticed. Absent an announcement, an enemy state might instead attribute a loss of power to power lines or power plants being physically destroyed or tampered with.

When the cyber-attack is announced, the enemy knows to immediately begin forensic analysis of the victim systems to understand what just happened and to prevent it in the future. Worse, if security products were installed on those systems, international security software vendors may now have their hands on the attack effect tools as well. Security vendors often share signatures, so announcing a cyber-attack effect such as the one that turned off power in a city means the attack tool, and potentially the access tool that enabled it, are now known to the entire world and automatically detected. If there is not enough disparity between the announced attack tool and other prepositioned attack effect tools in other locations, such signatures might catch those tools as well and ruin the perpetrating state’s operations worldwide.

Clearly self-attribution has far-reaching implications, but specific to the projection of power, it can be self-destructive. Current and future enemies may now respect that the perpetrating state is capable of creating and delivering such attacks, and that is in itself a projection of force. However, that same effect is likely never to be used again due to the dynamic nature of the cyber domain. The real strategic decision that must be made is then: does the projection of power gained by acknowledging the use of cyber-attack effects outweigh the other impacts such self-attribution may bring about?

Summary

In this chapter we covered the concept of self-attribution. In doing so we analyzed the already familiar attribution process from the perspective of the perpetrating state. Unintentional self-attribution of varying degrees and of varying activities, including intelligence gathering, battlefield preparation, and covert action, was discussed. The consequences of such self-attribution at different levels were also covered. Lastly, we detailed the concept of purposeful self-attribution, how it is part of cyber warfare, and how it can be used to project power.
