Modern technology and globalization have made it possible for a single attacker to wage war against a company and even a country—and win! Technological advances allow attackers to continuously develop and improve their tactics. The result is ever-changing threats, made all the more pernicious by the interconnectivity we've grown accustomed to.
Moreover, these technologies have given rise to extremely sophisticated and powerful criminal networks that are hard to identify and uncover even when they operate under our noses. To thwart such attacks and threats, governments would have to dedicate huge amounts of resources to security, and those resources aren't there. The gap is therefore bridged more and more by the private sector.
Criminal organizations come in many forms and can act in countless ways that can't always be accurately forecast. This is where learning to think like them comes into play. Looking at your organization through their mental filter can show you not only how you are vulnerable, but where. Indicators of attack (IOAs) focus on detecting the intent behind what an attacker is trying to accomplish, regardless of the malware, tools, or exploits they use. An Indicator of Compromise (IOC)-based detection approach cannot identify the rising threat of malware-free intrusions or zero-day exploits. This is where an IOA-based approach, pioneered by CrowdStrike, becomes useful (https://www.crowdstrike.com/cybersecurity-101/indicators-of-compromise/ioa-vs-ioc/).
Indicators of Attack are actions, or a series of actions, that an attacker must execute in order to succeed. A spear phish is a good example to illustrate the idea of an IOA.
A successful phishing email must persuade the target to follow a link or open a document that will, in turn, infect their machine; that is where initial compromise takes place. Attackers then often aim to establish persistence and make contact with a command-and-control site, awaiting further instructions.
IOAs are concerned with the execution of these steps, the intent of the adversary and the outcomes the attacker is pursuing. They are not focused on the specific tools used to accomplish the objectives.
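To make the distinction concrete, here is a minimal sketch in Python—purely hypothetical, with invented event fields, hash lists, and thresholds rather than any vendor's actual product or API. The IOC check only matches an artifact we've seen before; the IOA check matches the sequence of behaviors a spear phish must execute (a document spawning a shell, persistence being written, outbound beaconing), no matter which tool produced them.

```python
# Hypothetical sketch: IOC matching vs. IOA matching over a stream of endpoint events.
# Event fields, hashes, and thresholds are illustrative only.

KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # example IOC feed

def ioc_match(event):
    """IOC check: flags only artifacts we have seen before (a known file hash)."""
    return event.get("file_hash") in KNOWN_BAD_HASHES

def ioa_match(events):
    """IOA check: flags the behavioral sequence of a spear phish, tool-agnostic:
    an Office document spawns a shell, persistence is written, then outbound beaconing."""
    doc_spawned_shell = any(
        e["type"] == "process_start"
        and e.get("parent", "").endswith(("winword.exe", "excel.exe"))
        and e.get("image", "").endswith(("powershell.exe", "cmd.exe"))
        for e in events
    )
    persistence_set = any(
        e["type"] == "registry_write" and "\\CurrentVersion\\Run" in e.get("key", "")
        for e in events
    )
    beaconing = any(e["type"] == "network" and e.get("outbound_count", 0) > 20 for e in events)
    return doc_spawned_shell and persistence_set and beaconing

# Usage: a brand-new implant (unknown hash) slips past the IOC check
# but still trips the IOA check, because the behavior is the same.
events = [
    {"type": "process_start", "parent": "winword.exe", "image": "powershell.exe",
     "file_hash": "not-in-any-feed"},
    {"type": "registry_write", "key": r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run"},
    {"type": "network", "outbound_count": 48},
]
print(any(ioc_match(e) for e in events))  # False - no known artifact
print(ioa_match(events))                  # True  - intent/behavior detected
```

The new implant's unknown hash defeats the first check but not the second, because what is being detected is the intent, not the tool.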
My position is not that IOAs should be used in place of IOCs; I am of the opinion that both are valuable. However, IOAs are especially valuable when trying to determine why your business will be attacked instead of only how. No advance knowledge of the tools or malware (that is, Indicators of Compromise) is required, so many points of view can be offered and listened to.
My understanding of the evolution of cybersecurity is that for a long time, attackers went for data on networks and servers, so we protected them as best we could; then the data, and the attackers, went to the endpoints, so we protected those as best we could; then the data, and so the attackers, too, went to Software as a Service (SaaS), and we authenticated users, but protection was limited. Threaded throughout is the social engineering aspect of attacks, which has played a steady part in the history of cyberattacks, and only relatively recently have we really tried to build out our defenses there. There are two categories of security: technical and psychological. Because of this dichotomy, cybersecurity's primary concern with technical features often leaves us all at risk. It's precisely why the community needs more discussion and thought around AMs for defensive and offensive measures.
As was egregiously apparent with the recent Bitcoin-Twitter scam, a win for an attacker doesn't have to be brilliantly technical to have adverse effects for hundreds of millions of people: ubiquitous and mainstream technology is easily weaponized through AMs. The attack itself saw prominent Twitter users, with the blue verification checkmark next to their names, tweet “double your Bitcoin” offers, promising followers that any donation sent via the included links would be doubled and sent back. For example, former President Barack Obama's account tweeted: “I am giving back to my community due to Covid-19! All Bitcoin sent to my address below will be sent back doubled. If you send $1,000, I will send back $2,000!” The tweet has since been deleted. Elon Musk, Jeff Bezos, and Bill Gates were among many prominent US figures targeted by the scam.
In this case, Twitter employees were the targets and, as we know, if you aren't an attacker or thinking like one, it can be hard to stop the outcome.
The other thing needed to protect yourself is teaching your employees what to look out for—what an attack feels like and how they can defend themselves even when they don't know they're in a position or situation where they need to. It's simple, but the repercussions of failing are worrying: if your employees are unprepared to deal with current and growing threats, you do not have a shot at effective security. The threat landscape is always changing, advancing, and growing, and employees have to be prepared for this. It doesn't mean everyone has to be highly strung and forever on edge. But knowing what makes the company attractive, understanding how attackers operate, and giving your employees the power to treat security as something more than a concept is essential. Employees are typically the ones on the front lines when security incidents occur. However, many of them come into contact with their organization's cybersecurity policies only through reminders and restrictions. Those who don't know about the policies, who haven't been able to commit them to memory, or who can't recognize attacks and remedies by reading what to do, are caught off guard, ill-equipped, and vulnerable.
Eliminating this issue requires a commitment of the resources, personnel, and time to support an in-house or outside team that determines how vulnerable your organization is. This team will then be required to show you, without fancy frills, what your landscape looks like. This approach also requires corporate humility, which boils down to implementing changes based on results. This is part of a simple formula that will keep you safe as a company:
(a) Employ tactical and combative methods internally through the attacker mindset to identify security gaps + (b) be willing to change, employing corporate humility, to mitigate vulnerabilities and security gaps.
That's it. That explains how the most secure companies do the impossible and remain ahead of attackers. Innovative companies use this formula to change their position from defensive to offensive. Resilient companies use it to become stronger. But all companies require it. We are all at risk—the owners, employees, and service users—if this is not being done. Companies and governments alike must always be able to identify dangerous shortcomings and react to any glaring limitations quickly. If you can't, you aren't being proactive. You aren't invested in security—yours, your customers', or your employees'. The best way for us, as a society, to achieve higher levels of security is to share information: share it with the authorities, with the security community, and with each other—business-to-business communication on the attack types and trends you are seeing is essential if we are to advance our position in terms of security.
If you're hiring a red team, pentester, social engineering team, or AMs expert, and if scope and the rules of engagement are significantly restricted, you will not receive effective testing.
If you are part of an in-house red team and you're restricted, you will struggle to effect real change, but you can aim to do so in baby steps. You might consider documenting your thoughts on where the company's security faults and vulnerabilities lie and getting them to an executive for potential future use and leverage (and in case there is a breach, and all eyes turn to you…).
If you are in charge of a red team in-house and cannot see the full spectrum of benefit when employing them at their fullest potential, you might consider looking further into this.
If you are in charge of a red team for hire but cannot see the benefit of looking at the world both offensively and defensively, you probably don't have an effective red team.
In any instance of a red team, looking at attack trends will not suffice. They are good to know about and to test, but your job is to think like an attacker, looking at environments in isolation and working out how to best exploit them. Then you must look at those same environments and determine how best to protect them. There is no one-size-fits-all in security, and every business, organization, and institution is vulnerable to attack, admittedly to varying degrees. Red teaming seeks to uncover these vulnerabilities through a sharp AMs.
Survivorship bias occurs when you aren't working with all of the information needed to make a complete analysis: we tend to focus on the information we have and fail to consider the information we don't have. A story from World War II illustrates this. During WWII, a mathematician named Abraham Wald helped the US military determine where to add reinforcing armor on bomber planes. Reinforcing the whole plane would render it too heavy, so weight was added only where absolutely needed. Data was collected from returning planes based on where they had taken damage (from bullets, shrapnel, etc.), essentially mapping out where the damage tended to be. This is an example of full-blown survivorship bias, and Wald realized it: the data could only account for the planes that made it back, not for the planes that were shot down and never returned. The areas where a plane could get shot and still return did not need additional armor; the armor belonged where the returning planes showed little or no damage, because hits in those areas were the ones that kept a plane from coming home. This is essential to understand as a business and as an ethical attacker.
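A tiny numerical sketch, with damage counts I've invented purely for illustration, shows how the reasoning inverts once you remember the missing planes:

```python
# Hypothetical illustration of survivorship bias using invented damage counts.
# Only planes that returned can be inspected; planes lost over the target are invisible.

hits_on_returning_planes = {
    "fuselage": 112,   # many recorded hits: planes survived being hit here
    "wings": 95,
    "engines": 8,      # few recorded hits: planes hit here rarely made it back
    "cockpit": 5,
}

# Naive reading of the surviving data: reinforce where the most damage appears.
naive_choice = max(hits_on_returning_planes, key=hits_on_returning_planes.get)

# Wald's reading: the missing data (downed planes) means the sparsely hit
# sections are the ones where a hit was most likely fatal - armor those instead.
wald_choice = min(hits_on_returning_planes, key=hits_on_returning_planes.get)

print(f"Naive: armor the {naive_choice}")   # fuselage
print(f"Wald:  armor the {wald_choice}")    # cockpit (and engines)
```

The naive reading of the surviving data says to armor the most-hit sections; Wald's reading says the sparsely hit sections are where hits were fatal, and those are the ones to protect.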
As an ethical attacker employing AMs, you cannot over-appreciate successes and underappreciate failures. Success stories are easy to find, while failures are usually ignored or lost to time. You cannot look only at what made you successful as an attacker and fail to notice what aptly countered you. If you do, you will fail to grow, and you will fail to help your client see their whole organization. As an industry, we cannot endlessly turn our attention to the most successful ethical attackers. We must also be aware of why attacks fail so that we can analyze the situation and assess whether the defense is truly secure or whether the means of defeat lay elsewhere.
Businesses must also resist survivorship bias. If you survive an attack, it is not your triumphant defenses that need bolstering; it is those that failed. Less obviously, the culture of a business cannot shift to believing it survived an attack because it is simply superior to those that didn't. Those that didn't may offer you more insight than the other way around. In the simplest terms: if a business that outperformed the rest draws conclusions based only on its own attributes, without looking more broadly at the whole dataset, including businesses with similar characteristics that failed to perform as well, mistakes and vulnerabilities will follow.
Finally, whilst successful businesses can give advice on what to do, businesses that failed in terms of security can give advice on what not to do (which is just as valuable). This is also where I return to the criticality of sharing information between businesses and organizations: understanding where one business was successful or unsuccessful yields data and extrapolations that can help business as a whole.
Unfortunately, a cybersecurity policy does not equal cybersecurity. In May 2018, research firm Clutch found that almost half of employees don't pay much attention to their employers' cybersecurity policies (see https://clutch.co/it-services/resources/how-employees-engage-company-cybersecurity-policies). One of the biggest reasons internal cybersecurity practices are often ineffective is that they are overwhelming. If your policies are too complex, people will take shortcuts, functionally circumventing them completely. This is where companies fail people. It is also where regulations fail businesses and people. A policy should be written so that anyone reading it has a chance of understanding it.
Behavioral security tells us that defense begins in the brain. Let the policies reflect this. They should be comprehensible and reasonable, and they should not waver from their message: no matter what, adhere to the process.
Finally, if you are in charge of a red team, social engineering team, or pentest team, you cannot dictate to your team members what they should think. That is not your job, nor should you want it to be. I don't even think you should tell them what to do directionally in the planning phase—let the environment be open to all suggestions, and let the person who offered the idea talk it all the way through. If it falls dead in the room, great: move on to the next, but do not make that individual feel bad. That suggestion might spark another idea or help narrow down attack vectors. Have your team learn how to form their own brand of attacker mindset. Only when each person has a strong AMs in place can they learn how to defend properly, because they'll be able to assess a business and its defenses far more critically, describing blind spots previously unknowable or invisible upon first inspection.
If you are going to defend your company against an attack, you must first know who the enemy is: know what they want and what, within your environment, will make getting it easy (or difficult) for them.
Protection is no easy feat, with external attacks, insider threats, and two categories of employees aiding a security event (the neutral and the lucrative). A relentless and dangerous balance exists between offense and defense, deepened in its insidiousness when an attack is conducted stealthily. When the offense has the advantage, there will always be engagement. When it costs more to attack, or when the chances of an attack defeating the defenses are low, there will be less engagement and less success on the attacker's side. This is your ultimate aim. Show attackers that you are not “easy pickings”; use effective measures they can't plan for ahead of time. Be hard to defeat—use AMs to assess yourself. Defend your business one level higher than you think it needs to be defended.
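The economics can be put in back-of-the-envelope terms. The sketch below uses entirely hypothetical figures; the point is only that an attacker engages while the expected payoff of an attempt exceeds its cost, so raising the cost of each attempt or lowering its odds of success pushes the expected return negative and the attacker elsewhere.

```python
# Hypothetical attacker economics: engage only if the expected return is positive.
# All figures are invented for illustration.

def expected_return(p_success, payoff, cost_per_attempt):
    """Expected value of one attack attempt from the attacker's point of view."""
    return p_success * payoff - cost_per_attempt

# A soft target: cheap to attack, likely to succeed.
print(expected_return(p_success=0.30, payoff=50_000, cost_per_attempt=2_000))  # 13000.0 -> attack

# The same target after hardening: success is rarer and each attempt costs more.
print(expected_return(p_success=0.02, payoff=50_000, cost_per_attempt=5_000))  # -4000.0 -> move on
```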
Being antifragile is basically benefiting from volatility and shock. Being robust is not the same as being antifragile. Something that is robust will survive, but it will not benefit from harm. It will simply act as though there was no trauma at all. Being antifragile is being able to self-improve based on stressors and volatility.
In Antifragile: Things That Gain from Disorder (Random House Publishing Group, 2014) author Nassim Nicholas Taleb coins the word antifragile. He gives an example of its definition, stating that “logically, the exact opposite of a ‘fragile’ parcel would be a package on which one has written ‘please mishandle’ or ‘please handle carelessly.’ Its contents would not just be unbreakable but would benefit from shocks and a wide array of trauma” (p. 32).
The antifragility of something is determined by how fragile its parts are. Paradoxically, the more fragile the parts of a system are, the more antifragile that system can become; the parts that are fragile direct the antifragility of the future system. This is best thought of as trial and error. Taleb advocates for adding stress on purpose (in your life; in your organization)—not too much, as we've discussed before, because too much is detrimental. Exposure to a small dose of stress will, over time, make us and our companies immune to additional, larger quantities of stress.
An example of antifragility is the economy: its constituent parts, from a one-person business to the biggest bank on Earth (as of this writing, Industrial & Commercial Bank of China), are all fragile. But when one fails, the others learn from its mistakes, carry those findings well into the future, and become stronger. The economy is antifragile, whereas its constituent parts are all fragile.
In contrast, tranquility is not good for survival; shocks and the unforeseen come with valuable information. Making a system tranquil will not aid its survival, as it will lag behind and lose its potential for growth.
Bottom line: antifragility fuels progress and advances society. Failure of some things is okay so long as it is for the greater good and we learn from it, thus becoming antifragile.
This concept applies to your business as well; you do not want to mask, be blind to, or ignore the gaps in your security. You should want to be antifragile: add stress to your organization in a semi-controlled way, allowing for growth and gaining from disorder. Keeping this process under your control—and out of the control of a malicious attacker—means being able to identify what's vulnerable and what's sensitive and then safeguarding it with everything at your company's disposal: technology solutions as well as people and process solutions. After all, this is exactly what an ethical attacker sees and acts upon: who and what is vulnerable and what is sensitive. It is, of course, what a malicious attacker sees and acts upon, too.
The Internet is undoubtedly the largest public data network, enabling and facilitating both personal and business communications the world over. But although it can be used for good, it can be used for bad as well: the Internet provides many advantages but comes with many security threats, and having an Internet connection alters your security risk profile. For instance, an offshore platform doesn't have to be connected to its on-land counterparts, but it often is, to streamline some of the operations needed to run the platform. The platform's, as well as the company's, risk profile changes dramatically in light of this.
Your business can undergo the full spectrum of crises, from a data breach to an asset theft. On top of this, the threat landscape is evolving and new technologies are constantly being rolled out. Transformation is often disguised as evolution, like the “cloud.” Even with this, you must have the ability to rapidly respond and decisively resolve crises, providing the most effective deterrents and setting the stage for future operations where possible. Should deterrence fail, it is imperative that you be able to defeat attacks of any kind. Especially important is the ability to deter or defeat simultaneous or nearly simultaneous attacks, even if they are happening at a distance but occurring in overlapping time frames—which means the whole organization must be on the same page, treating security as an absolute. Training and being able to recognize events for what they are is critical.
The ability to rapidly defeat initial attack advances means you must be prepared to conduct several smaller-scale contingency operations so that you can stabilize a situation. All of this proves that simply having a policy isn't enough. Communication and careful training, companywide, are called for, and in light of escalating security breaches, there is a need for decisive, mitigating action that is swift and effectual.
You will have to recalibrate your security approach from one based mainly on technology to one that is proactive and includes processes and education. It takes laser focus, commitment, and sharp, modern leadership to do so in a way that sticks. This level of communication, and ultimately foresight, within your organization is the only way to change habits and culture.
Cybercrime is constantly evolving, and the growing number of threats that use social engineering techniques is a cause for concern for many businesses. All it takes is one user clicking a malicious link, and a firm's network can be brought to a grinding halt. Cyberthreats have increased in number, and both transaction and compromise times have decreased.
It should be noted that, although sophisticated attackers might know much of the information contained in this book, most attackers know it only in essence—adequate for their purposes, but not enough to effect real change in terms of security. Having an in-depth understanding of AMs will allow us, always, to be ahead of those less careful, less diligent attackers. This is a massive benefit to our clients, who depend on us to give them more than a step-by-step account of the actions we take to circumvent their defenses. To best protect them, we should be able to give our clients a comprehensive understanding of their whole landscape as we perceive it, not only a record of how we bypassed some of their defenses arbitrarily. As a business, you should expect this. For businesses reading, employing AMs affords you exactly this.
Security as a whole needs to be broken down into pieces recognizable against the cultural and technical backdrop you operate within. There is no one-size-fits-all for security. You must analyze each piece of your landscape and tooling, identify any faults, and perform regular maintenance. When it has all been sewn back together, the hope is that it equals more than the sum of its parts. You will experience unintended consequences of securing your business in this way. However, even if your changes do not seem good at first, it is critical to remember that antifragility comes through volatility: it's better that you set that volatility in motion than an adversary.
America, possibly tied with the UK and Iran, is a sophisticated cyber superpower. America is good at offense, but we are also the most targeted. We have many systems of interest to profit from, to steal from, and to spy on, so we are targeted frequently. We are one of the most vulnerable from this perspective, too, because we are one of the most connected—everything is connected, from our refrigerators to our water treatment facilities. Because of this, we face many challenges: one is to remain skeptical and honest in evaluating our utility as attackers—meaning we must not become complacent and self-assured that we are ahead of the real bad guys, or assume that, because we can identify what tactics and strategies they have applied in the past, we know what they will do in the future.
As companies engaging AMs experts, you face a similar problem. Businesses as a whole can struggle to identify their own shortcomings and to realistically understand how and why they will become targets. Corporate AMs means recognizing this and enlisting the help of experts. Unbiased red teams, pentesters, and social engineers proficient in AMs, with the ability to look at the organization as a whole, do make an irrefutable difference—specifically when they are correctly scoped, competently structured, and encouraged to carry out their objectives without improper influence or constraint. To reiterate: AMs is a way of thinking and acting that puts ethical attackers in the mind and shoes of the unethical. In doing so, we, as companies and industries, get to benefit from their work in the short term (an honest assessment of our threat landscape, an alternative analysis of our vulnerabilities and security gaps, and the chance to fix our shortcomings) and in the long term (through sharing information and being better able to recognize trends, similarities, and how threats evolve).
Your business is subject to the full spectrum of crises—a spectrum you as a business should understand but whose complexities may elude you. AMs requires a distinct way of thinking and operating—employing a self-aware, curious, creative, confident, agile mentality and maintaining the ability to communicate and explain your organization as seen through attackers' eyes. Their methods cannot become predictable or culturally ingrained, nor will they become complacent with your security and threat landscape. To help your business combat the full spectrum of threats, ethical attackers employing AMs will be disruptive, but this is the business's opportunity to become antifragile. Holistically, I believe this is our function, aligned with each business. An added benefit would be sharing information within the community and subcommunities to identify trends and patterns more quickly and effectively—speaking here for both the attacker and the company. Assuredly, all data is sensitive data. Through employing AMs, you will be able to assess more clearly and accurately what can be used against you and how.
Security is a tough task for many organizations. Most aren't in the business of security, which makes it hard to think through its lens. Organizations typically do not build security programs designed to be robust; they build them with defensive technology and test them offensively to the best of their ability. This approach often fails to consider the users of that technology and their understanding of security and policy. Additionally, not all attacks use technology. Social engineering is the practice of using influence, deception, and manipulation to breach security—it's human versus human. It is not countered by defenses that amount to firewalls, intrusion detection systems, patch management, and compliance checks. Attackers, both ethical and malicious, count on this blind spot and use it to their advantage.
As attackers, we are always acting as the adversary for the greater good of security. We think objectively about the environment we are aiming to secure, unencumbered by its cultural biases, internal assumptions, and the information that exists only in its literature—handbooks and mission statements—and not in its everyday workings. We know a skewed view of anything will gain us nothing.
Use AMs to your advantage as an organization. Use our curiosity and persistence to gain a new, informed perspective. Use our wily ways of processing information, even information that seems innocuous, to get smarter about how actual attackers size you up and plan their attacks. Employ our specific brand of mental agility to see how we can adapt to your cultural norms and move around undetected. Finally, use our self-awareness: we know what we can leverage, and through that, you can become a self-aware company that knows its current limits and its greatest strengths.
Finally, it has become increasingly clear to me that a behavioral and cultural revolution in the realm of security and policy is imminent and that, ultimately, business security will not be adequate unless the focus is centered on people.
(a) Employ tactical and combative methods internally through the attacker mindset to identify security gaps + (b) be willing to change, employing corporate humility, to mitigate vulnerabilities and security gaps.
If you aren't implementing AMs in some way to benefit your security posture, you aren't taking your own security, your employees' security, or your customers' security seriously.
You will not care about the money you should've spent on this type of security when you either cannot make money anymore or you have to pay a huge fine.