CHAPTER 19

SOCIAL ENGINEERING AND LOW-TECH ATTACKS

Karthik Raman, Susan Baumes, Kevin Beets, and Carl Ness

19.1 INTRODUCTION

19.2 BACKGROUND AND HISTORY

19.2.1 Frank Abagnale

19.2.2 Kevin Mitnick and the Media

19.2.3 Frequency of Use

19.2.4 Social Engineering as a Portion of an Attack

19.3 SOCIAL ENGINEERING METHODS

19.3.1 Impersonation

19.3.2 Seduction

19.3.3 Low-Tech Attacks

19.3.4 Network and Voice Methods

19.3.5 Reverse Social Engineering

19.4 PSYCHOLOGY AND SOCIAL PSYCHOLOGY OF SOCIAL ENGINEERING

19.4.1 Psychology

19.4.2 Social Psychology

19.4.3 Social Engineer Profile

19.5 DANGERS OF SOCIAL ENGINEERING AND ITS IMPACT ON BUSINESSES

19.5.1 Consequences

19.5.2 Case Study Examples from Business

19.5.3 Success Rate

19.5.4 Small Businesses versus Large Organizations

19.5.5 Trends

19.6 DETECTION

19.6.1 People

19.6.2 Audit Controls

19.6.3 Technology for Detection

19.7 RESPONSE

19.8 DEFENSE AND MITIGATION

19.8.1 Training and Awareness

19.8.2 Technology for Prevention

19.8.3 Physical Security

19.9 CONCLUSION

19.10 FURTHER READING

19.11 NOTES

19.1 INTRODUCTION.

According to Greek mythology, the Greeks defeated the Trojans in the Trojan War with the help of a wooden statue. After fighting a decade-long war in vain, the Greeks withdrew from their stronghold on the beach. Outside the gates of Troy, they left a giant wooden horse. The statue puzzled the Trojan soldiers, but it was nonetheless brought within the fortified walls of Troy. Inside the statue hid several Greek soldiers. When darkness fell, these soldiers emerged from the statue and opened the gates of Troy. The Greek army entered Troy and took the soldiers and citizens of Troy by surprise. After this attack, the Greeks won the war quickly, something they could not have done with standard warfare tactics.

The Trojan horse built by the Greeks was effective because it used deception to achieve the desired result: penetrating the enemy's established defenses. The Trojan horse accomplished what those in information security label social engineering.

Social engineering may be defined as obtaining information or resources from victims using coercion or deceit. During a social engineering attack, attackers do not scan networks, crack passwords using brute force, or exploit software vulnerabilities. Rather, social engineers operate in the social world by manipulating the trust or gullibility of human beings.

Not all social engineering and low-tech attacks will give attackers all the information they are seeking at once. Social engineers will collect small pieces of information that seem innocuous to the individuals that divulge them. Social engineers may gather these snippets of information in a random order but then assemble them to launch attacks that can be devastating to an organization's information security, resources, finances, reputation, or competitive advantage.

The purpose of a social engineering attack can be as varied as the attack method employed. The result, however, is generally the same: a loss of intellectual property, money, business advantage, credibility, or all of the above.

Closely related, and often used in conjunction with social engineering attacks, are low-tech attacks. Low-tech attacks do not rely on technology and are carried out via physical mechanisms against organizations or individuals.

Social engineering attacks have been and will remain successful due to the weakest security link in an organization: the people.

This chapter presents the history of social engineering and low-tech attacks, its methods, the social science behind it, and its business impact. In addition, it covers detection and mitigation policies for managers and information security officers to defend against social engineering and low-tech attacks.

19.2 BACKGROUND AND HISTORY.

Social engineering is not a new tactic. The term has its foundations in political history when a person or group manipulated a group of people, large or small, in an attempt to persuade or manipulate social attitudes or beliefs. Often this method was used by governments or political parties.1 To this day, the term carries a negative connotation because of its roots in Nazi-controlled Germany, where, for example, eugenics experiments in the 1930s included isolating groups of people with difficulties in social relationships.2 Some researchers also consider the realm of social engineering to cover everything from advertising and modern media to political action groups.

However, in information technology circles today, “social engineering” is a term describing security penetration or circumvention techniques.

19.2.1 Frank Abagnale.

Frank William Abagnale Jr. was able to impersonate authority figures successfully; he represented himself at various times as physician, pilot, attorney, and teacher. He also used social engineering techniques to persuade and manipulate innocent and good-natured individuals to help him carry out many of his frauds. Many of Abagnale's techniques were highly successful and well engineered and some of them were dramatized in the popular movie Catch Me If You Can directed by Steven Spielberg. Starting in 1974, he began working to help organizations recognize and defend against such attacks through his speaking engagements and consulting business.3

19.2.2 Kevin Mitnick and the Media.

One of the best-known social engineers is Kevin Mitnick. Mitnick is now a computer security consultant. In his youth, he was a criminal hacker and specialized in using social engineering. Mitnick has written several books discussing his observations and techniques as a computer hacker. There is much discussion on the Internet and among security professionals about Mitnick's credibility as a security consultant, his true intentions as an ex-hacker, and the degree to which he has been reformed. Even today, Mitnick maintains that social engineering is the most powerful tool in the hacker's toolbox.4

Mitnick's notoriety has been amplified in the media, dating back to his arrest, but he is not alone. Many other criminals or hackers, real or fictional, have been glorified in the media; many times these individuals used social engineering as a tool of attack. Many fictional characters have been portrayed as being masters of social engineering and hacking, leading to a career in federal agencies such as the FBI. While the information assurance community actively tries to discourage such glorification of hacking and social engineering, they continue to be popular topics in Hollywood and the media. One of the most popular examples is the 1995 movie Hackers, in which the criminal-hacker hero tricks security guards at a television station into revealing the station's modem number and then alters the programming to show an episode of Outer Limits instead of the scheduled program.5

19.2.3 Frequency of Use.

Social engineering attacks have become more prevalent because they are successful. One of the most visible reminders of this is the large number of phishing and pharming attacks discussed in Sections 19.3.4.1 and 19.3.4.2. In brief, phishing is the use of fraudulent e-mails to trick people into visiting Web sites where their confidential information can be stolen; pharming is the use of fraudulent Web sites to attract gullible victims who reveal confidential information. For example, during the period August 2006 through August 2007, the number of unique phishing sites detected per month ranged from 10,091 (August 2006) to 55,643 (April 2007), and each site survived for an average of 3.3 days.6 Even though these social engineering techniques are well known and well covered by the media, they are frequently used by criminals. Phishing, for example—which started a few years ago as a simple eBay scam—has grown to include almost innumerable banking, PayPal, and credit card scams. Some AOL users have even reported receiving multiple fake phishing scam e-mails in one day.

19.2.4 Social Engineering as a Portion of an Attack.

Social engineering, when used by itself as a stand-alone attack tool, is very effective. But this tool can also be used as part of a larger attack or even as a subset of a technical attack. Social engineering is commonly used at the beginning of a larger, more substantial attack.7

Although one social engineering attempt may have been averted, another might not be completely mitigated. If an initial attack was unsuccessful because the attacker did not have sufficient or accurate information to carry out the attack, the attacker may revert once again to social engineering and low-tech attack strategies to gather additional information for another different, possibly successful, attack.

Even though an organization may have the most advanced firewall, intrusion detection system, and risk management tools available, an attacker may use social engineering to circumvent these technical defenses. The attacker will not need to challenge these enforcement points if able simply to extract a valid user name and password from an unsuspecting employee. Once the attacker has this information, it may be enough to carry out a devastating attack on an organization's information systems, either by a massive increase in network traffic or by individual acts of malfeasance such as the illegal transfer of funds.

Computer criminals have been able to use pure social engineering and low-tech attack techniques without relying on technology to cause large-scale damage. An action as simple as retrieving discarded, potentially damaging information from a Dumpster can cause substantial damage to an organization's image. This act might be part of a larger campaign or operation by a group of individuals seeking political or social action against an organization. An example of this was recently chronicled by Home Box Office (HBO). It described a group of citizens who routinely gathered trash from county election centers and city halls in an effort to collect evidence that they could present to the media to prove voter fraud. Documents that were improperly disposed of have had extremely negative consequences for election officials.8

19.3 SOCIAL ENGINEERING METHODS.

Social engineering attacks can take many different forms, and expert social engineers are capable of changing their methods of attack very quickly in order to succeed. The underlying principle of most attacks is pretexting, which is defined as “the collection of information…under false pretenses.”9 Two distinct methodologies are used during most social engineering attacks: impersonation and seduction.

Targets of social engineering attacks also vary widely. The targets of each attack can be characterized as either high or low profile. In general, high-profile targets (e.g., CEOs) may have more information to provide, but in turn may be more suspicious of questions than other employees. A low-profile target (e.g., nonmanagers or associates) might not be as suspicious, but might not have all of the information required for the social engineer's attack. Depending on the situation, a social engineer will choose a high- or low-profile target for impersonation or persuasion.

A social engineering attack may be an attack of opportunity, with the victim randomly chosen. In other, well-planned attacks, a particular target may be identified as the victim.

19.3.1 Impersonation.

Impersonation is defined as “the act, or an instance of pretending to be another,” and it is one of the most popular methods that social engineers employ. Social engineers use several well-known impersonation attacks targeting all levels of employees in any area of an organization.

Help desk employees and systems administrators are commonly impersonated during social engineering attacks. For example, most organizations have help desks for IT-related issues. Employees generally follow the instructions of help desk personnel, simply because help desk staff are trusted and usually more knowledgeable about technology. Social engineers understand this trust and will exploit it to steal information. An attacker will impersonate help desk personnel, contact unsuspecting employees, and ask for, and often receive, sensitive information.

Help desk personnel can also be the victims of social engineering attacks, with the social engineer impersonating a user in need of technical assistance. An attack against the AOL help desk is an example of a successful social engineering attack against help desk personnel. The attacker phoned technical support regarding a problem. The call led to a lighthearted conversation during which the attacker offhandedly remarked that he had a car for sale at an attractive price. The victim (AOL employee) requested to see a picture of the car. The attacker e-mailed the employee a picture that, in fact, contained a piece of malicious code. The employee opened the picture, unintentionally ran the exploit, and subsequently allowed external connections to the internal AOL network. Approximately 200 AOL accounts were compromised by this attack.10

There have also been instances where social engineers impersonate corporate officers or managers. In a recent case, the payroll giant ADP released personnel and brokerage account information on hundreds of thousands of customers. The attacker impersonated a corporate officer from a public company and received the information from customer-service personnel using standard procedures authorized by the Securities and Exchange Commission that allow “public companies to get names and addresses of shareholders from brokers, as long as the shareholder has not objected to the disclosure of such information.”11 This case highlights the need for controls and education for all levels of employees. The victim in this case might have been concerned about displeasing a powerful member of a client organization and may have feared retribution had he or she insisted on authentication of the caller's identity.

It is important to note that attackers will also impersonate regular employees or other authorized personnel of an organization, such as service staff or consultants, by dressing and speaking appropriately and blending into the organization's environment. For a social engineer, there are no boundaries for impersonation; they may try to gain information via physical access or using any number of ruses, including impersonation of:

  • Temporary employees (e.g., contractors or auditors)
  • Utility or telecommunications company employees
  • Janitorial or maintenance employees
  • New employees
  • Delivery personnel

Physical access could facilitate the gathering of information depending on whether the organization has proper controls in place. Motivated social engineers may even attempt to work at a service company in order to have easier access to the victim.

19.3.2 Seduction.

The word “seduce” is defined as “to lead away from duty, accepted principles, or proper conduct.”12 In general, a social engineering attack using seduction will take longer to complete than an impersonation attack. The attacker, using seduction, will identify a target and will form a bond with that individual, through social settings, online, or through another mechanism. In some instances, social engineers will study their victims over a period of time to learn their habits, likes, dislikes, or emotional weaknesses. It is during this relationship that information may be divulged to the attacker.

For example, a social engineer who wishes to gain access to a building may befriend a security guard of that organization. After some time has passed and the relationship has progressed, the attacker may request a tour of the facility. The security guard, wanting to impress the new friend, may allow a tour. The social engineer, once inside, can plant clandestine listening devices, look for user names or passwords, and read documents left in the open.

19.3.3 Low-Tech Attacks.

Low-tech attacks are invaluable to an attacker as part of reconnaissance, information theft, and surveying for easy targets. On occasion, these methods may even reward the attacker with a very large amount of useful information in a very short amount of time. Low-tech attack methods may seem simple or improbable, but the methods described are often overlooked by security managers. They are not urban legends—they have been used in the past and continue to be utilized today.

19.3.3.1 Dumpster Diving.

In the context of social engineering, “Dumpster diving” is the social engineer's act of searching through an organization's garbage in an attempt to find documents, hardware, software, or anything that could be of value to meet the goals of the attacker. Dumpster diving is a popular social engineering technique because it is easy and often successful; Oracle, for example, hired detectives to purchase Microsoft's trash during Microsoft's antitrust trial. (The detectives were unsuccessful.)13 Social engineers do not need to deceive anyone to perform the attack. In many cases, disposed materials may sit in open containers for weeks. Dumpster diving is most often carried out at night when no one is around, as there is less risk of being caught. To avoid detection, Dumpster divers have been known to dress in dark clothing or janitorial uniforms.

All organizations must understand the legal ramifications of Dumpster diving. Local and state laws may vary widely and be murky; however, it is generally understood that no organization or individual should have any expectation of privacy relating to materials in refuse containers left on a public right-of-way for pickup. It may be legal for an attacker to remove and take ownership of anything left in a Dumpster unless it is on private property with clearly marked No Trespassing signs.

19.3.3.2 Theft.

The age-old crime of theft is another popular social engineering technique. Social engineers may pick up anything they can get their hands on, literally, and leverage the information obtained to carry out other attacks. Targets of theft include, but are not limited to, printed materials, CD-ROMs, USB flash drives, backup media, and laptops. Thieves may obtain objects on or off company premises. While on company premises, they may look for objects they can grab and quickly conceal. An attacker may bring in empty laptop cases, backpacks, or even large purses or trash sacks. Most employees are likely to be aware of theft techniques outside of the organization, such as in restaurants, airports, and other public places. Employees may not realize that a social engineer or other criminal may actually be an insider.

19.3.3.3 Leveraging Social Settings.

Social engineers may use social settings to gain information because people relax in social settings and may believe that information security practices are for the workplace only. A social engineer may use a social setting, such as a bar, to take advantage of drinking employees to gain information. The attacker may actively engage a target or passively eavesdrop on a conversation. Although this type of attack may seem far-fetched, there are many situations where the attacker may be in the right place at the right time to gain knowledge that can be later used as part of a larger attack. People in social settings are less likely to have their defenses up and security on their minds.

Restaurants, corporate functions, impromptu meetings outside of a company building, or loud phone conversations are all areas or situations where eavesdropping can occur. Many times, employees will work on commuter trains or conduct business in other public areas. A social engineer can exploit an organization's mobile workforce to gain information. Employees should be cognizant of who is around them while performing any work-related task. It is usually inappropriate to discuss business or to use a cell phone or a laptop within sight or hearing of others.

19.3.3.4 Exploiting Curiosity or Naivete.

Social engineers may trick a victim into unknowingly aiding in an attack by piquing the victim's curiosity. For example, an attacker may leave an intriguingly labeled CD-ROM in a break room, hoping that a victim is curious about the contents. When the victim places the CD-ROM into a computer, a malicious program, such as a virus, automatically executes and spreads. This technique has also been executed using USB drives, iPods, and corrupted music CDs.14

19.3.3.5 Bribery.

Bribery is a method of last resort for a social engineer. Bribing an employee is a mechanism to gain information quickly, but it also exposes the social engineer's motives immediately. There is always the chance that the victim will have second thoughts and expose the social engineer or alert management or law enforcement agencies.

Bribery can be dangerous and expensive. A thorough social engineer will investigate potential victims carefully. The social engineer may frequent the same social scenes as the victim to gain information and then engage the target in casual conversation. An important factor is the employee's feelings about the company. By making offhand negative comments about the company and gauging the victim's reaction, the attacker may be able to determine whether or not to proceed with the attack. Social engineers commonly target disgruntled employees, contractors, and employees about to leave the organization.

Monetary gain is not the only compensatory consideration in bribery. Promises of free access to for-pay resources, tickets to sports events or concerts, and trading of music files are all included under the umbrella of bribery. Literally, any item of value given to an employee should be considered a bribe.15

19.3.3.6 Data Mining and Data Grinding.

Search engines can catalog a surprising amount of information that is sensitive and confidential. Social engineers can create special searches, use search engine application programming interfaces (APIs), and use advanced search capabilities of many search engines to mine information about a company. Another attack vector is the caching feature of many search engines. A search engine may cache a Web page with sensitive information. If an organization requests the cache be removed, there is a delay before the search engine cache is updated, during which time the organization's information is exposed.
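To make this exposure concrete, a security team can generate its own data-mining queries and run them against a search engine to see what has already leaked. The short Python sketch that follows only builds query strings using the widely supported site: and filetype: search operators; the domain and keyword lists are placeholders, and the queries are intended to be pasted into a search engine manually rather than submitted through any particular API.

    # Build advanced search queries a security team can use to audit its own exposure.
    # The domain and keyword lists below are placeholders for this sketch.
    SENSITIVE_KEYWORDS = ['"confidential"', '"internal use only"', '"password"']
    FILE_TYPES = ["xls", "xlsx", "doc", "docx", "pdf"]

    def build_audit_queries(domain):
        """Return search queries that look for potentially sensitive files on a domain."""
        queries = []
        for file_type in FILE_TYPES:
            for keyword in SENSITIVE_KEYWORDS:
                queries.append(f"site:{domain} filetype:{file_type} {keyword}")
        return queries

    if __name__ == "__main__":
        for query in build_audit_queries("example.com"):
            print(query)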

Documents published by a company are another source of unintended information disclosure. In a technique known as data grinding, social engineers can use metadata-reader software to extract information such as the author's name, organization, computer name, network name, e-mail address, user ID, and comments from Microsoft Office documents.16 This potentially damaging information is included in most document types.
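To illustrate how much a single published document can reveal, the following Python sketch reads the core properties (author, last editor, title, and revision count) that word processors embed in every .docx file, using only the standard library. The file name quarterly_report.docx is a placeholder; a security team could run a check like this on documents before they are posted externally.

    import zipfile
    import xml.etree.ElementTree as ET

    # Namespaces used by the Office Open XML core-properties part of a .docx file.
    NS = {
        "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    def read_docx_metadata(path):
        """Return selected core properties (author, last editor, etc.) from a .docx file."""
        with zipfile.ZipFile(path) as docx:
            core = ET.fromstring(docx.read("docProps/core.xml"))
        return {
            "author": core.findtext("dc:creator", default="", namespaces=NS),
            "last_modified_by": core.findtext("cp:lastModifiedBy", default="", namespaces=NS),
            "title": core.findtext("dc:title", default="", namespaces=NS),
            "revision": core.findtext("cp:revision", default="", namespaces=NS),
        }

    if __name__ == "__main__":
        # "quarterly_report.docx" is a placeholder; point this at any published document.
        for name, value in read_docx_metadata("quarterly_report.docx").items():
            print(f"{name}: {value}")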

19.3.3.7 Piggybacking or Tailgating.

A common and very successful method of social engineering is piggybacking or tailgating. The method allows an attacker access to a secured facility by entering together with an authorized person. The victim in these cases is being polite and holding the door for the attacker, who then enters the facility under the credentials of the victim. Once inside a facility, an attacker is free to roam the building in search of information. If questioned upon entering, an attacker may use the excuse of having forgotten the necessary credentials or of being a new employee. In general, if piggybacking is going to be attempted, social engineers will dress and act like other members of the organization so they can blend in.

It can be very difficult to demand that employees refrain from allowing piggybacking, but a policy must be in place to demand exactly that. All persons entering a secure facility should be required to fully use identification and authorization mechanisms every time they enter. As discussed in Chapter 50 in this Handbook, role-playing exercises can help employees overcome their reluctance to challenge piggybackers.

19.3.4 Network and Voice Methods.

As the Internet expands, so do social engineering attacks. These attacks differ from traditional social engineering attacks since they require minimal or no human interaction. The basis of the attacks still relies on the trusting nature of humans. Phishing is the most common type of attack seen today, and according to the 2006 FBI Computer Crime Survey, most attacks are derived from e-mail (93 percent).17 Social engineers may capture all the information they are looking for from a single phishing attack. The famous Nigerian 411/419 attack is an Internet-derived combination attack that uses both e-mail and human interaction.18 It promises millions of dollars to someone who will facilitate the movement of funds outside of Nigeria. All that is required is for the gullible victim to provide full information about a bank account to which the funds can be sent.

The methods used for these attacks include phishing, pharming, spim, spit, vishing, and malware. For more technical details about these techniques, see Chapters 16 and 20 in this Handbook.

19.3.4.1 Phishing.

Phishing is one of the most widely used and successful social engineering attacks. It is defined as the “act of sending an e-mail to a user falsely claiming to be an established legitimate enterprise in an attempt to scam the user into surrendering private information.”19 Phishing, as with all social engineering attacks, relies on the trusting nature of people. Naivete about using the Internet also plays a role in the success of this attack.

A phisher will develop an e-mail message and Web site that resemble those of an existing establishment, such as PayPal. The message may explain that there was a problem with an account, a transaction, or something else, and instructs users to visit a hyperlinked Web site and log in with their credentials. The message is e-mailed, primarily using botnets, to many thousands or possibly millions of e-mail addresses. Victims respond by visiting the fraudulent Web site and entering their passwords and account numbers, which are then stolen.

The most common targets of phishing attacks are financial institutions, which are the subject of approximately 89 percent of all attacks.20 The large number of phishing e-mails that many people receive indicates that some individuals respond to such e-mails and fall prey to phishers. Recent statistics reveal that this type of social engineering attack has a success rate of approximately 14 percent.21 Newer, more targeted phishing attacks called “spear phishing” are also gaining in popularity. In these scams, spoofed e-mail messages to employees of a specific organization appear to originate from an authority such as a particular manager or department and can include bogus requests for user names and passwords.22
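One simple heuristic used by some content filters against the deception described above is to compare the domain shown in a link's visible text with the domain the link actually targets. The Python sketch below is a minimal, hypothetical illustration of that idea rather than a description of any particular product, and the sample message body is invented for the example.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkExtractor(HTMLParser):
        """Collect (href, visible text) pairs from an HTML e-mail body."""
        def __init__(self):
            super().__init__()
            self.links = []
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links.append((self._href, "".join(self._text).strip()))
                self._href = None

    def suspicious_links(html_body):
        """Flag links whose visible text names one domain but whose href points to another."""
        parser = LinkExtractor()
        parser.feed(html_body)
        flagged = []
        for href, text in parser.links:
            if "." not in text:
                continue  # visible text does not look like a domain or URL
            shown = urlparse(text if "://" in text else "http://" + text).hostname
            actual = urlparse(href).hostname
            if shown and actual and shown.lower() != actual.lower():
                flagged.append((text, href))
        return flagged

    # Invented example: the displayed text claims paypal.com, but the link leads elsewhere.
    body = '<p>Please verify your account at <a href="http://paypal.example-login.net">www.paypal.com</a></p>'
    print(suspicious_links(body))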

19.3.4.2 Pharming.

In pharming attacks, attackers attempt to make victims visit spoofed Web sites to reveal sensitive personal information. Attackers achieve this by manipulating the victim's local or global Domain Name Service directory—known as DNS poisoning. Pharming attacks may fool users more easily than other attacks because there is no indication that an attack is under way. Users may type in the URLs of their banking or credit card Web sites as usual and not notice that they are in fact visiting fraudulent sites. In a 2005 pharming attack, users of the Internet mail service Hushmail were redirected to a fraudulent site where their information was harvested. Hushmail suffered negative publicity from this attack and was forced to update its users daily about its investigation into the attack.23
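Because a pharming victim sees the expected address in the browser, one modest detection approach is to compare what a hostname currently resolves to against addresses that have been verified out of band. The Python sketch below illustrates the idea under stated assumptions: the hostname, the addresses, and the locally maintained allow-list are hypothetical, and the check is unsuitable for sites whose addresses legitimately change often.

    import socket

    # Hypothetical allow-list of addresses verified out of band (e.g., published by the bank).
    KNOWN_GOOD = {
        "www.examplebank.com": {"192.0.2.10", "192.0.2.11"},
    }

    def check_resolution(hostname):
        """Compare the current DNS answer for a hostname with a verified address list."""
        try:
            _, _, addresses = socket.gethostbyname_ex(hostname)
        except socket.gaierror as exc:
            return f"{hostname}: lookup failed ({exc})"
        unexpected = set(addresses) - KNOWN_GOOD.get(hostname, set())
        if unexpected:
            return f"{hostname}: WARNING, resolved to unexpected addresses {sorted(unexpected)}"
        return f"{hostname}: resolution matches the verified list"

    if __name__ == "__main__":
        print(check_resolution("www.examplebank.com"))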

19.3.4.3 Spim.

Many people and businesses rely on the synchronous communication that instant messaging (IM) offers. Social engineers have noted the increase in the use of IM software and developed spim. Spim is “instant spam or IM spam.”24 A spim attack is very similar to a phishing attack except the vector is IM software instead of e-mail. By utilizing popular instant messaging clients to contact a potential victim, a social engineer is able to bypass technical safeguards that many organizations have deployed, such as e-mail or Web filtering systems. For example, an attacker will develop a fraudulent Web site that resembles a legitimate one and send its link to many IM accounts. Victims will visit the Web site and log in, revealing their credentials to the attacker. The fraudulent site could contain malicious software that infects the victim's computer. Surprisingly, despite the increasing use of IM software, the growth of spim has been slow.25

19.3.4.4 Spit.

Spit, or spam over Internet telephony, is unwanted messaging sent via VoIP (Voice over Internet Protocol). Although there have been warnings since 2004 about the potential problems to be caused by spit, there is at the time of writing (2007) still little evidence that the problem has become significant. However, experts warn that conventional content analysis may not be applicable to spit; some suggest that traffic analysis might track down the origins of high-volume messaging, allowing coordinated blockage of such systems.26

19.3.4.5 Vishing.

Attackers who lure victims using e-mail and the telephone, or the telephone alone, are performing a vishing (voice phishing) attack.27 Social engineers may send an e-mail to, or call, victims in an organization about an issue and request a call back. The number given is either staffed by accomplices or answered by a legitimate-sounding automated system. Victims are prompted to release potentially damaging information, either about themselves (in the case of identity theft) or about their company.

In another example, an attacker can leverage automated phone answering systems, which allow the caller to input the first few letters of the last name of a contact to reach their extension. The attacker can try multiple extensions and stumble on some that reveal details regarding a person's position, title, or office status (e.g., “I'm out on vacation until April 4”). A social engineer can leverage that information and call other employees to gain additional information or access.

With the proliferation of VoIP, where calls are routed over IP, the calling number can be spoofed easily. This is analogous to how phishing Web sites operate. Since this is primarily a voice-based social engineering attack, it is considered a voice attack rather than a Web attack.

19.3.4.6 Trojans and Viruses.

Malware consists of programs or files that are harmful or dangerous to the end user. Social engineers frequently use malware, such as Trojans and viruses. Although the programs themselves may not actually attack the system, the basis of the attack still relies on manipulating a victim's trust. For example, a social engineer may send a victim an e-mail with a link to a malicious Web site. If the user visits the malicious Web site, a Trojan is installed on the victim's computer and the malicious program begins gathering the victim's information. Another example involves a USB drive loaded with auto-executing malware that is left lying around. A victim curious about the contents will plug the drive into a machine and execute the malicious code.28 Any number of vectors can be used, including documents, e-mail messages, Web sites, CDs, and USB drives. The premise is the same for all of these attacks: unsuspecting users curious about the content inadvertently install these dangerous programs. See Chapter 16 in this Handbook for a more technical discussion of Trojans, viruses, and other malware.

19.3.5 Reverse Social Engineering.

Reverse social engineering is an effective attack usually executed by an experienced social engineer. It can be described as the knight-in-shining-armor attack. In order for this method to succeed, social engineers must be able to put themselves in a position where they will be the only persons around to fix the problem. A reverse social engineering attack has three distinct parts. First, a social engineer will create a problem (e.g., a user ID issue). Second, the social engineer will publicize that there is no other person capable of fixing the issue. In the final part of the attack, the social engineer will assist the victim and “fix” the issue. It is during the third segment of the attack that a social engineer will gather information. The success rate of reverse social engineering attacks tends to be high simply because the victim is satisfied that the fabricated problem is fixed.29

For example, the attacker may create a problem by changing the name of a file. The victim searches for but cannot find the file. The attacker will announce an ability to retrieve lost information but will require a user ID and password to gain access to the system. The victim, disturbed by the thought of losing an important document, will divulge the information. Finally, the attacker will “find” the missing file. The victim, pleased that the file has been returned, will probably forget that access credentials have been divulged.30

Reverse social engineering attacks are not limited to human interaction. Many viruses with seemingly benign subject lines use reverse social engineering techniques. For example, the “My Party” virus was first identified in 2002. The virus propagated through e-mail with a subject line “New photos from my party.” Victims, believing that the sender was the author, opened the e-mail and clicked the embedded link. The linked Web site installed a backdoor Trojan that infected victims' machines.31

19.4 PSYCHOLOGY AND SOCIAL PSYCHOLOGY OF SOCIAL ENGINEERING.

This section outlines the science underlying the success of social engineering. Sections 19.4.1 and 19.4.2 use some well-established principles of psychology and social psychology to analyze social engineering from two angles: the psychological perspective of the victim and the social-psychological perspective of the social engineer and victim. Section 19.4.3 explains that there is no single social engineer stereotype. Because the terms used in this section can be found in any undergraduate psychology and social psychology textbooks, academic references have been minimized.

19.4.1 Psychology.

Social engineering succeeds because of the design of human nature: Most people tend to easily trust other people. In this section, examples of social engineering attacks illustrate scientific terms used to characterize social engineering.

A cognitive bias is defined as a mental error caused by humans' simplified information-processing strategies.32 People become victims of social engineering attacks due to cognitive tendencies inherent in all humans. Although cognitive biases are found in everyone, their universal presence does not imply that they are impossible to counter.

Following are some cognitive biases that can explain why people fall prey to social engineering attacks:

  • Choice-supportive bias. People tend to remember an option they chose as having had more positive aspects than negative ones.33 IT help desk operators may provide employee names, extensions, or both, without verifying the identity of callers. Help desk operators remember this practice as being good because most callers are genuine and the callers thank them for providing the information. Social engineers can masquerade as genuine callers to exploit help desk operators' choice-supportive bias.
  • Confirmation bias. People tend to collect and interpret evidence in a way that confirms their conceptions.34 If an organization has a contract with a custodial service and employees see custodians all wearing the same uniform, then a social engineer wearing the uniform may not be challenged because of the employees' confirmation bias.
  • Exposure effect. People tend to like things that are familiar to them.35 A social engineer may call victims under the pretext of performing a survey for a popular local restaurant and then ask about the organization where the victim is employed. People are comfortable providing that information because they are familiar with the restaurant.
  • Anchoring. People tend to focus on one trait when making decisions.36 If a social engineer has a soothing voice, the victim may focus on that attribute versus the questions being asked.

19.4.2 Social Psychology.

Social psychologists define “schema” as the inherent picture of reality used by humans to make judgments and decisions. From the perspective of social psychology, social engineers exploit the fact that most people's schema includes rules to be trustful of other people and their intentions. People are taught from the very beginning of their socialization that being nice to others is a good thing. In the context of information security, people's tendency to blindly trust others can spell disaster.

The following are common errors that people make, with examples of how social engineers exploit those mistakes to attack an organization:

  • Fundamental attribution error. In this common error, people assume that the behaviors of others reflect stable, internal characteristics. Someone committing the fundamental attribution error might see a colleague in a bad mood and think, “She is always moody.” In reality, the colleague might be pleasant in general but be suffering a headache at the time. Social engineers will act pleasant and charming to lead victims to commit the fundamental attribution error, to be impressed that the attackers are nice people in general and so to help them.
  • Salience effect. Given a group of individuals, people tend to guess that the most or least influential person is the one who stands out the most. For example, from a group of 10 people, nine of whom are six feet tall and one who is five feet tall, if asked to guess who the most intelligent person in the group is, an observer might say that it is the five-foot-tall person. Social engineers attempt to blend into their victim's environment to take advantage of the salience effect. They are acutely aware of company lingo, events, and regional accents.
  • Conformity, compliance, and obedience. People respond to the social pressures of conformity, compliance, and obedience by adjusting their behaviors. A social engineer impersonating a high-powered executive demanding admittance into the company premises may persuade a new security guard with the weight of assumed authority. The authority figure's promise of reward or threat of punishment may further influence the security guard's decision to carry out the request of the attacker.

19.4.3 Social Engineer Profile.

The profile of a social engineer is not that of the stereotypical computer hacker often portrayed in the movies or on television.37 Social engineers are most likely not pimple-faced teenagers who spend all their time with their computers. A social engineer is often outgoing, confident, and well educated. Social engineers may use their own personality, or adopt a persona that differs greatly from their normal personality. Regardless, they will blend into their environment. A social engineer seeks to be unnoticeable, unremarkable. He or she will dress according to the dress code of the operational environment. Many seem to have excellent social and communication skills. Interestingly, the typical social engineer may be an excellent actor, able to react quickly and adapt to changing conditions. The attacker's confidence will often mask any nervousness or tension during the social engineering attempt.

Social engineers may also exhibit a dark side. Attackers may have very little regard for the consequences of their actions on the victims. Even though attackers may appear very polite or congenial toward a victim, they actually care very little about the victim. The victim is simply a means to an end, only part of the social engineering attack tool. The social engineer's motivations may vary widely and range from personal financial gain to revenge. There may also be significant external pressure on the attacker from acquaintances or organized crime syndicates.38

For more information about the psychology of computer criminals, see Chapters 12 and 13 in this Handbook.

19.5 DANGERS OF SOCIAL ENGINEERING AND ITS IMPACT ON BUSINESSES.

Clearly, social engineering is a great danger to businesses, large and small. It is important to remember that the ultimate goal of the attacker may very well be the disruption or destruction of the business where the social engineering targets are employed. One must not underestimate the potential impact of a seemingly minor social engineering attack on a business or organization.

19.5.1 Consequences.

The negative consequences of a successful social engineering attack could be disastrous. If you can envision a new social engineering ploy, so can a computer criminal. Much like disaster recovery planning, when trying to quantify and understand the impact of social engineering attacks, all possibilities must be considered. Something as seemingly minor as an internal memo that has not been properly destroyed could have the potential to bankrupt an organization. A simple social engineering attack could grow into a major information security breakdown, based on the information contained in the memo.

The danger is especially high for publicly traded companies that could lose value because of a loss of confidence from investors.39 Within the last few years, many companies have come under financial duress because of a security breach or lapse that drew considerable attention from the press. Social engineering could very well have been part of these security incidents. Many organizations are required by law to have safeguards in place when it comes to data security; many of these requirements have elements that will aid the organization in defending against social engineering and low-tech attacks.

Another serious consequence of a successful social engineering attack is the uncertainty that follows and the difficulty of investigating such an attack. Organizations may never fully discover to what extent a social engineer was able to infiltrate the organization. There are so many different vectors and possibilities of intrusion that it may be impossible to fully understand exactly what or who was compromised during the attack. An especially dangerous and frustrating situation is where an attacker has an insider or accomplice within the organization. Some organizations have never identified those individuals within the organization who have been guilty of aiding an attacker. This uncertainty is difficult to recover from, and to defend against in the future. It is also another situation where a company may have substantial credibility problems and a loss of confidence from its shareholders after an attack.

19.5.2 Case Study Examples from Business.

There are many well-known examples of real-life scenarios from organizations that demonstrate the effectiveness of social engineering. Here are a couple of well-known examples without any specifics that would embarrass the guilty parties. Social engineering attacks are real—they are not simply computer security theory.

Case 1: One very well known social engineering attack in the business world is piggybacking. In this type of social engineering, an attacker will depend on an individual's sense of courtesy. Most people remember from their early school days that they are taught to hold a door open for someone who is behind them. This courtesy is often extended in the workplace, including secure areas such as a datacenter. A potential attacker may attempt to enter a datacenter without proper credentials by following closely behind someone else who is entering a datacenter with proper identification and authorization. The authorized individual will probably exhibit courtesy and hold the door open even for an unknown person.

Result: In this example, an attacker who is not authorized has gained access to a secure facility by relying on another individual's sense of courtesy. As noted in Section 19.3.3.7, a policy must be in place requiring all persons entering a secure facility to use identification and authorization mechanisms every time.

Case 2: Another example that has been carried out in different variations involves an attacker using several social engineering techniques to take advantage of coinciding events in order to steal data, information, or equipment from an organization. The attacker often makes several telephone calls to find a specific date when a company official, perhaps the CFO or director of technology, is out of the office. The attacker then shows up at the organization claiming that the company official authorized the attacker to take a certain computer from the company's site. Usually the attacker conveys a sense of urgency and projects a very confident display of authority. Many times the employee caves in to the attacker's demands without checking the story, and a very important computer is carried out of the company.

Result: In this case, the organization has lost control and ownership of a computer and its data. If the computer does not have solid safeguards, such as data encryption mechanisms, the data and the computer could be used for any number of destructive activities. If information about the incident is made public, great damage can be done to the organization's reputation. The data harvested from the computer could also be sold to competitors or used as part of a blackmail scheme.

19.5.3 Success Rate.

Although there may be few solid statistics of the success rate of social engineering, as is the case in most areas of information security, most experts believe the rate to be extremely high. If history has anything to teach the security community through example, social engineering will continue to be a very powerful and successful tool for criminals. Very few organizations are immune to social engineering, not even state, local, and federal governments. If well-trained federal employees are vulnerable, every organization must take the success rate seriously.

The high success rate also underscores the importance of constant education of employees and reevaluation of the organization's efforts to combat social engineering. This is an area where many organizations do not allocate enough resources. The frequency of social engineering attempts also dictates the need for proper, efficient, and swift reporting of suspicious activity targeting individuals or the organization. It is very difficult to defend against social engineering attempts if the organization does not know it is under attack. The possible success of an attack can be substantially reduced with properly trained, supported, and motivated employees.

19.5.4 Small Businesses versus Large Organizations.

The impact and dangers of social engineering and low-tech attacks vary widely between small businesses and large corporations. As discussed, the consequences can be very serious, including the collapse of the organization. Small businesses are often much less prepared and much less equipped to survive a serious breach of security. Conversely, small businesses may have the upper hand versus large organizations because of a substantially smaller, better communicating workforce. It is much easier to communicate and engage everyone within a small company when an attack is attempted or carried out. Small businesses also have an advantage of a much smaller workforce to train; this can result in much better prepared employees. Small business employees are probably more likely to identify people who do not belong or should not be asking for private data. They may also be more likely to deny access or question someone whose story does not seem likely or who is suspicious.

Large organizations can be mired down in bureaucracy, ineffective management, or overly complicated reporting procedures. An attacker could carry out an entire plan before the security team in a large company would receive notification of a social engineering attempt. Many times, an individual is less willing to challenge the credentials of a stranger in a large organization. This is especially true where there are large numbers of employees who feel they may be punished for preventing someone else from doing a proper job. The employee might be more likely to let the attacker pass unquestioned, rather than to risk possible negative ramifications.

There is no doubt, however, that no matter what the size of organization, it is a potential target. Criminals do not always choose easy or obvious targets. Any business, small or large, family-owned or corporate conglomerate, may be a target of an attack that utilizes social engineering.

19.5.5 Trends.

Although it is important for every information security manager always to keep a skeptical eye toward statistics, it is equally important to keep abreast of security threat trends. Social engineering is no exception. It may be difficult for any survey or poll to gather facts about how many attempts were made in any given year, or how many were successful. Many social engineering attempts are probably never detected, let alone reported. When it comes to such a powerful and successful attack mechanism, assume the worst: Criminals are increasingly using it. It is highly unlikely that all organizations will be able to band together and eliminate successful social engineering and low-tech attacks. Criminals create new forms and tactics every day, and people will probably continue to fall for these tactics.

19.6 DETECTION.

Detection of social engineering and low-tech attacks can be difficult. The nature of most types of social engineering attacks is to circumvent technical controls and take advantage of people's willingness to trust and help. In many cases, detection relies on people's ability to recognize a potential attack and respond appropriately. Further complicating detection is the potential that a social engineering attack may not be a single occurrence, but many smaller events culminating in the release of potentially damaging information, access to restricted resources, or the concealment of other activities. One penetration-testing expert estimates that social engineering tactics account for less than 20 percent of the time spent in an attack; the rest of the time is spent in technical exploitation of the information originally gathered through deceit.40

There are three main ways of detecting social engineering attacks: people, audit controls, and technology.

19.6.1 People.

Since people are the vector for the attack, they are, in general, the first line of defense. Organizations need to provide employees with the resources required to help discern a potential attack from a legitimate request. In addition, organizations need to provide information on what to do during and immediately after an attack. For example, during a phone attack, employees should be trained to remember as many details as possible. Items that the employee should try to remember include:

  • Was the attacker male or female?
  • Was a caller ID displayed?
  • Was there noise in the background?
  • Did he or she have an accent?
  • What questions were asked?
  • What answers were provided?

Some social engineers can be detected. Employees should also be aware of people asking many questions, some of which may not make sense. In addition, callers or e-mails requesting names of managers or IT personnel should prompt a phone call to the IT security or investigation department. Finally, employees should be aware that no one would ever call legitimately requesting their password.41

Organizations also need to provide information as to who should be notified during the actual event, immediately afterward, or both (depending on when the employee recognizes the attack). To help in the notification process, an organization can create an incident notification information card and disseminate it across the organization so an employee can determine whom to contact quickly.

19.6.2 Audit Controls.

Auditing an organization's e-mail, Internet content, systems logins, and systems changes can be used to detect social engineering attacks. However, if the audits are not conducted in real time, there will be a delay between the time of the attack and the review of the audit records. In instances where notification is not real time, forensic examiners can use the audit information to help piece together the attacks. Awareness teams can also use the log results to help devise additional training for the targeted groups. During real-time auditing, organizations can immediately enact their incident management plans to help limit the potential damage.
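As a concrete example of how login audit records can surface an attack in progress, the Python sketch below counts failed logins per account in a simple CSV-formatted audit log and flags accounts that exceed a threshold. The file name, column names, and threshold are assumptions made for this illustration; real audit systems have their own formats and alerting rules.

    import csv
    from collections import Counter

    def flag_failed_logins(log_path, threshold=5):
        """Count failed logins per account in a CSV audit log and flag accounts at or
        above the threshold. Assumed columns: timestamp, user, event, result."""
        failures = Counter()
        with open(log_path, newline="") as handle:
            for row in csv.DictReader(handle):
                if row["event"] == "login" and row["result"] == "failure":
                    failures[row["user"]] += 1
        return {user: count for user, count in failures.items() if count >= threshold}

    if __name__ == "__main__":
        # "auth_audit.csv" is a placeholder file name for this sketch.
        for user, count in flag_failed_logins("auth_audit.csv").items():
            print(f"Review account {user}: {count} failed login attempts")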

19.6.3 Technology for Detection.

Organizations can implement technology to help limit social engineering attacks. Content-filtering software can limit e-mail and Web site traffic. E-mail monitoring tools can be used in bidirectional mode to inspect content in both directions. In addition, e-mail monitoring and content-filtering mechanisms can be used to scan for keywords or phrases that may trigger early warning signals. By scanning and blocking content, organizations may reduce the number of suspicious e-mails entering their networks and reduce the number of suspicious Web sites that employees visit.
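The keyword scanning mentioned above can be as simple as matching a watch list of phrases against inbound and outbound message text. The following Python sketch shows one minimal way to do so; the phrases are illustrative only, and a production content filter would combine such matching with many other signals.

    import re

    # Illustrative watch list; a real deployment would tune these phrases to the organization.
    WATCH_PHRASES = [
        r"verify your (password|account)",
        r"confirm your (user ?name|credentials)",
        r"wire transfer",
        r"urgent.{0,20}action required",
    ]
    PATTERNS = [re.compile(phrase, re.IGNORECASE) for phrase in WATCH_PHRASES]

    def matched_phrases(message_text):
        """Return the watch-list patterns found in a message body."""
        return [pattern.pattern for pattern in PATTERNS if pattern.search(message_text)]

    sample = "URGENT: action required. Please verify your account before Friday."
    hits = matched_phrases(sample)
    if hits:
        print("Quarantine for review; matched:", hits)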

Advances in technology research will provide additional protection from social engineering attacks, including the development of the Social Engineering Defense Architecture (SEDA). This architecture attempts to detect social engineering attacks over the phone by distinguishing a legitimate employee from a social engineer. The system, referred to as a text-independent voice signature authentication system, uses voice recognition technology, which would reduce the risk of a successful help desk attack. In addition to detecting an unauthorized caller, it can detect an insider masquerading as an employee with a higher security classification. The logging included in the architecture would aid a forensic examiner during an investigation.42

Attackers are becoming more sophisticated. Recent phishing attacks are targeting systems, network, and security professionals. In these types of attacks, a phishing e-mail is sent to an organization from a “customer” informing it of a phishing site attempting to steal customer information. Security personnel respond to the e-mail and investigate the site. The site installs malware that allows the attacker to remotely control the machine. This change in attack methodology requires security personnel to be even more suspicious of any seemingly innocuous e-mail telling the organization that there are phishing sites targeting that company.

19.7 RESPONSE.

Responding to social engineering and other low-tech attacks should fit into an organization's incident management process and response. As with all incident management plans, the responses should be well defined, communicated, and tested. It behooves an organization to plan for the inevitable, especially as network and physical attacks are increasingly using multiple vectors in order to be successful.

See Chapter 53 in this Handbook for additional information on monitoring and controlling systems.

19.8 DEFENSE AND MITIGATION.

The prevention of social engineering attacks should be multifaceted, repeatable, and part of an organization's defense-in-depth strategy. Since the very nature of the attack is to bypass or circumvent technical defenses by using people's good nature and willingness to trust other human beings, the steps in preventing such attacks should focus on distinct areas, such as policy, training and awareness, technology, and physical defenses.

Concurrently, these areas will help mitigate the threat of a successful social engineering attack. Each area should have a regular process for review and auditing. Well-written policies provide the baseline of acceptable behavior and the potential consequences if they are not followed. Training should be integrated into the organization's overall security awareness program, included in all employees' evaluations, and tied to bonus pay and compensation. Organizations must train their employees in acceptable behavior and provide them the tools to identify and report potential social engineering attacks. Physical defenses can potentially block an intruder from entering a locked building, but people's sense of camaraderie makes practices such as piggybacking or tailgating relatively easy. A recent study indicated that smokers returning to a building sometimes allow nonemployees into a secure location. Technological advances are also important but cannot be relied on as the only method of defense.

19.8.1 Training and Awareness.

Organizations need to provide tools and knowledge to employees to help identify potential attacks and react to suspected attacks. Providing awareness is a continual process and should not be only for new hires. Training and awareness, when possible, should include real-life examples so employees can relate to the issue and understand the level of trust that is implied through their access to the facility and the data used for their positions. Employees should understand the responsibilities the company bestows on them. Essentially, employees need to be retrained that it is acceptable to ask why certain information is being requested or to ask to see the badge of a person behind them.

A basic awareness program could include posters, e-mail communications, and laminated instruction cards (hard cards) containing emergency contact numbers or other information. More mature awareness programs can include videos or brown-bag informational lunches. Whenever possible, teaching employees to defend against social engineering attacks should be done through live presentations with real-world examples. Ideally, the presentation should be tailored to each audience. For example, if the audience is help desk personnel, potential social engineering attacks against the help desk should be described or demonstrated and discussed.

Employee training can also include instructions to keep file cabinets locked when not in use, to lock workstations, and to use cable locks, as well as guidance on how to create and remember good passwords.

Please see Chapter 49 in this Handbook for additional information regarding awareness programs.

19.8.2 Technology for Prevention.

Technology is emerging as a defense against certain types of social engineering attacks. The technology enables organizations to identify some social engineering attacks without relying on employees. This proactive identification enables an organization to mitigate the risk. Technology should comprise only one layer of defense and not be relied on as the sole defense.

Content-monitoring systems for e-mail and Web traffic can help identify phishing attacks. In addition, organizations can install timely security patches and use up-to-date antivirus and antispyware software to help mitigate the risk of viruses, Trojans, and worms. Newer versions of browsers and browser plug-ins allow users to evaluate the trustworthiness of Web sites.43
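To illustrate the kind of checks such a content monitor might perform, the following minimal Python sketch scores an e-mail body against a few phishing heuristics. The phrases, URL patterns, and threshold are illustrative assumptions, not the rule set of any particular product.

# Minimal sketch of heuristics an e-mail content monitor might apply to flag
# likely phishing messages. Keywords, patterns, and the score threshold are
# illustrative assumptions, not a production rule set.
import re

SUSPICIOUS_PHRASES = ["verify your account", "password will expire",
                      "click here immediately", "confirm your identity"]
URL_PATTERN = re.compile(r"https?://[^\s\"'>]+", re.IGNORECASE)
IP_URL_PATTERN = re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}", re.IGNORECASE)

def phishing_score(message_body):
    """Return a crude risk score for an e-mail body."""
    body = message_body.lower()
    score = sum(2 for phrase in SUSPICIOUS_PHRASES if phrase in body)
    urls = URL_PATTERN.findall(message_body)
    score += sum(3 for url in urls if IP_URL_PATTERN.match(url))   # raw-IP links
    score += sum(1 for url in urls if "@" in url)                  # user@host tricks
    return score

def is_suspicious(message_body, threshold=3):
    return phishing_score(message_body) >= threshold

if __name__ == "__main__":
    sample = ("Dear customer, please verify your account at "
              "http://192.0.2.10/login or your password will expire.")
    print(phishing_score(sample), is_suspicious(sample))   # prints: 7 True

In practice, such heuristics serve only as one signal among many; they complement, rather than replace, user training and vendor-supplied antiphishing features.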

Ideally, employees should be prevented from downloading and installing unapproved software. Where that is not possible, organizations can employ inventory systems or other methods to detect unauthorized programs on the network, as in the sketch below. Network access control systems, for example, can prevent malware-infected machines from joining the network.
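As a simple illustration of the inventory approach, the following Python sketch compares the software reported on each host against an approved list and flags everything else. The approved list, host names, and inventory data are hypothetical; a real deployment would pull this information from an asset-management agent.

# Minimal sketch of an inventory check that flags software not on an approved
# list. The approved list and the reported inventory are hypothetical.
APPROVED_SOFTWARE = {"office-suite", "corporate-vpn", "antivirus", "pdf-reader"}

def audit_host(hostname, installed_packages):
    """Return the set of packages on a host that are not approved."""
    unauthorized = set(installed_packages) - APPROVED_SOFTWARE
    for package in sorted(unauthorized):
        print(f"{hostname}: unapproved software detected: {package}")
    return unauthorized

if __name__ == "__main__":
    inventory = {
        "hr-laptop-01": ["office-suite", "antivirus", "p2p-client"],
        "dev-desktop-07": ["office-suite", "corporate-vpn", "keylogger-demo"],
    }
    for host, packages in inventory.items():
        audit_host(host, packages)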

Desktop and laptop configuration changes can reduce the risk of a successful attack; examples include disabling pop-up windows in browsers, disallowing the automatic installation of ActiveX controls, limiting the types of cookies that Web sites can place on local machines, using automatic password-protected screen savers, and using e-mail certificates for authentication.
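These configuration controls can also be verified automatically. The Python sketch below audits a machine's reported settings against a hardening baseline that reflects the controls just listed; the setting names and values are illustrative assumptions, since real checks would query group policy, registry keys, or an endpoint-management agent.

# Minimal sketch of a desktop-hardening audit: compare a machine's reported
# settings against a baseline. Setting names and values are illustrative.
HARDENING_BASELINE = {
    "popup_blocker_enabled": True,
    "activex_auto_install": False,
    "third_party_cookies_allowed": False,
    "screensaver_password_required": True,
    "screensaver_timeout_minutes": 10,
}

def audit_configuration(hostname, settings):
    """Report every setting that deviates from the hardening baseline."""
    findings = []
    for key, expected in HARDENING_BASELINE.items():
        actual = settings.get(key)
        if actual != expected:
            findings.append((key, expected, actual))
            print(f"{hostname}: {key} should be {expected!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    audit_configuration("sales-laptop-12", {
        "popup_blocker_enabled": True,
        "activex_auto_install": True,           # deviates from baseline
        "third_party_cookies_allowed": False,
        "screensaver_password_required": False  # deviates; timeout key missing
    })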

Similarly, organizations should review technology processes and verify that they are not inadvertently supplying information to potential social engineers. For example, metadata in documents should be removed before the documents are made accessible to outsiders, and regular Web searches should be conducted to ensure that former or current employees are not posting sensitive information on the Internet.
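As one illustration of a metadata review, the Python sketch below lists the populated core-property fields (author, last modified by, and so on) inside an Office Open XML document, which stores them in the docProps/core.xml entry of its zip container. The file name is hypothetical, and the script only reports metadata; actual scrubbing should rely on a vetted metadata-removal procedure.

# Minimal sketch of a metadata audit for Office Open XML documents
# (.docx, .xlsx, .pptx). Reports populated core-property fields only.
import zipfile
import xml.etree.ElementTree as ET

def report_core_metadata(path):
    """Print every populated core-property field found in an OOXML document."""
    with zipfile.ZipFile(path) as archive:
        try:
            xml_bytes = archive.read("docProps/core.xml")
        except KeyError:
            print(f"{path}: no core properties entry found")
            return {}
    root = ET.fromstring(xml_bytes)
    findings = {}
    for element in root:
        field = element.tag.split("}")[-1]          # strip XML namespace
        if element.text and element.text.strip():
            findings[field] = element.text.strip()
            print(f"{path}: {field} = {element.text.strip()}")
    return findings

if __name__ == "__main__":
    # Hypothetical file name; run against documents before publishing them.
    report_core_metadata("quarterly_report.docx")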

19.8.3 Physical Security.

Physical security mechanisms can reduce the risk of a successful social engineering attack. All employees should have identification cards and be required to display them at all times. Secured areas within an organization should be locked, have limited access, and be monitored for noncompliance. Door alarms that detect tailgating or piggybacking can be installed in sensitive areas. Cameras or other closed-circuit monitoring technology can deter potential intruders. Security personnel should watch all facility access points. All office doors, desks, file cabinets, and other storage units should have locks and remain locked when not being accessed. Dumpsters and recycling bins should also have locks to prevent the removal of documents meant for shredding or incineration.

Desktops, laptops, and other computer hardware should be physically locked in place. Users should be required to have strong passwords and, in the case of laptops and desktops, automatic screen savers that require a password to unlock. All magnetic media should be stored securely.

Most organizations have at least some mobile workers, and these employees should receive special training to prevent the loss of equipment or information. Training should include guidance such as:

  • Laptops should remain with the traveler and never be checked in luggage.
  • Laptops should be locked in a safe or secured to a fixed surface at all times.
  • Peripheral devices such as USB drives and handheld devices should be protected with strong passwords.
  • Conversations involving confidential information should be avoided in public.
  • Travelers should remain aware of their surroundings and take special precautions with electronic communications while traveling outside the United States and Europe.

Please see Chapter 22 of this Handbook for more information regarding physical security.

19.9 CONCLUSION.

Social engineering attacks are unlike technological computer attacks. The vector of the attack is human, the nature of the attack is to circumvent controls, and the success of the attack depends on people's willingness to trust others. Simply being polite and holding a door open for a person can have profound negative effects on an organization.

Social engineering attacks are not new; they have been used successfully for millennia. Social engineers use many different methods to execute social engineering and low-tech attacks. These methods can involve human contact, no human contact, or a combination of technology and social engineering tactics.

Psychologists and social psychologists have offered a number of reasons why social engineering attacks are successful. They theorize that human nature allows attackers to trick or con reasonable people into providing information. A social engineer's profile does not fit into any single model, and attacks are difficult to detect. Social engineering attacks have grown more prolific, effective, and dangerous to both organizations and individuals.

Even though social engineering attacks are difficult to defend against, there are technical, process, and people defenses that an organization can adopt to minimize the chance of security breaches. As with all information security issues, a defense-in-depth strategy can help mitigate the risks associated with social engineering.

Although the focus of information assurance seems to be on technical attacks, social engineering and low-tech attacks will remain relevant and dangerous threats, worthy of attentive mitigation strategies.

19.10 FURTHER READING

Burchfield, A. “Social Psychology, Cognitive Psychology, Security and the User,” SANS Institute, 2002.

Computer Security Institute. “CSI/FBI Computer Crime and Security Survey: 2006,” www.gocsi.com/forms/fbi/csi_fbLsurvey.jhtml (accessed January 1, 2007).

Dubin, J. “Security Awareness Training: Stay In, or Go Out?” SearchSecurity.com, November 1, 2006, http://searchsecurity.techtarget.com/tip/0,289483,sid14_gci1220543,00.html (accessed March 2, 2007).

Edmead, M. T. “Social Engineering Attacks: What We Can Learn from Kevin Mitnick,” SearchSecurity.com, November 18, 2002, http://searchsecurity.techtarget.com/tip/1,289483,sid14_gci865450,00.html (accessed March 1, 2007).

Gartner Research. “New Gartner Hype Cycle Highlights Five High Impact IT Security Risks,” Gartner Press Release, September 18, 2006, www.gartner.com/it/page.jsp?id=496247 (accessed March 2, 2007).

Gartner Research. “Unmasking Social-Engineering Attacks,” Social Engineering: Exposing the Danger Within 1, No. 1 (February 2002), www.gartner.com/gc/webletter/security/issue1/article1.html (accessed March 1, 2007).

Granger, S. “Social Engineering Fundamentals, Part II: Combat Strategies,” January 9, 2002, www.securityfocus.com/infocus/1533.

Kuper, A., and J. Kuper (eds.). The Social Science Encyclopedia. London: Routledge & Kegan Paul, 1985.

Lifrieri, S. “Computer Hacking without a Computer… The Art of Social Engineering,” Wall Street Technology Association Ticker (January-February 2007), www.wsta.org/publications/articles/0207_article02.html (accessed March 23, 2007).

Microsoft Corporation. “How to Protect Insiders from Social Engineering Threats,” Microsoft Technet, August 18, 2006, www.microsoft.com/technet/security/midsizebusiness/topics/complianceandpolicies/socialengineeringthreats.mspx#EIXAE (accessed March 2, 2007).

Nelson, S. D., and J. W. Simek. “Disgruntled Employees in Your Law Firm: The Enemy Within,” Sensei Enterprises, Inc., 2005, www.senseient.com/default.asp?page=publications/article31.htm (accessed March 22, 2007).

Stone, A. “Stopping the Con: Detecting Electronic Social Engineering Attacks,” www.cisa.umbc.edu/courses/cmsc/444/fall05/studentprojects/stone.ppt (accessed March 1, 2007).

Twitchell, D. P. “Augmenting Detection of Social Engineering Attacks Using Deception Detection Technology,” Proceedings of the International Conference on i-Warfare and Security 2006, pp. 209–210.

Vaas, L. “Microsoft: UAC Can Be Hijacked by Social Engineering,” eWeek.com, February 26, 2007, www.eweek.com/article2/0,1895,2098552,00.asp (accessed March 1, 2007).

Wilson, T. “Five Myths About Black Hats,” DarkReading.com, February 26, 2007, www.darkreading.com/document.asp?doc_id=118169&WTsvl=news2_2 (accessed March 1, 2007).

19.11 NOTES

1. I. Winkler, Spies Among Us: How to Stop the Spies, Terrorists, Hackers, and Criminals You Don't Even Know You Encounter Every Day (Hoboken, NJ: John Wiley & Sons, 2005), p. 111.

2. R. Gellately and N. Stoltzfus (eds.), Social Outsiders in Nazi Germany (Princeton, NJ: Princeton University Press, 2001), http://press.princeton.edu/chapters/s7083.html (accessed November 23, 2007).

3. F. W. Abagnale, Jr., and S. Redding, Catch Me If You Can (Mainstream Publishing, 2005).

4. Three good accounts of the Mitnick case are J. Goodell, The Cyberthief and the Samurai: The True Story of Kevin Mitnick—and the Man Who Hunted Him Down (New York: Dell, 1996); T. Shimomura and J. Markoff, Takedown: The Pursuit and Capture of Kevin Mitnick, America's Most Wanted Computer Outlaw—by the Man Who Did It (New York: Hyperion, 1996); and J. Littman, The Fugitive Game: Online with Kevin Mitnick (Boston: Little, Brown, 1997). Mitnick himself has written several books; perhaps the most appropriate reading in connection with this chapter is K. Mitnick and W. L. Simon, The Art of Deception: Controlling the Human Element of Security (Hoboken, NJ: John Wiley & Sons, 2003).

5. Internet Movie Database, Hackers, www.imdb.com/title/tt0113243 (accessed November 22, 2007).

6. AntiPhishing Working Group, “Phishing Activity Trends: Report for the Month of August, 2007,” AntiPhishing.org, November 19, 2007, www.antiphishing.org/reports/apwg_report_august_2007.pdf (accessed November 23, 2007).

7. Some attacks have taken place over weeks, months, or years. At the time of this writing, there is a recent case of stolen customer data at a popular U.S. retailer. It was revealed during the investigation that the attack actually occurred over many weeks, and possibly months.

8. For more information on this documentary, see www.hbo.com/docs/programs/hackingdemocracy/index.html.

9. B. Koerner, “Pretexting,” About: Identity Theft, http://idtheft.about.com/od/glossaryofterms/g/pretexting.htm (accessed March 24, 2007).

10. Audit My PC.com, “Social Engineering,” www.auditmypc.com/freescan/readingroom/social-engineering.asp (accessed April 3, 2007).

11. D. Arnell, “Payroll Giant Gives Scammer Personal Data of Hundreds of Thousands of Investors,” ABC News, June 26, 2006, http://abcnews.go.com/Technology/story?id=2160425&page=1 (accessed March 17, 2007).

12. The Free Dictionary, www.thefreedictionary.com/seduce (accessed March 24, 2007).

13. A. Gupta, “The Art of Social Engineering,” Addison-Wesley, August 23, 2002, www.awprofessional.com/articles/article.asp?p=28802&seqNum=3&rl=1 (accessed March 24, 2007).

14. S. Stasiukonis, “Social Engineering, the USB Way,” darkREADING, June 7, 2006, www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1 (accessed November 23, 2007).

15. J. Boroshok, “Social Engineering's New Tricks Present Bigger Dangers,” SearchSecurity.com, http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1196327,00.html (accessed June 29, 2006).

16. Microsoft Corporation, “How to Minimize Metadata in Office Documents,” January 24, 2007, http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q223396 (accessed March 20, 2007).

17. L. A. Gordon, M. P. Loeb, W. Lucyshyn, and R. Richardson, “2006 CSI/FBI Computer Crime and Security Survey,” http://i.cmpnet.com/gocsi/db_area/pdfs/fbi/FBI2006.pdf (accessed March 1, 2007).

18. “The Nigerian Scam Defined,” Nigeria—The 419 Coalition Website, http://home.rica.net/alphae/419coal/ (accessed March 30, 2007).

19. “Phishing,” Definitions, Webopedia, www.webopedia.com/TERM/P/phishing.html (accessed March 30, 2007).

20. AntiPhishing Working Group Report, AntiPhishing.org, January 7, 2007, www.antiphishing.org/reports/apwg_report_january_2007.pdf (accessed March 23, 2007).

21. “More and More People Falling for Phishing Tactics,” BizAsia.com, October 15, 2007, www.bizasia.com/internet_itVf9hc4/more_more_people_falling.htm (accessed March 30, 2007).

22. “Spear Phishing: Highly Targeted Scams,” Microsoft Protect Yourself, September 18, 2006, www.microsoft.com/protect/yourself/phishing/spear.mspx (accessed November 23, 2007).

23. R. Naraine, “Hushmail DNS Attack Blamed on Network Solutions,” eWeek.com, April 29, 2005, www.eweek.com/article2/0,1759,1791152,00.asp (accessed March 31, 2007).

24. “Spim,” definition, searchexchange.com http://searchexchange.techtarget.com/sDefinition/0,290660,sid43_gci952820,00.html (accessed March 30, 2007).

25. W. Sturgeon, “U.S. Makes First Arrest for Spim,” CNet.com, February 21, 2005, http://news.com.com/U.S.+makes+first+arrest+for+spim/2100-7355_35584574.html (accessed March 30, 2007).

26. A. Plewes, “The Biggest VoIP Security Threats—and How to Stop Them,” silicon.com (March 2007), www.silicon.com/research/specialreports/voipsecurity/0,3800013656,39166479,00.htm (accessed November 22, 2007).

27. B. Koerner, “Vishing,” About: Vishing, http://idtheft.about.com/od/glossaryofterms/g/vishing.htm (accessed March 22, 2007).

28. S. Stasiukonis, “Social Engineering, the USB Way,” Dark Reading Room, June 7, 2006, www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1 (accessed March 1, 2007).

29. S. Granger, “Social Engineering Fundamentals, Part I: Hacker Tactics,” Security Focus.com, December 18, 2001, www.securityfocus.com/infocus/1527 (accessed March 30, 2007).

30. Microsoft Corporation, “How to Protect Insiders from Social Engineering Threats,” Microsoft Technet, August 18, 2006, www.microsoft.com/technet/security/midsizebusiness/topics/complianceandpolicies/socialengineeringthreats.mspx#EIXAE (accessed March 30, 2007).

31. M. Singer, “‘My Party’ Worm Is no Party,” siliconvalley.internet.com, January 28, 2002, http://siliconvalley.internet.com/news/article.php/962741 (accessed March 30, 2007).

32. R. J. Heuer, Jr., “What Are Cognitive Biases?” Psychology of Intelligence Analysis, Part Three—Cognitive Biases (1999), www.cia.gov/csi/books/19104/art12.html (accessed March 7, 2007).

33. M. Mathers, E. Shafir, and M. K. Johnson, “Misremembrance of Options Past: Source Monitoring and Choice,” Psychological Science 11, No. 2 (March 2000), http://people.ucsc.edu/~mather/pdffiles/Matheretal2000.pdf (accessed March 28, 2007).

34. J. St. B. T. Evans, J. L. Barston, and P. Pollard, “On the Conflict between Logic and Belief in Syllogistic Reasoning,” Memory and Cognition 11 (1983): 295–306.

35. R. B. Zajonc, “Attitudinal Effects of Mere Exposure,” Journal of Personality and Social Psychology 9, No. 2 (1968): 1–27.

36. A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185 (1974): 1124–1130.

37. T. Wilson, “Eight Faces of a Hacker,” Dark Reading Room, March 29, 2007, www.darkreading.com/document.asp?docid=120800 (accessed March 30, 2007).

38. M. Allen, “Social Engineering, A Means to Violate a Computer System,” SANS Institute Information Security Reading Room, June 2006, www.sans.org/reading_room/whitepapers/engineering/529.php?portal=3595f417b55c62ba6243b24f66416d4b (accessed March 17, 2007).

39. R. Gulati, “The Threat of Social Engineering and Your Defense Against It,” SANS Institute Information Security Reading Room, 2003, www.sans.org/reading_room/whitepapers/engineering/ (accessed March 19, 2007).

40. K. J. Higgins, “Social Engineering Gets Smarter,” Dark Reading Room, June 16, 2006, www.darkreading.com/document.asp?docid=97382 (accessed March 16, 2007).

41. R. Groom, “Top 5 Social Engineering Techniques,” About: Business Security, http://bizsecurity.about.com/od/physicalsecurity/a/topsocialengine.htm (accessed March 17, 2007).

42. M. Hoeschele and M. Rogers, “Detecting Social Engineering,” Advances in Digital Forensics, IFIP International Conference on Digital Forensics, February 2005, Chapter 6, pp. 67–71.

43. M. J. Edwards, “IE 7.0 and Firefox 2.0 Both Have New Antiphishing Technologies,” WindowsITPro, October 26, 2006, www.windowsitpro.com/WindowsSecurity/Article/ArticleID/94026/94026.html (accessed November 23, 2007).
