Chapter 16. Casting Spells: PC Security Theater

Michael Wood

Fernando Francisco

Storm clouds gather and there is unrest in the land; thieves wander the highway with impunity, monsters hide in every tree along the road, and wizards cast spells while handing travelers amulets for their protection. Believing in the power of the talismans, our hero strides forth, wrapped in his magical invincibility, confident he will be the master of any threat he encounters.

Our hero, however, has been deceived. The pratings of amulet peddlers were repeated endlessly by the untutored peasants around him, but he will soon discover that incantations and alchemy are poor substitutes for a real suit of armor, a sturdy sword by his side, and a good plan in his head.

Although this might seem like the start of a fantasy novel, it parallels the state of today’s computer security.

The problem is not in the quality of the solutions we use to protect our computers; truly, many of today’s security offerings are nothing short of wondrous, developed by dedicated, experienced, and uncommonly talented people. Yet when we look at the overall state of security, the achievements resemble misdirection and magic more than a responsible and effective strategy.

What we need is a new security strategy that makes better use of our current tools and guides the development of new ones. The alchemists and apothecaries of old made many valuable discoveries in chemistry and medicine, but their insights proved effective only when modern views of science were developed. So too with anti-virus and anti-spyware products, firewalls, sandboxes, etc. Unless we adopt a new approach to security, these common tools will prove unsustainable over the near future and irrelevant (for the most part) in the long term.

Our research and development has opened up new possibilities for ensuring security through management of virtual and real (persistent) systems, through the use of artificial intelligence (AI) to detect anomalous behavior, and through accelerated anti-virus development. This chapter looks at the problems that drove us to this solution, and our current work on a product named Returnil.

Growing Attacks, Defenses in Retreat

To understand where we are now, we have to take a trip back to where the mass security market got off track. We don’t have to go as far back as the creation of the first true viruses in the 1970s or even the introduction of new malware designed for the increasingly popular IBM-compatible PCs in the early 1980s, but just to the period during and shortly after the Internet bubble of the late 1990s. A huge amount of expertise went into developing defenses, but the battle still goes to the spammers and the spyware distributors.

On the Conveyor Belt of the Internet

The first link in the chain was the end of distinctions between anti-virus vendors. Up until the mid-1990s, malware spread mostly through the exchange of infected files between people who worked together or knew each other. Therefore, it either did not travel very far or traveled very slowly between geographical locations. This led to the creation of many small, region-specific solution providers.

With the increasing adoption of the Internet and the harvesting of email addresses by malware, it became obvious that anti-virus vendors would need to expand their world view and deal with what was heretofore “someone else’s problem.” While many of the larger providers resisted this change, others who jumped into the breach reaped the rewards of being able to advertise a larger detection capability.

This in turn led to a period of consolidation, where larger companies began merging with and acquiring competitors, or OEM-licensing the competitors' technology and research. In practical terms, this was the most straightforward way to quickly obtain large sample databases without spending an inordinate amount of time gathering samples the traditional way and reproducing research the competitor had already done.

The industry was thus transformed by “bigger is more secure” marketing messages, bolstered by the media attention malicious programs began to generate. Some readers will remember the anxiety generated by attacks such as Pakistani Flu (1986), Jerusalem (1987), Morris (1988), Michelangelo (1992), Melissa (1999), VBS/Loveletter—“I Love you” (2000), Code Red (2001), Nimda (2001), Klez (2001), Blaster (2003), Slammer (2003), Sobig (2003), Sober (2003), MyDoom (2004), and so on.[111] Each newly discovered virus would generate ever-increasing gloom in the media, which the anti-virus industry leveraged to their advantage to generate fear and drive more customers to their products.

Another key contributor to cyber fears, interestingly enough, had nothing whatsoever to do with the Internet but has colored everyone’s mindset: September 11, 2001. Whether or not you were personally affected by the tragedy, there can be no doubt that it heightened fears and, as a result, put a greater emphasis on all types of security.

Although consolidation in the consumer security field continues to this day, its imperative has lessened because vendors no longer invest the necessary time and capital to develop unique databases. Now they compete on specialization, claiming to focus on a particular type of attack (banking trojans, gaming trojans, rootkits, spyware, key loggers, spam/phishing, hostile websites, etc.). These distinctions are frequently irrelevant, but provide compelling advertising copy.

Even with the improvements in features, new detection techniques, and added complexity of anti-viral services, you may be shocked to learn that the overall effectiveness of anti-virus solutions has actually dropped over time. Recent studies of random PC populations have reported that 1 in 6 were actively infected with malware, despite running mainstream anti-virus products. This alone demonstrates that existing security technologies, whilst useful, are inadequate.

The same studies show that this problem is becoming worse. The products’ success in detecting the types of attacks for which they’re designed dropped from an average of 50% in 2006 to closer to 30% in 2007.

To be fair, most if not all of these studies are flawed because they use limited sample sets that do not represent the true population of all possible malware samples. One shouldn’t take precise numbers seriously, but the results still indicate an alarming trend. Regardless of the statistical realities, the studies don’t instill a great deal of confidence in the current approaches, do they? Later we’ll look at reasons for the failure of anti-virus and anti-spyware products.

Rewards for Misbehavior

The next link in our chain came directly out of the Internet bubble, when investors threw bucket loads of money at anyone who could pitch an idea that had even a minimal connection to the Internet, not too far removed from strangers who walk up to you on the street and whisper, “Hey bud, I have this incredible idea for making you money....”

One of the survivors of this devastation was the advertising industry, which hit the mother lode when it enthusiastically embraced the Internet. Its Holy Grail was targeted advertising, and it recognized opportunity in the unprecedented ability of digital networks to record and sift through data. Advertisers therefore rushed to mine user accounts and track surfing habits to build statistics on behavior, preferences, location, etc. They then sold this information to others to design targeted advertising based on real-world data.

Their techniques have been spectacularly successful, generating mountains of profits that fuel the drive for more ways to make revenues from the collection of information. The first modest targeted ads were pop-ups in freeware applications promoting payment options and offering web page ads. Then the bundled advertising components became more sophisticated, tracking users' surfing habits and any other relevant information they could scrape from the host computer.

It was only a matter of time before the malware writers discovered that there was money to be made from these advertisers who, blinded by profits, wanted their content on the user’s computer no matter how it got there. This incentive provoked malware writers to get serious and apply their knowledge to exploits that fraudulently install advertising components. This enabled them to reap affiliate rewards from legitimate advertisers who were only too willing to throw their money at anything that improved installation numbers. And this new source of revenue, on top of other criminal activities, helped finance the changes that came after, leading to the current and quite effective industrial malware distribution techniques being used today. (See Chapter 4, The Underground Economy of Security Breaches, by Chenxi Wang, and Chapter 6, Securing Online Advertising: Rustlers and Sheriffs in the New Wild West, by Ben Edelman.)

A Mob Response

The convergence of mainstream advertising and malicious exploits was not readily anticipated and took a while to discover. Separate groups of independent researchers and network administrators, noticing the exploits, formed specialized newsgroups and later online forums to discuss methods for blocking and removing them.

Noting that the privacy and security issues raised by these programs resembled the activities that the anti-virus companies claimed to combat, the researchers and administrators pressured them to take action against fraudulently targeted ads. But the established companies balked at this request, regarding the intrusive ad campaigns as merely a rogue commercial activity rather than malicious attacks.

Both angered and spurred into action by the anti-virus industry’s response, activists and enthusiasts in both the privacy and security communities started gathering the scattered information contained in their technical forums with the goal of helping their own members stem the attacks. Information was initially disorganized and of little practical value. Instructions for actually detecting and removing the malware were rare and limited to steps performed manually. Research reports with less complete instructions were slightly more common. The forums also unfortunately contained incorrect and even dangerous advice.

The first step toward a practical implementation of anti-spyware measures was provided by Steve Gibson of GRC.com. Not only did he, along with many others, accomplish much to spread the word about spyware to the public and the media, he also developed the first removal tool, called OptOut, that addressed some of the advertising attacks known at the time. Designed primarily as a proof of concept, it had limited effectiveness and served primarily to spur others to take the next step and develop a true commercial privacy solution.

Shortly following the release of OptOut, a developer in Germany who had been following these discussions took the initiative and developed a more robust utility that could be updated regularly like commercial anti-virus programs, rather than being rewritten to address every newly discovered advertising program or threat. He recognized, as did the rest of the community, that the premise underlying the anti-virus approach—an urgent and ongoing search for new attacks and frequent releases of fixes to address them—would ultimately fail. But due to the technical restrictions of the time, researchers accepted this model as the only one they could turn into a practical utility to meet an urgent and growing need.

So a new industry was born to address the threat presented by spyware, ironically benefiting from the same messages of fear (both real and perceived) that the privacy community and the anti-virus services used to force their issues into the public consciousness. The new products garnered more and more media attention, which helped fuel the growth of anti-spyware software from a single company to an integral component of today’s anti-virus and even firewall solutions.

The movement gave a new lease on life to the anti-virus companies and even opened the door to a new round of consolidation. Now there was a new pool of unique databases of malware signatures that anti-virus companies wanted to incorporate into their products. Given the obvious similarities between viruses and spyware, it was only a matter of time before they came together.

What was wrong with that? The industry's very success dulled any imperative to develop new paradigms and address the glaring flaws in its products. As long as companies could claim a competitive advantage, expand offerings in appealing ways, and increase revenues, the incentive to engage in developmental research and innovation lagged. Along with the tight competitive environment they operated in, this put the brakes on any real change in strategy.

To summarize our history, despite earnest and often brilliant contributions by increasing numbers of security experts, the anti-virus and anti-spyware industry of the past 15 years has been characterized by a brute-force collection and analysis of existing malware. In a “bigger is better” approach, the products slurp up the contributions of competing products—as well as the innovations of researchers who are critical of the original products—and multiply their complexity to deal with creative new threats. We can now examine why we have reached the limits of this approach, and why it is already insufficient.

The Illusion Revealed

In this section, we’ll review each type of security solution to see how it protects your computer and where it fails.

Strict Scrutiny: Traditional and Updated Anti-Virus Scanning

The first solution is the traditional, signature-based anti-virus/anti-spyware filter. The program compares files against a regularly updated “blacklist” of bad content. If the file is on the list, the filter will block or remove it based on instructions in the blacklist. Anything not on this list is presumed to be good.
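At its core, such a filter is a lookup. The following Python sketch models it with whole-file hashes; real scanners match byte patterns and code structure inside files, and both the blacklist format and the function name here are illustrative assumptions, not any vendor's implementation:

```python
import hashlib

def check_file(path, blacklist):
    """Return the blacklist action for a file, or 'allow' if unlisted.

    `blacklist` maps a SHA-256 digest to the action prescribed by the
    signature update ("block", "remove", ...). A whole-file hash lookup
    is the simplest honest model of the signature approach.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # Anything not on the list is presumed good -- the core weakness:
    # unknown (zero-day) malware passes unchallenged.
    return blacklist.get(digest, "allow")
```

Note what the final line concedes: the filter can only ever say "bad" or "don't know," and it treats "don't know" as safe.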

The evolution of the blacklist method

The blacklist approach really became obsolete as soon as people connected to the Internet, allowing malware to cross geographic borders and spread in the blink of an eye.

The anti-virus providers undoubtedly realized quite early that the signature approach would at best be a short-term solution, and would eventually fail to provide reliable frontline protection. To provide protection using this method, the manufacturer first has to have a sample of the malware in hand. Then, it has to generate an update to its signatures that will properly identify and remove the targeted content.

The problem is that the researcher can isolate and view the sample only after the malware has been released, sometimes months or even years previously. Rustock.C, one of the most dangerous Windows-based rootkits found to date, is a good example of this, having been in the wild for over a year before it was discovered, analyzed, and added to detection signatures. Even daily updates would not give manufacturers enough time to find, analyze, and distribute defenses against new malware, so users are vulnerable to yet unknown attacks (zero-day exploits).

From this description, it would be legitimate to assume that a researcher is seeing an old version of the malware and that it has had time to make the rounds with other malware developers and “users.” Each malicious attack quickly changes into something completely new or incorporates some of its capabilities into something else.

Furthermore, although anti-virus companies maintain research teams that can number in the hundreds, they are facing an ever-growing backlog of malware identification and signature production. They receive tens of thousands of new suspect items a day from their “honeypot” networks and other sources. In short, the costs of finding malware are rising while the products’ detection rate across the total malware population is falling.

It has long been understood that signature-based scanning systems, whilst effective in the past, are doomed as a single defense by the sheer volume and rate of change demonstrated by malware, especially with the recent introduction of industrialized, organized malware development that has increased its volume upwards of 100% over a single year (2007).

The next solution is a variant of the first, but with the addition of heuristics. This approach broadens the effectiveness of the defensive programs, but has an unfortunate tendency to increase false positives: flagging good files as bad. Rather than make the user more secure, it serves to either desensitize him (crying “Wolf!” one too many times) or heighten his fear. The first leaves the system open to exploit because alerts are ignored. The second leads the user to seek more talismans of protection, as no single anti-take-your-pick scanner can detect 100% of all malware, regardless of what the advertising says.
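The false-positive tradeoff can be seen in a toy scoring model. The features, weights, and threshold below are illustrative assumptions only; real engines combine hundreds of structural and behavioral indicators:

```python
# Hypothetical feature weights. Note that several indicators of malware
# (packing, missing signatures) are also common in legitimate software.
WEIGHTS = {
    "packed_executable": 0.4,     # also typical of legitimate installers
    "writes_autostart_key": 0.3,
    "no_digital_signature": 0.2,  # true of most small freeware
    "hooks_keyboard": 0.5,
}

def heuristic_verdict(features, threshold=0.6):
    """Flag a file as suspicious when its summed feature weights cross
    the threshold. Lowering the threshold catches more malware but also
    flags more legitimate software -- the false-positive tradeoff."""
    score = sum(WEIGHTS.get(f, 0.0) for f in features)
    return "suspicious" if score >= threshold else "clean"
```

An unsigned, packed freeware installer already scores 0.6 in this model and gets flagged, which is exactly the kind of alert that teaches users to click through warnings.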

The whitelist alternative

Another approach is to move away from reliance on blacklisting and focus instead on what is actually good, an approach called whitelisting. In principle, this does an end run around the anti-virus model by disallowing anything that is new or unknown.

One way to do this is through the use of anti-executables. As the name implies, these defenses block programs from running or being activated. This strategy lies behind the recurring dialog boxes that annoy Windows Vista users by forcing them to click a button right after requesting some routine administrative activity.

Conventional wisdom would tell you that if a malicious file cannot activate, it cannot infect your system. But proper functioning depends entirely on maintaining a clean whitelist. If the user makes a mistake and allows a malicious file to be added to the list, the game is up and the protection fails.

Even more disturbing is that anti-executables cannot distinguish between a truly good program and a good program that has been hijacked to run malicious content. Once this happens, the game is again up and the user is left unprotected.
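The logic of an anti-executable gate, and its fatal dependence on user judgment, fits in a few lines. This is a simplified model, not any particular product; the function and parameter names are our own:

```python
def execution_gate(program_hash, whitelist, ask_user):
    """Anti-executable gate: known-good programs run; anything unknown
    is referred to the user. The weakness is the referral itself --
    one wrong 'allow' and the whitelist is poisoned from then on."""
    if program_hash in whitelist:
        return True
    if ask_user(program_hash):       # the dependence on user judgment
        whitelist.add(program_hash)  # a mistake here becomes permanent
        return True
    return False
```

And, as the hijacking problem shows, even a correct whitelist entry is no guarantee: the gate checks what the program is, not what it has been made to do.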

Host-based Intrusion Prevention Systems

A variation of this approach is to use Host-based Intrusion Prevention Systems (HIPS). Conceptually, HIPS is a cross between a type of signature-free anti-virus program, a firewall, behavioral control, and sandboxing (described later). Although large organizations usually run powerful monitoring software on dedicated devices, scrutinizing every host on their networks, a HIPS application actually resides on your computer (the host).

What the program looks for is malicious or suspicious behavior based on a set of rules it has learned or been programmed with. Unfortunately, good and bad programs display similar behavior, so the HIPS has to determine the actual intent behind those behaviors. This is its fatal flaw, because the program relies on the user to determine what is good or bad. As soon as the user makes the wrong decision, as with anti-executables, her protection is breached. Meanwhile, these programs are extremely resource hungry and can adversely affect both system performance and user experience.

Anti-virus and anti-spyware scanning is a desperate and shockingly intrusive approach, even when it works well. Users routinely notice and complain about the degradation in performance it causes, particularly at application startup. The degree of integration with the operating system required by these products leads to its own fragilities, and bugs in the software even introduce new attack vectors.

Security is partly a function of simplicity. The more complex a strategy, the more likely it is to fail. We believe the end user does not need multiple scanners, advanced firewalls, process monitors, filters, and blockers—the common elements of modern anti-virus systems—to be secure.

Applying artificial intelligence

At our company, Returnil, we're trying to streamline the identification of malware through behavioral analytics, artificial intelligence, and machine learning. Returnil runs in conjunction with a virtualization environment of its own innovative design, which we'll describe in a later section.

The analysis is performed by our Advanced Monitor System (AMS), which blocks any process demonstrating behavior that we’ve previously defined as malicious (a form of blacklisting, in other words) and reports processes with unrecognized behavior to the central AI systems. Thus, anything new or with substantially different behavior from before is processed by the AI systems.

Our AI/Knowledge Base puts the processes submitted to it into one of three categories: good, bad, or indeterminate. Indeterminate items are continuously monitored by the AI system learning engines and are either subsequently categorized or, in a minority of cases, passed on to a human researcher to make a definitive determination. False positive and false negative results are continuously directed into feedback loops to improve the AI/Learning engine.
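As a rough illustration only, the triage and feedback loop look like the sketch below. The thresholds, scoring, and update rule are simplifying assumptions for this book, not our production AMS/AI code:

```python
def triage(score, bad_at=0.8, good_at=0.2):
    """Map a behavioral risk score to one of the three categories."""
    if score >= bad_at:
        return "bad"
    if score <= good_at:
        return "good"
    return "indeterminate"   # keep monitoring, or escalate to a human

def feedback(weights, features, error, rate=0.1):
    """Nudge feature weights after a confirmed false positive or false
    negative, so the same evidence scores differently next time."""
    for f in features:
        weights[f] = weights.get(f, 0.0) + rate * error
    return weights
```

The point of the middle band is that the system is allowed to say "I don't know yet" and keep watching, rather than forcing a premature verdict onto the user.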

Sandboxing and Virtualization: The New Silver Bullets

As users come to realize the intrusive nature of anti-virus software, they look at current technologies for an escape. They say, “If I can’t kill malware with my anti-virus product, can I at least isolate it and keep it from harming my computer?”

Yes and no. Sandboxing, as the name implies, allows the user to put an unknown or potentially harmful application inside of a “bubble” that keeps it from interacting with files and other sensitive parts of your system while letting security programs watch it closely for malicious behavior. Think of this as putting the application on an island to which you control all access.

This is a secure idea in and of itself, but it fails too often in practice. Both user mistakes and specific design elements in the malware can cause “leakage,” exploiting the limited connections between the isolated “island” and the “mainland” of your computer. Sandboxing can also have a significant effect, just like the other solutions we’ve described so far, on the performance of both the program within the “bubble” and the overall operating system. This effect, however, is not as extreme as with traditional virtualization.

Virtual machines, host and guest

Some have suggested using traditional, full-blown virtualization as a security tool. The concept is simple on its face, but involves a great deal of effort and cost to work properly. In the end, it fails to provide any real protection. Most have heard or know of programs such as VMware that run a simulation of a real computer as a "guest" inside their actual "host" computer. Indeed, malware researchers frequently use these types of programs to test malicious content on a variety of operating systems, sparing themselves the need to service a real computer and reset it for each testing session.

This does not, however, mean that the real computer is protected or secure. It is relatively simple for the user to mistakenly drop a virus-infected application onto his real (host) computer, accidentally allow a virus to migrate from the guest operating system to the host system using shared folders, or become infected from the Internet if the guest operating system is allowed a connection.

Virtualization also takes up a significant amount of resources, as you will need to divide things like your available RAM and hard disk space. This can adversely affect performance of the guest system, and performance of the host system if you try running other programs natively on it. In addition, you will still need to use the same security applications you use on your real computer, which will consume even more resources.

Security-specific virtualization

Instead of cobbling a secure environment on top of virtualization technologies meant for other purposes, we have developed a sleek virtualization solution at Returnil that directly addresses the need to isolate critical system files from everyday user behavior. When the user’s system boots, we lock down the “real-world” system from modification and create a virtual system to present to the user.

To the user, the system looks like any other Windows box, and, in theory, the user doesn’t even have to know the system’s running in a sandbox. Incidentally, a side effect of making the user work in a virtual sandbox is that the system as a whole suffers less from fragmentation or wear and tear on the disk.

By default, the virtual system is a fresh copy that bears no changes from previous sessions. But the user can choose to save system folders in the virtual system (e.g., My Computer, Favorites, System logfiles) so that she can access data she really wants to save upon reboot. Furthermore, there is a mechanism for saving a file to the real environment; the security of that procedure is described in the following section.

If the AMS monitoring software described earlier (in the section Applying artificial intelligence) detects bad behavior, it immediately and automatically reboots the virtual system, usually a fast operation that does not greatly inconvenience the user. Even if malicious content is in one of the folders the user wants to save between reboots, changes to the registry that would make this content dangerous are stored only in the virtual system’s registry and therefore wiped out during the reboot. Users are not asked to make decisions they cannot reasonably make about the safety of the programs they run.
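The session model can be illustrated with a toy overlay. The real product virtualizes disk blocks and the registry, not a Python dictionary, so treat this purely as a picture of the read/write/discard semantics:

```python
class VirtualOverlay:
    """Toy model of session-discarding virtualization: reads fall
    through to the protected base system, writes land in a per-session
    overlay, and a reboot throws the overlay away."""

    def __init__(self, base):
        self.base = base      # locked-down "real" system, never written
        self.overlay = {}     # this session's changes
        self.saved = {}       # items the user chose to keep

    def read(self, path):
        return self.overlay.get(path, self.saved.get(path, self.base.get(path)))

    def write(self, path, data):
        self.overlay[path] = data   # the base system is never touched

    def save(self, path):
        self.saved[path] = self.overlay[path]  # explicit keep-across-reboot

    def reboot(self):
        self.overlay = {}   # malware's changes vanish with the session
```

Whatever an infection writes during a session, including registry changes, lives only in the overlay; the reboot restores a clean system without any scanning or disinfection step.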

Figure 16-1 illustrates the interaction of our monitoring system and virtualization sandbox.

Figure 16-1. Returnil procedure for trapping malware

Security of saved files in Returnil

As mentioned, our virtual environment allows users to save files to the real environment. When they do so, we apply a comprehensive scan that uses both traditional signature databases and behavioral analysis to detect malware.

Most anti-virus and anti-spyware programs are continuously scanning huge swaths of the system while the user is trying to get work done: all running processes, the PC registry, and key areas of the disk. At regular intervals, the software spends an hour or more scanning all files. This is a huge and time-consuming task, involving hundreds of thousands of objects. Returnil bypasses all this overhead by using relatively lightweight behavior monitoring on a dynamic basis, and saving exhaustive scans for the occasional file saved to the real environment. Why scan 100,000 objects when you need to scan only one?
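The resulting save-time gate can be sketched as follows. The two check functions stand in for the signature and behavioral engines, which are not shown; the names are ours, chosen for illustration:

```python
def save_to_real_system(path, signature_check, behavior_check):
    """Gate applied only when a file leaves the virtual session for the
    real system: one exhaustive scan of one object, instead of a
    continuous scan of every object on the disk."""
    if signature_check(path) != "allow":
        return False   # known-bad: refuse the save
    if behavior_check(path) == "suspicious":
        return False   # unknown but misbehaving: refuse
    return True        # committed to the persistent system
```

Because the expensive checks run only at this single choke point, the cost of exhaustive scanning is paid per saved file rather than per running system.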

Better Practices for Desktop Security

What can computer users do to improve on the current, unsatisfactory security situation?

First, you will need to embrace the fact that there is no silver bullet or all-powerful talisman that will make your computer invulnerable to malware infection. We suggest dual principles to guide you:

  • Security is about assessing and reducing risk, not making intrusions impossible.

  • Simpler solutions tend to be better ones.

A related and well-known principle—defense in depth—suggests using a mix of solutions.

This does not mean you have to resort to the “more is better” approach of resource-hungry, intrusive, and annoying HIPS products or sandboxing. Although these approaches have merit in expert hands for specific situations, we don’t find them appropriate for average users who cannot make the choices they require and don’t really need such über-paranoid configurations.

The real magic comes from making the right decisions and selecting your lineup so that each link in the chain combines to provide a solid whole with minimal overlap. In other words, use an intelligent layered approach focused on the threats you are most likely to encounter. We believe the combination of artificial intelligence and virtualization used at Returnil is a good start. You should also make sure you have reliable backups and emergency recovery procedures.

Though there are a dizzying number of programs out there to choose from, each with its own unique pros and cons, it will not be as difficult as you might think to narrow your choices if you take a logical approach.

The first step is to identify your goals. Then, you can select programs that best fulfill the goals in your environment. You generally need to cover three major categories:

Prevention

This can be accomplished through a firewall, the separation of programs from data storage on different partitions, attention to regular security updates of your software, email filters, Internet content advisors, parental controls, user account control (i.e., granting users limited permissions), executable and driver control, policy restrictions, and safe practices (often called safe surfing or safe HEX). Although “security through obscurity” is disdained among researchers, it can’t hurt to disguise sensitive files and data through unconventional naming conventions.

Detection, analysis, and removal

This is still the place for a solid and consistent anti-malware solution. Stick to a single leading product; the major offerings can all now address viruses, trojans, worms, rootkits, spyware, and what are euphemistically called "potentially unwanted or harmful applications" (mostly adware).

Cure

Virtualization, data backup or replication, and emergency recovery provide means for recovering from infections (along with routine user errors, such as deleting key files). The latter is not always well known; it consists of imaging solutions that take a “picture” of your hard drive at certain intervals and can restore that image at a later time, removing any and all malware by simply overwriting the disk.

Conclusion

The computer security industry and the media have combined to create an environment where fear, misleading information, flawed comparisons, and aggressive advertising have cast a hypnotic spell over the public. The industry claims to provide the protection we need, but their promises are not borne out by reality.

Visit any security or privacy discussion forum and you will see that the majority of the “discussions” involve assisting members to identify and remove malicious or advertising content after they thought they were protected. Over time, in fact, such topics have become the overriding mission of these communities. Rather than solving the malware issue, the industry has allowed it to grow to the point where it supplants the previous constructive discourse about general privacy and security issues.

In short, approach PC security the same way you would address personal security. Be aware of your surroundings, make sure your tools are working properly, be cautious about where you travel, and prepare for emergencies. Simplify your lineup and configure your security strategy so that each piece of the puzzle is providing the highest level of protection for its core competence with a minimum of overlap with any other piece.

For example, if you were traveling on vacation, you would not rent a hotel room and then bring a tent with you just in case the roof leaked. Instead, you would get another room or go to a different hotel. Similarly, why would you need to have a firewall with a spyware detector if you already have an anti-malware program running resident in the background? If you judge a solution to be inadequate, you simply change to one that better serves your needs.

Security is not mysterious and does not need to be complicated to be effective. Take a deep breath and use your head. Like our hero at the beginning of this chapter, all you need is a good suit of armor (a simple software firewall, and virtualization and emergency imaging), a sturdy sword at your side (a highly rated anti-virus scanner), and a simple but effective plan for dealing with the threats you are most likely to encounter during your travels.
