Chapter 13. Keeping Software Up to Date

A wandering minstrel I—

A thing of shreds and patches,

Of ballads, songs and snatches,

And dreamy lullaby!

Nanki-Poo in The Mikado
—W. S. GILBERT AND ARTHUR SULLIVAN

13.1 Holes and Patches

Of all of the tools in the technical workshop, few are as loathed as the security patch. On one hand, they’re a nuisance that tends to introduce entropy into the original code base. On the other hand, patches are utterly necessary. Software is always imperfect; when imperfections manifest themselves as holes, there are few choices but to spackle them, sand them, and paint them. The alternative—the sysadmin equivalent of moving some furniture in front of the hole, if I may continue my metaphor—is not just unattractive, it reduces architectural flexibility and leaves you vulnerable to attackers who are closer to the wall than you are.

Let’s skip the flowery imagery. Any time you have a security bug—and if your system is at all complex, you do—you can either repair it or mitigate it. Mitigations can include putting another access mechanism—a firewall or equivalent—between the hole and would-be attackers; alternatively, you can assume that a penetration will occur and prepare for detection and recovery in the usual way, by taking extra backups, creating specialized intrusion detection scripts, and so on. Finally, under extreme circumstances, you can shut down the vulnerable systems until one of the other alternatives becomes feasible.
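To make the second mitigation concrete, here is a minimal sketch, in Python, of the sort of specialized intrusion detection script mentioned above: scan a web server’s access log for requests matching a signature associated with the unpatched hole. The log path, the signature, and the function name are invented for illustration; a real script would be tuned to the specific vulnerability and would feed an alerting system rather than print to the console.

    import re

    # Hypothetical signature for the unpatched hole; a real one would come from
    # the vulnerability advisory or your own analysis of the exploit.
    SIGNATURE = re.compile(r"POST /cgi-bin/vulnerable\.cgi", re.IGNORECASE)

    def suspicious_requests(log_path):
        """Return log lines that look like attempts to exploit the known hole."""
        with open(log_path, encoding="utf-8", errors="replace") as log:
            return [line.rstrip("\n") for line in log if SIGNATURE.search(line)]

    if __name__ == "__main__":
        for hit in suspicious_requests("/var/log/apache2/access.log"):
            print("possible exploit attempt:", hit)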

Some would opt for a final alternative: ignore the problem and hope that you’re not hit. This isn’t a strategy, it’s wishful thinking. Maybe you won’t be hit, but if you’re not watching carefully you’ll never know if you were right or not.

Deciding between these choices requires complex analyses and calculations, calculations that are always imperfect because the necessary data is unobtainable. Though there have been attempts at general quantitative solutions—see [Beattie et al. 2002] for an excellent one—the answers are at best probabilistic and at worst meaningless because of inadequate data. Furthermore, the existence of targeted attacks skews the results; you’re no longer dealing with a random function.

There are a number of factors to consider. Some are generic security questions; others are specific to the particular hole.

Attacker motivation Are you being targeted? If so, by whom? Answers of “no” are rarely definitive; answers of “yes” should be taken seriously.

Attacker capability For this specific hole, how much sophistication is needed to launch an attack? If, say, it reduces the complexity of finding a cryptographic key from 2²⁵⁶ operations to 2⁷⁰, you’re probably safe from all but the Andromedans (see the back-of-the-envelope sketch following these factors). On the other hand, if there’s a kit out there that merely requires a script kiddie to click “P0wn!”, the risks are considerably higher.

Exploit availability How widespread is exploit code? If the hole was originally reported on public mailing lists like bugtraq and Full Disclosure, it’s a pretty good bet that anyone who wants it, has it. Holes that were closely held by the vendor until patch release are less likely to be exploited initially, but the likelihood goes up with time: attackers study the patches to learn what new things they can do to unpatched systems. Indeed, some people use the phrase “Exploit Wednesday” for the day after Microsoft’s regular monthly “Patch Tuesday” security update [Leffall 2007].

On the other hand, if there’s a report that a 0-day is in active use, that’s a strong indication that you should move very quickly to install the patch when it becomes available and to mitigate its impact until then.

Patch quality How good is the patch? Does it really solve the problem? Does it create new problems? Patches are software; therefore, they can be buggy. Furthermore, since there is often pressure to ship patches quickly, they may undergo less testing than base code.

Security and functionality problems with patches are far from unknown. Sometimes, they don’t fix the problem [Greenberg 2012]; other times, they can introduce new ones [Gueury and Veditz 2009]. Speed of response to a newly announced hole is good, but not if it comes at the expense of quality and hence security [Bellovin 2009b].

Patch timing When did the patch become available? At the start of (your) workday early in the week? 3:00 AM on a holiday weekend? It’s tempting to try to compare that to the attackers’ work schedule, but that’s probably fruitless. If nothing else, the Andromedans’ ritual calendar is derived from the rotation of a distant pulsar, the product of two random twin primes, and the current state of health of Schrödinger’s cat [Trimmer 1980]. More seriously, attackers can be anywhere in the world; if they’re serious attackers (and in particular if they’re targeting you), they’ll strike when they can, and not take the weekend off.

Damage potential What is the potential for damage if a system is hacked? Will sensitive data be compromised? Does the data on the system fall within the ambit of mandatory breach notification laws?

Importance of availability How vital is it that the system be available? To whom, under what conditions? Is it mission critical—for an online store, the web site is the business—or is the system just running a background task that is looking for “a message in eleven dimensions hidden deep inside the number pi” [Sagan 1985]? Can you do without it for a while? Which is worse, having it unavailable now or during cleanup and recovery if it’s hacked?
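For the attacker capability question, a little arithmetic (mine, not from any of the cited reports) shows why a 2⁷⁰-operation attack is out of reach for ordinary criminals but not necessarily for the Andromedans. The guessing rates below are assumptions chosen only to illustrate the scale.

    OPS = 2 ** 70
    SECONDS_PER_YEAR = 365 * 24 * 3600

    # Two assumed guessing rates: one fast machine vs. a nation-state's cluster.
    for rate in (10 ** 9, 10 ** 15):
        years = OPS / rate / SECONDS_PER_YEAR
        print(f"{rate:.0e} guesses/sec: about {years:,.2f} years")

    # Output (approximately):
    #   1e+09 guesses/sec: about 37,436 years  (out of reach for script kiddies)
    #   1e+15 guesses/sec: about 0.04 years    (roughly two weeks of Andromedan effort)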

Suppose you can’t patch immediately, either because no patch is available or for any of the other reasons listed above. Now what? The right response is very situation dependent; beyond the questions above, it depends on just how much you know about the exploit and how it is being used.

Under certain circumstances, the right response might be procedural. For example, if there’s a 0-day PDF exploit in the wild, you might be able to protect yourself by telling members of the organization not to open suspicious PDFs. It might work, but absent further instructions or controls it’s a very risky approach. A lot, though, depends on the precise nature of the attack.

Few people ever knowingly open something that’s boobytrapped. The trick, though, is telling people how to recognize fraudulent messages. Perhaps most people will recognize phony airline ticket receipts, package tracking notices, and the like. Skillfully crafted spear-phishing attachments are much harder to detect, unless all of your employees are the type to peruse Received: lines on inbound email.

Another approach might be purely technical: drop or reject all messages with attached PDF files, or strip such attachments from any inbound messages. This is, of course, a self-inflicted denial of service; to work around it, employees may try to evade the ban by shipping around URLs to cloud-based storage services, having PDF-containing email sent instead to personal accounts and importing the attachments via flash drives, etc. Curiously enough, under these circumstances and threat model—spear-phishing attacks exploiting a 0-day hole in PDF viewers—this behavior may not create a security hole. Consider: if the files in question are from known correspondents, it would take two-way communications to set up the alternate channel. This, though, means that the sender is verified, at least by email address. The spear-phishing incidents we’ve seen thus far involve passive impersonation, not account hijacking or the like. While one can certainly imagine that MI-31 is reading such emails and can adapt accordingly, that would represent a considerable escalation in the typical attack effort.
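As a sketch of the purely technical option, the fragment below strips PDF attachments from a message using Python’s standard email library. The function name and the drop-silently policy are mine; a production mail gateway would also recurse into nested multipart structures, log what it removed, and tell the recipient that something is missing.

    import email

    def strip_pdf_attachments(raw_message: bytes) -> bytes:
        """Remove top-level application/pdf parts from an inbound message (sketch only)."""
        msg = email.message_from_bytes(raw_message)
        if msg.is_multipart():
            kept = [part for part in msg.get_payload()
                    if part.get_content_type() != "application/pdf"]
            msg.set_payload(kept)
        return msg.as_bytes()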

These, as noted, are work-arounds. Really, though, you want to patch holes as soon as you can. The trick is being able to do so effectively.

13.2 The Problem with Patches

Apart from the issue of whether the patch actually fixes the security problems to which it is addressed, there are two aspects of patches that merit caution. First, of course, patches are software and are thus subject to the “thousand unnatural shocks that [code] is heir to” [Shakespeare 1603]. That is, they themselves can be buggy, insecure, and so on, just as the base code can be. In fact, patches can be worse. When you’re writing new code, you have a relatively clean slate and can design appropriate interfaces to do what (you think) you need to do. By contrast, a patch is a change to a flawed but extant code base; the structure of that code may not let you easily do what you want. Consider a simple example: you realize that a procedure needs to check its inputs more carefully and pass back an error indication if there’s a problem. It sounds simple enough—unless that procedure had no provision for returning any status indication; worse yet, it’s invoked from many places, some of which are not well-suited to error handling. Now what? Any experienced programmer can think of several solutions in less time than it took me to type this; the fact remains, though, that the code won’t be as clean as it could have been had the need been recognized initially. Furthermore, one of the obvious methods—passing back an “impossible” value as its normal output—could cause problems for some of those other pieces of code, especially if “impossible” turns out to be an overstatement.
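A small, entirely hypothetical Python sketch of that last pitfall: the original function assumed its input was always valid, the patch bolts on a -1 “impossible” return value, and an old caller quietly misbehaves because -1 is not, in fact, impossible.

    def find_user(users, name):
        """Original code: returns the index of name, silently assuming it is present."""
        return users.index(name)

    def find_user_patched(users, name):
        """The patch: validate the input and return -1, an 'impossible' index, on failure."""
        if name not in users:
            return -1
        return users.index(name)

    # A caller written against the old interface never expected a failure value,
    # and in Python -1 is a perfectly legal index (the last element), so bad input
    # is silently mapped to the wrong user instead of producing an error.
    users = ["alice", "bob", "carol"]
    idx = find_user_patched(users, "mallory")
    print(users[idx])        # prints "carol" -- no crash, just wrong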

A second problem, especially serious in large enterprises, is that often, specialized (and perhaps locally written) applications are incompatible with the patch: they relied, implicitly or explicitly, on old, buggy behavior. Your CEO will not be happy if you explain that the corporation can’t function because you pushed out a security patch. For that matter, you won’t be happy, either, if one of the affected applications is the payroll system that writes your paycheck.

The solution, of course, is testing, by the software vendor and by you in your own test lab. Historically, the first round of testing has not always been high quality; more than one patch has caused serious problems [Goodin 2013] or failed to fix the security hole it was intended to close [Greenberg 2012]. Beyond that, vendor testing is more or less by definition inadequate for your environment: the vendor doesn’t know your precise configuration or applications. You have to test things yourself, and make sure that the applications you care about continue to work; that in turn means that you have to have a good-enough test lab and the resources to use it. Even that’s not a guarantee, of course; “Program testing can be used to show the presence of bugs, but never to show their absence” [Dijkstra 1970].

So: patches can be incomplete or buggy, the vendor may not have tested them well enough, it’s a nuisance for you to test them, even that won’t show all problems—and you absolutely have to install them. The bad guys often reverse-engineer patches [Leffall 2007; Naraine 2007] (and the Andromedans certainly do), which means that once the patch is out you’re at increased risk, though from whom and by how much depends on the threat model.

There are related issues surrounding new versions of a product: when should you install them? Any experienced sysadmin has heard and uttered the mantra “never install .0 of anything”; the advice is quite sound for production systems. In the long run, though, you don’t have a choice; vendors don’t want to support old codebases forever and will end-of-life (EOL) them at some point. Once a product has been EOLed, there will be no more security patches for it, and the one thing worse than installing security patches is not having any patches to install. If that’s not enough to force your hand, what will you do when some other upgrade—a new version of your OS, or newer hardware that isn’t supported by your old OS—leaves you no choice? You can often get away with skipping a version, but at some point you will have to upgrade. The proper response is to plan how to do it (and lobby upper management for the necessary budget and staff), not to deny the necessity.

13.3 How to Patch

Assuming that a decision to install a security patch has been made, there are three important procedural steps:

• Deciding on a per-machine schedule for installation

• Actually installing the patch

• Tracking which machines have or have not had the patch installed

This applies to all patches. The situation for security patches is different, though, since there’s a cost to not installing them beyond the noticeable loss of functionality that comes from running on a buggy platform.

Deciding when to install a patch depends on four different factors. The first, of course, is how much confidence you have that it won’t be harmful. If you’re highly confident that it won’t break anything (perhaps because of your testing, or perhaps because the affected modules are not ones that you use), there’s no reason to hold off. Conversely, if the patch affects mission-critical code and you haven’t tested it, you should hold off.

A second critical factor is whether the security hole is actively being exploited. Often, this will be reported in the technical press, for example, [Goodin 2012a], or on social media or security mailing lists. At other times, you might hear this from colleagues or from some government agency, as happened with a recent Internet Explorer (IE) bug [Rosenblatt 2014]. Obviously, you have to take remedial action very quickly in a case like this.

You have to be very careful about your threat model, though. A report by Microsoft gives some figures on when various new holes are exploited and by whom. They looked at 16 new vulnerabilities found over a two-year period. Only two ever made it into exploit kits used by ordinary criminals, and that was a rather late development—but nine of them were used very early in targeted attacks [Batchelder et al. 2013, p. 9]. In other words, ordinary care in patching, rather than crash programs, will generally suffice, except in rare cases or if you’re being attacked by MI-31.

Sometimes, you may have other defenses that you can rely on until you’re ready to patch your system. In the case of that IE exploit, you may be able to configure your web proxy (which your firewall forces all employees to use) to block external web browsing by IE users. Microsoft’s Enhanced Mitigation Experience Toolkit (EMET)1 is reputed to be highly effective at blocking exploits even after they’re downloaded. Relying on this is a delicate dance, though; you have to be sure that all exploit paths are blocked. You may know, for example, that your mail gateway will detect and delete some particular nasty file, but do you know it won’t be downloaded via the web or carried in on a USB drive?

1. “The Enhanced Mitigation Experience Toolkit,” https://support.microsoft.com/kb/2458544.

Finally, you may know that your organization isn’t at risk. Perhaps there’s a bug in the encryption module used by web browsers and an instant messaging program, but it’s only exploitable in the latter. Your organization doesn’t use instant messaging, so you’re not at risk; you do rely on encrypted web browsing, though, so you don’t want to risk breaking it.

Suppose you’ve decided that a given patch should be installed. In an ideal world, you just tell that to your database-driven sysadmin platform (Section 15.3), good magic happens, and all is well. (If you don’t have such tools and you’re a large organization—well, you should. Go read Chapter 15 and then come back here. I’ll wait.) Decentralized or open organizations have a harder time; they’re more reliant on users doing the right thing, to wit, installing the patches when they’re told to.

The more different platforms in use, the harder the sysadmin group’s job becomes. They need the time, test machines, and expertise to evaluate patches from multiple vendors and make the right decisions. The usual (and generally correct) response is segmentation: a small list of fully supported platforms, with others permissible on an at-the-users’-own-risk basis. Consider the following statement by my own university’s IT group:

Currently, Administrative Applications are not certified to run on Windows 8. Additionally, Windows 8 is only an install supported product; CUIT does not support Windows 8 functionality at this time.

They don’t say don’t run Windows 8; they simply say it’s not certified or supported. It may work (and in fact probably does work very well)—but if there is a problem, it’s your problem, not theirs. This is a university, about as open an environment as one can imagine. There’s at least one of every imaginable system here (and some unimaginable ones as well); there’s an obvious limit to what the IT group can do.

In corporations, there is rarely that much flexibility about what computers are used. However, there’s one growing exception: the Bring Your Own Device (BYOD) movement, where employees are allowed to use their own equipment, especially smart phones. The organization is very limited in what it can do about patch installation: the gear is, after all, employee owned. Often, the best that can be done is to insist that people run a company-supplied audit tool, one that ensures that the system is fully up to date on patches and antivirus software before it is allowed to connect to the corporate network. Such software exists, especially for Windows machines.
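Here is a minimal sketch of the check such an audit tool performs, with an invented required-patch list and a stubbed-out query of the local machine; real products tie this to the operating system’s update and antivirus interfaces and to the network admission system.

    REQUIRED_PATCHES = {"KB5034441", "KB5034123"}   # hypothetical patch identifiers

    def installed_patches():
        """Stub: a real audit tool would query the operating system here."""
        return {"KB5034123"}

    def may_connect():
        missing = REQUIRED_PATCHES - installed_patches()
        if missing:
            print("Connection refused; missing patches:", ", ".join(sorted(missing)))
            return False
        return True

    if __name__ == "__main__":
        may_connect()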

The final important aspect of patch installation is tracking which machines have or have not been patched. Always-on, in-building desktops and servers are the easy case; they’ll almost always be up to date. Mobile devices, home equipment, and systems that are out for repair are more challenging. Automated sysadmin tools generally handle this without undue trouble, but if you’re not using one you need some other recording and audit mechanism. An unpatched machine, especially a mobile machine that can wander outside the firewall, is a risk to the entire organization. (The August 2001 IETF meeting was in the middle of the Code Red worm outbreak. I looked for attacks originating from the meeting LAN. There were at least a dozen infected laptops there, laptops that were tunneling back to their home networks and/or would be physically connected to it the following week. Code Red should not have penetrated any properly designed and administered firewall, but virtually every corporation had it on the inside of their networks. This is likely one reason why.) If you can’t track installation automatically, the use of auditing tools is probably your best option.
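If you do have to roll your own recording and audit mechanism, the core of it is no more than a table of machines, a table of installed patches, and a query for the difference. Here is a toy sketch using an in-memory SQLite database; the machine names and patch identifier are invented for illustration.

    import sqlite3

    def unpatched(db, patch_id):
        """Machines that do not yet have the given patch recorded as installed."""
        rows = db.execute(
            "SELECT name FROM machines WHERE name NOT IN "
            "(SELECT machine FROM installed WHERE patch = ?)", (patch_id,))
        return [name for (name,) in rows]

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE machines (name TEXT PRIMARY KEY);
        CREATE TABLE installed (machine TEXT, patch TEXT);
        INSERT INTO machines VALUES ('desk-01'), ('laptop-07'), ('srv-02');
        INSERT INTO installed VALUES ('desk-01', 'MS13-008'), ('srv-02', 'MS13-008');
    """)
    print(unpatched(db, "MS13-008"))    # ['laptop-07'] -- the wandering laptop, as usual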
