Security Advisories and Alerts: Getting the Intel You Need to Stay Safe

You need to ensure a high level of operational security for the many assets within your organization: routers, switches, servers, laptops and desktops (Apple and Microsoft), BlackBerrys, smartphones, and so on. Pervasive threats exist, from a disgruntled employee to corporate espionage, as shown in Figure 3-1. This chapter deals with a select segment of that problem: information assurance and how updates are managed within your organization.

Figure 3-1 Threat Agents

image

Before you respond to security threats, you must be able to identify them. A threat agent exploits a vulnerability in an effort to cause harm to a computer, network, or company, ultimately in an attempt to impact your organization’s capability to do business. Many types of threat agents can take advantage of several types of vulnerabilities. The potential damage listed in Figure 3-1 represents only a sampling of the risks many organizations should address in their risk management programs.

Some threats are easier to identify than others. For instance, suppose a newly developed application your company is creating uses complex equations to produce results; if the equations are incorrect, or if the application uses the data incorrectly, a cascading error can occur as invalid results are passed from one process to another. These types of issues lie within the application’s code and are hard to identify. Other threats, such as user error (intentional or accidental), are much easier to spot. However, you still must do your legwork. You must monitor and audit user activity on a continual basis. You must conduct audits and reviews to discover whether employees are misbehaving. Their malicious activity doesn’t just show up in your email inbox on a Friday morning. You must put actionable policies and procedures in place to be proactive.

After you identify the vulnerabilities and threats, you must consider the results of those vulnerabilities. What are the risks and what is the loss potential of those risks? Following is the definition for loss potential:

“What the company would lose if a threat agent was actually to exploit a vulnerability (Harris, 2008).”

These losses may manifest themselves as corrupted data, destruction of systems, loss of confidential information (that is, corporate espionage), and lost employee productivity. The result may be destruction, alteration, or disclosure (DAD) of sensitive information that could damage a corporate brand. Asset valuation to determine the single loss expectancy (SLE) and annualized loss expectancy (ALE) helps an organization tag its assets and classify their value.
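The SLE and ALE calculations mentioned above follow the standard quantitative risk formulas: SLE is the asset value multiplied by the exposure factor (the fraction of the asset's value lost in one incident), and ALE is the SLE multiplied by the annualized rate of occurrence (ARO). A minimal sketch, with the dollar amounts invented purely for illustration:

```python
# Standard quantitative risk formulas:
#   SLE = asset value (AV) x exposure factor (EF)
#   ALE = SLE x annualized rate of occurrence (ARO)

def single_loss_expectancy(asset_value, exposure_factor):
    """Expected loss from a single occurrence of the threat."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """Expected yearly loss, given how often the threat occurs per year."""
    return sle * aro

# Example: a $250,000 database server, where a breach destroys 40% of its
# value and is expected roughly once every five years (ARO = 0.2).
sle = single_loss_expectancy(250_000, 0.40)   # 100000.0
ale = annualized_loss_expectancy(sle, 0.2)    # 20000.0
print(sle, ale)
```

An ALE of $20,000 gives management a concrete ceiling on what it is rational to spend per year protecting that asset.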

Some of those threats (malware, botnets, viruses, Trojan horses, and cyber attacks from hackers or insiders) are more easily combated than others. You fight these nefarious agents by keeping your computing environment up to date and your users educated through consistent user awareness training on current policy, standards, procedures, and guidelines. When a threat is identified, you must respond. The how, the why, and the where to look are all addressed next. Keep reading...this gets good!

Responding to Security Advisories

So, how should you respond when you are notified there is a potential risk to your organization? What are the procedures? Whom do you contact? Will the risk jeopardize all, or just a few, of the individuals in your organization? Can you afford the risk? These are all questions that senior management is going to want answers to. You could establish a Group Policy Object (GPO) in your organization (assuming you are running a Microsoft environment) whereby each system updates itself by going out to the Internet, contacting the Microsoft security updates website, and downloading all critical updates. But now you have a different risk, don’t you? This is a tricky topic. It is hard to establish a default answer to how you should respond to security advisories as they come out. I’ll try to provide you with a basic framework you can use within your organization. You may add to it, or remove from it, as you want and as it fits within your organization.

The environment or things that you should have in place include the following:

• A dedicated and up-to-date security policy (see Chapter 2, “Security Policies”).

• A chief information security officer (CISO) who is more than a paper tiger. He/she needs to have a budget, authority, and the support of the executive team.

• A change control board (CCB) and procedures; these are discussed later in this chapter.

• A test bed or lab consisting of routers, switches, servers, and client workstations. These must match your current and future corporate technical environment.

• A Windows Server Update Services (WSUS) server to manage critical updates.

Now that you have your ideal environment, consider five steps that need to take place. These may not fit your organization exactly; pick and choose to make them fit. But beware: as you cut to streamline the process, you may cut too much. Adjusting this response framework is similar to cutting hair...just take a little at a time; you can’t put it back after it’s off:

Step 1. Awareness

Step 2. Incident response (protect immediately, or can it wait?)

Step 3. Imposing your will

Steps 4 and 5. Test patches and push patches

The steps of this framework contain a lot of information and procedures that we touch on but do not explore in depth, because that is outside the scope of this book. Volumes have been written on risk management and change management procedures. I just want to make you aware of their existence, explain what these things are, and briefly discuss their importance within your organization.

Step 1: Awareness

To fix any problem, whether it’s personal or business-related, you first must know there is a problem. For example, when I was a young airman in the USAF, I got into a mess during an inspection. I was supposed to be following a procedure that I didn’t know existed until the inspector came and asked me about a program I had been running. The excuse I told my commanding officer was, “Sorry sir...I didn’t know.” His response to that was I should have known. It was my responsibility to know. Luckily, despite my failure, our unit did manage to pass the inspection. My point is this—ignorance is never an excuse. If you are in charge of a program, building a house, or running a corporation’s information security team, you need to know what your job is about. You need to be better informed than the bad guys. That is what this portion is about: being aware. Several useful sites are available to help you stay abreast of what’s going on.

The sections that follow describe the security advisories of Cisco, Apple, and Microsoft because each is a leader in its industry: routing, switching, and Internet connectivity; servers; and desktop computing. That doesn’t mean that other organizations are any less informed; a lot of information is available, from the Common Vulnerabilities and Exposures (CVE) database to the Defense Information Systems Agency (DISA) Information Assurance web page. (See the “Chapter Review” section.) We encourage you to begin there.

Cisco Security Advisories

The following is from the Cisco website:

“...Cisco releases bundles of IOS Security Advisories on the fourth Wednesday of the month in March and September of each calendar year. This does not restrict us from promptly publishing an individual IOS Security Advisory for a serious vulnerability which is publicly disclosed or for which we are aware of active exploitation. All other non-IOS Cisco security vulnerabilities will continue to be announced per the Cisco standard disclosure policy...”

You can receive these advisories for free and be more proactive; just sign up for the email notifications via the link here: http://www.cisco.com/en/US/products/products_security_advisories_listing.html.

“...Starting in January 2011, Cisco will be providing additional information, available through the Cisco Bug Toolkit, on all bugs reviewed by the Cisco Product Security Incident Response Team (PSIRT)....”
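Vendor advisory notifications such as these typically arrive by email or as an RSS feed. As a sketch of how you might automate the awareness step, here is a minimal parser that pulls advisory titles from a generic RSS 2.0 feed. The feed snippet and its entries are invented for illustration; a real script would fetch the vendor's actual feed URL instead of the inline sample.

```python
import xml.etree.ElementTree as ET

# A generic RSS 2.0 snippet standing in for a vendor advisory feed.
# The entries below are made up for illustration, not real advisories.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Vendor Security Advisories</title>
  <item><title>Example IOS Advisory A</title>
        <pubDate>Wed, 23 Mar 2011 00:00:00 GMT</pubDate></item>
  <item><title>Example IOS Advisory B</title>
        <pubDate>Wed, 28 Sep 2011 00:00:00 GMT</pubDate></item>
</channel></rss>"""

def advisory_titles(feed_xml):
    """Pull the title of every advisory <item> out of an RSS feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title", "") for item in root.iter("item")]

print(advisory_titles(SAMPLE_FEED))
```

A scheduled job running something like this against each vendor's feed, and mailing the security team anything new, is one way to make "awareness" a process rather than a hope.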

Apple Security Advisories

The following is from Apple’s website:

Apple provides multiple ways for the end user/administrator to keep on top of the vast number of security updates required. For the protection of its customers, Apple does not disclose, discuss, or confirm security issues until a full investigation has occurred and any necessary patches or releases are available. Apple usually distributes information about security issues in several ways: 1) through its products, 2) through the Apple security website, and 3) through a mailing list.

1. General product information

General information about Apple products is made available at Apple’s website (http://www.apple.com). This information includes product documentation, as well as technical papers, hints, tips, and questions and answers. Notifications developed by Apple are signed with the Apple Product Security PGP key. Apple encourages its customers to verify the signature to ensure that the document was indeed written by Apple staff and has not been changed.

You can verify the signature by going to the following website: https://www.apple.com/support/security/pgp/

2. Updates

Check the Apple Security Updates page for released updates by going to http://support.apple.com/kb/HT1222.

3. Mailing list

The Security-Announce mailing list provides another way for customers to obtain product security information from Apple. You can subscribe via http://lists.apple.com/; the list is also available via RSS feed at http://rss.lists.apple.com/.

Microsoft Security Bulletins

Microsoft’s security-focused website (www.microsoft.com/security/default.aspx) provides links to all things security, whether you are a casual home user or a developer. The section you are probably most familiar with is the page for downloading the critical updates. But there is so much more. Microsoft products provide a means to let the end user know when a critical update is ready for downloading and installation by means of the Windows Update service. Windows Update is a free service built in to Windows. It is designed to help you keep your computer more secure, reliable, compatible with devices, and able to run new features that might enhance your computing experience. Windows Update enables you to easily get what your computer needs, such as the following:

• The latest security updates to protect against malware and other potentially unwanted software

• Updates that improve reliability and performance

• Upgrades to Windows features

• Drivers from Microsoft partners

Although Windows Update needs to check your computer to determine which updates it needs, it does not collect your personal information. Windows Update simply checks to see what software and hardware is installed so that it knows what updates you need. For home users this is a valid option; home users don’t have specific development environments that must be maintained. However, I do not suggest that you, as the security guru of your organization, allow Microsoft to run rampant and update your systems in a one-size-fits-all fashion. An update may break a third-party app used corporatewide, and you won’t know which update it was.
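The alternative to that one-size-fits-all behavior is the approval model WSUS implements: of everything the vendor offers, only updates your team has tested get deployed. A minimal sketch of that filtering idea, with the KB-style update identifiers invented for illustration:

```python
# Sketch of the WSUS "approval" idea: of everything the vendor offers,
# only updates that passed internal lab testing get pushed to clients.
# The KB numbers below are invented for illustration.

def deployable_updates(offered, approved):
    """Return only the offered updates that are on the approved list,
    preserving the vendor's ordering."""
    return [kb for kb in offered if kb in approved]

offered = ["KB5001", "KB5002", "KB5003"]
approved = {"KB5001", "KB5003"}   # KB5002 broke a third-party app in the lab
print(deployable_updates(offered, approved))
```

The point is that the deploy decision lives with your team, not the vendor: a patch that breaks your corporatewide third-party app simply never makes the approved set.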

Microsoft also provides email or IM notification to individuals who have subscribed to its notification system, so that whenever major security updates are released, subscribers are instantly notified.

NIST Security Documents

The following is from the NIST website:

The National Institute of Standards and Technology (NIST) maintains a Computer Security Division website (http://csrc.nist.gov/) that keeps up with the latest trending topics within the information security world.

Information security is an integral element of sound management. Information and computer systems are critical assets that support the mission of an organization. Protecting them can be as important as protecting other organizational resources, such as money, physical assets, or employees. However, including security considerations in the management of information and computers does not completely eliminate the possibility that these assets will be harmed.

Step 2: Incident Response

When you become aware of updates to firmware and OS software, you then need to be able to respond to them in a timely manner. Some third-party applications notify the user of a new update for things such as Java, Adobe Reader, or Adobe Flash Player. Other software, mainly your operating systems, has a mechanism that informs the end user and the system administrator of critical updates that, if not installed in a timely manner, could jeopardize the integrity and security of your systems. This is where the test bed mentioned earlier comes into play.

In an ideal environment, the system administrator would have a WSUS server that downloads all the updates; the system administrator could then push the patches to the client PCs on the test bed before pushing them to the general population of PCs in the organization. Pushing an untested patch to the CEO and crippling his/her email, even for two hours, makes for a very bad day. The old mantra of “test, test, test” is still valid today!
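The test-bed-first logic just described can be sketched as a simple gate: the patch goes to every lab machine, and only if all of them stay healthy does it reach production; any failure means rollback and no production exposure. The health-check callback and hostnames here are stand-ins for whatever validation and inventory your environment actually has.

```python
# Sketch of a staged rollout: patch the test bed first; only if every
# test machine passes its health check does the patch reach production.
# healthy_after(host, patch) stands in for your lab's real validation.

def staged_rollout(patch, test_bed, production, healthy_after):
    for host in test_bed:
        if not healthy_after(host, patch):
            # A test machine failed validation: roll back, never touch production.
            return {"status": "rolled back", "patched": [], "failed_on": host}
    # Every test machine stayed healthy; safe to patch everyone.
    return {"status": "deployed", "patched": test_bed + production, "failed_on": None}

# Example: the patch breaks the second lab machine, so the CEO's PC is spared.
result = staged_rollout("KB9999", ["lab1", "lab2"], ["ceo-pc", "hr-pc"],
                        healthy_after=lambda host, patch: host != "lab2")
print(result["status"])
```

However your real tooling implements it, the invariant is the same: production hosts are only ever touched after the entire test bed has survived the patch.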

This isn’t just about how you as a chief information security officer and your team (if you are lucky enough to have one) handle an incident, but also how well your people handle an incident. Does John from accounting forward you the email he just opened and his screen went black, or does he disconnect from the network and call you? These are the types of things you need to consider when you are putting together an incident response plan. Most likely John is not going to unplug his computer from the network, in case you were wondering.

Providing a framework for incident response is a challenge because, for obvious reasons, we cannot presume to know what your organization is about or how such a framework falls within your internal guidelines. Yet the openness of the process is the feature that makes it most useful to individuals searching for effective security practices. As a result, we have come up with the following compromise that we hope proves effective:

To begin, you need to do the following:

1. Establish roles and responsibilities.

2. Define what a security incident is.

3. Establish procedures for reporting a security incident.

4. Establish guidelines for reporting an incident to an outside agency (such as DISA, NSA, or the Department of Homeland Security).

5. Establish procedures for responding to a security incident.

Establishing Roles and Responsibilities

This section will help you define who has ultimate governance of your organization’s security policy—typically, the CEO, CIO, CISO, or CSO, or some other overarching office within the organization. The security policy should establish a security response team (SRT), identify who those people are, and explain how to reach them in case of an incident. The Roles and Responsibilities section should also list the managers, the system administrators, and the users, and the responsibilities of each. For example:

Users: All employees and other systems users are responsible for reporting security incidents. They must immediately notify their manager or LAN administrator. If the manager or LAN administrator is not available, the user must immediately report the incident to the network service center and notify the CSO or RSO.

Defining a Security Incident

A good but fairly general definition of an incident is “...the act of violating an explicit or implied security policy.” Unfortunately, this definition relies on the existence of a security policy that, although generally understood, varies among organizations.

For the federal government, an incident, defined by NIST Special Publication 800-61, is a violation or imminent threat of violation of computer security policies, acceptable use policies, or standard computer security practices. You can find federal incident reporting guidelines, including definitions and reporting timeframes at http://www.us-cert.gov/federal/reportingRequirements.html.

In general, types of activity commonly recognized as being in violation of a typical security policy include but are not limited to the following:

• Attempts (either failed or successful) to gain unauthorized access to a system or its data, including PII-related incidents

• Unwanted disruption or denial of service

• The unauthorized use of a system for processing or storing data

• Changes to system hardware, firmware, or software characteristics without the owner’s knowledge, instruction, or consent

Security incidents might involve suspected threats to persons, attempted systems intrusions, unauthorized release of Privacy Act information, theft of government or personal property, or any other suspicious situation. This policy provides the procedure for reporting those incidents. It should be broken down into three or more subsections:

1. Information systems security incidents

2. Physical security incidents

3. Misuse or abuse

1. Information Systems Security Incidents This policy subsection can be divided between malicious software, called malware (which consists of viruses, worms, Trojans, spyware, bad adware, botnets, and most rootkits), and systems intrusion.

Malicious code can be a virus, worm, or Trojan horse, and all are designed to do damage to data.

A virus is a specifically programmed set of instructions intended to destroy, alter, or cause loss of data. It can spread from one program to another, from one system to another, or from one computer to another. A typical computer virus copies itself into the operating software and executes instructions to erase, alter, or destroy data.

Worms are similar to viruses in that they make copies of themselves, but differ in that they need not attach to particular files or sectors. After a worm executes, it seeks other systems to infect, and then copies its code to them.

Trojan horses are not viruses. However, they are programs that contain destructive payloads, which pretend to be legitimate programs. They are spread when the user executes the program.

Botnets are collections of infected computers (bots) that have been taken over by hackers and are used to perform malicious tasks or functions. A computer becomes a bot when it downloads a file (for example, an email attachment) that has bot software embedded in it. A compromised computer is considered part of a botnet when the attacker can direct it via IRC channels without logging in to the client’s computer. A botnet consists of many threats contained in one. The typical botnet consists of a bot server (usually an IRC server) and one or more bot clients.


Note

An Internet Relay Chat channel (IRC channel) is a form of real-time Internet text messaging (chat) or synchronous conferencing used primarily for group communication in discussion forums (channels), but it also enables one-to-one communication via private message, as well as chat and data transfer, including file sharing.

IRC was created in 1988. Client software is now available for every major operating system that supports Internet access. As of April 2011, the top 100 IRC networks served more than half a million users at a time, with hundreds of thousands of channels operating on a total of approximately 1500 servers out of roughly 3200 servers worldwide.


You might have controls in place to protect your data from alteration, destruction, and disclosure; however, there still might be attempts to gain access to your systems. Systems intrusions can take various forms. They may include denial of Internet or email services, unauthorized control or modification of web pages, vulnerability scanning, password cracking, sniffing, social engineering to gain system access, and others. All suspected systems intrusions, or attempts, must be immediately reported to management.

2. Physical Security Incidents Your organization’s physical security program should be designed to protect personnel, facilities, materials, equipment, and information against threats both natural and man-made. Your corporate internal policies and instructions should contain the policies for your employees and the procedures for reporting incidents. Examples range from something as simple as posting the fire evacuation route and designating areas where employees are to meet, to something as critical as dealing with theft and threats. Most organizations sum up these physical security controls as guns, guards, and gates.

3. Allegations of Fraud, Abuse, or Misuse Many state laws, industry standards, and federal statutes require you to protect the integrity, privacy, and confidentiality of all personal data and to maintain the trust of your customers. We’ve gone over just a few in Chapter 2. You, as the security professional within your organization, should take precautions by using policies, procedures, standards, guidelines, and controls to ensure that personal data entrusted to your organization is not misused and that its programs are safe from abuse by the public and the employees who administer them.

You should list the applicable manuals or regulations you use and ensure your people are aware of their responsibilities as users, employees, and contractors. Also make known to the employees how to report suspected fraud cases to their supervisors or through the fraud hotline.

Establishing Procedures for Reporting a Security Incident

Employees, contractors, and members of the public may report any suspicious incidents involving information systems. You need to have procedures in place that define

• Whom to notify

• What to report

Typically, you would report the incident up the chain until it gets to security. (That is, you go from the end user to the immediate supervisor, and then to the IT coordinator and information assurance officer sitting in the security office.) Security will do a risk assessment and make decisions as to its response. You might have the end users disconnect their systems from the network to prevent any further contamination, or you might just have them turn off their systems immediately. Either way the risk is mitigated until you and your team can get a full assessment of what has happened.

The what to report is an easier list:

1. Employee name, number, and email address

2. Alternative point-of-contact (POC)

3. Location of the affected machine

4. Hostname and IP address of affected machine

5. Data or information at risk

6. Hostname and IP address of source of the attack (if known)

7. Any other information you can provide that assists in analyzing the incident
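The report fields above are easy to forget in the middle of an incident, so it helps to bake them into a record your intake process fills out. A minimal sketch as a Python dataclass; the field names, sample values, and the choice to make the attack source optional are our own, so adapt them to your organization's reporting form.

```python
from dataclasses import dataclass
from typing import Optional

# The "what to report" checklist captured as a record, so nothing is
# forgotten at intake. Field names and sample data are illustrative.
@dataclass
class IncidentReport:
    employee_name: str
    employee_number: str
    email: str
    alternate_poc: str
    machine_location: str
    hostname: str
    ip_address: str
    data_at_risk: str
    attack_source: Optional[str] = None   # hostname/IP of attacker, if known
    notes: str = ""                       # anything else aiding analysis

    def missing_fields(self):
        """List the required fields the reporter left blank."""
        required = ["employee_name", "employee_number", "email",
                    "alternate_poc", "machine_location", "hostname",
                    "ip_address", "data_at_risk"]
        return [f for f in required if not getattr(self, f)]

report = IncidentReport("John Doe", "12345", "jdoe@example.com", "Jane Roe",
                        "Bldg 2, Rm 110", "acct-ws17", "10.1.2.17",
                        data_at_risk="")
print(report.missing_fields())
```

A help-desk form or ticketing template built around the same required list serves the identical purpose; the value is the checklist, not the language.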

Establishing Guidelines for Reporting an Incident to an Outside Agency: What Are You Required to Report?

As noted earlier, NIST Special Publication 800-61 defines an incident for the federal government as a violation or imminent threat of violation of computer security policies, acceptable use policies, or standard computer security practices. Federal incident reporting guidelines, including definitions and reporting timeframes, are available at www.us-cert.gov/federal/reportingRequirements.html.

If you are a member of a federal agency and a security breach occurs in the information systems realm, you are required to report security incidents to FedCIRC. These reports are used by FedCIRC to build a governmentwide snapshot of attacks against government cyber resources and to assist in developing a governmentwide response to those incidents.

This URL takes you to the new federal online incident reporting website: http://www.us-cert.gov/federal/.

Step 3: Imposing Your Will

Every company should have a policy indicating how changes take place within an organization, who can make those changes, how they are approved, and how the changes are documented and communicated to employees. Heavily regulated industries, such as finance, pharmaceuticals, and energy, have strict guidelines about what can be done, at exactly what times, and under which conditions.

Historically, change management was a software development term that referred to a committee that made decisions about whether proposed changes to a software project should be implemented. The committee is composed of a selection of personnel who make up the decision-making head of the corporation (that is, department heads), and this body is called a change control board (CCB). The CCB is made up of project stakeholders or their representatives. The authority of the change control board may vary from project to project, but decisions reached by the CCB are often accepted as final and binding. A typical CCB consists of the development manager, the test lead, and a product manager.

Change management aims to ensure that standardized methods and procedures are used for efficient handling of all changes. The main goals of change management include the following:

• Minimal disruption of services

• Reduction in back-out activities

• Economic utilization of resources involved in the change

Change management would typically be composed of the following activities:

• Raising and recording changes

• Assessing the impact, cost, benefit, and risk of proposed changes

• Developing business justification and obtaining approval

• Managing and coordinating change implementation

• Monitoring and reporting on implementation

• Reviewing and closing change requests
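These change-management activities form a simple lifecycle, and a tracking tool usually enforces the ordering so a change cannot skip ahead of its approval. A minimal sketch; the stage names are paraphrased from the activities above rather than taken from any particular framework.

```python
# Sketch of a change-request lifecycle; each request may only move
# forward one stage at a time, so nothing skips its assessment or
# approval. Stage names paraphrase the activities listed above.
STAGES = ["raised", "recorded", "assessed", "approved",
          "implemented", "reviewed", "closed"]

class ChangeRequest:
    def __init__(self, summary):
        self.summary = summary
        self.stage = "raised"

    def advance(self):
        """Move the request to the next lifecycle stage."""
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise ValueError("change request already closed")
        self.stage = STAGES[i + 1]
        return self.stage

cr = ChangeRequest("Apply vendor security patch to mail servers")
for _ in range(3):
    cr.advance()
print(cr.stage)
```

The forward-only rule is the whole point: an "implemented" change with no "approved" stage behind it is exactly the audit finding a CCB exists to prevent.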

Change management is responsible for managing the change process involving the following:

• Hardware

• Communications equipment and software

• System software

• All documentation and procedures associated with the running, support, and maintenance of live systems

Any proposed change must be approved in the change management process. Although change management makes the process happen, the decision authority is the change advisory board (CAB), which is made up for the most part of people from other functions within the organization.

Now let’s put that into perspective and focus on security, incident response, and patch management. The concept is the same: any changes you are bringing into the organization should go before this CCB so that it is understood what is being patched, why, and what happens if the patch or upgrade should fail. Typically, this CCB would consist of a member of the SRT, the CISO, and department heads.

Security Response Team (SRT)

The SRT is responsible for making decisions on tactical and strategic security issues within the enterprise as a whole and should be tied to one or more business units. The group should be made up of people from all over the organization so that they can view risks and the effects of security decisions on individual departments and the organization as a whole. The CEO should head this team, and the CFO, CIO, CISO, department managers, and chief internal auditor (if applicable) should all have a seat. This team should meet quarterly, at a minimum, and have a well-defined agenda. Its responsibilities include the following:

• Defining the acceptable risk level for the organization

• Developing security objectives and strategies

• Determining priorities of security initiatives based on business needs

• Reviewing risk assessment and auditing reports

• Monitoring the business impact of security risks

• Reviewing major security breaches and incidents

• Approving any major change to the security policy and program

It should also have a clearly defined vision statement in place set up to work with and support the organizational intent of the business.


Note

When comparing a CSO to a CISO, note that the CSO and the chief information security officer might have overlapping responsibilities. It is up to the organization to define the roles and whether one or both will be used. The CSO role usually has a farther-reaching list of responsibilities than that of the CISO. The CISO is typically focused more on the hands-on technical aspects and has an IT background, whereas the CSO is more focused on business risks, including physical security. In an organization that has both, the CISO typically reports directly to the CSO.


Steps 4 and 5: Handling Network Software Updates (Best Practices)

How software updates are tested and applied to your working environment is a matter of balancing the need for security and the need for functionality. Many organizations allow the users (in a Windows environment—and let’s face it, most office environments are Microsoft based) to use Microsoft’s built-in security update tool, which in theory is a good idea but in practice might not be the smartest thing to do. Microsoft’s security update tool sees the user’s environment as living in a vacuum. It sees that you have a certain operating system, Microsoft Office products, maybe a database, maybe Visio, and so on, and then it pulls down the most up-to-date patches for that environment. It cares less about testing the patch on your system than your end user does. Apple’s latest OS, Mac OS X, has a tool that checks weekly for updates or patches and enables you to choose which ones you want to have installed. This is better but not perfect. You might have a system running a bit of third-party middleware that cannot be upgraded for whatever reason. Your end users download the latest patch, update their system, break it, and leave it more vulnerable than before.

To complicate matters even more, you might have virtual machines in your server farm. How can updating the host affect the virtual machines?

You need to have a plan, and this plan needs to consist of testing, going through a CCB, and then pushing the patches, with a rollback plan in place if the patch fails.

Some generic best practices apply to all updates regardless of whether they are service packs, hotfixes, or security patches. Then there are some specific items that need to be performed depending on what kind of update it is (security patch, hotfix, or service pack). First, let’s define what these are aside from a general acknowledgment that they are updates to products to resolve a known issue or workaround.


Note

Refer to Best Practices for Applying Service Packs, Hotfixes, and Security Patches, by Rick Rosato.


Service Pack: A collection of updates, fixes, and enhancements to a software program delivered in the form of a single installable package. Many companies, such as Microsoft or Autodesk, typically release a service pack when the number of individual patches to a given program reaches a certain limit.

Hotfix: A single, cumulative package that includes one or more files used to address a problem in a software product. In a Microsoft Windows context, hotfixes are small patches designed to address specific issues, most commonly freshly discovered security exploits and other vulnerability concerns. Other companies define the term differently; the game company Blizzard Entertainment has its own definition for the term hotfix in its game World of Warcraft:

“...a hotfix is a change made to the game deemed critical enough that it cannot be held off until a regular content patch. Hotfixes require only a server-side change with no download and can be implemented with no downtime, or a short restart of the realms.”

Security Update: A change applied to an asset to correct the weakness described by a certain vulnerability. This corrective action will prevent successful exploitation and remove or mitigate a threat’s capability to exploit a specific vulnerability in an asset. If that sounds like a definition, that’s because it is: it’s taken right from the CISSP study material. What this means, in less convoluted terms, is that security updates or patches do just what their name implies. They are the primary source for fixing security vulnerabilities in software. Microsoft releases all its security patches once a month; other companies have different dates throughout the month.
