Technology Evaluation
185
to where critical data and applications reside. An example of an area that
is often reviewed is user ID administration, and terminated employees in
particular. In the case of terminated employees, network access is often
the only access that is removed. As part of the testing on the network
operating system, some of the areas that might be tested include:
Authentication process
Audit settings
Disabling workstations after a certain number of unauthorized log-on
attempts
Machines logging off automatically after a period of inactivity
The items above are only a small sample of items that might be tested.
The actual test items will depend on what is deemed critical to the local
area network. Based on what is reviewed and the factors listed in the
previous section, you can determine whether it is appropriate to use a
tool, do manual testing, or use some combination thereof.
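As a sketch of how such settings checks can be scripted when a tool is appropriate, the snippet below evaluates a handful of network-operating-system settings against assessment thresholds. The setting names and threshold values are illustrative assumptions, not any vendor's actual parameters.

```python
# Hypothetical settings check for a network operating system review.
# Setting names and thresholds are examples, not real vendor parameters.

def review_nos_settings(settings, max_failed_logons=5, max_idle_minutes=15):
    """Return a list of findings for settings that miss the thresholds."""
    findings = []
    if not settings.get("auditing_enabled", False):
        findings.append("Auditing is not enabled")
    if settings.get("lockout_threshold", 0) == 0:
        findings.append("Accounts are never locked out after failed log-ons")
    elif settings["lockout_threshold"] > max_failed_logons:
        findings.append(f"Lockout threshold {settings['lockout_threshold']} "
                        f"exceeds the recommended {max_failed_logons}")
    idle = settings.get("idle_logoff_minutes")
    if idle is None or idle > max_idle_minutes:
        findings.append("Sessions are not logged off after "
                        f"{max_idle_minutes} minutes of inactivity")
    return findings

example = {"auditing_enabled": True, "lockout_threshold": 10,
           "idle_logoff_minutes": 60}
print(review_nos_settings(example))  # two findings: lockout and idle logoff
```

In practice such checks would read the settings from the operating system itself; the dictionary here stands in for that collection step.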
Critical servers — With critical servers, where mission-critical data and
applications are potentially stored, it is a given that detailed testing will
be performed on them. With these servers, the impact to the business
would probably be very significant if they went down or if confidential
information stored on them was compromised. The testing done on critical
servers focuses on a number of areas including (but not limited to):
Access control — The question here is simply “Who has access to this
machine and why do they have it?” Because these are critical machines,
it is important to ensure that access is limited to those individuals who
require it.
What services are running? — The rule of thumb is to deny all services
and allow only those required for operational reasons. Unnecessary
services running provide a way for attackers to get to the machine.
Testing for unnecessary services is normally done by a tool, which
generates a report of all services running. This report is reviewed with
client personnel to determine which services are truly necessary; all
other services are turned off.
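The review step above can be sketched as a simple comparison between the services a tool reports as running and the set the client confirms as required. The service names and the approved set here are hypothetical examples.

```python
# "Deny all, allow required" services check. The running list would normally
# come from a scanner's report; both lists here are hypothetical.

def unnecessary_services(running, approved):
    """Return running services that are not on the approved list."""
    return sorted(set(running) - set(approved))

running = ["sshd", "httpd", "telnetd", "ftpd", "ntpd"]
approved = {"sshd", "httpd", "ntpd"}
print(unnecessary_services(running, approved))  # ['ftpd', 'telnetd']
```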
Security patch levels — With all of the new vulnerabilities coming out
on a frequent basis, keeping up to date on security patches is critical.
Manual and automated methods for checking patch levels exist. Some
tools actually identify what patches are missing and then automatically
install them without user intervention.
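At its core, a patch-level check is a comparison between the patches installed on a machine and the patches required for its software versions. The sketch below uses made-up patch identifiers; a real tool would inventory the system itself.

```python
# Hypothetical patch-level comparison; identifiers are illustrative only.

def missing_patches(installed, required):
    """Return required patches that are not yet installed."""
    return sorted(set(required) - set(installed))

installed = {"KB101", "KB205", "KB310"}
required = {"KB101", "KB205", "KB310", "KB399", "KB412"}
print(missing_patches(installed, required))  # ['KB399', 'KB412']
```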
Best-practice hardening standards — Critical technologies often have
best-practice hardening standards developed either by the vendor or by
the security community. These standards are relatively easy to find on
the Internet and are a great resource when testing technology as part
of a security assessment. Other excellent resources that can be used
are the SANS Top 20 and the Open Web Application Security Project
(OWASP)
application-related vulnerability list. Both of these resources
are freely available on the Internet and widely known. These standards
AU1706_book.fm Page 185 Tuesday, August 17, 2004 11:02 AM
A Practical Guide to Security Assessments
are checklists that have to be modified for the environment, similar to
the more process-oriented checklists in the appendices of this book.
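Once a hardening checklist has been tailored to the environment, working through it can be summarized programmatically. The control names below are illustrative and not drawn from any particular published standard.

```python
# Summarize results of a tailored hardening checklist; item names are
# illustrative examples, not from any specific published benchmark.

def summarize_checklist(results):
    """Return (percent_passed, failed_items) for a {control: bool} mapping."""
    failed = sorted(c for c, passed in results.items() if not passed)
    pct = 100.0 * (len(results) - len(failed)) / len(results)
    return pct, failed

results = {"Disable guest account": True,
           "Enforce password complexity": True,
           "Remove sample applications": False,
           "Restrict remote registry access": False}
pct, failed = summarize_checklist(results)
print(f"{pct:.0f}% compliant; failed: {failed}")
```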
Firewalls — Firewalls are a key component of the overall security architecture and will almost certainly be tested in any security assessment.
Firewalls are put up to filter traffic that is coming into and out of the
network. Firewall testing has three components:
Firewall placement — There is a review of the overall network topology
and the placement of the firewall to determine whether it is optimally
placed. In addition, access control to the firewall is also reviewed to
determine who has access to the firewall.
Rule base configuration — The firewall rule base is reviewed with the
client to determine whether the rules reflect the client’s business
requirements with the rule of thumb being “deny everything and allow
only what is required.” With firewalls, VPN (virtual private network)
functionality is often used, and if that is the case, VPN configuration
is also reviewed.
Logging and monitoring — The logging feature is reviewed with the
client to determine what is logged and the frequency of the log review.
The testing related to the firewall can require significant time with client
personnel due to the interactive nature of the test.
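The "deny everything and allow only what is required" rule of thumb can be illustrated with a minimal first-match rule base that falls through to an implicit deny. The rules and addresses below are hypothetical; real firewall rule bases match on far more attributes (source, protocol, direction, and so on).

```python
# Minimal first-match rule base with an implicit final deny.
# Rules and addresses are hypothetical examples.

def evaluate(rules, packet):
    """Return the action of the first matching rule, or 'deny' by default."""
    for rule in rules:
        if (rule["port"] == packet["port"]
                and rule["dest"] in (packet["dest"], "any")):
            return rule["action"]
    return "deny"  # implicit default deny

rules = [
    {"dest": "10.0.0.5", "port": 443, "action": "allow"},  # web server
    {"dest": "10.0.0.9", "port": 25, "action": "allow"},   # mail server
]
print(evaluate(rules, {"dest": "10.0.0.5", "port": 443}))  # allow
print(evaluate(rules, {"dest": "10.0.0.5", "port": 23}))   # deny
```

When reviewing a real rule base with the client, each "allow" rule should map back to a stated business requirement, just as each entry in this list would.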
Intrusion detection systems (IDS) — Along with firewalls, intrusion detection systems are another significant piece of the overall security architecture.
Traditional intrusion detection systems are essentially alarms — i.e., they
tell you something is happening but you then must validate and research
the issue so you can react. Intrusion detection systems can be a significant
amount of work to manage. The testing of intrusion detection systems
comprises four main areas:
Architecture — The IDS architecture will be reviewed in conjunction
with the network topology to determine whether the IDS sensors are
optimally placed — i.e., are the IDS sensors protecting the critical
hosts and network segments?
Signature update maintenance — For an IDS to continue to be effective,
attack signatures should be updated on a regular basis. The process for
doing these updates will be reviewed.
Incident handling process — IDS primarily serves as an alarm (it can
be configured to take action depending on the software), so the process
for responding to these alarms is important. The alert classifications
as well as the process for responding to alarms are reviewed.
Optimization process — One of the most significant issues with IDS is false alarms, i.e., alerts triggered by benign activity. The way to minimize false
alarms is to review the IDS logs on an ongoing basis, determine which
alerts are false alarms, and make the necessary adjustments so that
those false alarms are minimized.
The different intrusion detection systems are similar in some ways, but each has its own nuances in how alerting is handled. The particular system should be thoroughly researched before reviewing the IDS.
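One practical way to support the optimization step is to tally alerts by signature so the noisiest signatures can be reviewed for tuning first. The log format below ("timestamp,signature") is a made-up example, not any real IDS's output.

```python
# Tally IDS alerts by signature to find candidates for tuning.
# Log lines use a hypothetical "timestamp,signature" format.

from collections import Counter

def noisiest_signatures(log_lines, top=3):
    counts = Counter(line.split(",")[1].strip() for line in log_lines)
    return counts.most_common(top)

log = ["2004-08-17 11:02,ICMP sweep",
       "2004-08-17 11:03,ICMP sweep",
       "2004-08-17 11:05,SQL injection attempt",
       "2004-08-17 11:07,ICMP sweep"]
print(noisiest_signatures(log))  # [('ICMP sweep', 3), ('SQL injection attempt', 1)]
```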
ANALYZE INFORMATION COLLECTED
AND DOCUMENT FINDINGS
The analysis done during this phase is based on data collected during the interviews
with technology owners and the actual hands-on testing you did. The hands-on
testing results consist of notes from manual testing and reports from tests using the
various tools. Many of these reports generated from tools have vulnerability information as well as recommendations for fixing the vulnerabilities, which can be used
in developing recommendations for the final report (Figure 7.4).
When analyzing the information collected in this phase, you should consult with
known guidelines such as vendor-recommended best practices and other technical
standards. You should also look at this information in conjunction with the information from the last phase. Much of the information collected from the business
side and the technology side should make sense together. One of the things to think
about is whether the technology is adequately supporting the business. If certain
technologies went down, could the technology recover fast enough to support the
business?
Example:
Business process owners say that they are extremely dependent on e-mail.
When asked how long they could function with e-mail unavailable, the answers ranged
from four to eight hours. The business process owners also said that not having e-mail
would significantly impact productivity. The technology owners, however, say that
depending on the problem, e-mail can be down for up to 16 hours. If there is a “severe” security incident, e-mail will be down for at least 12 hours on average. A recovery time of 12 hours or more will not be acceptable to business process owners. This is an example
of a finding that could be in the final report.
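The gap in the example above can be expressed as a simple comparison between the business's maximum tolerable downtime and IT's expected recovery time. The figures mirror the example; the function itself is illustrative.

```python
# Compare maximum tolerable downtime against expected recovery time.
# Figures mirror the e-mail example in the text.

def recovery_gap(tolerable_hours, expected_recovery_hours):
    """Return the shortfall in hours (0 if recovery is fast enough)."""
    return max(0, expected_recovery_hours - tolerable_hours)

# Business tolerates 8 hours without e-mail; a severe incident means
# at least 12 hours of downtime, leaving a 4-hour gap to report.
print(recovery_gap(8, 12))  # 4
```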
In identifying vulnerabilities, keep in mind that some vulnerabilities will exist
because they are necessary for the company to do business. For example, certain
ports on the firewall may be open because this is required for certain applications
the client is running. To validate that they are vulnerabilities and to develop an
appropriate recommendation, you must go over the findings from the reports and
your manual testing. In the case of open ports on the firewall, you can identify this
as a vulnerability but you must validate it with the client. If the ports are required
to be open, you might think of other ways to meet the business requirement so that
the port associated with the vulnerability could be closed. In other words, if there
is a way to close that particular port that will allow the client to still conduct business
with no impact, that should be recommended. If not, some other recommendation
will be required that minimizes the risk of having the port open. Remember
that the business comes first and any recommendation you make should have minimal
impact on it.
As with the last phase, the findings resulting from the analysis of this data should
be documented straight into the final report for efficiency’s sake. At this point, you
should be able to document the findings and the associated risks. To the extent that
you have a recommendation (or several recommendations) in mind, you should
document that as well. These details will all be ironed out in the next and last phase
when you do the risk assessment, quantify the risks, and develop recommendations.
FIGURE 7.4 Analyze information collected and document findings. [Flowchart of the phase: evaluate technology environment; general review of technology and related documentation; develop question sets for technology reviews; meet with technology owners and conduct hands-on testing; analyze information collected and document findings; status meeting with client.]
As you move into the next phase, the data collected should be kept safe for two
reasons. First, if you are questioned about a particular finding, you might need to
refer back to your notes and tool-generated reports. Second, the system-related
information you collected in this phase probably has specific details about the client’s
systems that can be deemed highly sensitive. In some cases, the client’s own employees may not be privy to some of the detailed system information you have in your
possession. If this information were to fall into the wrong hands, it could be used
to compromise the client’s systems. Therefore, it is critical to guard the data collected in this phase, along with all other data collected during the assessment.
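One simple safeguard, among others such as encryption, is to ensure that files holding collected findings are readable only by the assessor. The sketch below shows this on a POSIX system; the file name is hypothetical, and on Windows `os.chmod` has limited effect, so ACLs would be used instead.

```python
# Write sensitive assessment notes with owner-only file permissions.
# File name and contents are hypothetical; POSIX semantics assumed.

import os
import stat

def save_sensitive_notes(path, text):
    with open(path, "w") as f:
        f.write(text)
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0o600: owner read/write only

save_sensitive_notes("findings_notes.txt", "host 10.0.0.5: telnet enabled\n")
print(oct(stat.S_IMODE(os.stat("findings_notes.txt").st_mode)))  # 0o600
```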
STATUS MEETING WITH CLIENT
This status meeting with the client (Figure 7.5) is an important one because it is the
first time that the client will see all of the findings from the assessment. To facilitate
this meeting, you should present the client with the findings section of the report
that you have completed so far. The client will have the opportunity to tell you
whether there are any findings where other mitigating factors should be considered
or whether some of the findings are simply not valid. On the other hand, based on
what the client sees, you might be directed to something else to look at, and as long
as it is in scope and worthwhile, you should do so. Because this status meeting will
focus on the technology review that was performed in this phase, there should be
appropriate representatives from the client side. If you think that certain people
should be at this status meeting, you should request it. The value of this particular
status meeting is that you can gain buy-in from the client regarding the findings.
This is important because you do not want to spend time developing recommenda-
tions for findings that may not be legitimate in the first place. In addition, when you
present the final report, the client will not be surprised by its content.
POTENTIAL CONCERNS DURING THIS PHASE
As with the other phases, the technology review has a few potential pitfalls that are
worth being aware of, including:
Client approval for any testing done on their systems — Ensure that you
obtain approval for any testing done on the client’s systems. For account-
ability purposes, this approval is absolutely critical. Avoid the situation
where the approval is vague. Ideally, the client should at least acknowledge
the testing procedure.
Spend testing time wisely — The security assessment methodology is risk based, so allocate your time and effort to systems according to their risk. Time is limited for both you and the client technology representatives who will facilitate your testing, so make the most of it.
Findings need to be specific — As you develop your findings from a
technical perspective, keep in mind that the audience for these findings