Security Architecture

Basic concepts related to security architecture include the Trusted Computing Base (TCB), open and closed systems, protection rings, security modes, and recovery procedures.

Trusted Computing Base (TCB)

A Trusted Computing Base (TCB) is the entire complement of protection mechanisms within a computer system (including hardware, firmware, and software) that’s responsible for enforcing a security policy. A security perimeter is the boundary that separates the TCB from the rest of the system.

instantanswer.eps A Trusted Computing Base (TCB) is the total combination of protection mechanisms within a computer system (including hardware, firmware, and software) that’s responsible for enforcing a security policy.

Access control is the ability to permit or deny the use of an object (a passive entity, such as a system or file) by a subject (an active entity, such as an individual or a process).

instantanswer.eps Access control is the ability to permit or deny the use of an object (a system or file) by a subject (an individual or a process).

A reference monitor is a system component that enforces access controls on an object. Stated another way, a reference monitor is an abstract machine that mediates all access to an object by a subject.

instantanswer.eps A reference monitor is a system component that enforces access controls on an object.
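The reference monitor concept can be illustrated with a minimal sketch: every access by a subject to an object passes through a single mediation function, which consults the policy and permits only what's explicitly granted. The subjects, objects, and ACL below are hypothetical examples, not from any real system.

```python
# Hypothetical sketch of the reference monitor concept: all access by a
# subject (active entity) to an object (passive entity) is mediated by
# one check_access() function. The ACL entries are illustrative only.

ACL = {
    ("alice", "payroll.db"): {"read", "write"},
    ("bob", "payroll.db"): {"read"},
}

def check_access(subject: str, obj: str, action: str) -> bool:
    """Mediate all access: permit only what the policy explicitly grants."""
    return action in ACL.get((subject, obj), set())

print(check_access("bob", "payroll.db", "read"))   # True
print(check_access("bob", "payroll.db", "write"))  # False
```

Note that the sketch reflects only the first security-kernel requirement (mediating all access); in a real system, the mechanism must also be protected from modification and verified as correct.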

A security kernel is the combination of hardware, firmware, and software elements in a Trusted Computing Base that implements the reference monitor concept. Three requirements of a security kernel are that it must

check.png Mediate all access

check.png Be protected from modification

check.png Be verified as correct

instantanswer.eps A security kernel is the combination of hardware, firmware, and software elements in a Trusted Computing Base (TCB) that implements the reference monitor concept.

Open and closed systems

An open system is a vendor-independent system that complies with a published and accepted standard. This compliance with open standards promotes interoperability between systems and components made by different vendors. Additionally, open systems can be independently reviewed and evaluated, which facilitates identification of bugs and vulnerabilities and the rapid development of solutions and updates. Examples of open systems include the Linux operating system, the OpenOffice desktop productivity suite, and the Apache web server.


A closed system uses proprietary hardware and/or software that may not be compatible with other systems or components. Source code for software in a closed system isn’t normally available to customers or researchers. Examples of closed systems include the Microsoft Windows operating system, Oracle database management system, and Apple iTunes.

Protection rings

cross-reference.eps The concept of protection rings implements multiple concentric domains with increasing levels of trust near the center. The most privileged ring is identified as Ring 0 and normally includes the operating system’s security kernel. Additional system components are placed in the appropriate concentric ring according to the principle of least privilege. (For more on this topic, read Chapter 10.) The MIT MULTICS operating system implements the concept of protection rings in its architecture.
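A minimal sketch can make the ring model concrete: each operation requires a maximum ring number, and a caller running in a higher-numbered (less privileged) ring is denied. The operations and ring assignments below are illustrative assumptions, not drawn from any particular operating system.

```python
# Illustrative sketch of protection rings: lower ring number = more
# privilege. A caller may invoke an operation only if its ring number
# is at or below the ring the operation requires. Example data only.

REQUIRED_RING = {
    "load_kernel_module": 0,   # security kernel functions: Ring 0 only
    "read_user_file": 3,       # ordinary applications: Ring 3
}

def invoke(operation: str, caller_ring: int) -> bool:
    """Permit the call only if the caller is at least as privileged."""
    return caller_ring <= REQUIRED_RING[operation]

print(invoke("read_user_file", 3))       # True
print(invoke("load_kernel_module", 3))   # False
```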

Security modes

A system’s security mode of operation describes how a system handles stored information at various classification levels. Several security modes of operation, based on the classification level of information being processed on a system and the clearance level of authorized users, have been defined. These designations are typically used for U.S. military and government systems, and include

check.png Dedicated: All authorized users must have a clearance level equal to or higher than the highest level of information processed on the system and a valid need-to-know.

check.png System High: All authorized users must have a clearance level equal to or higher than the highest level of information processed on the system, but a valid need-to-know isn’t necessarily required.

check.png Multilevel: Information at different classification levels is stored or processed on a trusted computer system (a system that employs all necessary hardware and software assurance measures and meets the specified requirements for reliability and security). Authorized users must have an appropriate clearance level, and access restrictions are enforced by the system accordingly.

check.png Limited access: Authorized users aren’t required to have a security clearance, but the highest level of information on the system is Sensitive but Unclassified (SBU).

instantanswer.eps A trusted computer system is a system with a Trusted Computing Base (TCB).
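In multilevel mode, the system itself must enforce both the clearance check and need-to-know before granting access. A minimal sketch of such a check follows; the classification levels mirror the U.S. government scheme, but the compartment names and example data are assumptions for illustration only.

```python
# Illustrative multilevel-mode access check: a subject may read an object
# only if the subject's clearance dominates the object's classification
# AND the subject has need-to-know (modeled here as compartment
# membership). Levels follow the U.S. scheme; compartments are invented.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_read(clearance: str, need_to_know: set,
             classification: str, compartment: str) -> bool:
    return (LEVELS[clearance] >= LEVELS[classification]
            and compartment in need_to_know)

print(may_read("Secret", {"ops"}, "Confidential", "ops"))  # True
print(may_read("Secret", {"ops"}, "Top Secret", "ops"))    # False
print(may_read("Top Secret", {"hr"}, "Secret", "ops"))     # False: no need-to-know
```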

Security modes of operation generally come into play in environments that contain highly sensitive information, such as government and military environments. Most private-sector and educational systems run in multilevel mode, meaning they store and process information at various sensitivity levels.

cross-reference.eps See Chapter 6 for more on security clearance levels.

Recovery procedures

A hardware or software failure can potentially compromise a system’s security mechanisms. Security designs that protect a system during a hardware or software failure include

check.png Fault-tolerant systems: These systems continue to operate after the failure of a computer or network component. The system must be capable of detecting and correcting — or circumventing — a fault.

check.png Fail-safe systems: When a hardware or software failure is detected, program execution is terminated, and the system is protected from compromise.

check.png Fail-soft (resilient) systems: When a hardware or software failure is detected, certain noncritical processing is terminated, and the computer or network continues to function in a degraded mode.

check.png Failover systems: When a hardware or software failure is detected, the system automatically transfers processing to a standby component, such as another server in a cluster.
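The failover idea can be sketched in a few lines: try the primary component, and on detecting a failure, transfer the request to a standby. The function and component names below are hypothetical.

```python
# Minimal failover sketch: detect a failure on the primary and
# automatically transfer processing to a standby. Names are invented.

def call_with_failover(primary, standby, request):
    try:
        return primary(request)
    except Exception:
        # Failure detected: transfer processing to the standby component.
        return standby(request)

def primary(req):
    raise ConnectionError("primary node down")

def standby(req):
    return f"handled '{req}' on standby"

print(call_with_failover(primary, standby, "GET /status"))
```

A real failover mechanism also needs health checks, timeouts, and a way to fail back once the primary recovers; this sketch shows only the transfer itself.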

cross-reference.eps See Chapter 10 for more information on resilient techniques including clustering, high availability, and fault tolerance.

Vulnerabilities in security architectures

Unless they're detected and corrected by an experienced security analyst, many weaknesses may be present in a system and permit exploitation, attack, or malfunction. The following list describes the most important of these problems:

check.png Covert channels: Unknown, hidden communications that take place within the medium of a legitimate communications channel.

check.png Rootkits: By their very nature, rootkits are designed to subvert system architecture: they insert themselves into an environment in a way that makes them difficult or impossible to detect. For instance, some rootkits run as a hypervisor and demote the computer’s operating system to a guest, which changes the basic nature of the system in a powerful but subtle way. We wouldn’t normally discuss malware in a chapter on computer and security architecture, but rootkits, which use various techniques to hide themselves from the target system, are a game-changer that warrants mention.

check.png Race conditions: Software code in multiprocessing and multiuser systems, unless very carefully designed and tested, can result in critical errors that are difficult to find. A race condition is a flaw in a system where the output or result of an activity in the system is unexpectedly tied to the timing of other events. The term race condition comes from the idea of two events or signals that are racing to influence an activity.

The most common race condition is the time-of-check-to-time-of-use bug caused by changes in a system between the checking of a condition and the use of the results of that check. For example, two programs that both try to open a file for exclusive use may both open the file, even though only one should be able to.
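The file-opening example above can be sketched directly. The racy version checks for the file and then creates it as two separate steps, leaving a window for another process to create the file in between; the safe version makes the check and the creation a single atomic operation. The function names are invented for illustration.

```python
# Sketch of the time-of-check-to-time-of-use (TOCTOU) bug and one fix.

import os

def create_exclusive_racy(path):
    if not os.path.exists(path):          # time of check
        # ...another process may create `path` in this window...
        return open(path, "w")            # time of use: may clobber theirs
    raise FileExistsError(path)

def create_exclusive_safe(path):
    # O_CREAT | O_EXCL makes check-and-create one atomic operation:
    # the call fails with FileExistsError if the file already exists.
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    return os.fdopen(fd, "w")
```

The general cure for TOCTOU bugs is the same as in this sketch: replace a separate check and use with a single operation that the operating system guarantees is atomic.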

check.png State attacks: Web-based applications use session management to distinguish users from one another. The mechanisms that a web application uses to establish sessions must be able to resist attack. In particular, the algorithms used to create session identifiers must not permit an attacker to steal, or to guess, other users’ session identifiers. A successful attack would allow the attacker to take over another user’s session, which can lead to the compromise of confidential data, fraud, and monetary theft.
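Resistance to guessing comes down to generating session identifiers from a cryptographically secure random source rather than from predictable values such as counters or timestamps. A minimal sketch using Python's standard `secrets` module:

```python
# Sketch of guess-resistant session-identifier generation: draw the
# identifier from a cryptographically secure random source (`secrets`),
# never from a counter, timestamp, or ordinary pseudorandom generator.

import secrets

def new_session_id() -> str:
    # 32 random bytes encoded as a 43-character URL-safe token;
    # infeasible for an attacker to predict or enumerate.
    return secrets.token_urlsafe(32)

print(new_session_id())
```

Generation is only half the job; the identifiers must also be protected in transit (for example, with TLS and appropriate cookie attributes) so they can't simply be stolen.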

check.png Emanations: The unintentional emissions of electromagnetic or acoustic energy from a system can be intercepted by others and possibly used to illicitly obtain information from the system. A common form of undesired emanations is radiated energy from CRT (cathode-ray tube) computer monitors. A third party can discover what data is being displayed on a CRT by intercepting radiation emanating from the display adapter or monitor from as far as several hundred meters. A third party can also eavesdrop on a network if its cable plant includes one or more unterminated coaxial cables.
