11
Technologies
In this chapter you will
•  Learn basic technologies employed to provide for information security
•  Discover the basic approaches to authentication and identity management
•  Examine security models used to implement security in systems
•  Explore the types of adversaries associated with software security
Technologies are one of the driving forces behind software. New technology is rare without a software element. And software uses specific technology to achieve its security objectives. This chapter will examine some of the basic technologies used to enable security functionality in software.
Authentication and Identity Management
Authentication is an identity verification process that attempts to determine whether users are who they say they are. Identity management is the comprehensive set of services related to managing the use of identities as part of an access control solution. Strictly speaking, the identity process is one where a user establishes their identity. Authentication is the act of verifying the supplied credentials against the set established during the identity process. The term identity management (IDM) refers to the set of policies, processes, and technologies for managing digital identity information. The term identity and access management (IAM) is another term associated with the comprehensive set of policies, processes, and technologies for managing digital identity information.
Identity Management
Identity management is a set of processes associated with the identity lifecycle, including the provisioning, management, and deprovisioning of identities. The provisioning step involves the creation of a digital identity from an actual identity. The source of the actual identity can be a person, a process, an entity, or virtually anything. The identity process binds some form of secret to the digital identity so that at future times, the identity can be verified. The secret that is used to verify identity is an item deserving specific attention as part of the development process. Protecting the secret while keeping it usable is a foundational element of the activity.
In a scalable system, management of identities and the associated activities needs to be automated. A large number of identity functions need to be handled in a comprehensive fashion. Changes to identities, the addition and removal of roles, changes to rights and privileges associated with roles or identities—all of these items need to be done securely and logged appropriately. The complexity of the requirements makes the use of existing enterprise systems an attractive option when appropriate.
EXAM TIP  Security controls are audited annually under Sarbanes-Oxley (SOX) section 404, and IAM controls are certainly security controls. Designing and building IAM controls to support this operational issue is a good business practice.
Identity management can be accomplished through third-party programs that enhance the operating system offerings in this area. The standard operating system implementation leaves much to be desired in management of user-selected provisioning or changes to identity metadata. Many enterprises have third-party enterprise-class IDM systems that provide services such as password resets, password synchronization, single sign-on, and multiple identity methods.
Having automated password resets can free up significant help desk time and provide faster service to users who have forgotten their password. Automated password reset systems require a reasonable set of challenges to verify that the person requesting the reset is authorized to do so. Then, the reset must occur in a way that does not expose the old password. E-mail resets via a uniform resource locator (URL) are one common method employed for the reset operation. Users can have multiple passwords on different systems, complicating the user activity. Password synchronization systems allow a user to synchronize a set of passwords across connected, but different, identity systems, making it easier for users to access the systems. Single sign-on is an industrial-strength version of synchronization. Users enter their credentials into one system, and it connects to the other systems, authenticating based on the entered credentials.
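One way to implement the e-mailed reset URL is a time-limited, signed token. The following is a minimal sketch using only the Python standard library; the server secret and user name are hypothetical, and a production system would also record and invalidate issued tokens, log the activity, and deliver the URL out of band.

```python
# Minimal sketch: a time-limited reset token that never exposes the old password.
# SERVER_SECRET and the user name are hypothetical placeholders.
import hashlib
import hmac
import secrets
import time

SERVER_SECRET = secrets.token_bytes(32)   # hypothetical per-deployment secret

def issue_reset_token(username, ttl_seconds=900):
    """Create a token that can be embedded in an e-mailed reset URL."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{username}:{expires}".encode()
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return f"{username}:{expires}:{sig}"

def verify_reset_token(token):
    """Check the signature and expiry before allowing a new password to be set."""
    try:
        username, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    payload = f"{username}:{expires}".encode()
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expires)

token = issue_reset_token("alice")
print(verify_reset_token(token))   # True until the token expires
```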
NOTE   Passwords and other verification credentials—personal identification numbers (PINs), passphrases, token values, and so on—are secrets and should never be accessible by anyone, including system administrators. Cryptography allows secrets to remain secret and still be used. If a system can e-mail you your password, the password is not stored properly; disclosure should be impossible.
Authentication
Authentication is the process of verifying that a user is who they claim to be and applying the correct values in the access control system. The level of required integration is high, from the storage systems that store the credentials and the access control information to the transparent handling of the information establishing or denying the validity of a credential match. When referring to authentication, one is referring to the process of verification of an identity. When one refers to the authentication system, one is typically referring to the underlying operating system aspect, not the third-party application that sits on top.
Authentication systems come in a variety of sizes and types. Several different elements can be used as secrets as part of the authentication process. Passwords, tokens, biometrics, smart cards—the list can be long. The types can be categorized as something you know, something you have, or something you are. The application of one or more of these factors simultaneously for identity verification is a standard process in virtually all computing systems.
The underlying mechanism has some best-practice safeguards that should be included in a system. Mechanisms such as an escalating time lock-out after a given number of successive failures, logging of all attempts (both successful and failed), and integration with the authorization system once authentication is successful are common protections. Password/token reset, account recovery, periodic changes, and password strength issues are just some of the myriad of functionalities that need to be encapsulated in an authentication system.
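A minimal sketch of two of these safeguards—storing only a salted hash of the password and locking an account after repeated failures—using only the Python standard library. The in-memory failure counter and user names are illustrative; a real system would persist this state, log every attempt, and add escalating time delays.

```python
# Minimal sketch: salted PBKDF2 credential verification with a simple lockout.
import hashlib
import hmac
import os

MAX_FAILURES = 5
failures = {}   # per-user failed-attempt counters (in-memory for the sketch)

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 hash suitable for storage instead of the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def authenticate(user, password, salt, stored_digest):
    """Verify a credential with a constant-time compare and a lockout counter."""
    if failures.get(user, 0) >= MAX_FAILURES:
        return False                       # locked out after successive failures
    _, digest = hash_password(password, salt)
    if hmac.compare_digest(digest, stored_digest):
        failures[user] = 0
        return True
    failures[user] = failures.get(user, 0) + 1   # a real system would also log this
    return False

salt, stored = hash_password("correct horse battery staple")
print(authenticate("alice", "correct horse battery staple", salt, stored))  # True
print(authenticate("alice", "wrong guess", salt, stored))                   # False
```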
Numerous technologies are in use for authentication and authorization. Federated ID systems allow users to connect to systems through other, known systems. The ability to use your Facebook account to log in to another system is a useful convenience for many users. By offering a familiar user experience (UX) and a simple way for users to obtain a single sign-on environment, federated ID systems can take advantage of best-practice authentication systems. There are two main parties in these systems: a relying party (RP) and an identity provider (IdP). The user wishes access to an RP and has credentials established on an IdP. As shown in Figure 11-1, a trust relationship exists between the RP and the IdP, and through data transfers between these parties, access can be granted.
Figure 11-1   RP and IdP relationships
Two of the more prevalent systems are OAuth and OpenID. OpenID was created for federated authentication, specifically to allow a third party to authenticate your users for you by using accounts that users already have. The OpenID protocol enables websites or applications (consumers) to grant access to their own applications by using another service or application (provider) for authentication. This can be done without requiring users to maintain a separate account/profile with the consumers.
OAuth was created to eliminate the need for users to share their passwords with third-party applications. The OAuth protocol enables websites or applications (consumers) to access protected resources from a web service (service provider) via an application programming interface (API), without requiring users to disclose their service provider credentials to the consumers. Figure 11-2 highlights some of the differences and similarities between OpenID and OAuth.
Figure 11-2   OpenID vs. OAuth
Both OpenID (for authentication) and OAuth (for authorization) accomplish many of the same things. Each protocol provides a different set of features, which are required by their primary objective, but essentially, they are interchangeable. At their core, both protocols have an assertion verification method. They differ in that OpenID is limited to the “this is who I am” assertion, while OAuth provides an “access token” that can be exchanged for any supported assertion via an API.
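As an illustration of the "access token" exchange, the following sketch performs a standard OAuth 2.0 authorization-code exchange (RFC 6749) using only the Python standard library. The token endpoint URL, client ID, and client secret are hypothetical placeholders; a real integration follows the identity provider's published documentation.

```python
# Minimal sketch: trading an OAuth 2.0 authorization code for an access token.
# The endpoint, client_id, and client_secret below are hypothetical placeholders.
import json
import urllib.parse
import urllib.request

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"   # hypothetical IdP

def exchange_code_for_token(code, redirect_uri):
    """POST the standard RFC 6749 parameters and return the token response."""
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": "example-client-id",          # hypothetical
        "client_secret": "example-client-secret",  # hypothetical; keep secret
    }).encode()
    req = urllib.request.Request(TOKEN_ENDPOINT, data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())   # contains access_token, token_type, ...

# token = exchange_code_for_token(code_from_callback, "https://rp.example.com/cb")
```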
Credential Management
There are numerous methods of authentication, and each has its own set of credentials that require management. The identifying information that is provided by a user as part of their claim to be an authorized user is sensitive data and requires significant protection. The identifying information is frequently referred to as credentials. These credentials can be in the form of a passed secret, typically a password. Other common forms include digital strings that are held by hardware tokens or devices, biometrics, and certificates. Each of these forms has advantages and disadvantages.
Each set of credentials, regardless of the source, requires safekeeping on the part of the receiving entity. Managing of these credentials includes tasks such as credential generation, storage, synchronization, reset, and revocation. Because of the sensitive nature of manipulating credentials, all of these activities should be logged.
X.509 Credentials
X.509 refers to a series of standards associated with the manipulation of certificates used to transfer asymmetric keys between parties in a verifiable manner. A digital certificate binds an individual’s identity to a public key, and it contains all the information a receiver needs to be assured of the identity of the public key owner. After a registration authority (RA) verifies an individual’s identity, the certificate authority (CA) generates the digital certificate. The digital certificate can contain the information necessary to facilitate authentication.
X.509 Digital Certificate Fields
The following fields are included within an X.509 digital certificate:
•  Version number   Identifies the version of the X.509 standard that was followed to create the certificate; indicates the format and fields that can be used.
•  Serial number   Provides a unique number identifying this one specific certificate issued by a particular CA.
•  Signature algorithm   Specifies the hashing and digital signature algorithms used to digitally sign the certificate.
•  Issuer   Identifies the CA that generated and digitally signed the certificate.
•  Validity   Specifies the dates through which the certificate is valid for use.
•  Subject   Specifies the owner of the certificate.
•  Public key   Identifies the public key being bound to the certified subject; also identifies the algorithm used to create the private/public key pair.
•  Certificate usage   Specifies the approved use of the certificate, which dictates intended use of this public key.
•  Extensions   Allow additional data to be encoded into the certificate to expand its functionality. Companies can customize the use of certificates within their environments by using these extensions. X.509 version 3 has expanded the extension possibilities.
Certificates are created and formatted based on the X.509 standard, which outlines the necessary fields of a certificate and the possible values that can be inserted into the fields. As of this writing, X.509 version 3 is the most current version of the standard. X.509 is a standard of the International Telecommunication Union (www.itu.int). The IETF's Public-Key Infrastructure (X.509), or PKIX, working group has adapted the X.509 standard to the more flexible organization of the Internet, as specified in RFC 3280 (since superseded by RFC 5280); this profile is commonly referred to as PKIX, for Public Key Infrastructure X.509.
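The fields described above can be inspected programmatically. The following minimal sketch, using only the Python standard library, retrieves the certificate presented by an arbitrary example web server over TLS and prints several of the standard X.509 fields.

```python
# Minimal sketch: reading X.509 fields from a live TLS connection.
# The host name is an arbitrary example, not a recommendation.
import socket
import ssl

host = "www.example.com"
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()   # decoded dictionary of certificate fields

print(cert["version"])                        # X.509 version (3 for v3)
print(cert["serialNumber"])                   # unique per certificate for the CA
print(cert["issuer"])                         # the CA that signed the certificate
print(cert["subject"])                        # the certificate owner
print(cert["notBefore"], cert["notAfter"])    # validity period
```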
The public key infrastructure (PKI) associated with certificates enables the passing and verification of these digital elements between firms. Because certificates are cryptographically signed, elements within them are protected from unauthorized alteration and can have their source verified. Building out a complete PKI infrastructure is a complex endeavor, requiring many different levels of protection to ensure that only authorized entities are permitted to make changes to the certification.
Setting up a functioning and secure PKI solution involves many parts, including certificate authorities, registration authorities, and certificate revocation mechanisms, either Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP).
Figure 11-3 shows the actual values of the different certificate fields for a particular certificate in Internet Explorer. The version of this certificate is V3 (X.509 v3), and the serial number is also listed—this number is unique for each certificate created by a specific CA. The CA used the SHA-1 hashing algorithm to create the message digest value and then signed that digest with its RSA private key.
Figure 11-3   Digital certificate
X.509 certificates provide a wide range of benefits to any application that needs to work with public key cryptography. Certificates provide a standard means of passing keys, a standard that is accepted by virtually every provider and consumer of public keys. This makes X.509 a widely used and proven technology.
Single Sign-On
Single sign-on (SSO) makes it possible for a user, after authentication, to have his credentials reused on other applications without the user re-entering the secret. To achieve this, it is necessary to store the credentials outside of the application and then reuse the credentials against another system. There are a number of ways that this can be accomplished, but two of the most popular and accepted methods for sharing authentication information are Kerberos and Security Assertion Markup Language (SAML). The OpenID protocol has proven to be a well-vetted and secure protocol for SSO. However, as with all technologies, security vulnerabilities can still occur due to misuse or misunderstanding of the technology.
The key concept with respect to SSO is federation. In a federated authentication system, users can log in to one site and access another or affiliated site without re-entering credentials. The primary objective of federation is user convenience. Authentication is all about trust, and federated trust is difficult to establish. SSO can be challenging to implement and because of trust issues, it is not an authentication panacea. As in all risk-based transactions, a balance must be achieved between the objectives and the risks. SSO-based systems can create single-point-of-failure scenarios, so for certain high-risk implementations, their use is not recommended.
Flow Control (Proxies, Firewalls, Middleware)
In information processing systems, information flows between nodes, between processes, and between applications. The movement of information across a system or series of systems has security consequences. Sensitive information must be protected, with access provided to authorized parties and denied to unauthorized ones. The movement of information must be channeled correctly and protected along the way. Technologies such as firewalls, proxies, and message queues can be utilized to facilitate proper information transfer.
Firewalls
Firewalls act as policy enforcement devices, determining whether to pass or block communications based on a variety of factors. Network-level firewalls operate using the information associated with networking to determine who can communicate with whom. Next-generation firewalls provide significantly greater granularity in communication decisions. Firewalls operate on a packet level, and can be either stateless or stateful. Basic network firewalls operate on a packet-by-packet basis and use addressing information to make decisions. In doing so, they are stateless, not carrying information from packet to packet as part of the decision process. Advanced firewalls can analyze multiple packets and utilize information from the protocols being carried to make more granular decisions. Did the packet come in response to a request from inside the network? Is the packet carrying information across web channels, port 80, using authorized or unauthorized applications? This level of stateful packet inspection, although difficult to scale, can be useful in providing significant levels of communication protection.
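To make the packet-by-packet, stateless decision process concrete, the following is a minimal sketch of first-match rule evaluation with a default-deny rule at the end. The rule format and packet fields are simplified illustrations, not any particular firewall's configuration language.

```python
# Minimal sketch: stateless, first-match packet filtering with default deny.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str   # "tcp" or "udp"

# First matching rule wins; the final rule implements default deny.
RULES = [
    {"dst_port": 443, "protocol": "tcp", "action": "allow"},   # HTTPS inbound
    {"dst_port": 80,  "protocol": "tcp", "action": "allow"},   # HTTP inbound
    {"action": "deny"},                                        # default deny
]

def filter_packet(pkt):
    for rule in RULES:
        if all(getattr(pkt, field) == value
               for field, value in rule.items() if field != "action"):
            return rule["action"]
    return "deny"

print(filter_packet(Packet("203.0.113.7", "10.0.0.5", 443, "tcp")))  # allow
print(filter_packet(Packet("203.0.113.7", "10.0.0.5", 23, "tcp")))   # deny
```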
Firewalls are basically devices that, at the end of the day, are supposed to allow the desired communications and block undesired communications. Malicious attempts to manipulate a system via communication channels can be detected and blocked using a firewall. Firewalls can work with intrusion detection systems, acting as the enforcer in response to another system’s inputs. One of the limitations of firewalls is governed by network architecture. When numerous paths exist for traffic to flow between points, determining where to place devices such as firewalls becomes increasingly difficult and at times nearly impossible. Again, as with all things security, balance becomes a guiding principle.
Proxies
Proxies are similar to firewalls in that they can mediate traffic flows. They differ in that they act as middlemen, somewhat like a post office box. Traffic from untrusted sources is terminated at a proxy, where the traffic is received and to some degree processed. If the traffic meets the correct rules, it can then be forwarded on to the intended system. Proxies come in a wide range of capabilities, from simple to very complex, both in their rule-processing capabilities and additional functionalities. One of these functionalities is caching—a temporary local storage of web information that is frequently used and seldom changed, like images. In this role, a proxy acts as a security device and a performance-enhancing device.
Application Firewalls
Application firewalls are becoming more popular, acting as application-specific gateways between users (and potential users) and web-based applications. Acting as a firewall proxy, web application firewalls can monitor traffic in both directions, client to server and server to client, watching for anomalies. Web application firewalls act as guards against both malicious intruders and misbehaving applications. Should an outsider attempt actions that are not authorized against an application, the web application firewall can block the requests from reaching the application. Should the application experience some failure resulting in, say, large-scale data transfers when only small data transfers are the norm, the web application firewall can likewise block the data from leaving the enterprise.
EXAM TIP  One of the requirements of the PCI Data Security Standard is for web applications to either have a web application firewall between the server and users or to perform application code reviews.
Queuing Technology
Message transport from sender to receiver can be done either synchronously or asynchronously, and with either guaranteed delivery or best effort. Internet protocols can manage the guaranteed/best-effort part, but a separate mechanism is needed if asynchronous transport is permissible. Asynchronous transport can alleviate network congestion during periods when traffic flows are high and can help prevent the loss of traffic due to bandwidth restrictions. Queuing technologies in the form of message queues can provide a guaranteed mechanism of asynchronous transport, solving many short-term network congestion issues. There are numerous vendors in the message queue space, including Microsoft, Oracle, and IBM.
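The decoupling that a message queue provides can be illustrated in a few lines. The following minimal sketch uses the Python standard library's in-process queue; enterprise message queue products provide the same producer/consumer decoupling with persistence and delivery guarantees across the network.

```python
# Minimal sketch: a producer enqueues messages without waiting for the consumer.
import queue
import threading
import time

msg_queue = queue.Queue()

def consumer():
    while True:
        msg = msg_queue.get()          # blocks until a message is available
        if msg == "STOP":
            break
        print("processed:", msg)
        msg_queue.task_done()

worker = threading.Thread(target=consumer, daemon=True)
worker.start()

# The producer is not blocked waiting for the receiver; messages are queued
# and drained when the consumer has capacity.
for i in range(3):
    msg_queue.put(f"message {i}")
time.sleep(0.1)
msg_queue.put("STOP")
worker.join()
```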
Logging
An important element in any security system is the presence of security logs. Logs enable personnel to examine information from a wide variety of sources after the fact, providing information about what actions transpired, with which accounts, on which servers, and with what specific outcomes. Many compliance programs require some form of logging and log management. The challenges in designing log programs are what to log and where to store it.
What needs to be logged is a function of several criteria. First, numerous compliance programs—HIPAA, SOX, PCI DSS, EOC, and others—have logging requirements, and these need to be met. The next criterion is associated with incident response: what information would investigators want or need in order to research failures and issues? This is a question for the development team—what can be logged that would provide useful information to investigators, either about the cause of an issue or about its impact?
The “where to log it” question also has several options, each with advantages and disadvantages. Local logging can be simple and quick for the development team. But it has the disadvantage of being yet another log to secure and integrate into the enterprise log management system. Logs by themselves are not terribly useful. What makes individual logs useful is the combination of events across other logs, detailing the activities of a particular user at a given point in time. This requires a coordination function, one that is supported by many third-party software vendors through their security information and event management (SIEM) tool offerings. These tools provide a rich analytical environment to sift through and find correlations in large datasets of security information.
Syslog
Syslog is an Internet Engineering Task Force (IETF)–approved protocol for log messaging. It was designed and built around UNIX and provides a UNIX-centric format for sending log information across an IP network. Although in its native form it uses the User Datagram Protocol (UDP) and transmits information in the clear, wrappers are available that provide Transport Layer Security (TLS)–based protection and TCP-based communication guarantees. While syslog is the de facto standard for log management in Linux and UNIX environments, there is no direct equivalent in the Microsoft sphere of influence. Microsoft systems log locally, and there are some Microsoft solutions for aggregating logs to a central server, but these solutions are not as mature as syslog. Part of the reason for this is the myriad of third-party logging and log management solutions that provide superior business-level analytical packages focused on log data.
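Sending application events to a syslog collector is straightforward from most languages. The following is a minimal sketch using the Python standard library; the collector address is a hypothetical placeholder, and, as noted above, native syslog traffic travels as clear-text UDP unless a TLS wrapper is used.

```python
# Minimal sketch: shipping application events to a syslog collector over UDP.
# The collector host name is a hypothetical placeholder.
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user alice authenticated from 203.0.113.7")
logger.warning("failed login for user alice from 198.51.100.9")
```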
Data Loss Prevention
Data is the asset that security ultimately strives to protect. There may be secondary assets, such as equipment, controls, and applications, but these are all in place to protect the data in an organization. Data loss prevention (DLP) technologies exist as a last line of defense. DLP solutions act by screening traffic, looking for traffic that matches profile parameters. The profile may be based on the size of a transfer, the destination, or specific data elements that are protected. If any of these elements is detected, a data exfiltration event is presumed to be in progress and the connection is terminated.
Simple in theory but complex in implementation, DLP is a valuable tool in a defense-in-depth environment. One of the challenges has to do with detection location: DLP technology needs to sit in the actual network path involved in the data transfer. In simple networks, this is easy; in large enterprises, it can be very challenging, and in enterprises with numerous external connections, it can be complex and expensive. The second challenge is visibility into the data itself, as attackers use encryption to prevent the data streams from being detected. The whole process gets more complicated with the move of services and data into the cloud.
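A minimal sketch of the profile-matching idea follows, assuming simple regular-expression profiles for two illustrative data elements. Real DLP products combine many detection techniques and operate on live network traffic rather than strings.

```python
# Minimal sketch: screening outbound content against protected-data profiles.
import re

PROFILES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),   # 16 digits, optional separators
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_outbound(payload):
    """Return the names of any protected data elements found in the payload."""
    return [name for name, pattern in PROFILES.items() if pattern.search(payload)]

hits = screen_outbound("invoice for 4111 1111 1111 1111, contact 123-45-6789")
if hits:
    print("block transfer; matched profiles:", hits)   # i.e., terminate the connection
```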
Virtualization
A recent trend for both servers and workstations is the addition of a virtualization layer between the hardware and the operating system. This virtualization layer provides many benefits, allowing multiple operating systems to operate concurrently on the same hardware. Virtualization offers many advantages in the form of operational flexibility. It also offers some security advantages. If a browser running in a virtual machine downloads harmful content, the virtual machine can be deleted at the end of the session, preventing the spread of any malware to the other operating systems. The major providers of virtualization software are VMware, Microsoft, Oracle, and Xen.
Virtualization can provide many benefits to an organization, and these benefits are causing the rapid move to virtualization across many enterprises. These benefits include
•  Reduced cost of servers resulting from server consolidation
•  Improved operational efficiencies from administrative ease of certain tasks
•  Improved portability and isolation of applications, data, and platforms
•  Operational agility to scale environments, i.e., cloud computing
Virtual machines (VMs) are becoming a mainstream platform in many enterprises because of their advantages. Understanding the ramifications of a VM environment on an application can be important for a development team if there is any chance that the application would ever be deployed in one.
Digital Rights Management
Digital rights management (DRM) is the series of technologies employed so that content owners can exert control over digital content on their systems. The objective of digital rights management is the protection of intellectual property in the digital world, where flawless copies can easily be made and the very media lends itself to a wide range of options with respect to modifications and changes.
DRM is not just about copy protection, but also about usage rights, authenticity, and integrity. DRM can allow a file to be shared, but not edited or printed. DRM can restrict content to a specific piece of hardware. There are three entities in the DRM relationship: users, content, and rights. There is a formal language associated with DRM and machine adoption, known as Rights Expression Language (REL). REL is XML based and designed to convey rights in a machine-readable form. The function of an REL is to define the license and to describe the permissions or restrictions it implies for how the related content may be used by a system. There are several well-known RELs:
•  ccREL   An RDF schema used by the Creative Commons project and the GNU project to express their General Public License (GPL) in machine-readable form.
•  ODRL   Open Digital Rights Language, an open standard for an XML-based REL.
•  MPEG-21   Part 5 of this MPEG standard includes an REL.
•  XrML   eXtensible rights Markup Language. XrML began based on work at Xerox in the 1990s.
Digital rights management has a mixed reputation due to several problems associated with its implementation. DRM ties software rights to hardware systems, two different platforms with widely varying lifetimes. There have been cases of music players and e-books where the vendors have changed schemes, making previously purchased content no longer available once the hardware changes. This has brought frustration to people whose equipment has failed, only to learn that this cost them their content as well. The copy protection scheme cannot determine the difference between an illegal copy and a legitimate one, such as a backup. Most of these issues are policy issues on the part of the content owner, not the licensee, and it will take time for the marketplace to sort out these and other issues.
Trusted Computing
Trusted computing (TC) is a term used to describe technology developed and promoted by the Trusted Computing Group. This technology is designed to ensure that the computer behaves in a consistent and expected manner. One of the key elements in the TC effort is the Trusted Platform Module (TPM), a hardware interface for security operations.
TCB
The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security. The concept of a TCB has its roots in the early 1970s, when computer security researchers examined systems and found that much of a system could misbehave without resulting in a security incident. Because the basis of security is defined in an organization's security policy, one risk is that, under a sufficiently strict interpretation of security, a consultant could find or justify almost anything as affecting security.
With that concern acknowledged, the principal issue from days gone by is privilege escalation. If any element of the computer system can effect an increase in privilege without authorization, then this is a violation of security, and that part of the system is part of the TCB. The idea of a TCB is not just theoretical conjecture; it creates the foundation for security principles such as complete mediation.
TPM
The Trusted Platform Module is a hardware implementation of a set of cryptographic functions on a computer’s motherboard. The intent of the TPM is to provide a base level of security that is deeper than the operating system and virtually tamperproof from the software side of the machine. The TPM can hold an encryption key that is not accessible to the system except through the TPM chip. This assists in securing the system, but has also drawn controversy from some quarters concerned that the methodology could be used to secure the machine from its owner. There also are concerns that the TPM chip could be used to regulate what software runs on the system.
Figure 11-4 illustrates the different features in the TPM hardware available for use by the computing platform. It has a series of cryptographic functions: a hash generator, an RSA key generator, and a cryptographically appropriate random number generator, which work together with an encryption/decryption/signature engine to perform the basic cryptographic functions securely in silicon. The chip also features storage areas holding the manufacturer's endorsement key and a series of other keys and operational registers.
Figure 11-4   TPM hardware functions
Malware
Malware is a term used to describe software that has malicious intent. By its very nature, malware performs actions that would not be authorized if the user were aware of them and capable of determining whether they should happen. In today's world of sophisticated malware, most malware provides no outward sign of its nefarious nature. In fact, a PDF file containing malware may indeed contain legitimate business information, with the malware portion hidden from view.
Malware is a complex issue, with many different forms and, in many cases, multiple forms working together to achieve a specific objective. For instance, a spear phishing campaign may begin with a hack into the HR or PR system, where a routine communication based on a PDF file is hijacked. This PDF is loaded with malware specifically designed to hook machines. The malware is then left for the company to distribute, and as people in the company receive the document and open it, their machines become infected. This initial infection has one sole purpose: to download an agent onto the box. The agent then works to develop a persistent connection to outside machines, creating an even stealthier foothold. None of this will be observable by the user. All the user will see is the contents of the PDF, which is, for its part, a legitimate business communication. This process will not be discovered by antivirus protections because the malware was custom written and its signature is not in the wild.
Once a foothold on the target system is created, specialty malware can be employed to scan systems for secrets, or to encrypt and package data in an attempt to avoid detection by DLP solutions. Malware today is a set of advanced programs used by criminals and nation-states to steal money and intellectual property, and it is deployed with an infrastructure designed to be nearly undetectable in practice.
Code Signing
Code signing is the application of digital signature technology to computer code. Code signing technology can provide a software user with several pieces of useful information. First, it establishes who the provider or author of the software is. Second, it can deliver information as to the integrity level of the code—has it been altered since signing? The digital signature technology behind code signing is mature and well developed. A complete discussion of code signing is found in Chapter 14.
Database Security
Databases are technologies used to store and manipulate data. Relational databases store data in tables and have a wide array of tools that can be used to access, manipulate, and store data. Databases have a variety of security mechanisms to assist in creating the appropriate level of security, including elements for confidentiality, integrity, and availability. The details of designing a database environment for security are beyond the scope of the CSSLP practitioner's role, but it is still important to understand the capabilities.
Encryption can be employed to provide a level of confidentiality protection for the data being stored in a database. Data structures, such as views, can be created, giving different parties different levels of access to the data stored in the database. Programmatic structures called stored procedures can be created to limit access to only specific elements based on predefined rules. Backup and replication strategies can be employed to provide near-perfect availability and redundancy for critical systems. Taken together, the protections afforded the data in a modern database can be comprehensive and valuable. The key is in defining the types and levels of protection required based on risk.
Encryption
Data stored in a database is a lucrative target for attackers. Just like the vault in a bank, it is where the valuable material is stored, so gaining the correct level of access, for instance, administrative rights, can be an attacker’s dream and a defender’s nightmare. Encrypting data at rest is a preventative control mechanism that can be employed virtually anywhere the data is at rest, including databases. The encryption can be managed via native database management system functions, or it can be done using cryptographic resources external to the database.
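As an illustration, the following sketch encrypts a sensitive column value before it is stored, using the third-party cryptography package (an assumption; a database's native encryption features are an alternative). Key management, only hinted at here, is the hard part of any such design.

```python
# Minimal sketch: encrypting a sensitive column value outside the database.
# Assumes the third-party "cryptography" package is available.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, held in a key management system
cipher = Fernet(key)

plaintext_ssn = "123-45-6789"
stored_value = cipher.encrypt(plaintext_ssn.encode())   # this value goes into the table
# ... later, for an authorized read:
recovered = cipher.decrypt(stored_value).decode()
assert recovered == plaintext_ssn
```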
EXAM TIP  Primary keys are used to index and join tables and, as such, cannot be obfuscated or encrypted. This is a good reason not to use personally identifiable information (PII) and personal health information (PHI) as keys in a database structure.
Numerous factors need to be considered when creating a database encryption strategy. These include, but are not limited to, the following:
•  What is the level of risk classification associated with the data?
•  What is the usage pattern of the data—how is it protected in transit and in use?
•  What is the differing classification across elements of the data—are some more sensitive than others?
•  How is encryption being handled in the enterprise for other projects?
•  What are the available encryption options to the development team?
In a given data record, not all of the information is typically of the same level of sensitivity. If only a few columns out of many are sensitive, then data segregation may provide a means of protecting smaller chunks with greater efficiency. Determining the detailed, data element by data element requirements for protection can provide assistance in determining the correct protection strategy.
EXAM TIP  Regulations such as GLBA, HIPAA, and PCI DSS can impose protection requirements around certain data elements, such as personally identifiable information (PII) and personal health information (PHI). It is important for members of the design and development team to understand this to avoid operational issues later in the software lifecycle.
Triggers
Triggers are specific database actions that are automatically executed in response to specific database events. Triggers are a useful tool because they can automate many tasks: a change to a record can fire a script, adding a record can fire a script—virtually any database event can be assigned a script, which allows a lot of flexibility. Need to log something and include business logic? Triggers provide the flexibility to automate such tasks inside the database, as in the sketch that follows.
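The following minimal sketch uses SQLite from the Python standard library to show the idea: any change to an illustrative credit_limit column is automatically written to an audit table by a trigger. Table and column names are hypothetical.

```python
# Minimal sketch: a trigger that audits changes to a sensitive column.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, credit_limit REAL);
    CREATE TABLE audit_log (customer_id INTEGER, old_limit REAL, new_limit REAL,
                            changed_at TEXT DEFAULT CURRENT_TIMESTAMP);
    CREATE TRIGGER log_limit_change
    AFTER UPDATE OF credit_limit ON customer
    BEGIN
        INSERT INTO audit_log (customer_id, old_limit, new_limit)
        VALUES (OLD.id, OLD.credit_limit, NEW.credit_limit);
    END;
""")
db.execute("INSERT INTO customer VALUES (1, 'Alice', 1000.0)")
db.execute("UPDATE customer SET credit_limit = 5000.0 WHERE id = 1")
print(db.execute("SELECT * FROM audit_log").fetchall())   # the change was recorded
```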
Views
Views are programmatically designed extracts of data in a series of tables. Tables can contain all of the data, and a view can provide a subset of that information based on some set of business rules. A table could contain a record that provides all the details about a customer: addresses, names, credit card information, and so on. Some of this information should be protected—PII and credit card information, for instance. A view can give a shipping routine only the ship-to columns and not the protected information; when the routine uses the view, it cannot disclose what isn't there.
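A minimal sketch of such a view, again using SQLite from the Python standard library with hypothetical table and column names; the shipping routine queries the view, and the card data is simply not there to disclose.

```python
# Minimal sketch: a view that exposes only the ship-to columns.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT,
                           ship_address TEXT, card_number TEXT);
    CREATE VIEW ship_to AS
        SELECT id, name, ship_address FROM customer;   -- card_number excluded
""")
db.execute("INSERT INTO customer VALUES (1, 'Alice', '1 Main St', '4111111111111111')")
print(db.execute("SELECT * FROM ship_to").fetchall())   # no card data available
```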
Privilege Management
Databases have their own internal access control mechanisms, which are similar to the ACL-based controls applied to file systems. Designing the security system for data records, users, and roles requires the same types of processes as designing file system access control mechanisms. The two access control mechanisms can be interconnected, with the database system responding to the enterprise authorization systems, typically through roles defined in the database system.
Programming Language Environment
Software developers use a programming language to encode the specific set of operations in what is referred to as source code. The programming language used for development is seldom the language used in the actual instantiation of the code on the target computer. The source code is converted to the operational code through compilers, interpreters, or a combination of both. The choice of the development language is typically based on a number of criteria: the specific requirements of the application, the skills of the development team, and a host of other issues.
Compilers offer one set of advantages, and interpreters others. Systems built in a hybrid mode use elements of both. Compiled languages involve two subprocesses: compiling and linking. The compiling process converts the source code into a set of processor-specific codes. Linking involves the connecting of various program elements, including libraries, dependency files, and resources. Linking comes in two forms: static and dynamic. Static linking copies all the requirements into the final executable, offering faster execution and ease of distribution. Static linking can lead to bloated file sizes.
Dynamic linking involves placing the names and relative locations of dependencies in the code, with these being resolved at runtime when all elements are loaded into memory. Dynamic linking can create a smaller file, but does create risk from hijacked dependent programs.
Interpreters use an intermediary program to execute the source code on the target machine. Interpreters provide slower execution but faster turnaround between revisions, as there is no need for recompiling and relinking. The source code is converted by the interpreter into executable form in a line-by-line fashion at runtime.
A hybrid solution takes advantage of both compiled and interpreted languages. The source code is compiled into an intermediate stage that can be interpreted at runtime. The two major hybrid systems are Java and Microsoft .NET. In Java, the intermediate system is known as Java Virtual Machine (JVM), and in the .NET environment, the intermediate system is the common language runtime (CLR).
CLR
Microsoft’s .NET language system has a wide range of languages in its portfolio. Each of these languages is compiled into what is known as common intermediate language (CIL), also known as Microsoft Intermediate Language (MSIL). One of the advantages of the .NET system is that a given application can be constructed using multiple languages that are all compiled into CIL code, which is then executed by a just-in-time compiler within the common language runtime (CLR) on the target machine. The .NET system runs what is known as managed code, an environment that can make certain guarantees about what the code can do. The CLR can insert traps, garbage collection, type safety, index checking, sandboxing, and more. This provides a highly functional and stable execution environment.
JVM
In Java environments, the Java language source code is compiled to an intermediate stage known as byte code. This byte code is similar to processor instruction codes, but is not executable directly. The target machine has a Java Virtual Machine (JVM) that executes the byte code. The Java architecture is referred to as the Java Runtime Environment (JRE), which is composed of the JVM and a set of standard class libraries, the Java Class Library. Together, these elements provide for the managed execution of Java on the target machine.
Compiler Switches
Compiler switches enable the development team to control how the compiler handles certain aspects of program construction. A wide range of options is available, manipulating elements such as memory, stack protection, and exception handling. These flags enable the development team to force certain specific behaviors using the compiler. The /GS flag enables a security cookie on the stack to prevent stack-based overflow attacks. The /SAFESEH switch enables a safe exception handler table that can be checked at runtime. The designation of the compiler switch options to be used in a development effort should be one of the elements defined by the security team and published as security requirements for use in the SDL process.
Sandboxing
Sandboxing is a term for the execution of computer code in an environment designed to isolate the code from direct contact with the target system. Sandboxes are used to execute untrusted code, code from guests, and unverified programs. They work as a form of virtual machine and can mediate a wide range of system interactions, from memory access to network access, access to other programs, the file system, and devices. The level of protection offered by a sandbox depends upon the level of isolation and mediation offered.
Managed vs. Unmanaged Code
Managed code is executed in an intermediate system that can provide a wide range of controls. .NET and Java are examples of managed code, systems with a whole host of protection mechanisms. Sandboxing, garbage collection, index checking, type safety, memory management, and multiplatform capability—these elements provide a lot of benefit to managed code-based systems. Unmanaged code is executed directly on the target operating system and is always compiled for a specific target system. Unmanaged code can have significant performance advantages, but memory allocation, type safety, garbage collection, and so on must be handled by the developer. This makes unmanaged code prone to memory problems such as leaks, buffer overruns, and pointer errors, and increases risk.
Operating Systems
Operating systems are the collection of software that acts between the application program and the computer hardware resources. Operating systems exist for all platforms, from mainframes, to PCs, to mobile devices. They provide a functional interface to all the services enabled by the hardware. Operating systems create the environment where the applications execute, providing them the resources necessary to function. There are numerous different types of operating systems, each geared for a specific purpose. Systems created for multiple users have operating systems designed for managing multiple user processes, keeping them all separate and managing priorities. Real-time and embedded systems are designed to be simpler and leaner, and their operating systems enable those environments.
Embedded Systems
Embedded systems are dedicated systems where the hardware and software are coupled together to perform a specific purpose. As opposed to general-purpose computers, such as servers and PCs, which can perform a wide range of activities, an embedded system is designed to solve a specific problem. Embedded systems are created to perform a specific task, one where time-sensitive constraints are common. They exist in a wide range of electronics, from watches to audio/video players, to control systems for factories and infrastructure, to vehicles. Embedded systems can be found virtually everywhere.
Control Systems
Control systems are specialized computer systems used for the automated control of equipment. A wide range of equipment types falls into this category, from programmable logic controllers (PLCs) to remote terminal units (RTUs). These devices are commonly referred to as supervisory control and data acquisition (SCADA) systems when used in the collective form. Control system equipment can be viewed as a form of embedded system, for it is integrated into a physical environment for the sole purpose of providing computer control in that environment.
Firmware
Firmware is the name given to software code held in a device. Firmware is, in essence, wired-in software, and by this very nature is difficult to update or change. In many cases, firmware is never updated or changed. Firmware is held in nonvolatile memory, in read-only memory (ROM), in erasable programmable read-only memory (EPROM), or in flash memory. In many embedded systems, the firmware holds the operational code base—the software component of the system. In computers, the firmware acts as a first step in the startup process, providing a means to initiate software loading. In personal computers, the basic input/output system (BIOS) is the firmware-based interface between the hardware and the operating system. Around 2010, most computer makers replaced the BIOS with a more advanced interface, the Unified Extensible Firmware Interface (UEFI).
Chapter Review
In this chapter, we examined the technologies employed in building security functionality. The technologies were listed in no particular order, and the list is far from complete. Authentication and identity management technologies, with a focus on the federated methods OpenID and OAuth, were presented. The use of certificates and single sign-on was presented as credential management technology. The flow control technologies, including network firewalls, application firewalls, proxies, and queuing, were presented as means of managing communications. The use of syslog for logging was presented.
The chapter presented the use of DLP as a defense against data exfiltration and as a means of employing defense in depth. The technology parade continued with virtualization and digital rights management technologies. The components of trusted computing, including the TCB and the TPM as hardware mechanisms, and code signing as a software mechanism for combating malware, were presented.
The enterprise elements of database security, including encryption, triggers, views, and privilege management, were presented. The technology of a programming environment, including JVM and CLRs, was presented, followed by operating systems. The chapter closed with a look at embedded systems, control systems, and firmware technologies.
Quick Tips
•  Authentication is an identity verification process that attempts to determine whether users are who they say they are.
•  Identity management is the comprehensive set of services related to managing the use of identities as part of an access control solution.
•  There are two main parties in federated ID systems: a relying party (RP) and an identity provider (IdP).
•  OpenID was created for federated authentication, specifically to allow a third party to authenticate your users for you by using accounts that users already have.
•  The OpenID protocol enables websites or applications (consumers) to grant access to their own applications by using another service or application (provider) for authentication.
•  X.509 refers to a series of standards associated with the manipulation of certificates used to transfer asymmetric keys between parties in a verifiable manner.
•  Single sign-on makes it possible for a user, after authentication, to have his credentials reused on other applications without the user re-entering the secret.
•  Firewalls act as policy enforcement devices, determining whether to pass or block communications based on a variety of factors.
•  Proxies act as middlemen and are similar to firewalls in that they can mediate traffic flows.
•  Application firewalls use application-level information to make firewall decisions.
•  Syslog is an IETF-approved protocol for log messaging.
•  DLP solutions act by screening traffic, looking for traffic that meets profile parameters.
•  Digital rights management is the series of technologies employed so that content owners can exert control over digital content on their systems.
•  The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security.
•  The Trusted Platform Module is a hardware implementation of a set of cryptographic functions on a computer’s motherboard.
•  Malware is a term used to describe software that has malicious intent.
•  Code signing is the application of digital signature technology to computer code.
•  Compilers convert the source code into a set of processor-specific codes, and linking involves the connecting of various program elements, including libraries, dependency files, and resources.
•  Sandboxing is a term for the execution of computer code in an environment designed to isolate the code from direct contact with the target system.
•  Embedded systems are dedicated systems where the hardware and software are coupled together to perform a specific purpose.
•  Firmware is the name given to software code held in a device.
Questions
To further help you prepare for the CSSLP exam, and to provide you with a feel for your level of preparedness, answer the following questions and then check your answers against the list of correct answers found at the end of the chapter.
  1.  The process of combining functions, libraries, and dependencies into a single operational unit is referred to as:
A.  Compiling
B.  Linking
C.  Interpreting
D.  Integration
  2.  A protocol to enable a website or application (consumers) to grant access to their own applications by using another service or application (provider) for authentication is:
A.  IdP
B.  OCSP
C.  OpenID
D.  SAML
  3.  The _______ protocol enables websites or applications (consumers) to access protected resources from a web service (service provider) via an API, without requiring users to disclose their service provider credentials to the consumers.
A.  OCSP
B.  OpenID
C.  X.509
D.  OAuth
  4.  _______ is a series of standards associated with the manipulation of certificates used to transfer asymmetric keys between parties in a verifiable manner.
A.  X.509
B.  PKIX
C.  OCSP
D.  CRL
  5.  The technology used for the protection of intellectual property in the digital world is referred to as:
A.  Digital certificates
B.  Virtualization
C.  DLP
D.  DRM
  6.  Placing the names and relative locations of dependencies in the code, with these being resolved at runtime when all elements are loaded into memory, is called:
A.  Static linking
B.  Dynamic linking
C.  Compiling
D.  Code signing
  7.  The process of converting source code into a set of processor-specific codes is:
A.  Linking
B.  Compiling
C.  Virtualizing
D.  Interpreting
  8.  An example of a hybrid system with both compiling and interpreting is:
A.  JVM
B.  C++
C.  SQL
D.  TCB
  9.  The process of isolating the executing code from direct contact with the resources of the target system is referred to as:
A.  Trusted computing
B.  MSIL
C.  Managed code
D.  Sandboxing
10.  An advantage of unmanaged code is:
A.  Performance
B.  Security
C.  Library functions
D.  Portability
11.  The following are all elements associated with certificates, except:
A.  RA
B.  OCSP
C.  CA
D.  CLR
12.  A device that moderates traffic and includes caching of content is a(n):
A.  Proxy
B.  Application firewall
C.  Firewall
D.  DLP
13.  One of the biggest challenges in deploying DLP technologies is:
A.  Access control lists
B.  Network proxies
C.  Network speeds
D.  Network architecture
14.  ccREL, ODRL, and XrML are related to:
A.  DLP
B.  DRM
C.  JVM
D.  CRL
15.  A TPM can provide all of the following in hardware except:
A.  A secure area of memory for code execution
B.  A random number generator
C.  An encryption engine
D.  A hash generator
Answers
  1.  B. Linking is the process of combining code elements and resources into an operational program.
  2.  C. OpenID provides for authentication from another service.
  3.  D. OAuth is an API for allowing access without disclosing credentials.
  4.  A. X.509 describes the infrastructure of using certificates for key transfer.
  5.  D. Digital rights management is the set of technologies employed to protect intellectual property in the digital world.
  6.  B. Dynamic linking is resolved at runtime.
  7.  B. Compiling is the conversion of source code to processor-specific codes.
  8.  A. The Java Virtual Machine (JVM) interprets byte code into code for a system.
  9.  D. Sandboxing is the technology used to isolate untrusted code from system resources.
10.  A. Unmanaged code can have a performance advantage over managed code.
11.  D. The common language runtime (CLR) is part of Microsoft’s .NET execution environment and is not associated with certificates; RAs, OCSP, and CAs all are.
12.  A. Proxies can cache content for multiple systems in an environment to improve performance.
13.  D. Network architectures can result in multiple paths out of a network, making DLP placement difficult.
14.  B. They are all rights expression language forms for DRM.
15.  A. TPMs do not offer a secure area of memory for code execution. They have execution modules, but they are specific in function.