Chapter 10. Information Systems Acquisition, Development, and Maintenance

Chapter Objectives

After reading this chapter and completing the exercises, you will be able to do the following:

■ Understand the rationale for the systems development lifecycle (SDLC).

■ Recognize the stages of software releases.

■ Appreciate the importance of developing secure code.

■ Be aware of the most common application development security faults.

■ Explain cryptographic components.

■ Develop policies related to systems acquisition, development, and maintenance.

Section 14 of ISO 27002:2013: Information Systems Acquisition, Development, and Maintenance (ISADM) focuses on the security requirements of information systems, applications, and code from conception to destruction. This sequence is referred to as the systems development lifecycle (SDLC). Particular emphasis is put on vulnerability management to ensure integrity, cryptographic controls to ensure integrity and confidentiality, and security of system files to ensure confidentiality, integrity, and availability (CIA). The domain constructs apply to in-house, outsourced, and commercially developed systems, applications, and code. Section 10 of ISO 27002:2013: Cryptography focuses on proper and effective use of cryptography to protect the confidentiality, authenticity, and/or integrity of information. Because cryptographic protection mechanisms are closely related to information systems development and maintenance, cryptography is included in this chapter.

Of all the security domains we have discussed so far, this one has the most widespread implications. Most cybercrime is opportunistic, meaning that criminals take advantage of system vulnerabilities. Information systems, applications, and code that do not have embedded security controls expose the organization to undue risk. Consider a company that relies on a web-based application linked to a back-end database. If the code used to create the web-based application was not thoroughly vetted, it may contain vulnerabilities that would allow a hacker to bring down the application with a denial of service (DoS) attack, run code on the server hosting the application, or even trick the database into publishing classified information. These events harm an organization’s reputation, create compliance and legal issues, and significantly impact the bottom line.


FYI: ISO/IEC 27002:2013 and NIST Guidance

Section 10 of ISO 27002:2013, the cryptography domain, focuses on proper and effective use of cryptography to protect the confidentiality, authenticity, and/or integrity of information. Section 14 of ISO 27002:2013, the ISADM domain, focuses on the security requirements of information systems, applications, and code, from conception to destruction.

Corresponding NIST guidance is provided in the following documents:

■ SP 800-23: Guidelines to Federal Organizations on Security Assurance and Acquisition/Use of Tested/Evaluated Products

■ SP 800-57: Recommendations for Key Management—Part 1: General (Revision 3)

■ SP 800-57: Recommendations for Key Management—Part 2: Best Practices for Key Management Organization

■ SP 800-57: Recommendations for Key Management—Part 3: Application-Specific Key Management Guidance

■ SP 800-64: Security Considerations in the System Development Life Cycle

■ SP 800-111: Guide to Storage Encryption Technologies for End User Devices


System Security Requirements

Security should be a priority objective during the design and acquisition phases of any new information system, application, or code development. Attempting to retrofit security is expensive, resource intensive, and all too often does not work. “Productivity requirements” and/or the “rush to market” often preclude a thorough security analysis, which is unfortunate because it has been proven time and time again that early-stage identification of security requirements is both cost effective and efficient. Utilizing a structured development process increases the probability that security objectives will be achieved.

What Is SDLC?

The systems development lifecycle (SDLC) provides a standardized process for all phases of any system development or acquisition effort. As defined by NIST, an SDLC includes five phases: initiation, development/acquisition, implementation, operational, and disposal.

■ During the initiation phase, the need for a system is expressed and the purpose of the system is documented.

■ During the development/acquisition phase, the system is designed, purchased, programmed, developed, or otherwise constructed.

■ The implementation phase includes system testing, modification if necessary, retesting if modified, and finally acceptance.

■ During the operational phase, the system is put into production. The system is almost always modified by the addition of hardware and software and by numerous other events. Monitoring, auditing, and testing should be ongoing.

■ Activities conducted during the disposal phase ensure the orderly termination of the system, safeguarding vital system information, and migrating data processed by the system to a new system.

Each phase includes a minimum set of tasks needed to effectively incorporate security in the system development process. Phases may continue to be repeated throughout a system’s life prior to disposal.

Initiation Phase

During the initiation phase, the organization establishes the need for a system and documents its purpose. Security planning must begin in the initiation phase. The information to be processed, transmitted, or stored is evaluated for CIA security requirements, as well as the security and criticality requirements of the information system. It is essential that all stakeholders have a common understanding of the security considerations. This early involvement will enable the developers or purchasing managers to plan security requirements and associated constraints into the project. It also reminds project leaders that many decisions being made have security implications that should be weighed appropriately, as the project continues. Other tasks that should be addressed in the initiation phase include assignment of roles and responsibilities, identification of compliance requirements, decisions on security metrics and testing, and the systems acceptance process.

Development/Acquisition Phase

During this phase, the system is designed, purchased, programmed, developed, or otherwise constructed. A key security activity in this phase is conducting a risk assessment. In addition, the organization should analyze security requirements, perform functional and security testing, and design the security architecture. Both the ISO standard and NIST emphasize the importance of conducting risk assessments to evaluate the security requirements for new systems and upgrades. The aim is to identify potential risks associated with the project and to use this information to select baseline security controls. The risk assessment process is iterative and needs to be repeated whenever a new functional requirement is introduced. As they are determined, security control requirements become part of the project security plan. Security controls must be tested to ensure they perform as intended.

Implementation Phase

In the implementation phase, the organization configures and enables system security features, tests the functionality of these features, installs or implements the system, and obtains a formal authorization to operate the system. Design reviews and system tests should be performed before placing the system into operation to ensure that it meets all required security specifications. It is important that adequate time be built into the project plan to address any findings, modify the system or software, and retest.

The final task in this phase is authorization. It is the responsibility of the system owner or designee to green light the implementation and allow the system to be placed in production mode. In the federal government, this process is known as certification and accreditation (C&A). OMB Circular A-130 requires the security authorization of an information system to process, store, or transmit information. The authorizing official relies primarily on the completed system security plan, the inherent risk as determined by the risk assessment, and the security test results.

Operational/Maintenance Phase

In this phase, systems and products are in place and operating, enhancements and/or modifications to the system are developed and tested, and hardware and software components are added or replaced. Configuration management and change control processes are essential to ensure that required security controls are maintained. The organization should continuously monitor performance of the system to ensure that it is consistent with pre-established user and security requirements, and that needed system modifications are incorporated. Periodic testing and evaluation of the security controls in an information system must be conducted to ensure continued effectiveness and to identify any new vulnerabilities that may have been introduced or recently discovered. Vulnerabilities identified after implementation cannot simply be ignored. Depending on the severity of the finding, it may be possible to implement compensating controls while “fixes” are being developed. There may be situations that require the system to be taken offline until the vulnerabilities can be mitigated.

Disposal Phase

Often, there is no definitive end or retirement of an information system or code. Systems normally evolve or transition to the next generation because of changing requirements or improvements in technology. System security plans should continually evolve with the system. Much of the environmental, management, and operational information for the original system should still be relevant and useful when the organization develops the security plan for the follow-on system. When the time does come to discard system information, hardware, and software, it must not result in the unauthorized disclosure of protected or confidential data. Disposal activities, such as archiving information, sanitizing media, and disposing of hardware components, must be done in accordance with the organization’s destruction and disposal requirements and policies.

What about Commercially Available or Open Source Software?

SDLC principles apply to commercially available software—sometimes referred to as commercial off-the-shelf software (COTS)—and to open source software. The primary difference is that the development is not done in-house. Commercial software should be evaluated to make sure it meets or exceeds the organization’s security requirements. Because software is often released in stages, it is important to be aware of and understand the release stages. Only stable and tested software releases should be deployed on production servers to protect data availability and data integrity. Operating system and application updates should not be deployed until they have been thoroughly tested in a lab environment and declared safe to be released in a production environment. Once installed, all software and applications should be included in internal vulnerability testing.

Software Releases

The alpha phase is the initial release of software for testing. Alpha software can be unstable and can cause crashes or data loss. External availability of alpha software is uncommon in proprietary software. However, open source software, in particular, often has publicly available alpha versions, often distributed as the raw source code of the software. Beta phase indicates that the software is feature complete and the focus is usability testing. A release candidate (RC) is a hybrid of a beta and a final release version. It has the potential to be the final release unless significant issues are identified. General availability or go live is when the software has been made commercially available and is in general distribution. Alpha, beta, and RCs have a tendency to be unstable and unpredictable and are not suitable for a production environment. This unpredictability can have devastating consequences, including data exposures, data loss, data corruption, and unplanned downtime.

Software Updates

During its supported lifetime, software is sometimes updated. Updates are different from security patches. Security patches are designed to address a specific vulnerability and are applied in accordance with the patch management policy. Updates generally include functional enhancements and new features. Updates should be subject to the organization’s change management process and should be thoroughly tested before being implemented in a production environment. This is true for both operating systems and applications. For example, a new system utility might work perfectly with 99% of applications, but what if a critical line-of-business application deployed on the same server falls in the remaining 1%? This can have a disastrous effect on the availability, and potentially on the integrity, of the data. This risk, however minimal it may appear, must not be ignored. Even when an update has been thoroughly tested, organizations still need to prepare for the unforeseen and make sure they have a documented rollback strategy to return to the previous stable state in case problems occur.

If an update requires a system reboot, it should be delayed until the reboot will have the least impact on business productivity. Typically, this means after hours or on weekends, although if a company is international and has users who rely on data located in different time zones, this can get a bit tricky. If an update does not require a system reboot, but will still severely impact the level of system performance, it should also be delayed until it will have the least impact on business productivity.

The Testing Environment

The worst-case scenario for a testing environment is that a company simply does not have one, and is willing to have production servers double as test servers. Best-case scenario, the testing environment is set up as a mirror image of the production environment, software and hardware included. The closer to the production environment the test environment is, the more the test results can be trusted. A cost/benefit analysis that takes into consideration the probability and associated costs of downtime, data loss, and integrity loss will determine how much should be invested in a test or staging environment.

Protecting Test Data

Consider a medical practice with an electronic medical records (EMR) database replete with patient information. Imagine the security measures that have been put in place to make sure the CIA of the data is protected. Because this database is pretty much the lifeblood of this practice and is protected under law, it is to be expected that those security measures are extensive. Live data should never be used in a test environment because it is highly unlikely that the same level of data protection has been implemented, and exposure of protected data would be a serious violation of patient confidentiality and regulatory requirements. Instead, either de-identified data or dummy data should be used. De-identification is the process of removing information that would identify the source or subject of the data. Strategies include deleting or masking the name, social security number, date of birth, and demographics. Dummy data is, in essence, fictional. For example, rather than using actual patient data to test an EMR database, the medical practice would enter fake patient data into the system. That way, the application could be tested with no violation of confidentiality.
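The de-identification strategies described above can be sketched in a few lines of Python. This is a minimal, hypothetical example; the field names and masking rules are illustrative only, and a real de-identification program would follow the applicable regulatory standard (for example, the HIPAA de-identification rules).

```python
# A minimal de-identification sketch (hypothetical field names): direct
# identifiers are removed or masked before a record is copied into a
# test environment.

def deidentify(record):
    """Return a copy of the record safe for use as test data."""
    safe = dict(record)
    # Remove direct identifiers outright.
    for field in ("name", "ssn", "date_of_birth"):
        safe.pop(field, None)
    # Mask the ZIP code down to the first three digits (one common strategy).
    if "zip_code" in safe:
        safe["zip_code"] = safe["zip_code"][:3] + "XX"
    return safe

patient = {
    "name": "Jane Doe",          # fictional (dummy) data
    "ssn": "000-00-0000",
    "date_of_birth": "1970-01-01",
    "zip_code": "02134",
    "diagnosis_code": "J45",     # non-identifying clinical data is kept
}

print(deidentify(patient))
# The name, SSN, and date of birth are gone; the ZIP code is masked.
```

Either approach, de-identified data or wholly fictional dummy data, keeps protected information out of the less-defended test environment.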

Secure Code

The two types of code are insecure code (sometimes referred to as “sloppy code”) and secure code. Insecure code is sometimes the result of an amateurish effort, but more often than not, it reflects a flawed process. Secure code, however, is always the result of a deliberate process that prioritized security from the beginning of the design phase onward.


FYI: Open SAMM

The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. The resources provided by SAMM (www.opensamm.org/) will aid in the following:

■ Evaluating an organization’s existing software security practices

■ Building a balanced software security assurance program in well-defined iterations

■ Demonstrating concrete improvements to a security assurance program

■ Defining and measuring security-related activities throughout an organization

SAMM was defined with flexibility in mind such that it can be utilized by small, medium, and large organizations using any style of development. Additionally, this model can be applied organization-wide, for a single line of business, or even for an individual project. Beyond these traits, SAMM was built on the following principles:

■ An organization’s behavior changes slowly over time. A successful software security program should be specified in small iterations that deliver tangible assurance gains while incrementally working toward long-term goals.

■ There is no single recipe that works for all organizations. A software security framework must be flexible and allow organizations to tailor their choices based on their risk tolerance and the way in which they build and use software.

■ Guidance related to security activities must be prescriptive. All the steps in building and assessing an assurance program should be simple, well defined, and measurable. This model also provides roadmap templates for common types of organizations.

Source: OWASP.org (https://www.owasp.org/index.php/Category:Software_Assurance_Maturity_Model)


The Open Web Application Security Project (OWASP)

Deploying secure code is the responsibility of the system owner. A number of secure coding resources are available for system owners, project managers, developers, programmers, and information security professionals. One of the most well respected and widely utilized is OWASP. The Open Web Application Security Project (OWASP) is an open community dedicated to enabling organizations to develop, purchase, and maintain applications that can be trusted. Everyone is free to participate in OWASP, and all of its materials are available under a free and open software license. On a three-year cycle, beginning in 2004, OWASP releases the “OWASP Top Ten.” The OWASP Top Ten represents a broad consensus about what the most critical web application security flaws are. The information is applicable to a spectrum of non-web applications, operating systems, and databases. Project members include a variety of security experts from around the world who have shared their expertise to produce this list. The most recent list was published in 2013. The 2013 and 2010 lists both cite injection flaws as the number-one security issue.


FYI: 2013 OWASP Top Ten Web Application Security Flaws

The OWASP Top Ten are considered the ten most critical web application security risks:

A1 – Injection

A2 – Broken Authentication and Session Management

A3 – Cross-site Scripting (XSS)

A4 – Insecure Direct Object Reference

A5 – Security Misconfiguration

A6 – Sensitive Data Exposure

A7 – Missing Function Level Access Control

A8 – Cross-site Request Forgery (CSRF)

A9 – Using Components with Known Vulnerabilities

A10 – Unvalidated Redirects and Forwards

The complete list, which includes detailed explanations and examples, is published on the OWASP website (www.owasp.org).


What Is Injection?

The most common web application security flaw is the failure to properly validate input from the client or environment. OWASP defines injection as when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing an unintended command or accessing data without proper authorization. The attacker can be anyone who can send data to the system, including internal users, external users, and administrators. The attack is simply a data string designed to exploit the code vulnerability. Injection flaws are particularly common in older code. A successful attack can result in data loss, corruption, compromise, or DoS. Preventing injection requires keeping untrusted data separate from commands and queries.
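Keeping untrusted data separate from the command is exactly what parameterized queries provide. The sketch below uses Python's built-in sqlite3 module to illustrate the idea; the table and the hostile input string are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # hostile data attempting SQL injection

# Vulnerable pattern: untrusted data concatenated into the command text.
# query = "SELECT role FROM users WHERE username = '" + user_input + "'"

# Safe pattern: the ? placeholder keeps data separate from the query, so
# the hostile string is treated as a literal value, never as SQL syntax.
rows = conn.execute(
    "SELECT role FROM users WHERE username = ?", (user_input,)
).fetchall()
print(rows)  # → [] : the injection string matches no real username
```

Had the concatenated version been used instead, the `OR '1'='1'` clause would have matched every row in the table.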

Input Validation

Input validation is the process of validating all the input to an application before using it. This includes correct syntax, length, characters, and ranges. Consider a web page with a simple form that contains fields corresponding to your physical address information, such as Street Name, ZIP Code, and so on. Once you click the “submit” button, the information you entered in the fields is sent to the web server and entered into a back-end database. The objective of input validation is to evaluate the format of entered information and, when appropriate, deny the input. To continue our example, let’s focus on the ZIP Code field. ZIP Codes consist of numbers only, and the basic ones only include five digits. Input validation would look at how many and what type of characters are entered in the field. In this case, the first section of the ZIP Code field would require five numeric characters. This limitation would prevent the user from entering more or fewer than five characters as well as nonnumeric characters. This strategy is known as whitelist or positive validation.
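A whitelist check like the one just described can be expressed as a short, hypothetical Python helper: define the pattern of acceptable input and reject everything that does not match it.

```python
import re

# Whitelist (positive) validation: accept only input that matches the
# expected pattern of exactly five numeric characters; reject all else.
def valid_zip(value: str) -> bool:
    return re.fullmatch(r"\d{5}", value) is not None

print(valid_zip("02134"))        # True  : five digits
print(valid_zip("0213"))         # False : too short
print(valid_zip("02134-1234"))   # False : extra characters
print(valid_zip("O2134"))        # False : letter O, not the digit zero
```

Note the use of `re.fullmatch` rather than `re.match`: the entire value must match the pattern, so trailing characters (including a trailing newline) are rejected.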

You may wonder, why bother to go through all this? Who cares if a user sends the wrong ZIP Code? Who cares if the information entered in the ZIP Code field includes letters and/or ASCII characters? Hackers care. Hackers attempt to pass code in those fields to see how the database will react. They want to see if they can bring down the application (DoS attack against that application), bring down the server on which it resides (DoS against the server, and therefore against all the applications that reside on that server), or run code on the target server to manipulate or publish sensitive data. Proper input validation is therefore a way to limit the ability of a hacker to try and “abuse” an application system.

Dynamic Data Verification

Many application systems are designed to rely on outside parameter tables for dynamic data. Dynamic data is defined as data that changes as updates become available—for example, an e-commerce application that automatically calculates sales tax based on the ZIP Code entered. The process of checking that the sales tax rate entered is indeed the one that matches the state entered by the customer is another form of input validation. What is tricky in this type of situation is that the information pulled from the outside table is real and legitimate—it is just that it does not apply to the situation at hand. This is a lot harder to track than when the data input is clearly wrong, such as when a letter is entered in a ZIP Code field.

Dynamic data is used by numerous application systems. A simple example of this is the exchange rate for a particular currency. These values continually change, and using the correct value is critical. If the transaction involves a large sum, the difference can translate into a fair amount of money! Data validation extends to verification that the business rule is also correct.
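A hedged sketch of this kind of business-rule validation follows; the rates, state codes, and helper function are illustrative only. The point is that the incoming value can be a perfectly real rate and still be wrong for the transaction at hand.

```python
# Hypothetical reference table of sales tax rates (illustrative values).
SALES_TAX = {"MA": 0.0625, "NH": 0.0000, "WA": 0.0650}

def verify_rate(state: str, rate_from_table: float) -> bool:
    """Business-rule check: does the rate actually belong to this state?"""
    expected = SALES_TAX.get(state)
    if expected is None:
        return False                        # unknown state: reject
    return abs(expected - rate_from_table) < 1e-9

print(verify_rate("MA", 0.0625))  # True  : rate matches the state
print(verify_rate("MA", 0.0650))  # False : a real rate, but it belongs to WA
```

The second call illustrates the tricky case described above: 0.0650 is a legitimate value in the table, so a simple format check would pass it, and only the business-rule check catches the mismatch.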

Output Validation

Output validation is the process of validating (and in some cases, masking) the output of a process before it is provided to the recipient. An example would be substituting asterisks for numbers on a credit card receipt. Output validation controls what information is exposed or provided. You need to be aware of output validation, however, especially as it relates to hacker discovery techniques. Hackers look for clues and then use this information as part of the footprinting process. One of the first things a hacker looks to learn about a targeted application is how it reacts to systematic abuse of the interface. A hacker will learn a lot about how the application reacts to errors if the developers did not run output validation tests prior to deployment. They may, for example, learn that a certain application is vulnerable to SQL injection attacks, buffer overflow attacks, and so on. The answer an application gives about an error is potentially a pointer that can lead to vulnerability, and a hacker will try to make that application “talk” to better customize the attack.
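The credit card masking example above can be sketched in a few lines of Python. The helper is hypothetical and not a complete payment-industry solution; it simply shows output being transformed before it reaches the recipient.

```python
def mask_card_number(pan: str) -> str:
    """Replace all but the last four digits with asterisks."""
    digits = pan.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1234"))  # → ************1234
```

The full number may still exist in the back-end system; output validation controls only what is exposed at the point of display or printing.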

Developers test applications by feeding erroneous data into the interface to see how it reacts and what it reveals. This feedback is used to modify the code with the objective of producing a secure application. The more time spent on testing, the less likely hackers will gain the advantage.

Why Is Broken Authentication and Session Management Important?

Number two on the 2013 OWASP list is broken authentication and session management. If session management assets such as user credentials and session IDs are not properly protected, the session can be hijacked or taken over by a malicious intruder. When authentication credentials are stored or transmitted in clear text or when credentials can be guessed or overwritten through weak account management functions (for example, account creation, change password, recover password, weak session IDs), the identity of the authorized user can be impersonated. If session IDs are exposed in the URL, do not time out, or are not invalidated after successful logoff, malicious intruders have the opportunity to continue an authenticated session. A critical security design requirement must be strong authentication and session management controls. A common control for protecting authentication credentials and session IDs is encryption. We discussed authentication in Chapter 9, “Access Control Management.” We will examine encryption and the field of cryptography in the next section of this chapter.
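One basic control implied above is generating session identifiers that cannot be guessed. A minimal sketch using Python's standard-library secrets module, which draws from a cryptographically strong random source:

```python
import secrets

# Generate an unpredictable session identifier: 32 random bytes,
# URL-safe base64 encoded (43 characters, no padding).
session_id = secrets.token_urlsafe(32)
print(session_id)

# A predictable source such as random.random() must never be used for
# session IDs, because an attacker who recovers the generator state
# can forge valid identifiers and hijack sessions.
```

Strong generation is only one piece: the ID must also be transmitted over an encrypted channel, kept out of URLs, given a timeout, and invalidated at logoff.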

Cryptography

The art and science of writing secret information is called cryptography. The origin of the term involves the Greek words kryptos meaning “hidden” and graphia meaning “writing.” Three distinct goals are associated with cryptography:

■ Confidentiality—Unauthorized parties cannot access the data. Data can be encrypted, which provides confidentiality.

■ Integrity—Assurance is provided that the data was not modified. Data can be hashed, which provides integrity.

■ Authenticity/nonrepudiation—The source of the data is validated. Data can be digitally signed, which ensures authentication/nonrepudiation and integrity.

Data can be encrypted and digitally signed, which provides for confidentiality, authentication, and integrity.

Encryption is the conversion of plain text into what is known as cipher text using an algorithm called a cipher. Cipher text is text that is unreadable by a human or computer. Decryption, the inverse of encryption, is the process of turning cipher text back into readable plain text. Encryption and decryption require the use of a secret key. The key is a value that specifies what part of the algorithm to apply, in what order, and what variables to input. Similar to authentication passwords, it is critical to use a strong key that cannot be discovered and to protect the key from unauthorized access. Protecting the key is generally referred to as key management. We are going to be examining the use of symmetric and asymmetric keys as well as key management later in this chapter.

Ensuring that a message has not been changed in any way during transmission is referred to as message integrity. Hashing is the process of creating a numeric value that represents the original text. A hash function (such as SHA or MD5) takes a variable-size input and produces a fixed-size output. The output is referred to as a hash value, message digest, or fingerprint. Unlike encryption, hashing is a one-way process, meaning that the hash value is never turned back into plain text. If the original data has not changed, the hash function should always produce the same value. Comparing the values confirms the integrity of the message. Used alone, hashing provides message integrity and not confidentiality or authentication.
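The properties just described, fixed-size output, repeatability, and sensitivity to any change, are easy to observe with Python's standard hashlib module:

```python
import hashlib

message = b"Transfer $100 to account 12345"
digest = hashlib.sha256(message).hexdigest()

# The same input always produces the same fixed-size output...
print(hashlib.sha256(message).hexdigest() == digest)  # True

# ...while any change to the input, however small, changes the digest.
tampered = b"Transfer $900 to account 12345"
print(hashlib.sha256(tampered).hexdigest() != digest)  # True

# SHA-256 output is always 256 bits (64 hex characters), regardless of
# how large or small the input is.
print(len(digest))  # 64
```

A recipient who computes the digest of the received message and compares it to the sender's digest has verified integrity, but nothing about who sent it; that requires a digital signature, discussed next.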

A digital signature is a hash value (message digest) that has been encrypted with the sender’s private key. The hash must be decrypted with the corresponding key. This proves the identity of the sender. The hash values are then compared to prove the message integrity. Digital signatures provide authenticity/nonrepudiation and message integrity. Nonrepudiation means that the sender cannot deny that the message came from them.


FYI: The Caesar Cipher

The need for secure communication is certainly not new. The Roman ruler Julius Caesar (100 B.C.–44 B.C.) used a cipher for secret battlefield communication, substituting each letter of the alphabet with another letter a fixed number of positions away (in the alphabet shown below, three positions back). Later, any cipher that used this “displacement” concept for the creation of a cipher alphabet was referred to as a Caesar cipher. Of all the substitution-type ciphers, the Caesar cipher is the simplest to solve because there are only 25 possible shifts to try.

Standard alphabet:

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

Caesar alphabet:

X Y Z A B C D E F G H I J K L M N O P Q R S T U V W

For example, using the Caesar cipher system, the message THE ENEMY IS NEAR would be written as QEB BKBJV FP KBXO.
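The cipher is simple enough to implement in a few lines of Python, which also makes its weakness obvious: decryption is just the opposite shift, and an attacker need only try 25 shifts.

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter A-Z by `shift` positions; leave other characters."""
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)  # spaces and punctuation pass through unchanged
    return "".join(out)

# The alphabet shown above maps A -> X, i.e., a shift of -3.
cipher = caesar("THE ENEMY IS NEAR", -3)
print(cipher)             # → QEB BKBJV FP KBXO
print(caesar(cipher, 3))  # → THE ENEMY IS NEAR (decryption reverses the shift)
```

Contrast this with a modern cipher: here the "key" is the shift value, and the keyspace has only 25 usable values, which is why brute force solves it instantly.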


Why Encrypt?

Encryption protects the confidentiality of data at rest and in transit. There are a wide variety of encryption algorithms, techniques, and products. Encryption can be applied granularly, such as to an individual file, or broadly, such as encrypting all stored or transmitted data. Per NIST, the appropriate encryption solution for a particular situation depends primarily on the type of storage, the amount of information that needs to be protected, the environments where the storage will be located, and the threats that need to be mitigated. The three classes of storage (“at rest”) encryption techniques are full disk encryption, volume and virtual disk encryption, and file/folder encryption. The array of in-transit encryption protocols and technologies includes TLS/SSL (HTTPS), WEP and WPA, VPN, and IPsec. Protecting information in transit safeguards the data as it traverses a wired or wireless network. The current standard specification for encrypting electronic data is the Advanced Encryption Standard (AES). Almost all known attacks against AES’s underlying algorithm are computationally infeasible.

Regulatory Requirements

In addition to being a best practice, the need for encryption is cited in numerous federal regulations, including the Gramm-Leach-Bliley Act (GLBA) and HIPAA/HITECH. At the state level, multiple states (including Massachusetts, Nevada, and Washington) have statutes requiring encryption. Massachusetts 201 CMR17 requires encryption of all transmitted records and files containing personal information that will travel across public networks, encryption of all data containing personal information to be transmitted wirelessly, as well as encryption of all personal information stored on laptops or other portable devices. Nevada NRS 603A requires encryption of credit and debit card data as well as encryption of mobile devices and media. Washington HB 2574 requires that personal information, including name combined with social security number, driver’s license number, and financial account information, be encrypted if it is transmitted or stored on the Internet.

What Is a “Key”?

A key is a secret code that is used by a cryptographic algorithm. It provides the instructions that result in the functional output. Cryptographic algorithms themselves are generally known. It is the secrecy of the key that provides for security. The number of possible keys that can be used with an algorithm is known as the keyspace, which is a large set of random values that the algorithm chooses from when it needs to make a key. The larger the keyspace, the more possibilities for different keys. For example, if an algorithm uses a key that is a string of 10 bits, then its keyspace is the set of all binary strings of length 10, which results in a keyspace size of 2^10 (or 1,024); a 40-bit key results in 2^40 possible values; and a 256-bit key results in 2^256 possible values. Longer keys are harder to break, but require more computation and processing power. Two factors must be taken into consideration when deciding upon the key length: the desired level of protection and the amount of resources available.
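A quick illustration of how the keyspace grows with key length, and why longer keys defeat brute force:

```python
# An n-bit key has 2**n possible values, so keyspace grows exponentially
# with key length.
for bits in (10, 40, 256):
    print(bits, 2 ** bits)

# A 10-bit keyspace (1,024 keys) is trivially brute-forced. Even checking
# a billion keys per second, exhausting a 256-bit keyspace would take an
# astronomically long time:
seconds = 2 ** 256 / 1e9
print(f"{seconds:.2e} seconds")
```

The arithmetic makes the trade-off in the text concrete: each additional bit doubles the work an attacker must do, while only modestly increasing the legitimate user's computation.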

Symmetric Keys

A symmetric key algorithm uses a single secret key, which must be shared in advance and kept private by both the sender and the receiver. Symmetric keys are often referred to as shared keys. Because the keys are shared, symmetric algorithms cannot be used to provide nonrepudiation or authenticity. The most well-known symmetric algorithm is DES. The strength of symmetric keys is that they are computationally efficient. The weakness is that key management is inherently insecure and that it is not scalable, because a unique key must be shared with each party in order to protect the secrecy of the key.
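
The defining property of symmetric cryptography, that one shared secret both encrypts and decrypts, can be sketched with a toy XOR cipher. This is for illustration only and is not secure for reuse; real systems use vetted algorithms such as DES's successor, AES:

```python
# Illustrative one-time-pad-style XOR cipher: the SAME secret key is
# applied to encrypt and to decrypt. Not a production cipher.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"wire transfer approved"
key = secrets.token_bytes(len(plaintext))   # the shared secret

ciphertext = xor_cipher(plaintext, key)     # sender encrypts with the key
recovered = xor_cipher(ciphertext, key)     # receiver decrypts with the same key
assert recovered == plaintext
```

The sketch also makes the key-distribution weakness visible: before any message can be sent, the key itself must somehow reach the receiver secretly.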

Asymmetric Keys

Asymmetric key cryptography, also known as public key cryptography, uses two different but mathematically related keys known as public and private keys. Think of public and private keys as two keys to the same lock—one used to lock and the other to unlock. The private key never leaves the owner’s possession. The public key is given out freely. The public key is used to encrypt plain text or to verify a digital signature, whereas the private key is used to decrypt cipher text or to create a digital signature. Asymmetric key technologies allow for efficient, scalable, and secure key distribution; however, they are computationally resource intensive.
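
The mathematical relationship between the two keys can be shown with a toy RSA round trip using deliberately tiny primes (real keys are 2,048 bits or more; these numbers are purely illustrative):

```python
# Toy RSA: encrypt with the public key (e, n), decrypt with the
# private key (d, n). The keys are different but mathematically related.
p, q = 61, 53
n = p * q                   # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi

message = 65                            # a small integer "message"
ciphertext = pow(message, e, n)         # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)       # only the private-key holder can decrypt
assert recovered == message
```

Note what the sketch leaves out: recovering d requires factoring n into p and q, which is what makes large-modulus RSA computationally hard to break and, equally, computationally expensive to use.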

What Is PKI?

Public Key Infrastructure (PKI) is the framework and services used to create, distribute, manage, and revoke public keys. PKI is made up of multiple components, including a Certification Authority (CA), a Registration Authority (RA), client nodes, and the digital certificate itself:

Image The Certification Authority (CA) issues and maintains digital certificates.

Image The Registration Authority (RA) performs the administrative functions, including verifying the identity of users and organizations requesting a digital certificate, renewing certificates, and revoking certificates.

Image Client nodes are interfaces for users, devices, and applications to access PKI functions, including the requesting of certificates and other keying material. They may include cryptographic modules, software, and procedures necessary to provide user access to the PKI.

Image A digital certificate is used to associate a public key with an identity. Certificates include the certificate holder’s public key, serial number of the certificate, certificate holder’s distinguished name, certificate validity period, unique name of the certificate issuer, digital signature of the issuer, and signature algorithm identifier.


FYI: Viewing a Digital Certificate

If you are using an Apple Mac operating system, certificates are stored in the Keychain Access utility. If you are using a Microsoft Windows operating system, digital certificates are stored in the Windows certificate stores and can be viewed from within your browser. To view these certificates in Internet Explorer, go to the Internet Options Content tab and click the Certificates button. To view them in Firefox, go to Options, Advanced, Certificates tab.


Why Protect Cryptographic Keys?

As mentioned earlier in the chapter, the usefulness of a cryptographic system is entirely dependent on the secrecy and management of the key. This is so important that NIST has published a three-part document devoted to cryptographic key management guidance. SP 800-57: Recommendation for Key Management, Part 1: General (Revision 3) provides general guidance and best practices for the management of cryptographic keying material. Part 2: Best Practices for Key Management Organization provides guidance on policy and security planning requirements for U.S. government agencies. Part 3: Application-Specific Key Management Guidance provides guidance when using the cryptographic features of current systems. In the Overview of Part 1, NIST describes the importance of key management as follows: “The proper management of cryptographic keys is essential to the effective use of cryptography for security. Keys are analogous to the combination of a safe. If a safe combination is known to an adversary, the strongest safe provides no security against penetration. Similarly, poor key management may easily compromise strong algorithms. Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of mechanisms and protocols associated with keys, and the protection afforded to the keys. All keys need to be protected against modification, and secret and private keys need to be protected against unauthorized disclosure. Key management provides the foundation for the secure generation, storage, distribution, use, and destruction of keys.”

Best practices for key management include the following:

Image The key length should be long enough to provide the necessary level of protection.

Image Keys should be transmitted and stored by secure means.

Image Key values should be random, and the full spectrum of the keyspace should be used.

Image The key’s lifetime should correspond with the sensitivity of the data it is protecting.

Image Keys should be backed up in case of emergency. However, multiple copies of keys increase the chance of disclosure and compromise.

Image Keys should be properly destroyed when their lifetime ends.

Image Keys should never be presented in clear text.
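
Two of the practices above, drawing key values from the full keyspace at random and enforcing a sane key length, can be sketched as follows (the function name is illustrative, not a standard API):

```python
# Minimal sketch: keys must come from a cryptographically secure random
# source (CSPRNG), never from a predictable PRNG such as random.random().
import secrets

def generate_key(bits: int = 256) -> bytes:
    """Generate random keying material of the requested bit length."""
    if bits % 8 != 0 or bits < 128:
        raise ValueError("key length must be >= 128 bits and a multiple of 8")
    return secrets.token_bytes(bits // 8)

key = generate_key(256)
assert len(key) == 32   # 256 bits of keying material
```

Secure storage, transmission, backup, and destruction of the resulting value remain separate problems that the code above does not solve.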

Key management policy and standards should include assigned responsibility for key management, the nature of information to be protected, the classes of threats, the cryptographic protection mechanisms to be used, and the protection requirements for the key and associated processes.

Digital Certificate Compromise

Certification Authorities (CAs) have increasingly become targets for sophisticated cyber attacks. An attacker who breaches a CA to generate and obtain fraudulent certificates can then use the fraudulent certificates to impersonate an individual or organization. In July of 2012, NIST issued an ITL bulletin titled “Preparing for and Responding to Certification Authority Compromise and Fraudulent Certificate Issuance.” The bulletin primarily focuses on guidance for Certification and Registration Authorities. The bulletin does, however, include guidance for any organization impacted by the fraud.

The built-in defense against a fraudulently issued certificate is certificate revocation. When a rogue or fraudulent certificate is identified, the CA will issue and distribute a certificate revocation list (CRL). Alternatively, a browser may be configured to use the Online Certificate Status Protocol (OCSP) to obtain revocation status.


FYI: Small Business Note

Encryption keeps valuable data safe. Every organization, irrespective of size, should encrypt the following if there is any chance that legally protected or company confidential data will be stored or transmitted:

Image Mobile devices such as laptops, tablets, smartphones

Image Removable media such as USB drives and backup tapes

Image Internet traffic such as file transfer or email

Image Remote access to the company network

Image Wireless transmission

When creating the encryption key, make sure to use a long, random string of numbers, letters, and special characters.


Summary

Whether they are developed in house, purchased, or open source, companies rely on line-of-business applications. This reliance implies that the availability of those solutions must be protected to avoid severe losses in revenue, the integrity must be protected to avoid unauthorized modification, and the confidentiality must be protected to honor the public trust and maintain compliance with regulatory requirements.

Custom applications should be built with security in mind from the start. Adopting an SDLC methodology that integrates security considerations ensures that this objective is met. The SDLC provides a structured and standardized process for all phases of any system development effort. During the initiation phase, the need for a system is expressed and the purpose of the system is documented. During the development/acquisition phase, the system is designed, purchased, programmed, developed, or otherwise constructed. During the implementation phase, the system is tested, modified if necessary, retested if modified, and finally accepted. During the operational phase, the system is put into production. Monitoring, auditing, and testing should be ongoing. Activities conducted during the disposal phase ensure the orderly termination of the system, safeguarding vital system information, and migrating data processed by the system to a new system.

SDLC principles extend to COTS (commercial off-the-shelf) software as well as open source software. It is important to recognize the stages of software releases. The alpha phase is the initial release of software for testing. Beta phase indicates that the software is feature complete and the focus is usability testing. A release candidate (RC) is a hybrid of a beta and a final release version. General availability or “go live” is when the software has been made commercially available and is in general distribution. Alpha, beta, and RCs should never be implemented in a production environment. Over the course of time, publishers may release updates and security patches. Updates generally include enhancements and new features. Updates should be thoroughly tested before release to a production environment. Even tested applications should have a rollback strategy just in case the unexpected happens. Live data should never be used in a test environment; instead, de-identified or dummy data should be used.

The Open Web Application Security Project (OWASP) is an open community dedicated to enabling organizations to develop, purchase, and maintain applications that can be trusted.

The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. In 2013, OWASP rated injection flaws as the number-one software and database security issue. Injection is when untrusted data is sent to an interpreter as part of a command or query. Input and output validation minimizes injection vulnerabilities. Input validation is the process of validating all the input to an application before using it. This includes correct syntax, length, characters, and ranges. Output validation is the process of validating (and in some cases, masking) the output of a process before it is provided to the recipient.
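
The validation checks the paragraph lists—syntax, length, allowed characters, and range—can be sketched in a few lines. This is a hedged illustration; the function and field names are ours, not part of any framework:

```python
# Validate a single-digit product rating (1-5) BEFORE the value is used,
# checking length, character set/syntax, and range in turn.
import re

def validate_rating(raw: str) -> int:
    """Accept only a rating of 1-5 submitted as a single digit."""
    if len(raw) != 1:                        # length check
        raise ValueError("expected exactly one character")
    if not re.fullmatch(r"[0-9]", raw):      # character/syntax check
        raise ValueError("expected a digit")
    rating = int(raw)
    if not 1 <= rating <= 5:                 # range check
        raise ValueError("rating must be between 1 and 5")
    return rating

assert validate_rating("4") == 4
```

Because every character is checked against an explicit allow list before interpretation, injection payloads such as `' OR 1=1 --` are rejected at the first gate rather than reaching a database interpreter.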

Data at rest and in transit may require cryptographic protection. Three distinct goals are associated with cryptography: Data can be encrypted, which provides confidentiality. Data can be hashed, which provides integrity. Data can be digitally signed, which provides authenticity/nonrepudiation and integrity. Also, data can be encrypted and digitally signed, which provides for confidentiality, authentication, and integrity. Encryption is the conversion of plain text into what is known as cipher text using an algorithm called a cipher. Decryption, the inverse of encryption, is the process of turning cipher text back into readable plain text. Hashing is the process of creating a fixed-length value known as a fingerprint that represents the original text. A digital signature is a hash value (also known as a message digest) that has been encrypted with the sender’s private key.
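
The hashing property described above—a fixed-length fingerprint that changes completely when the input is tampered with—can be demonstrated with the standard SHA-256 function:

```python
# Any-length input yields a fixed-length fingerprint; even a one-character
# change produces a completely different digest, which is how hashing
# supports integrity checks.
import hashlib

digest1 = hashlib.sha256(b"Pay $100 to Alice").hexdigest()
digest2 = hashlib.sha256(b"Pay $900 to Alice").hexdigest()

assert len(digest1) == 64    # SHA-256 output is always 256 bits (64 hex chars)
assert digest1 != digest2    # tampering changes the fingerprint
```

In a digital signature scheme, it is this digest, not the full message, that is encrypted with the sender's private key, giving the recipient both integrity and nonrepudiation.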

A key is a value that specifies what part of the cryptographic algorithm to apply, in what order, and what variables to input. The keyspace is a large set of random values that the algorithm chooses from when it needs to make a key. Symmetric key algorithms use a single secret key, which must be shared in advance and kept private by both the sender and the receiver. Asymmetric key cryptography, also known as public key cryptography, uses two different but mathematically related keys known as public and private keys. A digital certificate is used to associate a public key with an identity.

A Public Key Infrastructure (PKI) is used to create, distribute, manage, and revoke asymmetric keys. A Certification Authority (CA) issues and maintains digital certificates. A Registration Authority (RA) performs the administrative functions, including verifying the identity of users and organizations requesting a digital certificate, renewing certificates, and revoking certificates. Client nodes are interfaces for users, devices, and applications to access PKI functions, including the requesting of certificates and other keying material. They may include cryptographic modules, software, and procedures necessary to provide user access to the PKI.

Information Systems Acquisition, Development, and Maintenance (ISADM) policies include SDLC, Application Development, and Key Management.

Test Your Skills

Multiple Choice Questions

1. When is the best time to think about security when building an application?

A. Build the application first and then add a layer of security.

B. At inception.

C. Start the application development phase, and when you reach the halfway point, you have enough of a basis to look at to decide where and how to set up the security elements.

D. No security needs to be developed inside of the code itself. It will be handled at the operating system level.

2. Which of the following statements best describes the purpose of the systems development lifecycle (SDLC)?

A. The purpose of the SDLC is to provide a framework for system development efforts.

B. The purpose of the SDLC is to provide a standardized process for system development efforts.

C. The purpose of the SDLC is to assign responsibility.

D. All of the above.

3. In which phase of the SDLC is the need for a system expressed and the purpose of the system documented?

A. The initiation phase

B. The implementation phase

C. The operational phase

D. The disposal phase

4. During which phase of the SDLC is the system accepted?

A. The initiation phase

B. The implementation phase

C. The operational phase

D. The disposal phase

5. Which of the following statements is true?

A. Retrofitting security controls to an application system after implementation is normal; this is when security controls should be added.

B. Retrofitting security controls to an application system after implementation is sometimes necessary based on testing and assessment results.

C. Retrofitting security controls to an application system after implementation is always a bad idea.

D. Retrofitting security controls to an application system after implementation is not necessary because security is handled at the operating system level.

6. Which phase of software release indicates that the software is feature complete?

A. Alpha

B. Beta

C. Release candidate

D. General availability

7. Which phase of software release is the initial release of software for testing?

A. Alpha

B. Beta

C. Release candidate

D. General availability

8. Which of the following statements best describes the difference between a security patch and an update?

A. Patches provide enhancements; updates fix security vulnerabilities.

B. Patches should be tested; updates do not need to be tested.

C. Patches fix security vulnerabilities; updates add features and functionality.

D. Patches cost money; updates are free.

9. The purpose of a rollback strategy is to ______________.

A. make backing up easier

B. return to a previous stable state in case problems occur

C. add functionality

D. protect data

10. Which of the following statements is true?

A. A test environment should always be the exact same as the live environment.

B. A test environment should be as cheap as possible no matter what.

C. A test environment should be as close to the live environment as possible.

D. A test environment should include live data for true emulation of the real-world setup.

11. Which of the following statements best describes when dummy data should be used?

A. Dummy data should be used in the production environment.

B. Dummy data should be used in the testing environment.

C. Dummy data should be used in both test and production environments.

D. Dummy data should not be used in either test or production environments.

12. Which of the following terms best describes the process of removing information that would identify the source or subject?

A. Detoxification

B. Dumbing down

C. Development

D. De-identification

13. Which of the following terms best describes the open framework designed to help organizations implement a strategy for secure software development?

A. OWASP

B. SAMM

C. NIST

D. ISO

14. Which of the following statements best describes an injection attack?

A. An injection attack occurs when untrusted data is sent to an interpreter as part of a command.

B. An injection attack occurs when trusted data is sent to an interpreter as part of a query.

C. An injection attack occurs when untrusted email is sent to a known third party.

D. An injection attack occurs when untrusted data is encapsulated.

15. Input validation is the process of ___________.

A. masking data

B. verifying data syntax

C. hashing input

D. trusting data

16. Which of the following types of data changes as updates become available?

A. Moving data

B. Mobile data

C. Dynamic data

D. Delta data

17. The act of limiting the characters that can be entered in a web form is known as ___________.

A. output validation

B. input validation

C. output testing

D. input testing

18. Which statement best describes a distinguishing feature of cipher text?

A. Cipher text is unreadable by a human.

B. Cipher text is unreadable by a machine.

C. Both A and B.

D. Neither A nor B.

19. Which term best describes the process of transforming plain text to cipher text?

A. Decryption

B. Hashing

C. Validating

D. Encryption

20. Which of the following statements is true?

A. Digital signatures guarantee confidentiality only.

B. Digital signatures guarantee integrity only.

C. Digital signatures guarantee integrity and nonrepudiation.

D. Digital signatures guarantee nonrepudiation only.

21. Hashing is used to ensure message integrity by ____________.

A. comparing hash values

B. encrypting data

C. encapsulating data

D. comparing algorithms and keys

22. When unauthorized data modification occurs, which of the following tenets of security is directly being threatened?

A. Confidentiality

B. Integrity

C. Availability

D. Authentication

23. Which of the following statements about encryption is true?

A. All encryption methods are equal: Just choose one and implement it.

B. The security of the encryption relies on the key.

C. Encryption is not needed for internal applications.

D. Encryption guarantees integrity and availability, but not confidentiality.

24. Which of the following statements about a hash function is true?

A. A hash function takes a variable-length input and turns it into a fixed-length output.

B. A hash function takes a variable-length input and turns it into a variable-length output.

C. A hash function takes a fixed-length input and turns it into a fixed-length output.

D. A hash function takes a fixed-length input and turns it into a variable-length output.

25. Which of the following values represents the number of available values in a 256-bit keyspace?

A. 2 × 2^256

B. 2 × 256

C. 256^2

D. 2^256

26. Which of the following statements is not true about a symmetric key algorithm?

A. Only one key is used.

B. It is computationally efficient.

C. The key must be publicly known.

D. 3DES is widely used.

27. The contents of a __________ include the issuer, subject, valid dates, and public key.

A. digital document

B. digital identity

C. digital thumbprint

D. digital certificate

28. Two different but mathematically related keys are referred to as ___________.

A. public and private keys

B. secret keys

C. shared keys

D. symmetric keys

29. In cryptography, which of the following is not publicly available?

A. Algorithm

B. Public key

C. Digital certificate

D. Symmetric key

30. A hash value that has been encrypted with the sender’s private key is known as a _________.

A. message digest

B. digital signature

C. digital certificate

D. cipher text

Exercises

Exercise 10.1: Building Security into Applications

1. Explain why security requirements should be considered at the beginning stages of a development project.

2. Who is responsible for ensuring that security requirements are defined?

3. In which phases of the SDLC should security be evaluated?

Exercise 10.2: Understanding Input Validation

1. Define input validation.

2. Describe the type of attack that is related to poor input validation.

3. In the following scenario, what should the input validation parameters be?

A class registration web form requires that students enter their current year. The entry options are numbers from 1 to 4 that represent the following: freshmen=1, sophomores=2, juniors=3, and seniors=4.

Exercise 10.3: Researching Software Releases

1. Find an example of commercially available software that is available as either a beta version or a release candidate.

2. Find an example of open source software that is available as either an alpha, beta, or release candidate.

3. For each, does the publisher include a disclaimer or warning?

Exercise 10.4: Learning About Cryptography

1. Access the National Security Agency’s CryptoKids website.

2. Play at least two of the games.

3. Explain what you learned.

Exercise 10.5: Understanding Updates and Systems Maintenance

1. Microsoft bundles feature and function updates and refers to them as “service packs.” Locate a recently released service pack.

2. Does the service pack have a rollback option?

3. Explain why a rollback strategy is important when upgrading an operating system or application.

Projects

Project 10.1: Creating a Secure App

You have obtained financing to design a mobile device app that integrates with your school’s student portal so that students can easily check their grades from anywhere.

1. Create a list of security concerns. For each concern, indicate if the issue is related to confidentiality, integrity, availability (CIA), or any combination thereof.

2. Create a project plan using the SDLC framework as a guide. Describe your expectations for each phase. Be sure to include roles and responsibilities.

3. Research and recommend an independent security firm to test your application. Explain why you chose them.

Project 10.2: Researching the Open Web Application Security Project (OWASP)

The OWASP Top Ten has become a must-read resource. Go to https://www.owasp.org and access the 2013 Top Ten Web Application report.

1. Read the entire report.

2. Write a memo addressed to Executive Management on why they should read the report. Include in your memo what OWASP means by “It’s About Risks, Not Weaknesses” on page 20.

3. Write a second memo addressed to developers and programmers on why they should read the report. Include in your memo references to other OWASP resources that would be of value to them.

Project 10.3: Researching Digital Certificates

You have been tasked with obtaining an extended validation SSL digital certificate for an online shopping portal.

1. Research and choose an issuing CA. Explain why you chose the specific CA.

2. Describe the process and requirements for obtaining a digital certificate.

3. Who in the organization should be tasked with installing the certificate and why?

Project 10.4: Researching a Fraudulent Certificate Incident

1. Who is TURKTRUST?

2. Explain what happened and why this is a potentially dangerous situation.

3. Research this event. Did any other organizations issue advisories?

References

Regulations Cited

16 CFR Part 314: Standards for Safeguarding Customer Information; Final Rule, Federal Register, accessed 05/2013, http://ithandbook.ffiec.gov/media/resources/3337/joisafeguard_customer_info_final_rule.pdf.

“201 Cmr 17.00: Standards for the Protection of Personal Information of Residents of the Commonwealth,” official website of the Office of Consumer Affairs & Business Regulation (OCABR), accessed 05/2013, www.mass.gov/ocabr/docs/idtheft/201cmr1700reg.pdf.

“HIPAA Security Rule,” official website of the Department of Health and Human Services, accessed 05/2013, www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/.

State of Nevada, “Chapter 603A—Security of Personal Information,” accessed 08/2013, www.leg.state.nv.us/NRS/NRS-603A.html.

State of Washington, “HB 2574, An Act Relating to Securing Personal Information Accessible Through the Internet,” accessed 08/2013, apps.leg.wa.gov/documents/billdocs/2007-08/Pdf/Bills/.../2574.pdf.

Other References

“Certificate,” Microsoft Technet, accessed 08/2013, http://technet.microsoft.com/en-us/library/cc700805.aspx.

“Encryption Explained,” Indiana University Information Security & Policy, accessed 08/2013, http://protect.iu.edu/cybersecurity/data/encryption.

Microsoft Corp. “Fraudulent Digital Certificates Could Allow Spoofing,” Microsoft Security Advisory (2798897), January 3, 2013, accessed 08/2013, http://technet.microsoft.com/en-us/security/advisory/2798897.

Kak, Avi, “Lecture 15: Hashing for Message Authentication, Lecture Notes on Computer and Network Security,” Purdue University, April 28, 2013. https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture15.pdf

“Open SAMM: Software Assurance Maturity Model,” OWASP Wiki, Creative Commons (CC) Attribution Share-Alike 3.0 License, accessed 08/2013, www.owasp.org/index.php/Category:Software_Assurance_Maturity_Model.

“OpenSAMM: Software Assurance Maturity Model,” accessed 08/2013, www.opensamm.org/.

“Public-key Cryptography” (redirected from Asymmetric key Algorithm), Wikipedia, accessed 08/2013, http://en.wikipedia.org/wiki/Asymmetric_key_algorithm.

“RFC 6960, X.509 Internet Public Key Infrastructure Online Certificate Status Protocol—OCSP,” June 2013, Internet Engineering Task Force, accessed 08/2013, http://tools.ietf.org/html/rfc6960.

“RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile,” May 2008, Internet Engineering Task Force, accessed 08/2013, http://tools.ietf.org/html/rfc5280.

“Top 10 2013: The Ten Most Critical Web Application Security Risks,” The OWASP Foundation, Creative Commons (CC) Attribution Share-Alike 3.0 License, accessed 08/2013, www.owasp.org/index.php/Category:OWASP_Top_Ten_Project.

“The Case for Email Encryption, Required by Law,” ZixCorp white paper, accessed 08/2013, www.zixcorp.com/.../case.../Case_for_Email_Encry_Required_by_Law .....

Turner, P., W. Polk, and E. Barker. “Preparing for and Responding to Certification Authority Compromise and Fraudulent Certificate Issuance,” NIST, ITL Bulletin, July 2012.
