Chapter 10

Information Systems Acquisition, Development, and Maintenance

Chapter Objectives

After reading this chapter and completing the exercises, you will be able to do the following:

  • Understand the rationale for the systems development life cycle (SDLC).

  • Recognize the stages of software releases.

  • Appreciate the importance of developing secure code.

  • Be aware of the most common application development security faults.

  • Explain cryptographic components.

  • Develop policies related to systems acquisition, development, and maintenance.

Section 14 of ISO 27002:2013, “Information Systems Acquisition, Development, and Maintenance (ISADM),” focuses on the security requirements of information systems, applications, and code from conception to destruction. This sequence is referred to as the systems development life cycle (SDLC). Particular emphasis is put on vulnerability management to ensure integrity, cryptographic controls to ensure integrity and confidentiality, and security of system files to ensure confidentiality, integrity, and availability (CIA). The domain constructs apply to in-house, outsourced, and commercially developed systems, applications, and code. Section 10 of ISO 27002:2013, “Cryptography,” focuses on proper and effective use of cryptography to protect the confidentiality, authenticity, and/or integrity of information. Because cryptographic protection mechanisms are closely related to information systems development and maintenance, cryptography is included in this chapter.

Of all the security domains we have discussed so far, this one has the most widespread implications. Most cybercrime is opportunistic, meaning that criminals take advantage of system vulnerabilities. Information systems, applications, and code that do not have embedded security controls expose the organization to undue risk. Consider a company that relies on a web-based application linked to a back-end database. If the code used to create the web-based application was not thoroughly vetted, it may contain vulnerabilities that would allow a hacker to bring down the application with a denial of service (DoS) attack, run code on the server hosting the application, or even trick the database into publishing classified information. These events harm an organization’s reputation, create compliance and legal issues, and significantly impact the bottom line.

FYI: ISO/IEC 27002:2013 and NIST Guidance

Section 10 of ISO 27002:2013, the cryptography domain, focuses on proper and effective use of cryptography to protect the confidentiality, authenticity, and/or integrity of information. Section 14 of ISO 27002:2013, the ISADM domain, focuses on the security requirements of information systems, applications, and code, from conception to destruction.

Corresponding NIST guidance is provided in the following documents:

  • SP 800-23: “Guidelines to Federal Organizations on Security Assurance and Acquisition/Use of Tested/Evaluated Products”

  • SP 800-57: “Recommendations for Key Management—Part 1: General (Revision 3)”

  • SP 800-57: “Recommendations for Key Management—Part 2: Best Practices for Key Management Organization”

  • SP 800-57: “Recommendations for Key Management—Part 3: Application-Specific Key Management Guidance”

  • SP 800-64: “Security Considerations in the System Development Life Cycle”

  • SP 800-111: “Guide to Storage Encryption Technologies for End User Devices”

System Security Requirements

Security should be a priority objective during the design and acquisition phases of any new information system, application, or code development. Attempting to retrofit security is expensive, resource-intensive, and all too often does not work. Productivity requirements and/or the rush to market often preclude a thorough security analysis, which is unfortunate because it has been proven time and time again that early-stage identification of security requirements is both cost-effective and efficient. Utilizing a structured development process increases the probability that security objectives will be achieved.

What Is SDLC?

The systems development life cycle (SDLC) provides a standardized process for all phases of any system development or acquisition effort. Figure 10-1 shows the SDLC phases defined by NIST in their Special Publication (SP) 800-64 Revision 2, “Security Considerations in the System Development Life Cycle.”

A figure shows the five phases of an SDLC. From left to right the phases read, Initiation, Development/Acquisition, Implementation/Assessment, Operations/Maintenance, and Disposal.

FIGURE 10-1 The Five Phases of SDLC

  • During the initiation phase, the need for a system is expressed, and the purpose of the system is documented.

  • During the development/acquisition phase, the system is designed, purchased, programmed, developed, or otherwise constructed.

  • The implementation/assessment phase includes system testing, modification if necessary, retesting if modified, and finally acceptance.

  • During the operations/maintenance phase, the system is put into production. The system is almost always modified by the addition of hardware and software and by numerous other events. Monitoring, auditing, and testing should be ongoing.

  • Activities conducted during the disposal phase ensure the orderly termination of the system, safeguarding vital system information, and migrating data processed by the system to a new system.

Each phase includes a minimum set of tasks needed to effectively incorporate security in the system development process. Phases may continue to be repeated throughout a system’s life prior to disposal.

Initiation Phase

During the initiation phase, the organization establishes the need for a system and documents its purpose. Security planning must begin in the initiation phase. The information to be processed, transmitted, or stored is evaluated for CIA security requirements, as well as the security and criticality requirements of the information system. It is essential that all stakeholders have a common understanding of the security considerations. This early involvement will enable the developers or purchasing managers to plan security requirements and associated constraints into the project. It also reminds project leaders that many decisions being made have security implications that should be weighed appropriately as the project continues. Other tasks that should be addressed in the initiation phase include assignment of roles and responsibilities, identification of compliance requirements, decisions on security metrics and testing, and the systems acceptance process.

Development/Acquisition Phase

During this phase, the system is designed, purchased, programmed, developed, or otherwise constructed. A key security activity in this phase is conducting a risk assessment. In addition, the organization should analyze security requirements, perform functional and security testing, and design the security architecture. Both the ISO standard and NIST emphasize the importance of conducting risk assessments to evaluate the security requirements for new systems and upgrades. The aim is to identify potential risks associated with the project and to use this information to select baseline security controls. The risk assessment process is iterative and needs to be repeated whenever a new functional requirement is introduced. As they are determined, security control requirements become part of the project security plan. Security controls must be tested to ensure they perform as intended.

Implementation/Assessment Phase

In the implementation phase, the organization configures and enables system security features, tests the functionality of these features, installs or implements the system, and obtains a formal authorization to operate the system. Design reviews and system tests should be performed before placing the system into operation to ensure that it meets all required security specifications. It is important that adequate time be built into the project plan to address any findings, modify the system or software, and retest.

The final task in this phase is authorization. It is the responsibility of the system owner or designee to green light the implementation and allow the system to be placed in production mode. In the federal government, this process is known as certification and accreditation (C&A). OMB Circular A-130 requires the security authorization of an information system to process, store, or transmit information. The authorizing official relies primarily on the completed system security plan, the inherent risk as determined by the risk assessment, and the security test results.

Operations/Maintenance Phase

In this phase, systems and products are in place and operating, enhancements and/or modifications to the system are developed and tested, and hardware and software components are added or replaced. Configuration management and change control processes are essential to ensure that required security controls are maintained. The organization should continuously monitor performance of the system to ensure that it is consistent with pre-established user and security requirements, and that needed system modifications are incorporated. Periodic testing and evaluation of the security controls in an information system must be conducted to ensure continued effectiveness and to identify any new vulnerabilities that may have been introduced or recently discovered. Vulnerabilities identified after implementation cannot be ignored. Depending on the severity of the finding, it may be possible to implement compensating controls while fixes are being developed. There may be situations that require the system to be taken offline until the vulnerabilities can be mitigated.

Disposal Phase

Often, there is no definitive end or retirement of an information system or code. Systems normally evolve or transition to the next generation because of changing requirements or improvements in technology. System security plans should continually evolve with the system. Much of the environmental, management, and operational information for the original system should still be relevant and useful when the organization develops the security plan for the follow-on system. When the time does come to discard system information, hardware, and software, it must not result in the unauthorized disclosure of protected or confidential data. Disposal activities, such as archiving information, sanitizing media, and disposing of hardware components, must be done in accordance with the organization’s destruction and disposal requirements and policies.

In Practice

Systems Development Life Cycle (SDLC) Policy

Synopsis: Ensure a structured and standardized process for all phases of system development/acquisition efforts, which includes security considerations, requirements, and testing.

Policy Statement:

  • The Office of Information Technology is responsible for adopting, implementing, and requiring compliance with an SDLC process and workflow. The SDLC must define initiation, development/acquisition, implementation, operations, and disposal requirements.

  • At each phase, security requirements must be evaluated and, as appropriate, security controls tested.

  • The system owner, in conjunction with the Office of Information Security, is responsible for defining system security requirements.

  • The system owner, in conjunction with the Office of Information Security, is responsible for authorizing production systems prior to implementation.

  • If necessary, independent experts may be brought in to evaluate the project or any component thereof.

What About Commercially Available or Open Source Software?

SDLC principles apply to commercially available software—sometimes referred to as commercial off-the-shelf software (COTS)—and to open source software. The primary difference is that the development is not done in-house. Commercial software should be evaluated to make sure it meets or exceeds the organization’s security requirements. Because software is often released in stages, it is important to be aware of and understand the release stages. Only stable and tested software releases should be deployed on production servers to protect data availability and data integrity. Operating system and application updates should not be deployed until they have been thoroughly tested in a lab environment and declared safe to be released in a production environment. After installation, all software and applications should be included in internal vulnerability testing. Open source software included in in-house applications or any products created by the organization should be registered in a central database for the purpose of licensing requirements and disclosures, as well as to track any vulnerabilities that affect such open source components or software.

Software Releases

Software typically moves through four release stages:

  • Alpha: The initial release of software for testing. Alpha software can be unstable and can cause crashes or data loss. External availability of alpha software is uncommon in proprietary software; open source projects, however, often have publicly available alpha versions, frequently distributed as the raw source code of the software.

  • Beta: Indicates that the software is feature complete and that the focus is usability testing.

  • Release candidate (RC): A hybrid of a beta and a final release version. It has the potential to become the final release unless significant issues are identified.

  • General availability (go live): The software has been made commercially available and is in general distribution.

Alpha, beta, and RC releases have a tendency to be unstable and unpredictable and are not suitable for a production environment. This unpredictability can have devastating consequences, including data exposure, data loss, data corruption, and unplanned downtime.

Software Updates

During its supported lifetime, software is sometimes updated. Updates are different from security patches. Security patches are designed to address a specific vulnerability and are applied in accordance with the patch management policy. Updates generally include functional enhancements and new features. Updates should be subject to the organization’s change management process and should be thoroughly tested before being implemented in a production environment. This is true for both operating systems and applications. For example, a new system utility might work perfectly with 99% of applications, but what if a critical line-of-business application deployed on the same server falls in the remaining 1%? This can have a disastrous effect on the availability, and potentially on the integrity, of the data. This risk, however minimal it may appear, must not be ignored. Even when an update has been thoroughly tested, organizations still need to prepare for the unforeseen and make sure they have a documented rollback strategy to return to the previous stable state in case problems occur.

If an update requires a system reboot, it should be delayed until the reboot will have the least impact on business productivity. Typically, this means after hours or on weekends, although if a company is international and has users who rely on data located in different time zones, this can get a bit tricky. If an update does not require a system reboot, but will still severely impact the level of system performance, it should also be delayed until it will have the least impact on business productivity.

Security vulnerability patching for commercial and open source software is one of the most important processes of any organization. An organization may use the following technologies and systems to maintain an appropriate vulnerability management program:

  • Vulnerability management software and scanners (such as Qualys, Nexpose, Nessus, etc.)

  • Software composition analysis tools (such as BlackDuck Hub, Synopsys Protecode (formerly known as AppCheck), FlexNet Code Insight (formerly known as Palamida), SourceClear, etc.)

  • Security vulnerability feeds (such as NIST’s National Vulnerability Database (NVD), VulnDB, etc.)

The Testing Environment

The worst-case scenario for a testing environment is that a company does not have one and is willing to have production servers double as test servers. The best-case scenario is that the testing environment is set up as a mirror image of the production environment, software and hardware included. The closer to the production environment the test environment is, the more the test results can be trusted. A cost/benefit analysis that takes into consideration the probability and associated costs of downtime, data loss, and integrity loss will determine how much should be invested in a test or staging environment.

Protecting Test Data

Consider a medical practice with an electronic medical records (EMR) database replete with patient information. Imagine the security measures that have been put in place to make sure the CIA of the data is protected. Because this database is pretty much the lifeblood of this practice and is protected under law, it is to be expected that those security measures are extensive. Live data should never be used in a test environment because it is highly unlikely that the same level of data protection has been implemented, and exposure of protected data would be a serious violation of patient confidentiality and regulatory requirements. Instead, either de-identified data or dummy data should be used. De-identification is the process of removing information that would identify the source or subject of the data. Strategies include deleting or masking the name, social security number, date of birth, and demographics. Dummy data is, in essence, fictional. For example, rather than using actual patient data to test an EMR database, the medical practice would enter fake patient data into the system. That way, the application could be tested with no violation of confidentiality.
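The de-identification pass described above can be sketched in a few lines of Python. The record layout, field names, and masking token here are hypothetical; a real masking policy and schema would come from the organization's own requirements and applicable regulations:

```python
import copy

# Fields that directly identify a patient (hypothetical schema).
DIRECT_IDENTIFIERS = ["name", "ssn", "date_of_birth", "address"]

def de_identify(record):
    """Return a copy of the record with direct identifiers masked."""
    safe = copy.deepcopy(record)  # never mutate the live record
    for field in DIRECT_IDENTIFIERS:
        if field in safe:
            safe[field] = "***REDACTED***"
    return safe

patient = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "date_of_birth": "1970-01-01",
    "address": "1 Main St",
    "diagnosis_code": "E11.9",  # non-identifying clinical data is kept
}
test_record = de_identify(patient)
```

A dummy-data approach would go further and generate entirely fictional records rather than deriving them from live data.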

In Practice

System Implementation and Update Policy

Synopsis: Define the requirements for the implementation and maintenance of commercial and open source software.

Policy Statement:

  • Operating systems and applications (collectively referred to as “system”) implementation and updates must follow the company’s change management process.

  • Without exception, alpha, beta, or prerelease applications must not be deployed on production systems.

  • It is the joint responsibility of the Office of Information Security and the Office of Information Technology to test system implementation and updates prior to deployment in the production environment.

  • The Office of Information Technology is responsible for budgeting for and maintaining a test environment that is representative of the production environment.

  • Without exception, data classified as “protected” must not be used in a test environment unless it has been de-identified. It is the responsibility of the Office of Information Security to approve the de-identification schema.

FYI: The Open Software Assurance Maturity Model

The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. The resources provided by SAMM (www.opensamm.org/) will aid in the following:

  • Evaluating an organization’s existing software security practices

  • Building a balanced software security assurance program in well-defined iterations

  • Demonstrating concrete improvements to a security assurance program

  • Defining and measuring security-related activities throughout an organization

SAMM was defined with flexibility in mind so that it can be utilized by small, medium, and large organizations using any style of development. Additionally, this model can be applied organizationwide, for a single line of business, or even for an individual project. Beyond these traits, SAMM was built on the following principles:

  • An organization’s behavior changes slowly over time. A successful software security program should be specified in small iterations that deliver tangible assurance gains while incrementally working toward long-term goals.

  • There is no single recipe that works for all organizations. A software security framework must be flexible and allow organizations to tailor their choices based on their risk tolerance and the way in which they build and use software.

  • Guidance related to security activities must be prescriptive. All the steps in building and assessing an assurance program should be simple, well defined, and measurable. This model also provides roadmap templates for common types of organizations.

Secure Code

The two types of code are insecure code (sometimes referred to as “sloppy code”) and secure code. Insecure code is sometimes the result of an amateurish effort, but more often than not, it reflects a flawed process. Secure code, however, is always the result of a deliberate process that prioritized security from the beginning of the design phase onward. It is important to note that software developers and programmers are human and will always make mistakes. Having a good secure code program and ways to verify and mitigate the creation of insecure code is paramount for any organization. Examples of mitigation and detection mechanisms include source code review and static analysis.

The Open Web Application Security Project (OWASP)

Deploying secure code is the responsibility of the system owner. A number of secure coding resources are available for system owners, project managers, developers, programmers, and information security professionals. One of the most well-respected and widely utilized is OWASP (owasp.org). The Open Web Application Security Project (OWASP) is an open community dedicated to enabling organizations to develop, purchase, and maintain applications that can be trusted. Everyone is free to participate in OWASP, and all its materials are available under a free and open software license. On a three-year cycle, beginning in 2004, OWASP releases the OWASP Top Ten. The OWASP Top Ten (https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project) represents a broad consensus about what the most critical web application security flaws are. The information is applicable to a spectrum of nonweb applications, operating systems, and databases. Project members include a variety of security experts from around the world who have shared their expertise to produce this list.

OWASP has also created source code analysis tools, often referred to as Static Application Security Testing (SAST) tools, which are designed to analyze source code and/or compiled versions of code to help find security flaws.

FYI: The Common Weakness Enumeration

MITRE led the creation of the Common Weakness Enumeration (CWE), which is a community-driven list of common security weaknesses. Its main purpose is to provide common language and a baseline for weakness identification, mitigation, and prevention efforts. Many organizations use CWE to measure and understand the common security problems introduced in their software and hardware and how to mitigate them. You can obtain more information about CWE at https://cwe.mitre.org.

Numerous types of vulnerabilities exist; the sections that follow examine some of the most common.

What Is Injection?

The most common web application security flaw is the failure to properly validate input from the client or environment. OWASP defines injection as when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing an unintended command or accessing data without proper authorization. The attacker can be anyone who can send data to the systems, including internal users, external users, and administrators. The attack is simply a data string designed to exploit the code vulnerability. Injection flaws are particularly common in older code. A successful attack can result in data loss, corruption, compromise, or a denial of service condition. Preventing injection requires keeping untrusted data separate from commands and queries. The following are examples of injection vulnerabilities:

  • Code Injection

  • Command Injection

  • Comment Injection Attack

  • Content Spoofing

  • Cross-site Scripting (XSS)

  • Custom Special Character Injection

  • Function Injection

  • Resource Injection

  • Server-Side Includes (SSI) Injection

  • Special Element Injection

  • SQL Injection

  • XPATH Injection
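The classic case on this list is SQL injection. The following sketch, using Python's built-in sqlite3 module with an in-memory database and hypothetical table and column names, shows how concatenating untrusted data into a query changes its meaning, and how a parameterized query keeps data separate from the command:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Hostile input: the quote breaks out of the string literal, and the
# OR clause makes the WHERE condition always true.
user_input = "nobody' OR '1'='1"

# VULNERABLE: untrusted data is concatenated directly into the command.
query = "SELECT username, role FROM users WHERE username = '" + user_input + "'"
leaked = conn.execute(query).fetchall()  # returns every row in the table

# SAFE: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT username, role FROM users WHERE username = ?", (user_input,)
).fetchall()  # returns no rows
```

The same separation-of-data-and-command principle underlies the defenses for the other injection variants listed above.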

Input Validation

Input validation is the process of validating all the input to an application before using it. This includes correct syntax, length, characters, and ranges. Consider a web page with a simple form that contains fields corresponding to your physical address information, such as street name, ZIP code, and so on. After you click the Submit button, the information you entered in the fields is sent to the web server and entered into a back-end database. The objective of input validation is to evaluate the format of entered information and, when appropriate, deny the input. To continue our example, let’s focus on the ZIP code field. ZIP codes consist of numbers only, and the basic ones include only five digits. Input validation would look at how many and what type of characters are entered in the field. In this case, the first section of the ZIP code field would require five numeric characters. This limitation would prevent the user from entering more or less than five characters as well as nonnumeric characters. This strategy is known as whitelist or positive validation.
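The ZIP code check described above can be sketched as whitelist (positive) validation in Python. The five-digit format is the assumption here; a real form would also need to handle the ZIP+4 variant:

```python
import re

# Whitelist validation: accept exactly five ASCII digits, reject everything else.
ZIP_PATTERN = re.compile(r"[0-9]{5}")

def is_valid_zip(value: str) -> bool:
    # fullmatch ensures the whole string matches, not just a prefix.
    return ZIP_PATTERN.fullmatch(value) is not None
```

Anything that is not exactly five digits, including injected quote characters or SQL fragments, is rejected before it ever reaches the back-end database.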

You may wonder, why bother to go through all this? Who cares if a user sends the wrong ZIP code? Who cares if the information entered in the ZIP code field includes letters and/or ASCII characters? Hackers care. Hackers attempt to pass code in those fields to see how the database will react. They want to see if they can bring down the application (DoS attack against that application), bring down the server on which it resides (DoS against the server, and therefore against all the applications that reside on that server), or run code on the target server to manipulate or publish sensitive data. Proper input validation is therefore a way to limit the ability of a hacker to try to abuse an application system.

Dynamic Data Verification

Many application systems are designed to rely on outside parameter tables for dynamic data. Dynamic data is defined as data that changes as updates become available—for example, an e-commerce application that automatically calculates sales tax based on the ZIP code entered. The process of checking that the sales tax rate entered is indeed the one that matches the state entered by the customer is another form of input validation. This is a lot harder to track than when the data input is clearly wrong, such as when a letter is entered in a ZIP code field.

Dynamic data is used by numerous application systems. A simple example is the exchange rate for a particular currency. These values continually change, and using the correct value is critical. If the transaction involves a large sum, the difference can translate into a fair amount of money! Data validation extends to verification that the business rule is also correct.
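This kind of dynamic data verification can be sketched as a cross-check against an authoritative parameter table. The table contents, state codes, and rates below are illustrative only:

```python
# Hypothetical parameter table mapping state codes to sales tax rates.
TAX_RATES = {"MA": 0.0625, "NH": 0.0, "CT": 0.0635}

def verified_tax_rate(state: str, submitted_rate: float) -> float:
    """Validate a submitted rate against the authoritative table."""
    expected = TAX_RATES.get(state)
    if expected is None:
        raise ValueError(f"unknown state: {state}")
    # Reject any rate that does not match the current parameter table.
    if abs(submitted_rate - expected) > 1e-9:
        raise ValueError(f"rate mismatch for {state}: got {submitted_rate}")
    return expected
```

The business-rule check (does this rate match this state?) is what catches errors that simple format validation would miss.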

Output Validation

Output validation is the process of validating (and in some cases, masking) the output of a process before it is provided to the recipient. An example is substituting asterisks for numbers on a credit card receipt. Output validation controls what information is exposed or provided. You need to be aware of output validation, however, especially as it relates to hacker discovery techniques. Hackers look for clues and then use this information as part of the footprinting process. One of the first things a hacker looks to learn about a targeted application is how it reacts to systematic abuse of the interface. A hacker will learn a lot about how the application reacts to errors if the developers did not run output validation tests prior to deployment. They may, for example, learn that a certain application is vulnerable to SQL injection attacks, buffer overflow attacks, and so on. The answer an application gives about an error is potentially a pointer that can lead to vulnerability, and a hacker will try to make that application “talk” to better customize the attack.
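The credit card receipt example can be sketched as a simple masking routine. The function name and format are illustrative, not a standard API:

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number, keeping only the last four digits."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return "*" * (len(digits) - 4) + digits[-4:]

receipt_line = mask_pan("4111 1111 1111 1234")  # "************1234"
```

The full account number never appears in the output, so a lost or discarded receipt exposes nothing useful.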

Developers test applications by feeding erroneous data into the interface to see how it reacts and what it reveals. This feedback is used to modify the code with the objective of producing a secure application. The more time spent on testing, the less likely hackers will gain the advantage.

Runtime Defenses and Address Randomization

Several runtime defenses and address randomization techniques exist to prevent threat actors from achieving code execution even if a buffer overflow (stack- or heap-based) takes place. The most popular technique is address space layout randomization (ASLR). ASLR was created to prevent exploitation of memory corruption vulnerabilities by randomly arranging the address space positions of key data areas of a process. This randomization includes the base of the executable and the positions of the stack, the heap, and shared libraries.

Another related technique is the position-independent executable (PIE), which provides a random base address for the main binary being executed. PIE is typically used for network-facing daemons. A related implementation, kernel address space layout randomization (KASLR), provides address space randomization to a running Linux kernel image by randomizing where the kernel code is placed at boot time.

Why Is Broken Authentication and Session Management Important?

If session management assets such as user credentials and session IDs are not properly protected, the session can be hijacked or taken over by a malicious intruder. When authentication credentials are stored or transmitted in clear text, or when credentials can be guessed or overwritten through weak account management functions (for example, account creation, change password, recover password, weak session IDs), the identity of the authorized user can be impersonated. If session IDs are exposed in the URL, do not time out, or are not invalidated after successful logoff, malicious intruders have the opportunity to continue an authenticated session. A critical security design requirement must be strong authentication and session management controls. A common control for protecting authentication credentials and session IDs is encryption. We discussed authentication in Chapter 9, “Access Control Management.” We examine encryption and the field of cryptography in the next section of this chapter.
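Two of the controls described above, strong session IDs and careful comparison of presented credentials, can be sketched with Python's standard library. This uses `secrets` for cryptographically strong tokens and `hmac.compare_digest` for constant-time comparison; cookie handling, timeouts, and invalidation on logoff are omitted:

```python
import hmac
import secrets

def new_session_id() -> str:
    # 32 random bytes (~256 bits) from the OS CSPRNG; infeasible to guess,
    # unlike sequential or timestamp-derived session IDs.
    return secrets.token_urlsafe(32)

def session_matches(stored: str, presented: str) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(stored, presented)

stored_id = new_session_id()
```

A complete design would also set an expiration on the server side and invalidate the ID after logoff, so a captured ID cannot be replayed indefinitely.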

In Practice

Application Development Policy

Synopsis: Define code and application development security requirements.

Policy Statement:

  • System owners are responsible for oversight of secure code development.

  • Security requirements must be defined and documented during the application development initiation phase.

  • Code development will be done in accordance with industry best practices.

  • Developers will be provided with adequate training, resources, and time.

  • At the discretion of the system owner and with the approval of the Office of Information Security, third parties may be engaged to design, develop, and test internal applications.

  • All code developed or customized must be tested and validated during development, prior to release, and whenever a change is implemented.

  • The Office of Information Security is responsible for certifying the results of testing and accreditation to move to the next phase.

Cryptography

The art and science of writing secret information is called cryptography. The origin of the term involves the Greek words kryptos, meaning “hidden,” and graphia, meaning “writing.” Three distinct goals are associated with cryptography:

  • Confidentiality: Unauthorized parties cannot access the data. Data can be encrypted, which provides confidentiality.

  • Integrity: Assurance is provided that the data was not modified. Data can be hashed, which provides integrity.

  • Authenticity/nonrepudiation: The source of the data is validated. Data can be digitally signed, which ensures authentication/nonrepudiation and integrity.

Data can be encrypted and digitally signed, which provides for confidentiality, authentication, and integrity.

Encryption is the conversion of plain text into what is known as cipher text, using an algorithm called a cipher. Cipher text is text that is unreadable by a human or computer. Literally hundreds of encryption algorithms are available, and there are likely many more that are proprietary and used for special purposes, such as for governmental use and national security.

Common methods that ciphers use include the following:

  • Substitution: This type of cipher substitutes one character for another.

  • Polyalphabetic: This is similar to substitution, but instead of using a single alphabet, it can use multiple alphabets and switch between them by some trigger character in the encoded message.

  • Transposition: This method uses many different options, including the rearrangement of letters. For example, if we have the message “This is secret,” we could write it out (top to bottom, left to right) as shown in Figure 10-2.

    T S S R
    H I E E
    I S C T

    FIGURE 10-2 Transposition Example

We then encrypt it as RETCSIHTSSEI by starting at the top-right corner and spiraling inward clockwise. To encrypt or decrypt the message correctly, the recipient needs the correct key: in this case, the grid dimensions and the spiral pattern.
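The write-in/read-out just described can be reproduced in a few lines of Python (an illustrative sketch of this classroom cipher, not production cryptography; `transposition_encrypt` is a hypothetical helper name):

```python
def transposition_encrypt(plaintext: str, rows: int = 3) -> str:
    # Write the message into a grid column by column (top to bottom,
    # left to right), as in Figure 10-2.
    text = plaintext.replace(" ", "").upper()
    cols = len(text) // rows
    grid = [[text[c * rows + r] for c in range(cols)] for r in range(rows)]

    # Read the grid back in a clockwise spiral, starting at the top right.
    top, bottom, left, right = 0, rows - 1, 0, cols - 1
    out = []
    while top <= bottom and left <= right:
        for r in range(top, bottom + 1):            # down the right column
            out.append(grid[r][right])
        right -= 1
        if top <= bottom:                           # leftward along the bottom
            for c in range(right, left - 1, -1):
                out.append(grid[bottom][c])
            bottom -= 1
        if left <= right:                           # up the left column
            for r in range(bottom, top - 1, -1):
                out.append(grid[r][left])
            left += 1
        if top <= bottom:                           # rightward along the top
            for c in range(left, right + 1):
                out.append(grid[top][c])
            top += 1
    return "".join(out)

print(transposition_encrypt("This is secret"))
```

Running it on “This is secret” reproduces the cipher text RETCSIHTSSEI from the example.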

Decryption, the inverse of encryption, is the process of turning cipher text back into readable plain text. Encryption and decryption require the use of a secret key. The key is a value that specifies what part of the algorithm to apply, in what order, and what variables to input. Similar to authentication passwords, it is critical to use a strong key that cannot be discovered and to protect the key from unauthorized access. Protecting the key is generally referred to as key management. We examine the use of symmetric and asymmetric keys, as well as key management, later in this chapter.
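The relationship between encryption, decryption, and the key can be illustrated with a toy XOR cipher. This sketch is for intuition only; XOR with a short repeating key is trivially broken and must never be used to protect real data:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the input with the repeating key. Applying the
    # same function twice with the same key restores the original, so
    # this one routine both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plain = b"This is secret"
key = b"k3y!"

cipher = xor_cipher(plain, key)       # unreadable without the key
recovered = xor_cipher(cipher, key)   # decryption inverts encryption
assert recovered == plain
```

The algorithm (XOR with a repeating key) is public; only the key is secret. The same division of labor holds for real ciphers such as AES.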

Ensuring that a message has not been changed in any way during transmission is referred to as message integrity. Hashing is the process of creating a numeric value that represents the original text. A hash function (such as SHA or MD5) takes a variable-size input and produces a fixed-size output. The output is referred to as a hash value, message digest, or fingerprint. Unlike encryption, hashing is a one-way process: the hash value is never turned back into plain text. If the original data has not changed, the hash function will always produce the same value, so comparing the values confirms the integrity of the message. Used alone, hashing provides message integrity, not confidentiality or authentication.
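This behavior is easy to observe with Python's standard hashlib module; whatever the input length, SHA-256 always yields a 256-bit (64 hex character) digest:

```python
import hashlib

message = b"Wire 500 dollars to account 12345"
digest = hashlib.sha256(message).hexdigest()

# The digest is always 64 hex characters (256 bits), regardless of input size.
assert len(digest) == 64
assert len(hashlib.sha256(b"x" * 100_000).hexdigest()) == 64

# The same input always produces the same value...
assert hashlib.sha256(message).hexdigest() == digest
# ...while even a one-byte change produces a completely different one.
assert hashlib.sha256(message + b"!").hexdigest() != digest
```

Comparing the digest computed by the sender with one computed independently by the receiver confirms the message was not altered in transit.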

A digital signature is a hash value (message digest) that has been encrypted with the sender’s private key. The recipient decrypts the hash with the corresponding public key; because only the sender holds the private key, this proves the identity of the sender. The recipient then computes the hash independently and compares the two values to prove message integrity. Digital signatures provide authenticity/nonrepudiation and message integrity. Nonrepudiation means that the sender cannot deny that the message came from them.
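The whole sign-and-verify flow can be sketched end to end with textbook RSA and deliberately tiny, well-known demonstration numbers. This is a toy for illustration only; real signatures use 2048-bit or larger keys, padding schemes, and a vetted cryptographic library:

```python
import hashlib

# Toy RSA key pair (classic textbook values; never use in practice).
p, q = 61, 53
n = p * q               # public modulus (3233)
e = 17                  # public exponent
d = 2753                # private exponent: (e * d) mod ((p-1)*(q-1)) == 1

def _digest(message: bytes) -> int:
    # Hash the message and reduce it into the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # "Encrypt" the message digest with the sender's PRIVATE key.
    return pow(_digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # "Decrypt" the signature with the sender's PUBLIC key and
    # compare it to an independently computed digest.
    return pow(signature, e, n) == _digest(message)

msg = b"The quarterly report is attached"
sig = sign(msg)
assert verify(msg, sig)                 # authentic and intact
assert not verify(msg, (sig + 1) % n)   # a forged signature fails
```

If either the message or the signature is altered, the decrypted digest no longer matches the recomputed one, so both tampering and forgery are detected.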

Why Encrypt?

Encryption protects the confidentiality of data at rest and in transit. There are a wide variety of encryption algorithms, techniques, and products. Encryption can be applied granularly, such as to an individual file, or broadly, such as to all stored or transmitted data. Per NIST, the appropriate encryption solution for a particular situation depends primarily on the type of storage, the amount of information that needs to be protected, the environments where the storage will be located, and the threats that need to be mitigated. The three classes of storage (“at rest”) encryption techniques are full disk encryption, volume and virtual disk encryption, and file/folder encryption. The array of in-transit encryption protocols and technologies includes TLS/SSL (HTTPS), WPA2, VPN, and IPsec. Protecting information in transit safeguards the data as it traverses a wired or wireless network. The current standard specification for encrypting electronic data is the Advanced Encryption Standard (AES). All known attacks against AES’s underlying algorithm are computationally infeasible.

Regulatory Requirements

In addition to being a best practice, the need for encryption is cited in numerous federal regulations, including the Gramm-Leach-Bliley Act (GLBA) and HIPAA/HITECH. At the state level, multiple states (including Massachusetts, Nevada, and Washington) have statutes requiring encryption. Massachusetts 201 CMR 17.00 requires encryption of all transmitted records and files containing personal information that will travel across public networks, encryption of all data containing personal information to be transmitted wirelessly, and encryption of all personal information stored on laptops or other portable devices. Nevada NRS 603A requires encryption of credit and debit card data as well as encryption of mobile devices and media. Washington HB 2574 requires that personal information, including a name combined with a social security number, driver’s license number, or financial account information, be encrypted when it is transmitted or stored on the Internet.

Another example is the General Data Protection Regulation (GDPR) by the European Commission. One of the GDPR’s main goals is to strengthen and unify data protection for individuals within the European Union (EU), while addressing the export of personal data outside the EU. In short, the primary objective of the GDPR is to give citizens back control of their personal data.

What Is a “Key”?

A key is a secret code that is used by a cryptographic algorithm. It provides the instructions that result in the functional output. Cryptographic algorithms themselves are generally known; it is the secrecy of the key that provides the security. The number of possible keys that can be used with an algorithm is known as the keyspace, a large set of random values from which the algorithm chooses when it needs to make a key. The larger the keyspace, the more possibilities for different keys. For example, if an algorithm uses a key that is a string of 10 bits, then its keyspace is the set of all binary strings of length 10, which results in a keyspace size of 2^10 (or 1,024); a 40-bit key results in 2^40 possible values; and a 256-bit key results in 2^256 possible values. Longer keys are harder to break but require more computation and processing power. Two factors must be taken into consideration when deciding on the key length: the desired level of protection and the amount of resources available.
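The arithmetic is simple to check; every additional key bit doubles the keyspace:

```python
def keyspace(bits: int) -> int:
    # An n-bit key can take 2**n distinct values.
    return 2 ** bits

print(f"{keyspace(10):,}")     # 1,024
print(f"{keyspace(40):,}")     # 1,099,511,627,776
print(f"{keyspace(256):.2e}")  # about 1.16e+77
```

A 256-bit keyspace is so large that exhaustively trying every key is physically impossible, which is why key length (together with key secrecy) anchors the protection level.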

Symmetric Keys

A symmetric key algorithm uses a single secret key, which must be shared in advance and kept private by both the sender and the receiver. Symmetric keys are often referred to as shared keys. Because the keys are shared, symmetric algorithms cannot be used to provide nonrepudiation or authenticity. One of the most popular symmetric algorithms of recent years is AES. The strength of symmetric keys is that they are computationally efficient. The weaknesses are that key distribution is inherently risky (the secret must travel over a secure channel) and that the approach does not scale, because a unique key must be generated and protected for each pair of communicating parties.

Asymmetric Keys

Asymmetric key cryptography, also known as public key cryptography, uses two different but mathematically related keys known as public and private keys. Think of public and private keys as two keys to the same lock—one used to lock and the other to unlock. The private key never leaves the owner’s possession. The public key is given out freely. The public key is used to encrypt plain text or to verify a digital signature, whereas the private key is used to decrypt cipher text or to create a digital signature. Asymmetric key technologies allow for efficient, scalable, and secure key distribution; however, they are computationally resource-intensive.

What Is PKI?

Public key infrastructure (PKI) is the framework and services used to create, distribute, manage, and revoke public keys. PKI is made up of multiple components, including a certification authority (CA), a registration authority (RA), client nodes, and the digital certificate itself:

  • The certification authority (CA) issues and maintains digital certificates.

  • The registration authority (RA) performs the administrative functions, including verifying the identity of users and organizations requesting a digital certificate, renewing certificates, and revoking certificates.

  • Client nodes are interfaces for users, devices, and applications to access PKI functions, including the requesting of certificates and other keying material. They may include cryptographic modules, software, and procedures necessary to provide user access to the PKI.

  • A digital certificate is used to associate a public key with an identity. Certificates include the certificate holder’s public key, serial number of the certificate, certificate holder’s distinguished name, certificate validity period, unique name of the certificate issuer, digital signature of the issuer, and signature algorithm identifier.

FYI: Viewing a Digital Certificate

If you are using an Apple Mac operating system, certificates are stored in the Keychain Access utility. If you are using a Microsoft Windows operating system, digital certificates are kept in the Windows certificate store, which you can view with the Certificate Manager snap-in (certmgr.msc) or through your browser’s settings. Figure 10-3 shows the digital certificate for twitter.com as an example.


FIGURE 10-3 Example of a Digital Certificate

Why Protect Cryptographic Keys?

As mentioned earlier in the chapter, the usefulness of a cryptographic system is entirely dependent on the secrecy and management of the key. This is so important that NIST has published a three-part document devoted to cryptographic key management guidance. SP 800-57: Recommendation for Key Management, Part 1: General (Revision 3) provides general guidance and best practices for the management of cryptographic keying material. Part 2: Best Practices for Key Management Organization provides guidance on policy and security planning requirements for U.S. government agencies. Part 3: Application-Specific Key Management Guidance provides guidance when using the cryptographic features of current systems. In the Overview of Part 1, NIST describes the importance of key management as follows: “The proper management of cryptographic keys is essential to the effective use of cryptography for security. Keys are analogous to the combination of a safe. If a safe combination is known to an adversary, the strongest safe provides no security against penetration. Similarly, poor key management may easily compromise strong algorithms. Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of mechanisms and protocols associated with keys, and the protection afforded to the keys. All keys need to be protected against modification, and secret and private keys need to be protected against unauthorized disclosure. Key management provides the foundation for the secure generation, storage, distribution, use, and destruction of keys.”

Best practices for key management include the following:

  • The key length should be long enough to provide the necessary level of protection.

  • Keys should be transmitted and stored by secure means.

  • Key values should be random, and the full spectrum of the keyspace should be used.

  • The key’s lifetime should correspond with the sensitivity of the data it is protecting.

  • Keys should be backed up in case of emergency. However, multiple copies of keys increase the chance of disclosure and compromise.

  • Keys should be properly destroyed when their lifetime ends.

  • Keys should never be presented in clear text.

Key management policy and standards should include assigned responsibility for key management, the nature of information to be protected, the classes of threats, the cryptographic protection mechanisms to be used, and the protection requirements for the key and associated processes.

Digital Certificate Compromise

Certification Authorities (CAs) have increasingly become targets for sophisticated cyber attacks. An attacker who breaches a CA can generate and obtain fraudulent certificates and then use them to impersonate an individual or organization. In July 2012, NIST issued an ITL bulletin titled “Preparing for and Responding to Certification Authority Compromise and Fraudulent Certificate Issuance.” The bulletin primarily focuses on guidance for certification and registration authorities; however, it also includes guidance for any organization impacted by the fraud.

The built-in defense against a fraudulently issued certificate is certificate revocation. When a rogue or fraudulent certificate is identified, the CA revokes it and issues an updated certificate revocation list (CRL). Alternatively, a browser may be configured to use the Online Certificate Status Protocol (OCSP) to obtain revocation status in real time.

In Practice

Key Management Policy

Synopsis: Assign responsibility for key management and cryptographic standards.

Policy Statement:

  • The Office of Information Security is responsible for key management, including but not limited to algorithm decisions, key length, key security and resiliency, requesting and maintaining digital certificates, and user education. The Office of Information Security will publish cryptographic standards.

  • The Office of Information Technology is responsible for implementation and operational management of cryptographic technologies.

  • Without exception, encryption is required whenever protected or confidential information is transmitted externally. This includes email and file transfer. The encryption mechanism must be NIST-approved.

  • Without exception, all portable media that stores or has the potential to store protected or confidential information must be encrypted. The encryption mechanism must be NIST-approved.

  • Data at rest must be encrypted regardless of media when required by state and/or federal regulation or contractual agreement.

  • At all times, passwords and PINs must be stored and transmitted as cipher text.

FYI: Small Business Note

Encryption keeps valuable data safe. Every organization, irrespective of size, should encrypt the following if there is any chance that legally protected or company confidential data will be stored or transmitted:

  • Mobile devices, such as laptops, tablets, smartphones

  • Removable media, such as USB drives and backup tapes

  • Internet traffic, such as file transfer or email

  • Remote access to the company network

  • Wireless transmission

When creating a secret key, use a long, random string of numbers, letters, and special characters.
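In Python, for example, such a key can be generated with the standard secrets module, which draws from the operating system's cryptographically secure random number generator (a sketch; adjust the length to your own policy):

```python
import secrets
import string

# A 32-character random secret containing letters, digits, and
# special characters, drawn from a cryptographically secure source.
alphabet = string.ascii_letters + string.digits + string.punctuation
secret_key = "".join(secrets.choice(alphabet) for _ in range(32))

# For raw binary keys (for example, a 256-bit AES key), use bytes instead:
aes_key = secrets.token_bytes(32)   # 32 bytes = 256 bits
```

Avoid the general-purpose random module for key material; it is not designed to be unpredictable.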

Summary

Whether they are developed in-house, purchased, or open source, companies rely on line-of-business applications. This reliance implies that the availability of those solutions must be protected to avoid severe losses in revenue; the integrity must be protected to avoid unauthorized modification; and the confidentiality must be protected to honor the public trust and maintain compliance with regulatory requirements.

Custom applications should be built with security in mind from the start. Adopting an SDLC methodology that integrates security considerations ensures that this objective is met. The SDLC provides a structured and standardized process for all phases of any system development effort. During the initiation phase, the need for a system is expressed and the purpose of the system is documented. During the development/acquisition phase, the system is designed, purchased, programmed, developed, or otherwise constructed. During the implementation phase, the system is tested, modified if necessary, retested if modified, and finally accepted. During the operational phase, the system is put into production. Monitoring, auditing, and testing should be ongoing. Activities conducted during the disposal phase ensure the orderly termination of the system, safeguarding vital system information, and migrating data processed by the system to a new system.

SDLC principles extend to COTS (commercial off-the-shelf) software as well as open source software. It is important to recognize the stages of software releases. The alpha phase is the initial release of software for testing. Beta phase indicates that the software is feature-complete and the focus is usability testing. A release candidate (RC) is a hybrid of a beta and a final release version. General availability or “go live” is when the software has been made commercially available and is in general distribution. Alpha, beta, and RCs should never be implemented in a production environment. Over the course of time, publishers may release updates and security patches. Updates generally include enhancements and new features. Updates should be thoroughly tested before release to a production environment. Even tested applications should have a rollback strategy in case the unexpected happens. Live data should never be used in a test environment; instead, de-identified or dummy data should be used.

The Open Web Application Security Project (OWASP) is an open community dedicated to enabling organizations to develop, purchase, and maintain applications that can be trusted.

The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. Throughout recent years, OWASP rated injection flaws as the number-one software and database security issue. Injection is when untrusted data is sent to an interpreter as part of a command or query. Input and output validation minimizes injection vulnerabilities. Input validation is the process of validating all the input to an application before using it. This includes correct syntax, length, characters, and ranges. Output validation is the process of validating (and in some cases, masking) the output of a process before it is provided to the recipient.

Data at rest and in transit may require cryptographic protection. Three distinct goals are associated with cryptography: Data can be encrypted, which provides confidentiality. Data can be hashed, which provides integrity. Data can be digitally signed, which provides authenticity/nonrepudiation and integrity. Also, data can be encrypted and digitally signed, which provides for confidentiality, authentication, and integrity. Encryption is the conversion of plain text into what is known as cipher text, using an algorithm called a cipher. Decryption, the inverse of encryption, is the process of turning cipher text back into readable plain text. Hashing is the process of creating a fixed-length value known as a fingerprint that represents the original text. A digital signature is a hash value (also known as a message digest) that has been encrypted with the sender’s private key.

A key is a value that specifies what part of the cryptographic algorithm to apply, in what order, and what variables to input. The keyspace is a large set of random values that the algorithm chooses from when it needs to make a key. Symmetric key algorithms use a single secret key, which must be shared in advance and kept private by both the sender and the receiver. Asymmetric key cryptography, also known as public key cryptography, uses two different but mathematically related keys known as public and private keys. A digital certificate is used to associate a public key with an identity.

A public key infrastructure (PKI) is used to create, distribute, manage, and revoke asymmetric keys. A certification authority (CA) issues and maintains digital certificates. A registration authority (RA) performs the administrative functions, including verifying the identity of users and organizations requesting a digital certificate, renewing certificates, and revoking certificates. Client nodes are interfaces for users, devices, and applications to access PKI functions, including the requesting of certificates and other keying material. They may include cryptographic modules, software, and procedures necessary to provide user access to the PKI.

Information Systems Acquisition, Development, and Maintenance (ISADM) policies include SDLC, Application Development, and Key Management.

Test Your Skills

Multiple Choice Questions

1. When is the best time to think about security when building an application?

A. Build the application first and then add a layer of security.

B. From the planning and design phase and through the whole development life cycle.

C. Start the application development phase, and when you reach the halfway point, you have enough of a basis to look at to decide where and how to set up the security elements.

D. No security needs to be developed inside of the code itself. It will be handled at the operating system level.

2. Which of the following statements best describes the purpose of the systems development life cycle (SDLC)?

A. The purpose of the SDLC is to provide a framework for system development efforts.

B. The purpose of the SDLC is to provide a standardized process for system development efforts.

C. The purpose of the SDLC is to assign responsibility.

D. All of the above.

3. In which phase of the SDLC is the need for a system expressed and the purpose of the system documented?

A. The initiation phase

B. The implementation phase

C. The operational phase

D. The disposal phase

4. In which phase of the SDLC should design reviews and system tests be performed to ensure that all required security specifications are met?

A. The initiation phase

B. The implementation phase

C. The operational phase

D. The disposal phase

5. Which of the following statements is true?

A. Retrofitting security controls to an application system after implementation is normal; this is when security controls should be added.

B. Retrofitting security controls to an application system after implementation is sometimes necessary based on testing and assessment results.

C. Retrofitting security controls to an application system after implementation is always a bad idea.

D. Retrofitting security controls to an application system after implementation is not necessary because security is handled at the operating system level.

6. Which phase of software release indicates that the software is feature-complete?

A. Alpha

B. Beta

C. Release candidate

D. General availability

7. Which phase of software release is the initial release of software for testing?

A. Alpha

B. Beta

C. Release candidate

D. General availability

8. Which of the following statements best describes the difference between a security patch and an update?

A. Patches provide enhancements; updates fix security vulnerabilities.

B. Patches should be tested; updates do not need to be tested.

C. Patches fix security vulnerabilities; updates add features and functionality.

D. Patches cost money; updates are free.

9. The purpose of a rollback strategy is to ______________.

A. make backing up easier

B. return to a previous stable state in case problems occur

C. add functionality

D. protect data

10. Which of the following statements is true?

A. A test environment should always be exactly the same as the live environment.

B. A test environment should be as cheap as possible, no matter what.

C. A test environment should be as close to the live environment as possible.

D. A test environment should include live data for true emulation of the real-world setup.

11. Which of the following statements best describes when dummy data should be used?

A. Dummy data should be used in the production environment.

B. Dummy data should be used in the testing environment.

C. Dummy data should be used in both test and production environments.

D. Dummy data should not be used in either test or production environments.

12. Which of the following terms best describes the process of removing information that would identify the source or subject?

A. Detoxification

B. Dumbing down

C. Development

D. De-identification

13. Which of the following terms best describes the open framework designed to help organizations implement a strategy for secure software development?

A. OWASP

B. SAMM

C. NIST

D. ISO

14. Which of the following statements best describes an injection attack?

A. An injection attack occurs when untrusted data is sent to an interpreter as part of a command.

B. An injection attack occurs when trusted data is sent to an interpreter as part of a query.

C. An injection attack occurs when untrusted email is sent to a known third party.

D. An injection attack occurs when untrusted data is encapsulated.

15. Input validation is the process of ___________.

A. masking data

B. verifying data syntax

C. hashing input

D. trusting data

16. Which of the following types of data change as updates become available?

A. Moving data

B. Mobile data

C. Dynamic data

D. Delta data

17. The act of limiting the characters that can be entered into a web form is known as ___________.

A. output validation

B. input validation

C. output testing

D. input testing

18. Which statement best describes a distinguishing feature of cipher text?

A. Cipher text is unreadable by a human.

B. Cipher text is unreadable by a machine.

C. Both A and B.

D. Neither A nor B.

19. Which term best describes the process of transforming plain text to cipher text?

A. Decryption

B. Hashing

C. Validating

D. Encryption

20. Which of the following statements is true?

A. Digital signatures guarantee confidentiality only.

B. Digital signatures guarantee integrity only.

C. Digital signatures guarantee integrity and nonrepudiation.

D. Digital signatures guarantee nonrepudiation only.

21. Hashing is used to ensure message integrity by ____________.

A. comparing hash values

B. encrypting data

C. encapsulating data

D. comparing algorithms and keys

22. When unauthorized data modification occurs, which of the following tenets of security is directly being threatened?

A. Confidentiality

B. Integrity

C. Availability

D. Authentication

23. Which of the following statements about encryption is true?

A. All encryption methods are equal: Just choose one and implement it.

B. The security of the encryption relies on the key.

C. Encryption is not needed for internal applications.

D. Encryption guarantees integrity and availability, but not confidentiality.

24. Which of the following statements about a hash function is true?

A. A hash function takes a variable-length input and turns it into a fixed-length output.

B. A hash function takes a variable-length input and turns it into a variable-length output.

C. A hash function takes a fixed-length input and turns it into a fixed-length output.

D. A hash function takes a fixed-length input and turns it into a variable-length output.

25. Which of the following values represents the number of available values in a 256-bit keyspace?

A. 2 × 2^256

B. 2 × 256

C. 256^2

D. 2^256

26. Which of the following statements is not true about a symmetric key algorithm?

A. Only one key is used.

B. It is computationally efficient.

C. The key must be publicly known.

D. AES is widely used.

27. The contents of a __________ include the issuer, subject, valid dates, and public key.

A. digital document

B. digital identity

C. digital thumbprint

D. digital certificate

28. Two different but mathematically related keys are referred to as ___________.

A. public and private keys

B. secret keys

C. shared keys

D. symmetric keys

29. In cryptography, which of the following is not publicly available?

A. Algorithm

B. Public key

C. Digital certificate

D. Symmetric key

30. A hash value that has been encrypted with the sender’s private key is known as a _________.

A. message digest

B. digital signature

C. digital certificate

D. cipher text

Exercises

Exercise 10.1: Building Security into Applications
  1. Explain why security requirements should be considered at the beginning stages of a development project.

  2. Who is responsible for ensuring that security requirements are defined?

  3. In which phases of the SDLC should security be evaluated?

Exercise 10.2: Understanding Input Validation
  1. Define input validation.

  2. Describe the type of attack that is related to poor input validation.

  3. In the following scenario, what should the input validation parameters be?

    A class registration web form requires that students enter their current year. The entry options are numbers from 1 to 4 that represent the following: freshmen=1, sophomores=2, juniors=3, and seniors=4.

Exercise 10.3: Researching Software Releases
  1. Find an example of commercially available software that is available as either a beta version or a release candidate.

  2. Find an example of open source software that is available as either an alpha, beta, or release candidate.

  3. For each, does the publisher include a disclaimer or warning?

Exercise 10.4: Learning About Cryptography
  1. Access the National Security Agency’s CryptoKids website.

  2. Play at least two of the games.

  3. Explain what you learned.

Exercise 10.5: Understanding Updates and Systems Maintenance
  1. Microsoft bundles feature and function updates and refers to them as “service packs.” Locate a recently released service pack.

  2. Does the service pack have a rollback option?

  3. Explain why a rollback strategy is important when upgrading an operating system or application.

Projects

Project 10.1: Creating a Secure App

You have obtained financing to design a mobile device app that integrates with your school’s student portal so that students can easily check their grades from anywhere.

  1. Create a list of security concerns. For each concern, indicate if the issue is related to confidentiality, integrity, availability (CIA), or any combination thereof.

  2. Create a project plan using the SDLC framework as a guide. Describe your expectations for each phase. Be sure to include roles and responsibilities.

  3. Research and recommend an independent security firm to test your application. Explain why you chose them.

Project 10.2: Researching the Open Web Application Security Project (OWASP)

The OWASP Top Ten has become a must-read resource. Go to https://www.owasp.org and access the current OWASP Top Ten Web Application report.

  1. Read the entire report.

  2. Write a memo addressed to Executive Management on why they should read the report. Include in your memo what OWASP means by “It’s About Risks, Not Weaknesses” on page 20.

  3. Write a second memo addressed to developers and programmers on why they should read the report. Include in your memo references to other OWASP resources that would be of value to them.

Project 10.3: Researching Digital Certificates

You have been tasked with obtaining an extended validation SSL digital certificate for an online shopping portal.

  1. Research and choose an issuing CA. Explain why you chose the specific CA.

  2. Describe the process and requirements for obtaining a digital certificate.

  3. Who in the organization should be tasked with installing the certificate and why?

References

Regulations Cited

“201 CMR 17.00: Standards for the Protection of Personal Information of Residents of the Commonwealth,” official website of the Office of Consumer Affairs & Business Regulation (OCABR), accessed 04/2017, www.mass.gov/ocabr/docs/idtheft/201cmr1700reg.pdf.

“HIPAA Security Rule,” official website of the Department of Health and Human Services, accessed 04/2017, https://www.hhs.gov/hipaa/for-professionals/security/index.html.

State of Nevada, “Chapter 603A—Security of Personal Information,” accessed 04/2017, https://www.leg.state.nv.us/NRS/NRS-603A.html.

State of Washington, “HB 2574, An Act Relating to Securing Personal Information Accessible Through the Internet,” accessed 04/2017, http://apps.leg.wa.gov/documents/billdocs/2007-08/Pdf/Bills/House%20Bills/2574.pdf.

Other References

“Certificate,” Microsoft Technet, accessed 04/2017, https://technet.microsoft.com/en-us/library/cc700805.aspx.

Santos, Omar, Joseph Muniz, and Stefano De Crescenzo, CCNA Cyber Ops SECFND 210-250 Official Cert Guide, Cisco Press: Indianapolis, 2017.

Kak, Avi, “Lecture 15: Hashing for Message Authentication, Lecture Notes on Computer and Network Security,” Purdue University, accessed 04/2017, https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture15.pdf.

“OpenSAMM: Software Assurance Maturity Model,” accessed 04/2017, www.opensamm.org.

“RFC 6960, X.509 Internet Public Key Infrastructure Online Certificate Status Protocol—OCSP,” June 2013, Internet Engineering Task Force, accessed 04/2017, https://tools.ietf.org/html/rfc6960.

“RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile,” May 2008, Internet Engineering Task Force, accessed 04/2017, https://tools.ietf.org/html/rfc5280.
