Chapter 10

Software Development Security

IN THIS CHAPTER

check Applying security throughout the software development lifecycle

check Enforcing security controls

check Protecting development environments

check Assessing software security

check Reducing risk by applying safe coding practices

check Sizing up the security impact of off-the-shelf software

You must understand the principles of software security controls, software development, and software vulnerabilities. Software and data are the foundation of information processing; software can’t exist apart from software development. An understanding of the software development process is essential for the creation and maintenance of software that’s appropriate, reliable, and secure. This domain represents 10 percent of the CISSP certification exam.

Understand and Integrate Security in the Software Development Lifecycle

The software development lifecycle (SDLC, also known as the systems development lifecycle and the software development methodology [SDM]) refers to all the steps required to develop software and systems from conception through implementation, support, and (ultimately) retirement. In other words, the entire life of software and systems, from birth to death, and everything in between (like adolescence, going off to college, getting married, and retirement)!

The lifecycle is a development process designed to achieve two objectives: software and systems that perform their intended functions correctly and securely, and a development or integration project that’s completed on time and on budget.

tip As we point out numerous times in this chapter, the term software development lifecycle is giving way to systems development lifecycle. This is because the process applies to more than just software development; it more broadly applies to systems development. This can include networks, servers, database management systems, and more.

Development methodologies

Popular development methodologies include waterfall and Agile, as discussed in the following sections.

Waterfall

In the waterfall model of software (or system) development, the stages in the lifecycle progress like a series of waterfalls (see Figure 10-1): each stage is performed sequentially, one at a time. Typically, these stages consist of the following:

  • Conceptual definition. This is a high-level description of the software or system deliverable. It generally contains no details — it’s the sort of description that you want to give to the business and finance people (those folks who fund your projects and keep you employed). You don’t want to scare them with details. And they probably don’t understand them anyway!
  • Functional requirements. These are the required characteristics of the software or system deliverable. (Basically, a list.) Rather than a design, the functional requirements are a collection of things that the software or system must do. Although functional requirements don’t give you design-level material, this description contains more details than the conceptual definition. Functional requirements usually include a test plan, which is a detailed list of software or system functions and features that must be tested. The test plan describes both how each test should be performed and the expected results. Generally, you have at least one test in the test plan for each requirement in the functional requirements. Functional requirements also must contain expected security requirements for the software or system.
  • Functional specifications. These can be considered the software development department’s version of functional requirements. Rather than a list of have-to-have and nice-to-have items, the functional specification is more of a what-it-is (we hope) or a what-we-think-we-can-build statement (to this point, the MoSCoW method can be used to prioritize requirements). Functional specifications aren’t quite a design, but rather a list of characteristics that the developers and engineers think they can create in the real world. From a security perspective, the functional specifications for an operating system or application should contain all the details about authentication, authorization, access control, confidentiality, transaction auditing, integrity, and availability.
  • Design. This is the process of developing the highest-detail designs. In the application software world, design includes entity-relationship diagrams, data-flow diagrams, database schemas, over-the-wire protocols, and more. For networks, this will include the design of local area networks (LANs), wide area networks (WANs), subnets, and the devices that tie them all together and provide needed security.
  • Design review. This is the last step in the design process, in which a group of experts (some are on the design team and some aren’t) examine the detailed designs. Those not on the design team give the design a set of fresh eyes and a chance to catch a design flaw or two.
  • Coding. This is the phase that the software developers and engineers yearn for. Most software developers (and system or network engineers building their own things) would prefer to skip all of the prior steps described in the preceding sections and start coding right away, even before the formal requirements are known! It’s scary to think about how much of the world’s software was created with coding as the first activity. (Would you fly in an airplane that the machinists built before the designers could produce engineering drawings? Didn’t think so.) Coding and systems development usually include unit testing, which is the process of verifying all the modules and other individual pieces that are built in this phase.
  • Code review. As in the design phase, the coding phase for software development ends with a code review, in which developers examine each other’s program code and get into philosophical arguments about levels of indenting and the correct use of curly braces. Seriously, though, during code review, engineers can discover mistakes that would cost you a lot of money if you had to fix them later in the implementation process or in maintenance mode. There are several good static and dynamic code analysis tools that you can use to automatically identify security vulnerabilities and other errors in software code. Many organizations use these tools to ferret out programming errors that would otherwise result in vulnerabilities that attackers might exploit. You can review code review in Chapter 8!
  • Configuration review. For systems development, such as operating systems and networks, the configuration review phase involves the examination of system or device configuration checks and similar activities. This is an important step that helps to verify that individual components were built properly. It also saves time in the long run: fixing errors found at this stage helps subsequent steps go more smoothly, and any errors that do surface later are easier to troubleshoot because the individual components are already known to be configured correctly.
  • Unit test. When portions of an application or other system have been developed, it’s often possible to test the pieces separately. This is called unit testing. Unit testing allows a developer, engineer, or tester to verify the correct functioning of individual modules in an application or system. Unit testing is usually done during coding and other component development. It doesn’t always show up as a separate step in process diagrams. (A brief unit-test sketch appears after Figure 10-1.)
  • System test. A system test occurs when all the components of the entire system have been assembled, and the entire system is tested from end to end. The test plan that was developed in the functional requirements step is carried out here. Of course, the system test includes testing all the system’s security functions, because the program’s designers included those functions in the test plan (right?). You can find some great tools to rigorously test for vulnerabilities in software applications, as well as operating systems, database management systems, network devices, and other things. Many organizations consider the use of such tools a necessary step in system tests, so that they can ensure that the system has no exploitable vulnerabilities.
  • Certification & accreditation. Certification is the formal evaluation of the application or system: Every intended feature performs as planned, and the system is declared fully functional. Accreditation means that the powers that be have said that it’s okay to put the system into production. That could mean to offer it for sale, build it and ship it, or whatever “put into production” means in your organization.

    tip (ISC)2 now offers the Certified Authorization Professional (CAP) certification. You might consider it, if you want to take accreditation to the next level in your career.

  • Implementation. When all testing and any required certifications and accreditations are completed, the software can be released to production. This usually involves a formal mechanism whereby the software developers create a “release package” to operations. The release package contains the new software, as well as any instructions for operations personnel so that they know how to implement it and verify that it was implemented correctly. An implementation plan usually also includes “backout” instructions used to revert the software (and any other changes) back to its pre-change state.
  • Maintenance. At this point, the software or system is fully functional, in production, and doing what it was designed to do. The maintenance phase is the system’s “golden years”. Then, customers start putting in change requests because — well, because that’s what people do! Change management and configuration management are the processes used to maintain control of (and document all changes to) the software or system over its lifetime. Change and configuration management are both discussed later in this chapter!

    warning You need good documentation, in the form of those original specification and design documents, because the developers who wrote this software or built the system have probably moved on to some other cool project, or even another organization … and the new guys and gals are left to maintain it.


FIGURE 10-1: A typical system development model takes a project from start to finish.
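
To make the unit-testing idea concrete (see the Unit test step in the preceding list), here’s a minimal sketch in Python using the standard library’s unittest module. The discount_price function and its rules are hypothetical, invented purely for illustration; the point is that each module gets its own tests, including tests for invalid input.

import unittest

def discount_price(price, percent):
    """Apply a percentage discount (a hypothetical module under test)."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("price must be >= 0 and percent in 0..100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_zero_discount(self):
        self.assertEqual(discount_price(19.99, 0), 19.99)

    def test_rejects_bad_input(self):
        # Security-minded unit tests exercise invalid input, too.
        with self.assertRaises(ValueError):
            discount_price(-5.0, 10)
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()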

Agile

Agile development involves a more iterative, less formal approach to software and systems development than more traditional methodologies, such as the waterfall method (discussed in the preceding section). As its name implies, agile development focuses on speed in support of rapidly, and often constantly, evolving business requirements.

The Manifesto for Agile Software Development (www.agilemanifesto.org) describes the underlying philosophy of agile development as follows:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

The manifesto doesn’t disregard the importance of the items on the right (such as processes and tools), but it places greater value on the items on the left, listed before the word over in each statement.

Specific implementations of agile development take many forms. One common approach is the Scrum methodology. Typical activities and artifacts in the Scrum methodology include

  • Product backlog. This is a prioritized list of customer requirements, commonly known as user stories, that is maintained by the product owner. The product owner is a business (or customer) representative who communicates with the scrum team on behalf of the project stakeholders.
  • User stories. These are the formal requirements written as brief, customer-centric descriptions of the desired feature or function. User stories usually take the form “As a [role], I want to [feature/function] so that I can [purpose]”. For example, “As a customer service representative, I want to be able to view full credit card information so that I can process customer refunds.”

    warning The user story in the preceding example should be raising all sorts of red flags and sounding alarms in your head! This example illustrates why security professionals need to be involved in the development process (particularly when agile development methods are used, in which requirements are developed “on the fly” and may not be well thought out or part of a well-documented and comprehensive security strategy). The user in this example may simply be trying to perform a legitimate job function, and may have a limited understanding of the potential security risks this request introduces. If the developer is not security-focused and doesn’t challenge the requirement, the feature may be delivered as requested. In the developer’s mind, a feature was rapidly developed as requested and delivered to the customer error-free, but major security risks may have been unintentionally and unwittingly made an inherent part of the software! That may mean that someone in security (maybe you!) needs to attend development meetings to be sure risky features aren’t being developed.

  • Sprint planning. During the first two hours of sprint planning, the entire team meets and selects the product backlog items they believe they can deliver during the upcoming sprint (also known as an iteration), typically a two-week timeboxed cycle. During the next two hours of the sprint planning meeting (or event), the development team breaks down the product backlog items (selected during the first two hours) into discrete tasks and plans the work that will be required during the sprint (including who will do what).
  • Daily standup. The team members hold a daily 15-minute standup (or scrum) meeting throughout the two-week sprint during which each team member answers the following three questions:

    • What did I accomplish yesterday?
    • What will I accomplish today?
    • What obstacles or issues exist that may prevent me from meeting the sprint goal?

    The daily standup is run by the scrum master, who is responsible for tracking (and reporting) the sprint’s progress and resolving any obstacles or issues that are identified during the daily standup.

  • Sprint review and retrospective. At the end of each two-week sprint, the team holds a sprint review meeting (typically, two hours) with the product owner and stakeholders to

    • Present (or demonstrate) the work that was completed during the sprint.
    • Review any work that was planned, but not completed, during the sprint.

    The sprint retrospective is typically a 90-minute meeting. The team identifies what went well during the sprint, and what can be improved in the next sprint.

warning The preceding scrum process is a very high-level overview of one possible Scrum methodology. There are as many iterations of agile software development methods as there are iterations of software development! For a more complete discussion of Agile and Scrum methodologies, we recommend Agile Project Management For Dummies and Scrum For Dummies, both by Mark Layton! Another thing you can do is perform an Internet search on “pigs and chickens” to learn about the folklore behind the Scrum methodology. You’ll probably find it interesting. Make sure you find the accompanying joke about the pig and the chicken who discussed opening a restaurant together.

Security concerns to be addressed within any agile development process can include a lack of formal documentation or comprehensive planning. In more traditional development approaches, such as waterfall, extensive upfront planning is done before any actual development work begins. This planning can include creating formal test acceptance criteria, security standards, design and interface specifications, detailed frameworks and modeling, and certification and accreditation requirements. The general lack of such formal documentation and planning in the agile methodology isn’t a security issue itself, but it does mean that security needs to be “front of mind” for everyone involved in the agile development process throughout the lifecycle of the project.

Maturity models

Organizations that need to understand the quality of their software and systems development processes and practices can benchmark their SDLC by measuring its maturity. There are models available for measuring software and systems development maturity, including:

  • Capability Maturity Model Integration (CMMI). By far the most popular model for measuring software development maturity, the CMMI is required by many U.S. government agencies and contractors. The model defines five levels of maturity:

    • Initial. Processes are chaotic and unpredictable, poorly controlled, and reactive.
    • Managed. Processes are characterized for projects, but are still reactive.
    • Defined. Processes are defined (written down) and more proactive.
    • Quantitatively managed. Processes are defined and measured.
    • Optimizing. Processes are measured and improved.

    Information about the CMMI is available at http://isaca.org.

  • Software Assurance Maturity Model (SAMM). This model is an open framework that is geared towards organizations that want to ensure that development projects include security features.

    More information about SAMM is available at www.opensamm.org.

  • Building Security In Maturity Model (BSIMM). This model is used to measure the extent to which security is included in software development processes. This model has four domains:

    • Governance
    • Intelligence
    • Secure Software Development Lifecycle (SSDL) Touchpoints
    • Deployment

    Information is available from www.bsimm.com.

  • Agile Maturity Model (AMM). This is a software process improvement framework for organizations that use Agile software development processes. More information about AMM is available here: www.researchgate.net/publication/45227382_Agile_Maturity_Model_(AMM)_A_Software_Process_Improvement_framework_for_Agile_Software_Development_Practices.

Organizations can either perform self-assessments or employ outside experts to measure their development maturity. Some opt for outside experts as a way of instilling confidence for customers.

Operation and maintenance

Software and systems that have been released to operations become a part of IT operations and its processes. Several operational aspects come into play, including:

  • Access management. If the application or system uses its own user access management, then the person or team that fulfills access requests will do so for the application.
  • Event management. The application or system will be writing entries to one or more audit logs or audit logging systems. Either personnel will review these logs, or (better) these logs will be tied to a security information and event management (SIEM) system to notify personnel of actionable events. (A brief logging sketch follows this list.)
  • Vulnerability management. Periodically, personnel will test the application or system to see whether it contains security defects that could lead to a security breach. The types of tests that may be employed include security scans, vulnerability assessments, and penetration tests. For software applications, tests could also include static and dynamic code reviews.
  • Performance management. The application or system may be writing performance-related entries into a logging system, or external tools may be used to measure the response time of key system functions. This helps ensure that the system is healthy, usable, and not consuming excessive or inappropriate resources.
  • Audits. To the extent that an application or system is in scope for security or privacy audits, operational aspects of an application or system will be examined by internal or external auditors to ensure that the application or system is being properly managed and that it is operating correctly. This topic is expanded later in this chapter.
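
To illustrate the event management bullet above, here’s a minimal sketch of an application emitting structured, security-relevant log entries that a SIEM could ingest. The event names and record fields are hypothetical assumptions; Python’s standard logging module stands in for a real log pipeline, which would typically forward entries to syslog or a log shipper.

import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("app.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_security_event(event_type, user, outcome, **details):
    """Emit one structured audit record (hypothetical schema)."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event_type,   # for example, "login" or "privilege_change"
        "user": user,
        "outcome": outcome,    # "success" or "failure"
        "details": details,
    }
    # One JSON object per line is easy for a SIEM or log shipper to parse.
    logger.info(json.dumps(record))

# A failed login is the kind of actionable event a SIEM rule
# might alert on after repeated occurrences.
log_security_event("login", user="mwilson", outcome="failure",
                   source_ip="203.0.113.7", reason="bad_password")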

From the time that a software application or system is placed into production, development will continue, but typically at a slower pace. During this phase, additional development tasks may be needed, such as

  • Minor feature updates
  • Bug fixes
  • Security patching and updating
  • Custom modifications

Finally, at the end of a system’s service life, it will be decommissioned. This typically involves one of three outcomes:

  • Migration to a replacement system. Here, data in the old system may be migrated to a replacement system to preserve business records so that transaction history during the era of the old system may be viewed in its replacement.
  • Co-existence with replacement system. Here, the old system may be modified so that it operates in a “read only” mode, permitting users to view data and records in the old system. Organizations that take this path will keep an old system for a period of a few months to a year or longer. This option usually is taken when the cost of migrating data to the new system exceeds the cost of keeping the old system running.
  • Shutdown. In some instances, an organization will discontinue use of the system. Here, the business records may be archived for long-term storage if requirements or regulations dictate.

tip The operations and maintenance activities here may be a part of an organization’s DevOps processes. We discuss this later in this chapter.

Change management

Change management is the formal business process that ensures all changes made to a system receive formal review and approval from all stakeholders before implementation. Change management gives everyone a chance to voice their opinions and concerns about any proposed change, so that the change goes as smoothly as possible, with no surprises or interruptions in service.

Change management is discussed in greater detail in Chapter 9.

remember The process of approving modifications to a production environment is called change management.

warning Don’t confuse the concept of change management with configuration management (discussed later in this chapter). The two are distinctly different from one another.

Integrated product team

DevOps is a popular trend that represents the fusion of Development and Operations. It extends Agile development practices to the entire IT organization. Perhaps not as exciting as an Asian-Italian fusion restaurant serving up a gourmet sushi calzone, but hey, this is software and systems development, not fine dining! (Sorry.)

The goal of DevOps is to improve communication and collaboration between software/systems developers and IT operations teams, in order to facilitate the rapid and efficient deployment of software and infrastructure.

However, as with Agile development methodologies, there are inherent security challenges that must be addressed in a DevOps environment. Traditional IT organizations and security best practices have maintained strict separation between development and production environments. While these distinctions remain in a DevOps environment, they are a little less absolute. Therefore, this situation can introduce additional risks to the production environment that must be adequately addressed with proper controls and accountability throughout the IT organization.

To learn more about DevOps, pick up a copy of either The Phoenix Project or The Visible Ops Handbook, both written by Kevin Behr, Gene Kim, and George Spafford. They are considered must-reads for many IT organizations.

Identify and Apply Security Controls in Development Environments

Development environments are the collection of systems and tools used to develop and test software and systems prior to their release to production. Particular care is required in securing development environments, to ensure that security vulnerabilities and back doors are not introduced into the software that is created there. These safeguards also protect source code from being stolen by adversaries.

Security of the software environments

To ensure the security of the software programs produced by developers and development teams, the software development environment itself must be protected. Controls to be considered include

  • Separate systems for development, testing, quality assurance (QA), and user acceptance testing (UAT). These activities should take place in separate environments, so that various activities will not interfere with each other. In some cases, there may be restrictions on which developers (as well as testers and users) are permitted to access which environments. Also, the workstations used for writing and testing code should not be the same ones used for routine office functions, such as email and accessing internal and Internet applications.
  • Don’t use live production data in development, test, or QA environments. In many applications, production instances contain sensitive data such as personally identifiable information (PII) and other sensitive or personal information. Instead, tools to anonymize such data can be used to provide realistic data in test environments without risking exposing such data. (A minimal anonymization sketch appears after this list.)
  • Isolate from the Internet. Because development systems aren’t used for web access or email, there should be little objection to this. Security patches can be pushed from internal systems, instead of retrieving them from the Internet (that’s the preferred practice anyway).
  • Event logging. Logging of events at the OS level, as well as at the development level, is used to troubleshoot problems, as well as to give auditors a running history of developer actions.
  • Source code version control. All changes to source code need to be managed through a modern source code management system that has check-out, check-in, rollback, locking, access control, and logging functions. This helps to ensure that all access to, and modification of, source code is logged. These systems are also used to restrict access to highly sensitive code, such as that used for authentication and session management, so that it is accessible by as few developers as feasible.
  • Remove administrative privileges. The user account used for coding and testing should not be a local or domain administrator.

    Some developers may bristle at this one; they argue that they can’t perform testing like software installation and OS level debugging. No problem. Give them another machine (or virtual machine) with admin privileges for that. But then, don’t let them use that system for routine office tasks such as email and Internet access.

  • Use standard development tools. All developers on the same team or project should be using the same IDE (integrated development environment) or whatever coding, testing, and compiling tools they use. Developers should not be permitted to “do their own thing”, as this may introduce compromised tools or libraries that could leak or inject back doors into the software they’re developing.
  • Use only company-owned systems. Developers should not be developing on BYOD (bring your own device) systems. Instead, they should be using company-acquired and -supported systems, to ensure that these systems are fully protected from malware and tampering.

These safeguards should be applied to both developer workstations and centralized build and test systems.
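
As one illustration of the production-data rule above, here’s a minimal anonymization sketch, assuming a hypothetical customer record layout. It replaces identifying fields with stable, irreversible tokens while preserving non-identifying fields so tests stay realistic; real environments would typically use purpose-built data-masking tools.

import hashlib
import random

def pseudonymize(value, salt="test-env-salt"):
    """Replace a real identifier with a stable, irreversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def anonymize_record(prod_record):
    """Return a test-safe copy of a (hypothetical) production record."""
    return {
        "customer_id": pseudonymize(prod_record["customer_id"]),
        "name": "Test User " + str(random.randint(1000, 9999)),
        "email": pseudonymize(prod_record["email"]) + "@example.test",
        # Preserve non-identifying fields so tests stay realistic.
        "plan": prod_record["plan"],
        "balance": prod_record["balance"],
    }

prod = {"customer_id": "C-100482", "name": "Jane Doe",
        "email": "jane@example.com", "plan": "gold", "balance": 125.50}
print(anonymize_record(prod))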

Configuration management as an aspect of secure coding

Configuration management is often confused with change management, but actually has little to do with approvals and everything to do with recording all the facts about the change. Configuration management captures actual changes to software code, end-user documentation, operations documentation, developer tools and settings, program build tools and settings, disaster recovery planning documentation, and other details of the change. Configuration management archives technical details for each change and release of the system, as well as for each instance of the software, if more than one instance exists.

Configuration management is also entirely relevant and applicable to system environments, including but not limited to operating systems, database management systems, middleware, and all types of network devices. When changes are made via the change management process, the details of all configuration changes are recorded in a configuration management database (CMDB). This can help engineers troubleshoot problems later by giving them the ability to easily know both current (expected) and prior configuration settings in all these types of systems and devices.

remember Change management and configuration management address two different aspects of change in a system’s maintenance mode:

  • Change management is the why.
  • Configuration management is the what.

remember The process of managing the changes being made to systems is called change management. The process of recording modifications to a production environment is called configuration management.
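
A minimal sketch of the distinction, using hypothetical record layouts: the change management record captures the why (the reason and approvals), while the configuration management database (CMDB) entry captures the what (the exact before and after settings).

from datetime import datetime, timezone

# Hypothetical change management record: the "why" and the approvals.
change_request = {
    "change_id": "CHG-2041",
    "reason": "Enable TLS 1.2 minimum on the web tier",
    "approved_by": ["ops_manager", "security_officer"],
    "scheduled": "2025-06-01T02:00:00Z",
}

# Hypothetical CMDB entry: the "what", with exact settings before and after.
cmdb_entry = {
    "change_id": "CHG-2041",            # ties the two records together
    "config_item": "web-server-03",
    "setting": "ssl_min_protocol",
    "before": "TLSv1.0",
    "after": "TLSv1.2",
    "recorded": datetime.now(timezone.utc).isoformat(),
}

print("Why: ", change_request["reason"])
print("What:", cmdb_entry["setting"],
      cmdb_entry["before"], "->", cmdb_entry["after"])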

Security of code repositories

During and after development, program source code resides in a central source code repository. Source code must be protected from both unauthorized access and unauthorized changes. Controls to enforce this protection include

  • System hardening. Intruders must be kept out of the OS itself. This includes all of the usual system hardening techniques and principles for servers, as discussed in Chapter 5.
  • System isolation. The system should be reachable by only authorized personnel, and no other persons. It should not be reachable from the Internet, nor should it be able to access the Internet, for any reason. The system should function only as a source code repository and not for other purposes.
  • Restricted developer access. Only authorized developers, and no other personnel, should have access to source code.
  • Restricted administrator access. Only authorized personnel (ideally not developers!) should have administrative access to the source code repository software, as well as the underlying operating system, and other components such as database management systems.
  • No direct source code access. No one should be able to access source code directly. Instead, everyone should be accessing it through the management software.
  • Limited, controlled checkout. Developers should only be able to check out modules when specifically authorized. This can be automated through integration with a software defect tracking system.
  • Restricted access to critical code. Few developers should have access to the most critical code, including code used for security functions such as authentication, session management, and encryption.
  • No bulk access. Developers should not, under any circumstances, be able to check out all modules. (This is primarily for preventing intellectual property theft.)
  • Retention of all versions. The source code repository should maintain copies of all previous versions of source code, so modules can be “rolled back” as needed.
  • Check-in approval. All check-ins should require approval of another person. This prevents a developer from unilaterally introducing defects or back doors into a program.
  • Activity reviews. The activity logs for a source code repository should be periodically reviewed to make sure that there are no unauthorized check-outs or check-ins, and all check-ins represent only authorized changes to source code.
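
To make the activity-review bullet concrete, here’s a minimal sketch that scans a hypothetical repository activity log for check-ins that lack a recorded approver. The log format is invented for illustration; real repositories expose this kind of data through their own audit logs and reporting interfaces.

# Hypothetical activity log: one dict per repository event.
activity_log = [
    {"action": "checkin", "user": "asmith", "module": "billing.py",
     "approved_by": "jlee"},
    {"action": "checkin", "user": "asmith", "module": "auth.py",
     "approved_by": None},          # suspicious: no second-person approval
    {"action": "checkout", "user": "bjones", "module": "report.py"},
]

def find_unapproved_checkins(log):
    """Flag check-ins with no recorded approver for manual review."""
    return [e for e in log
            if e["action"] == "checkin" and not e.get("approved_by")]

for event in find_unapproved_checkins(activity_log):
    print("REVIEW:", event["user"], "checked in", event["module"],
          "without approval")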

Assess the Effectiveness of Software Security

Former U.S. President Ronald Reagan was well known for his phrase trust but verify. We take this a little further by saying don’t trust until verified. This credo applies to many aspects of information security, including the security of software.

Initial and periodic security testing of software is an essential part of the process of developing (or acquiring) and managing software throughout its entire lifespan. The reason for periodic testing is that researchers (both white hat and black hat) are always finding new ways of exploiting software programs that were once considered secure.

Other facets of security testing are explored in lurid detail in Chapter 8.

Auditing and logging of changes

Logging changes is an essential aspect of system and software behavior. The presence of logs facilitates troubleshooting, verification, and reconstruction of events.

There are two facets of changes that are important here:

  • Changes performed by the software. Mainly, this means changes made to data. As such, a log entry will include “before” and “after” values, as well as other essentials, including user, date, time, and transaction ID. This also includes configuration changes that alter software behavior. (A sample log entry appears after this list.)
  • Changes made to the software. This generally means changes to the actual software code. In most organizations, this involves change management and configuration management processes. However, while investigating system problems, you shouldn’t discount the possibility of unauthorized changes. The absence of change management records is not evidence of the absence of changes.

Log data for these categories may be stored either locally or in a central repository, such as a SIEM (security information and event management) system. Appropriate personnel should be notified in a timely manner when actionable events take place. This is discussed more fully in Chapter 9.
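
Here’s a minimal sketch of the first category, assuming a hypothetical schema: a data-change log entry that captures the before and after values along with the user, timestamp, and transaction ID.

from datetime import datetime, timezone

def data_change_entry(user, txn_id, field, before, after):
    """Build one audit entry for a change the software made to data."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "transaction_id": txn_id,
        "field": field,
        "before": before,   # value prior to the change
        "after": after,     # value after the change
    }

entry = data_change_entry("csr_042", "TXN-88172",
                          "mailing_address",
                          "12 Elm St", "98 Oak Ave")
print(entry)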

Risk analysis and mitigation

Risk analysis of software programs and systems is an essential means for identifying and analyzing risks. The types of risks that will likely be included are

  • Known vulnerabilities. What vulnerabilities can be identified, how they can be exploited, and whether the software has any means of being aware of attempted exploitation and defending itself.
  • Unknown vulnerabilities. Here, we’re talking about vulnerabilities that have yet to be discovered. If you’re unsure of what we mean, just imagine any of several widely available programs that seem to be plagued with new vulnerabilities month after month. Software with that kind of track record certainly has more undisclosed vulnerabilities. We won’t shame them by listing them here.
  • Transaction integrity. In other words, does the software work properly and produce the correct results in all cases, including unintentional and deliberate misuse and abuse? Manual or automated auditing of software programs can be used to identify transaction calculation and processing problems, but humans often spot them, too.

Tools that are used to assess the vulnerability of software include

  • Security scanners. These are tools, such as WebInspect, AppScan, and Acunetix Web Vulnerability Scanner, that scan an entire web site or web application. They examine form variables, hidden variables, cookies, and other web page features to identify vulnerabilities.
  • Web site security tools. These are tools like Burp, Nikto, Tamper Data, and Paros Proxy that are used to manually examine web pages to look for vulnerabilities that scanners often can’t find.
  • Source code scanning tools. These are such tools as Veracode, AppScan Static, and HP Fortify. These tools examine program source code and identify vulnerabilities that security scanners often cannot see.

Another approach to discovering vulnerabilities and design defects uses a technique known as threat modeling. This involves a systematic and detailed analysis of all of a program’s interfaces, including user interfaces, APIs, and interaction with the underlying database management and operating systems. The analysis involves a study of these elements to understand all the ways in which they could be used, misused, and abused by insiders and adversaries. A popular tool for this is the Microsoft Threat Modeling Tool.

The STRIDE threat classification model is also handy for threat modeling. STRIDE stands for the following:

  • Spoofing of user identity
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege
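
As a minimal illustration of how STRIDE is applied, the sketch below walks one hypothetical interface (a login API) through each category and records a candidate threat for each. The interface and the threats listed are invented for illustration only.

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

# Hypothetical threat-modeling notes for one interface.
login_api_threats = {
    "Spoofing": "Attacker replays a stolen session token",
    "Tampering": "Request body altered in transit (no TLS)",
    "Repudiation": "Failed logins not logged; user denies attempts",
    "Information disclosure": "Error message reveals valid userids",
    "Denial of service": "Unlimited login attempts exhaust resources",
    "Elevation of privilege": "Role parameter trusted from the client",
}

for category in STRIDE:
    print(f"{category}: {login_api_threats[category]}")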

Mitigation of software vulnerabilities generally means updating source code (if you have it!) or applying security patches. However, patches often cannot be obtained and applied right away, which means either implementing temporary workarounds, or relying on security in other layers, such as a web application firewall.

Mitigation of transaction integrity issues may require either manual adjustments to affected data, or workarounds in associated programs.

Acceptance testing

Acceptance testing is the formal process of verifying that a software program performs as expected in all scenarios. Acceptance testing is usually performed when a program or system is first acquired, prior to placing it into production use, and whenever configuration changes or code changes are made to the program or system throughout its service life.

Acceptance testing is most often associated with business end-user testing, where it’s called user acceptance testing (UAT). However, acceptance testing is (or should be!) performed in other aspects that are not necessarily visible or obvious to end users, including:

  • Malicious and erroneous input. Users may be satisfied to test programs using reasonable, acceptable input, but security professionals know that this is only the beginning. Inputs of all types, including malicious and erroneous, must be included in testing, to ensure that the system behaves properly and cannot be compromised. (A test sketch follows this list.)
  • Secure data storage. All instances of data storage must be secure, commensurate with the sensitivity of the data. Testing needs to include checks for data remanence, to make sure that programs do not leave sensitive data behind that could be discovered by others.
  • Secure data transport. All instances of data transmitted to another program or system must be performed using means that are commensurate with the sensitivity of the data. Over the public Internet, this almost always means using encryption.
  • Authentication and authorization. These mechanisms must be proven to work properly and not be vulnerable to attacks or abuse.
  • Session management. This mechanism is used to track each individual user of a system or application. Weaknesses in session management can permit an attacker to take over control of an existing user’s session. Years ago, the Firefox browser extension Firesheep was an excellent proof-of-concept tool that could be used to steal another user’s web application session.
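
Here’s a minimal sketch of the malicious and erroneous input bullet: an acceptance test that feeds a hypothetical input handler hostile values and asserts that each is rejected cleanly rather than crashing the program or passing through unchecked. The handler and its rules are invented for illustration.

import unittest

def handle_username(value):
    """Hypothetical input handler: accept only safe usernames."""
    if not isinstance(value, str) or not (1 <= len(value) <= 32):
        raise ValueError("bad length or type")
    if not value.isalnum():
        raise ValueError("only letters and digits allowed")
    return value.lower()

class MaliciousInputTests(unittest.TestCase):
    HOSTILE = ["' OR '1'='1",                 # SQL injection attempt
               "<script>alert(1)</script>",   # script injection attempt
               "A" * 100000,                  # oversized input
               ""]                            # empty input

    def test_hostile_inputs_rejected(self):
        for payload in self.HOSTILE:
            with self.assertRaises(ValueError):
                handle_username(payload)

if __name__ == "__main__":
    unittest.main()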

Assess Security Impact of Acquired Software

Every organization acquires some (or all) of its software from other entities. Any acquired software that is related to the storage or processing of sensitive data needs to be understood from a security perspective, so that an organization is aware of the risks associated with its use.

There are some use cases that bear further discussion:

  • Open source. Many security professionals fondly recall those blissful days when we all trusted open source software, under the belief that many caring and talented individuals’ examination of the source code would surely root out security defects. However, security vulnerabilities in OpenSSL, jQuery, MongoDB, and others have burst that bubble. It is now obvious that we need to examine open source software with as much scrutiny as any other software.
  • Commercial. Confirming the security of commercial tools is usually more difficult than open source, because the source code usually is not available to examine. Depending on the type of software, automated scanning tools may help, but testing is often a manual effort. Some vendors voluntarily permit security consulting firms to examine their software for vulnerabilities and permit customers to view test results (sometimes just in summary form).
  • Software libraries. Here, we are talking about collections of software modules that by themselves are not programs, but are used to build programs or used by programs while they’re running. Think of them as pre-assembled pieces created by others. Careful scrutiny of all such libraries is essential, as there are many that are not secure, and more that do not always function correctly — particularly under stress and abuse. Further, software libraries should be checked for vulnerabilities — trust and verify! (A minimal dependency-check sketch follows this list.)
  • Operating systems. Open source or not, we generally are satisfied with the use of good hardening guidelines, effective patch management, and scanning with such tools as Nessus, Rapid7, and Qualys to find vulnerabilities.
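
To illustrate the library-scrutiny point above, here’s a minimal sketch that checks a project’s pinned dependencies against a hard-coded advisory list. The package names, versions, and advisories are all invented; real programs consult live vulnerability feeds or dedicated dependency scanners.

# Hypothetical advisories: package -> versions known to be vulnerable.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "otherlib": {"2.3.0"},
}

# Pinned dependencies, as you might parse from a requirements file.
dependencies = {"examplelib": "1.0.1", "otherlib": "2.4.0"}

def vulnerable_deps(deps, advisories):
    """Return (package, version) pairs with known advisories."""
    return [(pkg, ver) for pkg, ver in deps.items()
            if ver in advisories.get(pkg, set())]

for pkg, ver in vulnerable_deps(dependencies, ADVISORIES):
    print(f"WARNING: {pkg} {ver} has a known vulnerability; upgrade it")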

Define and Apply Secure Coding Guidelines and Standards

Organizations that develop software, whether for their own use only or as products for use by other organizations, need to develop policies and standards regarding the development of source code to reduce the number of vulnerabilities that could lead to errors, incidents, and security breaches. Even organizations that use tools to discover vulnerabilities in source code (and at run-time) would benefit from such practices, for two reasons:

  • The time to fix code vulnerabilities is reduced.
  • Some code vulnerabilities may not be discovered by tools or code reviews but could still be exploited by an adversary, leading to an error, incident, or breach.

Security weaknesses and vulnerabilities at the source-code level

Software development organizations must have standards, processes, and tools in place to ensure that all developed software is free of defects, including security vulnerabilities that could lead to system compromise and data tampering or theft. The types of defects that need to be identified include

  • Buffer overflow. This is an attack where a program’s input field is deliberately overflowed in an attempt to corrupt the running software program in a way that would permit the attacker to force the program to run arbitrary instructions. A buffer overflow attack permits an attacker to have partial or complete control of the target system, thereby enabling him or her to access, tamper with, destroy, or steal sensitive data.
  • Injection attacks. An attacker may be able to manipulate the application through a SQL injection or script injection attack, with a variety of results, including access to sensitive data. (See the first sketch after this list.)
  • Escalation of privileges. An attacker may be able to trick the target application or system into raising the attacker’s level of privilege, allowing him or her to either access sensitive data or take control of the target system.
  • Improper authentication. Authentication that is not air-tight may be exploited by an attacker who may be able to compromise or bypass authentication altogether. Doing authentication correctly means writing resilient code as well as avoiding features that would give an attacker an advantage (such as telling the user that the userid is correct but the password is not).
  • Improper session management. Session management that is not programmed and configured correctly may be exploited by intruders, which could lead to session hijacking through a session replay attack.
  • Improper use of encryption. Strong encryption algorithms can be ineffective if they are not properly implemented. This could make it easy for an attacker to attack the cryptosystem and access sensitive data. This includes not only proper use of encryption algorithms, but also proper encryption key management.
  • Gaming. This is a general term referring to faulty application or system design that may permit a user or intruder to use the application or system in ways not intended by its owners or designers. For example, an image-sharing service may be used by criminals to pass messages using steganography.
  • Memory leaks. This type of defect occurs when a program fails to release unneeded memory, resulting in the memory requirements of a running program growing steadily over time, until available resources are exhausted.
  • Race conditions. This type of defect involves two (or more) programs, processes, or threads that each access and manipulate a resource as though they had exclusive access to the resource. This can cause an unexpected result with one or more of the programs, processes, or threads. (See the second sketch after this list.)

These weaknesses, and others, are addressed in detail by the Open Web Application Security Project (OWASP) at www.owasp.org.
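
As promised in the injection attacks bullet, here’s a minimal sketch of the standard defense: parameterized queries instead of string concatenation. It uses Python’s built-in sqlite3 module; the table and data are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"   # a classic injection payload

# VULNERABLE: user input is concatenated into the SQL statement,
# so the payload rewrites the query and returns every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("concatenated query returned:", rows)

# SAFE: a parameterized query treats the input purely as data,
# so the payload matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)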
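And here’s a minimal sketch of the race condition bullet: several threads increment a shared counter without coordination and lose updates, while a lock serializes the read-modify-write and fixes the problem. (Because thread scheduling varies by interpreter and machine, you may need a few runs to observe lost updates in the unsafe version.)

import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        value = counter       # read ...
        counter = value + 1   # ... then write; another thread may run between

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # the lock makes the read-modify-write atomic
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(unsafe_increment), "(expected 400000)")
print("with lock:   ", run(safe_increment))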

Security of application programming interfaces

Application programming interfaces, or APIs, are components of software programs used for data input and data output. An API will have an accompanying specification (whether documented or not) that defines functionalities, input and/or output fields, data types, and other details. Typically, an API is used for non-human interaction between programs: whereas a web interface is designed to be human-readable, an API is a machine-readable interface.

APIs exist in many places: operating systems, subsystems (such as web servers and database management systems), utilities, and application programs. APIs also are implemented in computer hardware for components, such as memory, as well as peripheral devices, such as disk drives, network interfaces, keyboards, and display devices.

In software development, a developer can either create his or her own API from scratch (not recommended), or acquire an API by obtaining source code modules or libraries with APIs built in, such as libraries that implement RESTful interfaces. An API can be a part of an application that is used to transfer data back and forth to other applications, in bulk or transaction by transaction.

APIs need to be secure so that they do not become the means through which an intruder is able to either covertly obtain sensitive data or cause the target system to malfunction or crash. Three primary means of ensuring an API is secure include

  • Secure design. Each API needs to be implemented so that it carefully examines and sanitizes all input data, to defeat any attempts at injection or buffer overflow attacks, as well as program malfunctions. Output data must also be examined so that the API does not output any non-compliant or malicious data. (A minimal input-validation sketch follows this list.)
  • Security testing. Each API needs to be thoroughly tested to be sure that it functions correctly and resists attacks.
  • External protection. In the case of a Web Services API, a web application firewall may be able to protect an API from attack. However, such an option may not be available if the API uses other protocols. Packet filtering firewalls do not protect APIs from logical attacks, because such firewalls do not examine the contents of packets, only their source and destination IP addresses and ports.
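
Here’s a minimal sketch of the secure design bullet: an API handler that validates its input against an explicit specification before doing any work, rejecting unknown fields and malformed values. The field names and rules are hypothetical.

import re

# Hypothetical API input specification: field -> validation rule.
SPEC = {
    "account_id": re.compile(r"^[A-Z]{2}\d{6}$"),
    "amount": re.compile(r"^\d{1,7}(\.\d{2})?$"),
}

def validate_request(payload):
    """Reject anything not matching the spec: unknown fields,
    missing fields, or malformed values."""
    if set(payload) != set(SPEC):
        raise ValueError("unexpected or missing fields")
    for field, rule in SPEC.items():
        value = payload[field]
        if not isinstance(value, str) or not rule.fullmatch(value):
            raise ValueError(f"malformed value for {field}")
    return {"account_id": payload["account_id"],
            "amount": float(payload["amount"])}

print(validate_request({"account_id": "AB123456", "amount": "10.00"}))
try:
    validate_request({"account_id": "AB123456' OR 1=1", "amount": "10.00"})
except ValueError as e:
    print("rejected:", e)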

Secure coding practices

The purpose of secure coding practices is the reduction of exploitable vulnerabilities in tools, utilities, and applications. The practice of secure coding isn’t just about the code itself; it involves many other considerations and activities. Here are some of the factors related to secure coding:

  • Tools. From the selection and configuration of integrated development environments, to the use of static and dynamic code testing tools (SAST and DAST, respectively), tools can be used to detect the presence of source code defects, including security vulnerabilities. The earlier such defects are found, the less effort it takes to correct them.
  • Processes. As discussed earlier in this chapter, software development processes need to be designed and managed with security in mind. Processes define the sequence of events; in the context of software and systems development, security-related steps such as peer reviews and the use of vulnerability scanning tools help ensure that source code is reasonably free of defects.
  • Training. Software developers and engineers are more likely to write good, safe code when they know how to. Training in secure development is essential. Very few universities include secure development in their undergraduate computer science programs, so organizations must fill the gap themselves.
  • Incentives. Money talks. Providing incentives of some form will help software developers pay more attention to whether they’re producing code with security vulnerabilities. We like the carrot more than the stick, so perhaps a reward for the fewest defects per calendar quarter or year is a good start.
  • Selection of source code languages. The selection of source code languages and policies about the use of open-source code comes into play. Some coding languages by design are more secure (or we might say “safe”) than others. For example, the C language, as powerful as it is, has no protective features, which requires software developers to be more skilled and knowledgeable about writing safe and secure code. Developed in the 1970s, the C language was created during an era when there was more trust. However, Brian Kernighan or Dennis Ritchie, the authors of The C Programming Language, are sometimes credited with the saying, “We (Unix and C) won’t prevent you from doing something stupid, as that restriction might also prevent you from doing something cool.” We have been unable to confirm whether either one said this or not. It might have been in a book (such as The C Programming Language), in a lecture, or in a pub after quaffing a few pints of ale. The point is, some languages are, by design, safer than others. We’re sorry for the rabbit hole. Well, mostly sorry.