CHAPTER 5: APPLICATION SECURITY AND ISO27001

As the threats to applications increase, we need a structured approach to managing their security. ISO27001, the international standard for information security management best practice and the most comprehensive standard in the field, provides a framework to manage the security of our applications.

ISO27001 defines controls for the acquisition, development, customisation, maintenance and operation of applications. The controls are process-centric and technology-independent, which is one of the Standard’s strengths. The Standard does not specify the technical details of the controls; organisations are expected to draw on the more detailed technical guidance available from specific application developers, industry forums and other sources of good practice. For example, the specifics of web application security can be obtained from forums such as the Open Web Application Security Project (OWASP).

Risk assessment, which we discussed in Chapter 3, is the foundation of ISO27001. The risk assessor selects the appropriate controls after a risk assessment. The same approach is also followed for securing software applications. The overall approach is:

  1. Perform an information security risk assessment to identify the assets at risk and the level of risk in relation to the organisation’s risk appetite.
  2. Identify which controls are relevant, based on risks and the scope of the ISO27001 ISMS, and document them in the Statement of Applicability (SOA).
  3. Define a risk treatment plan, the master document for implementing these controls.

In this chapter we will look at the ISO27001 controls relevant for application security. We will focus on the objectives of the control, the implementation requirements and the best practices in that area. In the subsequent chapters we will also look at specific security threats and controls relevant to some specific platforms. We hope that this will help the reader’s understanding of how the controls from the Standard can be applied to guide the implementation of technical security controls in an enterprise. The controls are presented in a sequence that makes it easier to see the inter-relationships between the controls. The sequence is not always the same as that listed in the Standard.

We also cover security metrics in this chapter. Security metrics measure the effectiveness of security controls. ISO27001 requires organisations to show how they collect metrics data, analyse it and take remedial or improvement action. We shall look at sample metrics for some controls.

The table below lists the ISO27001 controls for application security. The entries in bold are the main categories and the entries below are the relevant security controls within that category. As an example, A.12.1 is a main category and A.12.1.1 is a control within it.

Table 3: ISO27001 controls relevant for application security

Control Number   Control

A.6.1: Internal organization
  A.6.1.2        Segregation of duties

A.9.2: User access management
  A.9.2.1        User registration and de-registration
  A.9.2.2        User access provisioning
  A.9.2.3        Management of privileged access rights
  A.9.2.4        Management of secret authentication information of users
  A.9.2.5        Review of user access rights
  A.9.2.6        Removal or adjustment of access rights

A.9.4: System and application access control
  A.9.4.1        Information access restriction
  A.9.4.2        Secure log-on procedures
  A.9.4.3        Password management system
  A.9.4.4        Use of privileged utility programs
  A.9.4.5        Access control to program source code

A.12.1: Operational procedures and responsibilities
  A.12.1.4       Separation of development, testing and operational environments

A.12.4: Logging and monitoring
  A.12.4.1       Event logging
  A.12.4.2       Protection of log information
  A.12.4.3       Administrator and operator logs

A.14.1: Security requirements of information systems
  A.14.1.1       Information security requirements analysis and specification
  A.14.1.2       Securing application services on public networks
  A.14.1.3       Protecting application services transactions

A.14.2: Security in development and support processes
  A.14.2.2       System change control procedures
  A.14.2.3       Technical review of applications after operating platform changes
  A.14.2.4       Restrictions on changes to software packages
  A.14.2.5       Secure system engineering principles
  A.14.2.7       Outsourced development
  A.14.2.9       System acceptance testing

A.14.3: Test data
  A.14.3.1       Protection of system test data

A.18.2: Information security reviews
  A.18.2.3       Technical compliance review

A.6.1.2 Segregation of duties

ISO27001 mandates segregation of duties across the organisation, including for IT operations and application development. The objective of this control is to ensure that no security breach occurs by accident or through intentional misuse. Segregating duties is not always easy, especially for small and medium-sized organisations, and the Standard is realistic in its requirement for compliance with this control: it asks for practical segregation as far as possible, recognising that full segregation may not always be achievable.

Segregation of duties requires that an activity and its authorisation lie with different entities: for example, the person or team that requests a change to an application should be different from the person or team that approves the change. Every organisation should examine its activities, roles and responsibilities, consider the risks and segregate duties in the best way possible.

In the application development and maintenance processes it is critical that responsibilities for development, testing and operations are segregated. Segregating development from testing ensures impartial results and the detection of both functional and security flaws. Operations should be segregated from development to ensure that a developer who understands the code and workings of the software cannot manipulate the production system for fraudulent transactions. A well-known example from the banking industry is a developer who manipulates electronic funds transfer protocols so that, once the application is installed, large sums of money are automatically moved to his accomplices. Note that where segregation of duties is not possible, separating the development, testing and production environments can serve as a compensating control in some situations.

A.9.2.1 User registration and de-registration

The objective of this control is to ensure that formal processes are established to reduce the risk of fraudulent IDs and unauthorised access.

Processes should be established for user registration and de-registration. The registration process should start from the time a person joins the organisation. The user IDs and access required in application systems should be based on the person’s job responsibilities or role, and this should be clearly documented. The human resources (HR) team should liaise with the IT team to provide the user ID for the new joiner.

An authorisation process should be established to validate the need for the creation of user IDs in systems. A typical process routes a user provisioning request through the required approvals before the user ID is commissioned in production systems. Increasingly, enterprises are adopting self-provisioning systems, in which the roles and user IDs required in each system are codified in identity and access management (IAM) systems. IAM systems also provide the required workflows for authorisation: in a self-provisioning system, users can go to a portal and request access to application systems.

Once approval is given, the user IDs are auto-provisioned in the multiple applications to which the user has been allowed access, and the required access privileges are assigned based on roles or profiles in each application. The critical success factor is that the business role of the user should be linked to the roles and privileges in the application, and this mapping should be documented and approved by management. The Standard requires that a process be set up and adhered to; it does not mandate automation. Automation using IAM systems is, however, a best practice and can reduce process overheads. A similar process should be available for de-registration: HR should notify IT as soon as a person has left the organisation, and IT in turn should remove the user ID, or disable it for a certain period before deletion. Other scenarios the process should address include transfer of users across divisions, promotions and the related changes in user IDs and privileges.
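The role-to-privilege mapping at the heart of such a provisioning process can be sketched as follows. This is a minimal illustration, not a real IAM integration; the role names, applications and privileges are hypothetical examples.

```python
# Minimal sketch of role-based auto-provisioning. Role names, applications
# and privileges below are hypothetical examples, not a real entitlement model.

ROLE_ENTITLEMENTS = {
    "teller":       {"core_banking": ["read", "post_transaction"]},
    "branch_admin": {"core_banking": ["read", "post_transaction", "approve"],
                     "hr_portal":    ["read"]},
}

def provision(user_id: str, business_role: str) -> dict:
    """Return the per-application privileges to assign for an approved request."""
    try:
        entitlements = ROLE_ENTITLEMENTS[business_role]
    except KeyError:
        # An undocumented role must fail closed, never default to broad access.
        raise ValueError(f"No documented entitlements for role {business_role!r}")
    # A real IAM system would call each application's provisioning API here;
    # this sketch simply returns the access that would be granted.
    return {app: list(privs) for app, privs in entitlements.items()}
```

Note that the mapping itself is the management-approved document: changing it should go through the same change control as any other privilege change.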

User IDs should be unique as far as possible. Group IDs should be allowed only as an exception, and only if the business requirements cannot be met without them. When group IDs are used, there should be mechanisms to link activity back to the actual user, e.g. based on time of use or terminal. A common risk in many enterprises is the presence of multiple admin user IDs in applications; admin IDs should be minimised to reduce the chances of misuse. Users should also sign a formal statement on conditions of access and acceptable usage.

Formal processes should also be established for periodic checks for redundant or fraudulent IDs in systems, and for their removal. Once again, manual mechanisms are sufficient from the ISO27001 control perspective; they are, however, tedious and do not scale for large enterprises. Identity audit (IA) systems are adopted to automate these processes. IA systems integrate with IAM systems to check for approved IDs, compare them with the ‘as is’ scenario and produce reports on exceptions. Once the exception list has been analysed, redundant IDs can be removed as approved.

User IDs often become backdoors for fraud in large enterprises. Fraudulent banking transactions using dormant IDs and manipulation of ERP systems using fraudulent IDs are realities. Robust user management processes can provide the required safeguards.

A.9.2.4 Management of secret authentication information of users

A.9.4.3 Password management system

A.9.4.2 Secure log-on procedures

Most controls in security are ultimately tied to a password, and password breach is one of the easiest and highest-impact methods of system compromise. ISO27001 provides guidelines for managing passwords. These guidelines can be applied to applications, systems, network devices and a number of other IT systems; we will analyse this control from an application security perspective.

Have a well-defined password policy that takes into account the risks, ease of use and ease of enforcement. Applications should have password management modules that can enforce the password policy. The Standard’s requirements are:

  • Applications should support password complexity. They should enforce passwords with a combination of lower and upper case characters, numerals and special characters.
  • Applications should enforce minimum and maximum password length.
  • Applications should force users to change temporary passwords at first logon.
  • Applications should enforce periodic password changes. Periodicity should be configurable.
  • Applications should transmit and store passwords in encrypted form.
  • Applications should maintain password history to prevent password reuse.
  • Applications should not display passwords on screen when users type passwords.
  • Application password files should not be stored along with application data.
  • Applications should have a secure ‘forgot password’ feature to allow users to obtain a new password if they forget their password.
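A password management module enforcing the requirements listed above can be sketched as follows. The specific thresholds (length 10–64, history depth of 5) are illustrative assumptions; the Standard leaves them to policy.

```python
import re

# Sketch of a password-policy check for the requirements listed above.
# The thresholds (length 10-64, history of 5) are illustrative, not mandated.
MIN_LEN, MAX_LEN, HISTORY_DEPTH = 10, 64, 5

def check_password(candidate: str, previous_hashes: list, hash_fn) -> list:
    """Return a list of policy violations (empty means the password is acceptable)."""
    errors = []
    if not (MIN_LEN <= len(candidate) <= MAX_LEN):
        errors.append(f"length must be {MIN_LEN}-{MAX_LEN} characters")
    if not re.search(r"[a-z]", candidate):
        errors.append("needs a lower-case letter")
    if not re.search(r"[A-Z]", candidate):
        errors.append("needs an upper-case letter")
    if not re.search(r"[0-9]", candidate):
        errors.append("needs a numeral")
    if not re.search(r"[^a-zA-Z0-9]", candidate):
        errors.append("needs a special character")
    # History check against stored hashes enforces the no-reuse requirement.
    if hash_fn(candidate) in previous_hashes[-HISTORY_DEPTH:]:
        errors.append("password was used recently")
    return errors
```

The `hash_fn` parameter stands in for whatever salted password-hashing scheme the application uses; the history should hold only hashes, never the passwords themselves, in line with the encrypted-storage requirement above.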

The challenges for consumer-facing enterprise applications are even greater due to new forms of attack. Phishing has emerged as a serious threat to Internet banking applications: phishing attacks capture passwords by tricking users into submitting them to a fake website that looks like the original site. Such applications therefore need more sophisticated authentication mechanisms. In response, some banks have integrated their authentication systems with two-factor authentication. ‘Two-factor’ refers to authentication based on a combination of two items of information, one based on what the user ‘knows’ and the other on what the user ‘has’, e.g. a PIN that the user knows combined with a random number generated by a hardware token the user holds. Building an application with robust password management is a challenging task; Chapter 9 has more details on the best practices developers should adopt for authentication.

In addition to application-supporting features for strong passwords, processes for password management should be established. The Standard requires compliance as follows:

  • Users should be made aware of their responsibility to protect passwords.
  • Users who are provided with a temporary password should be forced to change their password at first logon. Users should at least be advised on the risks and the requirement to change the password at first logon.
  • Processes should be established to verify the identity of the user before providing a temporary or replacement password. These processes have to be stringent, especially in the banking and financial services industry.
  • Passwords should be generated and disseminated in a secure fashion. The team that generates or disseminates passwords should not be able to view the passwords. As an example, an ATM PIN is printed directly from the system and sent to the customer. Such practices prevent internal fraud happening through system compromise.
  • Passwords should never be available for viewing by the operations team. As an example, the PIN that a banking customer uses to validate themselves during a conversation with the bank’s call centre should not be visible to the call centre agent; alternatively, the agent should see only a few individual characters drawn from the PIN, against which the customer is tested.
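The partial-verification scheme in the last bullet, where the agent checks only selected characters of the secret, can be sketched as follows. The position-selection and comparison logic shown is an illustration, not any particular bank's implementation.

```python
import secrets

def challenge_positions(secret_len: int, k: int = 3) -> list:
    """Pick k random character positions (0-based) the customer will be asked for."""
    return sorted(secrets.SystemRandom().sample(range(secret_len), k))

def verify_partial(secret: str, positions: list, answers: list) -> bool:
    """Check only the requested positions; the agent never handles the full secret."""
    if len(positions) != len(answers):
        return False
    return all(secret[p] == a for p, a in zip(positions, answers))
```

In practice the comparison would run inside the back-end system against a protected store, with the agent's screen showing only a pass/fail result.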

Default vendor passwords should be changed as part of the application commissioning process.

A.9.4.1 Information access restriction

A.9.2.2 User access provisioning

A.9.2.3 Management of privileged access rights

A.9.2.5 Review of user access rights

A.9.2.6 Removal or adjustment of access rights

Weak access control remains a significant risk in most enterprises. ISO27001 has a set of controls to manage access rights and appropriate privilege management in applications. The objective of these controls is to implement robust processes for access control such that application compromise and the chance of fraud are reduced.

Access control in applications starts with the definition of user roles and their corresponding authorisations or privileges, based on business requirements. Once the roles and required authorisations are clear, they should be implemented in the applications. Applications should have an administrative module through which roles and their associated privileges can be defined; most enterprise applications support such functionality, and the gap usually lies in the implementation. Applications should also support an intuitive way of defining authorisations: in the absence of user-friendly menus for configuring them, operations teams tend to grant broader authorisations than necessary simply to ensure that functionality is not affected.

Application owners should be responsible for the implementation of access controls. The changes in required authorisations should undergo stringent change management controls. The change management committee that deals with this should include senior business representatives who can validate the need for changes in privileges related to business needs. Application-to-application interfacing should also go through strict access control. An application querying data from another application should go through middle software (‘middleware’) that has the authorisations defined. If an application is directly querying data, the query should be limited to data required for business requirements.

As a rule, the minimum privileges required by the business should be assigned to a role. Privileged access rights should accordingly be restricted and assigned only where necessary. Programs should also run with minimum privileges, and regular user activities should not be carried out from administrative accounts: such practices increase the chances of an inadvertent error with adverse impact on operations, as well as the chances of fraud. Records should be kept of all privileges assigned, and any change to user privileges should be approved through a formal change management request.

An authorisation audit process to review user rights should be set up. User access rights change as a result of promotions, transfers and employment termination, so it is essential that user privileges are reviewed periodically. The frequency of review should be higher for critical users, and can be determined by considering parameters such as the scale of operations and the level of risk: normal user authorisation audits might run every six months, whereas critical user IDs could be reviewed every three months.

User access management is a complex activity, given the number of roles and the granularity of privileges in applications. It is good practice to automate these processes through software. Details of some of the software systems that can be used are discussed in control A.9.2.1 User registration.

A.9.4.4 Use of privileged utility programs

System utilities are tools used to manage and troubleshoot applications and system data; examples include database administration software and registry editors. Many of these utilities can access critical system resources, so in the wrong hands they become effective attack tools. A database administrator, for instance, can bypass the application controls and access the database directly with a database administration utility. The objective of this control is therefore to limit the use of such system utilities.

Separate system utilities from application systems. Disable them as far as possible in application systems. Only give access to these utilities to specific users. As far as possible, maintain a log of the access and use of system utilities.

A.9.4.5 Access control to program source code

The objective of this control is to prevent the introduction of malicious code to applications through unauthorised changes. Program source code should be access-controlled.

Source code should be stored centrally. Source code in this context also includes design documents, functional specifications and other software development lifecycle (SDLC) documents. In most development environments, source code is managed using a configuration management (CM) tool. It is good practice to use a centralised CM tool with strict access control processes implemented. Code check-in and check-out should go through a formal authorisation process. Audit logs should be maintained for code access. Production systems should not contain any program source libraries.

ISO 10007:2003 (configuration management) and ISO/IEC 12207:2008 (software lifecycle management) are good reference standards for details on configuration management.

A.12.1.4 Separation of development, test and operational environments

Well-controlled development, test and operations environments are a must to ensure minimum disruption from unauthorised access to the production environment. This control mandates key steps to meet this objective.

A process should be defined for moving code into the production environment once it has been approved for release, and it should ensure that appropriate change management and testing are triggered before the move. It is also critical that development, test and operational facilities are separated. No development activity should be allowed, or even possible, on any production server: production servers should not host compilers or other development tools, since these make it possible to make unauthorised, potentially fraudulent changes to production software in the production environment itself.

Test environments should be separated from the development environment and should model the production environment as closely as possible, including simulating production interfaces with other systems. Test environments that do not mirror production cannot provide accurate and comprehensive test results, and incompletely tested software roll-outs can cause production downtime. All critical production systems should have test systems available, and direct testing on production systems should not be carried out, as it risks disrupting production.

On a practical note, it is sometimes difficult in reality to maintain a test environment for every application in production. Every organisation should assess the risks arising from the absence of a complete test environment. Critical business applications should have a test environment; for low value applications, its absence could be treated as an acceptable risk. In some cases, even for critical applications, hardware costs might become prohibitive; running tests in a separate logical domain on the same hardware is then also acceptable. Production data should not be copied into the test environment – we will cover this when we discuss control A.14.3.1, Protection of system test data.

A.12.4.1 Event logging

A.12.4.2 Protection of log information

A.12.4.3 Administrator and operator logs

These three controls constitute the monitoring controls for applications. The scope of these controls extends to applications, systems, network, security and other IT devices. We will focus on the implications for application monitoring. Most controls we have discussed so far focused on protection, while these controls focus on detection.

Monitoring applications starts with enabling of audit logs. Unless logs are available, they cannot be monitored. The level of logging need not be the same in all applications. One approach is to consider the business value of the application and the exposure of the application. Exposure is a function of several factors:

  • Users accessing the applications.
  • Placement of the server.
  • Existing controls.
  • Threats the application is subject to.
  • The probability of attack.

Figure 3: Maximum, moderate and minimum logging

The figure above shows one approach. Maximum logging is enabled on applications with high value and high exposure. Minimum logging is enabled for low value applications with low exposure. Typical audit logs required are listed below:

  • Access logs that capture both success and failure. The log should capture user ID, name and description of the event, IP address of the user, IP address of the application server, the date and time, and the object that was accessed.
  • Logs should capture privileged operations, including both success and failure. Privileged operations include use of administrator accounts, changes to file permissions, changes to authorisations, changes to application security settings, and creation/deletion of objects.
  • Application failure logs and error logs should capture error ID, event description, date and time, user ID if applicable and application server IP address/name.
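An access-log record carrying the fields listed above might be built like this. The field names and JSON layout are illustrative; any structured format carrying the same fields would satisfy the control.

```python
import json
from datetime import datetime, timezone

def access_log_event(user_id, event_name, description, client_ip, server_ip, success):
    """Build a structured access-log record with the fields required above:
    user ID, event name and description, user and server IP, date/time, outcome."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "event": event_name,
        "description": description,
        "client_ip": client_ip,
        "server_ip": server_ip,
        "outcome": "success" if success else "failure",  # capture both outcomes
    })
```

Recording failures as diligently as successes matters: failed-access patterns are often the earliest signal of an attack.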

The logs generated should be monitored. Critical systems should be monitored 24 x 7 x 365; this can be an in-house or outsourced activity. Other system logs can be reviewed periodically – daily, weekly, fortnightly or monthly.

Protect the logs, as they can be manipulated by an attacker. A good practice is to send logs to a central log server (CLS) for monitoring: copy the logs to the central server as soon as they are generated, and give the monitoring team – and only the monitoring team – read-only access to the CLS, so that logs cannot be manipulated. There are many ways to achieve this. Some applications support ‘syslog’ format.13 In such cases, they can be configured to send the logs to the IP address of the CLS running a syslog server. Commercial security information management (SIM) products are another option; they can extract logs from applications to the CLS running the SIM manager. Digitally sign the logs so that their integrity can be verified later.
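For applications that support syslog, shipping events to a central log server can be done with standard logging machinery, as sketched below in Python. The CLS hostname is a placeholder, and the message format is illustrative.

```python
import logging
import logging.handlers

def make_cls_logger(cls_host: str, cls_port: int = 514) -> logging.Logger:
    """Create a logger that ships audit events to a central log server (CLS)
    over syslog (UDP port 514 by default), so local copies need not be trusted."""
    logger = logging.getLogger("app.audit")
    handler = logging.handlers.SysLogHandler(address=(cls_host, cls_port))
    handler.setFormatter(logging.Formatter("app=%(name)s level=%(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

# Usage (hostname is a placeholder for the actual CLS):
# logger = make_cls_logger("cls.example.internal")
# logger.info("user=jdoe event=login outcome=success")
```

Plain UDP syslog offers no confidentiality or delivery guarantee; for sensitive logs, a TCP or TLS transport to the CLS is the safer choice.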

Comply with the regulatory and legal requirements of the country for storing logs. This will help ensure that the logs will meet the evidential requirements when produced in a court of law. Different countries have different requirements – multiple time stamps, recording of specific fields, digitally signing the logs, etc.

A.13.1.3 Segregation in networks

This control looks for meaningful ways to limit operational impact due to resource-sharing between networks.

So, what are the practical implications of this control? To control costs, enterprises try to extract the maximum possible from available resources. As an example, a core banking application might be running on an extremely powerful hardware system that is not fully utilised, and the organisation might decide to run one more application on the same hardware for a better return on investment. While this saves cost, it can also lead to production downtime: the OS components and versions required by the two applications might not be compatible, leading to intermittent performance issues, and peak utilisation of the applications could coincide, causing further degradation. As far as possible, resource sharing should be limited for critical applications. Alternative solutions can be considered for the above scenario; for instance, running the applications in two logical domains within the same hardware might be fine as long as capacity requirements are met.

The core idea of segregation in networks is to place servers in logical network segments with access controls between the segments. Segments can be created based on criticality of servers, access requirements and levels of trust. This will ensure that even if there is disruption in a certain segment, the segments with critical servers are isolated and are not affected.

A.14.1.1 Information security requirements analysis and specification

ISO27001 emphasises building security early in the software development lifecycle (SDLC). The objective of this clause is to include security requirements in the software specification itself. That ensures security features are integrated early into the application and prevents costly rework to add security features later.

Build security requirements into the software requirement specifications (SRS) for new software and also for customised software. When security requirements are specified in the SRS, they can be used to design security features in the design stage. Trace the security requirements across the SDLC process at various stages – security feature design, development of security features, and testing of security features.

Analyse commercial off-the-shelf (COTS) software for compliance with your security requirements before procuring it, and establish a formal process for this verification. You can also look at software that has already been certified or evaluated for security; ISO15408 (the Common Criteria) is the standard for such product certification.

Application owners are responsible for implementing the security requirements. They should work with the information security team to arrive at the right requirements and controls specification: the information security team provides the technology expertise, whereas the application owners bring the business perspective. For example, the maker-checker requirement in a banking application is a business requirement as much as it is a security feature.

The level of security requirements depends on several factors:

  • the business, contractual and compliance importance or value of the application;
  • the potential business, contractual or compliance impact if the identified risks manifest themselves;
  • third party access;
  • accessibility from the Internet.

Here are some examples of security requirements:

  • Application should authenticate all users before allowing access.
  • Application should not display passwords while they are being keyed in.
  • All transactions with financial implications should have a separate requestor and approver.
  • Application should follow the principle of least privilege.
  • The application should restrict menu options based on a need-to-know and need-to-do basis.
  • Application should not allow any modifications to be made after an entry is authorised. Any subsequent changes should be made only by reversing the original entry and passing a fresh entry.
  • Passwords should be encrypted when transmitted between client and server.
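Several of the requirements above, such as restricting menu options on a need-to-know and need-to-do basis, reduce at design time to an explicit role-to-function mapping. A minimal sketch (the roles and menu items are hypothetical):

```python
# Sketch: restrict menu options to what each role needs to do.
# Roles and menu items are hypothetical examples.
MENU_BY_ROLE = {
    "clerk":    ["create_entry", "view_entries"],
    "approver": ["view_entries", "authorise_entry", "reverse_entry"],
}

def menu_for(role: str) -> list:
    """Return only the menu options the role is entitled to; unknown roles get none."""
    return list(MENU_BY_ROLE.get(role, []))
```

Defaulting unknown roles to an empty menu implements the least-privilege requirement: access is granted by explicit entry, never by omission.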

A.14.1.2 Securing application services on public networks

A.14.1.3 Protecting application services transactions

ISO27001 also defines controls for the security of data across networks, including e-commerce transactions. These controls include requirements intended to limit the legal exposure arising from fraudulent transactions.

E-commerce sites have to ensure that transactions and related information are protected. Most controls we have discussed in this chapter are relevant for achieving this objective. Additional controls are required for non-repudiation of transactions, protection of documents (e.g. contracts) and secure electronic payment mechanisms. Non-repudiation can be achieved through digital signatures. Both trading parties should use digital signatures to meet this objective. The need for digital signatures depends on the risk – they should be used to reduce high risk threats. For instance, banks should use digital signatures for high value fund transfers.

Organisations should ensure that legal requirements are complied with. As an example, digital signatures might be required to be signed by country-level root certification authorities (CA) for legal compliance, while regulatory requirements might mandate specific encryption strength.

Confidentiality of transactions can be achieved through public key cryptography. Encrypt all communications with TLS 1.1 at a minimum, and preferably TLS 1.2, with strong cipher suites. Internet banking sites must disable all versions of SSL on their servers and implement TLS 1.1 or 1.2 with strong ciphers to protect transactions. All sites served over HTTP should also send the HTTP Strict Transport Security (HSTS) header, which instructs clients to connect over TLS only.

TLS 1.1 and 1.2 provide encryption to protect confidentiality, and also server authentication: customers get the assurance that they are transacting with the genuine banking site.
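With Python's standard ssl module, a client can refuse SSL and old TLS versions as described above. This sketch pins the floor at TLS 1.2 (the stricter of the versions mentioned, since current Python deprecates anything older); the hostname in the usage comment is a placeholder.

```python
import ssl

def make_strict_tls_context() -> ssl.SSLContext:
    """Build a client context that accepts TLS 1.2 or newer only,
    refusing all SSL versions and TLS 1.0/1.1."""
    ctx = ssl.create_default_context()            # certificate + hostname checks on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # floor: TLS 1.2
    return ctx

# Usage (hostname is a placeholder):
# import socket
# with socket.create_connection(("bank.example", 443)) as sock:
#     with make_strict_tls_context().wrap_socket(sock, server_hostname="bank.example") as tls:
#         print(tls.version())
```

`create_default_context()` already enables certificate validation and hostname checking, which is what gives the customer the server-authentication assurance discussed above.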

Cryptographic techniques certainly provide a reasonable level of assurance for e-commerce sites and transactions. In the light of the new attacks we discuss in Chapter 6, however, a number of additional controls are required to fully protect e-commerce sites: web application security threats can be used to compromise them in spite of the cryptographic controls. As an example, a payment gateway transaction could potentially be manipulated by intercepting and changing the HTTP request between the e-commerce site and the payment gateway, which can result in large scale fraud. Chapters 7 and 9 describe the solutions that need to be implemented to mitigate such risks.

E-commerce sites should have well-designed process controls in addition to technology controls. As an example, reconciliation mechanisms should be implemented to check for any deviations between payments and goods sold. Implement strong authorisation processes for signing critical documents, changing inventory information, and approving critical transactions. In applications such as stock trading, the actual transaction should not take place unless the back office has verified the transaction. Business controls have to align well with technology controls for protection of e-commerce transactions.

Privacy of customer data is also a key focus area. Data related to customers and customer transactions should be stored in secure intranets that are protected at the network, application and physical layers. E-commerce sites should also consider the type of information collected and stored. Do not store authorisation data of customers. Customer credit card numbers, social security IDs, date of birth, etc. might be required for specific transactions. No such data should be used in further or additional transactions without the specific prior authorisation of the customer. Sites should also disclose their terms of business, privacy policies and security controls proactively.

A.14.1.3 Protecting application services transactions

Message authenticity and integrity should be protected within applications. Applications often transfer messages or text files between different processing stages, between different components of the same application, or between different applications. In each case, it is important to have controls that check the integrity of data to ensure that there are no accidental or intentional changes.

Hash totals14 are a good mechanism to verify data integrity. They are compared between applications, or within an application across stages or components, to ensure that data accuracy has been maintained. As an example, an Internet banking application might generate certain files of transaction data that act as input for the core banking system. It is important to ensure that the transaction information in the file generated by Internet banking is not manipulated before it is processed by the core banking application. The best way to do this is to generate the hash value of the file in both applications and have the core banking application compare the two hash values before the file is used for processing. Similar mechanisms can be used as long as they have the same effect – that the risk is reduced – although it is important to ensure that the hash values or other data used during this process are transmitted securely.
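The hash check described above can be sketched as follows; the function names are ours, and SHA-256 stands in for whatever algorithm the two applications agree on:

```python
import hashlib

def file_hash(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# The sending application (e.g. Internet banking) publishes the digest;
# the receiving application (e.g. core banking) recomputes it and
# refuses to process the file on a mismatch.
def verify_transfer(path: str, expected_digest: str) -> bool:
    return file_hash(path) == expected_digest
```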

Authenticity of data is also critical: it should be verified that data is received from the correct source. This can be achieved using a secret key shared between the applications.
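A minimal sketch of source authentication with a shared secret, using an HMAC (the key shown is a placeholder; a real key would come from a secure store):

```python
import hashlib
import hmac

SHARED_KEY = b"replace-with-key-from-secure-store"  # illustrative only

def sign(message: bytes) -> str:
    """Sender attaches an HMAC tag computed with the shared secret."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Receiver recomputes the tag; compare_digest avoids timing leaks."""
    return hmac.compare_digest(sign(message), tag)
```

Because only the two applications hold the key, a valid tag demonstrates both that the data is unaltered and that it came from the expected source.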

A.14.2.2 System change control procedures

ISO27001 has controls for change management in relation to IT applications and infrastructure, to ensure that changes to systems do not introduce new risks.

Control and verify changes so there is no impact on security. A common problem is the short time usually available to develop and deploy applications. This leads to a number of short cuts to quickly deploy the applications. These, in turn, result in a number of vulnerabilities that can go unidentified. These vulnerabilities remain in the systems thereafter and usually get identified and reported only during application audits. Adherence to this control ensures that changes will not compromise application security.

Change management is applicable for changes to existing applications and the introduction of new applications to the environment. Major changes should go through a formal change management process. Depending on your organisation’s risk appetite, you can decide if all changes need to go through a formal change management process. It is good practice to aggregate minor changes into one release and then run this release through the change management process.

The change management process should include a risk assessment. This should analyse the change to see if it introduces new risks or dilutes existing controls.

Consider how one of our clients almost diluted one of their controls when they changed their authentication scheme: the bank moved from database-based authentication to LDAP15-based authentication for its Internet banking module. We discovered that Internet banking users who had been disabled in the system (because, for instance, they had closed their accounts) could not transact through the Internet banking portal but they could still transact through the payment gateway. Analysis of this finding enabled us to detect that the update to the payment gateway authentication module had been missed out.

A risk assessment of changes made to applications shows the impact on information security arising from the changes. The controls to address the risks should also be identified. Document the changes and have them approved by a change management committee. The committee should consider the risk assessment results, their possible impact and the exhaustiveness of testing and should provide approval for the change. The application owner is responsible for implementing the changes securely. Once the change is implemented, verify that the recommended controls have been implemented.

Document and maintain the records of the change. Identify all the components affected by the change. Update the documentation and version numbers and maintain audit trails. Remember that only authorised users should be allowed to submit change requests.

Testing of the software change should be carried out in environments that are isolated from both development and production environments. There are multiple challenges and controls in achieving this; we will cover them in detail when we discuss clause A.12.1.4, Separation of development, test and operational facilities.

A.14.2.3 Technical review of applications after operating platform changes

The objective of this control is to ensure that applications continue to be secure even after changes are made in the underlying operating system (OS). Some applications use OS directory permissions and other OS features to assist their own security. Thus application security can be affected by changes in OS configuration or binaries.

An application that depends on OS-level directory permissions can be affected if the next version of the OS changes the way permissions are managed, or if the underlying authentication methods change.

Operating systems are ‘hardened’16 and their settings tightened to improve security. Applications must be tested after hardening to verify that required functionality has not been affected.

OS updates can also change system binaries, such as DLL files on Windows; these changes can in turn disrupt application functionality.

This control requires an organisation to set up a process for review, and to review the major OS changes. The application owner owns this process. There should be a formalised interface with the operations team to get notification of OS changes.

A.14.2.4 Restrictions on changes to software packages

Many software applications are sold as customisable packages today. This is especially true in the banking and financial services industry. Too many changes to a software application might introduce errors and reduce its security. Changes also increase the maintenance overhead for the application. This control therefore aims to minimise changes to software after it is built.

The level of customisation possible, and its impact, is not always analysed thoroughly during procurement, and vendor promises of customisation are not always validated prior to purchase. The software might start misbehaving after extensive customisation: security controls might get bypassed to implement special requirements, and the integrity of transactions can be affected, with the result that the user experience suffers.

This control mandates that changes to software should be minimised. Only necessary changes should be made and all changes should be strictly controlled.

Assess security impacts arising from customisation of software. The assessment should ascertain that security controls in the software have not been affected by customisation and should also check that new risks have not been introduced. Consider purchasing software that matches the functional requirements most and reduce customisation.

Sometimes the scalability of the software is not assessed. Consider the case of a large telecoms operator who purchased a billing application suited for a smaller telecom operator. They saved on the initial purchase cost and planned to add features incrementally and to customise the software along the way. They ran into scalability issues as their business grew faster than any customisation could cope with. This led to billing errors, unhappy customers, and loss of revenue.

Remember that ownership and maintenance of changes is also important. The vendor should be involved in the changes and maintenance as far as possible. Retain original copies of the software and document all changes to it.

A.14.2.5 Secure system engineering principles

Secure system engineering principles – including the validation of data input and output – must be established by the organisation, documented, maintained and applied consistently across all engineering projects. Establishing these rules up front, and applying and enforcing them consistently, ensures that security becomes an integral part of everyday operations both during and after the development process.

Input validation has evolved as a critical application security requirement in the light of new risks, especially in web applications. This control in ISO27001 provides guidance to mitigate such risks. The different types of attack that can be used to exploit weak input validation include SQL injection, cross-site scripting and buffer overflows. Chapter 6 discusses these attacks with demos.

Input validation should be automated as far as possible. It is good practice to build a module for input validation within the application. Application owners should be responsible for building input validation checks. The information security team can play a consultative role in providing guidelines and technical expertise on the types of check. Vendor or internal teams who are developing the application should be made aware of these risks and should be contractually responsible for implementation of input validation controls.

Input validation controls should check for invalid characters, threshold violations against upper and lower bounds, and dual inputs. One approach to input validation is to allow only known and required characters (the whitelist approach); the other is to block known invalid characters (the blacklist approach). We discuss these options and best practice for input validation in Chapter 9.
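A whitelist-style validation module might look like the following sketch; the account ID pattern and the amount bounds are invented for illustration:

```python
import re

# Whitelist: only upper-case letters and digits, 8-12 characters.
ACCOUNT_ID = re.compile(r"[A-Z0-9]{8,12}")

def validate_account(raw: str) -> str:
    """Reject any input containing characters outside the allowed set."""
    if not ACCOUNT_ID.fullmatch(raw):
        raise ValueError("account ID contains disallowed characters")
    return raw

def validate_amount(raw: str, lower: float = 0.01,
                    upper: float = 1_000_000.0) -> float:
    """Parse the value, then enforce upper and lower bounds."""
    try:
        value = float(raw)
    except ValueError:
        raise ValueError("amount must be numeric")
    if not (lower <= value <= upper):
        raise ValueError("amount outside permitted range")
    return value
```

Note that the whitelist rejects injection payloads as a side effect of allowing only the characters the field legitimately needs.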

Input validation will also require process checks to be incorporated. Hard copy documents, if used for input, should be checked for any unauthorised changes. A process should be established for validating the hard copy documents.

Processing errors in mission-critical applications can be costly. Manipulation of application processing can also lead to high-impact transactional fraud. Internal processing controls should be established for applications. This control objective lays emphasis on the implementation of appropriate mitigation mechanisms for accurate processing of data.

Processing should also be validated through automated mechanisms within and outside the application. Application owners should be responsible for implementing appropriate controls; the information security team should provide the required domain expertise, and internal or external audit teams should be involved in periodic assessments of the accuracy of application processing.

Calculation of interest rates in banking applications is an example. Erroneous calculation of interest can have catastrophic results for a bank. Often, due to extensive customisation of the application, there are errors in the way it is coded and configured to calculate interest for various schemes and scenarios. In such cases it is essential that testing procedures are rigorous and take into account all relevant scenarios before the application is put into production. It is also critical that there are periodic audits to ensure that processing is accurate. Usually, for such a scenario, checks outside the application – using automated tools as well as manual checks – will be required to verify the accuracy of interest rate processing.

Processing controls can also be manipulated for fraud. Such risks can be minimised through appropriate segregation of duties and allocation of minimum privileges required for a business role.

Integrity checks for data should also be carried out to ensure that there is no manipulation in intermediate stages. Applications sometimes use text files to transfer data between different phases in processing. Such files should be checked for integrity at multiple stages to ensure that the files are not manipulated.

Application programs should run in the defined sequence or at the defined time to ensure correct processing. Also, in case of an intermediate program failure, there should be controls to ensure that the other programs following it will not execute unless the problem is resolved. Applications should have checks and balances for maintaining the sequence and for error handling.

An approach that is gaining prominence is real-time auditing to check the accuracy of application processing. A parallel system runs alongside production, configured to sample relevant data and calculate the expected output; any deviations from the production output are then reported immediately. This provides the required level of assurance to capture accidental or intentional errors in applications. Even if a parallel system is not used, it is good practice to identify certain critical parameters and monitor them daily for deviations from expected output. This can be carried out by the internal audit team or even by the application owners themselves as a self-compliance mechanism. Logs should be generated for critical stages in processing; copying these logs to a central server and monitoring them for deviations is also a good way to catch errors in processing.
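A parallel-run check of this kind can be sketched as below; the simple-interest formula, tolerance and field names are illustrative assumptions:

```python
# Hypothetical daily reconciliation: recompute expected simple interest
# for a sample of accounts and flag any deviation from production figures.
TOLERANCE = 0.01  # currency units

def expected_interest(principal: float, annual_rate: float, days: int) -> float:
    """Independent recalculation of simple interest for the period."""
    return round(principal * annual_rate * days / 365, 2)

def reconcile(sample: list) -> list:
    """Return the records whose production output deviates from expectation."""
    deviations = []
    for rec in sample:
        exp = expected_interest(rec["principal"], rec["rate"], rec["days"])
        if abs(exp - rec["production_interest"]) > TOLERANCE:
            deviations.append({**rec, "expected": exp})
    return deviations
```

Any record returned by `reconcile` would be reported immediately for investigation, whether the cause is a configuration error or deliberate manipulation.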

On a related note, web applications can be subverted with the cross-site scripting attack if the data that is supplied as output is not validated. We discuss this in Chapter 6.

A.14.2.7 Outsourced development

Outsourcing application development and maintenance (ADM) is a key initiative for most large enterprises. This enables organisations to focus on their areas of core competence. As in the case of any outsourcing initiative, the risks need to be managed to achieve tangible benefits. This control lays emphasis on the controls that an organisation should implement to ensure that risks in software outsourcing are mitigated.

Security requirements for the software being developed should be part of the contractual requirements. The ADM provider should meet both functionality and security requirements. As an example, the ADM provider should contractually agree to comply with information security standards and guidelines provided by the organisation.

The ADM provider should also have a secure environment, including secure processes and technologies. Organisations should retain the right to audit an ADM provider for complying with security standards set by the organisation. If internal expertise is not available, you should employ the services of a specialist to develop the required standards and also to audit the ADM provider periodically.

Similarly, the organisation should retain the right to audit the quality of software in meeting defined functional requirements, as well as auditing the security features. Penalty clauses can be built into the contract for major deviations from defined requirements. As an example, a specialist financial auditor might be required to check that interest rates are being calculated as mandated by country regulations and in a consistent fashion. Security features can be checked by a specialist security services firm or an internal team with the required skills.

A serious risk for most organisations that outsource is the risk of failure of the ADM vendor. In large outsourcing contracts, this could potentially lead to the failure of the organisation itself. Appropriate controls should be designed and implemented to mitigate this risk. Software escrow is one mechanism. Ensure that software is escrowed17 such that software code and related documents are available in the case of ADM provider failure. Splitting ADM outsourcing across two or more vendors, as well as software escrow, provides a higher level of assurance.

Software licensing should be well documented and mutually agreed upon without any ambiguity. Software licensing requirements should also extend to address the ADM provider using licensed third-party components – if any – in the software. This sometimes gets missed out and leads to the organisation being liable for the practices of the ADM vendor. Contracts should mandate the use of licences for third-party tools and should also have clauses for indemnification in case of breach, whether deliberate or accidental.

A.14.2.9 System acceptance testing

The objective of this control is to have a structured process to commission systems into the production environment, thereby minimising risks related to availability, performance and security.

The process for system acceptance should ensure that there is a standard secure baseline defined for applications, including the security parameters to be configured, policy baselines for permissions, and the modules to be enabled. The application and underlying OS should be configured in line with a recognised hardening standard and tested in the test environment before release to production. A verification process should certify the system before it is moved to production; the application owner should ensure that their systems are certified before the move.
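To make the secure baseline concrete, here is a sketch of an acceptance check that compares a system's configuration against the organisation's baseline; the parameter names and values are invented examples:

```python
# Hypothetical secure baseline for applications being accepted into
# production. A real baseline would come from the hardening standard.
BASELINE = {
    "min_tls_version": "1.2",
    "password_min_length": 12,
    "audit_logging": True,
    "default_accounts_disabled": True,
}

def baseline_deviations(config: dict) -> dict:
    """List every parameter that deviates from the secure baseline."""
    return {
        key: {"expected": expected, "actual": config.get(key)}
        for key, expected in BASELINE.items()
        if config.get(key) != expected
    }
```

An empty result supports certification; any deviation must be remediated before the system is moved to production.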

System acceptance checklists should also ensure that a business continuity plan has been established for the application. All the relevant operational procedures should be documented and tested. Capacity and performance requirements should be verified. Relevant teams should be trained in the operation and use of the system. The system acceptance process has to ensure that all potential risks that could affect use of the system in production have been addressed.

A.14.3.1 Protection of test data

Development teams face an interesting challenge – they have to simulate data in their test environment and that data needs to be as close to production reality as possible. Test data should be close to production data in terms of both volumes and content. At times, development teams take the easy route of copying production data to the test set-up. The objective of this control is to protect test data and to ensure that production data is not compromised in the test environment.

Data scrubbing or scrambling is a key requirement of this control. Whenever production data has to be used for testing software systems, the data should be cleaned or scrambled beyond recognition. Sensitive fields, including customer names, dates of birth, social security numbers, email IDs and credit card numbers, should be replaced with dummy values before such data is released into the test environment. This requirement becomes an even more serious concern in an outsourced scenario where testing is carried out by a vendor. In such scenarios, production data containing sensitive customer information such as credit card numbers or dates of birth could be used for large-scale fraud by malicious individuals in the vendor team (and, in some countries, will constitute a prima facie breach of data protection legislation).
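A minimal scrubbing sketch; the field names, the reserved example domain and the well-known test card number are illustrative choices:

```python
import hashlib

# Dummy values used in place of live data. The .invalid domain can never
# receive mail, and 4111... is a widely published test card number.
DUMMY_EMAIL = "user{n}@example.invalid"
DUMMY_CARD = "4111111111111111"

def scrub_record(record: dict, n: int) -> dict:
    """Return a copy of the record safe for release into test."""
    cleaned = dict(record)
    if "email" in cleaned:
        cleaned["email"] = DUMMY_EMAIL.format(n=n)
    if "card_number" in cleaned:
        cleaned["card_number"] = DUMMY_CARD
    if "name" in cleaned:
        # One-way pseudonym so joins across tables still work in test.
        digest = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
        cleaned["name"] = "CUST-" + digest
    return cleaned
```

Using an unroutable dummy domain would have prevented the incident described below, where test runs emailed real customers.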

Here is an interesting incident that occurred in a bank. The bank was testing software for marketing campaigns. To do this, data was copied from the production system. There were errors in data scrubbing and this led to production data being used for testing. Data contained live customer email IDs. The test process triggered emails being sent to hundreds of customers with dummy promotional messages. It resulted in a public relations crisis that the bank could have lived without.

There are cases where it might be impossible to test without production data. In such a scenario, authorisation should be obtained from management for each such requirement. All controls used to protect data in the production environment should also be implemented in the test environment. Audit trails should capture each copy made of production data. Once the test is complete, production data should be deleted immediately from the test environment. This has to be closely co-ordinated between the development team, the application owner and the information security team. It is good practice to audit the test environment for retained production data, and to ensure its deletion, both periodically and after each test cycle that uses production data.

A.18.2.3 Technical compliance review

The aim of this control is to ensure that systems comply with technical standards defined by the organisation. From the perspective of application security, this implies that there should be regular audits on applications. The audits will ensure that planned controls have been implemented and that the application is not vulnerable.

Applications should be assessed periodically and the vulnerabilities detected should be mitigated in line with a defined schedule. Assessments should combine application penetration testing (especially for the web applications), review of user access rights and application security process assessments. Applications should be certified as secure before being commissioned for production. The ideal frequency for assessments depends on the business value of the applications: quarterly assessments for high value applications, six-monthly for medium value applications and annual assessments for low value applications. Add to the mix on-demand assessments for major changes to high and medium value applications.

A gap in most enterprises is the lack of follow-up and mitigation of detected vulnerabilities. A robust mitigation process tracks and resolves vulnerabilities detected during assessments. A sample mitigation matrix is provided in Figure 4:


Figure 4: A sample mitigation matrix

Due care must be taken to ensure that assessments and penetration testing do not lead to downtime from destructive tests.

Security metrics

ISO 27001:2013 specifies the need to evaluate the information security performance and the effectiveness of the ISMS. Organisations can decide what needs to be monitored and measured (including processes and controls). Security metrics are mechanisms to measure the effectiveness of security control implementation. They provide data that can be used to assess and decide on security investments, as well as assessing the effectiveness of the controls themselves and identifying opportunities for improving their effectiveness. Which areas is the organisation doing well in? Which controls require further investments in technology and people? We can get answers to these questions from security metrics. Analysing the trends of the metrics also enables an organisation to check if there are consistent improvements in security. ISO 27004:2009 is a standard for security metrics.

Previously, ISO 27001:2005 required the use of metrics and corresponding measurements in the Plan-Do-Check-Act (PDCA) cycle. In ISO 27001:2013, however, any similar process can be used according to preference. Company policy should dictate the need for metrics and also define the objectives. Metrics should be developed and tracking methods should be implemented. The organisation should also define when to measure the metrics, and analyse them for effectiveness. When action is taken, it should drive enhancements and fix the gaps.

One of the challenges an organisation faces is to identify a meaningful set of metrics. ISO27001 calls for a set of metrics that can be used to evaluate the performance and effectiveness of the ISMS. Metrics can be for individual controls or a group of controls, and can also be for ISMS processes that encompass varied sets of controls. A rule of thumb when identifying metrics is to ensure that each selected metric is Specific, Measurable, Actionable, Relevant and Timely (SMART).

We should also apply the 80/20 rule: focus on the small set of metrics (20%) that can provide most of the answers (80%). Identify metrics that can be measured easily: automated mechanisms for measurement make data collection easier. Metrics should be expressed as a number or percentage, to reduce subjectivity, and should provide data that gives insight into the action to be taken.

Measurement frequency should strike a balance between the effort of collection and the time-sensitivity of the data: very frequent collection is effort-intensive, while infrequent collection might make the data irrelevant. Organisations can start with a limited set of metrics and establish a sound process for measurement, reporting and security improvements. Once this process is established, it can be extended to a larger set of metrics.

Metrics for application security follow the same principles. A sample set of application security metrics is provided in the table below. The metrics represent a sample set and may not apply to all organisations. The objective here is to illustrate the use of metrics.

Table 4: Sample application security metrics

A.9.2.1, A.9.2.2 – User registration and de-registration; User access provisioning
  • % of unauthorised users in applications
  • % of applications with more than three administrator IDs
  • % of users with unauthorised privileges

A.9.2.4 – Management of secret authentication information of users
  • % of applications with default vendor passwords
  • % of applications with features to support password policy

A.9.2.5 – Review of user access rights
  • % of critical applications assessed for user access rights periodically

A.9.4.1 – Information access restriction
  • % of applications with documented roles and privileges

A.9.4.5 – Access control to program source code
  • % of production systems with program source libraries

A.12.1.4 – Separation of development, testing and operational environments
  • % of critical applications that have a separate test environment

A.12.4.1 – Event logging
  • % of critical applications that are monitored 24x7x365
  • % of applications that are covered for log analysis

A.14.1.1 – Information security requirements analysis and specification
  • % of applications with security requirements specified in SRS
  • % of COTS software analysed for security risks before procurement

A.14.1.2, A.14.1.3 – Securing application services on public networks; Protecting application services transactions
  • % of critical web applications configured with TLS 1.1 with strong ciphers and TLS 1.2

A.14.2.2 – System change control procedures
  • % of changes with formal risk assessment report
  • No. of downtime incidents due to uncontrolled changes

A.14.2.4 – Restrictions on changes to software packages
  • % of software development projects with customisation requirements clearly documented and approved by management

A.14.2.5 – Secure system engineering principles
  • Number of input validation vulnerabilities detected per application
  • Average number of input validation vulnerabilities across critical applications
  • Number of processing errors detected per application
  • Average number of processing errors detected across critical applications

A.14.2.7 – Outsourced development
  • % of outsourced software development contracts that specify security responsibilities of vendor
  • % of critical applications with software escrow
  • % of applications assessed for compliance to security requirements

A.14.2.9 – System acceptance testing
  • % of applications with secure baseline standards
  • % of applications with business continuity plans

A.14.3.1 – Protection of test data
  • % of applications with data scrubbing scripts or software
  • % of applications with production data in test environments as detected during the latest audit cycle

A.18.2.3 – Technical compliance review
  • % of applications that are subject to application security audits
  • % of Internet-facing web applications that are subject to periodic application penetration tests
  • Average cycle time to fix critical vulnerabilities exposed during assessments

ISO27001 requires that metrics are captured, analysed and reported using well-defined formats and processes. Reporting can use multiple mechanisms including balanced scorecards, visual dashboards to capture metrics and analyse trends, and graphical representation using green, orange and red traffic lights. Each organisation can decide on reporting formats; a visual representation is recommended for easier comprehension of metrics.

A sample format for capturing metrics is provided below:

Table 5: Format for capturing metrics

Metric definition: The metric, along with its scope, is captured here.

Objective: This field should capture the purpose of the metric, and the goals and objectives to be achieved using it.

Scoring method: This field captures the calculation for the metric. The scoring method can be a percentage, average, actual number or historical trend.

Collection method: This field captures the source of the metric or the methodology used to capture it. Collection sources can be internal and external audits, help desk, security products, user surveys and log analysis results. Organisations should automate the collection, distillation and analysis tasks wherever possible to reduce effort and increase data validity.

Collection frequency: This field captures the frequency of collection. Frequency can be real-time, daily, weekly, monthly or annually.

Collection responsibility: This field captures the owner for collection of the metric.

Indicators: This field captures the baselines for comparison. It should provide guidelines to determine whether the metric is meeting, below or above expectations.

Date of measurement, person: This field captures the date and the person who collected the metric.

Level of effectiveness: This field indicates the actual value of the metric.

Reporting to: This field captures the stakeholders who will view the metric, e.g. board, steering committee, head of IT, ISO, process owners.

Causes of non-achievement: This field captures the root cause analysis for not meeting the target indicators for the metric.

We will next take a sample metric for outsourced software development and complete it for the fields discussed previously. The approach and values in this example will change from organisation to organisation.

Table 6: Sample metric for outsourced software development

Metric definition

Outsourced software development contracts that specify the security responsibilities of the vendor. The scope of this metric covers all outsourced software development projects.

Objective

Minimise risks from outsourced software development.

Scoring method

The metric is calculated as a percentage.

A = number of outsourced development projects whose contracts specify the vendor's security responsibilities.

B = total number of outsourced development projects.

C = percentage of outsourced development projects whose contracts specify the vendor's security responsibilities.

C = (A / B) × 100

Collection method

Examine the outsourced development contracts for the security responsibilities defined by security policies and standards.

Collection frequency

Once every six months.

Collection responsibility

Information Security Management team.

Indicators

90%–100% – metric is above expectations.

80%–89% – metric meets expectations.

<80% – metric is below expectations and requires action for immediate improvement.

Date of measurement, person

14 January 2015. Assessed by John Cooper.

Level of effectiveness

91%

Reporting to

Board, Information Security Steering Committee.

Causes of non-achievement

–
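The scoring method and indicator bands in Table 6 can be computed mechanically. The sketch below assumes the bands are non-overlapping (90% and above is "above expectations"); the function names are our own:

```python
def outsourced_contract_metric(with_security_clauses, total_projects):
    """C = (A / B) * 100: the percentage of outsourced development
    projects whose contracts specify the vendor's security
    responsibilities."""
    if total_projects == 0:
        raise ValueError("no outsourced development projects in scope")
    return (with_security_clauses / total_projects) * 100

def classify(value):
    """Map the metric value onto the indicator bands from Table 6."""
    if value >= 90:
        return "above expectations"
    if value >= 80:
        return "meets expectations"
    return "below expectations - requires immediate improvement"

# Sample figures (ours): 91 of 100 contracts carry security clauses,
# giving the 91% level of effectiveness shown in Table 6.
c = outsourced_contract_metric(with_security_clauses=91, total_projects=100)
print(f"{c:.0f}% - {classify(c)}")  # 91% - above expectations
```

Encoding the bands in one place also makes it easy to reclassify historical measurements if the organisation later tightens its targets.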

It is worth noting that common maturity models such as OpenSAMM (the Open Software Assurance Maturity Model) can provide an efficient way to establish metrics. These frameworks can be used to reliably assess the maturity of current policies and procedures, as well as to build a strategy and demonstrate improvement following the implementation of changes.

In this chapter, we covered the important application security controls in ISO27001. We also looked at the concept of security metrics and some examples to get started on the implementation of security metrics.

In the chapters that follow we will look at the different types of application security attacks and how some of the controls we have discussed become useful. We will look at the practical aspects of securing applications: the solutions detail the implementation of the controls covered in this chapter, spanning a range of issues including technical controls for web application security, process controls for secure coding practices, and techniques for writing secure code in ASP.NET applications. These chapters are written with the objective of highlighting issues in real-world enterprise application development and deployment.

Bibliography

  • ISO/IEC 27001:2013, Information technology – Security techniques – Information security management systems – Requirements.
  • Ted Humphreys and Angelika Plate, Measuring the effectiveness of your ISMS implementations based on ISO/IEC 27001.
  • Andrew Jaquith, Security Metrics: Replacing Fear, Uncertainty, and Doubt.
  • ISO/IEC 27001 and 27002 implementation guidance and metrics, prepared by the international community of ISO27k implementers at www.ISO27001security.com.

13 Syslog is a service for logging data.

14 A hash total is a validation check in which an otherwise meaningless control total is calculated by adding together numbers (such as payroll or account numbers) associated with a set of records. The hash total is checked each time data are input, in order to ensure that no entry errors have been made.

15 Lightweight Directory Access Protocol is an Internet protocol used by applications to look up information in a central server.

16 ‘Hardening’ means removing known, potential vulnerabilities and ‘locking down’ configuration options in line with security specifications usually (but not always) prepared by the publisher of the OS. There are, for instance, specific hardening specifications available for Microsoft Server software.

17 ‘Escrow’ is a legal arrangement in which software source code is delivered to a third party (called an escrow agent) to be held in trust pending a contingency or the fulfilment of a condition or conditions in a contract. If and when the identified event occurs, the escrow agent will deliver the source code to the proper recipient.
