Chapter 2 identified the key IT risks arising from the use of Internet technologies. This chapter considers how these risks can be addressed through the implementation of certain IT solutions. The categories of technology considered are:
• communications;
• information and data;
• business continuity and disaster recovery;
• networks;
• identity and access management;
• outsourced IT;
• Web 2.0.
The principal technology solution for securing electronic communications is the application of cryptography, the technique of concealing the content of communications so that only authorised parties can read them.
The method of cryptography employed within electronic communications is encryption. Currently, the two most common drivers for the encryption of data generally are: first, the need for compliance, especially with regard to data protection requirements; and second, the need to protect data from loss, interference or exposure, most especially in the case of sensitive data. Data encryption is a risk management strategy.
Encryption disguises the content of a message; decryption reveals it. Encryption is performed by applying a numerical key that scrambles the content in such a way that it can only be deciphered with a corresponding key. The strength of the protection depends largely upon the length of the key; 128-bit encryption is a widely used industry standard. There are two types of encryption: symmetric encryption and asymmetric encryption.
Symmetric encryption
In this case, sender and recipient use the same key both to encrypt and to decrypt, so the parties must hold the same key in order to communicate with each other. If a sender wants to communicate with a large number of recipients, the same key must be distributed securely to, and held by, every recipient. This is hardly practical, and the more widely the key is shared, the weaker the confidentiality it provides.
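By way of illustration, the following minimal sketch uses the third-party Python cryptography package (an assumption of this example; it is not discussed in the text) to encrypt and decrypt a message with a single shared key. Fernet, the package's simple symmetric recipe, uses 128-bit AES keys, consistent with the industry standard noted above.

```python
# Minimal symmetric encryption sketch using the third-party
# 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the single shared key: both parties must hold it
cipher = Fernet(key)

token = cipher.encrypt(b"Meet at noon.")   # the sender encrypts with the key...
print(cipher.decrypt(token))               # ...the recipient decrypts with the same key
```

The practical weakness described above is visible here: the shared key must somehow reach every intended recipient without being exposed in transit.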
Asymmetric encryption
In the asymmetric model, two mathematically related keys are created. One key (the first set of numbers) is made public and is distributed as required; it is referred to as the public key. The second key (the second set of numbers) remains private. This is called the private key, and it is the only key capable of decrypting a message encrypted with the corresponding public key.
The procedure can be more easily understood when broken down into stages:
1 A wishes to send a confidential e-mail to B using asymmetric keys. A obtains B’s public key from a public key repository (see Certification and Registration Authorities page 168), which is a public storage facility.
2 A encrypts the e-mail with B’s public key (the first set of numbers) and sends the message to B.
3 B receives the e-mail and applies his or her private key (the second set of numbers) to it.
4 Because the private key corresponds mathematically to the public key used for encryption, B, and only B, is able to decrypt the e-mail.
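A minimal sketch of these four stages, again assuming the third-party Python cryptography package, is set out below. For brevity, B's key pair is generated within the script rather than obtained from a public key repository.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# B generates a key pair; the public half would normally sit in a repository
b_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
b_public = b_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Stage 2: A encrypts the message with B's public key
ciphertext = b_public.encrypt(b"Confidential terms attached.", oaep)

# Stages 3 and 4: only B's private key can recover the plaintext
print(b_private.decrypt(ciphertext, oaep))
```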
The use of this model, known as public key encryption, addresses two uncertainties of using e-mail: privacy is preserved, since only B can read the message; and integrity is supported, since a message that has been tampered with in transit will not normally decrypt intelligibly.
The framework supporting the creation and administration of public key encryption is known as public key infrastructure, commonly referred to as PKI.
However, PKI is not used exclusively. The size and cost of installing a fully operational PKI is beyond the skill and resources of many organisations. Another solution has been developed: the Secure Sockets Layer (SSL), whose modern successor is transport layer security (TLS). In this model, when a secure transaction is required, for example for the supply of confidential information over a website, the SSL protocol may be employed. The use of SSL technology is signified by a small padlock or key graphic appearing in the browser.
Technologically, the browser and the web server use public key encryption to agree a shared session key: the browser generates a secret value, encrypts it with the server's public key and sends it to the server, and both sides then derive the same session key from that secret. Third parties cannot intercept the session because they have no knowledge of the private key needed to recover the secret.
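The handshake itself is handled by standard libraries. The sketch below, using Python's built-in ssl module, opens a secure connection and reports the negotiated protocol and cipher; the host name is a placeholder.

```python
import socket
import ssl

ctx = ssl.create_default_context()       # verifies the server's certificate
with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())             # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())              # negotiated cipher suite and key length
```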
Secure electronic transaction (SET) is a model frequently used for the transfer of payment by credit card. The model differs from SSL in that, even at the merchant's site, the card number remains encrypted, so eliminating another stage of potential fraud. The encrypted card number is passed to the bank, where it is decrypted, and the merchant then receives payment.
VoIP is the transmission of voice communications over Internet protocol networks. While VoIP technology offers the advantage of lower cost and more flexibility for voice and data communications, it has the potential to expose organisations to a number of security vulnerabilities. These were considered earlier.
Like any other digital data, voice communications travel in packets and although some of the traditional security measures, such as firewalls, can be employed to protect voice data, VoIP technology also requires additional measures. Some of these protective measures can affect the quality of the service in terms of disruption or delay.
For instance, firewalls are programmed to protect data by governing the traffic entering systems. The effect of this process on VoIP technology can render the system almost inoperative. Other tools, such as network address translation, can protect internal addresses by concealing them but at the same time also make calls into an organisation difficult to manage.
Furthermore, firewalls on their own will not always afford protection to VoIP communications and although VoIP communications can be encrypted, the management logistics become complex and inevitably reduce the quality of service.
Certain standards, such as the session initiation protocol (SIP), have been adopted, but an overarching benchmark standard for VoIP security has yet to emerge. In the meantime, certain observations should be noted in respect of both VoIP and its security:
• VoIP infrastructures are complicated and, if possible, voice and data infrastructures should be run separately.
• Firewalls need to be especially configured to allow VoIP communications.
• Encryption of VoIP communications should be applied centrally.
• VoIP communications are more vulnerable than traditional telephone communications because of their involvement with data transfers.
• If mobile devices are used for VoIP communications, Wi-Fi protected access technologies should be adopted.
VoIP technology is a complex and potentially resource-intensive infrastructure requiring personnel skilled and experienced both in security and in voice and data transfer technology. While the cost and varied functionalities of VoIP are a potential attraction for organisations, these attributes come at a price. Voiptalk is an example of a VoIP solution (www.voiptalk.org).
Instant messaging is rapidly becoming one of the most popular forms of business communication, mainly because it enables communications to be delivered in real time among employees and customers. It can also run in parallel with e-mail as an internal communication technology.
In many ways, IM resembles e-mail in respect of the immediacy of the communication and its ability to be transmitted to a large number of recipients. It also suffers from similar problems in terms of IT insecurity, legal and compliance issues, and potential abuse by users.
In terms of security, data transmitted by IM is governed by the DPA in the same way as any other electronic data and, if not secured, it offers an entry point for viruses and other forms of malware.
These communications must, therefore, be treated in the same way as e-mail, with appropriate attention paid to security and confidentiality in addition to the deployment of protective measures that will ensure the network is not infiltrated by malicious code.
An example of one IM solution is MessageLabs’ Instant Messaging Security Service (www.messagelabs.com), an IM security control and management service which offers protection against new, emerging and converging threats. Incoming messages are scanned for viruses and malware and links to websites containing malware. Outgoing messages are matched against control and IM use policies. Suspicious messages are blocked and all messages are logged in MessageLabs’ secure infrastructure.
Instant messaging communications do not receive the high-profile exposure of e-mail, but in many ways IM resembles the state of e-mail a few years ago, when e-mail use was not governed closely and e-mail security was an embryonic concern for organisations. However, some commentators have expressed the view that in the near future, IM will overtake e-mail as the mainstream medium for electronic communications.
Digital signatures should be distinguished from electronic signatures. The latter refer to any method used to connect an individual’s identity with an electronic document, for example the sender’s name typed at the foot of an e-mail. Digital signatures refer to specific technology (asymmetric encryption) which binds an individual’s identity to an electronic record.
The characteristics of a digital signature are that, as only one person can create it, it offers protection from fraud or forgery; it provides both present and past confirmation of identity; and it can be easily generated and stored.
PKI may ensure secure communication between A and B and authenticate each party by reference to key technology. However, there remains the risk that A is not genuine and that an impostor is using A’s public key. Addressing this risk involves two more keys: a public signature key and a private signature key. When a document is digitally signed and transmitted, A’s private signature key is used to encrypt a digest (a condensed numerical summary) of the document being sent; this is attached to the document, together with a certificate from a certification authority confirming the identity of the sender. B obtains the corresponding public key from the certification authority and can use it to check the signature against the document.
Digital signatures are recognised by the Electronic Communications Act 2000 and are intended to have the same legal effect as a handwritten signature. The object is to identify unequivocally a party to the message in a document. A digest of the message, produced by an algorithm and encrypted with a private key tied to the user’s identity, is embedded in the message; anyone holding the sender’s public key can verify it. In addition to the digital signature, which confirms the origin of a communication, there is also a time stamp, which verifies the time of the communication and the time of receipt.
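A minimal sketch of signing and verification, again assuming the third-party Python cryptography package: the sign() call digests the message and binds the digest to the private key, and verify() raises an exception if either the message or the signature has been altered.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"I accept the terms of the agreement."

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = signing_key.sign(message, pss, hashes.SHA256())

# Verification uses the sender's PUBLIC key and raises
# InvalidSignature if the message or signature has been tampered with
signing_key.public_key().verify(signature, message, pss, hashes.SHA256())
print("signature verified")
```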
A recognised and trusted infrastructure is required to support PKI. Organisations known as certification and registration authorities fulfil various functions in this area. They are recognised under the Electronic Communications Act 2000.
A certification authority is responsible for the management of certificates for certificate users and may also be responsible for generating key pairs for the encryption process. Its primary role is as an independent and trusted authority for authenticating the relationship between individuals and their public keys.
A registration authority is a representative of a certification authority and can undertake some of its management functions. These might include registration of parties for entitlement to certificates, or administrative functions undertaken by a certification authority such as updating certificates.
The certificate bears certain authenticating information. It contains the name of the certification authority and the public key holder. It also contains the holder’s public key and is digitally signed with the certification authority’s private key. Various types of information are included in the certificate; for example, the period of its validity, the range of transactions for which the certificate is valid and the identity and description of the certificate holder.
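The fields listed above can be seen in code. The sketch below builds a self-signed certificate with the third-party Python cryptography package; in a genuine PKI the certificate would be signed with the certification authority's private key rather than the holder's own.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

holder_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"A N Example")])

certificate = (
    x509.CertificateBuilder()
    .subject_name(name)                    # identity of the certificate holder
    .issuer_name(name)                     # self-signed, so issuer == subject
    .public_key(holder_key.public_key())   # the holder's public key
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow()
                     + datetime.timedelta(days=365))   # period of validity
    .sign(holder_key, hashes.SHA256())     # digitally signed by the issuer
)
print(certificate.subject, certificate.not_valid_after)
```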
In order to understand how the procedure works, it is helpful to return to A and B. If A wishes to acquire a public key certificate, application is made to a certification authority. When A sends a message to B, the certificate is also sent and this confirms to B that A is genuine and that the certificate is current.
The certification process resolves two further uncertainties. A’s authenticity is confirmed because A’s public key is validated by a certification authority; B, therefore, knows that a message purporting to come from A does, in fact, do so. Repudiation is prevented, because the certification of A’s public key means that B can be satisfied that A is the actual person from whom the communication originates; the contract cannot, therefore, be repudiated, at least not without the risk of some legal liability on the part of A.
Certification software may be installed ‘in-house’ where an enterprise wants to control its own use of public key encryption and take sole responsibility for the issue and use of certificates in its operations. As well as securing external communications, in-house certification has the potential to secure internal privacy. An example of a PKI solution, beTRUSTed, can be seen on the website of Entrust (www.entrust.com).
Smaller enterprises are unlikely to operate their own certification facilities, because a full PKI can be too complicated for small businesses and consumers. However, external certification authorities can offer PKI services to such communities without the burden of the infrastructure.
The cost of an in-house PKI will be significant in terms of the finance, skills, organisation and administration required. Encryption will be employed almost exclusively for the benefit of clients, so organisations must weigh the value of those clients against the cost of importing the technology and training. There is also the question of legal liability arising in favour of any party who, in the ordinary course of business, relies upon a certificate which is subsequently found to be defective.
Digital certificates are published in directories. These hold a record of certificate notifications, revocations and suspensions. The digital certificates contain individuals’ public keys. An individual proposing to send a message refers to the directory for the recipient’s public key and digital certificate.
Encryption keys need careful storage. They should not, for instance, be left exposed on the organisation’s web server. Organisations providing such services include Thales (www.thalesgroup.com) of which the original providers, nCipher, are now part. The Regulation of Investigatory Powers Act 2000 (see Chapter 9) enables authorities to require encrypted messages to be decrypted in certain circumstances. For this reason also, the secure storage of encryption keys is essential.
As an alternative to PKI, some communities rely on trust and good faith in the use of encryption technology for those with whom they wish to communicate securely. This avoids the complex administrative framework of certificated PKI, as well as the cost and impracticality for small organisations. An example of a software product that serves such ‘communities of trust’ is Pretty Good Privacy (www.pgp.com), now part of Symantec.
Some suppliers provide Cloud-based encryption services. In this case, encryption management is performed by a supplier organisation which provides encryption software, manages the encryption keys, administers encryption policies and provides records, reporting and administrative services on an on-demand basis. An example of this is the service offered by Proofpoint (www.proofpoint.com).
At present, the industry is self-regulated. tScheme Ltd (www.tscheme.org) is a United Kingdom body whose membership comprises leading business, professional and commercial organisations. It is a self-regulating, non-statutory body established for the approval of organisations operating electronic trust services. Such organisations are called trusted service providers.
Certification and registration authorities, and trusted service providers, should be distinguished from PKI vendors, which, it is suggested, are organisations selling PKI hardware or software infrastructures. They may be organisations that provide PKI for authentication within a company that is only used internally, or part of a complete solution. They might also provide certification and registration authority services.
The PKI Challenge, launched by the European Forum for Electronic Business (EEMA), ran from January 2001 to April 2003 and introduced solutions to PKI interoperability problems arising throughout Europe.
Concerns have been expressed, and have yet to be fully resolved, over the potential for a digital signature to fall into the hands of a third party, and over the consequent transfer of the risk of fraud to the owner of the signature.
Information and data must be properly stored with access controlled according to the value and sensitivity of the information at risk, and the qualifications and entitlement of personnel concerned. Over networks, in particular, it is necessary to identify the information that needs to be secured and ensure that access to the information is carefully controlled. Information must be authentic, valid and pure; that is to say, free from distortion through tampering or attack.
Information security needs consideration at various levels. This will depend upon the nature, size and structure of the enterprise. There are three aspects of security:
• General defensive capability: this means the general awareness of the potential for external and internal attacks on the organisation’s network.
• Defence in depth: this involves security at desktop level, security at network level and security throughout the system.
• Vigilance: this involves staff training and regular vulnerability assessments.
This approach involves awareness and participation by all employees. Security vigilance is particularly important. A security breach may just as easily occur at junior or subordinate level as it might at senior level. No matter how sophisticated the systems, nor how well briefed those at senior management level, if employees are either insufficiently aware or inadequately trained, the security system will not be effective.
A typical example of one of the most common data security incidents surrounds the use of laptops and portable media. Frequently, these are left in public places by accident, but may also be the subject of theft.
Portable media should always be encrypted as a basic security measure, ideally to the requirements of Federal Information Processing Standard (FIPS) 140-2, issued by the US National Institute of Standards and Technology. This standard prescribes four levels of security, level one being the most basic and level four the most advanced. The standard is supported by a testing and validation process, although it does not prescribe the level of security required for any particular application.
There must be a framework governing the environment in which the information is stored and accessed. This applies both to those inside and outside the organisation. An information security framework can be likened to a hard outer shell, inside which is an internal shell that protects the organisation’s data.
The electronic environment is collaborative and new alliances may involve outsiders having access to internal systems. In some professional services, for instance, IT systems allow clients access to organisations’ internal systems to track the progress of ongoing instructions or projects, or to obtain up-to-date billing information.
Firewalls are hardware or software systems which guard access to a network, block malicious files and prevent unwanted intrusion. They are the principal defence mechanism for the preservation of information security. The Internet protocol breaks data down into information packets, and routers direct those packets to the correct destination; once an organisation decides what kind of material is to be allowed into and out of the network, a firewall is configured to filter the packets accordingly.
Firewalls protect information entering and leaving the network, and within the network itself. There should be a firewall between the web server and the internal secure network. The web server sits in isolation, with firewalls on guard externally and internally. This isolation area is sometimes referred to as a demilitarised zone (DMZ). While information exchanged with the outside world needs careful control, some internal information might need to be sent externally, requiring conversion into a form that will not compromise the internal secure network.
Depending upon the configuration of the firewall, access can be allowed to certain internal information for certain external parties, and to internal parties for certain external information. If the firewall has been correctly programmed, only ‘permitted party’ communications should be able to penetrate the firewall and pass through. If firewalls are not properly configured, the risk to the organisation is almost worse than if there were no firewall – because of the temptation to assume that the mere existence of the firewall is the complete solution to information insecurity.
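The principle of permitting only defined traffic and denying everything else (‘default deny’) can be illustrated with a toy rule matcher. The sketch below is illustrative logic only, not a real firewall; the rules and ports are assumptions for the example.

```python
# Illustrative only: a toy 'default deny' packet filter.
RULES = [
    ("inbound", 443, "allow"),     # HTTPS to the public web server
    ("outbound", 25, "allow"),     # outgoing mail
]

def filter_packet(direction, dst_port):
    for rule_direction, rule_port, action in RULES:
        if (rule_direction, rule_port) == (direction, dst_port):
            return action
    return "deny"                  # anything not explicitly permitted is blocked

print(filter_packet("inbound", 443))   # allow
print(filter_packet("inbound", 23))    # deny: telnet is not a permitted service
```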
In some situations, firewall protection may be limited. For instance, e-mails with attachments that masquerade as program updates but which, in fact, contain viruses may not be prevented from entry, despite the presence of a firewall. Other instances might include the transfer of files or software infected with a virus. There are many different viruses, and numerous new viruses being devised and discovered regularly.
A firewall operates as a sole gateway for the purposes of identifying unauthorised intrusion and protecting information. However, in the case of a network, each access point is potentially vulnerable to attack and must, therefore, be protected.
Although firewalls can be operationally complex, there are a number of benefits beyond their security function. The monitoring and auditing procedures involved in firewall administration can provide valuable management data.
At its simplest, a firewall rule either allows access completely or prevents it entirely. The flexibility of the electronic business model demands a position somewhere between the two extremes. One approach is a combination of firewalls through which the party seeking entry to the system is able to pass until reaching the specific level of information to which access is allowed.
Using this formula, the ‘black and white’ approach to the deployment of firewalls is avoided and a degree of flexibility can be introduced which can still be controlled by the organisation seeking to protect particularly sensitive information. Firewalls can be deployed for both intranets and extranets, effectively creating a virtual enterprise network.
Firewalls, however, have a number of limitations. Sophisticated threats, increasing traffic volumes and the cost and time of management are all significant challenges. Stonesoft (www.stonesoft.com) has developed its Stonegate ‘Next Generation Firewall’, which integrates the functions of security, availability, scalability and management. Details can be found in its white paper: The Evolution to the Next Generation Firewall.
Passwords are commonly used but have a number of weaknesses that were identified earlier. Where their use is widespread and viewed favourably within an organisation, it is sensible for all users to adopt a common approach.
If, therefore, it is management policy that passwords should be used, there are some useful guidelines to observe. Passwords should be as long as permitted and employ as wide a range of characters as possible. Pass-phrases are more effective still. Useful tips when employing these include the following (a short sketch of random generation follows the list):
• the use of upper and lower cases randomly;
• the mixing of numbers and letters;
• deliberate misspelling of words;
• regular changes of the password or pass-phrase;
• avoidance of words with personal associations;
• avoidance of passwords recorded in writing.
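A minimal sketch of random generation, using Python's standard secrets module, which draws from a cryptographically secure random source (the ordinary random module is predictable and unsuitable for this purpose):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    # secrets.choice draws from a cryptographically secure random source
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password(20))   # a different mixed-character password on every run
```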
Increasingly, organisations are moving towards technology-driven passwords in the form of keys requiring two-factor authentication or the generation of random passwords which expire by the effluxion of time and require renewal. Some alternative password solutions involve biometrics.
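Passwords which expire by the effluxion of time are commonly generated along the lines of the time-based one-time password (TOTP) algorithm of RFC 6238. The sketch below is a simplified illustration using only Python's standard library; the shared secret is hard-coded for demonstration and would in practice be provisioned per user or device.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # HMAC the count of elapsed time periods with the shared secret (RFC 6238)
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # a six-digit code that expires every 30 seconds
```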
With the proliferation of viruses, and the potential damage that can be caused, the installation of anti-virus software is essential to protect information. While firewalls offer some protection, in certain cases viruses can still infiltrate under the guise of acceptable software, containing a damaging element which is released once inside the system.
Anti-virus solutions may perform some or all of a number of different functions, for instance they may:
• act as a device for warning of suspicious activity;
• look for malicious code;
• warn of unexpected system changes;
• act as an agent to identify viral signatures.
The extent to which an organisation achieves these objectives as part of its virus defence strategy depends upon its preferences and its risk assessment. It seems sensible, however, to install software that will as far as possible provide most, if not all, of these functions.
There are numerous proprietary anti-virus software products, but anti-virus software will be ineffective unless selected and installed appropriately for the organisation it is intended to serve.
The installation of virus protection software is the responsibility of those with appropriate skills in the IT department. Anti-virus software will never be a complete solution because of the number of new viruses that are constantly being created.
Returning to the risk assessment principles described earlier, the real issue to address is how to reduce the threat to an acceptable minimum and to minimise the possible damage.
Firewalls control access to systems and information. Anti-virus software attempts to prevent damage to systems and information. In an attempt to avoid the consequences of hacking activities, another mechanism is available: intrusion detection systems (IDS) and intrusion prevention systems (IPS). The aim of these systems is to identify unauthorised use, both internally and externally. The idea behind the solution is that the intruder’s course of action will be readily distinguishable from that of an authorised entrant to the system.
Intrusion detection and prevention systems can seek out known problems, such as defective passwords, and also act either as scanners, tracing events in the form of abnormal activity as they occur, or as agents for detecting hostile activity. These systems check internal network operations and traffic, enabling a record to be kept of attacks and other illicit activity, as well as of the origins of the attacks themselves.
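As a simplified illustration of distinguishing abnormal from normal activity, the sketch below flags source addresses responsible for an unusual number of failed log-ins. Real IDS products apply far richer signatures and heuristics; the threshold and log format here are assumptions for the example.

```python
from collections import Counter

THRESHOLD = 5   # failed attempts before an address is flagged (illustrative value)

def flag_suspects(auth_events):
    """auth_events: iterable of (source_ip, outcome) pairs from an audit log."""
    failures = Counter(ip for ip, outcome in auth_events if outcome == "fail")
    return sorted(ip for ip, count in failures.items() if count >= THRESHOLD)

log = [("10.0.0.7", "fail")] * 6 + [("10.0.0.9", "ok"), ("10.0.0.9", "fail")]
print(flag_suspects(log))   # ['10.0.0.7'] - the repeated failures stand out
```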
In assessing whether these systems are appropriate, organisations should consider the resources to be protected and the importance of those resources to the operation of the organisation and its clients or customers. As these systems require skilled and knowledgeable personnel to be deployed to manage the installation, management issues also arise.
Returning to risk assessment principles, there is an obvious need to balance cost against the likelihood of risk. A large organisation with a vast network and numerous access points, with many at remote locations, will be far more vulnerable than a small organisation with only two or three networked computers. Resources are also an issue. The monitoring and analysis of incidents that the system detects will be labour intensive and demand personnel time.
IPSs block or prevent activities identified by IDSs while allowing legitimate traffic to pass; the technology may be regarded as an extension of IDS technology. An IPS should be programmed to minimise the rate of false positives, that is, legitimate traffic wrongly identified as hostile and blocked.
Penetration testing is a method of establishing the quality of security provided within a network or system. A number of tools exist that are specifically designed to identify weaknesses and vulnerabilities within a network or a system. Specialists are usually employed to undertake penetration testing if the results are to be relied upon for expenditure to increase the quality and degree of security.
External penetration testing involves probing a site and checking, for example, the security and configuration of firewalls. Internal penetration testing involves examining internal activities and is an important checking mechanism, particularly since vulnerability most frequently arises from internal sources. An example might be the need to check any patterns of use in connection with newsgroups to which there is an outside connection.
Password testing involves an audit of password application and use and checks the quality of management surrounding the employment of passwords, both internally and externally.
Database security is a highly specialised area. However, observing certain high-level principles can significantly help in the safe preservation of data. The following steps are suggested (a minimal sketch of the encryption step follows the list):
• identify data categories;
• assess the risks attaching to each category;
• implement an identity and access management strategy;
• implement measures (such as encryption) to address and manage risks;
• regularly monitor, audit and review.
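As a minimal sketch of the encryption measure, the following stores a sensitive field in encrypted form, assuming the third-party Python cryptography package alongside the standard sqlite3 module; in practice the key would be held in a key-management system, never in the script itself.

```python
import sqlite3
from cryptography.fernet import Fernet   # third-party: pip install cryptography

key = Fernet.generate_key()    # in practice, held in a key-management system
cipher = Fernet(key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, ni_number BLOB)")
conn.execute("INSERT INTO clients (ni_number) VALUES (?)",
             (cipher.encrypt(b"QQ123456C"),))   # stored only in encrypted form

row = conn.execute("SELECT ni_number FROM clients").fetchone()
print(cipher.decrypt(row[0]))   # b'QQ123456C' - readable only with the key
```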
The emergence of the global market as a result of the development of Internet technologies has had a significant impact on the infrastructure of many organisations. IT networks that were once relatively confined are now frequently extended over global locations, both nationally and internationally.
Pressure arises on network management, not only from geographical spread, but also from the employment of a wide variety of portable and mobile devices on which vast amounts of confidential data can be stored and transferred seamlessly from system to system. Examples of these devices include: laptop computers; mobile devices, such as Blackberrys and smartphones; iPods; memory sticks and CD-ROMs. They are most frequently employed by personnel who may be remotely located and from time to time need to transfer data from such devices on to corporate networks.
The expression given to this artificial and uncoordinated extension of networks is ‘deperimeterisation’. This means that the traditional perimeter of a network (historically, the office desktop estate) is stretched by a proliferation of other devices, each of which pushes the boundary that must be defended further outwards.
The general expression for the securing of various entry points to an extended network is ‘end-point security’. End-point security involves securing the particular device at the perimeter of the network – for instance mobile phones, laptop computers or memory sticks. As every network is different, this raises different security issues, in turn presenting significant problems for network administrators.
There are various solution providers who can supply software to address end-point security issues – see, for instance, Symantec’s suite of end-point protection software for servers, desktops, laptops and mobile devices, which also includes network access controls and end-point encryption (www.symantec.com).
Reported examples abound of the loss of portable devices loaded with vast amounts of data. At the same time, the flexibility of portable devices enables dishonest personnel to store and remove large quantities of corporate data without authority and frequently without any danger of discovery until after the event.
Real challenges exist for organisations wishing to protect themselves from these vulnerabilities: enforcing use policies, securing data on portable devices and preventing data loss, while at the same time maintaining user productivity. While firewalls may protect traditional networks, the range and complexity of portable devices place them beyond the protection of firewalls alone.
One strategy adopted to address the problems posed by portable devices has been to create special users or groups who are assigned specific encryption keys and are authorised with access to the same data. In a white paper entitled Extending enterprise security beyond the perimeter (2008), Secuware (www.secuware.com) explains how its C2K solution creates computer profiles to govern the use of specific devices, and user profiles that authorise access to data by specific personnel. Devices or users which do not conform to prescribed profiles will be denied access to protected data. In this way, it is claimed that the organisation can control what device and encryption procedure is to be adopted and which users or groups can access the data.
Increasing numbers of organisations are now deploying wireless networks for internal use by personnel and for use by external sources, such as clients and strategic allies.
The principal vulnerability of wireless networks lies in the ability of hackers to gain access to the network from rogue access points which exist without the knowledge of the wireless administrator. Frequently, these access points may be created by the hackers themselves.
Some measures can be taken to address wireless network security. Routers should be programmed to the WPA2 standard, currently the highest level of security available. Passwords should be complex, difficult to decipher and changed regularly. Encryption software, such as PGP and TrueCrypt, should be employed where possible.
Various solutions are available to alert administrators to the possibility of unauthorised attempts to access or penetrate a wireless network. These tools are generically referred to as network scanners or sniffers, for instance NetStumbler (www.netstumbler.com). The methods they employ involve identifying and alerting the administrator to dubious or inadequately protected access points. Network scanners also perform other tasks, including identifying hardware and system vulnerabilities and interference.
Some devices, such as Bluetooth, Blackberry and certain PDAs, have built-in security mechanisms, in which case a check should be made to ensure network compatibility. However, these security measures do not necessarily offer protection against viruses and other malware for which traditional anti-virus protection should be obtained.
Earlier, a number of risks were identified as arising from personnel accessing systems and networks from remote locations while networked to the organisation. One solution is the creation of what has become known as a ‘virtual private network (VPN)’. The demand for VPNs springs from the increasingly global spread of organisations and the consequent need to connect employees to the organisation. A VPN has a flexibility of application in a number of different environments that, in many respects, makes it an attractive proposition for firms whose offices are geographically spread, both nationally and internationally. A VPN may be used in connection with the firm’s intranet, extranet and remote-access employees. Examples are the solutions offered by Novell (www.novell.com).
The principal protections comprise three elements: first, the installation of a personal firewall on the remote device; second, the installation of regularly updated anti-virus software; and third, the installation of encryption software.
An intranet VPN allows secure communication among internal departments, branch offices and the principal office of an organisation. The important features are that sensitive internal communications can be protected by means of encryption; information and documents can be securely stored; and the network can be constructed to accommodate growing numbers of users and offices.
An extranet VPN connects the organisation with strategic allies, clients and suppliers, as well as other agencies. The attraction of this technology is that the employment of commonly recognised security standards by all those linked within the extranet generates confidence in the security of communication and information passing between them. Traffic using the extranet can be controlled and monitored.
A remote access VPN connects the firm’s network with remote mobile employees. The important feature of this technology is that strong authentication processes can be introduced to ensure that there are no issues as to the identity of a remotely placed employee communicating with the organisation. Management controls can be centrally operated and, where necessary, additional employees can be accommodated.
A VPN creates what are sometimes termed secure tunnels between networks connected over the Internet. To this secure tunnel is added a capability for authentication and encryption. The most common methods of authentication are passwords; uncertificated public and private key encryption; and certificated public/private key encryption.
The competent management of both the identity of personnel seeking to access a network and the categories of data to which access may be authorised is critical to the adequate protection of an organisation’s confidential data. Identity and access management (IAM) is now a significant governance issue in terms of information security.
The objectives of a formal IAM scheme are to provide a straightforward, transparent and methodical approach to the management of personnel and the data to which they may have access. Not only should this approach bring about efficiencies in terms of improved service, better organisation and lower costs; more importantly, the level of control that an organisation maintains over its data is an important governance principle in the context of risk management and regulatory compliance.
Security concerns are addressed by automating and simplifying the categories of personnel and the classifications of data to which they are permitted access. Compliance concerns are addressed by transparency and clear allocation of permissions to named users or groups of users. Risk management concerns are addressed through the comprehensive application of an IAM scheme at all levels of the organisation. Improved business performance is achieved through a clear understanding of the roles, responsibilities and accountabilities of users or groups of users in respect of the data to which access is authorised.
It is common for IAM schemes to include a category for privileged users – privileged user access – for instance, by function (system/database administrator) or seniority (CEO, CIO, or COO). Some commentators suggest that overuse of this category risks the abuse of IAM schemes.
CA IT Management Software Solutions has commissioned a helpful independent report on the subject, Privileged User Management, from Quocirca Ltd. CA offers two IT solutions: CA Security Management and CA Access Control (see www.ca.com).
The process for establishing an IAM framework will vary between organisations because each organisation and its respective data protection requirements are different. However, some general principles can be identified. The organisation’s policy document should define:
• the governance principles on which the IAM framework is based, its purpose and its intended objectives;
• the legal, regulatory and compliance provisions it is designed to address and any industry codes and standards that support compliance;
• procedures to be adopted for compliance with the organisation’s IAM scheme;
• the categories of users and groups of users who are subject to the IAM scheme;
• any technologies deployed within the organisation that support compliance with the scheme.
Various solutions are available for developing IAM schemes. Microsoft® offers its Identity and Access Management series, which provides a template for an IAM scheme (http://technet.microsoft.com/en-us/library/cc162924.aspx).
Organisations should monitor and control potentially damaging e-mail. The ideal solution is an automated system that monitors e-mail, and detects and quarantines questionable messages for further review. Conventional keyword searches do not identify underlying meaning, but simply search for words or phrases that indicate unacceptable content. The result can be that numerous meaningless search results are generated. An ability to discern between questionable and innocent messages is required.
Software that is intended to monitor and control e-mail content should have certain features. For instance, there should be mechanisms to prevent the leakage of confidential documents and data, and to filter e-mail with inappropriate content by preventing it from entering or leaving the organisation. Linked to this is the need to block attachments with particular file types and to limit the size of attached documents that can be sent or received. There should also be a capability to add legal disclaimers to outbound mail, or corporate messages to inbound mail. Some software is able to block and quarantine, or even delete e-mail from specific sites. It should be possible to remove quarantined e-mail to a quarantine area for further review.
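A toy sketch of the content and attachment controls described above is set out below. Production gateways apply far more sophisticated analysis, but the decision structure (block, quarantine or deliver) is similar; the file types, size limit and key words are assumptions for the example.

```python
import re

BLOCKED_EXTENSIONS = {".exe", ".scr", ".vbs"}     # illustrative file types
MAX_ATTACHMENT_BYTES = 10 * 1024 * 1024           # illustrative size limit
SENSITIVE = re.compile(r"\b(confidential|client account)\b", re.IGNORECASE)

def review(body, attachments):
    """attachments: list of (filename, size_in_bytes) pairs."""
    for name, size in attachments:
        if any(name.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return "block"          # disallowed attachment type
        if size > MAX_ATTACHMENT_BYTES:
            return "block"          # attachment too large
    if SENSITIVE.search(body):
        return "quarantine"         # held in the quarantine area for review
    return "deliver"

print(review("Quarterly client account summary", [("report.pdf", 80000)]))
# -> 'quarantine'
```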
Access control involves authorising who can send e-mail and to whom. Software is available to enable controls to be applied to individuals, groups or entire messaging domains, authorising communications at specifically selected levels. By combining this with content control, it becomes possible to direct who sends what to whom and, in so doing, prevents the loss or unauthorised distribution of secure information and supports IAM policies.
The ability to control how sensitive information is handled is an essential feature of preserving information security. It is also a compliance requirement of the DPA, under which data defined as ‘sensitive’ requires special protection. Messages can be given security labels, and software is available to perform checks that will denote the sensitivity of the information being conveyed.
Examples of suppliers of software enabling such control of e-mail content include Websense (www.websense.com) and Omniquad (www.omniquad.com).
There are four principal issues with which an organisation will be concerned in respect of access to, and use of, information obtained from the Internet in the workplace. The issues to be addressed are: control of access generally to the World Wide Web; control of access to specific sites which, if accessed, may expose the organisation to legal and professional sanctions; control over the downloading of material; and monitoring those with access to the Internet.
There are a number of software solutions able to detect and control inappropriate use of the Internet in the workplace. It is important to appreciate, however, that introducing technology to address these types of risk also requires the organisation to have in place appropriate guidelines and policies for their use. Because the software allows monitoring and surveillance of users in the workplace, the use of this type of software also has legal compliance implications. These are discussed in the next chapter. The question of suitable policies to both guide and control employees in this respect is considered in Chapter 10. Websense also offers solutions to address these issues. An example of this type of solution is offered by Pearl Software (www.pearlsw.com).
As online business develops, consumers expect organisations to accept payment for their services over the Internet. Several technology solutions are now emerging that protect information given by online purchasers of services and enable such transactions to take place in a secure environment. The dominant payment methods to date are credit and debit cards.
Credit cards are the most familiar method of payment and have a number of advantages. They are easy to use for both card merchant and consumer. Protection is available for the client under Section 75 of the Consumer Credit Act 1974, which gives the consumer certain rights of action against either the supplier or the card issuer in certain circumstances where loss has been incurred. There is also a clear audit trail of transactions performed.
The two principal technologies employed in connection with the use of credit and debit cards over the Internet are SSL and secure electronic transaction (SET). The processes underlying these technologies were discussed earlier.
Most banks are willing to offer an organisation a merchant account to enable it to accept credit cards. If an organisation proposes to provide services electronically, it seems sensible to consider providing facilities for payment by credit card.
However, a number of questions need to be considered by an organisation proposing to provide online payment facilities for consumers. Although technology solutions are available for online payment, various operational issues arise. Are there sufficient personnel resources available to support the operation? Are there sufficient facilities to provide education, training, supervision and monitoring? Can the costs of establishing a mechanism for introducing an electronic payment system be justified on a cost-benefit analysis in terms of the fees payable, the technology systems to be installed and operated, and the anticipated marketing benefits?
One useful barometer is whether or not the organisation is already offering any form of electronic payment facilities. If so, it may conveniently carry on. If not, the organisation needs to consider whether it has a sufficiently large number of clients who would find the service of value to justify the cost of setting up and operating the facility.
In respect of invoicing or billing procedures, software is now available that will deal with invoice completion and delivery, whether to clients or suppliers. The software is developed to standards set by the Business and Accounting Software Developers Association (www.basda.org).
There are a number of technology solutions available for the adoption and management of online payment systems including:
• Cybersource (www.cybersource.com);
• Netbanx (www.netbanx.com);
• PayPal (www.paypal.com);
• RBSWorldPay (www.rbsworldpay.com).
The selection of solution providers listed above gives an idea of the types of electronic payment products and services accessible to organisations of all sizes and types. A visit to their websites provides full details of the available services and the terms and conditions that apply.
The PCI Security Alliance provides services to members of the payment card industry, for instance retailers, e-commerce organisations and organisations that must achieve compliance with the PCI Data Security Standard.
The PCI Data Security Standard has been developed by the PCI Security Standards Council and provides controls around data to address potential credit card fraud (see www.pcisecuritystandards.org). Essentially, the standard involves the taking of certain steps to protect the payment system operated by a provider of goods and services. The principles addressed by it concern:
• secure network infrastructure;
• protection of confidential cardholder information;
• adequate protective measures;
• access management procedures;
• rigorous testing and monitoring;
• an information security policy.
The standard specifies within these categories the types of measures needed in order to demonstrate compliance. Compliance with the standard is invariably a contractual requirement carrying legal responsibilities.
The prospect of attack, whether by malicious code in the form of a virus or from external or internal hackers, raises the possibility of a ‘denial-of-service’ incident should the deployed firewalls or anti-virus software fail for some reason. ‘Spamming’ is a potential threat to business continuity because, when sent in large volumes, spam e-mail can overload systems and networks to the extent that they are unable to function.
Software is available to combat spamming. One example is the Mail Abuse Prevention System (www.mail-abuse.com), owned by Trend Micro.
The need for business continuity planning arises from an organisation’s responsibility to its clients and the need to comply with requirements imposed by legislation, practice rules, regulators and insurers. Such planning requires resources in terms of adequacy of finance and the appointment of staff with appropriate skills. As a protective, as opposed to an income-producing, strategy, it is not likely to be popular. It is, therefore, important that the risk assessment and management strategy evaluate the proportionality of the cost against the risk to be addressed.
In formulating a risk policy, there are some useful questions to ask. What would happen to the organisation if there was a significant interruption to the functioning of its IT and Internet technology systems? Have the various IT functions been assessed for risk and prioritised? Are there any documented plans in place for testing and revising? Are staff aware of any plan in respect of their roles and responsibilities?
In terms of Internet technologies, there are likely to be implications for business continuity in three areas: the functioning of the network or systems, both internally and externally; access to, and preservation of, data; and the availability of key personnel.
Networks and systems
Most systems will be networked or distributed. In the event of discontinuity, the question to be considered is whether it will be possible to reconstruct the network to the original configuration within a reasonable time and at a proportionate cost. Many organisations use back-up tapes to support their systems. The question arises as to their adequacy, their management and the reliability of their continuing operation. Are plans in place, tested and reviewed, and has adequate time been allowed for training?
The ideal enterprise has a business continuity plan to which it can devote appropriate resources in terms of cost and time, supported by suitably skilled employees to test and implement a suitable recovery plan. However, many organisations do not have the time, financial resources or personnel to take control of business continuity plans. In many cases, the most practical solution is to outsource the responsibility to an expert hosting service.
Outsourcing involves sharing the responsibility between the supplier and the organisation. The supplier needs an appropriate brief as to the organisation’s requirements and should be able to assist with risk assessment, planning and management by providing the right skills within defined agreed time limits. Outsourcing is a continuous process and the organisation will need to allocate resources on an ongoing basis as its use of the Internet technologies develops.
Data
Data should be backed up on a regular, usually daily, basis. There is a strong argument for identifying a facility that will back up data off-site in the event of a physical or technological threat. It is important to remember that in addition to business continuity issues, the safe storage and preservation of data is also an obligation under the DPA.
Adequate data storage is essential for business continuity. This is all the more important because data is scattered all over an organisation in servers, desktop computers, laptop computers, memory sticks, mobile phones, iPods, CD-ROMs and DVDs. Furthermore, there are numerous different types of data held, for instance, about personnel, clients and business partners.
Three technology approaches are involved:
1 Network attached storage (NAS): this integrates with a local area network.
2 Storage area network (SAN): this can be scaled for use in terms of performance and capacity.
3 Direct attached storage (DAS): this comprises a dedicated server with its own storage resource.
For organisations unwilling to risk installing solutions that have not been developed to recognised standards, the alternative option is to outsource the function. Some issues in respect of outsourcing are considered in the next section.
Data back-up requires careful thought. Although a technology issue, it is also a management issue. Typical issues which an organisation should address include ensuring that data can be retrieved with the minimum of delay; the need for a data storage infrastructure that caters for increased business activity; ensuring that suitable encryption solutions are applied to all stored data; ensuring that adequate reporting, auditing and monitoring processes are in place; and obtaining the services of a suitable supplier with an acceptable reputation in the market.
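A minimal sketch of a timestamped back-up routine, using only Python's standard library, is set out below. Encryption of the archive, noted above as essential, is omitted for brevity and would be applied before the archive leaves the machine; the paths shown are illustrative.

```python
import datetime
import pathlib
import tarfile

def back_up(source_dir: str, dest_dir: str) -> pathlib.Path:
    # Timestamped archives make point-in-time retrieval straightforward
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = pathlib.Path(dest_dir) / ("backup-" + stamp + ".tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=pathlib.Path(source_dir).name)
    return archive

# Example (paths are illustrative): back_up("/srv/client-data", "/mnt/offsite")
```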
Personnel
In the event of business discontinuity, the firm will need to locate relevant staff as quickly as possible to address the difficulties that have arisen. In practical terms, contact details of key personnel should be safely stored and be easily retrievable, and such personnel should be engaged on the basis that they might be called upon out of conventional hours to cope with the event of business discontinuity.
Every organisation is different and will have its own priorities in such an event. However, some commonly applicable principles emerge:
• Plan for business discontinuity and consider how the collapse of the organisation’s systems, loss of data and the absence of key personnel might be overcome.
• Be ready for such an event and have a business discontinuity plan in place and operative.
• Ensure personnel are familiar with the routine to be observed in such an event, in much the same way as a rehearsal for fire drill.
• Implement a regular system of personnel training and education, including reporting procedures.
Resources
There are numerous resources available for organisations requiring assistance and guidance in creating and developing business continuity strategies.
The Business Continuity Institute (www.thebci.org) was established in 1994 and certifies its professional members as competent to perform business continuity management to a high standard.
BS 25999-1:2006: business continuity code of practice, BS 25999-2:2007: business continuity specification and BS 25777:2008: information and communications technology continuity management code of practice were discussed in the context of risk management on pages 150-151.
Global good-practice guidelines for business continuity management and related disciplines can be downloaded from the Business Continuity Planning Group’s website.
A detailed examination of the IT outsourcing process is beyond the scope of this book. Outsourcing IT is a complex strategy which can significantly affect an organisation’s position in the market. Legal advice should always be sought before embarking on an IT outsourcing strategy.
Outsourcing involves a supplier offering an organisation the option of transferring responsibility for the operation of a part or the whole of its IT function. The service may be a combination of standardised software, implementation, infrastructure, service and support, and is usually designed to meet the specification of small to medium-sized organisations.
A decision to outsource the IT function goes to the heart of any organisation’s business strategy since IT is an essential business tool for every organisation. Entrusting a business tool that is so critical to the survival and success of an organisation to a supplier about whom the organisation may know little or nothing carries significant risk. Many IT outsourcing projects benefit both organisation and supplier, but almost equally as many result in project failure. A principal reason for the high incidence of project failure is neglect by the organisation in addressing and managing adequately the process and risk that surround the project – in other words neglecting to apply principles of governance.
The traditional IT outsourcing model involves a process of identifying how and why an outsourcing strategy should be adopted, within the context of the organisation achieving its objectives and business goals. There follow the processes of: supplier identification and selection; due diligence; tender negotiations; the contractual and service level agreement (SLA) processes; transition, implementation and change control through contract management; and termination.
Each of these processes calls for systematic and focused strategic, managerial and operational skills to ensure that:
• the most suitable supplier is selected;
• the contract supports the organisation’s business goals;
• the SLA provides levels of service that will satisfy the needs of the organisation’s end-users;
• the project is implemented efficiently and effectively.
As outsourcing projects typically continue for several years and can involve many millions of pounds, the need for the organisation to ensure the project’s success becomes critical.
Underpinning the actual mechanics of the transaction, several other issues arise:
• the project must have top-level support or sponsorship;
• the interests of the stakeholders must be accommodated;
• the relationship with the supplier must be managed;
• strategic, IT, legal and compliance, operation and financial risks must be identified and managed.
IT outsourcing is a process of considerable complexity and significant risk, with the potential to destroy an organisation either as a commercially viable entity or simply in terms of its reputation. It requires that principles of governance are not only understood but adopted and then rigorously applied.
Suppliers claim to provide a number of benefits. Assuming the service is provided to acceptable standards, outsourcing relieves the organisation of the burden of managing IT and can be cost-effective. For a relatively low initial investment, the organisation's responsibilities can be transferred with the assurance of maintenance and support from the supplier. The need to wrestle with the installation of new technology is removed and, if adequate service is provided, there should be little or no disruption to the organisation's ongoing business activities.
At present, there is no standard type of supplier recognised by a trade or industry benchmark, so care must be taken to ensure that the prospective supplier is not simply a software vendor. It is necessary, therefore, to establish the market reputation, technical competence and financial position of the prospective supplier, and particularly to ensure that the supplier has the skill and competence to provide the service required.
This will involve a comprehensive due diligence exercise. On a general level, the organisation should be satisfied that the supplier is experienced in the market, has adequate qualifications, and has a shared vision of the project.
More specifically, the organisation’s due diligence exercise should check that the supplier is a strategically, technologically, legally, operationally and financially suitable partner with which to enter into a formal outsourcing agreement which may last many years.
Before, or as part of, entering into an outsourcing contract, it may be a sensible precaution to consider implementation through a pilot scheme in the first instance. The contract contains the principal terms of agreement and is supported by the SLA, which sets standards of performance and the benchmarks for maintaining those standards. A check should be made to ensure that responsibility under any contract is not shared with another party. Related to this is the need to establish the precise extent of the supplier's liability for loss, both direct and indirect, the available remedies and any termination provisions. Payment methods should be scrutinised; in particular, there should be an awareness of hidden costs, such as consultancy fees or the cost of integrating systems.
The SLA defines the levels or standards of service required by the organisation. The objective is to obtain clear and consistent levels throughout the lifetime of the contract.
Typical issues to which particular attention should be paid include targets, measurable objectives, improvements and innovations, supported by appropriate monitoring and review processes. These issues are measured by a series of metrics included in the SLA.
The purpose of metrics is to ensure supplier compliance with the contract. Metrics should fall within the competence of the supplier and should be realistic, or performance disputes will arise. They should be relevant, capable of analysis and consistently applied; at the same time, care should be taken to ensure that metrics are not too complex. Technologically, the organisation will wish to consider volumes, responsiveness, efficiency and quality.
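By way of illustration, the sketch below shows how a simple, measurable SLA metric might be expressed and checked in code. It is a minimal sketch only: the metric names, targets and measured figures are hypothetical, chosen to show the principle that a metric should be simple, measurable and consistently applied.

```python
# A minimal, hypothetical sketch of an SLA metric and a compliance check.
# The metric names, targets and measured values are illustrative only.
from dataclasses import dataclass

@dataclass
class SlaMetric:
    name: str          # e.g. "availability"
    target: float      # agreed service level, e.g. 99.5 (%)
    measured: float    # value observed over the reporting period (%)

    def compliant(self) -> bool:
        """A metric is met when the measured value reaches the target."""
        return self.measured >= self.target

# Example reporting period: the supplier meets a 99.5% availability target
# but misses a 95% first-call resolution target.
metrics = [
    SlaMetric("availability", target=99.5, measured=99.7),
    SlaMetric("first-call resolution", target=95.0, measured=93.2),
]

for m in metrics:
    status = "met" if m.compliant() else "MISSED"
    print(f"{m.name}: target {m.target}%, measured {m.measured}% - {status}")
```

Keeping each metric to a single number measured in one consistent way, as here, is what makes it capable of analysis and keeps it from becoming a source of dispute.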
Traditionally, this process has been conducted manually. However, in the case of multiple SLAs, the process can easily become cumbersome, costly, labour intensive and prone to dispute. Technology employed to date has included the ad hoc use of spreadsheets and Word documents, which have done little to simplify the process.
Software is now available to automate the service level management process. Oblicore (www.oblicore.com), since acquired by CA Inc (www.ca.com), has developed a solution which maps out the management of the portfolio of services and the levels at which they are provided.
The portfolio (or catalogue) of services defines the services to be offered, activates them and defines the standards by which they are to be measured. This is referred to as service portfolio management. The solution can also be programmed to manage the service levels: establishing the contract, defining relevant measurements, defining reports and setting performance parameters, all in collaboration with the supplier.
The benefits are significant. The process is standardised and data gathering is more consistent. Different metrics, for example performance, usage or financial, can be applied without difficulty. The infrastructure allows oversight and network monitoring of all types of application and enables instant comparisons with past performance.
The effectiveness of metrics can only be measured against a properly conducted audit. The audit process presents clear evidence of compliance, or non-compliance, with the contract and SLA. It is also a key risk awareness and risk management process. It can identify trends in performance that may lead to problems further ahead and can recommend controls that address inconsistencies. The potential range of an audit can stretch from minute examination of detailed metrics to the investigation of discrepancies, fraudulent activity and even the activity of other management teams.
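As a simple illustration of the trend analysis an audit might perform, the sketch below flags a metric whose monthly figures are deteriorating even though each individual month still meets its target. The figures and the three-month threshold are hypothetical assumptions, not taken from any real contract.

```python
# Hypothetical sketch: flag a deteriorating trend before it becomes a breach.
# Monthly availability figures (%) - each month still meets a 99.5% target,
# but the direction of travel suggests a future problem.
monthly_availability = [99.9, 99.8, 99.7, 99.6, 99.5]
TARGET = 99.5

def deteriorating(values, periods=3):
    """True if the metric has fallen in each of the last `periods` intervals."""
    recent = values[-(periods + 1):]
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

if deteriorating(monthly_availability):
    print(f"Warning: performance is trending towards the {TARGET}% floor.")
```

This is the kind of early-warning control an audit can recommend: nothing has yet been breached, but the trend identifies a problem further ahead.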
Technologically, the Board or Partners will wish to check the performance of the supplier's systems, applications and infrastructure and, if the contract is for a period of years, the frequency with which they are upgraded.
The audit may be conducted by the supplier but, although auditors are bound by professional standards, there is obvious potential for a conflict of interest to arise. The organisation should therefore consider engaging its own auditors, to whom the data required for the audit will be supplied.
In an IT outsourcing contract, it is likely that the organisation’s data will be processed in some way by the supplier. It is, therefore, vital that the organisation satisfies itself that the data is secure and safe from interference, contamination and theft. Benchmark security standards are available against which to measure the suitability of the supplier in this respect.
BS ISO/IEC 27001 provides a benchmark for information security management systems. It is expressed as being most effective for suppliers which manage information on behalf of other organisations, as it can be used to assure those organisations that their data is properly protected. This is considered in more detail on page 152.
There are also other relevant standards addressing third-party management of information security issues:
• BS ISO/IEC TR 14516:2002: information technology – security techniques – guidelines for the use and management of trusted third party services;
• BS ISO/IEC 27004: information technology – security techniques – information security management – measurement.
Any outsourcing contract should incorporate a best-practice infrastructure for effective IT service management. The IT Infrastructure Library (ITIL) is widely recognised for providing comprehensive documentation on best practice for IT service management (www.itil-officialsite.com). Version 3 of ITIL presents the concept of life-cycle management from the design stage to identification of measurable service levels, operation, monitoring, support, data gathering and feedback, to renewal through continuous improvement.
The contract should establish a procedure for dispute resolution, perhaps by graded procedures from informal, to alternative dispute resolution (ADR), to expert determination, to arbitration and litigation. In connection with this, it is sensible to establish escalation procedures for assistance in resolving practical problems and support issues.
Cloud computing
Cloud computing raises the potential for a wide range of significant information security risks. Given that the Cloud model can apply to most features of an IT infrastructure, for instance at infrastructure, platform and software levels, the extent of the potential security issues becomes readily apparent.
Areas in which technology risks might arise in the Cloud model include business continuity and disaster recovery; application security; storage technology; virtualisation processes; encryption procedures; data management; and data centre performance.
Presently, there are no easy answers to Cloud security issues, although some solutions are emerging, such as Commensus’ (www.commensus.com) development of its Virtual Infrastructure Platform, which claims to offer a secure virtual solution for data stored and compartmentalised in virtualised servers.
It is therefore incumbent on organisations outsourcing through the Cloud model to obtain from the supplier evidence of the adequacy of the following.7
• records of past performance levels;
• records of forecast performance levels;
• IT systems and server management records;
• governance and enterprise risk management infrastructure;
• processes for managing data.
Compliance
• evidence of ability to comply with the contract;
• evidence of ability to comply with the SLA;
• evidence of compliance with any relevant standards and methodologies;
• proposals for compliance audits;
• compliance with electronic discovery processes.
Security
• security of servers;
• security of networks;
• security of IT platforms and infrastructure;
• security of applications;
• application of intrusion detection and prevention systems;
• application of encryption technology and standards applied;
• business continuity and disaster recovery procedures;
• incident response and notification procedures;
• identity and access management procedures;
• data storage procedures.
Monitoring and review
• procedures for monitoring the service;
• procedures for reviewing the service;
• reviews of metrics and service levels.
No matter how thoroughly due diligence procedures are undertaken, the organisation can never be certain of the quality of the supplier’s performance until operations begin. An organisation should, therefore, be rigorous in its assessment of a Cloud supplier before the contract begins.
Andrew Rose of Clifford Chance regards the Cloud model as:
. . . probably the most significant developing Internet risk in 2010. Effectively, this involves outsourcing IT into a shared commonly virtualised environment where confidential data is stored on remotely located servers, in many cases, internationally. This inevitably raises questions surrounding: the safety and security of data; the confidentiality of data; the auditing of the supplier’s service; and the whereabouts of data; with all the subsequent compliance issues connected with international data transfers and access.
Cloud models can be both internal and external. Key areas an organisation should address include legal and compliance issues (in particular with regard to data protection); data security management; incident management; IAM schemes; and encryption and key management.
Two white papers have been published on Cloud security issues:
• Cloud Cube Model version 1, Jericho Forum, April 2009 (www.opengroup.org);
• Security Guidance for Critical Areas of Focus in Cloud Computing, Cloud Security Alliance, April 2009 (www.cloudsecurityalliance.org).
Web 2.0
Web 2.0 is fast emerging as a recognised business tool in terms of its ability to generate and foster new business connections and to communicate rapidly and widely with a range of strategic allies, prospects and clients.
The principal components of Web 2.0 currently comprise sites such as Wikipedia, Facebook, YouTube and Twitter, which were initially designed for consumer use. Now that organisations have taken a greater interest in these communication channels, a number of security considerations arise. Many organisations now allow their personnel to access Web 2.0 sites, mostly for business purposes but, to a lesser extent, for social purposes also.
Web 2.0 is an extension, or development, of Web 1.0 in that it is interactive, and this interactivity is the foundation of both its value and the interest supporting its rapid development. Underpinning this interactivity is the ability for users to access rich user-generated content through social interaction, in the form of collaboration and information sharing. The development of Web 2.0 is led primarily by the younger generation, who frequently use social networking channels as their communication channel of choice, both socially and in the workplace. As the future of business and commerce depends upon their input and performance, it is clear that Web 2.0 technologies are not a passing phenomenon.
Like all Internet technologies, Web 2.0 technologies present a number of security vulnerabilities that were identified earlier. Hackers and criminals gain access in order to infect Web 2.0 sites or to assume false identities with a view to the commission of various types of fraud.
There are also significant threats to the safety, security and confidentiality of data posted on, or exchanged within, these sites, giving rise to the potential for major leakages of data. Furthermore, the interactivity that Web 2.0 encourages means that the flow of data, whether or not infected by malware, is both inward and outward and potentially distributable to large numbers of individuals.
The attraction of Web 2.0 technologies is that they offer access to large numbers of other users and their systems and are, therefore, an ideal source for exploitation in terms of propagating malware, distributing spam, creating ‘botnets’ or implanting spyware.
While traditional devices such as firewalls, anti-virus solutions and spyware identification tools have some use, the ease and speed with which data is exchanged through Web 2.0 technologies frequently mean that these solutions cannot keep up with the changing nature of the threats presented by the continuous sharing of large volumes of data and information. Only a security solution able to respond to new threats in real time is likely to provide adequate protection, in terms of the ability to monitor inbound and outbound communications and the nature of the content being exchanged between systems.
Furthermore, where organisational policies allow access to, and participation in, selected Web 2.0 sites, any security solution must be able to identify and allow access to those sites while preventing access to unacceptable ones.
Another issue is that of users accessing Web 2.0 websites from remote locations while on legitimate business. It is just as easy for mobile devices to be infected with malware or to become a source of data leakage.
In summary, any security solution should be able to:
• control communications and content in real time;
• identify and prevent malware and data leakage incidents;
• discriminate between acceptable and unacceptable Web 2.0 sources (a minimal sketch of this capability follows the list);
• enforce a policy which includes all IT communication devices;
• provide monitoring, activity and performance reports.
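The sketch below illustrates only the third capability, discrimination between acceptable and unacceptable sources, as a simple allow/block policy applied to the host part of each requested URL. The host names and the three-way decision are hypothetical assumptions for illustration; a real gateway would also inspect content and update its lists in real time.

```python
# Hypothetical sketch of URL-based discrimination between acceptable and
# unacceptable Web 2.0 sources. Host lists and policy are illustrative only.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.linkedin.com", "twitter.com"}   # permitted for business use
BLOCKED_HOSTS = {"www.example-games.com"}             # explicitly barred

def access_decision(url: str) -> str:
    """Return 'allow', 'block' or 'refer' for a requested URL."""
    host = urlparse(url).netloc.lower()
    if host in BLOCKED_HOSTS:
        return "block"
    if host in ALLOWED_HOSTS:
        return "allow"
    # Unknown sites are referred for review rather than allowed by default.
    return "refer"

for url in ("https://twitter.com/status/1",
            "https://www.example-games.com/play"):
    print(url, "->", access_decision(url))
```

The design point is that unknown sites default to review rather than access: a policy that only blocks known bad sites cannot keep pace with the rate at which new Web 2.0 sources appear.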
Technologies are emerging to address these requirements. For instance, Websense (www.websense.com) offers two modules:
• Web Security Gateway: identifies sites and their content and addresses malware threats;
• Threatseeker Network: enforces behavioural protocols and identifies potentially unsafe content, including the decryption and inspection of encrypted content before it enters the network.
Of course, security solutions alone are not a complete answer to data and security threats. Equally critical is the behaviour of those in the organisation using these and other Internet technologies. This aspect is considered in the context of operational issues in Chapter 10. Likewise, the employment and deployment of IT security solutions gives rise to a number of legal and compliance issues, which are examined in the next chapter.
7 Outsourcing IT: a governance guide, Kendrick R, IT Governance (2009).