This chapter provides a foundation for understanding computer security, which is defined as the technological and managerial procedures applied to computer systems to ensure the availability, integrity, and confidentiality of information against unwanted access, damage, and modification. The chapter begins with a brief overview of computer-related security aspects. It then gives an insight into the malicious programs that affect computer systems. The discussion continues with various techniques (cryptography, digital signatures) that play an important role in transmitting data securely over the Internet. Firewall applications, along with their types, and an overview of different ways of identifying and authenticating users on a network are explained. Finally, the chapter emphasises training programs and policies for security.
After reading this chapter, you will be able to understand:
Computer security—a technological and managerial procedure applied to secure the computer system
Malicious programs—destructive programs that penetrate a system along with useful files and cause unusual activity
Cryptography—the process of altering messages to hide their meaning from adversaries who might intercept them
Digital signature—the electronic counterpart of a paper signature, used to verify the authenticity of an electronic document
Firewall—an application that prevents certain outside connections from entering into the network
Identification and authentication—another line of defence that prevents unauthorized people from entering a computer system
Data backup methods and recovery tools to ensure recovery of data in the event of data loss
Security awareness and its related policies
Security has always been an overriding concern of humankind. For any organization, information plays a fundamental role in running the business; therefore, it must be safeguarded from falling into the wrong hands. When data are in digital form, a different kind of security procedure is required. Computer security refers to the technological and managerial procedures applied to computer systems to ensure the availability, integrity, and confidentiality of the information managed by the computer system against unauthorized access, modification, or destruction. It deals with the transmission of data in a secure environment between people located thousands of miles apart. Growth in telecommunication and networking has enabled people to use the electronic medium as the fastest way to stay in touch with each other. However, it has also created a sense of unreliability and insecurity in the minds of its users.
Information technology also has some loopholes associated with it, such as the possibility of vital information being stolen or destructive programs being intentionally planted on another's computer system. The motive behind such activities is to slow down the pace of an organization and harm it economically. Intruders penetrate computers in different ways; they make use of malicious programs to cause destruction and breach privacy. Therefore, it has become essential for organizations to take preventive measures to safeguard their data and privacy. Security experts use firewalls to prevent suspicious data from reaching the host computer and cryptographic algorithms to encrypt data while sending it across the network.
Computer security refers to the protection given to computers and the information contained in them from unauthorized access. The practice of computer security also includes policies, procedures, hardware, and software tools that are necessary to protect the computer systems and the information processed, stored, and transmitted by the systems. It involves the measures and controls that ensure confidentiality, integrity, and availability of the information, processed and stored by a computer. These three aspects are responsible for effective computer security (Figure 19.1).
Figure 19.1 Computer Security
Confidentiality: Confidentiality ensures that information is available only to those persons who are authorized to access it. Strict controls must be implemented so that only those persons who need access to certain information have that access. The most common form of access control is the use of passwords. Requiring passwords, smart cards, or single-use-password devices is the first step in preventing unauthorized individuals from accessing sensitive information and forms the first layer of defence in access control. Therefore, keeping passwords confidential is one of the most fundamental principles of computer security.
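As an illustration of the password principle described above, the following Python sketch shows how a system can verify a password without ever storing it: only a random salt and a derived hash are kept. The function names are illustrative, not taken from any particular product.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash of the password; store only the salt and digest."""
    if salt is None:
        salt = os.urandom(16)  # a random salt defeats precomputed-table attacks
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(attempt, salt, stored_digest):
    """Re-derive the digest from the attempted password and compare safely."""
    _, digest = hash_password(attempt, salt)
    return hmac.compare_digest(digest, stored_digest)
```

Because only the salt and digest are stored, even an intruder who steals the password file cannot directly read the passwords.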
Integrity: Integrity ensures that information cannot be modified in unexpected ways, as loss of integrity could result from human error, intentional tampering, or even catastrophic events. The consequence of using inaccurate information can be disastrous; therefore, an effort must be made to ensure the accuracy and integrity of data at all times. When the validity of information is of utmost importance, it is often helpful to design controls and checks to ensure accuracy of information. For this, encryption process is used, which transforms information into some secret form to prevent unauthorized individuals from accessing the data. Such a technique prevents an intruder from reading or modifying the information.
Availability: Availability ensures that resources cannot be deleted or rendered inaccessible. This applies not only to information, but also to the machines on the network and other aspects of the technology infrastructure. Intentional attacks against computer systems often aim to disable access to data; this inability to access the required resources is called "denial of service." Another aspect of availability ensures that needed resources are usable when and where needed, which calls for system redundancy in the form of backup data and power sources.
Computer systems are vulnerable to many kinds of threats that can cause various types of damages, which may result in significant data loss. These damages can range from errors that can cause harm to database integrity to natural calamity destroying entire computer centres. Precision in estimating computer security-related losses is not possible because many losses are never discovered. The effects of various threats vary considerably; some affect the confidentiality or integrity of data while others affect the availability of a system. A threat can come from any person, object, or event that, if realised, could potentially cause damage to the computer network. It can also arise from intentional modification of sensitive information or accidental error in a calculation or because of a natural disaster like flood, storm, or fire. Some of the commonly occurring threats to a network are discussed below.
Users, system operators, and programmers frequently make unintentional errors, which contribute to security problems directly and indirectly, sometimes resulting in system crashes. In other cases, errors create vulnerabilities. Errors can occur in all phases of the system life cycle. Programming and development errors, often called bugs, range in severity from benign to catastrophic. In the past decade, software quality has improved measurably to reduce this threat, yet software errors still abound.
Errors and Omissions: Errors and omissions are important threats to data and system integrity. These errors are caused not only by data entry operators, processing hundreds of transactions per day, but also by users who create and edit data. Many programs, especially those designed by users for personal computers, lack quality-control measures. However, even the most sophisticated programs cannot detect all types of input errors or omissions. A sound awareness and training program can help an organization reduce the number and severity of errors and omissions.
Fraud and Theft: Information technology is increasingly being used to commit fraud and theft. Computer systems are exploited in numerous ways, both by automating traditional methods of fraud and by using new methods. For example, individuals may use a computer to steal money from a large number of financial accounts, thus generating a significant sum for their own use. Financial systems are not the only ones facing fraudulent activity; systems that control access to any resource, such as time and attendance systems, inventory systems, school grading systems, and long-distance telephone systems, are also targets.
The majority of fraud uncovered on computer systems is committed by insiders who are authorized users of a system. Since insiders have both access to and familiarity with the victim computer system, including what resources it controls and where the flaws are, they are in a better position to commit crimes. An organization's former employees may also pose threats, particularly if their access is not terminated promptly.
Loss of Physical and Infrastructure Support: Loss of physical and infrastructural support in an organization also contributes to security threats. Infrastructural failures include power failures, loss of communications, water outages and leaks, lack of transportation services, natural calamities, and so forth. Recent studies have shown that more loss is associated with fires and floods than with viruses and other more widely publicized threats. A loss of infrastructure often results in system downtime, sometimes in unexpected ways.
Hacker and Cracker: The term hacker refers to a person who intends to find weak points in the security of websites and other computer systems in order to gain unauthorized access. The activities of hackers are not limited to gaining unauthorized access to systems; they also include stealing and destroying confidential information. Hackers can also introduce viruses into the network, which can enter databases or other applications and crash the whole server. In addition, they can modify links in websites to redirect sensitive information to databases of their choice.
In hacking community, hackers have been classified into two categories, namely, white-hat hackers and black-hat hackers depending on their intent behind hacking. The hackers who break into the computer security with non-malicious reasons are known as white-hat hackers. Usually, such hackers are security experts working with manufacturers. On the other hand, the hackers who break into the computer security without authorization for ulterior purposes, such as property theft, credit card theft, terrorism, etc., are known as black-hat hackers or crackers.
Note: In mass media, the terms hacker and cracker are often used interchangeably.
Malicious Code: Malicious code refers to programs that pose threats to the computer system and its precious data. Such code can take the form of viruses, worms, Trojan horses, logic bombs, and other "uninvited" software. A virus is a small segment of code that replicates by attaching copies of itself to existing executable files; the new copy of the virus is executed when a user executes the new host program. A Trojan horse is a program that performs a desired task but also includes unexpected functions. A worm is a self-replicating program that is self-contained and does not require a host program; it creates a copy of itself and causes it to execute without any user intervention. Worms commonly utilize network services to propagate to other host systems.
The number of known viruses and worms is increasing, and the rate of virus attacks is growing alarmingly. Most organizations use Antivirus software and other protective measures to limit the risk of virus infection.
Foreign Government Espionage: In some instances, threats can be posed by foreign government intelligence services. In addition to possible economic espionage, foreign intelligence services may target unclassified systems to collect information about intelligence missions. Unclassified information that may be of interest includes travel plans of senior officials; defence and emergency preparedness data; manufacturing technologies; satellite data; personnel and payroll data; and investigative and security files. Therefore, adequate guidance must be sought from the security office regarding such threats.
Users are often enticed into opening certain files (a screen saver, game, utility, and so on) from the Internet. On opening such files, a user begins to notice unusual activity on the computer, such as malfunctioning applications or inefficient use of hardware resources. Such unusual activity is the result of malicious programs, which penetrate the system along with the useful file. These malicious programs are commonly called viruses, worms, Trojan horses, logic bombs, and so on.
Viruses are programs designed to replicate, attach themselves to other programs, and perform unsolicited and malicious actions. A virus executes when an infected program is executed; on MS-DOS systems, these files usually have the extensions .exe, .com, or .bat. A virus enters a computer system from an external software source and easily hides within software. Just as flowers are attractive to the bees that pollinate them, virus host programs are deliberately made attractive to victimize the user. Viruses become destructive as soon as they enter a system, or they are programmed to lie dormant until activated by a trigger. The different types of viruses are discussed below.
Boot Sector Virus Boot sector virus infects the master boot record of a computer system. This virus either moves the boot record to another sector on the disk or replaces it with the infected one. It then marks that sector as a bad spot on the disk. This type of virus is very difficult to detect since the boot sector is the first program that is loaded when a computer starts. In effect, the boot sector virus takes full control of the infected computer (Figure 19.2).
Figure 19.2 Boot Sector Virus
File-Infecting Virus: A file-infecting virus infects files with the extensions .com and .exe. This type of virus usually resides in memory and infects most of the executable files on a system. The virus replicates by attaching a copy of itself to an uninfected executable program. It then modifies the host program so that, when the program is executed, the virus executes along with it. A file-infecting virus can gain control of the computer only if the user or the operating system executes a file infected with the virus (Figure 19.3).
Figure 19.3 File Infecting Virus
Polymorphic Virus: A polymorphic virus, unlike other viruses, does not consist of a static virus program that is copied unchanged from file to file as it propagates. Such a virus is difficult to detect because each copy it generates appears different from the others. It uses an encryption algorithm to disguise the new copies of the program. For an encrypted virus to execute, it must first decrypt the encrypted portion of itself. When an infected program launches, the virus decryption routine gains control of the computer and decrypts the rest of the virus body so that it can execute normally (Figure 19.4).
Figure 19.4 Polymorphic Virus
Stealth Virus: A stealth virus attempts to conceal its presence from the user. It infects system files or system sectors and, when some other program requests information from those portions of the disk, presents it in the original (unchanged) form. The use of stealth techniques is the major reason why most antivirus programs operate best when the system is started (booted) from a known-clean floppy disk: the virus then never gains control over the system and is immediately visible and can be dealt with. The Stoned Monkey virus is an example of a stealth virus. It uses a "read stealth" capability, so that if a user runs a disk editing utility to examine the main boot record, the user will not find any evidence of infection (Figure 19.5).
Figure 19.5 Stealth Virus
Multipartite Virus: A multipartite virus infects both boot sectors and executable files and uses both mechanisms to spread. It is the worst virus of all because it can combine some or all of the stealth techniques along with polymorphism to prevent detection. For example, if a user runs an application infected with a multipartite virus, the virus activates and infects the hard disk's master boot record. The next time the computer starts, the virus activates again and begins infecting every program that the user runs. The One_Half virus is an example of a multipartite virus that exhibits both stealth and polymorphic behaviour (Figure 19.6).
Figure 19.6 Multipartite Virus
Apart from viruses, other threats, which harm computer, are worms, Trojan horses, and logic bombs. Each of these programs can also be used as a medium to propagate any kind of virus.
Worms: Worms are programs constructed to infiltrate legitimate data processing programs and alter or destroy the data. They often use network connections to spread from one computer system to another; thus, worms attack systems that are linked through communication lines. Once active within a system, a worm behaves like a virus and performs a number of disruptive actions. To reproduce themselves, worms make use of network media such as electronic mail, remote execution, and remote login facilities.
A worm with harmful objectives can perform a wide range of destructive activities, such as deleting files on each affected computer. On October 16, 1989, a worm named WANK infected many VAX/VMS computers on the SPAN network. This worm changed the system announcement message to "Worms Against Nuclear Killers!", a phrase whose initial letters spell out the worm's name.
The worm's replication mechanism can access a remote system in any of three ways: through a network mail facility, through a remote execution capability, or through a remote login facility.
By using a combination of these methods, the network worm is able to copy itself to different computers, which are using similar versions of the operating system.
Trojan Horses: The term "Trojan horse" comes from ancient Greek mythology. In the war between the Greeks and Troy, the Greek army besieged the city of Troy but was unable to penetrate it. Therefore, they decided to trick the enemy by building a large wooden horse with soldiers hidden secretly inside it and presenting it as a gift to the citizens of Troy. During the night, the warriors came out of the horse and overran the city.
In computer terminology, there are programs that perform similarly deceptive activities. These programs enter a computer through an e-mail attachment or through free programs downloaded from the Internet. Once safely inside the computer, a Trojan horse usually opens the way for other malicious software (like viruses) to enter the computer system. In addition, it may allow unauthorized users to access the information stored in the computer.
Trojan horses spread when users are convinced to open or download a program because they think it has come from a legitimate source. They can also be included in software that is freely downloadable. They are usually subtle, especially when used for espionage, and can be programmed to self-destruct without leaving any evidence other than the damage they have caused. The most famous Trojan horse is a program called Back Orifice, an unsubtle play on Microsoft's BackOffice suite of programs for the NT server. This program allows anybody to gain complete control over the computer or server it occupies.
Logic Bombs: A logic bomb is a program, or a portion of a program, that lies dormant until a specific piece of program logic is activated. The most common activator for a logic bomb is a date: the logic bomb checks the system date and does nothing until a pre-programmed date and time are reached. It could also be programmed to wait for a certain message from the programmer; when the logic bomb sees the message, it activates and executes its code. A logic bomb can also be programmed to activate on a wide variety of other conditions, such as a database growing past a certain size or a user's home directory being deleted. A well-known logic bomb is Michelangelo, which has a trigger set for Michelangelo's birthday. On that date, it causes system crashes, data loss, or other unexpected interactions with existing code.
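The trigger mechanism described above can be illustrated with a harmless Python sketch. Here the destructive payload is replaced by a simple true/false flag, and the trigger date (March 6, Michelangelo's birthday) is compared against the supplied date.

```python
import datetime

TRIGGER = (3, 6)  # month and day of Michelangelo's birthday, March 6

def bomb_armed(today):
    """The code lies dormant until the trigger date; the 'payload' here is
    only a boolean flag, standing in for the destructive action."""
    return (today.month, today.day) == TRIGGER
```

On every other day of the year the check fails and the code does nothing, which is exactly what makes logic bombs hard to discover before they fire.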
To safeguard a computer system against viruses, it is important to understand how viruses spread and what they do to infect the system.
How Does a Virus Spread? A virus is designed to proliferate and propagate in a computer network. This means any contact between two or more computers is an opportunity for infection. Unauthorized users break into a system and easily cause destruction by planting a virus in the most sensitive locations of the computer. Viruses come from many sources and, being software code, they can be transmitted along with any other software, for example, on a disk, through a network, or via e-mail.
With the advent of the Internet, people can now access and exchange information and resources, mostly using the World Wide Web, FTP, or e-mail, and viruses travel over these same channels. Such viruses are predominantly macro viruses incorporated into Microsoft Word and Excel documents. A newer type of virus can infect the computer merely when a message is shown in the "preview" pane of the Microsoft Outlook (or Outlook Express) e-mail client.
Virus infection is growing exponentially, and there have been several incidents involving fast-spreading viruses and worms such as Melissa, Code Red, Happy99, and many more. Viruses are never likely to go away; therefore, all users need to be vigilant and know how to combat infection.
System Components Affected by Virus: A virus can attack a computer system when either of two conditions holds: an infected program is executed, or the system is booted from an infected disk.
The simple act of write-protecting floppy disks, by covering the notch in 5¼-inch disks or opening the hole in 3½-inch disks, can prevent many virus infections. Unfortunately, this procedure does not always work, because viruses can hide in a number of places on a disk. If a virus simply attaches itself to an executable file stored on a hard disk or a floppy disk, a change in the file size can help detect its presence. Another detection technique is the checksum: a checksum adds up various file characteristics to produce a number that should change if the file is altered. Unfortunately, smart viruses record the original checksum and report it when the program is tested, rendering this test null and void. Viruses that locate themselves in the bootstrap program of a logical drive are more difficult to trace. Another possible location for a virus to reside is in the physical sectors of the hard disk. A virus can also modify directory entries in the DOS format and install a copy in a directory that executes when other programs run; these linked viruses change directory listings so that they can hide themselves from simple directory display commands. Simply scanning for executable files whose size has changed since the last scan is therefore not sufficient; hidden files, system files, and areas of the disk normally invisible to software must also be explored.
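The checksum idea mentioned above survives in modern practice as cryptographic hashing, which a virus cannot plausibly forge. The following Python sketch (an illustration, not part of any particular antivirus product) records a file's SHA-256 digest and later flags the file if its contents have changed:

```python
import hashlib

def file_checksum(path):
    """Compute a cryptographic digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_modified(path, recorded_digest):
    """Flag the file if its current digest differs from the recorded one."""
    return file_checksum(path) != recorded_digest
```

Unlike a simple additive checksum, any change to the file, even a single byte appended by a virus, produces a completely different digest.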
Different organizations have different styles of operation, and this fact extends to the way they set up their computer networks and operating procedures. If computer operations consist of one or two personal computers used by a few people, an elaborate defence system is not of utmost importance. However, if the system is large enough to include the worldwide networks used by large corporations, a detailed and systematic defence system is required.
Using Antivirus Software: In the early days of computing, computers were not networked very well, and computer viruses spread extremely slowly. Files were transmitted by means of BBSs (Bulletin Board Systems) or on diskette, so the transmission of infected files was neither fast nor easy. However, as connectivity improved, mostly through the use of computers in the workplace, the scope of the virus threat widened. First there was the LAN, then the WAN, and now the Internet. The extensive use of e-mail has also contributed to the significant rise in the number of virus incidents. As a result, the probability of getting infected by a virus today is greater than it was a few years ago.
Before preventive measures against viruses were invented, the only option was to get rid of the infected file, however valuable. This proved very costly to companies whose work was destroyed by viruses. As computer technology advanced, viruses too became more adept at causing destruction. Then came a software utility called Antivirus to the rescue. Antivirus is a software utility that, once installed on a computer, scans the hard disk for viruses and tries to remove any that it finds. Most Antivirus programs include an auto-update feature that enables the program to download profiles of new viruses so that it can check for them as soon as they are discovered. Popular Antivirus software includes Norton Antivirus, McAfee Antivirus, and Avira AntiVir.
Antivirus software normally has a built-in scanner, which scans all the files on the computer's hard disk. It looks for changes and activities in the computer that are typical of a virus attack, and for particular types of code within programs. The software generally relies on prior knowledge of the virus, so frequent updates to the tools are necessary. The important things are to be aware of the possibility of an attack, to possess good virus-checking software, and to keep data backups.
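The signature-based scanning just described can be sketched in a few lines of Python. The virus names and byte patterns below are invented for illustration; real Antivirus products ship databases of many thousands of signatures that are updated continuously.

```python
# Hypothetical signature database: virus name -> byte pattern known to
# appear in that virus's code (both invented for this illustration).
SIGNATURES = {
    "Demo.TestVirus": b"\xde\xad\xbe\xef",
    "Demo.MacroWorm": b"AutoOpen\x00EvilMacro",
}

def scan(data):
    """Return the names of all known signatures found in the given bytes."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]
```

This also shows why frequent updates matter: a scanner of this kind can only recognise patterns already present in its database.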
Figure 19.7 depicts a typical virus detection mechanism used by an Antivirus program. The image illustrates that if an Antivirus program is not installed on the computer, the virus in the e-mail gets into the computer. However, once an Antivirus program is installed, it checks all incoming files (mails), detects viruses, and removes them before the files are stored on the user's machine. In some cases, the Antivirus is unable to remove the virus, and the infected file has to be deleted instead.
Figure 19.7 Virus Protection Using Antivirus
When data are sent over the Internet, they pass through many different servers before reaching the desired destination. These data may remain on servers for months and, at any stage of the journey, are vulnerable to interception. Most of the time this may not concern the user, but there are times when sensitive data (a business quote or personal details) are transmitted. The best protection, therefore, is the use of an encryption program, that is, the cryptography technique. In simple terms, cryptography is the process of altering messages to hide their meaning from adversaries who might intercept them. In data and telecommunications, cryptography is an essential technique for communicating over any untrusted medium, which includes any network, particularly the Internet. Cryptography provides an important tool for protecting information and is used in many aspects of computer security. Traditionally, cryptography was associated only with keeping data secret; nowadays, however, it is also used to provide many security services, such as electronic signatures and assurance that data have not been modified. Cryptography relies upon two basic components: an algorithm (or cryptographic methodology) and a key. Communication over the Internet, for example e-mail, is not secure if no encryption is used: hackers may be able to read messages, or even modify them, if cryptography or some other protection is not employed.
In modern cryptographic systems, algorithms are complex mathematical formulae and keys are strings of bits. For two parties to communicate over a network (such as the Internet), they must use the same algorithm (or algorithms that are designed to work together). In some cases, they must also use the same key. In all cases, the initial unencrypted data are referred to as plain text, which is encrypted into cipher text and, in turn, decrypted back into usable plain text. Cryptography techniques are broadly classified into three types: secret key cryptography, public key cryptography, and hash functions.
Figure 19.8 Message Exchange without Encryption
With secret key cryptography (SKC), a single key is used for both encryption and decryption of data. With this form of cryptography, it is obvious that the key must be known to both the sender and the receiver. If the key is compromised, the security offered by SKC is severely reduced or eliminated. SKC assumes that the parties who share a key rely upon each other not to disclose the key and protect it against modification. SKC schemes are generally categorized as being either stream ciphers or block ciphers. Stream ciphers operate on a single bit at a time and implement some form of feedback mechanism so that the key is constantly changing. On the other hand, a block cipher scheme encrypts one block of data at a time using the same key on each block. In general, the same plain text block will always encrypt to the same cipher text when using the same key in a block cipher whereas the same plain text will encrypt to different cipher text in a stream cipher.
As shown in Figure 19.9, the sender uses the key to encrypt the plain text and sends the cipher text to the receiver. The receiver applies the same key to decrypt the message and recover the plain text. As there is a usage of single key for both functions, SKC is also called symmetric encryption.
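A toy illustration of the symmetric principle is the XOR cipher sketched below in Python: the same key, applied the same way, both encrypts and decrypts. This cipher is far too weak for real use; it only demonstrates that sender and receiver must share a single key.

```python
from itertools import cycle

def xor_cipher(data, key):
    """XOR each data byte with the repeating key bytes.
    Applying the function a second time with the same key decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```

Because encryption and decryption are the same operation under the same key, anyone holding the key can both read and forge messages, which is exactly the trust assumption SKC makes.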
Figure 19.9 Message Exchange Using Secret Key
The main problem in SKC is getting the sender and receiver to agree on the secret key without anyone else finding out. If they are in separate physical locations, they must trust a courier, a telephone system, or some other transmission medium to prevent the disclosure of the secret key. Anyone who overhears or intercepts the key in transit can later read, modify, and forge all messages encrypted or authenticated using that key.
This concept was introduced in 1976 by Whitfield Diffie and Martin Hellman of Stanford University to solve the key distribution problem found in secret key cryptography. In this technique, each person gets a pair of keys: one called the public key (used for encryption) and the other called the private key (used for decryption). Each person's public key is published while the private key is kept secret. The need for the sender and receiver to share secret information is eliminated: all communications involve only public keys, and no private key is ever transmitted or shared. It is no longer necessary to trust some communication channel to be secure against betrayal. The only requirement is that public keys are associated with their users in a trusted (authenticated) manner (for instance, in a trusted directory). Anyone can send a confidential message using public information, but the message can only be decrypted with the private key, which is in the sole possession of the intended recipient.
When a sender wants to send a secret message to the receiver, the sender uses receiver's public key to encrypt the message. When the receiver receives the encrypted message through the Internet, he/she uses his/her private key to decrypt it. Figure 19.10 shows an overview of the whole process.
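The public/private key relationship can be demonstrated with textbook RSA, sketched below in Python using deliberately tiny primes. Real systems use primes hundreds of digits long; this is an illustration of the principle only, not a usable implementation.

```python
# Textbook RSA with tiny primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                  # modulus, part of both keys (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent (2753), kept secret

def encrypt(m):
    """Anyone can encrypt with the receiver's PUBLIC key (e, n)."""
    return pow(m, e, n)

def decrypt(c):
    """Only the holder of the PRIVATE key (d, n) can decrypt."""
    return pow(c, d, n)
```

Note that `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse; the security of real RSA rests on the difficulty of recovering d from e and n when the primes are large.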
Figure 19.10 Message Exchange Using Public Key
A hash function is a one-way encryption algorithm that does not use any key to encrypt or decrypt data. This technique generates a sequence of bit values of fixed length from the original message. Because the fixed-length hash value is computed from the plain text, it is impossible to recover either the contents or the length of the plain text from it. The hash value acts as a digital fingerprint of a file's contents and is often used to ensure that the file has not been altered by an intruder or virus. Hash functions are also commonly employed by many operating systems to encrypt passwords and to preserve the integrity of files (Figure 19.11).
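Most modern systems use standard hash algorithms such as SHA-256, available in Python's hashlib module. The snippet below illustrates the two properties described above: fixed-length output regardless of input size, and sensitivity to any change in the input.

```python
import hashlib

digest = hashlib.sha256(b"Transfer $500 to Alice").hexdigest()

# The digest is always 256 bits (64 hex characters), whatever the input size.
assert len(digest) == 64

# Changing even one character produces a completely different digest,
# so tampering with the message is detectable.
tampered = hashlib.sha256(b"Transfer $900 to Alice").hexdigest()
assert tampered != digest
```

The function is also deterministic: hashing the same input always yields the same digest, which is what makes stored fingerprints useful for later integrity checks.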
Figure 19.11 Message Exchange Using Hash Function
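The fixed-length, fingerprint-like behaviour described above can be demonstrated with Python's standard hashlib module:

```python
# Demonstration of the hash properties described above: fixed-length
# output regardless of input size, deterministic results, and a completely
# different digest for even a small change in the input.
import hashlib

short_digest = hashlib.sha256(b"hello").hexdigest()
long_digest = hashlib.sha256(b"hello" * 10000).hexdigest()

print(len(short_digest), len(long_digest))                   # both 64 hex chars (256 bits)
print(short_digest == hashlib.sha256(b"hello").hexdigest())  # deterministic: True
print(short_digest == hashlib.sha256(b"Hello").hexdigest())  # one-letter change: False
```

Because any alteration to a file changes its digest, comparing a stored digest with a freshly computed one is enough to detect tampering.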
In today's commercial environment, establishing a framework for the authentication of computer-based information requires familiarity with both the legal and computer security fields. Combining these two disciplines is not an easy task. The historical legal concept of “signature” is any mark made with the intention of authenticating the marked document. A digital signature is the electronic counterpart of a paper signature: data attached to an electronic document that verifies the document's authenticity. In other words, digital signatures play the role of physical signatures in verifying electronic documents.
To understand the concept of digital signature in a better way, we must first know its legal implications. A signature is not part of the substance of a transaction but a representation of it. A signature serves the following general purposes:
Although the basic nature of transactions has not changed, cyber law has only begun to adapt to new technology. The legal and business communities must develop rules and practices that use new technology to achieve and surpass the effects historically expected from paper forms. To achieve the basic purposes of signatures outlined above, a signature must have the following attributes:
Digital signatures use the public key cryptography (PKC) technique, which employs an algorithm using two different but mathematically related keys: one for creating a digital signature or transforming data into a seemingly unintelligible form, and another for verifying a digital signature or returning the message to its original form. Computer equipment and software utilizing two such keys are often collectively termed an asymmetric cryptosystem.
The complementary keys of an asymmetric cryptosystem for digital signatures are arbitrarily termed the private key, which is known only to the signer and is used to create the digital signature, and the public key, which is ordinarily more widely known and is used by a relying party to verify the digital signature. If many people need to verify the signer's digital signatures, the public key must be available or distributed to all of them. Although the keys of the pair are mathematically related, if the asymmetric cryptosystem has been designed and implemented securely, it is computationally infeasible to derive the private key from the knowledge of the public key. Thus, many people may know the public key of a given signer and use it to verify that signer's signatures; they cannot discover the signer's private key and use it to forge digital signatures.
Another fundamental process, termed a hash function, is used in both creating and verifying a digital signature. A hash function is an algorithm that creates a digital representation of the message in the form of a hash value of standard length, which is usually much smaller than the message but nevertheless substantially unique to it. Thus, the use of digital signatures usually involves two processes, one performed by the signer and the other by the receiver of the digital signature. These are:
To have a clear understanding of how a digital signature is applied, let us consider an example. Suppose Mr. A wants to send his signed message to Mr. B through the Internet; he can use the public key cryptosystem to provide a digital signature. Mr. A creates the digital signature using his private key. This process encrypts the message's representation, producing a signed message. The signed message is then sent through the Internet to Mr. B (Figure 19.12).
Figure 19.12 Message Exchange Using Digital Signature
Mr. B receives the message along with the digital signature from the Internet. Mr. B gets a copy of Mr. A's public key and using this key, he verifies the signature by A. This process results in the decryption of the message with A's public key. If the decrypted result is the same as that of transmitted message, Mr. B can believe that the message has really come from Mr. A. This is because only Mr. A holds his private key, which is needed to generate the digital signature.
Note that if the message or the digital signature is modified during transmission, the decrypted form of the digital signature will not match the message. From this, Mr. B can conclude that either the message has been tampered with during transmission or the message was not generated by Mr. A.
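The exchange between Mr. A and Mr. B described above can be sketched in code. The sketch below signs a SHA-256 digest with textbook RSA; the primes are toy-sized and the padding real signature schemes (such as RSA-PSS) require is omitted.

```python
# Sketch of the sign/verify exchange: Mr. A signs a message digest with
# his PRIVATE key; Mr. B verifies the signature with Mr. A's PUBLIC key.
import hashlib

def digest(message: bytes, n: int) -> int:
    # fixed-length fingerprint of the message, reduced modulo the key size
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes, private_key) -> int:
    d, n = private_key
    return pow(digest(message, n), d, n)        # signing uses the private key

def verify(message: bytes, signature: int, public_key) -> bool:
    e, n = public_key
    return pow(signature, e, n) == digest(message, n)  # verification uses the public key

# toy key pair (small primes for illustration only)
p, q, e = 1009, 1013, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)

msg = b"Pay Mr. B 100 rupees"
sig = sign(msg, (d, n))
print(verify(msg, sig, (e, n)))                       # True: genuine, unmodified message
print(verify(b"Pay Mr. B 1000 rupees", sig, (e, n)))  # False: tampered message
```

The second check fails because any change to the message changes its digest, so the decrypted signature no longer matches, exactly the tamper-detection property described above.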
Recurring network security incidents caused great concern among people using computers as their medium to exchange data. A method was needed to control the traffic through which computers are accessed. Organizations required an application that could protect and isolate their internal systems from the Internet. This application is called a firewall. Simply put, a firewall prevents certain outside connections from entering the network. It traps inbound or outbound packets, analyses them, and then permits or discards them (Figure 19.13).
Figure 19.13 Firewall Software and Hardware
A firewall decides whether data should be allowed to pass based upon a security policy. For each packet of data, the firewall compares known components of the packet to a security rule set and decides whether the packet should be allowed to pass. In addition, a firewall may have security rules that alter the packet in some basic way before passing the data. With a sensible security policy and a rule set designed to implement that policy, a firewall can protect local area networks from attacks.
Generally, a firewall system comprises software (embedded in a router), a computer, a host, or a collection of hosts set up specifically to shield a site or subnet from protocols and services that can be a threat from hosts outside the subnet. It serves as the gatekeeper between an untrusted network (the Internet) and the more trusted internal networks. If a remote user can access the internal networks without going through the firewall, its effectiveness is diluted. For example, if a travelling manager has an office computer that he or she can dial into while travelling, and that computer is on the protected internal network, then an attacker who can dial into that computer has circumvented the firewall. Similarly, if a user has a dial-up Internet account and sometimes connects to the Internet from his or her office computer, he or she opens an unsecured connection to the Internet that circumvents the firewall.
To understand the working of a firewall, consider an example where an organization has hundreds of computers on the network and one or more connections to the Internet. Without a firewall in place, all those computers are directly accessible to anyone on the Internet. A knowledgeable person can probe those computers, attempt FTP (file transfer protocol) or telnet connections to them, and so on. If one employee makes a mistake and leaves a security hole, hackers can reach the machine and exploit that hole.
With a firewall in place, the network landscape becomes much different. An organization will place a firewall at every connection to the Internet (for example, at every T1 line coming into the company). The firewall can implement security rules. For example, one of the security rules may be: out of the 300 computers inside an organization, only one is permitted to receive public FTP traffic. A company can set up rules like this for FTP servers, web servers, telnet servers, and so on. In addition, an organization can have control on how employees connect to websites, whether files can be sent as attachments outside the company over the network, and so on. Firewall provides incredible control over how people use the network. It provides protection against the following:
One can also customize firewalls according to the specific needs. This means that one can add or remove filters based on several conditions:
A firewall intercepts the data between the Internet and the computer. All data traffic passes through it, and it allows only authorized data to pass into the corporate network. Firewalls are typically implemented using one of the three primary architectures: packet filtering, application-level gateway, and circuit-level gateway.
Packet Filtering: Packet filtering is the most basic firewall protection technique used in an organization. It operates at the network layer, examining incoming and outgoing packets and applying a fixed set of rules to determine whether they will be allowed to pass. A packet filter firewall is typically very fast because it does not examine any of the data in the packet. It simply examines the IP packet header, the source and destination IP addresses, and the port combinations, and then applies the filtering rules. For example, it is easy to filter out all packets destined for Port 80, which might be the port for a web server. The administrator may decide that Port 80 is off limits except for specific IP subnets, and a packet filter would suffice for this. Packet filtering is fast, flexible, transparent (no changes are required at the client), and cheap. This type of filter is commonly used in small-to-medium businesses that require control over users' Internet access (Figure 19.14).
Figure 19.14 Packet Filtering Firewall
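The header-matching step described above can be sketched as a first-match rule lookup. The rule set and field names below are illustrative, not taken from any particular firewall product.

```python
# Minimal sketch of packet-filter rule matching: each packet's header
# fields are compared against an ordered rule set; the first matching
# rule decides, and unmatched packets fall through to a default action.

RULES = [
    # (source IP prefix, destination port, action)
    ("192.168.1.", 80, "allow"),   # only the internal subnet may reach the web server
    ("",           23, "deny"),    # block telnet from everywhere ("" matches any source)
]
DEFAULT_ACTION = "deny"            # anything not explicitly allowed is dropped

def filter_packet(src_ip: str, dst_port: int) -> str:
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and dst_port == port:
            return action          # first matching rule wins
    return DEFAULT_ACTION

print(filter_packet("192.168.1.7", 80))   # allow
print(filter_packet("203.0.113.9", 80))   # deny (not on the internal subnet)
print(filter_packet("192.168.1.7", 23))   # deny (telnet blocked for everyone)
```

Note how the decision uses only header fields (addresses and ports), never the packet payload, which is why packet filtering is fast but comparatively coarse.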
Application-Level Gateway: An application-level gateway firewall uses server programs (called proxies), which run on the firewall. These proxies take requests from the external network, examine them, and forward the legitimate requests to the internal host computer. This type of firewall supports functions such as user authentication and logging. It is considered the most secure type and provides a number of advantages to medium-to-high-risk sites:
This type of firewall requires every client program to be set up as a proxy. In addition, the firewall must have a proxy in it for each type of protocol that can be used. This can cause a delay in implementing new protocols if the firewall does not support it. The penalties paid for this additional level of security are performance and flexibility. Proxy server firewalls have large processor and memory requirements in order to support many simultaneous users, and introduction of new Internet applications and protocols can often involve significant delays while new proxies are developed to support them. True proxy servers are undoubtedly the safest, but impose an overhead in heavily loaded networks.
A firewall based on the application-level gateway technique requires a compatible proxy for each service (such as FTP and HTTP). When a required service is not supported by a proxy, an organization has three possible choices:
Figure 19.15 Application-level Gateway Firewall
Circuit-Level Gateway: In the circuit-level firewall, all connections are monitored, and only those connections that are found to be valid are allowed to pass through the firewall. This generally means that a client behind the firewall can initiate any type of session, but clients outside the firewall cannot see or connect to a machine protected by the firewall. Stateful inspection usually occurs at the network layer, making the firewall fast and preventing suspicious packets from travelling up the protocol stack. Unlike the static packet filtering technique, stateful inspection makes its decisions based on all the data in the packet (corresponding to all the levels of the OSI model) (Figure 19.16).
Figure 19.16 Circuit-level Gateway Firewall
Using this information, the firewall builds dynamic state tables and then uses these tables to keep track of the connections, which go through it. Rather than allowing all packets that meet the rule set requirements to pass, it allows only those packets, which are part of a valid, established connection. A typical use of circuit-level gateways is a situation in which the system administrator trusts the internal users.
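The dynamic state table described above can be sketched as a set of established connections: outbound sessions from trusted internal hosts are recorded, and inbound packets pass only if they belong to a recorded session. All hosts and field names here are illustrative.

```python
# Sketch of a stateful firewall's connection table: only packets that are
# part of a valid, established (internally initiated) connection pass.

state_table = set()   # established connections: (internal host, remote host, port)

def outbound(internal_host: str, remote_host: str, port: int) -> None:
    # an internal client initiates a session; record it in the state table
    state_table.add((internal_host, remote_host, port))

def inbound_allowed(remote_host: str, internal_host: str, port: int) -> bool:
    # an inbound packet passes only if it matches an established connection
    return (internal_host, remote_host, port) in state_table

outbound("10.0.0.5", "198.51.100.2", 443)                # internal client opens a session
print(inbound_allowed("198.51.100.2", "10.0.0.5", 443))  # True: reply to that session
print(inbound_allowed("203.0.113.9", "10.0.0.5", 443))   # False: unsolicited inbound packet
```

This captures why such a gateway suits environments where internal users are trusted: they may open sessions freely, while outsiders cannot initiate anything.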
Identification and authentication (I&A) is another line of defence against unauthorized people entering a computer system. I&A is a critical building block of computer security, as it forms the basis for most types of access control and for establishing user accountability. Such access control often requires a system to identify and differentiate among different users. For example, access control is often based on least privilege, which refers to granting users only those accesses required to perform their duties. User accountability requires linking activities on a computer system to specific individuals and, therefore, requires the system to identify users.
People often confuse identification with authentication, as the two have similar aspects. Identification is the means through which a user provides a claimed identity to the system. Authentication, on the other hand, refers to establishing the validity of that claim. Computer systems use authentication data to recognize the people from whom they receive requests. Authentication presents several challenges: collecting authentication data, transmitting the data securely, and knowing that the person using the computer system is the same person who was authenticated earlier.
There are three ways of authenticating users, each of which can be used individually or in combination with the others: something the user knows (such as a password), something the user possesses (such as a token), and something the user is (a biometric attribute):
These ways provide strong authentication; however, there are certain problems associated with them. People who want to impersonate someone else on a computer system can guess or learn that individual's password; they can also steal or fabricate tokens.
The most common form of authentication is the combination of a user ID and password. This technique is based solely on something the user knows. In general, password systems work by requiring the user to enter a user ID and password. The system compares the password to a previously stored password for that user ID. If there is a match, the user is authenticated and granted access. This type of access control has successfully provided security to computer systems for a long time. Password systems are integrated into many operating systems, and users and system administrators are familiar with them. When properly managed in a controlled environment, they can provide effective security. However, this technique depends upon keeping passwords secret, and unfortunately, there are many ways in which a password may be divulged.
Figure 19.17 User ID and Password
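The compare-against-a-stored-password step described above can be sketched with Python's standard library. Instead of keeping the password itself, the sketch stores a salted hash (via `hashlib.pbkdf2_hmac`), so a stolen password file does not directly reveal the secret; the user ID handling is omitted for brevity.

```python
# Sketch of password verification: the system stores a salted hash of the
# password, then re-derives and compares the hash on each login attempt.
import hashlib
import hmac
import os

def store_password(password: str):
    salt = os.urandom(16)                     # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                       # what the system keeps for this user ID

def check_password(attempt: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = store_password("s3cret!")
print(check_password("s3cret!", salt, digest))   # True: access granted
print(check_password("guess", salt, digest))     # False: access denied
```

The salt and the deliberately slow key-derivation function make guessing attacks against a stolen password file far more expensive than a plain hash would.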
Although some techniques are based solely on what users know, most are based on what users possess. Such techniques use token systems. Tokens are divided into two categories: memory tokens and smart tokens.
Memory Tokens: Memory tokens are meant for storing information (Figure 19.18). They require special reader/writer devices for writing data to and reading data from the tokens. The most common type of memory token is the magnetic stripe card, in which a thin stripe of magnetic material is affixed to the surface of a card (for example, on the back of credit cards). A common application of memory tokens for authentication to computer systems is the automatic teller machine (ATM) card.
Figure 19.18 Memory Token
Memory tokens, when used with PINs, provide significantly more security than passwords. A hacker must have both a valid token and the corresponding PIN to impersonate someone else, which is much more difficult than obtaining a valid password and user ID combination. Tokens can support log generation without the need for the employee to key in a user ID for each transaction or other logged event, since the token can be scanned repeatedly. If the token is required for physical entry and exit, people will be forced to remove the token when they leave the computer, which helps maintain authentication. However, this method also has certain limitations. Although sophisticated technical attacks against memory token systems are possible, most of the associated problems relate to cost, administration, token loss, user dissatisfaction, and the compromise of PINs. Accordingly, most techniques for increasing the security of memory token systems relate to the protection of PINs.
Smart Tokens: A smart token expands the functionality of a memory token by incorporating one or more integrated circuits into the token itself. When used for authentication, a smart token is another example of authentication based on what the user possesses. A smart token requires the user to provide something the user knows (a PIN or password) in order to “unlock” the smart token for use. Smart tokens offer great flexibility and are used to solve different authentication problems. Their benefits vary with the type of token and how it is used, but in general, they provide greater security than memory cards. By using one-time passwords, they can defeat electronic monitoring even when authentication is performed across an open network.
However, like memory tokens, most of the problems associated with smart tokens relate to their cost, the administration of the system, and user dissatisfaction. Smart tokens are generally less vulnerable to the compromise of PINs because authentication usually takes place on the card. On the other hand, smart tokens cost more than memory tokens and are more complex (Figure 19.19).
Figure 19.19 Smart Token
Biometric authentication technologies use the unique characteristics (or attributes) of an individual to authenticate the person's identity. These include physiological attributes (such as fingerprints, hand geometry, or retina patterns) or behavioural attributes (such as voice patterns and hand-written signatures). Biometric authentication technologies based upon these attributes have been developed for computer log in applications. Biometric authentication is technically complex and expensive, and user acceptance can be difficult.
Biometric systems provide an increased level of security for computer systems, but the technology is still new compared to memory tokens or smart tokens. Biometric authentication devices are imperfect, owing to technical difficulties in measuring and profiling physical attributes as well as the somewhat variable nature of those attributes, which may change depending on conditions. For example, a person's speech pattern may change under stressful conditions or when suffering from a sore throat or cold. Due to their relatively high cost, biometric systems are typically used along with other authentication means in environments requiring high security (Figure 19.20).
Figure 19.20 Biometric Techniques
In spite of all the precautions you take to protect your data, a hardware failure, software failure, or natural disaster may occur, resulting in data loss. To protect against such failures, you must back up your data, as data loss can happen to anyone.
Data backup (also known as dump) refers to making duplicate copies of your data and storing them onto some permanent storage device such as magnetic tape, magnetic disk, CD, DVD, pen drive, etc. The data backup is essential for recovering the data as quickly, and with as little damaging impact on users, as possible. Data recovery is a process of restoring data to the most recent consistent state that existed before the failure.
There are four backup methods: full backup, incremental backup, differential backup, and mirror backup.
Full Backup Method: This is the basic backup method that backs up all the files on the computer system. Generally, full backups are performed on a weekly or monthly basis. This method is advantageous because only a single backup is enough to restore all the backed up files. However, it consumes a lot of time as well as secondary memory space to back up all the files.
Incremental Backup Method: This method backs up only those files that have been modified since the most recent backup (whether full, differential, or incremental), thereby making each backup an increment to the last one. Hence, restoring the files after a crash requires all the backups. For example, suppose a full backup has been taken on the tenth of the month and incremental backups on the twelfth and the fourteenth. Then, restoring the files after a crash on the fifteenth requires the full backup as well as the two incremental backups. An important advantage of this method is that it consumes less time and memory space to take a backup. However, a single backup is not sufficient to restore all the files, which makes the restoring process slower.
Differential Backup Method: This method backs up all the files that have changed since the last full backup. Hence, restoring the files requires the full backup and only the last differential backup. This makes restoring a little faster than with incremental backup, which requires all the previous backups. However, backing up is slower and the storage requirement is higher than with incremental backup.
Mirror Backup Method: All the methods discussed above compress the files being backed up into a single file. The mirror backup method, however, copies the files or folders being backed up without any compression. It keeps each file separate in the destination, thereby making the destination a mirror of the source. This makes it the fastest of all the methods, but it requires more storage space than the others.
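The difference between incremental and differential selection described above can be sketched by comparing each file's last-modification time against the relevant backup date. The file names and day numbers below are illustrative, following the tenth/twelfth example given earlier.

```python
# Sketch of backup file selection: an incremental backup copies files
# changed since the MOST RECENT backup of any kind, while a differential
# backup copies files changed since the LAST FULL backup.

files = {"a.doc": 11, "b.doc": 13, "c.doc": 9}   # file -> day of month last modified

last_full_backup = 10   # full backup taken on the tenth
last_any_backup = 12    # an incremental backup taken on the twelfth

incremental = [f for f, day in files.items() if day > last_any_backup]
differential = [f for f, day in files.items() if day > last_full_backup]

print(incremental)    # ['b.doc']           only b.doc changed after the twelfth
print(differential)   # ['a.doc', 'b.doc']  both changed after the full backup
```

The differential copy is larger, which is exactly why differential backups are slower to take but faster to restore from: only the full backup plus the latest differential are needed.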
Online backup means taking a backup of data on a remote computer over the Internet. This method has several advantages over the others: it not only enables you to recover data after file corruption or computer failure, but also lets you recover data even if a natural disaster (such as fire, flood, or theft) has destroyed everything at your place. However, subscribing to online backup services may not be affordable for individual users or small businesses.
Sometimes, you lose important data because of disk failure, accidental deletion of files, and so on. The situation becomes even worse if you do not have a backup of the data. In such situations, recovery tools come to the rescue.
A recovery tool enables you to recover your lost data. A wide variety of recovery tools are available in the market, some of which are PC Inspector File Recovery, EASEUS Data Recovery Wizard, VirtualLab Data Recovery, and Stellar Phoenix Windows Data Recovery.
In today's computing environment, everyone in an organization has access to system resources and, therefore, has the potential to cause harm. Each computer on a network is potentially a door through which other computers on the network can be accessed. Therefore, security awareness and training are required to implement computer security in an organization. Computer security awareness and training is an issue that affects all computer users, whether they use a personal computer or terminals connected to a mainframe computer.
The main purpose of security awareness is to enhance security by improving awareness of the need to protect system resources, developing the skills and knowledge computer users need to perform their jobs more securely, and building the knowledge needed to design, implement, or operate security programs for organizations and systems. Awareness reinforces the fact that security supports the mission of the organization by protecting valuable resources. It is also used to remind people of basic security practices, such as logging off a computer system or locking doors.
To spread awareness of computer security, various teaching methods are deployed, such as video tapes, newsletters, posters, bulletin boards, briefings, short reminder notices at logon, discussions, and lectures. Awareness is often incorporated into security training and is used to change employee attitudes. However, employees often regard computer security as an obstacle to productivity. To help motivate employees, it must be emphasized how security, from a broader perspective, contributes to productivity. The consequences of poor security must be explained, while avoiding the fear and intimidation that employees often associate with security.
Providing training is also an awareness activity; it teaches people skills that enable them to perform their jobs in a more secure manner. This includes teaching people what they should do and how they should (or can) do it. Training addresses many levels, from basic security practices to more advanced or specialized skills. It can be specific to one computer system or generic enough to address all systems. Many personnel need advanced or specialized training rather than just basic security practices. For example, managers may need to understand security consequences and costs so that they can factor security into their decisions. There are different ways to identify individuals or groups who need specialized or advanced training. One method is to look at job categories, such as executives, functional managers, or technology providers.
A security-training program normally includes training classes, either strictly devoted to security or as added special sections or modules within existing training classes. Training is either computer or lecture based and may include hands-on practice and case studies.
A security policy is a formal statement of the rules for people who are given access to an organization's technology and information assets. When developing a security policy, care must be taken to identify and understand the relevant and valid issues. Often, resources are wasted reacting to a high-profile hoax while a serious issue goes unnoticed. When evaluating the effectiveness of a particular security policy, the resources being protected must be analysed. The information stored in today's computers ranges from public domain material, such as telephone numbers, to highly sensitive data, such as an individual's genome. It is neither practical nor possible to secure all this information absolutely. The goal is to protect information in line with its relative value and importance to the business process. A good security policy should allow employees to access only the resources they need to perform their job functions.
The main purpose of security policy is to inform users, staff, and managers of their obligatory requirements for protecting technology and information assets. The policy should specify the mechanisms through which these requirements can be met.
Components of Security Policy: Once a security policy has been established, it should be clearly communicated to users, staff, and management. Having all personnel sign a statement indicating that they have read, understood, and agreed to abide by the policy is an important part of the process. To retain the value and genuineness of the policy, it must include the following components:
The important characteristics of security policy are as follows:
It must be implementable through system administration procedure, publishing of acceptable user guidelines, or other appropriate methods.
It must be enforceable with security tools, where appropriate, and with sanctions, where actual prevention is not technically feasible.
It must clearly define the areas of responsibility for the users, administrators, and management.