Preface

The mobile industry for wireless cellular services has grown at a rapid pace over the past decade. Similarly, Internet service technology has made dramatic growth through the World Wide Web over a wireline infrastructure. Realizing a complete mobile Internet technology will be a key future objective, achieved by converging these technologies through enhancements to both cellular mobile systems and Internet interoperability.

Flawless integration of these two wired/wireless networks will enable subscribers not only to roam worldwide but also to meet the ever-increasing demand for data/Internet services. However, new technology development and the service perspective of 4G systems will take many years to materialize. To keep up with this noteworthy growth in the demand for wireless broadband, new technologies and structural architectures are needed to greatly improve system performance and network scalability while significantly reducing the cost of equipment and deployment. The present concept of P2P networking for exchanging information needs to be extended to support intelligent appliances, with ubiquitous connectivity to Internet services, fast broadband access at data rates above 50 Mbps, seamless global roaming, and Internet data/voice multimedia services.

The 4G system is a development initiative based on the currently deployed 2G/3G infrastructure, enabling seamless integration with emerging 4G access technologies. For successful interoperability, the path toward 4G networks should incorporate a number of critical trends in network integration. MIMO/OFDMA-based air interfaces for beyond-3G systems, such as Long Term Evolution (LTE), Ultra Mobile Broadband (UMB), Mobile WiMAX (Worldwide Interoperability for Microwave Access), and Wireless Broadband (WiBro), are called 4G systems.

Chapter 1 begins with a brief history of the Internet and describes topics covering (i) networking fundamentals such as LANs (Ethernet, Token Ring, FDDI), WANs (Frame Relay, X.25, PPP), and ATM; (ii) connecting devices such as circuit- and packet-switches, repeaters, bridges, routers, and gateways; (iii) the OSI model that specifies the functionality of its seven layers; and finally, (iv) a TCP/IP five-layer suite providing a hierarchical protocol made up of physical standards, a network interface, and internetworking.

Chapter 2 presents a state-of-the-art survey of the TCP/IP suite. Topics covered include (i) TCP/IP network layer protocols such as ICMP, IP version 4, and IP version 6 relating to the IP packet format, addressing (including ARP, RARP, and CIDR), and routing; (ii) transport layer protocols such as TCP and UDP; (iii) HTTP for the World Wide Web; (iv) FTP, TFTP, and NFS protocols for file transfer; (v) SMTP, POP3, IMAP, and MIME for e-mail; and (vi) SNMP for network management. This chapter also introduces the latest social network services and smart IT devices. With the introduction of smart services and devices, security problems have become an issue. This chapter introduces security threats such as (i) worms, viruses, and DDoS attacks against network security; (ii) phishing and SNS attacks against Internet security; and (iii) exploits, password cracking, rootkits, Trojan horses, and so on against computer security.

Chapter 3 presents the evolution and migration of mobile radio technologies from the first generation (1G) to the third generation (3G). 1G, or circuit-switched analog systems, consist of voice-only communications; 2G and beyond systems, comprising both voice and data communications, largely rely on packet-switched wireless mobile technologies. This chapter covers the technological development of mobile radio communications through each successive generation over the past decade. At present, mobile data services are rapidly transforming to facilitate, and ultimately profit from, the increased demand for nonvoice services. Through aggressive 3G deployment plans, the world's major operators boast attractive and homogeneous portal offerings in all of their markets, notably in music and video multimedia services. Despite the improbability of any major changes in the next 4–5 years, rapid technological advances have already bolstered talk of 3.5G and even 4G systems. For each generation, the following technologies are introduced:

1. 1G Cellular Technology
  • AMPS (Advanced Mobile Phone System)
  • NMT (Nordic Mobile Telephone)
  • TACS (Total Access Communications System)
2. 2G Mobile Radio Technology
  • CDPD (Cellular Digital Packet Data), North American protocol
  • GSM (Global System for Mobile Communications)
  • TDMA-136 or IS-54
  • iDEN (Integrated Digital Enhanced Network)
  • cdmaOne IS-95A
  • PDC (Personal Digital Cellular)
  • i-mode
  • WAP (Wireless Application Protocol)
3. 2.5G Mobile Radio Technology
  • ECSD (Enhanced Circuit-Switched Data)
  • HSCSD (High-Speed Circuit-Switched Data)
  • GPRS (General Packet Radio Service)
  • EDGE (Enhanced Data rates for GSM Evolution)
  • cdmaOne IS-95B
4. 3G Mobile Radio Technology
  • UMTS (Universal Mobile Telecommunication System)
  • HSDPA (High-Speed Downlink Packet Access)
  • FOMA (Freedom of Mobile Multimedia Access)
  • CDMA2000 1x
  • CDMA2000 1xEV (1x Evolution)
  • CDMA2000 1xEV-DO (1x Evolution Data Only)
  • CDMA2000 1xEV-DV (1x Evolution Data Voice)
  • KASUMI Encryption Function

Chapter 4 deals with some of the important contemporary block cipher algorithms that have been developed over recent years, with an emphasis on the most widely used encryption techniques: the Data Encryption Standard (DES), the International Data Encryption Algorithm (IDEA), the RC5 and RC6 encryption algorithms, and the Advanced Encryption Standard (AES). AES specifies the FIPS-approved Rijndael algorithm (2001), which can process data blocks of 128 bits using cipher keys with lengths of 128, 192, and 256 bits. DES is not new, but it has survived remarkably well over 20 years of intense cryptanalysis. A complete analysis of triple DES-EDE in CBC mode is also included. Pretty Good Privacy (PGP), used for e-mail and file storage applications, utilizes IDEA for conventional block encryption, along with RSA for public-key encryption and MD5 for hash coding. RC5 and RC6 are both parameterized block algorithms with a variable block size, a variable number of rounds, and a variable-length key. They are designed for great flexibility in both performance and level of security.
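The chaining structure of CBC mode, as used by triple DES-EDE in CBC mode, can be sketched as follows. A keyed XOR stands in for the real block cipher (DES or AES) purely to show how each ciphertext block feeds into the next; it offers no security and is an illustrative assumption, not part of any standard.

```python
# Toy illustration of CBC (Cipher Block Chaining) mode.
# A keyed XOR stands in for the real block cipher E_K / D_K.
BLOCK = 8  # DES block size in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    return xor(key, block)          # stand-in for E_K

def toy_decrypt_block(key: bytes, block: bytes) -> bytes:
    return xor(key, block)          # stand-in for D_K (XOR is self-inverse)

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        # C_i = E_K(P_i XOR C_{i-1}), with C_0 = IV
        block = toy_encrypt_block(key, xor(plaintext[i:i + BLOCK], prev))
        out += block
        prev = block
    return out

def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    prev, out = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        # P_i = D_K(C_i) XOR C_{i-1}
        out += xor(toy_decrypt_block(key, block), prev)
        prev = block
    return out
```

The chaining is the point of the sketch: identical plaintext blocks yield different ciphertext blocks because each block is XORed with the previous ciphertext before encryption.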

Chapter 5 covers the various authentication techniques based on digital signatures. It is often necessary for communication parties to verify each other's identity. One practical way to do this is with the use of cryptographic authentication protocols employing a one-way hash function. Several contemporary hash functions (such as DMDC, MD5, and SHA-1) are introduced to compute message digests or hash codes for providing a systematic approach to authentication. This chapter also extends the discussion to include the Internet standard HMAC, which is a secure digest of protected data. HMAC is used with a variety of different hash algorithms, including MD5 and SHA-1. Transport Layer Security (TLS) also makes use of the HMAC algorithm.
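Computing a message digest with the hash functions named above is a one-line operation in most languages; a minimal sketch using Python's standard hashlib (which provides MD5 and SHA-1, though not DMDC) is:

```python
# Computing message digests (hash codes) with MD5 and SHA-1.
import hashlib

message = b"Attack at dawn"
md5_digest = hashlib.md5(message).hexdigest()    # 128-bit digest, 32 hex chars
sha1_digest = hashlib.sha1(message).hexdigest()  # 160-bit digest, 40 hex chars
```

Any change to the message, however small, produces a completely different digest, which is what makes hash codes usable for authentication.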

Chapter 6 describes several public-key cryptosystems introduced after conventional encryption. This chapter concentrates on their use in providing techniques for public-key encryption, digital signature, and authentication. This chapter covers in detail the widely used Diffie–Hellman key exchange technique (1976), the Rivest–Shamir–Adleman (RSA) algorithm (1978), the ElGamal algorithm (1985), the Schnorr algorithm (1990), the Digital Signature Algorithm (DSA, 1991), and the Elliptic Curve Cryptosystem (ECC, 1985) and Elliptic Curve Digital Signature Algorithm (ECDSA, 1999).
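The Diffie–Hellman exchange mentioned above can be sketched in a few lines with modular exponentiation. The Mersenne prime used below is chosen only so the example runs quickly; real deployments use standardized primes of 2048 bits or more.

```python
# A minimal sketch of Diffie-Hellman key exchange.
# Illustrative parameters only; not a deployable group choice.
import secrets

p = 2 ** 127 - 1   # a prime modulus (demonstration size only)
g = 3              # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice sends A = g^a mod p over the open channel
B = pow(g, b, p)   # Bob sends   B = g^b mod p over the open channel

alice_key = pow(B, a, p)   # Alice computes (g^b)^a mod p
bob_key   = pow(A, b, p)   # Bob computes   (g^a)^b mod p
assert alice_key == bob_key  # both derive the same shared secret
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret requires solving the discrete logarithm problem.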

Chapter 7 presents profiles related to a public-key infrastructure (PKI) for the Internet. The PKI automatically manages public keys through the use of public-key certificates. The Policy Approval Authority (PAA) is the root of the certificate management infrastructure. This authority is known to all entities at all levels of the PKI and creates guidelines that all users, CAs, and subordinate policy-making authorities must follow. Policy Certificate Authorities (PCAs) are formed by all entities at the second level of the infrastructure. PCAs must publish their security policies, procedures, legal issues, fees, and any other subjects they may consider necessary. Certification Authorities (CAs) form the next level below the PCAs. The PKI contains many CAs that have no policy-making responsibilities. A CA has any combination of users and RAs whom it certifies. The primary function of the CA is to generate and manage the public-key certificates that bind the user's identity with the user's public key. The Registration Authority (RA) is the interface between a user and a CA. The primary function of the RA is user identification and authentication on behalf of a CA. It also delivers the CA-generated certificate to the end user. X.500 specifies the directory service. X.509 describes the authentication service using the X.500 directory. X.509 certificates have evolved through three versions: version 1 in 1988, version 2 in 1993, and version 3 in 1996. X.509 v3 is now found in numerous products and Internet standards. These three versions are explained in turn. Finally, Certificate Revocation Lists (CRLs) are used to list unexpired certificates that have been revoked. Certificates may be revoked for a variety of reasons, ranging from routine administrative revocations to situations where private keys are compromised.
This chapter also includes the certification path validation procedure for the Internet PKI and architectural structures for the PKI certificate management infrastructure.

Chapter 8 describes the IPsec protocol for network layer security. IPsec provides the capability to secure communications across a LAN, across a virtual private network (VPN) over the Internet, or over a public WAN. Provision of IPsec enables a business to rely heavily on the Internet. The IPsec protocol is a set of security extensions developed by IETF to provide privacy and authentication services at the IP layer using cryptographic algorithms and protocols. To protect the contents of an IP datagram, there are two main transformation types: the Authentication Header (AH) and the Encapsulating Security Payload (ESP). These are protocols to provide connectionless integrity, data origin authentication, confidentiality, and an antireplay service. A Security Association (SA) is fundamental to IPsec. Both AH and ESP make use of an SA that is a simple connection between a sender and receiver, providing security services to the traffic carried on it. This chapter also includes the OAKLEY key determination protocol and ISAKMP.

Chapter 9 discusses Secure Socket Layer version 3 (SSLv3) and TLS version 1 (TLSv1). The TLSv1 protocol itself is based on the SSLv3 protocol specification. Many of the algorithm-dependent data structures and rules are very similar, so the differences between TLSv1 and SSLv3 are not dramatic. The TLSv1 protocol provides communications privacy and data integrity between two communicating parties over the Internet. Both protocols allow client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. The SSL or TLS protocol is composed of two layers: the Record Protocol and the Handshake Protocol. The Record Protocol takes an upper-layer application message to be transmitted, fragments the data into manageable blocks, optionally compresses the data, applies a MAC, encrypts it, adds a header, and transmits the result to TCP. Received data is decrypted, verified, decompressed, and reassembled before being delivered to higher-level clients. The Handshake Protocol, operating on top of the Record Layer, is the most important part of SSL or TLS. The Handshake Protocol consists of a series of messages exchanged by client and server. This protocol provides three services between the server and client. The Handshake Protocol allows the client/server to agree on a protocol version, to authenticate each other by forming a MAC, and to negotiate an encryption algorithm and cryptographic keys for protecting data sent in an SSL record before the application protocol transmits or receives its first byte of data.
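The Record Protocol pipeline described above (fragment, compress, MAC, encrypt, add a header) can be sketched schematically. The one-byte XOR "cipher," the fixed keys, and the 3-byte header format below are illustrative stand-ins chosen for brevity, not the real TLS cipher suites or record format.

```python
# Schematic sketch of the SSL/TLS Record Protocol pipeline.
import hmac, hashlib, struct, zlib

MAC_KEY = b"mac-secret"
ENC_KEY = b"\x5a"          # one-byte XOR key: placeholder cipher only
FRAGMENT_SIZE = 2 ** 14    # records carry at most 2^14 bytes of data

def toy_cipher(data: bytes) -> bytes:
    return bytes(b ^ ENC_KEY[0] for b in data)   # self-inverse toy "encryption"

def record_protocol_send(message: bytes) -> list[bytes]:
    records = []
    for i in range(0, len(message), FRAGMENT_SIZE):
        fragment = message[i:i + FRAGMENT_SIZE]                     # 1. fragment
        compressed = zlib.compress(fragment)                        # 2. compress
        mac = hmac.new(MAC_KEY, compressed, hashlib.sha1).digest()  # 3. apply MAC
        ciphertext = toy_cipher(compressed + mac)                   # 4. encrypt
        header = struct.pack("!BH", 23, len(ciphertext))            # 5. add header
        records.append(header + ciphertext)
    return records

def record_protocol_receive(records: list[bytes]) -> bytes:
    message = b""
    for record in records:
        rec_type, length = struct.unpack("!BH", record[:3])
        plaintext = toy_cipher(record[3:3 + length])                # decrypt
        compressed, mac = plaintext[:-20], plaintext[-20:]
        expected = hmac.new(MAC_KEY, compressed, hashlib.sha1).digest()
        assert hmac.compare_digest(mac, expected)                   # verify MAC
        message += zlib.decompress(compressed)                      # decompress
    return message                                                  # reassemble
```

The receive side runs the same steps in reverse, which is exactly the "decrypted, verified, decompressed, and reassembled" path the Record Protocol specifies.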

A keyed hashing message authentication code (HMAC) is a secure digest of some protected data. Forging an HMAC is impossible without knowledge of the MAC secret. HMAC can be used with a variety of different hash algorithms, such as MD5 and SHA-1, denoted HMAC-MD5(secret, data) and HMAC-SHA-1(secret, data). There are two differences between the SSLv3 scheme and the TLS MAC scheme: TLS makes use of the HMAC algorithm defined in RFC 2104, and the TLS master-secret computation is also different from that of SSLv3.
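Both HMAC-MD5(secret, data) and HMAC-SHA-1(secret, data) are available directly in Python's standard hmac module, which implements RFC 2104; the secret and data values below are arbitrary examples.

```python
# HMAC computation per RFC 2104 with MD5 and SHA-1.
import hmac, hashlib

secret = b"shared-mac-secret"
data = b"protected record data"

hmac_md5 = hmac.new(secret, data, hashlib.md5).hexdigest()    # HMAC-MD5(secret, data)
hmac_sha1 = hmac.new(secret, data, hashlib.sha1).hexdigest()  # HMAC-SHA-1(secret, data)
```

Without the secret, an attacker cannot produce a matching tag: recomputing with a different key yields a different digest, which is the forgery resistance the text describes.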

Chapter 10 describes e-mail security. PGP, invented by Philip Zimmermann, is widely used in both individual and commercial versions that run on a variety of platforms throughout the global computer community. PGP uses a combination of symmetric secret-key and asymmetric public-key encryption to provide security services for e-mail and data files. PGP also provides data integrity services for messages and data files using digital signatures, encryption, compression (ZIP), and radix-64 conversion (ASCII Armor). With growing reliance on e-mail and file storage, authentication and confidentiality services are becoming increasingly important. Multipurpose Internet Mail Extension (MIME) is an extension to the RFC 822 framework that defines a format for text messages sent using e-mail. MIME is actually intended to address some of the problems and limitations of the use of SMTP. S/MIME is a security enhancement to the MIME Internet e-mail format standard, based on the technology from RSA Data Security. Although both PGP and S/MIME are on an IETF standards track, it appears likely that PGP will remain the choice for personal e-mail security for many users, while S/MIME will emerge as the industry standard for commercial and organizational use. Both the PGP and S/MIME schemes are covered in this chapter.

Chapter 11 discusses the topic of firewalls and intrusion detection systems (IDSs) as an effective means of protecting an internal system from Internet-based security threats: Internet worms, computer viruses, and special kinds of viruses. An Internet worm is a standalone program that can replicate itself through the network to spread, so it does not need to be attached to a host program. It degrades network performance by consuming bandwidth, increasing network traffic, or causing denial of service (DoS). The Morris, Blaster, Sasser, and Mydoom worms are some of the most notorious examples. A computer virus is a kind of malicious program that can damage the victim computer and spread itself to other computers. The word “virus” is loosely used for most malicious programs. There are special kinds of viruses such as the Trojan horse, the botnet, and the key logger. A Trojan horse (or Trojan) is made to steal information by social engineering; the term is derived from Greek mythology. Like the Greek soldiers, a Trojan gives a cracker remote access while avoiding detection by the user. It looks like a useful or helpful program, or a legitimate access process, but it simply steals passwords, card numbers, or other useful information. Popular Trojan horses include Netbus, Back Orifice, and Zeus. A botnet is a set of zombie computers connected to the Internet. Each compromised zombie computer is called a bot, and the botmaster controls these bots through a C&C (Command and Control) server. A key logger monitors keyboard input. Key loggers are of two types, software and hardware; this chapter is concerned with the software type only. A software key logger gets installed on the victim's computer and logs all keystrokes; the logs are saved to files or sent to the attacker over the network. A key logger can capture key input at the kernel level, memory level, API level, packet level, and so on.

A firewall is a security gateway that controls access between the public Internet and a private internal network (or intranet). A firewall is an agent that screens network traffic in some way, blocking traffic it believes to be inappropriate, dangerous, or both. The security concerns that inevitably arise between the sometimes hostile Internet and secure intranets are often dealt with by inserting one or more firewalls on the path between the Internet and the internal network. In reality, Internet access provides benefits to individual users, government agencies, and most organizations. But this access often creates a security threat. Firewalls act as an intermediate server in handling SMTP and HTTP connections in either direction. Firewalls also require the use of an access negotiation and encapsulation protocol such as SOCKS to gain access to the Internet, intranet, or both. Many firewalls support tri-homing, allowing the use of a DMZ network. To design and configure a firewall, one needs to be familiar with some basic terminology, such as bastion host, proxy server, SOCKS, choke point, DMZ, logging and alarming, and VPN. Firewalls are classified into three main categories: packet filters, circuit-level gateways, and application-level gateways. In this chapter, each of these firewalls is examined in turn. Finally, this chapter discusses screened host firewalls and how to implement a firewall strategy. To provide a certain level of security, three basic firewall designs are considered: a single-homed bastion host, a dual-homed bastion host, and a screened subnet firewall.

An IDS is a device or software application that monitors network or system activities for malicious activities or policy violations and produces reports to a management station. Intrusion detection systems are primarily focused on identifying possible incidents, logging information about them, and reporting intrusion attempts. In addition, organizations use IDSs for other purposes, such as identifying problems with security policies, documenting existing threats, and deterring individuals from violating security policies. IDSs have become a necessary addition to the security infrastructure of nearly every organization. Regarding IDS, this chapter presents a survey and comparison of various IDSs, including Internet worm/virus detection.

IDSs are categorized as Network-Based Intrusion Detection System (NIDS), Wireless Intrusion Detection System (WIDS), Network Behavior Analysis System (NBAS), Host-Based Intrusion Detection System (HIDS), signature-based systems, and anomaly-based systems. An NIDS monitors network traffic for particular network segments or devices and analyzes network, transport, and application protocols to identify suspicious activity.

NIDSs typically perform most of their analysis at the application layer, covering protocols such as HTTP, DNS, FTP, SMTP, and SNMP. They also analyze activity at the transport and network layers, both to identify attacks at those layers and to facilitate the analysis of application layer activity (e.g., a TCP port number may indicate which application is being used). Some NIDSs also perform limited analysis at the hardware layer. A WIDS monitors wireless network traffic and analyzes its wireless networking protocols to identify suspicious activity. The typical components in a WIDS are the same as in an NIDS: consoles, database servers (optional), management servers, and sensors. However, unlike an NIDS sensor, which can see all packets on the networks it monitors, a WIDS sensor works by sampling traffic because it can only monitor a single channel at a time. An NBAS examines network traffic, or statistics on network traffic, to identify unusual traffic flows. NBA solutions usually have sensors and consoles, with some products also offering management servers. Some sensors are similar to NIDS sensors in that they sniff packets to monitor network activity on one or a few network segments. Other NBA sensors do not monitor the networks directly and instead rely on network flow information provided by routers and other networking devices. An HIDS monitors the characteristics of a single host, and the events occurring within that host, for suspicious activity. Examples of the types of characteristics an HIDS might monitor are wired and wireless network traffic, system logs, running processes, file access and modification, and system and application configuration changes. Most HIDSs have detection software known as agents installed on the hosts of interest. Each agent monitors activity on a single host and, if prevention capabilities are enabled, also performs prevention actions. The agents transmit data to management servers.
Each agent is typically designed to protect a server, a desktop or laptop, or an application service. A signature-based IDS relies on pattern matching techniques: the IDS contains a database of signatures. Some signatures are publicly known and distributed, for example, through Snort (http://www.snort.org/), and some are discovered by signature-based IDS vendors. Using a database of known signatures is much like antivirus software. The IDS tries to match these signatures against the analyzed data; if a match is found, an alert is raised. An anomaly-based IDS is a system for detecting computer intrusions and misuse by monitoring system activity and classifying it as either normal or anomalous. The classification is based on heuristics or rules, rather than patterns or signatures, so it can detect any type of misuse that falls outside normal system operation, as opposed to signature-based systems, which can only detect attacks for which a signature has previously been created.
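The signature-matching core of such an IDS is conceptually simple; a minimal sketch follows. The signature names and byte patterns below are made-up examples for illustration, not real Snort rules.

```python
# Minimal sketch of signature-based detection: match analyzed data
# against a database of known byte patterns and alert on any hit.
SIGNATURE_DB = {
    "toy-worm-probe":   b"GET /default.ida?NNNN",  # hypothetical worm request
    "toy-shell-upload": b"/bin/sh -i",             # hypothetical shell pattern
}

def inspect(packet_payload: bytes) -> list[str]:
    """Return the names of all signatures found in the payload."""
    return [name for name, pattern in SIGNATURE_DB.items()
            if pattern in packet_payload]
```

An empty result means no known signature matched, which is exactly the blind spot anomaly-based systems aim to cover: traffic with no recorded signature passes a matcher like this silently.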

Chapter 12 covers the SET protocol designed for protecting credit card transactions over the Internet. The recent explosion in e-commerce has created huge opportunities for consumers, retailers, and financial institutions alike. SET relies on cryptography and X.509 v3 digital certificates to ensure message confidentiality, payment integrity, and identity authentication. Using SET, consumers and merchants are protected by ensuring that payment information is safe and can only be accessed by the intended recipient. SET combats the risk of transaction information being altered in transit by keeping information securely encrypted at all times and by using digital certificates to verify the identity of those accessing payment details. SET is the only Internet transaction protocol to provide security through authentication. Message data is encrypted with a random symmetric key that is then encrypted using the recipient's public key. The encrypted message, along with this digital envelope, is sent to the recipient. The recipient decrypts the digital envelope with a private key and then uses the symmetric key to recover the original message. SET addresses the anonymity of Internet shopping by using digital signatures and digital certificates to authenticate the banking relationships of cardholders and merchants. The process of ensuring secure payment card transactions on the Internet is fully explored in this chapter.
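The digital envelope process described above can be sketched end to end. Textbook RSA with tiny primes and a hash-derived XOR stream stand in for the real public-key and symmetric ciphers; the key pair, the toy stream cipher, and the per-byte RSA encryption are all illustrative assumptions, insecure by design.

```python
# Toy sketch of a SET-style "digital envelope": encrypt the message with a
# random symmetric key, then encrypt that key under the recipient's public key.
import hashlib, secrets

# Recipient's toy RSA key pair (p=61, q=53 -> n=3233, e=17, d=2753).
N, E, D = 3233, 17, 2753

def sym_encrypt(key: bytes, data: bytes) -> bytes:
    # Keyed XOR stream derived from the key; self-inverse toy cipher.
    stream = hashlib.sha256(key).digest()
    stream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def seal(message: bytes) -> tuple[bytes, list[int]]:
    sym_key = secrets.token_bytes(8)             # random symmetric key
    envelope = [pow(b, E, N) for b in sym_key]   # key under recipient's public key
    return sym_encrypt(sym_key, message), envelope

def open_envelope(ciphertext: bytes, envelope: list[int]) -> bytes:
    sym_key = bytes(pow(c, D, N) for c in envelope)  # recover key with private key
    return sym_encrypt(sym_key, ciphertext)          # then decrypt the message
```

Only the holder of the private exponent D can recover the symmetric key, so only the intended recipient can read the payment information, which is the property SET relies on.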

Chapter 13 deals with 4G Wireless Internet Communications Technology including Mobile WiMAX, WiBro, UMB, and LTE. WiMAX is a wireless communications standard designed to provide high-speed data communications for fixed and mobile stations. WiMAX far surpasses the 30-m wireless range of a conventional Wi-Fi LAN, offering a metropolitan area network with a signal radius of about 50 km. The name WiMAX was created by the WiMAX Forum, which was formed in June 2001 to promote conformity and interoperability of the standard. Mobile WiMAX (originally based on 802.16e-2005) is the revision that was deployed in many countries and is the basis of future revisions such as 802.16m-2011.

WiBro is a wireless broadband Internet technology developed by the South Korean telecoms industry. WiBro is the South Korean service name for the IEEE 802.16e (Mobile WiMAX) international standard. WiBro adopts TDD for duplexing, OFDMA for multiple access, and 8.75/10.00 MHz channel bandwidths. WiBro was devised to overcome the data rate limitations of mobile phones (for example, CDMA 1x) and to add mobility to broadband Internet access (for example, ADSL or WLAN). WiBro base stations offer an aggregate data throughput of 30–50 Mbps per carrier and cover a radius of 1–5 km, allowing for portable Internet usage.

UMB was the brand name for a project within 3GPP2 (3rd Generation Partnership Project 2) to improve the CDMA2000 mobile phone standard for next-generation applications and requirements. In November 2008, Qualcomm, UMB's lead sponsor, announced it was ending development of the technology, favoring LTE instead. Like LTE, the UMB system was to be based on Internet (TCP/IP) networking technologies running over a next-generation radio system, with peak rates of up to 280 Mbps. Its designers intended the system to be more efficient and capable of providing more services than the technologies it was intended to replace. To provide compatibility with the systems it was intended to replace, UMB was to support handoffs with other technologies, including existing CDMA2000 1x and 1xEV-DO systems. However, 3GPP added this functionality to LTE, allowing LTE to become the single upgrade path for all wireless networks. No carrier had announced plans to adopt UMB, and most CDMA carriers in Australia, the United States, Canada, China, Japan, and South Korea have already announced plans to adopt either WiMAX or LTE as their 4G technology.

LTE, marketed as 4G LTE, is a standard for wireless communication of high-speed data for mobile phones and data terminals. It is based on the GSM/EDGE and UMTS/HSPA network technologies, increasing the capacity and speed using new modulation techniques. The standard is developed by the 3GPP. The world's first publicly available LTE service was launched by TeliaSonera in Oslo and Stockholm on 14 December 2009. LTE is the natural upgrade path for carriers with GSM/UMTS networks, but even CDMA holdouts such as Verizon Wireless, which launched the first large-scale LTE network in North America in 2010, and au by KDDI in Japan have announced they will migrate to LTE. LTE is, therefore, anticipated to become the first truly global mobile phone standard, although the use of different frequency bands in different countries will mean that only multiband phones will be able to utilize LTE in all countries where it is supported.

The scope of this book is adequate to span a one- or two-semester course at a senior or first-year graduate level. As a reference book, it will be useful to computer engineers, communications engineers, and system engineers. It is also suitable for self-study. The book is intended for use in both academic and professional circles, and it is also suitable for corporate training programs or seminars for industrial organizations as well as in research institutes. At the end of the book, there is a list of frequently used acronyms and a bibliography section.
