Computers, Networks and the Internet

The threats identified in the last section are closely linked with the structure and weaknesses of IT systems consisting of computers, applications, networks, and the Internet. These IT systems are extremely complex and are changing rapidly. A complete description of all their myriad pieces and interactions is beyond the scope of this book. However, a basic understanding of the building blocks and the underlying architecture is needed for subsequent discussion.

We suspect that most of the readers of this book are already familiar with this architecture; still, for the sake of completeness, we decided to include a brief description with focus on elements crucial for understanding security needs and solutions.

A general-purpose computer, also known as a host, runs an OS (Operating System), a complex piece of software responsible for controlling hardware components and providing a high-level abstraction, to human users and to applications running on the computer, for such tasks as data storage, screen- and keyboard-based I/O, user management, access control, data networking and so on. Though many different kinds of operating system software exist, most computers run some flavor of UNIX, Linux, or MS-Windows.

More often than not, a computer is attached to a network of computers. Such a network consists of physical elements like cables, connectors, repeaters, and so on (or transmitters and receivers, in the case of wireless networks) at the physical layer; electrical (or electromagnetic, with wireless networks) specifications to demarcate and carry data bits from one node to another at the data link layer; and conventions for host addressing, routing and so on at the network layer. A wide variety of options exist at each layer to construct a functional network. The layered architecture allows the technology at any one layer to be replaced without changing the other parts of the network. This is a real blessing and has allowed the independent evolution of such disparate technologies as wireless home networks, Ethernet LANs, and cable- and DSL-based networks, all transparent to the communicating programs.

Networks are connected together, with the help of devices known as routers, to form bigger networks. These routers are nothing but specialized computers dedicated to moving data bits from one network to another, reconciling any differences in the characteristics of the networks they join. Network elements such as hosts and routers are sometimes also referred to simply as nodes.

One network, actually a network of networks, is of particular interest due to its pervasiveness and central role in e-commerce, intra- and inter-enterprise computing and now even day-to-day life: the Internet, or simply, the Net. Most of the computers and networks in the world are directly or indirectly connected to it, some always and some intermittently. It is a global network consisting of major backbones of high-bandwidth communication lines and fast routers, operating under the rules defined by a collection of layered protocols, and with ownership distributed among many organizations and governments.

The network layer protocol of the Internet, IP (Internet Protocol), defines the addressing and routing of data packets. An integral part of IP is its addressing mechanism: each node must have a unique IP address, distinct from the hardware address of the network interface itself.
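For illustration, the structure of IP addresses and the networks that contain them can be explored with Python's ipaddress module (the addresses below come from the TEST-NET-1 range reserved for documentation, not from any real host):

```python
import ipaddress

# Each IP node has a logical address, independent of the hardware (MAC)
# address of its network interface. An address belongs to a network,
# identified by a prefix such as 192.0.2.0/24.
host = ipaddress.ip_address("192.0.2.10")     # a single node's address
net = ipaddress.ip_network("192.0.2.0/24")    # the surrounding network

print(host in net)           # True: the address falls inside this network
print(net.network_address)   # 192.0.2.0
print(net.num_addresses)     # 256 addresses in a /24 network
```

The same module handles IPv6 addresses, which we return to later in this section.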

In addition to IP, a number of other control and supporting protocols exist for the smooth functioning of the Internet—ICMP (Internet Control Message Protocol) for reporting unexpected events and for testing; ARP (Address Resolution Protocol) for querying the hardware address corresponding to an IP address; RARP (Reverse Address Resolution Protocol) for retrieving the IP address corresponding to a hardware address; DHCP (Dynamic Host Configuration Protocol) for automatic allocation of IP addresses and other configuration information to hosts; DNS (Domain Name System) for providing many-to-many mapping between IP addresses and symbolic host names; and so on.
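DNS name resolution, for example, is exposed to programs through ordinary library calls. A minimal sketch using Python's socket module, resolving the name localhost so that no network access is required:

```python
import socket

# DNS (together with the local hosts file) maps symbolic names to IP
# addresses. gethostbyname performs a forward lookup and returns an
# IPv4 address as a string; "localhost" resolves locally.
addr = socket.gethostbyname("localhost")
print(addr)   # typically 127.0.0.1
```

A reverse lookup (address back to name) is available through the companion call socket.gethostbyaddr.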

Two important transport layer protocols, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), built on top of IP, provide additional functionality for effective communication between two endpoints. TCP allows connection-oriented, reliable communication, whereas UDP allows connectionless, datagram-oriented communication.
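TCP's connection-oriented exchange can be sketched with Python sockets over the loopback interface (the payload and the single-connection server are illustrative choices, not a production design):

```python
import socket
import threading

# A minimal TCP echo over loopback: the server accepts one connection
# and returns whatever bytes it receives.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()      # blocks until a client connects
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=echo_once, daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
print(reply)   # b'ping'
client.close()
server.close()
```

A UDP version would use SOCK_DGRAM and sendto/recvfrom instead: no connection is set up, and each datagram stands on its own.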

A number of higher-level, more specialized protocols are built on top of these two transport-level protocols—TELNET for remote login and text-mode terminal emulation; FTP (File Transfer Protocol) to move files from one host to another; SNMP (Simple Network Management Protocol) for managing networks of hosts and routers; SMTP (Simple Mail Transfer Protocol) to distribute e-mail messages; HTTP (Hypertext Transfer Protocol) for online access to hypertext documents; NFS (Network File System) for accessing files on remote hosts as if they were on the local machine; and so on.
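The request/response pattern these application protocols share can be illustrated with HTTP, using only the Python standard library: a throwaway one-handler server and a client, both over loopback (a sketch, not a production setup):

```python
import http.client
import http.server
import threading

# A minimal HTTP exchange over loopback. The handler answers every GET
# with a fixed body; the client issues one request and reads the reply.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the example's output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
response = conn.getresponse()
body = response.read()
print(response.status, body)   # 200 b'hello'
conn.close()
server.shutdown()
```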

Another class of protocols focuses on enabling distributed computing over the Internet, allowing a program running on one computer to invoke a program running on another. Such protocols include RPC (Remote Procedure Call) for invoking procedures on remote computers; IIOP (Internet Inter-ORB Protocol) for CORBA-based object-oriented distributed computing; and Java RMI (Java Remote Method Invocation) for distributed computing among Java programs. Note that the Java RMI payload can be transported over HTTP and IIOP as well.
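The RPC style—a call that looks local but executes remotely—can be sketched with Python's built-in XML-RPC support, an HTTP-based RPC protocol used here purely as a stand-in for the pattern; it is not ONC RPC, IIOP, or Java RMI itself:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Server side: register a procedure under a name that remote callers use.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy turns attribute access into remote invocations.
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add(2, 3)   # looks like a local call; runs on the server
print(result)              # 5
server.shutdown()
```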

Figure 1-1 shows the relationship among these different protocols. Though this figure might give the impression that a higher-level protocol relies on a specific lower-level protocol, it is important to keep in mind that an application level protocol like HTTP or SNMP may be implemented over more than one underlying protocol.

Figure 1-1. Network Communication Protocols.


A protocol may define its own data representation mechanism, or use an existing one, to package the data being exchanged. This becomes important when the data is not simply a sequence of bytes but contains elements of data types such as integers, floating-point numbers, arrays, and so on. For example, SMTP and HTTP use MIME (Multipurpose Internet Mail Extensions) for various types of attachments; SNMP uses ASN.1 (Abstract Syntax Notation One) and associated encoding rules to represent management data; RPC uses XDR (External Data Representation) for packaging call arguments and return values; and Java RMI Transport uses Java object serialization. Lately, the use of XML-based (eXtensible Markup Language) markup languages has become quite popular for representing all sorts of content.
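The byte-order problem these representations solve can be illustrated with Python's struct module, which, like XDR, can pack values in big-endian "network" order so that both ends agree on the encoding regardless of native CPU byte order (a sketch of the idea, not actual XDR encoding):

```python
import struct

# "!" selects network (big-endian) byte order; "i" is a 32-bit signed
# integer and "f" a 32-bit float. Packing fixes the wire representation.
packed = struct.pack("!if", 42, 1.5)
print(packed.hex())            # 0000002a3fc00000

# The receiver unpacks with the same format and recovers the values,
# whatever its own CPU's native byte order happens to be.
value, fraction = struct.unpack("!if", packed)
print(value, fraction)         # 42 1.5
```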

Software implementing these protocols usually follows the structure of a client-server system, the server being a program always ready to accept a connection from a client (for TCP-based protocols) and service a request. The server program may be self-contained, as is the case with TELNET, FTP, DNS and many others, or may allow extension through a well-defined API. The client may be available as a standalone program, as a library to be used by user programs, or as both.

Most of these protocols were not designed with strong security in mind. For example, TELNET and FTP send usernames and passwords in cleartext over the network, where they can be seen, with the help of very rudimentary tools, by anyone having access to a host connected to the same network. A number of other, subtler, protocol-related weaknesses have also been found and published.

There have been a number of attempts to address these weaknesses with the help of secure protocol design principles and cryptography. SSL (Secure Sockets Layer) was developed as a means to secure TCP and hence any protocol that runs on top of TCP. SSH (Secure Shell) provides a secure way of logging in remotely and can also be used to tunnel any protocol between two hosts on the Internet. Another mechanism to secure the transport is IPsec, an integral part of the next version of IP, IPv6, and also available for the current IPv4. These advances are key to ensuring computer security, and we talk more about them in subsequent sections and chapters.
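As a small illustration of the "secure TCP" idea, an application secures its connection by wrapping the socket in a TLS layer (TLS being the successor to SSL). The sketch below builds a client-side context with Python's ssl module without making any network connection; it shows only that certificate checking is on by default:

```python
import ssl

# A default client context: the TLS layer will verify the server's
# certificate chain and check that the certificate matches the host
# name before any application data is exchanged.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
print(context.check_hostname)                      # True
```

In actual use, context.wrap_socket() is applied to an ordinary TCP socket before the application protocol (HTTP, SMTP, and so on) starts talking over it.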

One thing that must be kept in mind is that a secure protocol doesn't necessarily translate into a secure system as subtle implementation flaws could leave security holes. In fact, even with insecure protocols, most of the security breaches take place by exploiting defects in the implementation and not through protocol weaknesses. This is so because (a) exploiting defects is much easier; and (b) it allows much better control of the compromised systems. We have more to say on this topic later in the chapter.
