Databases

Databases are very much at the center of computing. Most computer applications are either designed to use databases or employ them in the course of doing their jobs. For example, an operating system maintains a database of resources, a word processor has a database of fonts, and so forth. As a way of guiding yourself through this complex subject, note that this section covers six major topics.

  • The elements of a database

  • Planning a database

  • The relational model of database organization

  • Nonrelational databases

  • The power of "legacy" database systems

  • Database markets

The Elements of a Database

A database is simply a way of organizing information. A bunch of books piled on the floor is a collection of information. A bunch of books placed on shelves in some logical order is a database. A computer database management system provides tools that make it possible for the computer to organize and manage the data more efficiently. Let's continue with the library example, which can be a very productive one. Perhaps you've put your books on the shelves according to their size—tall books on one set of shelves, short ones on another, and so forth. After that, volumes are added to the appropriate shelf in the order in which they are purchased. This is a technique that is widely used in European libraries and is not as weird as it sounds; it allows for very good use of shelf space. Let's say, though, that you want to find a book by a certain author. If your library is big enough, and you're over 50 like me, you won't remember where it is and will have to scan all of the books to find the one you want. This can be very tedious. This problem, having to look at everything to find a single item, is solved by the use of an index—in this case, a card catalog that lists all your books, alphabetized by author. When you find the right card in the index (catalog), you see that it has a number, let's say C124, which means that it is the 124th book on the small book stack (C). You can then go straight to the place where it should be (only to find that someone else has taken it; but that's another story). An index, then, is a list that holds current information on the location of objects—books, in this case.
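
To make the card-catalog idea concrete, here is a minimal sketch in Python. The books, shelf codes, and field names are invented for illustration; the point is only that an index is a second, smaller structure the computer consults instead of scanning the whole collection.

```python
# A minimal sketch of the card-catalog idea. The books and shelf
# codes here are invented for illustration.
books = [
    {"author": "Capslock, C.", "title": "Using Upper Case", "location": "C124"},
    {"author": "Engineer, A Typical", "title": "Speaking Hexadecimal", "location": "C017"},
]

# Without an index: examine the books one by one until the author matches.
def find_by_scanning(author):
    for book in books:
        if book["author"] == author:
            return book["location"]
    return None

# With an index: a separate list (here a dictionary) that maps each
# author straight to a shelf location, like a card in the catalog.
author_index = {book["author"]: book["location"] for book in books}

print(find_by_scanning("Capslock, C."))  # C124, found by scanning the list
print(author_index["Capslock, C."])      # C124, found in a single lookup
```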

Tech Talk

DBMS: A database management system (DBMS) is a program, or more likely a group of programs, responsible for data storage. Additional DBMS functions include a programming language to facilitate data entry and retrieval, a report writer, analysis tools, and so on.


Tech Talk

Database index: In a database, those fields that are specially designated for fast sorting and retrieval are the indexed fields. Normally, the index is kept in memory for the fastest retrieval.


Computer databases store information on disk (or on tape, for older systems) and include one or more indices that allow any item to be found quickly. To illustrate some of the key issues in databases, let's pretend that we are opening a medium-sized public library and need to develop a database to automate finding and circulating books.

Planning a Database

The first point to be made in this illustration is that it's just illustrative—no sane person would develop new library software these days because there are already a dozen or so high-quality systems out there available for purchase. Anyway, let's think about the things our library database needs to do. First, as described earlier, it needs to provide information about books. We'll say that means the following: author, title, subject, and call number (where it is found on the shelf). Since our library also circulates books, it needs to know something about its patrons: name, address, and Social Security number (to avoid conflicts when two people have the same name). It also has to have some record of each book's availability status: in, out (until when), and not-on-shelf-probably-lost. Finally, we have to have some mechanism for the staff to add and remove books and patrons from the database.
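
As a planning aid only, here is one way to write down the information just listed, using Python dataclasses. The class and field names are assumptions made for this sketch, not something a library system would have to use.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# A planning sketch of the information the library system must track.
# The class and field names are illustrative choices, nothing more.

@dataclass
class Book:
    call_number: str   # where the book is found on the shelf
    title: str
    author: str
    subject: str

@dataclass
class Patron:
    ssn: str           # distinguishes two patrons with the same name
    last_name: str
    first_name: str
    address: str

@dataclass
class Availability:
    call_number: str
    status: str                       # "in", "out", or "probably lost"
    due_date: Optional[date] = None   # set only when the book is out
```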

Carded by Dragnet

Older computers didn't necessarily have an index. Even without one, it was much faster and much more accurate for the computer to sort through tens of thousands of cards than for humans to do it. Think of the old TV series Dragnet, which featured shots of the computer at the California Department of Motor Vehicles sorting through mountains of cards ("Well, Sergeant, I think it was a green car, probably a Ford, and the license had the letters A, B, and J in it."). Given this information, the computer would sort through all the cards, first to find ones whose licenses contained those letters (the smallest group), then sort through this subset for green cars, then that subset for Fords, and so forth. Once the information was received, Sergeant Friday would likely learn that it was false information supplied by the killer. But I digress. Note one thing that is much easier with magnetic storage than with the paper-based systems—the computer can make copies of data with the greatest of ease. Once it has identified a set of records from a database, it simply copies them to a new location for further sorting. The main database is then free for other searches.


This is a very bare bones description of a library database. It's simple for purposes of illustration. But let's use the opportunity to point out that a serious database planner would (should) spend a great deal of time thinking about what information is included in the database. Depending on the system, it's usually very difficult to go back and add something. Moreover, the organization of the database, which we'll discuss next, is often dependent to a considerable extent on what information it contains. Careful planning is critically important.

The Relational Model of Database Organization

The most important thing to think about in organizing a database is how to separate information for the greatest efficiency. For example, our library database could be thought of as three independent, but linked, databases. One, the circulation database, would include patron names and SS numbers, the call number of the book, and the latter's circulation status. This database would be in constant use, with frequent changes made as patrons checked out and returned books. This kind of database, which assumes constant and rapid change of information, is called a transaction processing database (the usual acronym is OLTP, for online transaction processing). Because writes to disk require a number of separate steps, OLTP places a heavy demand on hardware, and will be most efficient if isolated from other activities.

Tech Talk

Relational database (RDBMS): A relational database is one in which separate tables (usually in separate files) can be linked through common fields. Keeping information in separate, logically related groups most of the time, while retaining the ability to create multiple links when needed, provides a great deal of flexibility.


Tech Talk

Transactional database (OLTP): Transaction processing, also known as online transaction processing (OLTP), refers to a database that is accessed for the purpose of making changes to its records. For example, a library circulation system, which changes the status of a book from checked in to checked out (or vice versa), is performing transactions.


Another part of the library database is the catalog system. This includes author, title, subject, call number, and (maybe) circulation status. This database is used for queries—for example, does the library have books on a certain subject by a particular author? Queries to a database, especially complex ones that link a number of different fields, also make considerable demands on hardware. Typically, the computer will have to sort through one set of records to select those that match certain criteria, then sort through this subset to look for other criteria, and so on. If you can isolate queries from OLTP, you will have much better performance in both areas. One way to do this is to have the databases on a multiprocessing machine and/or on separate machines. But what if we want our query to also return information on the circulation status of the book (as is now usually done with library software)? Well, you can have the query software wait until it has found the records that the patron requested, then the catalog database can request the needed information from the circulation database. This adds some burden to the OLTP part of the system, but not nearly as much as if it had to handle queries and transactions at the same time.

Modes of Database Operation
  • Transaction processing: In this mode, the database is able to make changes to its records. For example, the clerk at a library desk checks a book out and, in so doing, changes the database record to show that it is out and when it is due. The term OLTP (for online transaction processing) is often used, though the "online" is redundant, since essentially all such databases are online. Fast and reliable OLTP systems, for example those used by airlines, require significant processing power and very fast I/O systems.
  • Query or decision support: Some databases allow you to look up information, but not to change it. For example, library patrons can find out whether a book is in, but cannot check it out. In a more sophisticated use, such databases are used to sort through vast data sets to provide "what if" information to executives. If much analysis is to be done, this database mode will require a great deal of computational power.
  • Batch: Unlike the others, batch mode is not interactive. Instead, the database is given some instructions (find patrons who owe fines; calculate electric bills) and runs unattended until the work is finished. This is the least demanding mode as far as the hardware is concerned.

A final part of your system will be one that periodically runs through the circulation database; finds books that are overdue; associates these with patrons, including their name and address; calculates a fine for each item; then creates a mailing list. This part of the system runs in batch mode. It doesn't have to respond immediately to anyone and can do its work every night after the library is closed.
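
A batch job of this kind is essentially a loop that nobody is waiting on. Below is a minimal sketch in plain Python; the records, the ten-cents-a-day fine, and the run date are all invented for illustration.

```python
from datetime import date

# Invented sample records for the nightly batch run.
circulation = [
    {"call_no": "zx34.5p", "date_due": date(1997, 3, 11), "patron_ssn": "123-45-6789"},
    {"call_no": "cv52.9", "date_due": None, "patron_ssn": None},   # on the shelf
]
patrons = {
    "123-45-6789": {"name": "A. Patron", "address": "123 Elm St."},
}

FINE_PER_DAY = 0.10          # an assumed fine schedule
today = date(1997, 4, 1)     # the job runs after the library closes

# Find overdue books, associate them with patrons, calculate the fine,
# and build the mailing list. No one is waiting for an answer.
mailing_list = []
for record in circulation:
    if record["date_due"] is not None and record["date_due"] < today:
        patron = patrons[record["patron_ssn"]]
        days_late = (today - record["date_due"]).days
        fine = round(days_late * FINE_PER_DAY, 2)
        mailing_list.append((patron["name"], patron["address"], fine))

for name, address, fine in mailing_list:
    print(f"{name}, {address}: ${fine:.2f} owed")
```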

The examples below illustrate the database just described. The columns of a table are called fields, as in the "Author" or "Fines owed" field. A row in a table is a record; the patron table would have a record for each patron.

The type of database described here is known as relational (other types—hierarchical, network, and object—are discussed later in this section). That is to say, instead of a number of separate databases, it is structured as a series of tables that can either be manipulated individually for simple operations or be related to each other (joined) for more complex ones. For example, the catalog table doesn't contain availability information and the circulation table doesn't contain subject data. But, because both tables have call numbers associated with other information, it's possible to find out, for example, the circulation status of all books on a certain subject. Relational databases are dominant today (at least for new applications) because they offer both tremendous flexibility (separating information for different purposes) and tremendous power (because all the data are related, it's possible to ask questions about them from almost any perspective). Relational databases have their own special language for queries, Structured Query Language (SQL, sometimes pronounced sequel). While SQL first emerged in somewhat standardized form (from IBM), it has since evolved into a range of dialects—mostly extensions of the original rather than modifications. Organizing a relational database in a way that makes the most efficient use of tables and linking fields is called normalizing. While this wouldn't be hard in a simple database like the one described above, it becomes a formidable task for highly complex databases.

Circulation Table
Call no. of book | Date due | Patron SSN
zx34.5p          | 3/11/97  | 123-45-6789
cv52.9           |          |

Catalog Table
Call no. of book | Title                | Author              | Subject     | Publisher
zx34.5p          | Using Upper Case     | Capslock, C.        | Computers   | ASCII Press
cv52.9           | Speaking Hexadecimal | Engineer, A Typical | Programming | EEEE Press

Patron Table
Patron last name | Patron first name | Patron SSN   | Address     | Fines owed
cummings         | e                 | 333-33-4190  | forest lawn |
Goodwin          | Archie            | 1886-00-1975 | W. 35th St. | $1.50

Tech Talk

Structured Query Language: Structured Query Language, abbreviated SQL, is a language that is employed for querying relational databases. Most implementations of SQL are specific to a particular database package.


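To make joining concrete, here is a small sketch using Python's built-in sqlite3 module and the sample tables above. SQLite stands in for a real library RDBMS, and the table and column names are choices made for this example.

```python
import sqlite3

# Toy versions of the catalog and circulation tables shown above,
# built with Python's standard sqlite3 module purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalog (call_no TEXT, title TEXT, author TEXT, subject TEXT, publisher TEXT)")
conn.execute("CREATE TABLE circulation (call_no TEXT, date_due TEXT, patron_ssn TEXT)")

conn.executemany("INSERT INTO catalog VALUES (?, ?, ?, ?, ?)", [
    ("zx34.5p", "Using Upper Case", "Capslock, C.", "Computers", "ASCII Press"),
    ("cv52.9", "Speaking Hexadecimal", "Engineer, A Typical", "Programming", "EEEE Press"),
])
conn.executemany("INSERT INTO circulation VALUES (?, ?, ?)", [
    ("zx34.5p", "3/11/97", "123-45-6789"),
    ("cv52.9", None, None),   # on the shelf: no due date, no patron
])

# Neither table alone knows both a book's subject and its circulation
# status, but joining them on the call number answers the question
# "what is the status of every book on Computers?"
rows = conn.execute("""
    SELECT catalog.title, circulation.date_due
    FROM catalog
    JOIN circulation ON catalog.call_no = circulation.call_no
    WHERE catalog.subject = 'Computers'
""").fetchall()

for title, date_due in rows:
    status = f"due {date_due}" if date_due else "on the shelf"
    print(f"{title}: {status}")
```
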
Deciding which fields to index is also a major issue. Indexes do make a search on a field much faster; instead of having to sort through the entire set of records to find a field with a certain value, the database can use the index to find it. One downside of indexes is that they require additional storage space. Every index represents a new list of information, in effect a sub-database. Adding indexes makes the database larger and (potentially) slower. Further, indexes have to be maintained; new entries are added to the end of the index or perhaps in a separate file. If the entry isn't found in the main section, the software then has to do a second search. Once the new area or file gets big enough that second searches are common, the system slows down and the index needs to be "rebuilt" by having the new entries placed in the appropriate places in the main index. Large systems usually do this overnight (or whenever usage is lowest). If an index is really used often, it should be kept in memory rather than on disk. In the example above, frequently used fields like Call Number, Title, and Author would be indexed. A field like Fines owed would not; the rare occasions when this would be searched directly (e.g., a list of patrons with fines over a certain amount), would be done in batch mode and would not require the speed of indexing.
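
Continuing the sqlite3 sketch above (still with illustrative names), adding an index on a heavily searched field is a single statement. The query itself does not change; the database simply consults the index instead of scanning every row.

```python
# Continuing the sqlite3 sketch above: index the heavily searched
# fields (author, title) and leave rarely searched ones, such as
# fines owed, unindexed to save space and maintenance.
conn.execute("CREATE INDEX idx_catalog_author ON catalog (author)")
conn.execute("CREATE INDEX idx_catalog_title ON catalog (title)")

# Written exactly as before; the index only changes how the database
# finds the matching rows.
rows = conn.execute(
    "SELECT title FROM catalog WHERE author = ?", ("Capslock, C.",)
).fetchall()
print(rows)   # [('Using Upper Case',)]
```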

Tech Talk

Database tuning: Deciding how a particular database should be organized for a certain machine, for example, whether some or all indexes should be held in memory, is called tuning the database. For complex systems, especially those with multiple processors, careful tuning can make a huge difference in performance.


Nonrelational Databases

The relational type of database discussed above is not the only type in wide use. Of the four other kinds, the most primitive are flat-file databases. These databases store information sequentially and are really just modernized versions of the original card file systems. They are not normally used today—at least for new applications. The next types in order of chronological development and degree of sophistication are hierarchical databases, network or CODASYL databases, and object databases.

Hierarchical and Network Databases

Hierarchical database systems organize information in the same tree-like structure as computer file systems. Network databases are very similar to the hierarchical kind with the exception that they make it easier to have very complex "many to many" relationships of data—for example, the kind shown in the tables of our library database in which links can be made between almost all elements. Since network databases are an extension of the hierarchical model, from here forward we'll lump the two together under the latter name, which is the more widely used. The disadvantages of these older types of database are that they are extremely complex to develop and to maintain. The latter point is particularly important and has two principal dimensions. First, changes to the underlying database structures are very difficult and require a lot of programming effort. This has always been a problem because businesses are in a constant state of flux (new divisions, new products, different procedures, and so forth) so that parts of the database are continually being rewritten. Second, asking even simple questions of these databases requires the assistance of a programmer. So even when the structure of the database hasn't changed, significant programmer effort is required to produce something as straightforward as a new report. Obviously, ad hoc queries are pretty much impossible.

Query Stuff

To say that amateurs can query relational databases with some success is accurate but misleading; few businesses would leave their database vulnerable to open-ended queries. While queries can't damage the database (they don't make changes to it; they just read from it), they can "bring the database to its knees" by asking the wrong kind of question. For example, an innocent query that returns just a few records but also reports their sum as a percentage of the total for all records can force the database to read and copy information from every record. If the database is large, you can safely go out for a cup of coffee after starting this kind of query. Just don't let your fellow users add the cream or sugar for you. Solutions to this problem include limiting the kinds of questions that can be asked, allowing queries only at certain times of day (when transaction activity has dropped), or (best) forcing queries to be run only on copies of the database.


But these databases are still widely used. David Vaskevitch, a senior technologist at Microsoft, says that well over half of corporate databases still use the hierarchical model. One reason for this persistence is that, when skillfully written, hierarchical and network databases are much faster than relational systems. This is still an issue with mainframes, but isn't much of a factor for microprocessor-based systems, where additional speed is easily achieved with cheap hardware. Another reason is that while relational systems are much more flexible, this factor is most important for programs that do analysis. For transactional databases, by contrast, flexibility is less of a problem and the greater speed of hierarchical systems is even more of a benefit.

Object Databases

Describing the difference between object-oriented and relational databases is more complicated than explaining the difference between hierarchical and relational. One problem is that many people have trouble articulating exactly what an object-oriented database is. How is it different from its relational sibling? The relative immaturity of the technology makes it difficult to answer the question, but two examples can be given. One advantage of object-oriented databases is in dealing with unusual types of data. A normal database handles only limited types of data, which must be predefined and used in a consistent way; typical data types include text, integers, currency, and the like. A relational database can retrieve data that it doesn't understand—for example graphical data such as photos or film clips—by putting them in a special field called a BLOB (binary large object). The problem with BLOBs is that the database can only store and retrieve this kind of information; it can't manipulate it in any way because it doesn't understand it. An object-oriented database, on the other hand, includes a method, a way of manipulating the data, in each object. So an object-oriented database could not only retrieve, but also perform an analysis on, a BLOB-type object. For example, in a database that includes photographs, the object-oriented database's method could return all photos with certain combinations of colors.
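
A rough sketch of the difference in spirit, in Python: to a relational database a photograph is an opaque BLOB it can store and hand back, while the object-database idea is to keep a method alongside the data that knows how to analyze it. The Photo class and its dominant_color method are invented for illustration; a real object database and its query interface are far more involved.

```python
from dataclasses import dataclass
from collections import Counter

# What a relational database sees: an opaque blob of bytes that it
# can store and retrieve but cannot interpret.
photo_blob = b"\x89PNG...raw image data..."

# The object-database idea in miniature: the data travels with a
# method that knows how to analyze it. The class and method here are
# invented purely for illustration.
@dataclass
class Photo:
    pixels: list            # (red, green, blue) tuples

    def dominant_color(self):
        """Return the most common pixel value in the image."""
        return Counter(self.pixels).most_common(1)[0][0]

snapshot = Photo(pixels=[(255, 0, 0), (255, 0, 0), (0, 0, 255)])
print(snapshot.dominant_color())   # (255, 0, 0): the photo is mostly red
```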

Tech Talk

BLOB: A Binary Large Object (BLOB) refers to a collection of data, usually video or sound information, that a relational database is able to access, but not manipulate.


The second major benefit of object-oriented databases, according to their partisans, is the ease of integration of the database with programming languages. Most large, sophisticated projects require the writing of custom code. Because the languages are very different, it is quite difficult for programmers to make a relational database's SQL work smoothly with the programming language they are using (probably Cobol, or perhaps C). Object-oriented databases, on the other hand, interface naturally with an object-oriented language—C++, Smalltalk, or Java. This integration makes for faster, less buggy software. While object-oriented database companies are getting a fair amount of attention in the stock market, they aren't yet conquering the business world. A principal problem, as noted earlier, is that companies are unwilling to tinker with what works. Businesses that have just made the transition to relational systems are certain to stand pat, while those planning to make a move from old-fashioned hierarchical systems are obviously pretty risk-averse and are going to be reluctant to amplify their exposure by embracing what they consider to be an unproven technology. And, for those who perceive that object-oriented technology offers important potential advantage to their business model, vendors like Oracle, Sybase, and Informix are blunting the market for pure object-oriented databases by offering hybrid relational/object-oriented database products. In summary, while object-oriented databases are likely to be more widely adopted every year, the conservative nature of the database market, where terms like "mission critical" and "bet the company" are used in earnest, means that this new type of software won't have the same sweeping impact as other new technologies, such as Java.

The Power of Legacy Database Systems

It seems clear that the newer types of database—relational and object—are superior to the old time hierarchical model. So why are these "legacy" software systems still so important? The major reason is that successful database systems are very difficult to build and even harder to replace—so much so that businesses quickly find themselves adopting the "if it ain't broke, don't replace it" philosophy. If a database functions well, which means that it provides accurate information quickly, businesses are very unlikely to invest the time and money needed to swap it for something new—especially given that all big software projects carry considerable risk. It's this latter issue that explains why so many older systems are still functioning with no plans for replacement.

The factors that managers have to consider in comparing risk to reward are complex and changing. For a while, the high cost of mainframe hardware, together with the even higher cost of maintaining mainframe software, drove many corporations to develop new systems based on relational databases and minicomputer or similar hardware. To emphasize the maintenance issue, remember that while an informed amateur can query a relational DBMS with a previously unasked question and a reasonable prospect of success, it takes a programmer (most likely writing in COBOL) to retrieve information in a form not previously requested from a commercial hierarchical database. If your company has this kind of database, and you want a question answered, you have to go to Information Services (IS) and get a programmer to help you. Based on the user support issue alone, almost any RDBMS, whether client-server or not, costs much less in personnel than its traditional equivalent.

So, the late 1980s saw a vast rush away from mainframe-based hierarchical databases and toward smaller platforms running relational databases. The rush was mostly for new applications, though. Conversion of existing programs occurred at a much slower pace. The transition from old systems would have been faster if everyone had succeeded. Generally speaking, smaller companies did well, but many larger ones experienced tough times. Failures typically resulted from overly sanguine estimates of how much a new system would cost and how soon it could be ready. Managers pulled the plug on these projects not because they were impossible, but because they couldn't be accomplished in a reasonable time for a reasonable cost. A secondary factor was the fear that these problem-plagued projects would lead to problem-plagued software—something that few companies can afford. The industry term for core database software is mission critical, and companies take its stability as seriously as NASA does its launches. If there is a possibility that your billing software won't work reliably, or that you can no longer accurately track inventory or maintain sales records, or that your system won't always be available when needed, then your new database project has taken on a "bet the company" dimension.

Tech Talk

Mission critical: The term mission critical, borrowed from NASA and with obvious meaning, refers to database systems that, should they fail, would bring the business or organization down with them.


Problems with Multiuser Databases

We can't go into the details of database operations here, but it is useful to give an example of problems that occur when multiple users are accessing the same data source. Assume that you are in Cleveland trying to get a seat on a certain flight to Phoenix. The clerk tells you a seat is available and starts to enter your reservation, only to find that someone else has gotten it in the few seconds since he first called it up. When you consider all of the people who can be accessing the database at any one time, this sort of thing can happen often. One way to deal with it is to lock records when they are accessed. While clerk A is looking at seat 24C, others are denied access to this record. Simple enough. One thing that can happen, though, is that multiple users can lock a series of adjacent records. The software, which is designed to move sequentially from record to record, can then find itself surrounded by locks and unable to move. This form of software gridlock is called deadly embrace. There are ways of dealing with this, but it illustrates the kinds of problems that have to be resolved when a database must support multiple, simultaneous users. These problems become much more challenging when there are multiple users and distributed data; we'll talk about this in Part 3.
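
The deadly embrace can be sketched in a few lines of Python, using thread locks to stand in for record locks: each clerk holds one record and waits for the other's. A real DBMS detects such cycles and rolls one transaction back; the timeout here just makes the stall visible. The second seat name and the timings are invented for illustration.

```python
import threading
import time

# Two locks standing in for two adjacent seat records.
seat_24c = threading.Lock()
seat_24d = threading.Lock()

def clerk(name, first, second):
    with first:                       # lock the first record
        print(f"{name} holds a record")
        time.sleep(0.1)               # give the other clerk time to lock theirs
        # Try for the second record. A real DBMS would detect the
        # cycle and roll one transaction back; here we just time out.
        if second.acquire(timeout=1):
            print(f"{name} got both records")
            second.release()
        else:
            print(f"{name} is stuck waiting: deadly embrace")

# Clerk A locks 24C and then wants 24D; clerk B does the opposite.
a = threading.Thread(target=clerk, args=("Clerk A", seat_24c, seat_24d))
b = threading.Thread(target=clerk, args=("Clerk B", seat_24d, seat_24c))
a.start(); b.start()
a.join(); b.join()
```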


Faced with corporate and personal peril, managers examine alternatives very closely, and standing pat has strong appeal. A popular industry saying is "No one ever got fired for buying IBM." Helped by the corporate giant's indefatigable sales force, managers observed that while relational databases are more flexible, with appropriate effort their hierarchical brethren can be made to do anything they can do. Further, while relational systems can run on much less expensive client-server systems, the falling price of mainframe hardware since about 1990 has lowered what was once a huge gap. The bottom line, then, is that the major reason for abandoning a legacy system has been that the cost of maintenance (mainly people) is perceived to be unacceptably high in an increasingly competitive environment. Corporations that have converted to RDBMSs and/or to client-server have considered that the risk and up-front cost was offset by significantly lower costs over the long term. But the lure of downsizing all those expensive jobs has been followed only when corporate resources allowed for the time and expense needed to ensure that a replacement could be implemented with minimum risk.

Database Markets

Let's now turn to a discussion of databases according to the kinds of businesses and organizations that use this type of software. There are three principal markets: small office/home office, small-to-medium business, and medium-to-large business.

Small Office/Home Office

The first and lowest level of activity is the small office/home office (SOHO). These users normally employ desktop (workgroup) packages. The major workgroup products at the moment are Access (Microsoft), Paradox (Corel), and Approach (Lotus, IBM). With a little effort and determination, such packages can be turned into what appear to be applications. The user starts the system not by launching Access or Paradox or whatever, but by starting a special standalone program, probably including data entry, query, and report functions, that the user created with the parent application and that now runs (sort of) independently using resources provided by the parent (the user has to have a copy of Access or Paradox or whatever on his machine). In another approach, the parent program could be used to write a completely separate application that is then compiled and run independently. This method was common back when dBASE was popular (in the far off 1980s). People would develop an application, let's say an inventory product for a small business, using the dBASE language, then run it through a compiler like Nantucket Software's Clipper. They could then distribute or sell copies to others (maybe even with a shrink-wrap). Writing in dBASE was much faster and produced a product almost as efficient as starting from scratch with a basic language like C. While some people still write programs for databases, the SOHO market mostly uses applications that are built on, and dependent on, the parent software. The programming languages embedded in these databases (for example, Visual Basic is a part of Access) allow a high degree of customization. There are inherent limitations to this approach, though; software developed for single desktop use usually doesn't scale well—if your business experiences explosive growth or goes into a completely new area (or both), the software probably won't be able to handle the new demands placed on it. Likely, you'll have to start all over.

Tech Talk

Workgroup: The term workgroup is used for software that supports a fairly small collection of users who are working on the same project or who are responsible for a common area of a business or organization. A workgroup is usually connected with a local area network (LAN).


Tech Talk

dBASE: One of the first DBMSs for the PC was a program called dBASE. Published by a company called Ashton-Tate, dBASE created a standard both for a data storage format and for a database programming language. dBASE thrived into the early 1990s, but its developers were too slow to adapt to the graphical environments presented by the Macintosh and Microsoft's Windows.


Small-to-Medium Businesses

The next type of market is for small-to-medium businesses. As we move up the database ladder, it's important to note that whether these larger companies develop their own database or outsource, few today actually write the underlying database code themselves. Instead, they employ one of the enterprise level databases from companies like Oracle, IBM (DB2), Sybase, Informix, Compaq/DEC (RDB), or Microsoft (SQL Server). As described in Chapter 8, these databases come with what amount to fourth generation programming languages that greatly simplify the development of modules for every aspect of database operation and maintenance.

Tech Talk

Enterprise: Software systems that are used across a large, complex business are called enterprise systems. For example, Oracle's software works at the enterprise level, but Microsoft's Access database is a workgroup product.


Businesses and organizations in this segment belong to generic categories, such as medical, automobile dealership, home service, and so forth, and therefore benefit from software that has already been created for that type of activity. One example would be a doctor's office. No sane medical practice, especially in these challenging times, would develop its own database system—the market is full of high-quality offerings, closely customized for specific practices, that an office can simply buy and install. The office can then outsource support and maintenance—the vendors of the product (or perhaps a third party) provide both support (assistance with problems that occur in daily operations) and maintenance. Maintenance refers to revisions to the software that are either generic (for example, inclusion of a new Medicare claim process) or customized (for example, changes if the practice adds its own laboratory).

The software produced for companies in this category may not come in a shrink-wrap, but the organizations using it know very little about computer technology and feel they are much better for their ignorance. Companies that buy off-the-shelf software have two important characteristics: their business model is fairly traditional (so the software they use doesn't require much customization) and they don't see information technology as a key aspect of their competitiveness. They are content to stay with the pack in their systems, focusing competition exclusively on products or services.

Medium-to-Large Businesses

The final market tier includes both medium- and large-sized businesses. These organizations normally require custom software. They may need it because their business model is one that doesn't fit any standard category or because they see information systems as a critical element of competitiveness—or both. Companies like this take one of two approaches. In one, they set up an Information Systems (IS) office and develop and maintain the needed software. This is a costly approach, but it can pay huge dividends. Perhaps the best example is the Boeing Corporation, which has been a consistent early adopter of the most advanced computer technology. Its efforts have resulted in extraordinary productivity in aircraft design. There are thousands of other examples of this kind, covering manufacturing, banking and finance, and services. One of the most unusual technology developers is Domino's Pizza, which has pioneered fast links between telephones and computers for handling its order processing. This effort has included the idea of having a single telephone number for the many stores in a large region, with calls being automatically routed to the store nearest the caller's residence as indicated by his phone number. Projects like these are extraordinarily challenging and carry high risks, but offer huge potential in both cost savings and customer satisfaction.

Large businesses are generally too complex to be able to benefit from the kind of off-the-shelf software that medical practices, automobile dealers, and the like can use. But that doesn't mean they have to follow Boeing's path and do it all on their own. There are two basic alternatives. One is to use Enterprise Resource Planning (ERP) software to create an integrated database; the other is to outsource all, or nearly all, operations. In the first approach, the most common way today for a large company to develop an integrated database environment is to go to a software house that specializes in this. The market leader, SAP (founded by ex-IBM programmers in Germany), has prewritten modules for things like inventory, sales, human resources, and manufacturing that it uses to connect an array of databases running on different platforms from PC to server to mainframe. SAP's programmers and analysts design the integrated system and customize the software. This can be a huge task, but the gains in productivity usually cover costs in a few years. Other major players in the ERP game include PeopleSoft, Oracle, and J.D. Edwards. The current market size is in excess of $7 billion per year. Businesses normally hire consultants, such as Andersen Consulting, to assist in installing the software.

Tech Talk

ERP software: Database software that is designed to integrate the diverse functions of a large organization (sales, payroll, inventory, marketing, etc.) is called Enterprise Resource Planning (ERP) software. Vendors such as SAP and PeopleSoft have made fortunes helping businesses replace a heterogeneous mix of databases with a single interface and a high level of interactivity.


What's (Really) New?

There have been frequent horror stories about huge cost overruns in the installation of ERP software. Many assume that this means that the software vendors and attendant consultants grossly underestimated the cost of developing the new system. In fact, the cost increases are normally attributed to the fact that the company installing the software uses the occasion to change their business processes. Since such rethinking can have very complex consequences, it is very hard to estimate the resources it will require.


The other alternative, outsourcing, is today usually a variation on the same theme. The idea is that the company gets out of the computer business entirely and has someone else do everything for them. Unlike the situation with generic business software, there will be a lot of custom programming required. But the company doesn't worry about this—they focus on selling widgets, and the outsourcing company is responsible for keeping the code up to date and servicing the hardware systems that it runs on. Outsourcing is a big business. Ross Perot's charming personality has graced our political landscape because, after leaving a sales job at IBM, he made billions with an outsourcing firm, Electronic Data Systems (EDS). This giant has garnered both medium and large customers (General Motors, which owned EDS for a while, fits in the latter category). IBM also plays in this market (a few years ago, Kodak turned over all its computer operations to IBM), and Andersen Consulting is a major power as well. Often, companies decide to outsource when they move to ERP software. They require consultants to assist with the installation in any case, and it's easy at that point to ask the outsourcing contractor to hang around and keep things humming along. Needless to say, there are a lot of variations on this theme. Some companies maintain their core operations internally, but outsource, for example, their e-commerce system or their network operations.

Even for businesses that don't outsource, consulting is a critical benefit in the development and maintenance of large corporate database and related systems. As software gets more and more complex, it doesn't make sense for a business, except for a few like Boeing, to try to maintain experts in every area on their staff. Consulting services, while expensive, offer companies the opportunity to stay at the cutting edge without making too big an investment in permanent IS staff. Consulting also offers fat profits and is becoming increasingly competitive; AOL's Netscape division, for example, has a business that will handle Internet services for customers, either consulting in specific areas or in managing the whole enchilada.
