The Pendulum Swing of Distributed Computing Versus Centralized Computing

Given the ideas that we wanted to offer for the next generation of computing power, it's worth asking why HP decided to pursue the concept of computing as a utility. There are two main reasons for this shift: the swing back toward centralized management of computing, and the growth of the World Wide Web, with its attendant shift in users' perceptions about where their computing dollars can be spent. Both ideas are worth exploring in more detail.

Curiously, the mainstream view of how best to make computing power available to an organization swings back and forth like a pendulum between two models. At one end of the spectrum is distributed computing, which spreads processing power across as many machines as possible. At the other is the idea that it is better to centralize everything in one large machine for efficiency and ease of administration. This cycle is driven primarily by developments in technology.

In the 1970s, mainframes provided the vast bulk of the computing power available; in fact, companies ran on a correspondingly centralized business model. Before the IBM PC arrived in 1981, users were restricted to reaching the mainframe over slower network links via terminals. With the rise of personal computers in the 1980s, however, the pendulum swung the other way, toward what we now call distributed computing. Eventually, the idea that one large machine could be replaced with many smaller ones became popular. This was the genesis of the cluster concept still in use today: combining several small machines so that they work like one large one. The furthest outgrowth of this trend shows up in what is called cooperative computing, in which cycles on many different machines across the network are harnessed simultaneously.

This trend took hold in earnest in 1984, when the PC–Mac wars began. The development of relatively inexpensive and reliable desktop computers moved many of the utilities that had previously been restricted to the mainframe directly onto the user's workstation. Only when chip development ran into the limits described by Moore's second law (discussed in detail in Chapter 6) did another shift begin in the IT community.

PC Center of Expertise

By the late 1990s, these limitations led to a program at HP called the PC Center of Expertise, or PC-COE; we still use this term and follow the processes developed under that program, and several other large companies adopted similar plans. Through such programs, companies tried to reduce the technical complexity and difficulty of managing distributed computing power in their enterprises. At the time, enterprises were looking for better ways to manage the thousands of PCs on users' desks, as software and hardware upgrades and maintenance costs were spiraling out of control. HP developed an internal set of programs, running on both users' PCs and a central server, to better manage all of the computing resources in the enterprise.

To control and administer all of an organization's PCs, it became standard practice to install a common image on each machine that defined which applications could be run and supported. This approach was usually chosen to simplify support and training.
