Foreword

The human desire to understand things is insatiable and perpetually drives the advance of society, industry, and our quality of living. Scientific inquiry and research are at the core of this desire to understand, and, over the years, scientists have developed a number of formal means of conducting that research. The third means of scientific research, developed in the past 100 years after theory and experimentation, is modeling and simulation. The sharpest tool invented thus far for modeling and simulation has been the computer. It is fascinating to observe that, within the short span of a few decades, humans have developed ever larger-scale computing systems, increasingly based on massive replication of commodity elements, to model and simulate complex phenomena in increasing detail, thus delivering greater insights. At this juncture in computing, it looks as if we have managed to turn everything physical into its virtual equivalent, letting a virtual world precede the physical one it reflects so that we can predict what is going to happen before it happens. Better yet, we want to use modeling and simulation to tell us how to change what is going to happen. The thought is almost scary, as we are not supposed to do God's work.

We all know the early studies of ballistic trajectories and code breaking, which stimulated the development of the first computers. From those beginnings, all kinds of physical objects and natural phenomena have been captured and reflected in computer models, ranging from particles in a “simple” atom to the creation of the universe, with modeling the earth’s climate in between. Computer-based modeling systems are not just limited to natural systems, but increasingly, man-made objects are being modeled, as well. One could say that, without computers, there would not have been the modern information, communication, and entertainment industries because the heart of these industries’ products, the electronic chips, must be extensively simulated and tested before they are manufactured.

Even much larger physical products, like cars and airplanes, are also modeled by computers before they go into manufacturing. Airbus virtually assembles all of its planes' graphically rendered parts every night to make sure they fit together and work, just as a software development project has all its pieces of code compiled and regression tested every night so that the developers can get reports on what they did wrong the day before and fix their problems. Lately, modelers have advanced to simulating even the human body itself, as well as the organizations we create: How do drugs interact with the proteins in our bodies? How can a business operate more efficiently to generate more revenue and profits by optimizing its business processes? The hottest area of enterprise computing applications nowadays is business analytics.

Tough problems require sharp tools, and imaginative models require innovative computing systems. In computing, the rallying cry from the beginning has been “larger is better”: larger computing power, larger memory, larger storage, larger everything. Our desire for more computing capacity has been insatiable. To solve the problems that computer simulations address, computer scientists must be both dreamers and greedy for more computing power at the same time. And it is all for the better: It is valuable to be able to simulate in detail a car with a pregnant woman sitting in it and to model how the side and front air bags will function when surrounded by all the car parts and involved in all kinds of abnormal turns. More accurate results require using more computing power, more data storage space, and more interconnect bandwidth to link them all together. In computing, greed is good—if you can afford it.

That is where this book will start: How can we construct immensely powerful computing systems, with just the right applications to support modeling and simulation, at a cost low enough to make them accessible to everybody? There is no simple answer, but there are a million attempts. Of course, not all of them lead to real progress, and not all of them can be described between the two covers of a book. This book has carefully selected eight projects and enlisted their thinkers to show and tell what they did and what they learned from those experiences. The reality of computing is that it is still as much an art as it is a science: it is the cleverness of how to put together the silicon and the software programs to come up with a system that has the lowest cost while also being the most powerful and easiest to use. These thinkers, like generations of inventors before them, are smart, creative people on a mission.

There have certainly been many innovations in computer simulation systems, but three in particular stand out: commodity replication, virtualization, and cloud computing. Each of these will be explored in this book, although none of these trends has been unique to simulation.

Before commodity replication, the computing industry had a good run of complex proprietary systems, such as the vector supercomputers. But when designs evolved to 30-layer printed circuit boards at costs that could break a regional bank, the approach had gone too far. After that, the power was in the replication of commodity components like the x86 chips used in PCs: employ a million of them while driving a big volume discount, and voilà, you get a large-scale system at a low(er) cost! That is what drove computing clusters and grids.

The complexity animal was then tamed via software through virtualization, which abstracts away the low-level details of the system components to present a general systems environment supporting a wide array of parallel programming environments and specialized simulations. Virtualization allows innovation in computer system components without requiring that applications be changed to fit the systems. Finally, cloud computing may free users from having to own large simulation computers at all; instead, they use computing resources as needed, sharing such a system with other users. Better yet, cloud resources can be rented from a service provider, with users paying only for what they use. That is an innovation all right, but it feels like we are back to the mainframe service bureau days. In this book, you will learn about many variations of innovations around these three themes as applied to simulation systems.

So, welcome to a fascinating journey into the virtual world of simulation and computing. Be prepared to be perplexed before possibly being enlightened. After all, simulation is supposed to be the state of the art when it comes to complex and large-scale computing. As they say, the rewards make the journey worthwhile.

Songnian Zhou

Toronto, Canada

March 2011
