Chapter 2. Ten Technology Trends

All major technology trends both impact and are impacted by NDC development. The extent to which the Internet is bound to exponential growth phenomena is reflected by the innovations coming from the communities that have embraced it.

To identify a trend, we must establish a measurable threshold of participation by agents within a fitscape. The adoption of a given technology is a function of a complex set of relationships among agents, governed by economic principles and assumptions and measured accordingly.

All major technology trends today are either direct beneficiaries of the metatrends (the Nth Laws) cited in Chapter 1 or are cousins of those trends, and all have direct impact on NDC development. Each trend is also enabled by NDC, with community-building technologies greatly increasing productivity. The attributes of a fitscape apply to the communities that constitute the autonomous agents affecting each trend, institutional and individual.

In an effort to create a list of ten and only ten major technology trends, I've erred on the side of inclusion and lumped a few together. While this list is not meant to be complete, it suggests the scope and depth that accelerating metatrends engender.

  1. Wireless and Mobile Computing

  2. Web Services and the Semantic Web

  3. Robotics

  4. Genomics and Biotechnology

  5. Material Science and Nanotechnology

  6. Internet2, Pervasive and Ubiquitous Computing

  7. Globalization, COTS, and Increasing Competition

  8. Real-Time and Embedded Systems, Grid Computing, Clusters, and Composability

  9. Security, Global Transparency, and Privacy

  10. Competing NDC Frameworks, the Emerging Global OS, and Recombinant Software

Thorough exploration of each of the trends would require at least an essay, if not a small library. For our purposes, a terse note for each must suffice.

Wireless and Mobile Computing

Wireless technologies are transforming the computer industry and the very concept of NDC. No longer bound by wires to homes and offices, wireless datacom promises not only freedom of movement for connected users but also an increased likelihood of finding connections where hard-wired infrastructures are not yet globally competitive. Indeed, developing nations may find that skipping the copper phase of datacom growth in favor of wireless is not only faster but much less costly. Coupled with increasingly effective data compression (which allows greater quantities of information to be squeezed through limited bandwidths) and space-based satellite networks that serve remote locations, the wireless/mobile trend in datacom is drawing high levels of investment, and therefore NDC developer interest, worldwide.

One near-term technology of consequence for high-speed wireless access is IEEE 802.11b, aka WiFi.[1] WiFi is a relatively short-range technology; a single base station may cover, say, a small building. But base stations are cheap—around US$100 each—making deployment of wireless Internet infrastructures relatively inexpensive given sufficient population density, as in most cities. Bill Gates has announced a commitment to WiFi going forward, and for better or for worse, when Bill Gates speaks, the industry listens. According to Gates:

802.11, we think, is a fundamental technology that every business, every home, every convention center is going to be wired up with high capacity 802.11. And that's finally the way that we'll have information wherever we want it.[2]

A number of problems remain to be solved before Gates' cut on WiFi becomes reality. For example, how does an ISP charge for services for network connections that are by definition short and transient? Will meta-ISPs emerge, aggregating services transparently to users, akin to the early cell phone providers in the United States? Will microtransactions be required for such services to work? Or is Microsoft itself planning on becoming the WiFi service provider of choice, which could spark yet another round of legal challenges for the Redmond giant? Clearly, once these kinds of problems are solved, the ability to provide wireless customers with a mobile Internet connection independent of cell service provider issues could trigger a renaissance in high-speed wireless access.

Another wireless technology of consequence to consider is Bluetooth.[3] Unlike WiFi, the Bluetooth specification includes both link layer and application layer definitions for product developers supporting data, voice, and content-centric applications. Radios that comply with the Bluetooth specification operate in the unlicensed 2.4 GHz radio spectrum—a situation that may one day become a problem if locale-specific band licensing becomes reality. Bluetooth radios use a spread-spectrum, frequency-hopping, full-duplex approach to provide a high degree of “interference immunity”; that is, several personal Bluetooth devices should be able to operate simultaneously without concern for interference from other local users. Bluetooth competes with WiFi but also complements it, being likely more appropriate as the personal area network (PAN) technology of choice, aggregating future personal NDC devices; for example, my PDA, wearable GPS system, implanted cardiovascular monitor, and Internet goggles might all share data via Bluetooth with my WiFi-connected cell phone, which also functions as my personal web server and soul catcher!
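Frequency hopping is easier to picture with a toy model. The following Python sketch is illustrative only and is not the real Bluetooth hop-selection algorithm (which derives its sequence from the master device's clock and address); a seeded pseudo-random generator stands in for it. Two radios sharing a seed hop in lockstep, while a neighboring piconet on a different seed lands on the same channel only rarely.

```python
import random

CHANNELS = 79  # Bluetooth divides the 2.4 GHz ISM band into 79 1-MHz channels

def hop_sequence(seed, hops):
    """Toy hop selection: devices sharing `seed` hop in lockstep."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

# Simulate one second at Bluetooth's nominal 1,600 hops per second.
piconet_a = hop_sequence(seed=0xA5A5, hops=1600)
piconet_b = hop_sequence(seed=0x5A5A, hops=1600)

collisions = sum(a == b for a, b in zip(piconet_a, piconet_b))
print(f"channel collisions in one simulated second: {collisions} of 1600")
```

With 79 channels, two independent piconets should collide on only about 1600/79, or roughly 20 hops per second, and each collision costs a fraction of a millisecond of airtime; that is the intuition behind "interference immunity."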

WiFi and Bluetooth are just two examples of the investments currently being made in wireless and mobile technologies. Wireless and mobile datacom are changing technology usage patterns, the computer industry, and NDC development.

Web Services and the Semantic Web

That Web Services is a trend should be obvious. A growing list of vendors, tools, press releases, and books is sufficient witness to that fact. But for the promises of Web Services to materialize, the Semantic Web must also be considered.

In the May 2001 edition of Scientific American, Tim Berners-Lee, James Hendler, and Ora Lassila articulated a succinct and impressive vision for the future of the Internet from the perspective of meaningful information and the impact it could have.

The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. The first steps in weaving the Semantic Web into the structure of the existing Web are already under way. In the near future, these developments will usher in significant new functionality as machines become much better able to process and understand the data that they merely display at present.[4]

In addition to the invention of the transistor in 1947 by William Shockley, John Bardeen, and Walter Brattain at Bell Labs, the publication of The Mathematical Theory of Communication by Claude E. Shannon (also of Bell Labs) the following year effectively enabled the unfolding era of computing in which we all now participate.[5] Information theory is as confusing to the nonmathematician as ANSI C code is to the nonprogrammer, with counterintuitive cuts on (nonthermodynamic) entropy, information content, and reversibility of information. A detailed discussion of information theory is beyond the scope of this effort; suffice it to say that Shannon's work has enabled a highly mathematical and tractable approach to the idea of information and communication, and this approach has enabled modern datacom even as the transistor provided a basis for implementation of really cool computational devices.
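The central quantity of Shannon's theory can at least be stated compactly. For a source emitting symbols x with probabilities p(x), the entropy, the average information content in bits per symbol, is:

```latex
H(X) = -\sum_{x} p(x)\,\log_2 p(x)
```

A fair coin yields exactly 1 bit per toss; a coin biased 90/10 yields only about 0.47 bits. The gap between a source's entropy and its raw symbol rate is redundancy, which is precisely what the data compression mentioned earlier squeezes out.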

Shannon effectively removed information from its substrate; once mathematically independent of physical constraints (such as matter and energy), information could flow over a wide array of “carrier” modalities as abstract and independent as mathematics itself. Arguably, the path to cybernetics[6] was paved with Shannon's information theory, which itself was perhaps another misunderstanding of the relationship between people and technology in the Einsteinian century of ethical relativity.

With the emergence of the Semantic Web, information is once again potentially grounded in meaning and therefore no longer divorced from context. Paradoxically, the Semantic Web is enabled by XML,[7] promising self-describing data, which metaphorically salutes information theory's illusion of separation even as it enables the emergence of inherent meaning, which can only be grounded in context, which can only be grasped by acknowledging the connectedness of the information. Thus, a fine paradox is embraced.
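A small, concrete illustration of "self-describing" data. The XML fragment below is hypothetical (the element names are invented for this example), but it shows the point: the field names travel with the values, so a program that has never seen the schema can still walk the structure.

```python
import xml.etree.ElementTree as ET

# A hypothetical fragment: the markup names its own fields, so meaning
# travels with the data rather than in out-of-band documentation.
doc = """
<sensor-reading>
  <location>greenhouse-3</location>
  <temperature units="celsius">21.4</temperature>
</sensor-reading>
"""

root = ET.fromstring(doc)
for child in root:
    print(child.tag, child.attrib, child.text.strip())
```

Of course, tags alone give syntax, not semantics; it is the Semantic Web's layering of shared ontologies on top of XML that is meant to close that gap.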

According to the aforementioned article in Scientific American,

the Semantic Web can assist the evolution of human knowledge as a whole . . . [it] lets anyone express new concepts that they invent with minimal effort. Its unifying logical language will enable . . . concepts to be progressively linked into a universal Web. This structure will open up the knowledge and workings of humankind to meaningful analysis by software agents, providing a new class of tools by which we can live, work and learn together.

This idea has great potential and is a major technological trend (and challenge) that will have considerable impact on NDC programming. By the same token, for Web Services to rise to the hype, as it were, much will be required of the Semantic Web.

Robotics

 

Computational power is to a mind what a locomotive engine is to a train. The train can't move if the engine is too small. But engine power is effective only if properly coupled to the load. Locomotive engines of the eighteenth century learned the relationship between speed, pulling power, engine size, and transmission ratios by trial and error, no doubt overturning many horsecart-derived intuitions. Two centuries later, robotics is learning analogous lessons.

 
 --Hans Moravec

Where did the idea of a robot begin? It depends on the context of the discussion. As early as the third century BC, a Greek engineer named Ctesibius made organs and water clocks with movable figures. Did that mark the beginning of modern-day robotics? Or was it even earlier, with the ancient Egyptian water clocks that purportedly foretold the future? Perhaps the mind-body problem as formulated by René Descartes at the beginning of the Enlightenment marked a turning point in the way we humans categorize the nature of being and the potential for “mind children”—as Moravec put it,[8] intelligent creatures created with our own hands and minds. Certainly, Mary Shelley gave us reason to pause with Frankenstein in the early 19th century. But then Isaac Asimov turned fear once again into hope in 1942 when he wrote “Runaround,” the story that first stated his “Three Laws of Robotics”:

  • “A robot may not injure a human, or, through inaction, allow a human being to come to harm.

  • “A robot must obey the orders it is given by human beings except where such orders would conflict with the First Law.

  • “A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”[9]

In 1948, when Norbert Wiener published Cybernetics, affecting and affected by Shannon's work on information theory and sharing temporal influence with the invention of the transistor, the Information Age was born.[10] Perhaps that was indeed the year we became post-human and robotics found traction.

Key to any meaningful advance in robotics beyond that of Ctesibius and his clocks is computer science. Before the birth of the Information Age, robots, and any serious notion of animated extrabiological forms, amounted to simple puppetry, not robotics in the strictest sense. But with computer “intelligence,” robots are now not only imaginable, they are becoming common.

Robots routinely aid in manufacturing today and have done so for the better part of the past decade, if not longer. As with the adoption of any technology, the early phase has been marked by slow, often stuttered, steps. At some point a “knee” is reached and the sky-pointing view of the S-curve is achieved. Economic blue sky continues until market saturation occurs or at least until another technology comes along that promises less cost, more utility, or both.
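The adoption curve described above is commonly modeled as a logistic function. A minimal sketch with invented parameters, just to make the "knee" visible:

```python
import math

def adoption(t, ceiling=1.0, growth=1.2, knee=6.0):
    """Logistic S-curve: slow early uptake, a 'knee' where growth
    takes off, then saturation at the market ceiling."""
    return ceiling / (1.0 + math.exp(-growth * (t - knee)))

for year in range(0, 13, 2):
    print(f"year {year:2d}: {adoption(year):6.1%} of eventual market")
```

Early years barely register, the knee arrives abruptly, and the curve then flattens near saturation, at which point only a cheaper or more useful technology restarts the cycle.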

Have you driven or been a passenger in a car that was built since 1998? That automobile was built by processes that involved quite a few robots. And since robots work with a much greater level of predictability than we humans do, have far fewer sick days, require no vacation or overtime pay, and are generally not prone to join labor unions, in the long term, robots have the clear edge when it comes to the manufacturing job market. Eventually, too, even the lowest paid sweat-shop workers will find themselves unable to compete with their robotic counterparts; the metatrends and the ever-mutating world economic fitscape ensure that technology will one day make slave-wage humans redundant too.

The implication for NDC developers? All those robots will be needing a whole lot of code going forward. Supervised and controlled by networks, as upgradeable as the next version of code they host, robots and NDC developers will find an increasing need for each other over this next decade, and probably well beyond—at least until we too are made redundant by robots better able to program themselves with the very software frameworks and computer science we are today exploring. But perhaps by then we will have worked out the interesting economic paradox that is so clearly implicit in the metatrends.

Genomics and Biotechnology

Yes, we cracked the human genome, and that too was enabled by computer science. But it was just the beginning; biotechnology will revolutionize our view of living organisms as we continue to learn to engineer DNA. Of all the major technology trends, the revolutionary effects of biotechnology may be the most shocking that we will encounter over the next decade. Collective breakthroughs in biology and medicine, including a complete rethinking of geriatrics, may improve both the quality and the length of human life, even as engineering of our environment reaches unprecedented levels of intervention and granularity of control.

What is the impact of hyper-biotech on NDC development? Enormous! What if you knew your potential lifespan was approaching 500 years? Would that have an impact on your day-to-day activities? Might you behave differently? Perhaps write a little better code? Study harder? Might the short-term fluctuations of the stock market seem as insignificant as they really are once the broader view is adopted? Might education and respect for others improve?

Game theory teaches us that the optimal strategy for success depends on an assumed frequency of interaction—if I know our encounters will be many, which I can logically assume if our lives encompass centuries, then I must also know that cooperation is the optimal strategy for my own personal success. And since we all share virtually all information, you would know that too.
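The standard demonstration is the iterated prisoner's dilemma. The toy tournament below uses the conventional payoff values; tit-for-tat, a cooperative strategy, ends up with more total points than always-defect once the game is long enough, because mutual cooperation compounds while mutual punishment does not.

```python
# Conventional prisoner's dilemma payoffs: (my points, their points).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=1000):
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)][0]
        hist_a.append(a)
        hist_b.append(b)
    return score_a

strategies = {"tit-for-tat": tit_for_tat, "always-defect": always_defect}
totals = {name: sum(play(s, other) for other in strategies.values())
          for name, s in strategies.items()}
print(totals)  # tit-for-tat: 3999, always-defect: 2004
```

The shorter the game, the better defection looks; with rounds=1, always-defect wins outright. Frequency of interaction is doing all the work, which is the point.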

These ideas don't touch on the application development opportunities inherent in the data-intensive, CPU-intensive demands of neo-biotech. As venture capital seeks the obvious rewards of innovating the cure for aging, for example, many dollars will be spent on NDC applications (and NDC application developers). And in what other ways will cracking the human genome be of value?

DNA analysis machines and chip-based systems can potentially accelerate the proliferation of genetic analysis practices, improve drug search capabilities, and enable biological sensors. The genomes of plants (from food crops to new forms of fuel) and animals (from bacteria such as anthrax to mammals) will continue to be decoded and understood. To the extent that genes determine function and behavior, extensive genetic profiling could provide an ability to better diagnose human health problems, provide designer drugs based on individual problems, and provide better predictive capabilities for genetically bound diseases.

Genetic profiling could also have a significant effect on security and law enforcement. DNA identification may complement existing biometrics technologies (such as retinal scans) for granting access to secure systems and eventually become the norm, eliminating the need for credit cards and drivers' licenses. Biosensors (some genetically engineered) may also aid in detecting biological threats, improving food and water testing, providing continuous health monitoring, and executing medical laboratory analyses. Such capabilities could permanently change the way health services are rendered by greatly improving disease diagnosis and monitoring capabilities.

These incredible possibilities are not unfolding without issue, however. Just mention cloning or genetically modified food at your next cocktail party and watch the sparks fly. Numerous ethical, legal, environmental, and safety concerns will demand resolution as humanity comes to grips with the potential effect of the incredible biological revolution that is now immediately upon us—a revolution seasoned and blessed by myriad incantations from the magic of computer science.

Material Science and Nanotechnology

In 1959, Dr. Richard Feynman gave a talk at the annual meeting of the American Physical Society at the California Institute of Technology that is considered by most nanotechnology researchers to be the inspiration for their work.[11] But it wasn't until K. Eric Drexler published Engines of Creation: The Coming Era of Nanotechnology in 1986 that the general public began to get wind of this promising approach to materials and technology.[12]

Imagine, for example, the respirocyte.[13] The respirocyte is a hypothetical device, about 1 micron in diameter, designed to bind efficiently with CO2 and oxygen. Approximately 5 cubic centimeters of respirocytes could replace every red blood cell in your body and do a better job than a metabolism that has taken millions of years to evolve. With respirocytes, you'll live longer, breathe easier, and generally feel a whole lot better. The respirocyte doesn't exist—yet. We may be 10 years away from respirocytes, or 20. But we are most certainly moving very quickly in a direction that will ultimately bring respirocytes to a pharmacy near you, and that brings up a number of interesting questions.

Will respirocytes be considered a prescription drug? A therapy? A prosthesis? If I ingest respirocytes and you don't, are we physiologically different? Are there ethical implications? What if my potential life span increases by at least 100 years simply by my taking respirocyte therapy, but the cost of respirocyte treatments is such that only the wealthiest 2 percent of world citizens can afford it? What happens then? And this is just one example of the unimaginable array of applications for material science as modified by the creations of applied nanotechnology.

Composite materials design uses computing power (sometimes together with massive parallel experimentation) to screen different materials possibilities in order to optimize properties for specific applications like catalysts, drugs, ceramics, polymers, and ultimately the assembly of very small devices like the respirocyte.

Nanoscale materials (those with properties that can be controlled at submicron or nanometer levels) are an increasingly active area of research; properties in regimes of these sizes are fundamentally different from those of ordinary materials. Examples include carbon nanotubes, quantum dots, and biological molecules. We are discovering that these materials can be prepared either by purification methods or by tailored fabrication methods, both of which require copious computer control mechanisms and computational resources.

Nanotechnology also promises quantum computing, which lives at the bleeding edge of research. With nanoscale engineering, Moore's law should continue unabated until at least 2015, by which time computing may very well have become fully ubiquitous. In the near term, it is likely that by the time this book reaches shelves (or becomes available on virtual shelves), some nanoscale materials will have found their way into next-generation PCs and other computing devices.[14]
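The arithmetic behind that expectation is simple compounding. A quick sketch, assuming the customary 18-month doubling period (the exact period is itself a matter of debate):

```python
def moore_factor(years, doubling_period=1.5):
    """Transistor-density multiplier after `years` of Moore's law."""
    return 2 ** (years / doubling_period)

# Writing in roughly 2002 and projecting to 2015:
print(f"{moore_factor(2015 - 2002):.0f}x the density of today's chips")
```

Roughly a 400-fold increase; the 1983-to-today anecdote that follows gives a feel for what that scale of change looks like in practice.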

I still recall my first real job writing code for a fledgling UNIX startup in Park City, Utah, in 1983, when there were literally hundreds of UNIX startups around the world. In that small shop, I typically developed code on a small system with maybe 256 KB of memory (as I recall) and a 2- or 3-MB hard disk drive, best case; such systems I would routinely share with at least two other developers. That was our typical shared environment for writing code, and we were lucky to have it. Tonight I'm writing this paragraph in my home office on Sun's UltraSPARC 10 workstation, which boasts 256 MB of RAM and an 8-GB hard drive. I've had this system for at least three years now, so it may be a little behind the times.

Next to my trusty UltraSPARC workstation is a newer Toshiba Tecra 8200 laptop, which has 512 MB of RAM and a 20-GB internal drive . . . oh, and hooked to it is a 32-GB external drive that I bought last year to store WAV and MP3 files for when I produce my own audio presentations for the Web. There are two other older laptops on my home LAN, both of which serve ancillary functions like handling the parallel printer or accommodating the occasional visitor who would surf the Web while I work. And that's not counting the Apple iMac that is on back-order for my last birthday present, the two Apple systems my wife uses in her studio in the basement, or the myriad embedded processors we routinely use each day whenever we watch cable television, heat coffee in the microwave, or answer the phone.

The point is, in less than 20 years, my personal access to computing systems has increased at least 1000-fold and I'm no longer required to share any of those basic productivity resources. Oh, and I've got all those systems on the Internet now too. The footprint for all the systems I have at home is considerably less than the typical shared system we used in Park City way back when. And the cost for all these systems today is maybe half of what it was in actual dollars, and is less than a quarter of what it would be if inflation is considered. That's how dramatically things have changed since 1983.

The systems I use today, which allow me to be considerably more productive than I might have been in 1983, will seem just as antiquated and provincial in 2015 as that modest shared system in 1983 does today. Nanotechnology ensures that the rate of change between now and 2015, as promised by Moore's law, will inevitably continue. NDC programmers need to be well aware not only of the dramatic increase in computing resources but also of the dramatic proliferation of computing systems they will want to network with going forward.

Internet2, Pervasive and Ubiquitous Computing

 

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

 
 --Mark Weiser

When the seminal article on ubiquitous computing was first published in Scientific American in 1991,[15] it was as visionary and important to the genesis of “ubiquitous computing” as Feynman's 1959 speech was to the genesis of nanotechnology. But Mark Weiser's article didn't take nearly as long to gain traction and active participation among a significant portion of the research community.

What is pervasive or ubiquitous computing? It's a lot more than simply the appearance of computing resources everywhere. Processors have become so small and inexpensive that the idea of embedding a processor in your shoe is no longer far-fetched.[16] But what good are processors everywhere if some ensemble of systems cannot or does not produce something actually usable from a human perspective?

Weiser has articulated several principles that we need to consider as we embark upon the mission of providing intelligence in just about anything you can think of. This is, on the surface, the teleology of ubiquitous or pervasive computing. But as with so many other aspects of learning, superficial characteristics often mask deeper organization or meaning. Consider Weiser's principles of pervasive computing:

  • The purpose of a computer is to help you do something else.

  • The best computer is a quiet, invisible servant.

  • The more you can do by intuition, the smarter you are; the computer should extend your unconscious.

  • Technology should create calm.

The era of pervasive computing is one in which computers should simply disappear. The challenges such a proposition presents to NDC developers are considerable, to say the least.

At the same time, if we consider the possibilities, benefits, and challenges of providing integrated intelligence everywhere, the potential of a next-generation network enters the equation. Internet2 is a collaborative effort that today involves at least 200 universities and businesses, all working toward a network capable of providing bandwidth several orders of magnitude greater than what even the best Internet connections can provide today.

At the heart of Internet2 are optical transmission technologies—theoretically capable of delivering a data stream that can approach a limit of 30 terabits per second per fiber once an all-optical network is in place (based on optical switching and routing devices that are envisioned but not yet commercially real).[17] With the Internet2 project come visions of a future that includes telepresence, extremely high bandwidth collaborative processes, personal broadcasts of HDTV-quality video, and more . . . the possibilities of extremely high bandwidth make even the Star Trek holodeck seem possible within the next decade.

To provide not just sheer bandwidth but some assurance of quality of service (QoS), researchers involved in Internet2 have found it necessary to consider issues of middleware, one of the themes of this book. The ideas that are surfacing regarding a future middleware that might provide more reasonable assurances of QoS for Internet2 are discussed throughout the book.

The implications of ubiquitous, pervasive computing for future NDC development are very clear because this technology will be entirely dependent on NDC.

Globalization, COTS, and Increasing Competition

 

Global economic integration will be the means by which the consequences of overpopulation in the Third World are generalized to the globe as a whole.

 
 --Herman E. Daly

If you are reading this book, one fact can be assumed: you know how to read. We live on a planet where, according to UNESCO, the assumption of adult literacy is not entirely valid. UNESCO estimates that there are about 1 billion nonliterate adults on earth today—more than 25 percent of the world's adult population. Two-thirds of all nonliterate adults are women; 98 percent live in developing countries. In the least developed countries, half the adult population cannot read.

Global economic integration is an undeniable fact today. Some argue that integration is a regional rather than a global phenomenon. I'm hard-pressed to understand how this could be so, given the proliferation of computers around our planet. The economics of the computer industry have increasingly taken on a global flavor in the past 20 years; clearly, the sourcing of computer components transcends regional boundaries, as does the assembly of systems, the creation of software, and the integration of services through networks. I believe the same is true of the automotive industry. While some segments may remain local or regional, much of the world's economy is now truly a world economy.

The utilization of commercially available, off-the-shelf (COTS) technologies has also been a trend in economic sectors where procurement was once accomplished through well-defined supplier “silos.” Military, mission-critical, and real-time implementations[18] (which include the largest consumer of computer chips in the world, the automotive industry) have all become adopters of COTS technologies over the past decade. Why? Economic pressures.

In a global economy, capital generally flows to those places in which the costs of doing a particular business are the lowest. Once it was good enough to be the best (which often means the most efficient) purveyor on your block of whatever product or service you may provide. But as transportation and communication capabilities improve, the reach of your service or product can extend into your immediate town, then to your county, then to your state or nation, then to your region. It naturally follows that extending transportation and communication capabilities to a global reach will give rise to a globally integrated economy, one in which competition must increase as a result of a wider competitive framework.

The inevitabilities of a global economy serve the trend of ephemeralization—doing more with less—just as they are disruptive to economic relationships based on preglobal models. The impact on NDC developers should be quite clear: we will be integrating COTS components into larger applications, and we will be competing with other NDC developers from literally all corners of the globe in doing so. By the same token, our products and services will need to address a potentially global audience.

Real-Time and Embedded Systems, Grid Computing, Clusters, and Composability

To remain competitive in a global economy, any firm (except perhaps a monopoly) must employ one of two fundamental strategies:

  1. Become the lowest-cost (bargain) provider.

  2. Provide added value through continuous innovation.

All marketing, branding, sex appeal, e-nonsense, and lies aside, one or the other of these strategies, or a combination of the two, is the basic teleological assumption of any business competing in world markets today. Any other approach can be categorized as a function of one of these two. The lowest-cost strategy requires doing more with less (ephemeralization)—production costs must continue to fall in order to survive fitscape pressures. The added-value strategy requires continuous investment in knowledge-based processes to create that value, the requirements for which are also dictated by the fitscape.

Earlier we spoke of the automobile sold in 1998, built in some part by robotic labor. That same automobile also enclosed you in an EmNet, a network of embedded computers, some of which were real time, which means that one very important aspect of the system's performance profile is predictable execution with respect to time. In fact, probably most or all automobiles manufactured in the 1990s featured a growing list of EmNet components as well as a growing pool of robotic laborers. Through the research I've done in the real-time space, it has become clear to me that the automotive industry has had to be an early and eager adopter of COTS technologies to compete in the emerging global economy. The “world car” has been a concept firmly entrenched in business education since I labored over case studies in the early 1990s.

The average automobile today (with the clear exception of SUVs[19]) weighs less, is more fuel efficient, is made of smarter composite materials, and provides a safer driving experience than the equivalent automobile sold just 10 years ago. This improvement is due both to the influences of the metatrends cited earlier and to the interaction of these two basic business strategies in the global fitscape that rules automobile production practices. In this dynamic system, the fitscape itself is altered by the metatrends as it drives the two strategic pressures to the benefit of the automobile consumer.

In this sense, automobile manufacturing is indicative of the direction in which all industry, being affected by the same globalizing trends and forces, must proceed. COTS technologies, embraced by fitscape pressures, will inevitably be enthusiastically adopted by NDC development going forward. Indeed, other aspects of real-time and EmNet adoption in a changing fitscape will ultimately be reflected in NDC development as well.

Composability

Consider the notion of composability. Remember our musician with her clarinet? Let's use her as an example here. She may be well versed in reading sheet music and very well practiced, able to play well in any ensemble. But for her to “constructively integrate” in an ensemble, she must be cognizant of (a) those with whom she is playing, (b) her part in the whole of the composition with respect to volume, timing, tone, and timbre, and (c) the overall timing of the piece. In other words, the context of her performance is just as important as her ability to read, understand, and play the composition, which itself must be well constructed in all its parts.

A composable software architecture is something like that ensemble of musicians (which may also include a conductor, depending on the complexity and demands of the piece in question). In the NDC space, composability is still an alien creature—not yet seriously recognized as a need, and hence not seriously considered as a requirement for frameworks that would facilitate the development and deployment of useful and reliable NDC programs. But if indeed real-time and embedded computer research and practice is a harbinger of things to come, NDC, too, will one day reflect a composability requirement in order to survive fitscape pressures.
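What might a composability requirement look like to a programmer? A minimal sketch, with invented names, of the ensemble idea: each component declares what it requires and what it provides, and the ensemble refuses to assemble unless every requirement is met. Context is checked before the performance begins.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    provides: set = field(default_factory=set)
    requires: set = field(default_factory=set)

def compose(components):
    """Assemble an ensemble only if every component's requirements are
    satisfied by what the others provide: the 'constructive integration'
    our clarinetist performs by ear."""
    provided = set().union(*(c.provides for c in components))
    missing = {(c.name, need) for c in components
               for need in c.requires if need not in provided}
    if missing:
        raise ValueError(f"cannot compose; unmet requirements: {missing}")
    return components

ensemble = compose([
    Component("clock", provides={"time"}),
    Component("scheduler", provides={"dispatch"}, requires={"time"}),
    Component("player", requires={"time", "dispatch"}),
])
print([c.name for c in ensemble])  # ['clock', 'scheduler', 'player']
```

Real composability research goes much further, into timing, resource budgets, and partial failure, but even this toy check is more than most NDC frameworks offer today.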

Grid Computing and Clusters

Another trend in this category (and an argument for NDC composability) is the emergence of an organizing principle called grid computing, which is actually a form of NDC. The topic of grid computing is more properly included in the discussion of competing NDC frameworks, later in this chapter. But a brief mention here is useful in further illustrating the concept of composability.[20]

Grid computing shares many attributes with the concept of computer clusters. Both approaches would harness the capabilities of multiple systems built increasingly from COTS components (that is, integrated hardware/software commodity systems) that communicate through a network (private or public) to present a unified view of compute resources to an arbitrary application. From a composability perspective, research in grid computing echoes the needs now recognized in the real-time and EmNet worlds.
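A sketch of the "unified view" idea, with hypothetical names. Callers hand work to a single facade; which node actually runs each task is the grid's concern, not the application's. Here threads stand in for networked machines, so the hard parts (discovery, data movement, partial failure) are deliberately elided.

```python
from concurrent.futures import ThreadPoolExecutor

class ToyGrid:
    """One logical compute resource backed by many 'nodes'
    (threads here; networked machines in a real grid)."""

    def __init__(self, nodes=4):
        self._pool = ThreadPoolExecutor(max_workers=nodes)

    def submit(self, fn, *args):
        # Placement is the grid's problem; the caller never learns
        # which node ran the task.
        return self._pool.submit(fn, *args)

grid = ToyGrid(nodes=4)
futures = [grid.submit(pow, 2, n) for n in range(8)]
print([f.result() for f in futures])  # [1, 2, 4, 8, 16, 32, 64, 128]
```

Note how quickly composability re-enters: because nodes will fail, a robust grid must be able to recompose work around the failure, which is the argument of note 20.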

Connections

Given the three metatrends, the two primary business strategies, and the inevitable consequences of globalization, composable architecture, regardless of the difficulties in achieving it, will soon be a requirement for remaining competitive. Research in many aspects of computer science, including real-time, mission-critical, and grid-computing systems, reflects that realization today.

Security, Global Transparency, and Privacy

Just two decades ago, computer security was a specialized field that seemed important only to a small percentage of customers, most of whom represented governmental agencies. For others, nothing more than a password or a simple encryption scheme was deemed necessary. Indeed, the U.S. government has long had export restrictions on encryption technologies, restrictions that have relaxed to some extent over the past few years but that still affect those who are engaged in next-generation encryption research and development.

Today, however, the need for high-quality security is much better understood by a growing population of computer users. In an age when Internet-distributed computer viruses, denial-of-service attacks, and bad hacker[21] cultures abound, to say nothing of the impact that one dark day in 2001 had on global zeitgeist, it is not surprising that many who have ignored the need for serious computer security are now vocally supporting such efforts. Deutsch's Fourth Fallacy (The network is secure) cannot be ignored any longer by NDC developers, as even those companies so willing to gain “features” at the expense of proper functional awareness in a networked world seem to be coming to the table.

Juxtaposed with security is the notion of global transparency, which has at least three meanings:

  1. Global business transparency (how do we avoid future Enron-type debacles in a global economy without standardized practices and disclosures?)

  2. Global data transparency (grid computing requires data-set name transparency, data location transparency, access protocol transparency, and so on)

  3. Global activity transparency (increasing satellite capabilities, proliferation of COTS networked cameras, and so on)

Global transparency is often at odds with security requirements. Security demands that information be protected; transparency demands that information be disclosed. The threat to personal and organizational privacy is clearly evident in the trend toward global transparency.

Another paradox looms in any discussion of security, global transparency, and privacy, however, especially insofar as computer science and computer systems are concerned. Consider “A Globalization Paradox.”

Perhaps this observation is too simplistic to express the forces and dynamics that might more adequately describe a world fitscape. But the prudent NDC developer will at least note the superficial validity of the logic above and perhaps weigh design implications and market impact accordingly.

Competing NDC Frameworks, the Emerging Global OS, and Recombinant Software

 

Entia non sunt multiplicanda praeter necessitatem [Entities should not be multiplied unnecessarily]

 
 --“Occam's Razor” William of Occam, 1285–1349

Ernst Mach (for whom supersonic travel was named, and perhaps even the CMU microkernel) was a contemporary of Einstein's who advocated a version of Occam's Razor which he called “the Principle of Economy.” This principle basically stated that scientists should always use the simplest means of doing their work and exclude everything not perceived by the senses. Taken to its logical conclusion, this approach becomes “positivism,” which is the belief that there is no difference between something that exists but is not observable and something that does not exist. Mach influenced Einstein when he argued that space and time are not absolute, but he also applied the positivist approach to molecules—claiming that molecules were metaphysical because they were too small to detect directly.

The moral of Mach's story is that Occam's Razor should not be wielded without qualification, lest we cut away the potential for a more complete understanding of our universe, regardless of the discipline involved. Certainly the same argument applies to computer science.

Attempts to measure complexity are by their nature complex. In mathematics, economics, biology, physics, chemistry, cognitive psychology, geography, games, groups, and computer science, complexity measures range from simplifying linear assumptions to hands-in-the-air prayer. The emerging science of complexity may one day yield formal methods for expressing universal organizing principles that we can intuitively appreciate but not yet fully comprehend. Stephen Wolfram has documented as much in A New Kind of Science.[22] Mathematics aptly calls transcendental those simple expressions that seem to betray an innate order we cannot yet fully grasp: nonalgebraic, nonrational, yet as essential as any more comprehensible expression. There is clearly an underlying order in our universe that we are still only beginning to appreciate, despite Mach's advice to the contrary. Indeed, computer science is helping us understand and extend the very senses Mach would venerate.

But for computer scientists, especially those tasked with the day-to-day need to create valuable NDC applications that compete in real-world fitscapes, managing growing complexity is a daily chore and not a philosophical diversion. We must, therefore, do something to enhance our capabilities. Last, therefore, in this incomplete list of major technology trends is a discussion of NDC software frameworks and likely directions we may take in attempting to cope with network expansion and ever-increasing complexity.

Competing NDC Frameworks

As previously noted, considerable investments have been and are being made by many firms today in the general area of Web Services. Suffice it to say at this juncture that competing NDC frameworks already exist, each addressing the growing complexities we face in a different way.

More importantly for this discussion, today's competing NDC frameworks also shed light on the data and software legacy we will be facing a decade from now. On the surface, if we embraced the observations of “A Note on Distributed Computing” in toto, as well as the often good advice of Occam's Razor, the idea of a global or networkwide operating system would be dismissed out of hand; no central authority we can imagine could be viably implemented on any network subject to communication-induced indeterminacy. Yet efforts already exist that would thwart the very conclusions we would reach a priori.

Global Operating Systems

Projects that allude to a “global operating system” include efforts by Microsoft, IBM, the University of Virginia, the University of California at Berkeley, and others.[23] If efforts like these bear fruit, it may be that within 10 years, operating systems will have characteristics that facilitate worldwide scalability, within which one logical system may be partitioned across any number of nodes on an arbitrary network. Perhaps they will also offer seamless, transparent distribution, in which an operating system decides where data resides and where computation occurs, agnostic to the geographic or organizational location of compute resources. NDC fault tolerance and self-configuration are also implied by these approaches. The lack of a central authority may become less limiting than previously believed. But then again, research does not always yield viable implementations. Time will tell.

Recombinant Software

As complexity in NDC continues to rise, we may find it necessary to adopt radically new approaches for software development. “Growing” applications, as opposed to designing them, may become the only strategy that makes sense, as the level of complexity exceeds that which even the most devoted groups of experts can adequately comprehend. Recombinant software refers in a general sense to research in computer science that does not specify particular ends before a given experiment is launched. This approach suggests, “Let's see what happens if I try this,” deriving lessons from the results, which again, strictly speaking, turns a blind eye to the scientific method.

Genetic algorithms are a part of “evolutionary computing,” which is a growing area of research in artificial intelligence. Simply stated, a solution to a problem solved by a genetic algorithm is evolved rather than designed. The idea of evolutionary computing was introduced in the 1960s by I. Rechenberg in his work Evolution Strategies.[24] His idea was then developed by other researchers, including John Holland, who invented and developed genetic algorithms.
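A minimal genetic algorithm, stripped to its essentials: a population of bit strings is "evolved" toward a target by selection, crossover, and mutation rather than designed. Everything here (the target, the rates, the population size) is an illustrative toy, not any particular researcher's method.

```python
import random

TARGET = [1] * 20  # the ideal genome, known only to the fitness function

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=30, generations=200, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return gen, pop[0]  # a perfect solution has been "grown"
        parents = pop[: pop_size // 2]  # selection: the fitter half breeds
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))  # single-point crossover
            child = [bit ^ (random.random() < mutation_rate)  # rare mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = children
    return generations, pop[0]

generation, best = evolve()
print(f"best genome found by generation {generation}: {best}")
```

Nothing in the loop knows how to solve the problem; the solution emerges from variation and selection, which is exactly the sense in which recombinant software would be grown rather than written.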

Holland's approach, which incorporates genetic algorithms and autonomous digital agents, has demonstrated that “such systems are particularly prone to exhibiting emergent phenomena.”[25]

Might such systems, when their digital autonomous agents are properly motivated and rewarded, behave in a manner akin to Kauffman's fitscape? Moreover, will such an approach finally bring artificial intelligence to a place beyond the oxymoron category it has inhabited since it was first envisioned? As with every other question that faces NDC development, and engineering in general, Goff's axiom applies: it all depends on context.

Commentary: The Context of Context

There are a number of other contextual considerations going forward, if we would be prudent. One aspect of our work that we must acknowledge is the unintended consequences of human activities, especially technology. To that end, please consider my own view regarding the context of our work in the following paragraphs.

Technological advances have always borne consequences which were not envisioned or intended at the moment of innovation. Euclid could not have envisioned the utilization of large prime numbers to encrypt and decrypt data. Gutenberg, I'm sure, did not labor to enable pornography. While Einstein may have seen the potential for nuclear weapons as part of his legacy, did he also predict the ethical left turn the relativistic 20th century would take? If all points of view are equally valid, then doesn't it also follow that “anything goes?” Perhaps not . . . but the consequences of technology and innovation clearly cannot be predicted. The fitscape, as Kauffman suggests, cannot be finitely prestated, regardless of Newton's assurances to the contrary.[26]

One of the best metaphors we have today for unintended consequences is the hole in the ozone layer, as shown in Figure 2.1, which is now believed to be due to the ignorant release of excessive chlorofluorocarbons as a by-product of refrigeration technologies. The hole as metaphor reminds us of unintended consequences, of friction and heat loss (metaphorically speaking) due to inefficiencies in resource utilization, of our lack of understanding, and of the care we must take in the work in which we are engaged. This one ambiguous hole serves to silently communicate both the warning and the wonder we must acknowledge in these most interesting times.

Figure 2.1. The ozone hole: a metaphor for unintended consequences and poor utilization of resources

What is the purpose of software if not to remedy inefficiencies in processes and improve resource utilization? What developer of software hasn't recognized the need to optimize in some fashion, for pure speed, or memory utilization, or code maintainability? What is the ultimate aim of software if not to “do more with less”?

Buckminster Fuller gave us our term for this grand trend: ephemeralization. Human beings have been ephemeralizing since the invention of the wheel. How many ancients left large stone reminders of generations of labor, entire civilizations devoted to developing the simple predictive capabilities of an ordinary calendar? These are trivialities today.

How wealthy would you be if the laser printer everybody now has access to were at your disposal in a pre-Gutenberg era? How many men do you imagine died laying the first copper cables beneath the Atlantic to facilitate communications in the early 20th century, which then cost the equivalent of billions of dollars in today's funds? A quarter-ton satellite carries so much more data at a fraction of the cost.

We didn't invent ephemeralization with software; but we may very well be perfecting it. Such is the legacy and opportunity, the elation and the terror which is especially ours as computer scientists and software developers in the early 21st century.

The hole, in form, is also a reminder of the Zero Dollar Bill, infinity paradoxically expressed on the low end, where we might not expect to find it; is nanotechnology also implied here? The hole is also our goad then, if we would be ritually mindful of the complexities we face when studying complex NDC systems; a significant theme in the context of the study of context.

One final point: If Metcalfe's law is to be believed, if the potential value of any network grows as the square of the number of nodes on the network (see the formula below), then clearly our networks will not approach their maximum potential value until all nodes are connected. Which is another way of saying we need to include everyone. Paradoxically, it is only by including everyone that we operate in our own best self-interest. These are interesting times indeed.
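For the record, Metcalfe's observation in its usual form: n nodes can form n(n-1)/2 distinct pairwise connections, so a network's potential value grows roughly as the square of its node count:

```latex
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}
```

Doubling the connected population thus roughly quadruples the network's potential value, which is why the unconnected majority of humanity represents not a marginal gain but most of the value still on the table.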

Notes

1. grouper.ieee.org/groups/802/11/

2. www.microsoft.com/billgates/speeches/2001/11-30mvp.asp

3. www.bluetooth.com, www.bluetooth.org

4. www.sciam.com/2001/0501issue/0501berners-lee.html

5. Claude Elwood Shannon, The Mathematical Theory of Communication (Urbana: University of Illinois Press, 1949).

6. Strictly speaking, cybernetics is the comparative study of the internal workings of organic and machine processes, undertaken to understand their similarities and differences. Cybernetics often refers to machines that imitate human behavior, for example, robots. For an interesting and informative view of the history of cybernetics and its relationship to information theory, see How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics by N. Katherine Hayles (Chicago, IL: University of Chicago Press, 1999).

7. Extensible Markup Language (XML) is the universal format for structured documents and data on the Web, enabling self-describing data that is not bound to a particular platform or language. See www.w3c.org/XML/.

8. Hans Moravec, Robot (New York: Oxford University Press, 1999), p. 51.

9. Isaac Asimov, “Runaround” (New York: Fawcett Crest, 1942).

10. Norbert Wiener, Cybernetics: Control and Communication in the Animal and the Machine (New York: Wiley, 1948).

11. Transcript at www.zyvex.com/nanotech/feynman.html

12. Engines of Creation was originally published in hardcover by Anchor Books in 1986.

13. www.foresight.org/Nanomedicine/Respirocytes.html

14. www.techreview.com/articles/rotman0302.asp

15. Mark Weiser, “The Computer for the Twenty-First Century,” Scientific American 265 (September 1991), pp. 94–104.

16. Neil Gershenfeld, When Things Start to Think (New York: Henry Holt, 1999).

17. David D. Nolte, Mind at Light Speed: A New Kind of Intelligence (New York: Free Press, 2001), p. 118.

18. www.rtcgroup.com/cotsjournal/index.shtml provides just one example of COTS technologies creeping into implementations that were once thought to be immune from the otherwise indeterminate nature of commercial products.

19. The popularity of SUVs in the United States is, in my view, a function of sociology and an interesting ecological/economic denial, which perhaps too is a reaction to the creeping globalization that has occurred over the past 20 years.

20. While it may be intuitively obvious why the problems associated with composable components need to be addressed if viable grid computing architectures are to emerge, it is helpful to consider the inevitability of partial failure in any networked configuration. Because components of the grid will inevitably fail, composability of components must be addressed on some level to provide for a robust grid solution, facilitate dynamic recovery, and achieve maximum availability. This is not to say that composability itself is germane only to grid computing or that grid computing is the only domain in which composability must be addressed. But it should be clear that the problems associated with composable components must, at some level, be addressed as part of the grid computing problem space.

21. When I began programming, the term hacker meant something different from what it does today. While I personally dislike the use of the term to describe those who would maliciously or illegally abuse computer systems owned by others, the general use of the term is more or less accepted by a broad range of people.

22. Stephen Wolfram, A New Kind of Science (Champaign, IL: Wolfram Media, 2002).

23. research.microsoft.com/sn/Farsite/; www.research.ibm.com/bluegene/; legion.virginia.edu/; endeavour.cs.berkeley.edu/.

24. I. Rechenberg, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (Stuttgart: Frommann-Holzboog, 1973).

25. John H. Holland, Emergence (New York: Perseus Books, 1998), p. 184.

26. Stuart Kauffman, Investigations (New York: Oxford University Press, 2000), p. 125.
