Chapter 18. The Ideal of Ubiquitous Technology

I sometimes wonder whether the folks at the MIT Media Lab are pulling our legs.

It seems that a lot of energy at the prestigious lab (which claims to be “inventing the future”) has gone into the redesign of the American kitchen. For example, one project involved training a glass counter top

to assemble the ingredients for making fudge by reading electronic tags on jars of mini-marshmallows and chocolate chips, then coordinating their quantities with a recipe on a computer and directing a microwave oven to cook it.
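
For the record, and to show how slight a feat this really is, here is roughly what the countertop's logic amounts to in a few lines of Python. This is a sketch of my own devising; every name in it is hypothetical and reflects nothing the Media Lab has published:

    # A hypothetical sketch of the fudge countertop: read ingredient tags,
    # check quantities against a recipe, then direct the oven. All names
    # are invented for illustration.

    RECIPE = {"mini-marshmallows": 200, "chocolate chips": 150}  # grams needed

    def read_tags():
        """Stand-in for the RFID reader: ingredient -> grams on the counter."""
        return {"mini-marshmallows": 250, "chocolate chips": 180}

    def cook_in_microwave(minutes):
        """Stand-in for the networked oven's command interface."""
        print(f"Microwave: cooking for {minutes} minutes.")

    def make_fudge():
        on_counter = read_tags()
        for ingredient, needed in RECIPE.items():
            if on_counter.get(ingredient, 0) < needed:
                print(f"Short of {ingredient}; add it to the shopping list.")
                return
        cook_in_microwave(3)

    make_fudge()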

Dr. Andrew Lippman, associate director of the Media Lab, said that “my dream tablecloth would actually move the things on the table. You throw the silver down on it, and it sets the table.”

One waits in vain for the punch line. These people actually seem to be serious. And the millions of dollars they consume look all too much like real money. Then there are the corporate sponsors, falling all over themselves to throw yet more money at these projects.

Nowadays this kind of adolescent silliness is commonly given the halo of a rationale that has become respected dogma. After all, don’t many inventions find unexpected uses in fields far removed from their first application, and doesn’t a spirit of play often give rise to productive insight?

Certainly. But somehow it doesn’t all add up.

In the first place, the likelihood of serendipitous benefits is not a convincing justification for trivializing the immediate application of millions of research dollars. No one would argue that non-trivial research is less likely to produce valuable offshoots than trivial research, so why start with triviality?

In the second place, the Media Lab researchers voice their comic lines with a strange seriousness and fervor, devoid of the detachment underlying a true spirit of play. Michael Hawley, an associate professor of media technology at MIT, laments that the kitchen is

where you have the most complex human interactions and the most convoluted schedule management and probably the least use of new technologies to help you manage it all.

And of this degrading backwardness Lippman adds:

Right now, your toaster doesn’t talk to your television set, and your refrigerator doesn’t talk to your stove, and none of them talk to the store and tell you to get milk on your way home. It’s an obvious place screaming out for connectivity.

Those sponsors must love it. Where else but in an academic computing laboratory could they possibly find adult human beings seriously willing to propose such laughable things in order to start creating an artificial need where none was recognized before? By slow degrees the laughable becomes conventional.

Which explains why those corporate sponsors don’t appear to be just waiting around for the occasional, serendipitous “hit.” Clearly, they see the entire trivial exercise as itself somehow integral to their own success. I don’t doubt their judgment in this at all.

Thirdly, there are signs of a pathological flight from reality in all this. Hawley tells us that

in time, kitchens and bathrooms will monitor the food we eat so closely that health care will disappear. We will move from a world in which the doctor gets a pinprick of data every blue moon to the world in which the body is online.

“Health care will disappear.” If his words are meant to be taken even half seriously, this is a man with severely impaired judgment and with the most tenuous connection to reality. One wonders how many of these kitchen technicians have ever done some serious gardening, and how many of them can even grasp the possibility that preparing food might be an important and satisfying form of work—at least as satisfying as interacting with the digital equipment they would inflict on the rest of us (and, for that matter, a lot healthier).

No, the kind of fluff the Media Lab all too often advertises is not really comic. Looked at in its social context, it is sick and obscene. It is sick because of the amount of money spent on superficialities; it is sick because of the way corporate sponsors have been able to buy themselves an “academic” facility at a major educational institution to act as their “Consumer Preparation Department”; and it is sick because a straight-faced press corps slavishly reports these “inventions of the future” without ever administering the derisive smile so much of this stuff begs for.

The above quotes, by the way, come from the New York Times (Hamilton 1999). The author of the article does at least quietly give notice that Hawley is “a bachelor who rarely uses his kitchen.” Hardly surprising. The man’s passion has a lot more to do with computing for its own sake than with entering into the meaning and significance of the food preparer’s task.

Digital Servants Everywhere

The idea at work in all this has seized the engineer’s imagination with all the force of a logical necessity. In fact, you could almost say that the idea is the idea of logical necessity—the necessity of embedding little bits of silicon logic in everything around us. What was once the feverish dream of spooks and spies—to plant a “bug” in every object—has been enlarged and re-shaped into the millennial dream of ubiquitous computing. In this new dream, of course, the idea of a bug in every software-laden object carries its own rather unpleasant overtones. But unpleasant overtones are not what the promoters of ubiquitous computing have in mind. On its web site, the Media Lab claims to pursue “a future where the bits of the digital realm interact seamlessly with the atoms of our physical world, and where our machines not only respond to our commands, but also understand our emotions—a future where digital innovation becomes the domain of all.”

I suppose Bill Gates’ networked house is the reigning emblem of ubiquitous computing. When the door knows who is entering the room and communicates this information to the multimedia system, the background music and the images on the walls can be adjusted to suit the visitor’s tastes. When the car and garage talk to each other, the garage door can open automatically whenever the car approaches.
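
Seen from a programmer's standpoint, the pattern underneath all this is almost embarrassingly simple: sensors publish identity events, and handlers adjust the surroundings. A minimal sketch of my own (none of these names describes Gates' actual system, or any real one):

    # A hypothetical sketch of the "house that knows you": identity events
    # from sensors trigger handlers that adjust the environment.

    PREFERENCES = {
        "alice": {"music": "Bach cello suites", "walls": "seascapes"},
        "bob": {"music": "bebop", "walls": "Rothko prints"},
    }

    def on_door_entry(person):
        """The door has identified an entrant; adjust the room to taste."""
        prefs = PREFERENCES.get(person, {"music": "silence", "walls": "blank"})
        print(f"Playing {prefs['music']}; showing {prefs['walls']}.")

    def on_car_approach(distance_in_meters):
        """The car and garage 'talk'; open the door as the car draws near."""
        if distance_in_meters < 20:
            print("Opening the garage door.")

    on_door_entry("alice")
    on_car_approach(12.0)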

Once your mind gets to playing with such scenarios—and there are plenty of people of good will in academic and industrial organizations who are playing very seriously with them—the unlimited possibilities crowd in upon you, spawning visions of a future where all things stand ready to serve our omnipotence. Refrigerators that tell the grocery shopper what is in short supply, shopping carts that communicate with products on the shelves, toilets that assay their clients’ health, clothes that network us, kitchen shelves that make omelets, smart cards that record all our medical data, cars that know where they’re going—clearly we can proceed down this road as far and fast as we wish.

And why shouldn’t we move quickly? Why shouldn’t we welcome innovation and technical progress without hesitation? I have done enough computer programming to recognize the inwardly compelling force of the knowledge that I can give myself crisp new capabilities. It is hard to prefer not having a particular capability, whatever it might be, over having it.

Moreover, I’m convinced that to say we should not have technical capability X is a dead-end argument. It’s the kind of argument that makes the proponents of ubiquitous computing conclude, with some justification, that you are simply against progress. You can only finally assess a tool in its context of use, so that to pronounce the tool intrinsically undesirable would require an assessment of every currently possible or conceivable context. You just can’t do it—and if you try, you underestimate the fertile, unpredictable winds of human creativity.

But this cuts both ways. You also cannot pronounce a tool desirable (or worth the investment of substantial resources) apart from a context of desirability. Things are desirable only insofar as a matrix of needs, capacities, yearnings, practical constraints, and wise judgment confirms them.

The healthy way to proceed would be to concern ourselves with this or that activity in its fullest context—and then, in the midst of the activity, ask ourselves how its meaning might be deepened, its purpose more satisfyingly fulfilled. Only in that meditation can we begin to sense which technologies might be introduced in appropriate ways and which would be harmful. If you want to know how to make food preparation in the kitchen a more satisfying experience, then find the most deeply committed gardeners and cooks you can and apprentice yourself to them.

But it’s difficult to overestimate the appeal of purely technical challenges—or the hard work required to integrate a technical achievement into a fully human context. Nothing illustrates this more clearly than the way we like to speak of “solutions.”

There Are No Solutions

As I remarked a moment ago, you can always find problems for which your new gadget is the solution. If a single buzzword has outweighed all others in advertisements for high-tech products over the past decade, surely it is “solutions.” When you are convinced you have a nifty answer, everything begins to look like a problem demanding your answer. But it is worth keeping in mind that engineers, precisely because they are the quintessential problem-solvers, always try to address narrowly conceived, exactly defined problems—problems they call “well-behaved.” This requires stripping the problems so far as possible of complicating context. Without the reduction of context, there is no precise solution.

But, of course, the reduction tends to leave most of the important considerations out of the picture. This is why Amory Lovins of the Rocky Mountain Institute reminds us, “If you don’t understand how things are connected, the cause of problems is solutions.”

This truth is not one you are likely to come across in contemporary journalism dealing with science and technology, where a standard formula runs this way: “Dr. Jones’ new discovery (or invention) could lead in time to [your choice of solved problems here].” The discovery of this new genetic technique could eventually lead to a cure for such-and-such a disease. The further development of this new implantable device might some day enable the terminal heart patient to gain new life. How standard this formula has become is a good measure of how technocentric our society has become.

The enumeration of possibilities is usually reasonable. They could happen. The deception is in the one-sidedness. One way or another, it seems, the technical achievement just must translate into a social good. There is no equivalent standard formula that routinely acknowledges the risks of the new development. There is no recognition of the difference between solving a problem and contributing to the health of society. Yet solving problems is one of the easiest ways to sicken society. A technical device or procedure can solve problem X while worsening an underlying condition much more serious than X. How, for example, do you solve the problem of the harmonious meeting of minds? Here’s one approach, taken from the call for papers circulated ahead of the Third International Cognitive Technology Conference held in San Francisco in August 1999:

Human minds are becoming increasingly networked. We are steadily approaching the Optimal Flow Point, a theoretical point in telecommunication when the technology allows any mind on the planet to reach any other mind in a minimal amount of time. Developments in satellite and cellular technologies are moving us to the point of spatial ubiquity, when any spot, no matter how remote or primitive, can be connected with any other in a worldwide telecommunications system.

As spatial ubiquity is approaching, time needed to contact any human being is being steadily reduced. . . . The networking of minds proceeds apace.

By all means, let the networking of minds proceed apace. And, yes, the “time needed to contact any human being” may be steadily decreasing. But it’s worth remembering that this contact refers essentially to the performance of a technical system, not to the performance of minds. There is a difference.

By contrast with this contact time, the time needed to get in touch with another human being has not decreased at all. It requires the same cultivation of mutual caring and understanding, the same painstaking exploration of shared and unshared meanings, as it always has. In fact, there’s a good case to be made that the time required for getting in touch is lengthening as we approach the Optimal Flow Point: it takes more effort to break through the busyness, distraction, and habits of detachment encouraged by all those technically enhanced possibilities of contact.

Here are a few other brief examples of the way solutions can work against the deeper requirements of health:

  • There’s already wide recognition of the danger in solving the problems presented by medical symptoms. Aspirin, by eliminating pain, can mask an underlying illness or cover for bad habits that in the end may prove fatal. And by lowering fever, it can counter the body’s healing processes. We may well be doing worse because of the means we have chosen for feeling better.

  • One reason for the huge amounts of time we spend watching television is that, as one commentator wrote, “It’s a way to stop conflicts between kids and adults.” Yes, in the heat of the moment you could say that television is an effective answer to the problem of family conflict. But won’t this truce of convenience, this mutual disengagement, very likely lead to an even more radical parting of the ways somewhere down the road?

  • The same commentator remarked that “there are a lot of neighborhoods where you’re better off staying in watching TV than going out on the street.” In such neighborhoods the television may indeed be at least a partial solution to the problem of personal safety. But in a deeper sense you will find that television has helped to make the street what it is, if only by sucking what was once the vigorous communal life of porch and street, first into the family living room, and then into the isolated dens of individual family members.

  • The technical mechanisms for linking documents on the World Wide Web are thought by many to solve the problem of providing adequate context for documents. And they do help us to aggregate and structure a collection of texts, relating one passage to another, regardless of where the documents may reside physically. But, as all Web users have discovered by now, this solution can work against any effective grasp of context. Being a click or two away from everywhere is disconcertingly like being nowhere at all, which is the ultimate loss of context. True context arises from the conceptual threads we are enabled to weave through our reading, and this requires an inner work for which no information technologies can substitute.

Not Solutions, But a Strengthening of the “I”

However much we may find the reduction to manageable problems a necessary, temporary expedient, it is vital to keep in mind the larger context. Society presents us with evolving conversations we must participate in, not problems to be finally solved. Only when we remain aware of what we are doing and continually allow the larger context to discipline, dissolve, and re-shape our narrowly focused problem solving do we remain on safe ground.

I don’t know of any truth more worthy of contemplation in our society today than this one, startling as it may appear: no problem for which there is a well-defined technical solution is a human problem in any full sense. It has not yet been raised through imagination and will and self-understanding into the sphere where we can participate meaningfully in it. And what is this sphere? It is, above all, the domain of the “I,” or self. The “I,” as Jacques Lusseyran remarks,

nourishes itself exclusively on its own activity. Actions that others take in its stead, far from helping, serve only to weaken it. If it does not come to meeting things halfway out of its own initiative, the things will push it back; they will overpower it and will not rest until it either withdraws altogether or dies. (Lusseyran 1999, Chapter 4)

All problems of society are, in the end, weaknesses of the “I,” and it is undeniable that technologies, by substituting for human effort, invite the “I” toward a numbing passivity. But by challenging us with less-than-fully-human problems and solutions, technologies also invite the “I” to assert itself. This assertion always requires us to work, in a sense, against the technology, countering it with an activity of our own—countering it, that is, with something more than technological. It requires an inner wrench, a difficult, willful arousing of self, to accept active responsibility for what technologies do to us. But when we succeed in this, the technology becomes part of a larger redemptive development. When, on the other hand, technology as such is seen to bear “solutions,” the disastrous abdication of self has already occurred.

What we should ask of the technology pushers, whether they reside as engineers at the MIT Media Lab or as employees at high-tech companies or as consumers in our own homes, is a recognition that the primary danger today is the danger of this reversal, where the strengthening activity of the “I” is sacrificed to the automatisms around us. For every technology we embrace, we should require of ourselves an answer to the question, “What counter-force does this thing require from me in order to prevent it from diminishing both me and the social contexts in which I live?”

Automating on Principle

One way to express an ideal of ubiquitous computing is to say, “Anything we do that can be automated should be automated.” It’s a principle that appeals to the common sense of many people today, and complements the notion that machines can unburden us of the more tedious and mechanized work, leaving us free to occupy ourselves with “higher” and more “human” tasks.

Appealing as this may be, I’m convinced that it readily promotes an unhealthy relation to technology. Here’s why:

First, it obscures the truth that nothing we do can be automated, not even when we are merely adding two plus two. Yes, I know that computers supposedly achieve this feat all the time, but what the computer does is not what we do. It does not bring consciousness to the act. It is not exercising and therefore strengthening certain skills and cognitive capacities. It requires no focusing of attention, no motivation, no supportive metabolism, no memory, no imagination, and no sympathetic muscle movements. Nor is it engaged in any larger purpose when it carries out the computation—or any purpose at all. It is not deriving satisfaction from the exercise of a skill. It brings neither an aesthetic sensibility to the task nor a mobilized will. It does not reckon with the possibility of error.

It is amazing to see how readily we forget these things today and equate a computer’s action with human performance. Actually, the more relevant fact is that the machine displaces and eliminates from the situation much that we do, leaving us to consider how we might compensate for the disuse of our own capacities, and how the entire context and significance of the work has been altered by its reduction to a few formal, computational features.

It’s all too easy for the facile calculations of the spreadsheet software to begin narrowing the entrepreneur’s conception of his own work, even though the business may have begun with a richly meaningful and idealistic set of intentions. Intention doesn’t enter into the software’s calculations, and as that software plays an ever greater role in the business, the question is, “Where will the guiding intentions come from—or will we simply allow them to disappear as we yield to the machine’s empty guidance?”

“Anything we do that can be automated should be automated.” If the first problem with this rule is that nothing we do can be automated, the second problem is that everything can be automated. That is, once you equate mechanical activity with human activity in the superficial manner just indicated, there’s no line separating things that can be automated from those that cannot. So our rule provides no guidance whatever. In the reduced sense that applies, everything can be automated. If a calculator “does what we do,” then a computer can in one sense or another do what a judge or composer or physicist does. If we do not pay attention to the difference between the computational abstraction and the human reality in the simple cases, nothing will require our attention to those differences in the “higher” cases.

Further, the more you automate, the more you tend to reduce the affected contexts to the terms of your automation, so that the next “higher” activity looks more and more like an automatic one that should be handed over to a machine. When, finally, the supervisor is supervising only machines, there’s no reason for the supervisor himself not to become a machine.

So the idea that automation relieves us from grunt work in order to concentrate on higher things looks rather like the opposite of the truth. Automation tends continually to drain significance out of the higher work, reducing it to mechanical and computational terms. At least, it does this when we lose sight of the full reality of the work, reconceiving it as if its entire significance lay in the few decontextualized structural features we can analogize in a machine. But if, on the other hand, we do not lose sight of the full reality of the work, then the lower-level stuff may look just as much worth doing ourselves as the higher—in which case we have to ask, “What, really, is the rationale for automating it?”

This is not to say that, for example, endless hours spent manually adding columns of numbers would prove rewarding to most people. But where we typically run into such tasks is precisely where reductive technologies (such as those involved in the machinery of bookkeeping and accounting) have already shaped the work to be done. In general, the grunt work we want to get rid of is the result of automation, and while additional automation may relieve us of that particular work, it also recasts a yet wider sphere of work in terms seemingly fit only for automation. After all, the ever more sophisticated accounting software requires ever more extensive inputs, so more and more people in the organization find themselves caught up in paper shuffling (or electronic file shuffling).

It’s where automation has not already destroyed the meaningfulness of the low-level work that we discover how high-level it can really be. The organic farmer may choose not to abandon his occasional manual hoeing—not because he is a hopeless romantic, but because there is satisfaction in the simple rhythms, good health in the exercise, and essential knowledge of soil and crop conditions in the observations made along the way. What will provide these benefits when he resides in a sealed, air-conditioned cab fifteen feet off the ground?

A Strengthened Inner Activity

You may ask, then, “Should nothing be automated?” I didn’t say that! I’ve only suggested that we avoid deluding ourselves about automation freeing us for higher things. Have we in fact been enjoying such a release? Any investigation of the matter will reveal that the machine’s pull is most naturally downward. It’s hard to relate to a machine except by becoming machine-like in some part of ourselves.

When we yield ourselves to automatisms, we become sleepwalkers. But if instead they serve as foils for our own increased wakefulness, then they will have performed a high service. After all, downward forces, too, can be essential to our health. We couldn’t walk upright without the force of gravity to work against, and our muscles would atrophy without the effort.

It is, I think, inescapable that we should automate many things—and there are many pleasures to be had in achieving this. When I said above that an automating mentality will not find any clear stopping place, I did not mean to imply that there should be such a stopping place—certainly not in any absolute sense.

Everything is potentially automatable in the restricted sense I have indicated, and pretending there is a natural stopping place only encourages the kind of mindless automation that is the real problem. What is crucial is for us to be aware of what we’re doing and to find within ourselves the necessary compensations. We have to struggle ever more determinedly to hold on to the realities and meanings our automated abstractions were originally derived from. That is, we must learn to bring the abstractions alive again through a strengthened inner activity—a tough challenge when the machine continually invites us to let go of our own activity and accept the task in reduced terms!

The limits of our compensatory capacities will always suggest wise stopping places, if we are willing to attend to those limits. But not absolute stopping places; they will shift as our capacities grow.

Are we currently setting the bounds of automation wisely? You tell me. Have the accounting software and the remarkable automation of global financial transactions been countered by our resolve to impose our own conscious meanings upon those transactions? Or, rather, does the entire financial system function more and more like a machine, merely computing an abstract bottom line?

Well, if you’re looking at the dominant institutions, I imagine your answer will be pessimistic. But perhaps the most important developments for the future are the less conspicuous ones—for example, the alternative food and health systems, the growing interest in product labeling, the investing-with-a-conscience movement. What’s essential in these is the determination to restore the automated abstraction—for example, the nutrient in the processed food, the number in the accountant’s spreadsheet—to the meaningful context it was originally ripped out of.

I suppose the sum of the matter is that the restoration of human context entails a gesture exactly opposite to the one expressed in “if it can be automated, it should be.” It’s more like “if it can be re-enfleshed, it should be.” This rule seems to me healthier than the purely negative call to “stop automation and technical innovation.” To focus merely on stopping automation is already to have accepted that the machine, rather than our own journey of self-transformation, is the decisive shaper of our future. Yes, we urgently need to find the right place for our machines, but we can do so only by finding the right place for ourselves.

As long as these two movements—to automate and to re-enflesh—are held in balance, we're probably okay. We should automate only where we can, out of our inner resources, re-enliven. For example, we should substitute written notes and email for face-to-face exchanges only so far as we have learned the higher and more demanding art of revivifying the written word so that it reveals the other person as deeply as possible and gives us something of his presence.

We have many such arts to deepen, and I suppose we can thank the pushers of ubiquitous technology for an imbalance so extreme that it constantly reminds us of our own necessary work, without which all solutions become destructive.
