Chapter 15. Invisible Tools or Emotionally Supportive Pals?

Try juxtaposing these two thoughts:

Researchers are telling us that, emotionally and intellectually, we respond more and more to digital machines as if they were people. Such machines, they say, ought to be designed so as to be emotionally supportive. (“Good morning, John. You seem a little down today. Bummer.”) Stanford social researchers B. J. Fogg and Clifford Nass propose this rule for designers of human-machine interfaces: “Simply put, computers should praise people frequently—even when there may be little basis for the evaluation” (Fogg and Nass 1997). Leaving questions of sincerity and ethics aside, this is thought to be quite reasonable, since machines are obviously becoming ever more human-like in their capabilities.

The common advice from other human-computer interface experts is that we should design computers and applications so that they become invisible—transparent to the various aims and activities we are pursuing. They shouldn’t get in our way. For example, if we are writing, we should be able to concentrate on our writing without having to divert attention to the separate requirements of the word processor.

The two pieces of advice may not necessarily contradict each other, but their conjunction is nevertheless slightly odd. Treat machines like people, but make them invisible if possible? Combining the two ideals wouldn’t say much for our view of people. It sounds as though we’re traveling down two rather different tracks. And, in the context of current thinking about computers, neither of them looks particularly healthy to me. But perhaps they can help us to explore the territory, leading us eventually to a richer and more satisfactory assessment of the human-machine relationship.

We Need to Recognize Our Own Assumptions

Surely there is something right about Ben Shneiderman’s advice when, in a promotional interview for his book, Leonardo’s Laptop, he calls for “truly elegant products that facilitate rather than disrupt,” adding that “effective technologies often become ‘invisible.’” Who would prefer disruption to invisibility? But then, invisibility itself is also problematic. As information technologies become ever more sophisticated reflections of our own intelligence, it seems fair to say that our thoughts and assumptions get built into them in increasingly powerful ways. Their whole purpose, after all, is to embody our own contrivings. If we would not want the contents of a book to act on us without our full awareness of the intent and import of the author’s recorded thoughts, neither should we want the much more aggressive contents of a computer to act on us without full awareness of the intentions the programmer has invested in the device.

So ... the fact that we meet human intentions in our machines is already reason enough for caution. Do we really want all those strivings and contrivings—all those thoughts and assumptions someone has cleverly etched into the hardware and software we are using—to remain invisible? When employing a search engine to sift through news items, should we be content to remain ignorant of the criteria, commercial or otherwise, determining the engine’s presentation of hits? When recording a business’s numbers on a spreadsheet, should we forget the meanings and values we had in mind when we started the business—meanings and values that the spreadsheet is designed, by virtue of its designer’s preoccupation with manipulable data, to put out of sight? This is not to say we don’t need the spreadsheet, but we also need to remain aware of the ways it can skew our thinking.

A vital necessity for all of us today is to remain conscious of the assumptions and unseen factors driving our thoughts and activity. To give up on this is to give up on ourselves and to hand society over to unruly hidden drives. But if we must remain conscious of our own assumptions, it can hardly be less important to prevent others from surreptitiously planting their assumptions in us. Granting (simplistically for the moment) that we are in some sort of conversation with intelligent machines, it seems only natural that we would want to keep in view our conversational partner’s contribution to the dialogue, rather than let it slip into invisibility. The alternative would be for the machine to influence or control us beneath the threshold of awareness.

Keeping the other person (or thing) in view disallows invisibility as a general ideal. In human exchange we may hope the other person’s presence will not prove downright disruptive. But in any worthwhile friendship neither do we want the friend simply to disappear. And we can be sure that, at one point or another, the requirements of friendship will move us disturbingly out of our path. I cannot enjoy the meanings a friend brings into my life without risking the likelihood that some of these meanings will collide with my own. If computers are like people, I can hardly expect, or even want, to escape the unsettling demands they will impose upon me to rise above myself. Thankfully, true friends can on occasion be disrupters of the worst sort.

Complementary Errors

But are computers like people? I have already suggested that they embody many of our assumptions, and now I have been drawing an analogy between human-computer interactions and person-to-person friendships. Does this mean I buy into the first view stated at the outset—the view that it is natural for us to respond emotionally and intellectually to intelligent machines as if they were persons?

Not at all. If we cannot accept the ideal of machine invisibility in any absolute sense, neither can we accept the ideal of machine personality. The problem with both ideals, at root, comes from the same source: a failure to reckon adequately with the computer as a human expression. The two ideals simply err from opposite and complementary sides: a striving for invisibility encourages dangerous neglect of the tendentious expressive content we have vested in the machine; on the other hand, trying to make the machine itself into a person mistakes the machine-as-an-expression for the humans who have done the expressing.

I am convinced that rising above these complementary errors would strikingly transform the discipline of artificial intelligence, not to mention the entire character of a machine-based society.

The world is full of human expressions that are, in part, manifestations of intelligence. The intelligence is really there, objectively, in our artifacts—in the sound waves uttered from our larynxes, in the pages of text we write, in the structure and operation of a loom, automobile, or computer. It is impossible to doubt the objectivity, given that anyone who attends to these artifacts can to one degree or another decipher the intelligence that was first spoken into them. We do this all the time when we read a book. Something is there in the physical pages of the book giving me access to an elaborate world of inner, intellectual experience. That’s just the nature of the world through and through: it is receptive to, and a bearer of, the intelligence we imprint upon it.

But, as I pointed out in the last chapter, it is nonsense to mistake the artifact for the artificer, or the intelligence spoken into the world as product for the speaking as productive power. The endemic preoccupation with the question whether computers are capable of human-like intelligence is one manifestation of this confusion. But if we are willing to step back from this preoccupation and look at the computer in its full human context, then we can gain a much more profound appreciation of its intelligence. At the same time, such a contextual approach can guide us toward a more balanced view of the human-machine relationship.

The Computer in Context

When, instead of trying vainly to coax signs of life from the computer as a detached and self-subsistent piece of machinery, we examine it as an expression of living beings, then immediately our flat, two-dimensional picture of it becomes vibrant and vital. We see analysts reconsidering almost every human activity, asking what is essential about it and imagining how it might be assisted or even transformed by the elaborate structuring potential of digital devices. We see designers and engineers applying their ingenuity to achieve the most adequate implementation of the newly conceived tools. And we see consumers and employees struggling to use or not use the devices they are handed, weighing how to adapt them to their own needs, perhaps even sabotaging them in service of higher ends.

All this is, or at least can be, creative activity of the highest sort. But preserving the creative element depends precisely on our not viewing the computer as a merely given and independent reality. For the irony is that only when viewed as making an independent contribution does it become an absolutely dead weight, and therefore a wholly negative factor in human society. Removed from the context of continual design and re-design, use and re-imagined use, sabotage and re-invention, it presents us with nothing but a mechanically fixed and therefore limiting syntax. To celebrate the machine in its own right is like celebrating the letters or the ink on the page, or the grammatical structure of a great literary text, rather than the human expression they are all caught up in.

It may seem odd to cite the computer’s “fixed and limiting syntax,” given the complex and infinitely refined elaboration of logic constituting this syntax. But that’s just the problem. We find in every domain of life that an elaborate, precise, and successful logical structuring of things is not only the glorious achievement of past effort, but also the chief obstacle to future effort. All life is a continuous development, a maturing, an evolution, an overcoming or transformation of inherited structures—and a computer program is exactly such a limiting structure.

Owen Barfield is referring to this problem in connection with the renewal of the expressive power of language when he observes how great literature sooner or later threatens to become a dead weight,

growing heavier and heavier, hanging like a millstone of authority round the neck of free expression. We have but to substitute dogma for literature, and we find the same endless antagonism between prophet and priest. How shall the hard rind not hate and detest the unembodied life that is cracking it from within? How shall the mother not feel pain? (Poetic Diction, Chapter 10)

And how shall the corporate reformer not despise the stewards of legacy software! This problem only becomes greater as the inexorable drive toward interlocking global standards gains momentum.

The attempt to find a principle of life within the computer as such, detached from its human context, is damaging precisely because the machine itself is almost nothing but the hard rind in need of cracking. The continuous process of living renewal must come from us, and from our commitment, as designers and users, to transform the rigid syntax we have received from the “dead hand of the past.” We rightly strive for flexible software, but there remains a crucial sense in which every piece of software, once achieved, becomes a dead weight.

There is a fine line between healthy adaptation, on the one hand, whereby a tool is made to serve our own highest purposes, and “going native”—giving in to the dead weight and the alien intentions—on the other. In a healthy adaptation we always sense a certain resistance from the tool, however subtle. This is just to say that the boundary between the tool and ourselves remains available to our awareness even as we work continually to transcend the boundary through our own mastery. Without such a resistance and awareness, we cannot summon the work necessary to remain masters of the technologies we employ. Putting it paradoxically: we have to be aware of the tool’s difference from us, its opposition to us, in order to work effectively at making it “part of us.” When we lose altogether the awareness, we have no way to direct this work, and we can’t know whether we are using the tool or it is using us.

The glitches, vexations, and failures of technology at least have this virtue, that they occasionally jolt us out of our mesmerized, lockstep conformity to the machinery around us and into remembrance of ourselves as distinct from the machinery.

But to remember ourselves in this way is at the same time to elevate the machine—not through the crazy imputation of emotions and thoughts to it, but rather through the recognition that our conversation with the machine is, in the end, a conversation among ourselves—just as we converse with ourselves (and not in any primary sense with paper and ink) when we read a text.

This conversation can always be ennobled. We ennoble it, for example, by shaping the computer’s outer form with the artistic sensitivity of a sculptor, and by deriving its frozen, internal logic from an inspired vision of this or that human activity, just as we can abstract a bare logical structure from an orator’s high and passionate meanings. And we can then recognize that recovering worthy activity and high purpose from this frozen structure depends upon our ability to warm it with our own passions, enlighten it with our own meanings, enliven it with our willful intentions. And so, finally, our fascination with the evolution of “spiritual machines” will be transformed into our own evolving sense of spiritual responsibility for those aspects of ourselves we bring to bear upon our mechanical creations.
