19
Code and You’re Done: Implementing Interfaces

Objects And Interfaces

Programmers love to program, which is fortunate—after all, you cannot invoice customers with the paper prototype for a billing system any more than you can drive the engineering blueprints for an automobile. Designs are not ends in themselves, but routes to superior software solutions. Even an excellent plan can be spoiled by inept execution, however. Construction techniques matter no less in software than in building airframes or office buildings. In this chapter, we will explore some of the issues in implementing usage-centered designs.

How does one implement good usage-centered designs? What is the best, most effective internal architecture for the code that supports the user interface? The best architecture for the user interface is, as we have argued, one closely molded to the structure of whatever work is to be supported. The best internal architecture for the programming behind the scenes is one that is, in turn, simply and systematically connected with the externally defined needs. It is integrated with the user interface architecture, although separable from it. It contains the least amount of code that will suffice for the fullest support of user requirements. Internal components and interface components are easily related to one another, and there are no superfluous components at or behind the interface. How does one achieve this systematic integration? This is where object-oriented development enters.

It is not our intention to explain object-orientation or get into detail about object-oriented design techniques. For those unfamiliar with the subject or wishing a deeper understanding, many excellent books are available. Among these, one that we might particularly recommend is the concise and practical introduction by Page-Jones [1995]. What we want to do here is explore some connections between object-orientation and good usage-centered design.

What has object-orientation to do with user interface design, much less software usability? If we look at the literature of object technology, we might conclude that it has little if anything to do with these. Although some products have been laying claim to object-oriented user interfaces for decades, the world of objects has paid scant attention to users, usability, or user interfaces other than to offer libraries of graphical user interface classes to programmers. For example, of more than 6,000 pages of text in 15 popular books on object-orientation, only 161 pages deal with users, usability, or user interfaces, and most of that is found in just a few books [Constantine, 1996c].

Object-orientation promises many things, among them a seamless software development process that uses a consistent vocabulary and a single set of concepts throughout, beginning with the outside world, the so-called real world of the problem domain, and proceeding smoothly and without discontinuity all the way through to code. An important term or concept that might appear in the requirements specification will be reflected in the analyses and design models and will even be found in the code itself. The classes of objects and use cases comprising the problem definition will structure the internal software architecture, shape the user interface, and organize the code. Anyway, this is the fantasy. The reality is, of course, always a rougher road with more gaps and detours [Constantine, 1997d].

True converts and charismatic gurus of methods have hailed object-orientation as a programming panacea, the salvation of software development, and the answer to the “software crisis.” Hype and hope aside, the simple truth is that object-oriented software construction has become one of the genuine success stories of modern software engineering. Not everyone is a convert, of course. Good work can still be created using classic structured methods, and junk continues to be cranked out under the newly raised banner of object-oriented analysis, design, and programming. Nevertheless, as a rule, object-orientation is a Good Idea, more beneficial than harmful, more utilitarian than irrelevant.

Objects For Implementation

Whatever the elaborate philosophical and pedagogical raiment in which the object-oriented programming paradigm may sometimes be dressed for sale, the naked truth about objects is really remarkably simple. Objects—strictly speaking, object classes—are convenient chunks of programming. Objects allow programmers to package into a single, comprehensible collection a number of operations along with the data upon which these operate. Well-conceived object classes are powerful, readily comprehended components that bring together only the most strongly associated bits of function and data [Constantine, 1995d]. Because well-designed objects effectively hide their inner details and support simplified use through messages from the outside, they promote component-based construction and reuse. Naturally, success in software construction through reusable components depends on much more than just the choice of language or programming model [Constantine, 1992g], but having the right model for the programming infrastructure definitely helps.
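To make the packaging concrete, here is a minimal sketch in Java; the Invoice class and its members are our own hypothetical illustration, not drawn from any particular system:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// A hypothetical invoice: the operations and the data upon which
// they operate travel together in one comprehensible package.
public class Invoice {
    // Inner detail, hidden from every client of the class.
    private final List<BigDecimal> lineAmounts = new ArrayList<>();

    public void addLine(BigDecimal amount) {
        lineAmounts.add(amount);
    }

    // Clients send a message asking for the total; how the lines are
    // stored or summed is the object's own business.
    public BigDecimal total() {
        BigDecimal sum = BigDecimal.ZERO;
        for (BigDecimal amount : lineAmounts) {
            sum = sum.add(amount);
        }
        return sum;
    }
}
```

Because clients see only addLine and total, the internal representation can be changed at will without disturbing any code that uses the class.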

Software objects can be first-rate packages for implementing good user interfaces. Because they hide implementation by encapsulating data with associated operations, objects become a convenient package for facilitating reuse of both interface and internal components. To the extent that object-oriented programming is used effectively for component-based development, it can be a major contributor to user interface consistency through the Reuse Principle. Reuse is, we maintain, far and away the most effective route to achieving consistency in user interfaces.

Object-Oriented User Interfaces

You will no doubt encounter the term, so it is probably necessary to say something about object-oriented user interfaces. What is an object-oriented user interface? Some have suggested that it means any user interface implemented with object-oriented programming. Object-oriented programming, however, merely describes the internal construction techniques, and these are, or should be, invisible to the user anyway. Others have claimed that object-oriented user interfaces refer to point-and-click, drag-and-drop on-screen manipulation of objects, but these interaction idioms are just the standard fare of all modern graphical user interfaces. Some even associate object-oriented interfaces with representations of physical objects from the real world on the interface, but we already know that the appropriate use of icons and other graphics has to be suited to the intended communication between system and user and guided by the tasks performed, not predetermined by the programming paradigm [Constantine, 1993c].

One detailed definition has been offered based on three characteristics of object-oriented user interfaces [Collins, 1995]:

1. Users perceive and act on objects.

2. Users classify objects based on how they behave.

3. All the interface objects fit together into a coherent overall conceptual model.

The first characteristic is about users, not user interfaces, and about aspects of users that may well be hard-coded in the human brain, at least when it comes to ordinary everyday objects. If we are considering software objects, then we are only characterizing software developers working within an object-oriented framework. The second characteristic, also about users and not interfaces, is almost certainly not true across the board. People classify ordinary objects in terms of many features and factors other than behavior. You might, for example, observe that all politicians seem to act pretty much alike but that you have a friend who looks like Bill Clinton. As to the third so-called characteristic of object-oriented user interfaces, by this point in the book, it should be obvious that it merely describes a well-organized user interface of any variety that fits the work and conforms to the Structure Principle.

Superficial Objects

Yet another view of object-oriented user interfaces is that they bring to the surface of the interface the constructs and interrelationships of the object-oriented paradigm, presenting users with objects and class hierarchies and with methods and messages as the medium of exchange between users and systems. This view is, perhaps, more defensible as being strictly object-oriented, but it may also lead to some unfortunate designs. In its most rigorous interpretation, users would be forced to interact with an object-oriented interface by moving little messages around from object to object on the screen, which is unlikely, in most applications, to have much to do with effective support of the relevant use cases [Constantine, 1997c].

An obsessive preoccupation with making everything object-oriented can lead the user interface designer astray. For one thing, work is behavior. It is made up of actions, steps, and activities interconnected with one another. At its heart, work is operations, not objects, which is one reason why use cases are so effective for modeling work. Object class models can be useful for representing a domain of application but not usually for the work to be carried out within that domain.

Ordinary users would never describe what they do with a graphical user interface as “sending messages to objects.” Only a programmer whose mind has been warped by too many years of small talk with object-oriented programming systems would conceive of interaction in this way. Users do not send messages to objects. They do things with and to those objects by means of various interaction idioms. (See sidebar, This Do!)

Nevertheless, it is legitimate to wonder whether it ever makes sense to expose any of the machinery of object technology at the interface with end users. For the most part, inside-out design, in which the logic and structure of the program show through on the user interface, is a sign of failure. Outside-in design, by contrast, means that the internal components and their external manifestations both reflect genuine user needs rather than programming preferences.

Some object-oriented constructs may serve as inspiration for new visual components, however. One example that may sometimes be useful is the concept of “factory objects.” A factory object is one that instantiates other objects of another class when sent the appropriate message. Visual components that when clicked or swiped create new instances of a class might be a useful addition to the repertoire of visual design concepts. For example, a “pad” of “software notes,” as illustrated in Figure 19-1, can be clicked to open a blank note ready for completion and placement on the desktop or within a document. Such a component may be consistent with the trend toward document-centered user interfaces, but not everyone will be sold on the power of objects. To the user, there may be little difference between dragging-and-dropping from a factory object and selecting File|New or even just typing Ctrl+N to get a new instance of a class.

[Figure 19-1: Factory object for new notes, software Post-its (3M).]
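As a minimal sketch of the mechanism, the factory object behind such a pad might look like the following; the Note and NotePad classes are hypothetical inventions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical note, instantiated on demand.
class Note {
    String text = "";
}

// A factory object: sent the appropriate message (here, typically
// wired to a click on the on-screen "pad"), it instantiates
// objects of another class.
class NotePad {
    private final List<Note> openNotes = new ArrayList<>();

    Note createNote() {
        Note note = new Note(); // a blank note, ready for completion
        openNotes.add(note);
        return note;
    }
}
```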

Object Architecture

Object technology also supports separating the interface from supporting internals while keeping them interrelated through an appropriate “object glue.” The key to this “separate-but-integrated” implementation architecture is the use of collections of objects separated by function into interface objects, control objects, and entity objects. Such distinctions are now referred to as object stereotypes. Interface, entity, and control stereotypes were introduced by Jacobson and colleagues [Jacobson et al., 1992]; other stereotypes have been described by Wirfs-Brock [1994].

Interface objects model the interfaces and interactions with users, mediating between users and the software. They encapsulate capability that is specific to particular interface devices or to particular kinds of users. Entity objects, also sometimes referred to as domain objects, model the tangible or conceptual objects within the application domain, holding the information retained by the system over time. Control objects model complex behavior that involves multiple objects, especially behavior not naturally tied to any other objects in isolation. They are especially useful for encapsulating policies or procedures spanning many objects or for managing the interaction among other objects.
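A minimal sketch may make the division of responsibilities concrete; the Account, AccountView, and DepositControl classes are our own hypothetical illustration, not taken from Jacobson:

```java
import java.math.BigDecimal;

// Entity object: models a concept from the application domain and
// holds the information the system retains over time.
class Account {
    private BigDecimal balance = BigDecimal.ZERO;

    BigDecimal balance() { return balance; }

    void deposit(BigDecimal amount) { balance = balance.add(amount); }
}

// Interface object: mediates between the user and the software,
// encapsulating what is specific to one device or kind of user.
class AccountView {
    void showBalance(BigDecimal balance) {
        System.out.println("Balance: " + balance);
    }
}

// Control object: models behavior that involves several objects
// and belongs naturally to none of them in isolation.
class DepositControl {
    private final Account account;
    private final AccountView view;

    DepositControl(Account account, AccountView view) {
        this.account = account;
        this.view = view;
    }

    void deposit(BigDecimal amount) {
        account.deposit(amount);             // the entity updates its state
        view.showBalance(account.balance()); // the interface presents the result
    }
}
```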

When first introduced, control objects were controversial. Purists saw the use of components of this ilk as a corruption of object-oriented concepts by the intrusion of outdated and superseded notions from procedural programming and classic structured design. The important issue, however, is not the purity of the paradigm or of the motives, but the simplicity of models and the resulting code. Control objects and other specialized object stereotypes may look more like functions or collections of functions than like well-conceived objects, but, used appropriately, they can simplify software. Fortunately, sanity has slowly prevailed, and fewer and fewer fanatics rail against the use of “procedurelike” objects.

The separation of responsibilities into three groups of objects proposed by Jacobson bears a close resemblance to another well-established architecture: the model-view-controller pattern [Krasner and Pope, 1988]. Both concepts serve to separate internal information (the model, the entity objects) from the various ways in which this information may be presented or manipulated (the view, interface objects), with separate responsibilities for coordinating the relationship (the controller, control objects).

Object stereotypes (and the model-view-controller architecture) help to localize the impact of any subsequent changes in the design and implementation. The appearance and behavior of the user interface can be changed without having to change the underlying data model. Changes in the data model can be restricted to only the affected parts of the interface. Changes in the relationship between presentation at the surface and the underlying data can be made within control objects. This, of course, is the ideal situation; in practice, a good architecture, at best, reduces the odds of any given change being reflected in too many different places within the code.

Use cases typically include elements related to all three kinds of stereotypes, so the capability represented by use cases has to be distributed among various objects. The partitioning of the bits and pieces of use cases into object stereotypes proceeds in steps. Functionality that is directly dependent on the environment and that is specific to the user interface or to users is first allocated to interface objects. Anything dealing with information storage and handling or with the fundamental classes or entities of the application domain that does not naturally fit into interface objects is allocated to entity objects. Functionality that is specific to only one or a few use cases, that requires communication or coordination with multiple objects, and that does not fit naturally into entity objects or interface objects is placed within control objects. In other words, the first preference is to stuff things into interface or entity objects, using control objects only where there is no natural fit with either of the other stereotypes. In this way, control objects are not unnecessarily proliferated.
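Continuing the hypothetical banking sketch above, an overdraft policy shows when a control object earns its keep: the policy spans the Account entity and a second entity, is specific to a single use case, and fits naturally into neither the entity objects nor the interface objects:

```java
import java.math.BigDecimal;

// A second hypothetical entity: a log of noteworthy events.
class AuditLog {
    void record(String event) { System.out.println(event); }
}

// The overdraft policy coordinates two entities and belongs to the
// withdrawal use case alone, so it lands in a control object.
class WithdrawalControl {
    private final Account account; // the Account entity sketched earlier
    private final AuditLog log;

    WithdrawalControl(Account account, AuditLog log) {
        this.account = account;
        this.log = log;
    }

    boolean withdraw(BigDecimal amount) {
        // Deny and record any attempt that would overdraw the account.
        if (account.balance().compareTo(amount) < 0) {
            log.record("Overdraft attempt: " + amount);
            return false;
        }
        account.deposit(amount.negate()); // reusing the entity's operation
        log.record("Withdrawal: " + amount);
        return true;
    }
}
```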

Accelerated Development

Modern software development is not only shaped by technology but also driven by the need for speed or at least by the prevailing perception that the pace of change keeps escalating [Constantine, 1994d; 1995e]. To meet the pressure to deliver more in less time and, often, with fewer resources, various streamlined software development life cycles (SDLCs) have come into popularity as models for accelerated software development. Rapid application development (RAD), rapid product deployment, time-boxed development, and “good enough” software development have all been proposed and tried.

The common feature of most accelerated development strategies is a spiral life cycle model, whereby development cycles through a series of activities or phases, converging on the delivery of a working system. With each complete cycle, a more expanded or refined version of the software is delivered, but even the first completed system is usable. In such an iterative environment, use cases can be a formidable aid to organizing the delivery cycle and ensuring that maximal value is delivered at each successive iteration.

Concentric Construction

Essential use cases can serve many functions, not only in organizing the user interface and even the internal architecture but also in organizing the implementation process itself. Use cases become the appropriate unit of product delivery because each use case represents one useful piece of work, one meaningful task to some users. Phasing the construction and delivery of systems based on use cases and collections of related use cases assures that users receive the most useful collection of features and capabilities with each release. Basing versions and releases on features rather than use cases can result in systems that incorporate superfluous, little used, or unused features or, worse, can lead to delivering a system where some use cases of interest cannot be enacted because of missing features.

If the relative priority of use cases has already been established earlier through Joint Essential Modeling or otherwise, it is fairly simple to stage the implementation process, starting with a central core of the basic and most important capability and expanding outward from there. This kind of staging can be used for planning successive releases or as insurance against project overrun or premature cutoff of resources. Concentric construction from core capability outward ensures that, whenever development is stopped, the last working version is maximally likely to be usable.

Multiple versions of the same system are also readily configured based on use cases. The “lite” edition of a software package may cover a fully usable subset of use cases, while the “professional” edition might incorporate various extension cases as well as advanced uses. This is often the most economical way to realize multiple versions since the deluxe variation is just a superset of the economy system. Less typically, the same use cases may be implemented in basic and deluxe forms within different versions. For some products, this may have the greatest appeal to users and in the marketplace, but it should be recognized as needing more programming than versions based on subsets of use cases.
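As a minimal sketch of this configuration strategy, with hypothetical use case names and edition mechanism, each edition is simply a subset of the supported use cases, which makes the professional edition a cheap-to-build superset of the lite one:

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical use cases, listed roughly in priority order.
enum UseCase { CREATING_NOTE, FILING_NOTE, LINKING_NOTES, BATCH_EXPORTING }

final class Edition {
    // The "lite" edition covers a fully usable subset of use cases ...
    static final Set<UseCase> LITE =
        EnumSet.of(UseCase.CREATING_NOTE, UseCase.FILING_NOTE);

    // ... while the "professional" edition is simply a superset.
    static final Set<UseCase> PROFESSIONAL = EnumSet.allOf(UseCase.class);

    static boolean supports(Set<UseCase> edition, UseCase useCase) {
        return edition.contains(useCase);
    }
}
```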

Thinking of versions and editions based on use cases and user roles helps developers and marketing folks to target different user populations and markets more successfully. Use case analysis can avoid serious misallocation of features and resources to successive releases or to low-cost and expanded versions. For example, one manufacturer of equipment for factory automation wanted two versions of a programming system for certain parts of the control systems. A stripped-down version of the software was to be offered at a reduced price to purchasers of the least expensive control systems. One approach considered was to provide all the functionality of the advanced software, supporting the same range of equipment, for example, but with a more primitive user interface that required users to program in what was roughly the equivalent of raw machine language. Looking more closely at the low-end users made it obvious that they were precisely the ones who most needed the more advanced programming facilities that would simplify the design and setup of their programs. A better marketing strategy was made possible by providing both versions with the same more advanced programming interface, but restricting the low-cost version to supporting only the less expensive target equipment and leaving out some functionality. In this way, both versions had appealing interfaces, yet the advanced version was clearly a superset with added capability to justify its higher price tag.

Architectural Iteration

Entropy is the enemy of all software. After a certain point, as software continues to be revised, refined, and expanded, the code inevitably becomes more convoluted, more chaotic, and more complex. The forces of entropy that plague all programming are no less powerful when software is developed through rapid iterative prototyping or any other form of successive refinement.

At the outset of all software projects, the developers must make some basic assumptions and establish an overall framework within which to construct the system. To begin with, the basic internal architecture, the core assumptions on which the program is organized, may be sound and well suited to the problem being solved. In iterative development, the problem being solved is, of course, the first version or the first release of the software. Smart developers will draw on their experience and their best crystal-gazing powers of projection to try to anticipate evolving needs. They will establish a sound data model and a robust object architecture, make a good partitioning among application layers, devise a versatile messaging structure, and invent the necessary internal languages and protocols that might be needed to support the immediate and subsequent programming needs.

Nevertheless, with each successive round of iterative refinement, as the application grows and the requirements grow and change, those early assumptions will fit less and less well. It will become harder and harder to shoehorn in new features, to accommodate new data types, or to find a place to plug in new components. Eventually, all evolving software—which is all software—reaches a point of brittleness and instability where almost any attempt to revise or even correct one part of the software brings the whole thing crashing down, where nobody even knows the complete structure anymore. To remain responsive to user needs and competitive in the marketplace, it becomes necessary to start over, with a new architecture and fresh code.

Is it possible, if not to repeal, at least to suspend temporarily the laws of entropy and delay the onset of architectural collapse? It will not do to require that developers anticipate in more depth and detail in order to arrive at the best long-term solution at the outset of the first iteration. We would once again risk inflicting upon programmers that endemic affliction of old-style waterfall development life cycles, the dread disease of analysis paralysis.

The solution, we believe, is to refine, repeatedly, the architecture along with the code that is based upon it [Constantine, 1996a]. In this approach, which might be termed architectural iteration, on each round of refinements, the basic architecture of the software is reviewed to ascertain its continued viability. The developers reexamine a variety of basic architectural decisions, such as the structure of the code, the partitioning into packages, the organization of the database, the form of internal messaging, the class hierarchy and use of foundation classes, communication techniques, client-server partitioning, and distribution of methods among classes.

Wherever the developers call into question the continued validity and viability of earlier architectural choices, they will then need to consider possible redesign and revision. Thus, the job for the next release cycle or round of iterative refinement consists of two parts: In addition to those corrections and functional enhancements that would otherwise be part of the project, any requisite refinements to the architecture are included. Perhaps some additions need to be made to the reusable component library, and some existing code needs to be updated to make use of the new components. Perhaps the internal file format for preserving user preference profiles needs to be elaborated. Perhaps the partitioning among client software, middleware, and server should be refined. Perhaps some new entity classes or data types ought to be introduced and reflected in the code. Whatever is indicated as of immediate and long-term value is added into the pot for design and implementation. Although this adds to the effort in the next development iteration, it ultimately makes the software more robust and more amenable to further refinement, thus reducing the cost of future iterations.

Architectural iteration will not lead to eternal software, but it can stave off the day of reckoning when the underlying assumptions fail and the foundation crumbles. Legacy systems have been kept pliant and efficient by architectural iteration. Instead of having to be rebuilt from scratch every two or three years, software products may continue to be successfully refined and polished into contemporary competitiveness for many more years.

Visual Development Of Visual Designs

In recent years, a revolution has taken place in the tools and techniques used for developing software. Revolutions are common in software. They are being declared at every turn by journalists, methodologists, and public relations people. On closer examination, especially in retrospect, most such “revolutions” or “paradigm shifts” turn out to be little more than old wine with new labels or a minor reformulation in the mix of vintages. Scholars might argue the details yet agree that the genuinely revolutionary article is rare—the advent of high-level languages, the structural revolution that gave us structured programming and structured analysis and design, and the object-oriented paradigm, to name some of the pivotal ones.

The emergence of visual development environments is one of the most creative and energetic strands in the skein of current development practices and products [Constantine, 1995c]. Visual development refers not only to the technology, the software development tools themselves, but also to the way in which these tools are used. Just as nail guns and power handsaws do not change carpentry but do change how carpentry is carried out, visual development tools lead to a new style of software development and to new development processes.

The tools used to construct user interfaces matter a great deal. Primitive tools not only lead to clumsy interfaces but also encourage developers to avoid revisions or refinements. Where programmers must tailor messages and procedure calls to display user interface elements one at a time, even small improvements to layout or to the appearance of components can require significant reprogramming and testing. For example, hampered by inadequate tools, programmers for one of our clients repeatedly omitted recommended changes to user interfaces.

Among the oldest and best known of the modern tools are Visual Basic and various graphical application builders, such as PowerBuilder. The idea of visual development is not entirely new, of course; academics have been devising visual programming schemes for decades. One could even say that the commercial forerunners of modern visual development tools were early report program generators that allowed much of the programming of simple applications to be done through forms that were laid out to look much like the printed pages of the desired report. This may seem to be a far cry from advanced tools like Visual C++ or JBuilder, but the idea is the same—programming driven by arranging the interface as it appears to the end user.

Visual development environments allow developers to create complete working systems largely or exclusively by moving visible objects around on a monitor screen. Instead of writing out the code to display and activate all the components in a dialogue box, the programmer merely selects visual controls from a toolbar and drags them into place on the dialogue box being created. Earlier, screen-painting techniques had taken over some of the clerical overhead but still required programmers to enter absolute screen coordinates to place fields, labels, and command buttons. Direct manipulation of visual components is an all-but-obvious improvement for designing graphical user interfaces, but, with many of the early tools, once the surface appearance was shaped, the programmer was forced to dig behind the scenes into the messy backstage chaos. There often lurked some of the ugliest Basic or scripting language imaginable, strewn around in an undisciplined clutter of references and messaging that interconnected all the scattered bits of functionality hung on the back of user interface widgets and forms.

Such early approaches were merely the first skirmishes in the revolution. The true revolution, which we have been predicting for more than a decade, is based on two additional innovations that are now being glimpsed in the more advanced tools. To make development truly and completely visual, developers need to be able to manipulate elements of the design models right along with the components of the user interface, and they need to be guaranteed equivalence among the various views into the software. In other words, modeling capability needs to be fully integrated into the visual development environment [Constantine, 1995c]. File import and export is not enough. Switching between a CASE tool and a visual programming system, for example, is not enough. It is not application switching, but view switching, that is called for.

In a completely integrated visual development environment, all the various views of the software under development would be maintained together and in perfect correspondence. Not only could the developer instantaneously switch from one view to another, but any change in one view also would be immediately reflected in the underlying software model and, hence, in every other view. Change the code, and the interface changes; change the interface, and the properties are updated.
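One plausible mechanism for such synchronization, sketched here with a hypothetical WidgetModel of our own devising, is to have every view observe a single underlying model, so that a change made through any view reaches all the others:

```java
import java.util.ArrayList;
import java.util.List;

interface View {
    void refresh(WidgetModel model);
}

// The single underlying software model; the code view, the interface
// design view, and the property inspector would all attach to it.
class WidgetModel {
    private final List<View> views = new ArrayList<>();
    private String caption = "OK";

    void attach(View view) { views.add(view); }

    String caption() { return caption; }

    // Whichever view the change comes through, it lands here ...
    void setCaption(String caption) {
        this.caption = caption;
        for (View view : views) {
            view.refresh(this); // ... and every view is updated at once.
        }
    }
}
```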

Some currently available products realize parts of the full visual development paradigm. Borland’s Delphi and successors maintain synchronization between a code view, an interface design view, and an object property inspector. IBM’s VisualAge line of products incorporates visual representation and direct manipulation of some aspects of object models. Third-party vendors have provided file swapping to “integrate” Rational’s Rose and Borland’s Delphi. All of these products or combinations still fall somewhat short of what is possible and needed, although the trend is clearly established. (See sidebar, Galactic Dimensions.)

Model Or Not

Many programmers have argued that, using visual development tools, they can actually create a working prototype or even a fully functional system in less time than they could build a content and navigation model. In some cases, for relatively simple problems that have already been thought through by a skilled and experienced programmer, this may well be true, although content modeling by experienced practitioners is also a pretty speedy operation.

Facile manipulation of the visible user interface, however, is both a strength of visual development tools and a liability. Because it becomes so easy to grab a visual component and drop it onto a form or dialogue box, because visual components are so readily moved around on the interface design, visual development tools can encourage a kind of visual hacking in which just any old control is selected and thrown onto the interface without benefit of design or forethought. At best, the design may sometimes be given a little spit and polish to bring the interface controls into alignment and to correct spelling errors in labels.

More often than many die-hard “coding cowboys” would like to admit, the fastest way to solve a problem is to slow down and work more methodically and thoughtfully. The abstract models of usage-centered design impose a thoughtful order on what can be a chaotic frenzy.

Time pressures have often been cited as excuses just to start cutting code, ignoring requirements, skipping over analysis, and omitting design, which is particularly ironic because savvy managers have been proclaiming for decades the time-saving value of modeling. The less time there is, the more important it is to build the system right to begin with by thinking before leaping into the code. The weight of experience is on the side of modeling and smart managers [Constantine, 1994d; 1995e]. When the meter is running and the deadline looms large, an hour spent modeling can save days of disorganized programming.

The usage-centered design activity model, introduced in Chapter 2, is adaptable to a variety of implementation strategies, including so-called RAD approaches and rapid iterative prototyping. Under extremely tight delivery cycles, say, of 60 to 120 days, several tricks can help get the maximum leverage from the time spent in usage-centered modeling. It can be useful to time-box any modeling, allocating a fixed amount of time for each activity. It is also vital to keep the modeling process moving without getting bogged down in lengthy discussions or debate. The focus should be on the immediate project needs without getting off into speculation or futuristic fantasies.

When push comes to shove, the core of a usage-centered approach is the task model. Essential use cases are easier to identify and to detail when the groundwork of role modeling has already been completed, but, for some small systems developed on short schedules, starting with the task model may suffice. Developers already proficient in usage-centered development may also be able to create good working prototypes or operational software directly from use cases using modern visual development tools. The very best and most experienced developers may even be able to model and design in their heads as they code with their hands. For the rest of us mortals caught in the panic of impossible deadlines, we would paraphrase an old Yiddish proverb: When there is no time to think, at least stop and think.
