Interaction Design in an Engineering-Centric World
Chris Connors
Apple
Chris Connors is currently employed at Apple. Prior to working at Apple, Chris worked at NASA with a focus on designing future mission support tools for planning robotic activity during surface operations on Mars. Chris also has an extensive background with Trilogy Software in Austin, Texas, where he designed enterprise software products for the Financial Services, Telecommunications, Computer, and Automotive industries. Chris was one of two lead designers for Trilogy subsidiary carOrder.com, winner of PC Magazine's Editor's Choice Award for Best Online Car Buying site in 1999.
It's hard to believe that a decade has elapsed since Carnegie Mellon University, home to reputable Design, Computer Science, and Cognitive Psychology programs, decided to offer a graduate degree program focused on a discipline coalescing at the intersection of the three. When I entered the marketplace as a newly conferred graduate with a Master's in Human-Computer Interaction, I can recall trying to explain to my family and friends exactly what HCI was—something I still occasionally find myself doing. Describing our discipline to potential employers was a recurring challenge: many were confused by a CS degree without production programming, a design degree that didn't deal primarily with product form, or a cognitive psychologist who wasn't solely focused on modeling human performance or conducting experimentally driven usability testing.
Today's employment prospects are dramatically different. Each week, BayCHI distributes nearly 80 job listings seeking precisely the sort of multidisciplinary candidates the HCI Masters program at Carnegie Mellon University continues to produce (albeit in greater quantities) to this day—and those are only the jobs in the Bay Area. In my own recent experience on both sides of the job market, I found hiring managers eager for competent Interaction Designers; engineers desperate for design resources to provide direction and structure; and other Interaction Designers seeking capable colleagues. Yet there are still plenty of companies in software, hardware, and aerospace that focus on engineering first and human factors second, with Interaction Design rarely ranking at all. How can Interaction Designers best integrate design process into these sorts of organizations? Consider this trio of strategies, which have positioned teams in such organizations for success:
— Defining and employing a design process as a mechanism to set, manage, and fulfill expectations.
— Establishing and maintaining the design team's credibility with implementers, stakeholders, and users.
— Judiciously using prototypes of varying and often mixed fidelity to convey design intent, collect data, and create enthusiasm for your ideas.
This discussion aims to show how, in practice, these methods can enhance the relationships and output of integrated engineering and design teams.

Process

It's hard to imagine any design training that doesn't include process as a significant part of the studio experience. Put simply, process drives repeatability—reducing the reliance on inspiration—and creates a framework in which creative professionals can execute. Developers and engineers are also familiar with using process to increase the odds of repeatable results, and most have training and experience with a variety of software development processes. However, most have little experience with the Understand | Design | Validate | Deliver sorts of processes frequently applied in design disciplines. Their experience with engineering processes nonetheless creates common ground between the two disciplines, and opportunities for setting expectations between them.
The first step is, of course, to select a design process, adopt it, and use it. Most designers are accustomed to employing a design process, most likely the one instilled by their design or studio training. Without proposing one over another, the suggestion here is to get the design team to adopt and standardize on a single process, define the range and types of deliverables for each phase, and then apply it consistently. By achieving consensus on process, phases, and deliverables, engineers, program managers, and stakeholders can develop consistent expectations about what they'll receive and when, regardless of the design resource assigned to their project.
Invest resources in educating engineers and management about your process. Many professionals outside the design discipline find the creative process mysterious and opaque, with almost no expectations about the output and deliverables it might produce, save a design spec at the conclusion. Educating engineers about which activities and deliverables belong to the Understand phase, for example, sets their expectations so that when your designers meet with engineers and stakeholders to review competitive analyses and site flow diagrams, they won't immediately be frustrated by another meeting without a design spec. By clearly indicating which deliverables to expect and when, designers can manage and meet expectations within these groups.
In the design process, it's not uncommon to have to return to previous phases. Particularly in environments where clients or executives have the vision to recognize that designers are not on a path to success, teams often return to gather broader understanding about a domain, competitors, or influences. When design is linked to a delivery schedule, however, project managers and engineers can be uncomfortable with what they perceive as "starting over." Yet similar events occur for engineers and developers: it's not uncommon to find that a method or algorithm that initially seemed viable doesn't scale adequately in practice, or fails to offer the required performance, forcing engineers to rethink their approach. Should designers find themselves needing to reassess assumptions, they can use this common ground to build rapport, assess schedule impact, and move on.
When proposing the schedule of activities for each phase of the design process, be flexible about durations. It might be difficult to gain broad understanding of a complex domain in only a few days, but the schedule may allow only that much time. Rather than fight for more time, design teams are better off executing under the schedule constraints, with the caveat that there's additional uncertainty (and a greater likelihood of revisiting the design direction). It's important to pick your battles—fight for the things that matter (like getting ahead of the development cycle) and acquiesce on the things that don't (like where you physically sit relative to the other designers or developers).
It's also important to recognize that just as there are numerous design processes out there, so too are there a variety of software development processes. While the relationship between design and traditional development styles such as Waterfall is pretty well established, the growing popularity of “agile” development methods offers an entirely new set of rules, and a real opportunity for education and evangelism.
“Agile” development methods, such as Extreme Programming (XP) and Scrum, are intended to give developers the flexibility to respond dynamically to changing requirements. However, the iterative process of designing the implementation shouldn't be mistaken for the iterative process of designing the product. What agile methods offer designers is an opportunity to design the product in a broad sense, and then to execute designs in manageable sections over the development cycle. Designers may have to do some selling to convince developers to afford them time up front to get ahead of the development cycle, but this approach has proven incredibly valuable to the HCI Group at NASA's Ames Research Center as they've worked with development teams at both their own center and the Jet Propulsion Laboratory in Pasadena, California. The group was approached about collaborating with developers working on the next generation of software tools for managing robotic surface operations on Mars, and for more than three years the three teams have worked diligently and successfully to strike a balance among the demands of XP, an integrated design process, and remote development.
The HCI Group began by defining a framework under which the suite of tools would be developed, defining broad design direction for the application using wireframes, design documents, and even a dynamic wiki environment linked directly into the developers' bug tracking tool. The framework was vetted and validated with stakeholders and users, and set the development effort on a path towards success. Designers would then work a few weeks ahead of the developers, exploring functionality, testing designs, and developing specifications for sections of functionality (“search data,” for example). The results of this effort have been enthusiastically received by their users, and are scheduled for use on two upcoming Mars missions.

Credibility

Once a design team has codified its process, educated engineers and stakeholders about the outputs and timing of its deliverables, and executed as promised, one beneficial consequence should be growing credibility throughout the organization. Setting reasonable expectations and achieving them is among the most important things a design team can do to establish a baseline of credibility, but it's not the only thing.
A designer can gain a tremendous amount of knowledge and respect from stakeholders and users by embedding with them where possible. For example, if an online retailer wants its designers to better understand and espouse the same "voice" employed at its brick-and-mortar locations, what better way than to train a designer and staff him at a retail location for a week or two? If a designer wants to understand the service gaps his application leaves between his customers and their goals, staffing him on phone support is an excellent way to help him feel the users' pain.
The Ames HCI Group has enjoyed good results using this strategy. In 2003, having collaborated successfully with researchers on planning tools for the Mars Exploration Rovers, designers were trained and embedded to support mission scientists using those planning tools. Staffed both as support personnel for Tactical Activity Planning and as researchers observing the process of collaborative scientific discovery, team members had almost unfettered access to the mission participants and their tools.
Tactical Activity Planning occurred nightly during the first 90 "sols"—or Martian solar days—of the mission (the "nominal" mission). Since each rover was expected to last only 90 sols before succumbing to the buildup of Martian dust (which would gradually block the solar panels until they could no longer recharge the spacecraft's batteries), every minute of Martian daylight was precious. Martian sols are roughly 40 minutes longer than Earth days, and, in order to maximize robotic activity during those precious 90 sols, the mission was run on "Martian time"; researchers synched their clocks and watches to it. This meant the researchers' schedules slid forward about 40 minutes each day—the meeting scheduled for 8 am local time the first day would be at 8:40 am local time the next day, until it was occurring at around 8 pm local time 18 sols later.
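As a quick sanity check of that arithmetic, here is a minimal sketch in Python; the start date is hypothetical, and the sol length used (24 hours, 39 minutes, 35 seconds) is the standard figure behind the roughly-40-minutes-per-day slide:

    # Minimal sketch of "Mars time" drift. The start date is hypothetical;
    # a sol is 24h 39m 35s, i.e. ~40 minutes longer than an Earth day.
    from datetime import datetime, timedelta

    SOL = timedelta(hours=24, minutes=39, seconds=35)
    DRIFT_PER_SOL = SOL - timedelta(hours=24)  # ~39.6 minutes

    first_meeting = datetime(2004, 1, 5, 8, 0)  # hypothetical: 8:00 am on sol 1
    for sol in (0, 1, 18):
        meeting = first_meeting + sol * DRIFT_PER_SOL
        print(f"sol {sol + 1:2d}: meeting at {meeting:%I:%M %p} local time")

Running this prints 08:00 AM, 08:39 AM, and 07:52 PM, in line with the "around 8 pm after 18 sols" experience described above.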
While even the slightly romanticized JPL robot mascots gave web visitors the impression that the robots used onboard autonomous planning to sort out their daily activities, the reality was that mission planners kept the robots on a pretty tight leash, sending up carefully formed plans, generated each night, to dictate activities for the upcoming day. Each day the spacecraft would send back all of its data from the previous day—images, spectroscopic results, and vehicle telemetry. While the rovers slept through the Martian night, scientists would review the data and, in domain-specific groups (such as "atmospheric" or "geology"), formulate what they would like to do during the following day. These groups came together to propose the next day's activities and negotiate for the limited resources within each sol. Next they would develop a plan, which was turned into a sequence the spacecraft could understand; this plan was ultimately transmitted to each rover for execution the following day.
Each of these steps was supported by a collection of applications, including several core tools and a handful of scripts. Some of the systems in place for the mission were tools that had been reviewed and redesigned by the HCI Group, offering the designers a unique opportunity: the chance to provide day-to-day (and sometimes night-to-night) support for applications whose design they had a hand in, directly in the context of use. Accepting this opportunity, one of the team members was staffed on the mission in the role of Tactical Activity Planner Support.
To say that they learned more than they could have imagined about the domain, the users, and their goals would be an understatement. Working day to day with planners, scientists, and mission managers provided a spectacularly rich set of data, all of which the designers are currently working to fold into the next generation of data browsing and tactical planning software for interplanetary robotic exploration. More importantly, this level of integration built trust, established credibility, and fostered relationships that have proven invaluable in the ongoing efforts; designers developed close working relationships with the team tasked with developing the next generation of software tools, which made tight integration possible (and successful) during the current development effort. They also succeeded in conveying the value of iterative design processes to the mission managers, many of whom are critical stakeholders (and understandably protective gatekeepers for future user access) in upcoming missions. It's critical to recognize opportunities like these as a medium for conveying the value of iterative design, and to take advantage of them.
The rovers are still chugging along, with Spirit and Opportunity having now conducted science for twelve times their expected lifetimes! This extended mission now provides a test bed for the designers and developers to evaluate concepts being implemented for the next-generation tools mentioned earlier. Soon after the rovers entered their "extended missions" (the time beyond that initial 90 sols), developers began to contemplate ways to improve the tools and their development practices at the same time. The HCI Group, having established both credibility and professional relationships with the developers, became collaborators in this new effort almost from the outset.
Designers, particularly those working in complex technical domains, should likewise never underestimate the power of data in establishing credibility for their design decisions. The Ames HCI Group, having spent time supporting the teams developing the software used to organize evidentiary data in the Columbia accident investigation, became interested in the systems used to collect data generated during mission anomalies. Anomalies, in this sense, refer to anything unexpected, good or bad, that occurs during a mission. In many cases, anomalies are the precursors of mishaps. By studying these events, the group saw that systems could be designed to support their collection in a way that standardized procedures and expanded the searchable data agency-wide.
Through the support of the center's Chief Engineer, the HCI Group gained significant access to observe anomaly data collection across a variety of settings and mission phases. Significant time and resources were invested collecting data using Beyer and Holtzblatt's Contextual Inquiry methodology. At the conclusion of this inquiry, the resulting models, process analyses, and prototypes were presented to the Chief Engineer; subsequently, to the missions that participated in the observations; and finally, to funding managers at NASA Headquarters. The credibility this data lent to the design decisions based on it made a powerful case for the creation of such systems, and ultimately led to a significant funding decision: Ames is now tasked with replacing the existing anomaly resolution infrastructure, based at least in part on this work.

Prototyping

If you have only one card to play when trying to appeal to the sensibilities of engineers, scientists, or developers, your safest bet is clearly "data." While the styles of these different groups can be as varied as flakes of snow, their view of data is consistent: data drives so much of what these folks do from day to day that it provides at least a common starting point for your conversation. While they might not always concur with your data, it at least provides common ground—a framework, and language, within which you can reach consensus.
At the same time, it's important to bring the right data to the party. Prototypes are commonly described as either low or high fidelity—but this binary set of descriptors barely scratches the surface of the range of fidelity possibilities. In "Breaking the Fidelity Barrier" (McCurdy, Connors, Pyrzak, Kanefsky, and Vera, CHI 2006), the authors described five dimensions along which fidelity can vary, from low to high:
— Visual Fidelity
— Depth
— Breadth
— Interactivity
— Data Fidelity
Prototypes can be high or low fidelity visually—hand-drawn sketches vs. pixel-accurate renderings. The navigation can be high or low fidelity in terms of breadth or depth. They can also have high or low fidelity interactivity and, perhaps most importantly, high or low fidelity data, where high fidelity data might represent an actual data set and low fidelity data might be a few spoofed data elements—"lorem ipsum" rather than actual text, for example.
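To make these dimensions concrete, one could record a prototype's intended fidelity profile as simple structured data. Here is a minimal sketch in Python; the field names and the low/high shorthand are my own, not notation from the CHI paper:

    # Illustrative shorthand for a prototype's profile along the five
    # fidelity dimensions described above, scored "low" or "high" each.
    from dataclasses import dataclass

    @dataclass
    class FidelityProfile:
        visual: str         # hand-drawn sketches vs. pixel-accurate comps
        breadth: str        # how much of the UI surface is represented
        depth: str          # how far down each path the prototype goes
        interactivity: str  # static screens vs. live, working controls
        data: str           # "lorem ipsum" vs. an actual data set

    # Example: a look-and-feel test, per the discussion that follows
    look_and_feel = FidelityProfile(
        visual="high", breadth="low", depth="low",
        interactivity="low", data="low",
    )
    print(look_and_feel)

Declaring the intended profile up front also makes it easier to tell engineers exactly which aspects of a given artifact are (and are not) meant to be evaluated.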
The advent of portable data formats such as XML has really opened the floodgates for high fidelity data models underlying otherwise low fidelity artifacts. Why might one go to the trouble of using a high fidelity data set? Consider the cable TV or PVR channel guide: it's all well and good for designers to propose flashy, fully labeled, lickable candy-tiles for the 10 shows they might include in their comps, but it's another thing to see that treatment in a display of 400 channels, where many of the programming items might be only 30 minutes long and therefore too small to support a 50+ character label. In this example, the context supplied by using the real data set might immediately illuminate the successes and "opportunities" within the design, and call attention to the proposed design's lack of scalability.
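As a rough sketch of how real data can expose this scaling problem automatically, imagine the listings are available in an XMLTV-style file; the file name, the tag names, and the per-tile character budget below are all assumptions for illustration:

    # Rough sketch: scan a real programme listing (XMLTV-style) and flag
    # titles that won't fit a 30-minute tile. "channel_guide.xml", the
    # tag names, and the 18-character budget are hypothetical.
    import xml.etree.ElementTree as ET

    TILE_CHARS_PER_30_MIN = 18  # assumed label budget for a 30-minute tile

    tree = ET.parse("channel_guide.xml")
    for prog in tree.getroot().iter("programme"):
        title = prog.findtext("title", default="")
        if len(title) > TILE_CHARS_PER_30_MIN:
            print(f"won't fit: {title!r} ({len(title)} chars)")

Even this much is often enough to move a conversation from "the comp looks great" to "here are all the titles the proposed treatment truncates."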
This illustrates the variety of data you can collect by varying your prototypes along these five axes. A prototype with low visual fidelity but high fidelity depth can help evaluators elicit user responses to an entire process through an artifact (such as a start-to-finish ATM transaction). A series of screens with high visual fidelity but low fidelity along the other dimensions is often used to gather reactions to the look and feel of a product. To gather data about users' ability to interact with the system, and about the scalability of the data representations, it would be useful to select high fidelity along the interactivity and data fidelity dimensions when designing and assembling a mixed fidelity prototype.
Nothing will assuage a developer's fear that a designer has proposed an unscalable or incomprehensible solution faster than user-test data demonstrating the scalability and effectiveness of an interface built on real data.
The HCI Group at Ames took exactly this approach when designing the next generation of tools for robotic surface operations. Once the set of all current plans (both as planned and as executed) had been captured in XML, it took very little effort for one (talented) developer to create a range of prototypes demonstrating new visualizations and interaction methods that operated on the real MER data. Using this mixed fidelity prototype, the authors were able to conduct ongoing tests with actual users of the system without ever having to make excuses for simulated data. Users were also able to focus on the interactions and visualizations rather than on irregularities in the data presented (since there were none), and they were looking at the same data they used in their production systems, in nearly real time.
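The actual MER plan schema isn't reproduced in this article, but a hedged sketch of the approach (driving a prototype directly from the captured plan XML) might look like this, with the file, element, and attribute names assumed:

    # Sketch: feed real plan XML into a prototype timeline. The file name,
    # element names, and attribute names below are assumptions, not the
    # real MER schema.
    import xml.etree.ElementTree as ET

    plan = ET.parse("sol_105_plan.xml").getroot()  # hypothetical capture
    activities = [
        (act.get("name", "?"), int(act.get("duration_s", "0")))
        for act in plan.iter("activity")
    ]
    # Render exactly what operators planned (or executed), so test users
    # never have to squint past simulated data.
    for name, duration in sorted(activities, key=lambda a: -a[1]):
        print(f"{name:30s} {duration:6d} s")

The point is less the parsing than the payoff: once the real data is machine-readable, every prototype iteration can be exercised against the full, messy corpus rather than a designer's ten favorite examples.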
By carefully selecting these two dimensions to focus on in our mixed fidelity prototype, our team was able to gather detailed data on a timed task—something that would be impossible without the high fidelity data and interactions. This sort of data, presented in the context of development triage, makes a particularly compelling argument (especially if the audience has a healthy skepticism with respect to design processes).
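Timing a task inside such a prototype session takes very little machinery. A minimal sketch, where the task wording, participant ID, and log format are all illustrative:

    # Illustrative timed-task harness for moderated tests of a prototype.
    import csv
    import time

    def timed_task(label: str) -> float:
        """Time one user task, with the moderator pressing Enter."""
        input(f"Press Enter when the participant starts: {label} ")
        start = time.perf_counter()
        input("Press Enter when the task is complete. ")
        return time.perf_counter() - start

    with open("task_times.csv", "a", newline="") as f:
        elapsed = timed_task("Locate the longest activity in the sol plan")
        csv.writer(f).writerow(["P01", "find-longest-activity", f"{elapsed:.1f}"])

Numbers captured this way, against real data and real interactions, are the sort that survive development triage.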

What have we learned?

The industry has changed quite a bit in the last 8 years. Interaction Design, and the design disciplines in general, have enjoyed quite a surge in acceptance and popularity—call it our own “iPod Halo effect.” The success of well-designed products from OXO, Apple, Volkswagen, Target, and others has opened the eyes of engineers and accountants in a variety of industries. However, like any relatively new discipline that finds itself in demand, we must take care to integrate ourselves into existing organizations and processes—swim with the tide (or in some cases, the riptide) rather than against it. There is little likelihood of this integration if we cannot foster trust while instigating and sustaining interdisciplinary communication.
There's perhaps a cautionary tale for design practitioners to consider: recall the introduction of Information Technology into corporate infrastructure in the late '50s and early '60s. In those days computers were mysterious props from the sets of science fiction movies, accompanied by somewhat vague promises of increased efficiency and worker productivity. They were installed in special semi-hermetic, starkly lit white rooms behind glass windows, and serviced by white-robed acolytes. Eventually many organizations' IT departments became focused more on their own growth and self-perpetuation than on the broader goals of their companies. The result is still felt—IT departments frequently at odds with business managers, and endless deployments of ever newer and larger internal projects, many of which are consistently ranked by their own users as failures.
How can we as design practitioners avoid a similar fate? By fostering trust with external teams, stakeholders, and users, using any and all means at our disposal. Of course it's not always possible to find yourself staffed on the teams you are building tools for, but there are almost always opportunities for contextual observation, and we've found users relish the opportunity to have a voice in the design process.
Another approach designers can adopt is supporting design decisions, where possible, with data. It is important that design decisions be set in an empirical context rather than at the “whim of that designer person.”
Finally, despite having worked in a range of development environments, the one thing I've found that consistently works well is the close integration of design resources within development teams. This includes having designers participate in bug/feature priority setting, and having design issues assigned to them as "bugs" or feature requests within the broader development tracking mechanism. When designers become part of this broader engineering team, interpersonal relationships—and trust—are forged and solidified.
By consistently articulating and applying a design process, generating and maintaining credibility, and judiciously using prototypes of varying and often mixed fidelity to convey design intent, designers have more opportunities than ever to bring design practice into organizations—organizations that are newly receptive to design application, and are eagerly anticipating the results.
Copyright for this article is held by Chris Connors; reprinted here with permission.