Getting Design Done
Ellen Beldner
ChoiceVendor
Ellen Beldner is the User Experience director at ChoiceVendor, a business software startup in San Francisco. She has been designing software in Silicon Valley since 2000, including four years as a UE designer at Google (when this essay was originally written) and a year leading the design of YouTube's monetization and copyright-monitoring software. Ellen graduated from Carnegie Mellon University with a degree in HCI and professional writing, and her focus is on enterprise and expert-use software.
Designers make decisions. We must make them in the face of uncertainty, of constraints imposed by medium, time, and fiat, constantly balancing immensely complex systems of interwoven attributes and elements. But the designer is a person—not the person—who makes decisions. Designers must learn not just how to make decisions about the interface, but how to navigate group decision-making processes with an essential design vision left intact.
Additionally, designers communicate. One of the biggest lessons I have learned about the practice of design is that it takes at least as much time to communicate your design to other people as it does to generate and plan the solution.
We also communicate to help our teams make non-UI decisions. Visualizations like flowcharts force the team to think about where the accounting system kicks in and where they have to get inventory to talk to shipping—even though they're writing the credit card processor and those other systems weren't obviously part of the payment mechanism. These are not interface decisions and the designer does not own them, but our skills as communicators, sketchers, and documenters make this an important secondary role.
Cooper88 finds the communication role sufficiently important and time-consuming that it hires design communicators, saying in the online job description, “[i]f our vision is to be realized, it is essential that we effectively communicate the design and its rationale.”89 The role is particularly important at a consultancy, where much of the project's value is projected through its deliverables. I once attended a talk given by a designer from Meta. He showed one of their gorgeous 11x17 color-printed design deliverable books. The book was bound and must have been 200 pages. It was impressive, and documented the output of a million-dollar branding and site design project for a startup created around 1999. He told us that they spent a huge amount of time designing and producing that deliverable, because as a consultancy working on such massively expensive projects, they learned that clients want something physical to hold on to. They needed a physical artifact to embody the value of the design work that had been done.
88The design shop formerly known as “Cooper Interaction Design.” Naming the firm with the same word that is used to refer to its founder and principal, Alan Cooper, strikes me as a bit confusing due to ambiguity of referent. There is not a chance that this ambiguity slipped by unnoticed. I wonder: what direction does the metonymy go?
Designers are familiar with the rule of sketchification: don't make your prototypes look too fancy or polished, or you'll never get really challenging feedback. Your clients, team, and users will hesitate to rip apart something that is too finished. Showing people deliberately sloppy sketch work in early-stage design ensures that you gain very high-level feedback. We forget, however, the converse: when you produce polished and thorough deliverables, you help encapsulate product decisions and give yourself and the team a sense of making decisions and having learned and progressed. Spend some time producing a few landmark deliverables on each project. They'll serve as reference points for you and the rest of the team.
Many teams and projects don't think to ask designers for deliverables not immediately related to the UI, be it early sketchwork or auxiliary flow charts to help a team figure out what it's doing. On every project where I've used these rapid prototyping techniques and IA-style low-fidelity deliverables, even without initial support, team leads have thanked me. Most engineering managers notice the difference and have specifically requested that my colleagues do more of this work. The work named “Information Architecture,” pretentious as it sounds to many engineers and managers, specifically prevents many of the horrors that design-by-VP and design-by-committee unleash on the world.
I get a sort of perverse satisfaction when engineers, totally innocent of any design methodology prejudices, come out of a design-by-committee meeting shell-shocked and horrified. “Is this what you do?” they ask me. “Why is getting a good UI such a painful experience?” I explain that it doesn't have to be that bad, and in fact, that design-by-committee is known in the field to be extremely painful and time-consuming. The stuff that many managers reject as nonsense—IA, flow diagrams, user story flows—actually makes life better.
I was on a major project several years ago where the product manager didn't realize the value of IA work; the tech lead thought it was silly, and I was too green to firmly stand up for the process by doing the work and demonstrating its value. As a result, we had a nightmare of a process with 6 months of daily UI review meetings in which every minute decision got revisited a thousand times and the resulting UI was confused and fairly incoherent. Not to mention the effect on team morale—two junior PMs, the UI engineer, the usability analyst, and I were miserable on that project. I suspect that the only two people who got anything out of it were the senior PM and the tech lead, who ended up dating for two years afterwards.
Good, non-bureaucratic IA work helps scope and structure decisions, and it tends to remove dependencies on having an entire UI done before engineering can start (referred to, generally with shudders, as “waterfall design”). Begin with use cases and scenarios: make decisions about what the product needs to let people do—and get the team to agree on that part. From there, it tends to be easier to figure out what features make sense and what features are most important, and you typically get reasonable agreement on which features are needed first and which ones are nice-to-haves. Engineering can start architecting the major systems needed. Then, given the feature set, you can start planning the particular pages and interface widgets that will support those features.
This means that you will have less control over the total system but that the system will get launched faster. As you're working out the details of the design you may realize that you want, say, the rate of change in duration between user logins to drive the appearance of a help tip. The engineers tell you that it would take a massive database restructure. Had you completely designed the system before any engineering work took place and written a full UI spec, yes, the engineers would have known the requirement and could have built it into the system: but then engineering work would have been delayed by a few months.
Where the designer's word is law—or where there is an interfering VP or a scatterbrained PM—late feature revelations are disastrous. The HCI field has strenuously proven in the last 25 years that it's worth the engineering time to generally have high-level design researched and completed before engineering work begins. But in practice, particularly at software companies, the detailed subtleties of interface design are very rarely reason to restructure the database or hold back on launch. And even though product planning typically begins before significant engineering expenditure, the people who do the product planning are very rarely people who are trained in user-centered product design. Designers are often seen as more of an engineering resource: someone to call in when you start to build, not when you start to plan.
So over time you learn to anticipate the needs of your team and your product: by the time they ask for the feature, you already have a fully done design spec that is backed by usability work. The fact that the work is already done and tested is a good enough reason, in most of these fast-paced no-time-to-think situations, to follow the design.
Don't let anyone force you into particular mechanisms of design, be it a process or deliverable. All of these artifacts, all of these processes and methodologies exist because different people figured out what it takes to get design work delivered. Clever anthropologists and PhD HCI students distilled lots of these techniques, boiled them down into one process or another, and those processes folded back into the field.
Thus we all know that you start with needs and requirements gathering, do iterative prototyping based on team and usability feedback, and then develop and deploy.
Except Google. 90
90And maybe Apple, although I haven't worked there so I don't know - I just hear rumors.
It always shocks practitioners from school or other companies when I mention that most of Google's product design and development is nothing like the practice we're taught about in school—yet it gets the job done. Decisions are scrutinized from the highest level. Product managers are generally instructed that they own the user interface. Early stage field research is regarded with a solid dose of skepticism and is underutilized by PMs and tech leads who aren't trained to inform product decisions with it. In most cases, we design and build some level of prototype, and then a designer is called in: either a face-lifted product gets launched and then we do usability and start revising it to task-centricness; or we do task-centricness and it gets launched as an alpha or beta.
I find it problematic to categorically say that the interaction designer is the arbiter of all interaction and must be ultimately responsible for the user experience of a product, working with a phalanx of graphic designers, interface programmers, usability analysts, and the like. This is because lots of good products get made without someone who is a trained and practicing designer. I would love to bestow automatic and official authority on anyone who does meet the qualifications of a designer, but that's as silly as saying that only the person who graduated from culinary school should be roasting the chicken. Roasting a chicken isn't as easy as making toast, but if you follow directions you'll be fine. In reality, chickens get roasted and product decisions get made by all sorts of people for all sorts of reasons. 91
91On the other hand, if you have Jamie Oliver or Dana Stewart standing behind you, apron on, waving a pepper grinder and a bowl of cornbread stuffing, saying “Would you please get out of the kitchen so I can cook you an awesome chicken? Please? Seriously. You're pissing me off. Don't you want a yummy chicken? I'd really love to make one for you. It's my special recipe,” you'd get out of the way, wouldn't you?
If you are trained in the science of Human-Computer Interaction and in the art of design; if you are intuitive and emotive and empathetic; if you are logical and creative, artistic and mathematical; you are a designer and you need to be calling the shots. It makes sense. It is why you were hired. You have the right to the authority of your expertise. Sure, you will mess up and make bad decisions on occasion, and being the designer does not mean you are a prima donna who is allowed to ignore feedback. But the person who has the most expertise in user-centered design (you, presumably) should be the one who bears the responsibility for design decisions. It maximizes efficiency on your team. 92
92Not only is it really slow to have a team of 8 people making collaborative decisions on tiny UI details, but it makes you want to poke out each other's eyeballs. Missing eyeballs are not good for team morale; if there is one thing Kill Bill taught me, it's that.
Most people who study HCI or one of the design fields go into that field in the first place because they want to design products start to finish. Designers are taught processes and procedures for working with bizdev people and with engineers and marketing and writing; for doing early UI prototyping that can be tested, in order to make decisions without all the drama; for gathering requirements that are based on fact and vision; for staging decisions appropriately; and for validating and testing assumptions in a timely way. This isn't all that is needed to launch a product and I don't want to insinuate that the world's problems would be solved if only UI were in charge of everything. But you must seize decision space where you are the expert, and consequently you must take responsibility for the mistakes you will make.

Working with product managers

When I was preparing to quit a previous job, I made an outline of the reasons I wanted to quit, as a prelude to my exit interview. The list began like this:
1. Had to instruct “Vince,” my product manager, to stop touching me (“Don't worry. You'll have your say.” [pat pat pat])
a. My product manager has a bald spot, is short, drives a Camaro, and smells like garlic.
i. I know what my product manager smells like
2. I am being explicitly told to plagiarize the UI for our competitor's analogous product.
a. That software is a failed product made by a company that just got de-listed from the stock exchange. The goal of their recent redesign—for which they had paid frog design a well-deserved several million—was widget-level consistency amongst all the products in that company's massive product suite. But that product, just like their others, was not designed for the task at hand in a user-centric way.
3. Vince micromanages the UI and tells me to do things that contradict 20 years of HCI research with no reason other than “Our competitors do it.” I have expressed my frustration at this situation to him and to my manager, to no avail. 93
93The list continued with some choice remarks about the intelligence of the CEO and the motivational posters on the walls. No joke, I walked in one day before a potential client toured the office as part of a due diligence check and found “Teamwork: When we all work together, we all win together” and “Flexibility: stretch your potential.”
I quit that job because the product manager was a micromanager who didn't know what he was doing. He took no pride in designing the best software possible; he was unwilling to listen to or consider my expertise; and he told me to do things that I thought were professionally unethical. Most designers work in these conditions every day. At Google, there is one PM that many of us work with at some point or another—let's call him Richard. He has quite a lot of jurisdiction. There's a point about 3 or 6 months into each designer's stint when they start to get sort of quiet and flummoxed… and then there's the fateful day when they come back from a meeting with a glazed look in their eye. They cautiously approach a designer who's been around longer and say “So…. I was wondering… I just got back from this review….”
The designer who's been there longer nods their head and says “Come with me.” You take the newer designer—and these are not novices; they have come from Stanford, CMU, Berkeley; eBay, Amazon, Microsoft—into a conference room and you say:
“Richard's UI review meeting?”
Their eyes light up: you understand.
“Yeah… is it…?”
“Always like that? Yes.”
We had one designer on staff, experienced and talented, who had come to us from one of our more successful acquisitions. He went into a review one day with bullet points on each search result. Richard asked, “Why are you using bullet points? Those are too heavy on the page—could you try hyphens and come back next week?”
So that designer not only did mockups with hyphens, but with plusses, no punctuation, and quite a few other variants. The next week, I was sitting at my desk when he returned to the cubes, shaking his head with a look of peeved astonishment on his face, lip curled, eyebrows raised, mouth slightly agape in that “WTF” expression.
“What happened?” I asked.
“Richard took one look at the blurbs, with all the variants I had done, stopped me mid-sentence, and said, ‘why aren't you using bullets?’”
“You're kidding me.”
“Nope. Screw this. I'm just gonna keep doing what he tells me until he shuts up.” That designer quit not long after.
I've looked for information on the profession of product management. It is not a formalized or academic field: it's a position that exists only within industry. There is very little professional literature about what it means to be a product manager—much less a good one. The field of HCI has spent the past 20 years yelling its head off about how to work with engineers; as a result, most engineers are comfortable and happy with basic user-centric methodologies (like prototyping before implementation and doing user testing to help make decisions faster).
Yet no one teaches product managers how to do their job. This is not to say that PMs do not serve a function or are incompetent at greater rates than the general population. From what I have seen, the PM's role tends to deal with business requirements, UI themes and flows, and prioritizing technical work and features. Clearly PMs serve useful functions and do useful things, or companies like Microsoft, Google, and all the other major software development organizations wouldn't have so many people filling this role. And I've worked with PMs who I completely love; I wouldn't trade their presence on projects for anything. I usually find those engagements successful because our skills are complementary. They do work that is unique to their expertise and I do work that is unique to mine. Everyone feels like a useful and valued contributor.
Now that I work at a company with a large design staff and even more product managers, I can see how each of us works well with different product managers. Jill is a super-organized checklist-driven micromanaging machine: I go nuts when I work with her because if I'm missing one mockup or go in a direction she didn't expect, she gets upset. Other designers think this is dandy. I like working with Peter because he's chaotic and it gives me a lot of space to define the UI and decide what we need to do, but this makes some designers bonkers because they're not interested in the project planning aspects.
Working relationships can be every bit as dysfunctional and demoralizing as romantic relationships. Finding a job that you love is at least as hard as finding a great partner—harder, perhaps, because it's like dating the 5 or 6 people on the immediate team rather than just the one boyfriend.
Over time you learn to quickly identify the sorts of projects that will be a breeze and the ones that will test your skills. Be conscious and deliberate about this. Keep notes about what makes you the happiest when you're working, and where you drag your feet. When you interview for new jobs or move on to new projects, you'll get better and better at matching yourself to situations where you're doing what you want—and at least you'll have more accurate expectations going in.

Your job is to make decisions and deliver them to other people

Design is deliberate decision-making—which is sometimes ruthless—in pursuit of a vision.
Design is rhetoric. It is the act of communicating an idea to a particular audience, generally using a particular medium.
And it is the job of the interaction designer to make decisions about the product and its interface and then communicate those decisions to the people who have to build the product. 94
94This is like Sen-Rikyu, the father of the modern Japanese tea ceremony, explaining this ceremony as follows: “Tea is naught but this. First you make the water boil. Then infuse the tea. Then you drink it properly. That is all you need to know.”
One of the greatest blocks to good design is the tension between authoritative decision-making and the humility and creativity that are at the core of our profession. You cannot be a good designer or engineer unless you are always trying to solve problems amidst new constraints. 95
95In fact, the best way to get an engineer to implement a feature you want is to pose it as a problem for them to solve—“I'm not sure how we could make this really fast for the user, maybe some sort of date parser? Is it even possible to do that?” They'll start pondering and maybe they'll come up with an implemented solution to the problem in a spare hour. PMs will use this trick on you, too.
Moreover, it is inimical to your nature as a designer to allow yourself the hubris of too much authoritativeness. You know for a fact—you must know—that at any point your design may be proven ineffective for its purported uses. No matter how much you believe in the design, if it doesn't work, you have to let it go.
So we often find ourselves hesitant to make authoritative recommendations. An engineer or PM can justifiably say “well, but what about this use case? It would suggest that we make the flow work like this, instead.” You can get stymied, circling back and forth between designs, unable to make a decision or preserve a coherent vision of the interface.
You must learn to make decisions in the face of uncertainty, always preserving your memory of the paths and solutions you did not happen to take so you can return if you need to revisit a decision. Designers always have to make decisions with imperfect data, and very often with inadequate data. When you can't make authoritative decisions because you don't have sufficient data, you have to state assumptions and make recommendations.
When you aren't sure, proactively point it out to your teammates. Bring it up for discussion and collect their feedback: one of them may have some additional data or insight. Point out the weaknesses in your own work and the gaps in your knowledge; it will mitigate risk for the project. You should also suggest a plan for what type of data you'd need to make that decision; if you work with a usability analyst, he may have better suggestions, and he'll definitely know how to get the data that you need. Engineers may be able to collect data or run a logs analysis; your PM might be able to set up a focus group.
Given the methodological squishiness of most fieldwork and usability testing, your data will never be perfectly reliable. Some data, however, is better than none. Even if you're reading a collection of interviews from CIO magazine about average costs of enterprise software installations, you will have some objective, external thing to point to and say “this is why we're going to do it this way. We may be wrong, but we'll make a note of this as an assumption to keep an eye on over time.”

Jack of all trades, master of none

If you have 2 years in the field versus 10 versus 20, you will have widely different competence at managing projects and making good decisions. And as you negotiate your role on a particular project, you will have to draw boundaries between yourself and the tech lead, yourself and the product manager.
I think many of us designers have a tendency to take on more than our capacities truly allow. First, you're afraid to say “no” in the workplace, for fear you'll seem like a lightweight or slacker. Second, the interaction field is multidisciplinary and we gather a fairly broad range of skills—experimental design, copywriting, info architecture, HTML, JavaScript, visual design, icon hacking, and bug fixing. On small, lean teams—the types that you see at startups or in the IT departments of not-primarily-technology companies—the interaction professional is going to have to do a lot of these things. Compare that to a major technology innovator like Yahoo! or Microsoft—they have a plethora of highly specific HCI-related positions like information architect, visual designer, field analyst, product designer, and UI engineer.
If your role is very well defined, you have somewhat less metawork to do. But on a team with amorphous or ill-defined roles you often have to explicitly articulate what you need from your team members. You have the right to expect PRDs from your product managers, timely tech decisions from your engineers, and that once people agree on a design, it gets built to spec in good faith. 96
96(Not “Oh, I thought you wanted the button text to say ‘Lorem Ipsum’, and now it's in the hands of QA, so sorry, too late.”)
Recall the management shift that happened in the 80s and 90s, when you had to start thinking of coworkers as your “team.” This means that you are all mutually obliged to one another to cover your turf to the best of your abilities. Just because the PM is near the goal doesn't mean that they're the goalie—but if they happen to be near the goal, they'd better try to stop the other team from scoring. This is great when your team has gelled; in its dysfunctional expression, it leads to busybody micromanaging among all team members.
When you're building software, there's a list of things you have to do to get the software out the door. For example, let's say that list consists of the following To-Dos:
— scheduling and resource planning
— business analyses, market research—broad trends of market segments
— end user research—concrete minute needs of individuals
— designing and planning the behavior and functionality
— building the product
— finishing & cleaning up the product
— launching / deploying the product
— supporting the product
You have to figure out who is doing what. A lot of the roles are fairly lockstep with people's academic training and conventional organizational roles: You need some sort of person qualified to make business decisions, a researcher, some customer support people, and some engineers. The best way to figure out who is doing what is to sit down at the beginning of the project and flat out say, “I can be responsible for making these decisions and collecting usability data. I need you to give me some sort of requirements document. I'll translate that into wireframes, and we'll review them, and you'll give me your approval.”
If that arrangement sounds icky to the product manager, they have a chance to say so. It is very important to avoid the tendency to steamroll other team members. If the PM really, really thinks he should have a role in creating wireframes, you will make your lives miserable by sweeping that under the rug. If you decree that you get to do all the wireframes and the PM is not to be involved, the PM will feel marginalized and will have a latent bitterness at not getting to have sufficient say. If you're lucky he'll be passive-aggressive and nitpick you to death during review meetings until the wireframes are what he would have done. If you're unlucky he'll ignore your work, do his own, talk it over with the engineering team behind your back, and get them to implement his version (“Oh, it was faster that way. Don't worry, we'll test it, and if it doesn't work we can change it.”)
It is difficult to be direct and say to the PM, “I want to see your wireframes because those are an important way for me to understand your requirements and your thoughts, but let me produce the final deliverable.” This has the added bonus of pointing out that you're going to be doing the dirty work of producing and maintaining the design spec.
If you have a PM who comes from an HCI background, you'll have more overlap and therefore more potential conflict. For example, I work best with the business-analyst type of PM: someone who's keen on figuring out where the product fits in the rest of the organization and giving me market goals. They don't need to have the interaction vision in their head; they focus on the strategic vision and leave the implementation to the professionals.
Even when I was a relatively junior designer in my second job out of school, the product manager (an MBA with 8 years of experience) gave me the latitude to design and deliver the UI. It was one of the few pieces of enterprise software in existence that its users could actually deal with. 97
97Oracle, I'm sorry, I know you brag about your usability labs, but each of the products that I've used has been a miserable, frustrating, time-wasting, ugly, and unpleasant nightmare. I am a trained User Interface Designer and I can barely figure out how to schedule a meeting in CorporateTime's web application—and when I do, it is such a slow process that I almost fall asleep with the frustration of waiting for each freaking pageload. Also: beige? Come on. Beige is the elevator music of colors and you know it.
The interface designer is generally the person who designs the literal user interface: the screens, the buttons, the links. This is an important part of the discipline: it's the actual finishing of the house and construction of the garment, to use metaphors from other design fields.
But you can't be a brilliant designer of user interfaces unless you have a grip on the design and function of the system as a whole. That is, unless you are also a designer of experience. Sure, I can write a more thoughtful and functional credit card input form than any I've ever seen deployed in my life (and I do a LOT of shopping on the internet), but the fact that I chose a self-correcting text field instead of a dropdown for the month and year is going to make very little difference if the overall system isn't designed in a humane way. Maybe what the person really needed right then wasn't a place to input their credit card information, but a Flash movie explaining the thing they're about to buy.
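To make that concrete, here is a rough TypeScript sketch, purely illustrative (the Expiry type and normalizeExpiry helper are made up for this example, not code from any product mentioned here), of what a self-correcting month/year field might do with whatever the person types:

    // Illustrative only: accept whatever separator and digit count the user
    // types and normalize it, instead of forcing month and year dropdowns.
    interface Expiry {
      month: number; // 1-12
      year: number;  // four digits
    }

    function normalizeExpiry(raw: string): Expiry | null {
      const digits = raw.match(/\d+/g);      // handles "4/26", "04-2026", "0426"...
      if (!digits) return null;
      let month: number, year: number;
      if (digits.length >= 2) {
        month = parseInt(digits[0], 10);
        year = parseInt(digits[digits.length - 1], 10);
      } else if (digits[0].length === 4) {   // a single run of four digits: MMYY
        month = parseInt(digits[0].slice(0, 2), 10);
        year = parseInt(digits[0].slice(2), 10);
      } else {
        return null;
      }
      if (year < 100) year += 2000;          // expand two-digit years
      return month >= 1 && month <= 12 ? { month, year } : null;
    }

    // "4/26", "04-2026", and "0426" all come back as { month: 4, year: 2026 }.

The parsing itself is beside the point; what matters is that the correction happens on the machine's side instead of the person's.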
Most of the designers that I am connected to through school or work are trained to be product designers, user experience architects, or Interaction Designers. The work that we're good at doing and the work that we should be doing is a superset of “User Interface Designer.” My passion for what I do is grounded in making technology suck less for people. It is not for creating webpages—although that is my current medium and I have a lot of respect for it. To the extent that I'm stuck designing webpages in a vacuum separated from a holistic design of the total system, I'm frustrated. We cannot do our best work, as trained in current best practices of design and HCI, unless we can affect the system as a whole and the user experience in its entirety.
As you work out boundaries and skill sets, also consider the people who do the most minute bits of interface implementation. Interface coders are on the line between human and machine. They are the line. Gmail, 37signals' Basecamp, and Flickr are some of the most obvious recent web products whose UI engineering played a huge role in improving the user experience. Take Gmail: it is nearly ubiquitously praised for its fluidity, in addition to its beauty, interface, and featureset. It took years of coding and polish to make everything feel that fluid. The UI coders are just as vital to the success of the project as the high-level designer. Treat them well. That kind of person often makes a fantastic teammate for a very interface-oriented product manager who loves to do flows, or for an information architect who is less detail-oriented. The UI programmer gets a lot of latitude to make minute decisions about onClick versus onMouseover; if you care about that and you're working with someone else who does, understand that you are going to spend more time in intense discussions with this person about the best way to get the job done.
This isn't a prima facie bad thing: but if you're expecting to hand off a spec and get it implemented with no questions asked, you're going to be disappointed. This individual cares about their job and takes pride in their work, and just as you strive to make the best decisions according to your professional capacity, so will they. You will have to answer their questions, particularly if you want them to go the extra mile to make the UI spit-polish perfect. Moreover: they may see it as their job to figure out whether it should be onMouseover or onClick and might resent you for trying to over-specify.
One of the other dangers when you collaborate closely with someone who has overlapping skills is that you can end up compromising too much just to get a decision made: this person really wants the picture on the left, you think it should be on the right, so you say “let's do half and half,” or “let's put it in the middle.” It's not that this automatically creates a bad design decision, but it does mean that the decision-making process is not based on product and user needs—it's based on the political expediency of getting things done. (Every product I've ever worked on that had this design-by-committee problem ended up a mess until all the competing principals buggered off and let one person pull everything back into a single holistic design vision.) Again, it can be helpful to set a ground rule up front: the designer provides recommendations and only in rare super-crucial cases do you set absolute requirements for implementation. Not only do you create a sort of constitution for decision-making on the project, but you also tell each other “it's okay to have discussions about our working style.” You're all (probably) working towards the same goal—getting the product launched—so adjust your working styles as needed. The work habits are in service of the ultimate goal: do not make the goal bend to the will of the habits.
So the point is, you have to know your passion and figure out the types of people whose own contours give you the space to do your best work. If you overlap, you will have to take extra care to work together in a way that doesn't make you hate each other or feel like you're in each other's faces. And if you want to change your focus or learn something new, you'll need to work with the appropriate person: they'll challenge you.
Changing roles or expanding your role takes work. There are two obvious ways to do this. Suppose you want to have more of a role in doing early-stage fieldwork. You need to demonstrate your level of competence so managers and coworkers know what they can trust you to do. So you can either do your fieldwork for a noncrucial project, like an internal project, or apprentice with an experienced field researcher.
If no one in your organization is currently conducting field research, you're going to have to go out on a limb. You're going to have to make time or take extra time in the mornings and evenings. Skip your daily news surf for a week and spend an hour a day doing phone interviews. Hand out PostIt notes in a meeting and get team members involved in a participatory design exercise to brainstorm principles and anti-principles.

Bureaucracy versus politics

Two people are a relationship; three people are politics. Politics are the inevitable outcome of humans, primates, interacting with one another. We say “politics” with derision when these personal relationships—which create the foundations of preference, hierarchy, and prioritization—cause us to make organizational decisions that are not actually in our best interests as defined by an objective, industrialized, data-driven process.
Undoubtedly we need to have structures in place to enable decisions: where do resources get allocated? Whose project gets the homepage promo? Who gets multiple UI designers, full product support, tech writers, usability tests, marketing resources, in addition to engineering support? When will this product launch?
Politics is a way to solve problems, support decisions, and secure resources. Bureaucracy is also a way to solve problems, support decisions, and secure resources. Some cultures (whether corporate or governmental) use politics to meet these needs, and other cultures use bureaucracy. Or, more accurately, most cultures are somewhere on the spectrum between the two extremes.
I don't deride politics per se. Politics inevitably exist; we can't help ourselves; we're only human and it's our nature. Bureaucracy is the antidote and also its own poison. You could be filling out forms in triplicate; writing 800-page SEI product specs before doing any work and getting those signed off by everyone in eng and management.
Agile software development and XP are more flexible processes that developed in response to overly bureaucratic corporate structures that got in the way of good product development. 98
98Think Office Space. Think government job. Yuck. On the other hand, zero procedural structure turns us into Melrose Place.
I would surmise that upper management (and perhaps each of us) needs the greatest flexibility to make decisions whenever a decision might be appropriate. And I can see why: we each have to trust our gut, and if it's a week before launch and the product doesn't work, doesn't fit the vision, or looks icky, you can't launch it simply because the i's and t's of formal procedure have been dotted and crossed. You have to call a spade a spade and say “no, we can't launch that, this stuff has to be fixed.”
At a small company where everyone can talk to one another, it's effective to use politics (that is: informal, social, human interactions), and the flexibility is great. But at a company of a few thousand people, whether you can get someone's ear might depend largely on whether you've known them for 5 years already; are friends with their admin; are an expert on their project. This is not inherently dysfunctional. It's a natural tendency to leverage whatever resources you have available, and the simple nature of team-gel and longstanding business relations means that some people are going to have better access to knowledge resources than others.
Google tries to flatten this access disparity with an open-knowledge corporate culture. However, the solution obviously doesn't scale. At some point, the employees can no longer each give their time to all others. People start making harder choices about who gets their energy. Additionally, the nature of human interactions suggests that people who have seniority, are more flamboyant, and/or are more demanding will tend to command more attention: the squeaky wheel gets the grease.
I surmise that businesses use bureaucracy to flatten natural disparities of connectedness. However, these bureaucratic structures often try to eliminate politics. You don't get approval by stopping by the boss's office; instead, you have to fill out a proposal in triplicate, submit it to the boss's admin, and wait for the committee to review your proposal. Bureaucracies act under the premise that politics can be eradicated from the organization. So instead of getting attention or code or a UI because you, say, worked with the designer on a previous project and they'll do you a favor, your access to these human resources is dictated by going to a meeting, getting on the agenda, requesting the resources, and getting the manager to sign their approval. The problem with bureaucracy is that it doesn't get rid of politics. Nothing does. You can't eliminate human interactions from anything involving humans. Perhaps the best approach is to use bureaucratic and political systems in tandem to shore up each other's weaknesses, although I don't have a great sense of how you'd do this on an institutional level.

Intuition and the art of design

Several years ago, I was snowboarding with some friends, one of whom was taking his first lesson that day. I had stopped by to say hello to him and another experienced pal joined us. The beginner, as per standard operating procedure, was falling every three feet. Our other friend started to try to explain snowboarding according to the principles of physics involved: being a math major from MIT plus a black belt in aikido, he was more qualified to explain the physics of motion than your average person. He started instructing: “lean back, then forward, as the edge turns and catches you shift your weight…”
Our beginner friend tried to follow these instructions, getting more frustrated by the minute. I interrupted the erstwhile instructor.
“Mike, what you need to do is stop thinking about it, keep practicing, fall about a million times, and feel what it's like each time. Then you'll understand how to snowboard.”
He needed to develop what we call muscle memory—an intuitive, subconscious cognitive ability to maintain and adjust his balance with muscle control in this new medium.
There's a buzz in the Valley these days that design should be a science, not an art. And indeed, in the HCI community a great deal of effort has gone into, shall I say, mathematizing usability and design work. Vividence did a fantastic job of extracting comprehensive data from sophisticated clicktracking, timestamping, and integrated survey questions. Eye trackers are similarly powerful. You get precise data about where people's eyeballs went; although it doesn't tell a complete story, it's still a rich one. You learn where people's eyes fall and how a visual design draws eyeballs across a page.
Any web-based company will be as smart as it can afford on its log analysis, clicktracking, and similar. You can see trends in how users click and navigate through your site, and particularly when they begin by searching, their motive is often quite clear. If someone goes to Amazon.com and types “gift for 5 year old,” you do gain a sense of what they're looking for, and you can recreate the story of their thoughts and choices by simply tracing where they clicked at what times.
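A rough sketch of what that reconstruction looks like (illustrative only; ClickEvent and reconstructPaths are invented names, not any particular company's real logging pipeline): group raw click events by session, order them by time, and read the resulting path like a story.

    // Illustrative only: stitch raw clicktracking events back into per-session paths.
    interface ClickEvent {
      sessionId: string;
      timestamp: number;  // epoch milliseconds
      page: string;       // e.g. "/s?q=gift+for+5+year+old"
      target: string;     // e.g. "result 3: LEGO Classic set"
    }

    function reconstructPaths(events: ClickEvent[]): Map<string, ClickEvent[]> {
      const paths = new Map<string, ClickEvent[]>();
      for (const e of events) {
        const path = paths.get(e.sessionId) ?? [];
        path.push(e);
        paths.set(e.sessionId, path);
      }
      // Order each session's clicks so the path reads start to finish.
      for (const path of paths.values()) {
        path.sort((a, b) => a.timestamp - b.timestamp);
      }
      return paths;
    }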
This, of course, fails quite a lot.
Did that incomprehensible click on “View this page in Japanese” actually indicate that the user wanted to switch to the Japanese interface, or did the FedEx guy ring the bell, the dog barked, their hand twitched, and they accidentally clicked on the link? Did someone abandon their shopping cart because they changed their mind about the purchase, or because they never had any intention of making the purchase but wanted to see how much those 3 items would cost?
All of the designers I've discussed this with think that it's bollocks to try to reduce design to a science. This is precisely the point Malcolm Gladwell makes in the introduction to Blink, as he discusses the Getty museum's decision to purchase a rare statue for $10 million. It was an unusually well-preserved specimen of a particular style of ancient Greek statuary, called a kouros. The Getty spent a massive amount of energy analyzing the statue: geological samples, a zillion types of x-rays, chemical tests, you name it. The clincher was a layer of calcite on the statue's surface, a chemical change that could only have happened to the statue's marble after many hundreds of years. The statue definitely wasn't a fake. Science had proved as much.
Gladwell reports that the Getty curators, proud of their find, showed the statue to a few art historians and experts in ancient Greek statuary. Every single one of those experts instinctively thought something was wrong. So the Getty shipped it to Greece, where more historians took a look.
Gladwell cites comments from historians like “Anyone who has ever seen a sculpture coming out of the ground… could tell that that thing has never been in the ground,” and their feelings of “intuitive repulsion” towards the statue. 99
99Gladwell, Malcolm. Blink: The Power of Thinking Without Thinking. Little, Brown, 2005.
Turns out that the statue was, after all, a fake. The experts' intuitions told them in 30 seconds what it had taken the Getty 14 months to wrongly prove with science.
This does not mean that science is bad. But our methods of measurement are currently too gross to give us perfect answers with true accuracy. And much of science still relies on intuition and hunches; your ability to make creative leaps depends on the sum of your previous experience. In the case of Gladwell's statue, the examining scientist later learned that the aged marble can be faked with a type of mold. Since he did not initially realize this was a possibility, it of course did not occur to him to test for this contingency.
Testing a user interface is as complex and multivariate as testing the effect of a new medicine: and like getting a new drug approved by the FDA, true scientific validity would take years to establish. True scientific validity in HCI is not possible. It takes too long for too little return. I see UI being tested as science without the benefit of intuition and I shake my head in amazement at the realm of knowledge the scientists are denying themselves, even as they take vastly longer to reach the obvious conclusions. You may be able to measure the effect of a change in font size on sales fall-through, but you won't understand why. A moderately proficient HCI professional could quickly explain the visual scanning processes involved and the likely effect the change would have on user behavior and rhetoric. 100
100Caveat: when you work at a web-based company, due diligence CYA is probably a sufficiently good reason to precede major changes with click-through studies. But know the limitations and know why you're doing it: validation, not inspiration.
It is easy to think that design is a science when you have at your disposal an army of engineers and analysts obligated to do what you tell them, a flexible and massive budget, the ability to set your own launch dates, and millions of users. If you operate without any of these constraints that are fundamental to other businesses, you can attempt to quantify the effect of every byte change if you want. But don't delude yourself that this is process innovation. It's a waste of time, money, and effort, because this expenditure is what at least 20 years of software process innovation has been trying to reduce.
At the CHI conference in 2004, a team of researchers from IBM presented results of a two-year study they had conducted on email. 101 They suggested that email needed some major changes, core amongst which were instant account-wide search; threaded conversations; message excerpts to summarize content; labels instead of folders; 102 and removing messages from view. It was a tightly conducted research project.
101ReMail: A Reinvented Email. Steven L. Rohall, Dan Gruen, Paul Moody, Martin Wattenberg, Mia Stern, Bernard Kerr, Bob Stachel, Kushal Dave, Robert Armes, Eric Wilcox. IBM T.J. Watson Research Center, Cambridge, MA 02142 USA.
102The IBM team used the term “collections”—but the concept is that a message can live in more than one collection, the way Gmail labels work, instead of belonging to only one folder.
But ironically, just a couple of weeks before, Google had launched the Gmail Beta. The Gmail team had reached the same conclusions that the IBM team did, as evidenced by the featureset that they launched with—except the Gmail team made most of those core decisions intuitively without conventional field research. (Email does happen to be one of those lucky cases where the developers are some users—not “the” users, but some of them.) While IBM was running their research project, Gmail got built and deployed.
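The labels-versus-folders distinction footnote 102 describes is easy to see as a data model. This is a toy sketch (the types and the addLabel helper are invented for illustration, not Gmail's or ReMail's actual schema): a foldered message has exactly one home, while a labeled message can have several.

    // Toy sketch only; not Gmail's or ReMail's real data model.
    interface FolderedMessage {
      id: string;
      folder: string;       // exactly one home, e.g. "Travel"
    }

    interface LabeledMessage {
      id: string;
      labels: Set<string>;  // any number of homes, e.g. {"Travel", "Receipts"}
    }

    // "Filing" a labeled message adds a home without removing it from any other.
    function addLabel(msg: LabeledMessage, label: string): void {
      msg.labels.add(label);
    }

    const msg: LabeledMessage = { id: "msg-1", labels: new Set(["Travel"]) };
    addLabel(msg, "Receipts");   // msg now appears under both Travel and Receipts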
The first thing you learn as an Interaction Designer is that you are not the user and you must never ever trust your assumptions. Like the rules about never beginning a sentence with a conjunction or ending one with a preposition, this stricture is at once both our golden rule and a pile of crap. I had a terrific professor at CMU, Randy Pausch. Randy had done a lot of work with Disney at its entertainment parks. He taught a class in rapid prototyping and usability-based iteration. Each project would require five rounds of user tests with five users each, and the consequent design iterations. The class was all about learning to trust user testing and nothing but. Data! Data! Put it in front of real people! I would sit at bus stops in Pittsburgh and accost the natives; occasionally I would take my prototype to a bar on a weekend night, figuring that if a drunk could use it in the dark, anyone could. 103 A few rounds of this and the point got through—real people surprise you in ways you never ever could have imagined. And thus: design decisions should be based on observed behavior, not your intuition.
103You should try doing this. It's funny. Use paper prototypes because spilling beer on your laptop is no fun.
We'd often have to bring in the first draft of our design for feedback and status checks. At the beginning of the semester, Randy would look at our designs and say things like “okay, great, I'm looking forward to seeing your usability results” in the kind of voice you'd use to say “wow, good luck with that” when your friend tells you she's going to build a gigantic birdseed sculpture on a free-range chicken farm.
And so as the semester wore on I came to suspect that Randy could foretell every single usability problem that each of us was going to find with our projects. After class one day I stayed to ask him a couple of questions about my project. He said, “yeah, you're probably going to hit usability issues with this, that, and this other thing over here.” Sure enough, each of those problems turned up in my tests. In class we started asking him, casually: what did he think we'd find? He'd spit out a bunch of issues and sure enough, you'd test your design and people would screw up in exactly the ways he had predicted.
You sly bastard, I thought. You know exactly what's going to happen without testing.
Maybe it was because he'd taught the class before and had seen lots of approaches and results to the Alarm Clock Problem. (By the way, Randy, you were right. I finally did miss a flight because of the AM/PM issue. Even better, it was with my own clock, which I've had for about 3 years and should have been an expert with.)
Years later, after watching hundreds of usability studies, reading research, and following log experiments, I feel confident concluding: you do, in fact, learn over time. You do become a more efficient designer because you so deeply integrate humans' reactions to technology that you can start making intuitive judgments about new designs and processes. You won't always be right and you still have to test because you won't know when you're right and not, but you'll find that over time you become more accurate in predicting which designs will best solve particular problems. This means you get it right faster.
Trust your learning and knowledge. Remember that you will still be wrong, but even if you're wrong the same percentage of the time, you can spit out designs without quite so much painful deliberation and get done faster anyway.

Finger puppets, the IDEO tradition, and other UI-designer la-la land techniques

One night I was having dinner with a friend from Yahoo!—he heads up their innovation group and wrote a book on marketing a while back. He was talking about an offsite he went to with his group, which is generally made up of hardcore research scientists. He told me how funny it was to listen to them talk: algorithms, optimization, blah blah blah, and then everyone would laugh and he had no idea what the joke was.
“Someone probably said ‘Don't drink and derive,’” I replied.
“Yeah, no kidding.” He shook his head. “They're just on a different planet. Sheesh. Engineers. No, not even engineers. Scientists.”
“Yes, and you know they talk about you as ‘the marketing dude,’” I said.
“Marketing whore.”
“What?”
“Marketing whore. They call me the marketing whore.”
Apparently every time Oliver discusses market research and tries to use this sort of data to talk to the researchers about possible avenues of investigation, they scoff. Oliver of course realizes what we all realize—market and usability research usually isn't hardcore and statistically valid. But what these scientists are apparently missing is that that's okay. You don't necessarily find the prevalence of problem A versus B, but you do get a sense of how users approach the task and you get some catalog of all the possible problems with the system. You may find out that one or two problems are clearly very severe and have to be fixed no matter what. But aside from that, usability problems are usually prioritized by some combination of severity and ease of fixing (“Well, people kind of got this, but we can probably improve clarity by 100% if we just use a different label, so we might as well fix that—search quality is more important but that'll take three people a month and a half.”).
I suggested that he just try to get them to watch the actual studies—then they will get the raw data themselves and won't feel like it's being mediated. They may make some erroneous conclusions, but at least they'll be making them, and they'll presumably make some good conclusions too in the process—and they'll get used to observing human behavior.
One of the best ways to convince people at your organization to do user research is to psych them up with IDEO's work and processes. As part of its Intro to HCI course, CMU shows a Nightline segment on IDEO's process that was done in the late 90s. It's a wakeup call. Everyone in the room perks up. They've been slogging through contextual inquiries and heuristic analyses for the past month, bickering with teammates about the deep essence of “real world—system match,” and suddenly all of those techniques fall together in the hands of people who are having fun and using literary rather than statistical methods of analyses.
Designers coming from colleges like Stanford and CMU are trained in the basic methodologies that IDEO uses, with some minor differences. Specifically at CMU, students are taught a lot about the value of UI process and practice a wide range of tools and methodologies. As a designer you can yell until you're blue in the face that you shouldn't be starting with mockups instead of first doing interviews, Contextual Inquiry, information architecture, and wireframes. But until your coworkers understand why and how playing with finger puppets helps to get the software done, they will think you are a flake.
IDEO makes fantastic products partially because they spend the time to do design and they understand what design is. (And partially because they're really smart and talented.) In many organizations people are skeptics about the worth of traditional design techniques, and no one believes you when you tell them that field research and finger puppets work. The unconverted tend to see early-stage design work as bureaucratic nonsense, which they will put up with so long as it doesn't slow them down or require their attention. But it does slow everyone down (apparently) because instead of immediately producing HTML, you start drawing cartoons and taking Polaroids of people using their cell phones in grocery stores.
However, in the projects where I've actually gotten to do even slices of this type of work, I get thanks from engineers, PMs, and the rest of the team. The projects go faster and more smoothly. Decision-making gets sliced into relevant chunks and dependencies are reduced.
Spend more time up front (which saves tons of time down the road), do iterative prototyping and testing, start low-fidelity and gradually gain resolution. It's not that you can't get amazing products without it, it's just that you're more likely to consistently get well-designed task-centric products in much less time and with much less argument, stress, and strife. Team dynamics tend to improve so much with good design process that even if these processes had no temporal benefit, they'd still be worthwhile.

The moral of the story

It breaks my heart to see designers, like the housewives of The Feminine Mystique, slowly grow passionless and cynical about their jobs under the weight of heavy management and busybody nonexperts. When I interview people for design jobs I ask them what makes a good designer, or why they became designers. I listen for the spark of belief that design makes the world a better place: that our work is meaningful and that we can make the world a better place through care, attention, and the right decisions. It can be so terribly difficult to do this in an environment where either people don't care about making the world better, or they don't understand that your work makes that happen: that you, the designer, given the latitude to fully exercise all of the skills that you have spent years developing, are a huge contributor to positive social change. And even if no one deserves respect without first earning it, you deserve the chance to earn it.
There is room in industry, even in the smallest enterprise shop, to believe in the value of your work and make that value known to others. One of the most remarkable days of my career thus far was when a PM whom I respect said to me, “Sure, do what needs to be done. I trust you.”
Copyright for this article is held by Ellen Beldner; reprinted here with permission.