References

Chapter 1

[1] IDEO. Human-centered design toolkit: an open-source toolkit to inspire new solutions in the developing world. 2nd ed. IDEO; 2011. Retrieved on July 24, 2014, from: http://www.ideo.com/images/uploads/hcd_toolkit/IDEO_HCD_ToolKit.pdf.

[2] Koskinen I, Zimmerman J, Binder T, Redstrom J. Design research through practice: from the lab, field, and showroom. Waltham, MA: Morgan Kaufmann Publishers; 2011.

[3] Owen C. Design thinking: notes on its nature and use. Des Res Q. 2007;2(1):16–27.

[4] Brown T. Design thinking. Harvard Business Review; 2008. Retrieved on August 11, 2014, from: http://hbr.org/2008/06/design-thinking/.

[5] Schaffer RH. Four mistakes leaders keep making. Liberty Mutual Insurance; 2010. Retrieved on August 11, 2014, from: http://responsibility-project.libertymutual.com/articles/four-mistakes-leaders-keep-making.

[6] Ibid.

[7] Reeves M, Deimler M. Adaptability: the new competitive advantage. Harvard Business Review; 2011. Retrieved on March 16, 2015, from: https://hbr.org/2011/07/adaptability-the-new-competitive-advantage.

[8] Garvin DA, Roberto MA. What you don’t know about making decisions. Harvard Business Review; 2001. Retrieved on July 9, 2014, from: https://hbr.org/2001/09/what-you-dont-know-about-making-decisions.

[9] Sanders EN, Stappers PJ. Convivial toolbox: generative research for the front end of design. Amsterdam, The Netherlands: BIS Publishers; 2013.

[10] Dunne A. Hertzian tales: electronic products, aesthetic experience, and critical design. Cambridge, MA: The MIT Press; 2005.

[11] Konno M, Fong B. Agile UX research practice in Android. San Francisco, CA: Google I/O; 2013.

[12] Nielsen J. Interviewing users. NN/g; 2010. Retrieved on July 21, 2014, from: http://www.nngroup.com/articles/interviewing-users/.

[13] Nielsen J. First rule of usability? Don’t listen to users. NN/g; 2001. Retrieved on April 21, 2014, from: http://www.nngroup.com/articles/first-rule-of-usability-dont-listen-to-users/.

[14] Brooks FP. The mythical man-month: essays on software engineering. Anniversary ed. Boston, MA: Addison-Wesley Longman Publishing Co, Inc; 1995: p. 116.

[15] McGrath RG. Transient advantage. Harvard Business Review; 2013. Retrieved on August 13, 2014, from: http://hbr.org/2013/06/transient-advantage/ar/1.

[16] Gothelf J, Seiden J. Lean UX: applying Lean principles to improve UX. Sebastopol, CA: O’Reilly Media, Inc; 2013.

[17] Porter ME. What is strategy? Harvard Business Review. 1996;74(6):61–78.

[18] Reeves and Deimler, op. cit.

[19] Edwards B. Drawing on the right side of the brain: a course in enhancing creativity and artistic confidence. New York: Penguin Group, Inc; 2012.

Chapter 2

[1] Rucker S. How good designers think. Harvard Business Review; 2011. Retrieved on August 11, 2014, from: http://blogs.hbr.org/2011/04/how-good-designers-think/.

[2] Owen C. Design thinking: notes on its nature and use. Des Res Q. 2007;2(1):16–27.

[3] Kumar V. A process for practicing design innovation. J Business Strategy. 2009;30(2/3):91–100.

[4] Kumar V. 101 design methods: a structured approach for driving innovation in your organization. Hoboken, NJ: John Wiley and Sons, Inc; 2012.

[5] Sato S. Using design thinking to measure design’s impact: a systemic approach. CHIFOO; 2014. Retrieved on December 12, 2014, from: http://www.chifoo.org/filestorage/CHIFOO_DTMetricsFinal.pdf.

[6] Buxton B. Sketching user experiences: getting the design right and the right design. San Francisco, CA: Morgan Kaufmann Publishers; 2007.

[7] Design Council. Eleven lessons: managing design in eleven global brands. Designcouncil; 2007. Retrieved on March 7, 2015, from: http://www.designcouncil.org.uk/knowledge-resources/report/11-lessons-managing-design-global-brands.

Chapter 3

[1] Andreessen M. Why software is eating the world. The Wall Street Journal; 2011. Retrieved on March 12, 2015, from: http://www.wsj.com/articles/SB10001424053111903480904576512250915629460.

[2] McGrath RG. Transient advantage. Harvard Business Review; 2013. Retrieved on August 13, 2014, from: http://hbr.org/2013/06/transient-advantage/ar/1.

[3] Snowden DJ, Boone ME. A leader’s framework for decision making. Harvard Business Review; 2007. Retrieved on March 6, 2015, from: https://hbr.org/2007/11/a-leaders-framework-for-decision-making/.

[4] Disruptive innovation. In: Wikipedia, the free encyclopedia; 2015. Retrieved on March 8, 2015, from: http://en.wikipedia.org/w/index.php?title=Disruptive_innovation&oldid=650169737.

[5] Christensen CM. The innovator’s dilemma: when new technologies cause great firms to fail. Boston, MA: Harvard Business Review Press; 1997.

[6] Chesbrough HW. Open innovation: the new imperative for creating and profiting from technology. Boston, MA: Harvard Business School Press; 2003.

[7] Ulwick A. What customers want: using outcome-driven innovation to create breakthrough products and services. New York: McGraw-Hill; 2005.

[8] Francis S. Phil Gilbert on IBM’s design thinking. BP-3; 2013. Retrieved on August 14, 2014, from: http://www.bp-3.com/blogs/2013/05/phil-gilbert-on-ibms-design-thinking/.

[9] Weinzimmer LG, McConoughey J. The wisdom of failure. San Francisco, CA: Jossey-Bass; 2013.

[10] Schrage M. Serious play: how the world’s best companies simulate to innovate. Boston, MA: Harvard Business School Press; 2000.

[11] Frenken K. Innovation, evolution and complexity theory. Cheltenham, UK: Edward Elgar Publishing, Inc; 2006: p. 11.

[12] Randle W. Mutation driven innovation. TEDx talks; 2015. Retrieved on March 8, 2015, from: http://tedxtalks.ted.com/video/Mutation-Driven-Innovation-%7C-Dr.

[13] The key was to change the working relationship by dramatically slashing the time spent on requirements analysis. Rather than gathering requirements from dozens of users, and then prioritizing, circulating, and seeking approval before initiating prototype development, the team identified the top 20 or 30 requirements as quickly as possible and stopped. The goal was to present the client with a quick-and-dirty prototype within a fortnight. Why? Because it’s far easier for clients to articulate what they want by playing with prototypes than by enumerating requirements. People don’t order ingredients from a menu; they order a meal. The quick-and-dirty prototype is a medium of codevelopment with the client. Quick-and-dirty prototypes can turn clients into partners, enabling developers to manage expectations and deal with changing requirements more responsively. See Schrage, op. cit. p. 19.

Chapter 4

[1] PrD is very similar to IDEO’s notion of “sacrificial concepts.” See IDEO. Human-centered design toolkit: an open-source toolkit to inspire new solutions in the developing world. 2nd ed. IDEO; 2011. Retrieved on July 24, 2014, from: http://www.ideo.com/images/uploads/hcd_toolkit/IDEO_HCD_ToolKit.pdf. Their process is to frame an abstract question as a hypothetical scenario with two options the interviewee must choose between. PrD is similar in that the scenario doesn’t need to be feasible; its only purpose is to gain understanding. As IDEO states, “A good sacrificial concept sparks a conversation, prompts a participant to be more specific in their stories, and helps check and challenge your assumptions” (p. 60). The difference here is the sacrificial concept itself is our design idea and the physical artifact built to represent it. These are treated as provocations to stimulate stakeholder dialogue and participatory design.

[2] Spool calls this “hunkering,” which describes the process of throwing artifacts out into the world to see how they work when they’re no longer just our imagination. He notes that although it’s important to throw design ideas away, we might not want to literally throw our physical artifacts out. They can become very useful in communicating the team’s thinking process and journey. See Spool J. Design’s fully-baked deliverables and half-baked artifacts. User Interface Engineering; 2014. Retrieved on July 31, 2014, from: http://www.uie.com/articles/artifacts_and_deliverables/.

[3] Beyer H. User-centered agile methods. San Rafael, CA: Morgan & Claypool; 2010.

[4] Brooks FP. The mythical man-month: essays on software engineering. Anniversary ed. Boston, MA: Addison-Wesley Longman Publishing Co, Inc; 1995.

[5] We use the term “artifacts” instead of “prototypes.” Traditional user-centered design focuses on capturing user needs, which are then translated into requirements to guide later design generation. Typically, by the time the team is ready to even start user testing, prototypes already need to be conceptually “usable” in terms of a series of prespecified tasks. Thus, low-fidelity prototypes tend to only be “lo-fi” in terms of representation – not conceptualization. PrD, like usability, is task-based, but at the beginning of exploration the artifacts used in the Engagement Sessions have often not been designed to support specific tasks. They are often crude and confusing, which is one reason we prefer to not call them “prototypes.” Another is that “prototype” sounds less exploratory; it sounds like a representation of the final product and not one of many possible ideas.

[6] Our thinking here is very much along the lines of Spool’s. He has argued that what he calls “artifacts” and “deliverables” most meaningfully differ not in terms of fidelity but in terms of time, depending on whether they are crafted before or after the decision-made point, the point when the final design direction is decided upon. Before this time, a wireframe is just a proposition that helps us understand the problem, i.e., it’s an “artifact.” After this time, after the decision-made point, the same-fidelity wireframe is no longer a proposition but a representation of the intended final design. It’s no longer an artifact but a “deliverable.” See Spool, op. cit.

[7] Sitkin SB. Learning through failure: the strategy of small losses. In: Cohen MD, Sproull LS, eds. Organizational learning. Thousand Oaks, CA: SAGE Publications; 1996:541–578.

[8] Schulz K. On being wrong: adventures in the margin of error. New York: HarperCollins Publishers; 2010: p. 27.

[9] Edmondson AC. Strategies for learning from failure. Harvard Business Review; 2011. Retrieved on November 25, 2014, from: https://hbr.org/2011/04/strategies-for-learning-from-failure.

[10] Snowden DJ, Boone ME. A leader’s framework for decision making. Harvard Business Review; 2007. Retrieved on March 6, 2015, from: https://hbr.org/2007/11/a-leaders-framework-for-decision-making/.

[11] Ibid. The authors discuss what they call the “Cynefin framework,” which is a way of categorizing and differentiating between such decision environments.

[12] McGrath R. Failing by design. Harvard Business Review; 2011. Retrieved on November 25, 2014, from: https://hbr.org/2011/04/failing-by-design.

[13] Sitkin, op. cit.

[14] McGrath R. Are you squandering your intelligent failures? Harvard Business Review; 2010. Retrieved on November 25, 2014, from: https://hbr.org/2010/03/are-you-squandering-your-intel/.

[15] Bazerman MH, Watkins MD. Predictable surprises: the disasters you should have seen coming and how to prevent them. Boston, MA: Harvard Business School Press; 2004.

[16] Kahneman D, Tversky A. Choices, values, and frames. Am Psychol. 1984;39(4):341–350.

[17] Wason PC. On the failure to eliminate hypotheses in a conceptual task. Q J Exp Psychol. 1960;12(3):129–140.

Chapter 5

[1] This is not as strange or foreign a concept as it sounds at first. In fact, it is a common process in fields as diverse as design, mathematics, and the sciences. It’s possible that Carroll, being a preeminent mathematician, was referencing a common practice in solving problems. Consider the process required to solve calculus integration problems: The first step in solving for the original function in an integral is to “guess.” That is, you assume you have a solution and you work backwards, or cut it into pieces afterwards.

[2] Satell G. Before you innovate, ask the right questions. Harvard Business Review; 2013. Retrieved on December 8, 2014, from: https://hbr.org/2013/02/before-you-innovate-ask-the-ri.

[3] Laseau P. Graphic thinking for architects and designers. New York: Van Nostrand Reinhold Company; 1980.

[4] Norman D. Human-centered design considered harmful. Interactions. 2005;12(4):14–19.

[5] Buxton B. Sketching user experiences: getting the design right and the right design. San Francisco, CA: Morgan Kaufmann Publishers; 2007.

[6] Gothelf J, Seiden J. Lean UX: applying Lean principles to improve UX. Sebastopol, CA: O’Reilly Media, Inc; 2013.

[7] Bilalić M, McLeod P. Why your first idea can blind you to a better one. Sci Am. 2014;310(3). Retrieved on February 25, 2015, from: http://www.scientificamerican.com/article/why-your-first-idea-can-blind-you-to-better-idea/.

Chapter 6

[1] Mayo-Smith J. Two ways to build a pyramid. Information Week; 2001. Retrieved on March 7, 2015, from: http://www.informationweek.com/two-ways-to-build-a-pyramid/d/d-id/1012280.

[2] Reinertsen D. Disagree and commit: the risk of conflict to teams. Electronic Design; 2000. Retrieved on March 14, 2015, from: http://electronicdesign.com/energy/disagree-and-commit-risk-conflict-teams.

[3] Dray S. Questioning assumptions: UX research that really matters. Interactions. 2014;21(2):82–85. Retrieved on March 18, 2015, from: http://doi.acm.org/10.1145/2568485.

[4] Roberts P. FDA: software failures responsible for 24% of all medical device recalls. Threat Post; 2012. Retrieved on March 14, 2015, from: https://threatpost.com/fda-software-failures-responsible-24-all-medical-device-recalls-062012/76720.

Chapter 7

[1] Beyer H. Getting started with UX inside agile development. In: UX Immersion Conference, Portland, OR, 2012.

[2] Weinzimmer LG, McConoughey J. The wisdom of failure. San Francisco, CA: Jossey-Bass; 2013.

[3] Mayhew D. The usability engineering lifecycle: a practitioner’s handbook for user interface design. San Francisco, CA: Morgan Kaufmann Publishers; 1999.

[4] Buxton B. Sketching user experiences: getting the design right and the right design. San Francisco, CA: Morgan Kaufmann Publishers; 2007.

[5] GE UX Center of Excellence. The business value of UX. geuxcentral; 2012. Retrieved on October 25, 2013, from: http://files.geuxcentral.com/wp-content/uploads/Business_Value_of_UX.pdf.

[6] Nielsen J. Why you only need to test with 5 users. NN/g; 2000. Retrieved on March 14, 2015, from: http://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/.

Chapter 8

[1] For an excellent analysis of this common meme, see Clarke R. Information wants to be free … Xamax Consultancy Pty Ltd; 2012. Retrieved on March 15, 2015, from: http://www.rogerclarke.com/II/IWtbF.html. Although Stewart Brand was the first to say the words, the notion stretches back to Thomas Paine and Thomas Jefferson.

[2] Karat C. Cost–benefit analysis of usability engineering techniques. In: Proceedings of the human factors and ergonomics society. Orlando, Florida, 1990. p. 839–43; Karat C. Cost-justifying usability engineering in the software life cycle. In: Helander M, Landauer T, Prabhu P, editors. Handbook of human–computer interaction. Amsterdam: Elsevier Science; 1997; Pressman RS. Software engineering: a practitioner’s approach. New York: McGraw-Hill; 1992.

[3] Spool J. Design’s fully-baked deliverables and half-baked artifacts. User Interface Engineering; 2014. Retrieved on July 31, 2014, from: http://www.uie.com/articles/artifacts_and_deliverables/.

[4] Buxton B. Sketching user experiences: getting the design right and the right design. San Francisco, CA: Morgan Kaufmann Publishers; 2007.

[5] Boer L, Donovan J. Provotypes for participatory innovation. In: Proceedings of the designing interactive systems conference. New York: ACM; 2012. p. 388–97.

[6] Nielsen J. First rule of usability? Don’t listen to users. NN/g; 2001. Retrieved on April 21, 2014, from: http://www.nngroup.com/articles/first-rule-of-usability-dont-listen-to-users/.

[7] Medlock MC, Wixon D, Terrano M, Romero R, Fulton B. Using the RITE method to improve products: a definition and a case study. Orlando, FL: Usability Professionals Association; 2002. Retrieved on March 17, 2015, from: http://www.microsoft.com/en-us/download/details.aspx?id=20940.

[8] Nielsen J. Interviewing users. NN/g; 2010. Retrieved on March 14, 2015, from: http://www.nngroup.com/articles/interviewing-users/.

[9] Buxton, op. cit.

Chapter 9

[1] Carey H, Howard SG. Tangible steps toward tomorrow: designing a vision for early childhood education. Ethnography praxis in industry conference proceedings. Chicago, IL: AAA; 2009:268–283.

[2] Clarke AC. Profiles of the future. London: Pan Books; 1973.

Chapter 10

[1] Asch SE. Effects of group pressure upon the modification and distortion of judgment. In: Guetzkow H, ed. Groups, leadership and men. Pittsburgh, PA: Carnegie Press; 1951:177–190.

[2] Asch SE. Opinions and social pressure. Sci Am. 1955;193(5):31–35.

[3] Schulz K. On being wrong: adventures in the margin of error. New York: HarperCollins Publishers; 2010: p. 149.

[4] Owen C. Design thinking: notes on its nature and use. Des Res Q. 2007;2(1):16–27.

Chapter 11

[1] Wason PC. On the failure to eliminate hypotheses in a conceptual task. Q J Exp Psychol 1960;12(3):129–40. Wason argued that people do not naturally seek to falsify their beliefs. Klayman and Ha took issue with Wason’s argument, noting that a positive test can still produce disconfirming results. For example, if a greater proportion of students are predicted to succeed in a graduate program than really do, selecting students predicted to succeed (and thereby conducting a positive test) will likely produce disconfirming results. This led them to argue what Wason called a “confirmation bias” was really more of a “positive test strategy.” Their point is it is the result of a test and not the type of test itself that determines whether it’s confirming or disconfirming. See Klayman J, Ha YW. Confirmation, disconfirmation, and information in hypothesis testing. Psychol Rev 1987;94(2):211–28. For our purposes, though, as Koslowski points out, the positive test strategy is still a bias toward confirmation in that people are still seeking data expected to be congruent with their hypotheses. See Koslowski B. Theory and evidence: the development of scientific reasoning. Cambridge, MA: MIT Press; 1996.
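Klayman and Ha’s admissions example can be sketched numerically. The following is a minimal simulation under assumed rates chosen purely for illustration (success predicted for 80% of applicants, with only a 50% actual success rate among those predicted to succeed): every test run is a “positive” one, yet roughly half the results disconfirm the prediction.

```python
import random


def positive_test_demo(n=10_000, predicted=0.80, true_success=0.50, seed=0):
    """Run only 'positive tests' (observe the students predicted to
    succeed) and return the fraction of outcomes that disconfirm the
    optimistic prediction."""
    rng = random.Random(seed)
    # Filter first (was success predicted?), then draw the actual outcome.
    observed = [rng.random() < true_success
                for _ in range(n) if rng.random() < predicted]
    disconfirming = observed.count(False)
    return disconfirming / len(observed)


# Every test is "positive," yet about half the results are disconfirming.
print(f"{positive_test_demo():.0%} of positive tests disconfirmed")
```

The design choice here mirrors Klayman and Ha’s point: the disconfirmation rate is driven entirely by the gap between the predicted and actual success rates, not by whether the tests were positive or negative.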

[2] Lord CG, Ross L, Lepper MR. Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. J Pers Soc Psychol. 1979;37:2098–2109.

[3] Lord CG, Lepper MR, Preston E. Considering the opposite: a corrective strategy for social judgment. J Pers Soc Psychol. 1984;47(6):1231–1243.

[4] See Brehmer B. In one word: not from experience. Acta Psychol 1980;45:223–41. Brehmer compares real-world learning with classroom learning, pointing out that in the latter, there is always a gold standard; the teacher provides a “right” answer. In the real world, however, there often is no such gold standard and, in some complex environments, when we think we’re learning from experience, we may be fooling ourselves.

[5] Einhorn H, Hogarth R. Confidence in judgment: persistence of the illusion of validity. Psychol Rev. 1978;85:395–416.

[6] Ibid. Following Einhorn and Hogarth, we can represent each variable with a symbol, allowing a closer look at how these variables interact with each other: x, our judgment (here our decision to go with a specific design solution); xc, our criterion for deciding on x (here that it seems to fit our up-front research); y, our assessment of the outcome of x (here that it was a good design decision); yc, our criterion used for making our assessment (y) of x (here that there were no showstoppers); Φ, our selection ratio (here how many design solutions we actually try out and assess); br, our base rate (here how many of our solutions are deemed “good”); r, our correlation (here the relationship between our design judgments and their assessments). If, after a series of projects, we’re reinforced to think x and y are highly related, and that we’re really good at predicting what design ideas will be successful, then our experiences will compel us to defend our design intuitions and ideas since, usually, y ≥ yc. What this ignores is that y and yc also apply to many of our ideas where x < xc – ideas we wouldn’t have even considered. This means many of the ideas we didn’t consider or build and assess would have also produced no showstoppers (or satisfied whatever metric we use to assess the outcome). Our gut feel of the correlation between x and y may therefore be misleading. The probability of an error is the probability of a false-negative plus the probability of a false-positive, which is equal to P(y < yc | x ≥ xc) + P(y ≥ yc | x < xc). The whole second half of the equation here is missing, so how are we supposed to know how accurate we really are? Say we do some user research and then go with our first, best-guess solution and build it. That’s a really low selection ratio. When Φ < br, we’ll seem to be right more often than not even when the relationship between x and y is low. The base rate (br) is the proportion of design ideas that perform greater than or equal to yc. Whenever this is greater than the proportion of possible solutions actually tried, the positive hit rate can be high even when x ≥ xc doesn’t do a good job of differentiating between good and bad design solutions. In English, whenever the selection ratio of design solutions tried out and assessed is less than the base rate of how many solutions would have been good, we’ll likely be convinced of the power of our design intuition even if it’s not that great. And if yc (the criterion we’re using to assess the outcome of our design decisions) is just that there weren’t any showstoppers, the resulting base rate is likely quite high.
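The selection-ratio point can be made concrete with a toy simulation. The numbers are assumptions for illustration only: a base rate of 70% “good” ideas (no showstoppers) and a selection ratio of 5%. Even when judgment x carries zero information about outcome y (r = 0), the positive hit rate simply echoes the base rate, which looks like predictive skill.

```python
import random


def positive_hit_rate(n_ideas=10_000, selection_ratio=0.05,
                      base_rate=0.70, seed=1):
    """Judgment x is uncorrelated with outcome y (r = 0), so the ideas
    we 'try' are effectively a random draw. Because 'no showstoppers'
    holds for base_rate of ALL ideas, tried or not, the observed hit
    rate among tried ideas just equals the base rate."""
    rng = random.Random(seed)
    tried = round(n_ideas * selection_ratio)   # selection ratio < base rate
    outcomes = [rng.random() < base_rate for _ in range(tried)]
    return outcomes.count(True) / tried


# With zero predictive skill, we're still "right" about 70% of the time.
print(f"Hit rate with zero predictive skill: {positive_hit_rate():.0%}")
```

Raising the base rate (an easier yc) or lowering the selection ratio only strengthens the illusion: the hit rate tracks br no matter how weak the x–y relationship actually is.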

Chapter 12

[1] Carlos T. Reasons why projects fail. Project Smart; 2015. Retrieved on February 26, 2015, from: http://www.projectsmart.co.uk/reasons-why-projects-fail.php.

[2] The “SMART” acronym was introduced by Doran GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manage Rev 1981;70(11):35–6. Over the years, however, different authors have said the letters in the acronym stand for different things. Our presentation is more in line with Yemm G. Leading your team: how to set goals, measure performance and reward talent. New York: Pearson Education; 2012.

[3] Dozens of books are available for ideas on creating artifacts. We particularly like the following: Sketching User Experiences, the Workbook, by Bill Buxton; Paper Prototyping, by Carolyn Snyder; The Convivial Toolbox, by Liz Sanders and Pieter Jan Stappers. These three provide substantial techniques on creating interesting provocations.

Chapter 14

[1] Portigal S. Interviewing users: how to uncover compelling insights. Brooklyn, NY: Rosenfeld Media; 2013.

[2] Fischhoff B. Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty. J Exp Psychol Hum Percept Perform. 1975;104:288–299.

[3] Snyder C. Paper prototyping: the fast and easy way to design and refine user interfaces. San Francisco, CA: Morgan Kaufmann Publishers; 2003.

[4] Peyrichoux I. When observing users is not enough: 10 guidelines for getting more out of users’ verbal comments. UXmatters; 2007. Retrieved on July 21, 2014, from: http://www.uxmatters.com/mt/archives/2007/04/when-observing-users-is-not-enough-10-guidelines-for-getting-more-out-of-users-verbal-comments.php.

[5] Ibid.

[6] Ibid.

[7] Rubin HJ, Rubin IS. Qualitative interviewing: the art of hearing data. 3rd ed. Thousand Oaks, CA: SAGE Publications, Inc; 2012.

[8] Seidman I. Interviewing as qualitative research. 4th ed. New York: Teachers College Press; 2013.

[9] Ibid.

[10] Schütz A. The phenomenology of the social world [Walsh G, Lenhert F, Trans.]. Chicago, IL: Northwestern University Press; 1967.

[11] Seidman, op. cit.

[12] Hyman HH, Cobb WJ, Feldman JJ, Hart CW, Stember CH. Interviewing in social research. Chicago, IL: University of Chicago Press; 1954.

[13] Portigal, op. cit.

[14] Young I. Mental models: aligning design strategy with human behavior. Brooklyn, NY: Rosenfeld Media, LLC; 2008.

[15] Wilson NL. Substances without substrata. Rev Metaphys. 1959;12(4):521–539.

[16] Alreck PL, Settle RB. The survey research handbook. Homewood, IL: Richard D. Irwin, Inc; 1985.

[17] Ibid.

[18] Heyn ET. Berlin’s wonder horse: he can do almost everything but talk – how he was taught. The New York Times, September 4; 1904.

[19] Snyder, op. cit.

[20] Peyrichoux, op. cit.

[21] Portigal, op. cit.

[22] Stubbs D. Usability and product development tutorial. Usability Architects, Inc; 1991 [unpublished manuscript].

[23] Spool J. Three questions you shouldn’t ask during user research. User Interface Engineering; 2010. Retrieved on June 10, 2013, from: http://www.uie.com/articles/three_questions_not_to_ask/.

[24] Young, op. cit.

[25] Portigal, op. cit.

[26] Ohno T. Toyota production system: beyond large-scale production. Portland, OR: Productivity, Inc; 1988.

[27] Hawley M. Laddering: a research interview technique for uncovering core values. UXmatters; 2009. Retrieved on December 30, 2014, from: http://www.uxmatters.com/mt/archives/2009/07/laddering-a-research-interview-technique-for-uncovering-core-values.php.

[28] Ibid.

[29] Simon N. The participatory museum. Santa Cruz, CA: Museum 2.0; 2010.

[30] Engeström J. Why some social network services work and others don’t – or: the case for object-centered sociality. Zengestrom; 2005. Retrieved on August 1, 2014, from: http://www.zengestrom.com/blog/2005/04/why-some-social-network-services-work-and-others-dont-or-the-case-for-object-centered-sociality.html.

[31] Simon, op. cit.

[32] Stubbs, op. cit.

[33] Grammarist. Subjunctive mood. Grammarist; 2009–14. Retrieved November 10, 2014, from: http://grammarist.com/grammar/subjunctive-mood/.

[34] Sanders EN, Stappers PJ. Convivial toolbox: generative research for the front end of design. Amsterdam, The Netherlands: BIS Publishers; 2013.

Chapter 15

[1] Reichheld FF. The one number you need to grow. Harvard Business Review; 2003. Retrieved on March 19, 2015, from: https://hbr.org/2003/12/the-one-number-you-need-to-grow/ar/1.

Chapter 16

[1] Nielsen has made famous the claim that running five users in a usability test uncovers 85% of the usability issues present. Molich, who cocreated heuristic evaluation with Nielsen, has conducted a series of studies where independent teams of usability experts evaluate the same interface. He has consistently found that the results of different usability evaluations typically have little overlap with each other, suggesting that the five-user assumption is misleading. Running five users does not uncover the majority of issues a solution has, although it might uncover the majority of issues a single moderator will find using a specific inspection method, task set, and so on. It should be further pointed out, however, that identifying the majority of usability issues a system has is not even a useful goal. What we should focus on is quickly identifying enough issues to drive a fruitful iteration. See Nielsen J. Why you only need to test with 5 users. Alertbox; 2000. Retrieved on November 15, 2011, from: http://www.useit.com/alertbox/20000319.html; Molich R. Usability testing myths. Net Magazine; 2013. Retrieved on April 10, 2013, from: http://www.netmagazine.com/features/usability-testing-myths#comment-11227; Molich R, Dumas JS. Comparative usability evaluation (CUE-4). Behav Inf Technol 2008;27:263–81; Molich R, Ede MR, Kaasgaard K, Karyukin B. Comparative usability evaluation. Behav Inf Technol 2004;23(1):65–74; Molich R, Jeffries R, Dumas JS. Making usability recommendations useful and usable. J Usability Stud 2007;2(4):162–79.
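Nielsen’s 85% figure comes from a simple cumulative-discovery model: the proportion of issues found by n users is 1 − (1 − L)^n, where L is the probability that a single user surfaces a given issue (Nielsen and Landauer estimated L ≈ 0.31 from their own studies). The arithmetic is easy to check; it is the model’s scope, not the arithmetic, that Molich’s comparative studies call into question.

```python
def proportion_found(n_users, detection=0.31):
    """Nielsen/Landauer cumulative-discovery model: each user
    independently surfaces a given issue with probability `detection`,
    so n users miss it with probability (1 - detection) ** n."""
    return 1 - (1 - detection) ** n_users


# At L = 0.31, five users land at roughly 84-85% of what this one
# moderator/method/task set can find.
for n in (1, 3, 5, 15):
    print(f"{n:>2} users: {proportion_found(n):.0%}")
```

Note that the 85% is relative to the pool of issues detectable by that specific study setup; as the note argues, it says nothing about the (much larger) pool of issues the system has overall.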

[2] Creswell JW. Qualitative inquiry & research design: choosing among five approaches. 2nd ed. Thousand Oaks, CA: SAGE Publications, Inc; 2007.

[3] Wengraf T. Qualitative research interviewing. London: SAGE Publications Ltd; 2001.

[4] Patton MQ. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: SAGE Publications; 1990.

[5] If we don’t know enough about the variables of interest for maximum variation sampling, random sampling may be a worthwhile option. With samples too small for valid statistical inferences, many consider random selection a waste of time. The aim here, however, would be to use random selection (i.e., each stakeholder on the list has an equal chance of being selected) of our low-N sample to increase the odds of theoretical diversity, in light of our not being able to plan for purposive maximum variation in identified variables of interest. With PrD, however, our sample size will be so small that we shouldn’t use this approach unless we really have no hypotheses to go off of. See Wengraf, op. cit.

[6] Holtzblatt K, Wendell JB, Wood S. Rapid contextual design: a how-to guide to key techniques for user-centered design. San Francisco, CA: Morgan Kaufmann Publishers; 2005.

[7] Kelley, on his personal website, shares he originally intended “OZ” to be an acronym standing for “offline zero,” which described the fact that the “wizard” must interpret the user’s inputs in real time. See Kelley JF. Where did the usability term Wizard of Oz come from? Musicman; 1980. Retrieved on March 12, 2015, from: http://www.musicman.net/oz.html.

[8] Gaver WW, Boucher A, Pennington S, Walker B. Cultural probes and the value of uncertainty. Interactions. 2004;11(5):53–6.

[9] United States Department of Defense Education Activity. Hot wash: clean up and cool down after an exercise. DoDEA, vol. XI, issue 7; 2011. Retrieved on March 16, 2015, from: http://www.dodea.edu/Offices/Safety/upload/11_7.pdf.

Appendix A

[1] Mathew AP, MacTavish T, Donovan J, Boer L. Materialities influencing the design process. In: DIS’10, Proceedings of the 8th ACM conference on designing interactive systems, 2010. p. 444–5.

[2] Chang C. Before I die. Candychang; 2011. Retrieved on March 14, 2015, from: http://candychang.com/before-i-die-in-nola/.

[3] An adjunct mini-conference to the CHI2004 conference. Retrieved on March 24, 2015, from: http://www.chi2004icsidforum.org/.

[4] Frishberg L. Presumptive design, or cutting the looking-glass cake. Interactions. 2006;13(1):18–20.

[5] Names have been changed to maintain confidentiality.

Appendix B

[1] Schulz K. On being wrong: adventures in the margin of error. New York: HarperCollins Publishers; 2010: p. 291.

[2] Ibid., p. 326.

[3] Sanders EN, Stappers PJ. Convivial toolbox: generative research for the front end of design. Amsterdam, The Netherlands: BIS Publishers; 2013.

[4] Ibid.

[5] Wallas G. The art of thought. New York: Harcourt, Brace and Company; 1926.

[6] Tuckman BW. Developmental sequence in small groups. Psychol Bull. 1965;63(6):384–399.

[7] Mycoted. Brainwriting. Mycoted; 2010. Retrieved on October 9, 2013, from: http://www.mycoted.com/Brainwriting.

[8] Greenberg S, Carpendale S, Marquardt N, Buxton W. Sketching user experiences: the workbook. Waltham, MA: Elsevier, Inc; 2011.

[9] Janis IL. Victims of groupthink: a psychological study of foreign-policy decisions and fiascoes. Boston, MA: Houghton Mifflin Company; 1972.
