Why do people do what they do? In the twentieth-century West, neoclassical economics asserted that rational people want more of most things and will act to achieve this goal. The assumption is embedded in virtually every economics course taught in Western universities and operationalized from there. One implication of this rationality is that people can be motivated by promising them more of something (typically money) in return for more of a desired behavior. Both the bonus culture of Wall Street, so much in the news after 2008, and the debate over competing views of chief executive compensation silently embed this set of assumptions. In the laboratory and in the wild, however, people don't actually do the rational thing all that often. The discovery of reliable behavioral patterns that defy neoclassical rationality aligns closely with a broad range of technology-related phenomena.
In the 1960s, psychologists and economists began to employ theories of human motivation to explain certain decisions and other economic actions. Previously, neoclassical economics had theorized market behavior as the outcome of a) rational actors b) who have identified preferences that can be associated with a value c) acting independently with access to full and relevant information d) to improve their utility as individuals or profits as firms.1 Economic man turns out to be more complex than was previously thought, however, and sometimes he goes missing entirely.
Rational actors are hard to find; all of us employ “irrational” tools to make decisions. Many factors go into our preferences, not all of them conscious and many of them not clearly valued. Information about our options is rarely full and even more rarely perfect. Finally, we act for many reasons, not only to maximize our utility.
First, people internalize “irrational” elements (such as past experience, rules of thumb, and verbal framing mechanisms like the recited order of a list) as they make choices. The myth of “the hot hand” in basketball, for example, was debunked in 1985: Making a shot changes the player's prediction of the next shot's success but not the actual chances of its going in. In short, we frequently see patterns where there are none.2 This influence can be quite subtle: Experimental subjects who are asked the last digits of their social security number then go on to use those random numbers to “anchor” an unrelated operation, such as a bid on a product or a guess on an obscure fact.3 In technical terms, people can reliably be found to hold biases and employ rules of thumb while making choices.
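The hot-hand finding can be illustrated with a simple null model (a sketch; the 50% shooting percentage and sample size are arbitrary assumptions, not figures from the 1985 study): simulate a shooter whose makes are independent coin flips, then compare the make rate after a made shot with the overall make rate.

```python
import random

def simulate_shooter(n_shots=100_000, p_make=0.5, seed=42):
    """Simulate independent shots and compare P(make | previous make)
    with the unconditional make rate. Under independence the two
    should agree, despite any apparent 'streaks'."""
    rng = random.Random(seed)
    shots = [rng.random() < p_make for _ in range(n_shots)]

    overall = sum(shots) / n_shots
    # Shots that immediately follow a made shot
    after_make = [b for a, b in zip(shots, shots[1:]) if a]
    conditional = sum(after_make) / len(after_make)
    return overall, conditional

overall, conditional = simulate_shooter()
print(f"P(make)              = {overall:.3f}")
print(f"P(make | prior make) = {conditional:.3f}")
```

Both figures come out near 0.50: the streaks an observer perceives in such a sequence are exactly what independent chance produces, which is the sense in which we see patterns where there are none.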
Second, the neoclassical assumptions regarding preferences have been tested in many ways. Daniel Kahneman, a coauthor (with the late Amos Tversky) of some seminal papers in the field, won the 2002 Nobel Prize in economics “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.”4 A few years later, the then-University of Chicago legal scholar Cass Sunstein wrote a book with longtime behavioral economist Richard Thaler called Nudge, on “choice architecture”: The structure in which choices are presented to people changes what they choose.5 Sunstein is currently working in the Obama administration. Other mainstream authors include Duke's Dan Ariely, whose Predictably Irrational includes a chapter on the power of “free” or apparently free prices that reliably lead people to make “irrational” choices.6
Third, imperfect information, which had long been acknowledged, was found to come in many forms, with many outcomes. In a classic paper, George Akerlof showed how the used-car market was shaped by buyers' inability to verify quality: Because buyers assume that sellers may be hiding defects, they discount every car, driving good cars out of the market. Information asymmetry can have the effect of crippling entire categories of transactions. It is a case of “all other things being equal” not holding true, and people behaving accordingly.7
Finally, empirical experiments and theoretical constructs found that people value fairness rather than maximizing their utility at the expense of another, visible human being. Here's a deft telling of a landmark experiment from George Mason University's Center for the Study of Neuroeconomics:
Take two people and tell them they have the opportunity to split $10. Furthermore, tell one person that, as first mover, they get to make a one time offer, and tell the other person that, as second mover, they get the opportunity to either accept or reject this offer. If the offer is rejected they both go home with zero. This stylized negotiation was first studied by experimental economists in [a 1982 paper] and economists got a surprise. Game theory predicts an unequal split favoring the person who gets to make the offer. After all if I offer a ($9, $1) split where you only get $1 you should take it since a dollar is better than nothing, but instead a majority of the offers are to split equally.8
As Clay Shirky, currently a professor of media studies at New York University, noted, “[P]eople behave [in the experiment] as if their relationship matters, even when they are told it doesn't.”9 Fairness does not square with the maximization assumptions so common before the turn of the twenty-first century.
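The gap between the game-theoretic prediction and what people actually do can be made concrete with a toy model (a sketch; the fairness threshold and the particular splits are illustrative assumptions, not parameters from the 1982 paper):

```python
def responder_accepts(offer, pot=10, min_fair_share=0.0):
    """A responder accepts any offer worth at least min_fair_share
    of the pot. min_fair_share=0 models the purely 'rational'
    responder; a positive value models a fairness norm."""
    return offer >= min_fair_share * pot

def payoffs(proposer_keeps, pot=10, min_fair_share=0.0):
    """Return (proposer, responder) payoffs for a proposed split."""
    offer = pot - proposer_keeps
    if responder_accepts(offer, pot, min_fair_share):
        return proposer_keeps, offer
    return 0, 0  # rejection: both go home with zero

# Game theory: a rational responder takes $1 over nothing,
# so the greedy ($9, $1) split stands.
print(payoffs(9, min_fair_share=0.0))  # (9, 1)

# With a fairness norm (reject anything under 30% of the pot),
# the same greedy offer leaves both players with zero ...
print(payoffs(9, min_fair_share=0.3))  # (0, 0)

# ... which is why most real proposers offer an equal split.
print(payoffs(5, min_fair_share=0.3))  # (5, 5)
```

The model makes Shirky's point in miniature: once the responder behaves as if the relationship matters, the proposer's best move shifts from the lopsided split toward the equal one.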
Not only in matters of fairness but also in the realm of motivation, people maximize things other than economic utility. Repeated and well-designed studies have found that cognitive tasks have intrinsic satisfactions, so much so that people actually do worse at brainstorming and similar jobs when incentives are raised. Instead, for brain work, extensive research suggests people are rewarded with three intangibles: mastery, autonomy, and purpose.11
What is the place of behavioral economics in the landscape of information technology? First of all, as work can be accomplished outside firms because of the power of self-organization, some scholars have invoked the lessons of behavioral economics to explain the behavior of people who contribute, without monetary compensation, to such formidable efforts as Linux or Wikipedia. In the former case, roughly 8,000 programmer-years were estimated to have been invested as of 2001;12 Wikipedia has accumulated nearly 500 million edits. The lack of monetary pay is an existence proof, not a hypothetical, and both examples operate at a large enough scale that motivation must be explained somehow. Mastery, autonomy, and purpose do that job quite well, it would appear.
Second, the behavior exhibited by participants in large-scale social networks (such as Facebook, but also online games) is showing extensive investment of both play and work time for no apparent (external) reward. Behavioral “locks and keys,” as game developer Jesse Schell put it,13 are at work all around the technology landscape:
Several tendencies appear to be emerging. First, the barrier between real life and play life can get fuzzy. In 2008, two Dutch youths were convicted of stealing virtual goods from an online gamer by beating him up at school and coercing him into transferring the goods. A Chinese gamer was murdered over the sale of an online sword artifact. The Wii bowler uses a real arm motion to hurl a virtual ball toward virtual pins. People's FarmVille opponents are their real-world friends. In addition, people are powerfully motivated by symbols, just as they are elsewhere, whether those artifacts are military service ribbons, flags, or luxury cars. Finally, as always, people work assiduously to game every system, whether of grades or Facebook friend counts or stickK weight-loss programs. Leaderboards matter, whether of financial accumulation, imaginary milestones, or athletic performances, and with competition invariably comes a testing of the limits of the contest.
Given that people can work and connect from anywhere, and that fewer and fewer jobs rely on fixed physical infrastructure, corporations are no longer the assumed unit of organization. As groups grow easier to form, they can accomplish many kinds of work, and a key consideration in pursuit of a goal becomes motivating the right people rather than forming the organization. Money is no longer assumed to be the only way to drive behavior, especially cognitive effort, so new kinds of organizations are challenging the primacy of companies, schools and universities, and traditional non-profits, as we will see in Chapters 9 and 11.
Both the degree of portability and the global scale are new here: Ten years ago, no one could play Scrabble with hundreds of people while sitting on a bus. Now that we can, what comes next? With so many games now resident in the computational cloud, how will people remember or re-create them in the future? How will human relationships, whether intense or trivial, scale in these virtually physical or physically virtual settings?
Finally, how will other systems, currently driven by other incentive programs, be transformed by the permeation of game and other group dynamics? Schell points to education as an obvious target, but corporate human resources, aging, personal fitness, and retirement savings are just as likely. As a result, nearly every field of endeavor could be affected by the clever application of behavioral carrots and sticks via new electronic media. Social engineering, in short, appears to be supplanting—or at any rate joining—technical engineering in the vanguard of innovation.
1. For one summary of the neoclassical position, see www.econlib.org/library/Enc1/NeoclassicalEconomics.html.
2. Thomas Gilovich, Robert Vallone, and Amos Tversky, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Cognitive Psychology 17 (1985): 295–314.
3. Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185 (1974): 1124–1130.
4. Nobel Prize citation at http://nobelprize.org/nobel_prizes/economics/laureates/2002/press.html.
5. Richard Thaler and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2009).
6. Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions (New York: HarperCollins, 2008).
7. George A. Akerlof, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism,” Quarterly Journal of Economics 84, no. 3 (1970): 488–500.
8. The original paper is Werner Güth, Rolf Schmittberger, and Bernd Schwarze, “An Experimental Analysis of Ultimatum Bargaining,” Journal of Economic Behavior and Organization 3, no. 4 (December 1982): 367–388; the summary is at http://neuroeconomics.typepad.com/neuroeconomics/2003/09/what_is_the_ult.html.
9. Clay Shirky, Cognitive Surplus: Creativity and Generosity in a Connected Age (New York: Penguin, 2010), p. 107.
10. Paul Starr, The Social Transformation of American Medicine: The Rise of a Sovereign Profession and the Making of a Vast Industry (New York: Basic Books, 1984).
11. Dan Pink, Drive: The Surprising Truth about What Motivates Us (New York: Riverhead, 2009).
12. David A. Wheeler, “Counting Source Lines of Code,” www.dwheeler.com/sloc/.
13. Jesse Schell, DICE conference presentation, http://g4tv.com/videos/44277/dice-2010-design-outside-the-box-presentation/.
14. Lance Whitney, “MIT Floats Ideas in DARPA Balloon Challenge,” C|Net, December 8, 2009, http://news.cnet.com/8301-1023_3-10411211-93.html. See also John C. Tang, Manuel Cebrian, Nicklaus A. Giacobe, Hyun-Woo Kim, Taemie Kim, and Douglas “Beaker” Wickert, “Reflecting on the DARPA Red Balloon Challenge,” Communications of the ACM 54, no. 4 (April 2011): 78–85.
15. Dan Ariely, “Gamed: How Online Companies Get You to Share More and Spend More,” Wired (July 2011), www.wired.com/magazine/2011/06/ff_gamed/.
16. Jeffrey T. Hancock, Catalina Toma, and Nicole Ellison, “The Truth about Lying in Online Dating Profiles,” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York: ACM, 2007), pp. 449–452.
17. Kevin Kelly, “Triumph of the Default,” The Technium, www.kk.org/thetechnium/archives/2009/06/triumph_of_the.php.
18. Jim Bumgardner, “Mayor of the North Pole,” blog post and comments at www.krazydad.com/blog/2010/02/mayor-of-the-north-pole/.