CHAPTER 1

Data, Choice, and FOMO

As I prepare to begin writing this book, I’m finding it difficult to think for myself. There’s just so much information. I’m staring at a few stacks of books, seven piles of printed articles (each between two and three feet tall), three piles of magazines (sorted into business, technology, and general news publications), and two stacks of newspapers that I promise myself, almost daily, I’ll skim. Every week, the piles seem to grow, despite my best efforts to make them shrink. The reality is that I’m drowning in information.

Who Can Keep Up?

Scientific and technical advances have overwhelmed us with information over the past two centuries—and the pace of data generation keeps accelerating. The growing pool of knowledge demands constant diligence and an unimaginable amount of regular effort merely to keep up. Stop for a moment and think about the number of books that exist today. The most recent estimates suggest that around 135 million books have been published.1 And it’s not just the number of books that’s overwhelming. The number of scholarly articles published since 1665, when the Royal Society first began publishing its Philosophical Transactions, now exceeds fifty million and rises daily.2 It’s ludicrous to think anyone might digest even a large fraction of this knowledge, let alone all of it.

There’s simply too much information to process, and that fact is both distressing and depressing. The anxiety generated by this information overload has been called everything from data asphyxiation to cognitive overload to data deluge to information fatigue syndrome. But we don’t like turning the hose off, either.

Recent research suggests that some people develop a deep and debilitating anxiety from being disconnected from these sources. Forty-five percent of respondents to a recent survey in the United Kingdom noted that they feel “worried or uncomfortable” when they are unable to connect with their email or Facebook.3 Ever travel to a foreign country where wireless data services are prohibitively expensive? Whenever I do, I notice myself hunting for Wi-Fi, simply to reconnect. It’s irrational, but the feeling is real. I worry about what I may be missing. Connected or not, you’re going to feel overwhelmed. It’s impossible not to. It’s life in the twenty-first century, a life in which we’re all asked to drink from the proverbial fire hose without spilling a drop. There’s simply no way to keep up.

It wasn’t always this way. For thousands of years, there were people believed to know everything. Somewhere in the last few hundred years, as our insights and understanding grew more voluminous, knowing everything became impossible. So, who was the last person to know everything?

Know It All

A strong candidate is Thomas Young. Born in 1773 in Somerset, England, Young read widely from an early age. By the age of twenty-one, he was a fellow of the Royal Society, Britain’s preeminent science society with origins dating back to 1660, and had presented a paper that laid the foundation of our current understanding of human vision. By his early thirties, as a practicing doctor, he had delivered a series of lectures that his biographer, Andrew Robinson, described as “covering virtually all of known science, which has never been surpassed in scope and boldness of insight.”4 Over the course of his life, Young made important contributions across a wide range of fields, including physics, physiology, engineering, music, and philology (the study of language in written historical sources—I had to look that up, not being someone who knows everything). He studied over four hundred languages, which allowed him to lay the groundwork for deciphering the Rosetta Stone. And oh, before I forget, he also took on Isaac Newton and demonstrated that light behaves as a wave, not merely the stream of particles Newton had described.

When not moving human knowledge forward at a breakneck pace, Young advised leaders on matters as diverse as the introduction of gas lighting in London, the proper mathematics necessary to understand risk in life insurance, and the relative effectiveness of various shipbuilding methods.

In Young’s story, we see a tension between pursuing breadth and depth. Knowing that all of us have limited time and attention, we tend to be skeptical of those who seem unfocused and contribute in numerous arenas without really committing to any. Recognizing the suspicion held by society toward those with multiple interests, Young made most of his contributions anonymously, trying to minimize the risk of being considered a jack-of-all-trades. He feared that, if his wide-ranging interests were public, they would scare patients away from his medical practice. Even then, depth was valued more than breadth.

In 1973, the London Science Museum designed an exhibit to celebrate Thomas Young’s two hundredth birthday. The organizers noted that “Young probably had a wider range of creative learning than any other Englishman in history. He made discoveries in nearly every field he studied.”5 Any wonder why Andrew Robinson titled his biography of Young The Last Man Who Knew Everything?

Specialization and Collaboration

The type of information being produced today is increasingly complicated and specialized. It’s becoming more important to have some prior understanding in order to digest new information or to make meaningful contributions. The former chairman of British Mensa, a club for those testing well on IQ examinations, notes that the sheer magnitude of knowledge today is so large that if you really want to understand your topic thoroughly, and if you want to speak with authority, then it’s important to specialize.6 A tight focus has become the ruling mantra of the day. And as piles of knowledge have accumulated, it’s taking longer for new contributions to be made.

Indeed, scientists and inventors are making major contributions at older and older ages. In a paper entitled “Age and Great Invention,”7 Benjamin Jones of Northwestern University explored the ages at which Nobel laureates conducted the pioneering work that earned them their awards. He found that the mean age of great achievement for both Nobel Prize winners and great inventors rose by about six years over the course of the twentieth century. Jones also measured the time it takes for researchers to begin contributing to their fields. He found that, at the start of the twentieth century, the great minds became active in research at age twenty-three on average; by the end of the century, that age had risen to thirty-one.

But does it matter what kinds of thinking these scientists were doing? Jones found a difference between those who were theorists and those who were experimentalists.8 Conceptual innovators tend to do their best work at earlier ages, while experimental innovators in more concrete fields tend to peak later. Jones explains that the most important conceptual work typically involves radical departures from existing paradigms. When individuals are first exposed to a paradigm, before they fully embrace it, they are best poised to identify flaws or areas of opportunity. That is, the early-stage conceptual researcher is more likely to look at the big picture, to see links between previously unconnected dots.

In another research project, Jones found that among inventors, the age at first innovation is trending upward at 0.6 years per decade.9 Inventors are taking more time to study before producing their first innovations. The number of people listed on each patent is growing 17 percent per decade, and specialization—measured as the probability of not switching fields between innovations—is increasing at a rate of 6 percent per decade. It takes more people, with more education and focus, to produce a single novel invention.

The trend of increasing team size is present in science as well. According to research published in Sciencexpress, between 1955 and 2000 the mean number of authors listed on articles in the sciences increased from 1.9 to 3.5.10 The research also found that multi-author articles were cited more often, and that this citation advantage grew over time. Another study confirmed the trend of increasing team size in medicine.11 Across five of the field’s most prestigious journals, it found that the mean number of authors per article increased from 4.66 to 5.73 between 1993 and 2005.

At the top of this totem pole are the large-scale, complex science experiments involving multiple nations and billions of dollars in research budgets. How many authors do the research papers emanating from these efforts tend to have? Ready for it? Thousands. That’s right, some science experiments have gotten so complicated that their team sizes are measured in the thousands.

A gigantic international research effort has been underway in Switzerland with the Large Hadron Collider (LHC) at CERN, the European particle physics lab. Costing $10 billion to build, the particle accelerator smashes protons together, generating a billion collisions every second.12 In May 2015, members of the experiment’s two major collaborations published a joint paper in Physical Review Letters.13 Number of authors? Five thousand one hundred and fifty-four! That’s right. More than five thousand people. At the time, the paper broke a world record for the most contributors to a single research article. The article took twenty-four whole pages to list the contributors—and only nine pages to communicate the research! Imagine how specialized each of the 5,154 contributions must have been.

If we split the credit for that paper evenly, each author contributed roughly 0.02 percent of it. And if we believe Jones’s finding that team sizes are growing at 17 percent per decade, it’s only a matter of time before we see papers with ten thousand or more authors. The specialization train is barreling forward, showing no signs of slowing anytime soon.
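For the curious, here is the back-of-the-envelope arithmetic behind those two claims, expressed as a short Python sketch of my own (an illustration, not a calculation taken from any of the studies cited above):

import math

authors = 5154
per_author_share = 100 / authors              # equal-split contribution, in percent
print(f"Per-author share: {per_author_share:.3f}%")            # ~0.019%, i.e., roughly 0.02 percent

growth_per_decade = 0.17                      # Jones's team-size growth rate
decades_to_10k = math.log(10_000 / authors) / math.log(1 + growth_per_decade)
print(f"Decades until a ~10,000-author paper: {decades_to_10k:.1f}")   # ~4.2 decades

At that pace, a ten-thousand-author paper would be due in a little over four decades.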

For someone who typifies this modern mantra of specialization, let’s turn to an individual who spent an entire fifty-year career focused on one pursuit—understanding why cells of some underwater creatures glow.

Osamu Shimomura, Ultraspecialist

Our flag-bearing monomath is Osamu Shimomura. He was born 155 years after Thomas Young, and six thousand miles from Young’s British birthplace—in Japan. He was sixteen when the United States dropped the atomic bomb on Nagasaki. At the time, he was living just fifteen miles from the center of the blast. Although exposed to a heavy dose of radiation, he beat the odds and survived, going on to get a degree from what is now the Nagasaki University School of Pharmaceutical Sciences. He received his PhD in organic chemistry in 1960. As a scientist, Shimomura became as specialized as it gets. After receiving his PhD, Shimomura moved to Princeton University, where his work focused on a single organism: Aequorea victoria, a beautiful, transparent jellyfish that gives off a green glow. Shimomura was fascinated by the jellyfish’s bioluminescence and wanted to understand how it worked. Soon, he had isolated the protein, aequorin, that was responsible for the glow. (His earlier work on bioluminescence had yielded a paper titled “Crystalline Cypridina Luciferin.”14 I don’t know about you, but a title like that is unlikely to grab my attention.)

Nevertheless, after identifying the protein, he needed large volumes of the protein to study. So what did he do? Every summer, when academic obligations were lighter, Shimomura would head to Friday Harbor in Washington state, where there were plenty of these jellyfish. “Our schedule was to collect 50,000 per summer, in one or two months,” he said. “Nineteen summers, and we collected a total of 850,000.”15

One puzzle was that the protein Shimomura isolated gave off blue light, but the jellyfish glowed green. Shimomura helped show how this shift happened. He and his colleagues isolated a second protein from the jellyfish, the green fluorescent protein, which absorbed aequorin’s blue light and re-emitted it as green.16 In subsequent years, Shimomura would go on to study the chemical properties of this protein. The application of green fluorescent protein (known among scientists as GFP) in molecular biology would revolutionize the field starting in the 1990s, as researchers began using it as a marker to track what’s happening in cells. According to the Royal Swedish Academy of Sciences, the protein has a “miraculous” property that lets it “provide universal genetic tags that can be used to visualize a virtually unlimited number of spatio-temporal processes in virtually all living systems.”17

To understand the magnitude of the GFP revolution, think about the tracking of whales. Before we had GPS transmitters we could attach to whales, we simply didn’t know what they were doing. Sure, we saw whales in Alaska in the summer and near Hawaii in the winter, but were they the same whales? Without the ability to track specific whales across time and location, it was impossible to know. GPS trackers on whales are the equivalent of GFP attached to cells, enabling scientists to track specific cells in living systems.

In 2008, for his work on the green fluorescent protein, Shimomura shared the Nobel Prize in Chemistry. In the words of the Royal Swedish Academy of Sciences, “Without the pioneering research of Shimomura it is likely that the GFP revolution would have been delayed by decades or even remained one of the hidden secrets of the Pacific Ocean.”18 Shimomura never intended to revolutionize biology. He was too specialized for that. As he put it, “I don’t do my research for application or any benefit. I just do my research to understand why jellyfish luminesce, and why that protein fluoresce.”19 The GFP revolution was a fringe benefit of Shimomura’s narrow focus.

From Data Deluge to Optimization Ordeals

Today, our society has many more Shimomuras than Youngs. Why’s that? To begin, it is a direct result of our attempts to cope with a massive stock of knowledge and its increasingly rapid growth. But it’s also a result of simple cash—our society tends to pay more for focus than for broad perspective, at least in most professions. A specialized lawyer, doctor, or investment banker is usually paid more than a generalist. And part of that comes from the belief that today’s data deluge offers great opportunities for optimization. Let’s now turn to the choice conundrums that emerge from this ever-present quest for maximization.

For most of human history, scarcity was the norm. We battled for limited food, sought scarce shelter, and even fought for the most desirable spouse (well, I guess we still do that!). Economics—the study of how to allocate scarce resources—arose to make sense of these dynamics. In the late eighteenth century, the economist Adam Smith suggested that the “invisible hand” of self-interest would guide a diverse set of workers, merchants, and others to produce an optimal outcome for society as a whole.20 By this, Smith meant that every person would pursue his or her own goals and maximize his or her own satisfaction, and, in doing so, inadvertently behave in a way that helped achieve an ideal distribution of resources.

Adam Smith’s writing embraced the messiness of reality. As the discipline matured, though, economics left behind the complex stew of human interactions in favor of simpler models. For economists, people became hyperrational pleasure-maximizers. And so off we went, with every man “busily arranging his life to maximize the pleasure of his psychic adding machine,” as American economist and historian Robert Heilbroner put it.21

Our world today is complex. We have many options, and, more importantly, we are aware of just how many options we have. When we choose among options, there is a human instinct—the one the field of economics relies upon—to try to choose the best one. But optimizing in the face of uncertainty and interconnectedness is hard, and it often simply doesn’t happen. Analysis paralysis grows more common by the day.

Choice Conundrums

The explosion of choice reaches into every aspect of our lives, even when we’re just selecting a movie to watch—something that should be fun. Home entertainment has seen a truly awe-inspiring increase in options. Surely, pleasure-seeking homo economicus is excited by these possibilities, right? There was once a time when in-home entertainment was limited to whatever was on TV. Oh, and there were only five channels. But then, as bandwidth to the home increased with cable infrastructure, the number of options for in-home entertainment grew. Weekly periodicals such as TV Guide sprang up to help consumers navigate which channels were showing which shows when. And that was when consumers had only dozens of channels to choose from.

While channels were increasing, we also got more choice with the invention of recording devices such as the VCR, which allowed us to watch not only live TV but anything that had previously aired. Eventually, the video rental store came along. If you needed a movie for the evening, you were confronted with shelves upon shelves of options—thousands of videos with no differentiating qualities besides their titles. You couldn’t even see the covers, since many stores replaced them with their own generic ones. At that time, your only hope was to go to the “staff picks” shelf and defer to the advice of loyal employees. The choices were overwhelming then, but it was all dwarfed by what came next.

Today, cable companies routinely offer hundreds of channels, as well as hundreds of thousands of on-demand movies, shows, and even radio stations. There are channels catering to the unique interests of cooking, sci-fi, nature, business, and sports enthusiasts. You can get standard-definition, high-definition, or even 3-D displays. Add in the explosion of streaming content across numerous platforms (from Netflix to Apple TV to Disney+), and today’s consumer has many millions of options to choose from. You could watch pretty much any movie or TV show created in human history—all without leaving the comfort of your couch.

Is all this choice making us happier? Are we really better off?

We tend not to question the value of choice. Everyone’s different—so with more options, it’s more likely we’ll find something perfect for us—right? Standard economic logic suggests that more options are always better. For many, this makes intuitive sense: you can always ignore the extra possibilities if they don’t improve your happiness, so new options can only improve satisfaction. But that doesn’t seem to be the case in real life.

Movie Melancholy

At least once a month or so, my wife, Kristen, and I decide to watch a movie together after our kids are asleep. Thanks to the selection of movies available at our fingertips, we don’t feel like we have to just accept whatever’s playing that evening. We feel empowered. We have choice and plenty of it.

Except here’s what typically happens. Whoever gets to the couch first begins scanning new releases, inevitably watching a preview or two. Given that the average trailer runs around three minutes, that’s already roughly six minutes spent searching. But if my wife beats me to the couch and I arrive after she’s already seen a preview or two, I insist on watching what she’s seen. I don’t want to negotiate against a Harvard Law School graduate who practiced as a litigator for the prestigious Boston law firm of Ropes & Gray until I’ve seen what she’s seen. I simply need to be on similar informational footing.

The chance of us agreeing on one of the first three options is about the same as winning Powerball. Nah, that would be optimistic. It’s more like the chance of winning Mega Millions and Powerball in the same week of a leap year under a crescent moon. So the search continues. There’s a nagging feeling throughout that the perfect movie exists for our particular mood at this particular time—why wouldn’t it? After all, there are an ungodly number of options available; surely one is perfect.

So we search and search and search. Eventually, we settle—probably about forty-five minutes after we began—on a movie that doesn’t really excite either of us (how could it, given our expectations of perfection?) but doesn’t really bother us either. Because of the time it took to select the movie, though, I’m exhausted and doze off within the first hour. At which point my wife is angry. She feels abandoned.

“I agreed to this movie because you seemed more interested in it than the one I wanted to see,” she’s likely to argue. Half asleep, I tell her to just stop the movie and start the one she wanted to see.

“Fine.”

I’ve known my wife since the early 1990s when we were both undergraduates at Yale, and the one thing I’ve learned since then is that “fine” does not mean fine. It means something more akin to “I’m mad at you, but I’m too polite to tell you.” So what do I do? I go to sleep, only to learn the next morning that Kristen fell asleep halfway through the movie she did want to watch, having watched half of a movie she didn’t want to watch. She went to sleep angry and frustrated by the whole experience and, as most married people will likely understand, assumed I was the cause of this distressing evening rather than the million options offered by Xfinity. (Hmmm, I wonder if there are other couples suffering from Xfinity’s choice explosion—perhaps there’s a class action possibility here?)

At least in our case, choice wasn’t helpful. The research suggests Kristen and I are not alone. A growing body of research has shown that there are significant downsides to choice—which is why filters to narrow our focus have become so important. It turns out that when presented with, say, hundreds of bags of potato chips, we don’t automatically calculate which has the perfect size, flavor, shape, price, and healthiness, and then pick the option that generates the most happiness. Instead, we are often paralyzed. We hesitate and decide we didn’t want chips in the first place. Or once we choose a bag and eat the chips, we can’t help but think of how another choice might have been better. Choice, anxiety, and regret are close cousins.

A number of experiments have bolstered this insight. In the most famous one, researchers set up two different displays of jam in a grocery store. One offered six types of jams; the other twenty-four. More shoppers stopped when there were more jams, with 60 percent of passersby sampling one of the twenty-four offered, compared to 40 percent for the six-sample display. But more choice was paralyzing. Thirty percent of those exposed to six options made a purchase, compared to only 3 percent of those who saw twenty-four samples. With too many options, shoppers didn’t sort through and find the perfect one. Instead, their eyes glazed over and they moved on. Further, there was likely a nagging suspicion among the 3 percent that they chose poorly.

It’s not just shoppers who act this way. In one study, researchers offered students the option of writing an essay for extra credit. The students were divided into two groups: one was offered six possible topics to write about; the other was presented with thirty. Fourteen percent fewer students in the latter group (presumably overwhelmed by choice) chose to complete the optional assignment.22

Analysis paralysis also occurs in domains other than shopping and school. Even when people are managing their money—where you’d think they’d be at their most rational and patient—they fall victim to analysis paralysis. Companies that offer retirement savings plans often provide their employees with a set of mutual funds in which to invest their money. One researcher found that for every ten additional mutual fund options that were offered, participation in the program fell by 1.5 to 2 percent.23

A proliferation of options does not, as economists once believed, equate to freedom or well-being. A horde of choices overloads our focus systems, leading to paralyzing anxiety. It’s why the super-supermarket in The Simpsons is called Monstromart and has the tagline “Where Shopping Is a Baffling Ordeal.” Or why the movie Idiocracy portrays its dumbed-down future as containing a Costco as large as a city, filled with miles of shelves, a law school, and even its own internal mass transit system.

A Tempting Tyrant

Barry Schwartz is a psychologist who writes about the paradox of choice. One day, Schwartz walked into a store to buy new jeans. He was asked if he would like slim fit, easy fit, relaxed, baggy, or extra baggy jeans. He was then asked if he’d like stone-washed, acid-washed, or distressed. Button fly or zipper fly? Faded or regular? And what color?

Schwartz decided to try them all. Soon, he had become so focused on finding the perfect fit that the act of buying a pair of jeans turned into a daylong affair, a complex decision in which he was forced to invest time, energy, and no small amount of self-doubt, anxiety, and dread. Before all these options existed, buyers settled for an imperfect fit, but at least they’d buy a pair of jeans within five minutes. Now, without a filter to narrow his focus, Schwartz became overwhelmed in the face of countless options. His dilemma sounds a lot like how my wife and I feel when picking a movie.

Schwartz suggests the costs of trying to make the best decision often outweigh the possible benefits. He goes further to suggest overwhelming choice probably also affects our mental health: “Clinging tenaciously to all the choices available to us contributes to bad decisions, to anxiety, stress, and dissatisfaction—even to clinical depression.”24

Anxiety arises when we have an overwhelming set of options. But it also can come from a seemingly opposite situation: when we’re confronted with nothing. At these times, we’re reminded that there is an almost infinite number of choices lurking; we simply must be missing them. Our awareness of all the possibilities we might never even get a glimpse of—let alone have the privilege of dismissing—summons the omnipresent specter of modern living, the fear of missing out, also known as FOMO.

Having some choice is doubtless better than no choice. Indeed, choice and freedom are related concepts. Free choice is what enables markets to work and has powered human ingenuity for thousands of years. It generates competition and is, in many ways, the bedrock upon which liberal democracy has been built. But have we taken a good thing too far? Schwartz suggests so: “At this point, choice no longer liberates, but debilitates. It might even be said to tyrannize.”25

Social (Media) Unrest?

During 2019, incidents of social unrest sprang up across the globe. Consider a sampling of the events: Chile experienced mass protests in the streets of Santiago, Bolivia’s military deposed its leader, Catalonians took to the streets to seek independence, yellow vest protesters filled the streets of Paris, anti-Iranian demonstrations took place in both Lebanon and Iraq, Russia endured mass protests, and citizens of Hong Kong pushed back against pro-Chinese policies in the territory. Other countries rocked by social protests included Algeria, Britain, Guinea, Kazakhstan, and Pakistan.

While most of the attention on the topic of social media and social unrest tends to focus on the technology’s mobilizing and coordinating power, might it be possible that social media is exacerbating society’s FOMO? Yes, economic inequality reached a tipping point, inspiring the poor and disenfranchised to rise against the wealthy and powerful. And of course social media brings like-minded people together: The Economist magazine suggests “its use tends to create echo chambers and thus heighten the feeling that the powers-that-be ‘never listen.’ ”a

But might social media be having an impact far greater than mere validation of feelings? Could social media actually be intensifying the protesters’ feelings of being left behind? Just think about the topics you might share on social media. I’d bet you’re very unlikely to post about losing a job, while you’d be very likely to share news of a promotion. If everybody does this, social media presents an image of the world that is far more positive than reality. Anyone who fails to recognize the bias of this positive-only feed will then feel uniquely worse off than most. Might that increase anxiety and a feeling of desperation, leading some into the streets to reset the social system?

a. “Economics, Demography, and Social Media Only Partly Explain the Protests Roiling so Many Countries Today,” Economist, November 14, 2019, http://www.economist.com/international/2019/11/14/economics-demography-and-social-media-only-partly-explain-the-protests-roiling-so-many-countries-today.

FOMO

While information overload may have defined the 2000s and choice overload the 2010s, my bet is that the ever-dreaded FOMO may become the anxiety that characterizes the 2020s. It’s a natural consequence of our interconnected lives, social media, and the overwhelming choice so obviously available to us with little to no effort. You see, the more options we have and the more options we are aware of, the more likely we are to regret the choices we make. Just as with my movie melancholy, the allure of nirvana remains ever-present, even if unattainable.

And while more choice inevitably leads to FOMO, our highly interconnected lives (through social media and other communication and sharing platforms) intensify this problem. MIT Professor Sherry Turkle notes that FOMO drives a constant fear that something better exists somewhere else. In her book Reclaiming Conversation: The Power of Talk in a Digital Age, Turkle describes the feelings of Kati, a young woman who seems to typify this condition: at almost any party Kati and her friends attend, someone seems to always be texting friends at other parties to figure out whether they’re at the absolute best party. As Kati describes it, “Maybe we can find a better party; maybe there are better people at a party just down the block.”26 Recalling the exploding choice problem, Turkle concludes: “Nothing Kati and her friends decide seems to measure up to their fantasy of what they might have done.”27

The social lives of many today exist on the back of Silicon Valley infrastructure, which structures our motivations. When we act, we are increasingly preoccupied with how the act will be reflected in our friends’ feeds. Given the heavy dependence on electronic interactions, accruing likes turns into an arms race. A CNN report described adolescents anxious to join the “100 club”—meaning that a post of theirs accrued one hundred likes or more.28 It is not uncommon to hear of people doing all sorts of atypical, attention-seeking things for the likes. The street artist Banksy depicted these dynamics with an image of a boy crying beneath hovering symbols for no comments, no likes, and no follows.29 At the current pace, note Peter Singer and Emerson Brooking, “the average American millennial will take around 26,000 selfies in their lifetime”—many to generate likes. They go on, highlighting how “in 2016, one victim of an airplane hijacking scored the ultimate millennial coup: taking a selfie with his hijacker.”30 The zeal for likes is so strong that in most years, more people die from selfie accidents than from shark attacks.31

If you’re like me and don’t understand the logic of the previous sentence, blame the generation gap. I asked some younger folks in my life why the quest for likes would lead people to die. The way they explained it is this: as more and more people take photos, competition for likes steepens. Therefore, if you can take photos in crazy situations—say, a selfie as close to a moving train as possible, with a live grenade, or while sprinting down the streets of Pamplona chased by bulls, to name just a few examples that resulted in death32—and live to post, tweet, or share the tale, then you will get lots of likes.

And don’t think this is just for the young among us. In 2016, fifty-one-year-old Oliver Clark died after falling 130 feet while jumping to get the perfect picture of himself at Machu Picchu.33 In fact, a Washington Post article reporting Clark’s death noted that less than twenty-four hours prior to his demise, a South Korean tourist fell more than 1,600 feet down the Gocta Waterfall in the Amazonas region of northern Peru. Other selfie accidents mentioned in the article include the death of a Japanese tourist who fell down stairs at the Taj Mahal, tourists being gored by bison in Yellowstone Park, and the drowning of seven men in the Ganges River.34

Entrepreneurs have taken notice of this quest for likes. In an article titled “Please Like My Vacation Photo. I Hired a Professional,” the Wall Street Journal noted the boom in Instagram tours that cater to the needs of those focused on generating the perfect perception online.35 In addition to helping clients change shoes and outfits during the tour, many guides will also find ideal lighting conditions to secure the like-generating shot. In some cases, they may even help create an image that differs from reality. The article notes that “judging from Instagram, the picturesque Hindu temple that houses the Gate of Heaven in Bali is surrounded by a serene body of water. But the reflection isn’t a pool; it is actually a piece of glass.” In reality, the glass that provides the reflection obscures gray paving stones packed with tourists waiting to pose for that very shot.

Caring deeply about perceptions and prestige is nothing new. But it is amplified, accelerated, and bent into a new shape by social apps, which emphasize instant, one-dimensional quantifications of social interaction and appreciation. Our relationship with social media encourages a focus on appearances over experiences. Humans have always been vain, but now the medium of our vanity gives us immediate, addictive feedback. And there is medical evidence that this instant feedback acts like a dopamine drip, always bringing us back for more. Our focus on the instant gratification of likes keeps us from taking a more holistic approach to social well-being.

And knowledge of these distracting influences on our well-being does not necessarily prevent us from being absorbed by their overwhelming power. I know, because I fell victim to the seductive allure of accumulating likes. In 2014, I began writing a weekly comment on geopolitics and geo-economics. It was a fun way for me to consolidate my thinking on a range of issues, and posting it online allowed me to connect with others who seemed to care about those same issues. But in January 2015, I was convinced by one of my readers (I had dozens of them who regularly read and engaged with my content) to post my views on LinkedIn. “The LinkedIn community will love your comments, so why not post your views where people are already going rather than trying to get readers to come to you?”

It seemed to make sense, and so I began putting my weekly comments on LinkedIn. At first, the number of views jumped up from dozens to around a hundred. By the middle of 2015, I regularly had one thousand people reading my stuff, and by the end of 2015, my audience had grown to tens of thousands. LinkedIn then honored me as their #1 Top Voice for money and finance, something that fueled more readership in 2016. During the year, I had several pieces that drew hundreds of thousands of readers and many thousands of likes. I soon found myself stressing about headlines that would draw readers or topics that might generate likes. Bottom line, the tail had begun to wag the dog. What began as a process of writing to help crystallize my thinking had turned into a quest for likes, views, and shares.

So what did I do? After being honored again in 2016 as the #1 Top Voice on LinkedIn for finance and economics, I quit. That’s right, I stopped cold turkey. Rather than let popular sentiment drive my writing on a regular schedule, I chose to write a monthly reflection piece for my mailing list. Once I stopped tracking whether it generated likes or views, the burden of writing turned back into the joy of expression.

Dating Data

For those seeking love today, abstaining from social media is likely not an option, as potential partners are increasingly turning to apps for dates. Given the superficial dynamics of dating—whether online or offline—social apps make searching for love more convenient and may actually expand our horizons in ways that could result in concrete, meaningful relationships. The difference is that with online dating, unlike with social media friendships, the “online” part ends when the relationship begins. Not using dating apps largely condemns us to the narrow focus of our immediate friendship networks, our workplaces, and serendipity in local bars.

Consider these facts: Many Americans meet their significant others and spouses through friends. According to a 2012 study, roughly 30 percent of straight couples met this way, which has been the leading path toward a relationship in the post–World War II era.36 Another popular way to meet a potential spouse is at work, a phenomenon that began in the 1960s as women entered the workforce. But since the 1990s, the number of couples meeting through coworkers or at work has been falling.37

Given the importance of choosing a life partner, the narrowness of these search channels is actually quite shocking. Why’s that? As noted by online commentator Tim Urban on Wait But Why, choosing a life partner is a big commitment that shouldn’t be trivialized. In fact, Urban says that the choice is really about “your parenting partner, your eating companion for about 20,000 meals, your travel partner for about 100 vacations, your primary leisure time and retirement friend, your career therapist, and someone whose day you’ll hear about 18,000 times.” Urban concludes, “This is by far the most important thing in life to get right.”38

But society tends to work against us by placing a stigma on intelligently expanding our search for potential partners. Urban argues that people are often still hesitant to admit they met their spouse on a dating site—that instead, “The respectable way to meet a life partner is by dumb luck, by bumping into them randomly or being introduced to them from within your little pool.” He notes this perspective is counterproductive. The obvious conclusion, according to Urban, is that “Everyone looking for a life partner should be doing a lot of online dating, speed dating, and other systems created to broaden the candidate pool in an intelligent way.”39

And although this logic is sound, it also comes with downsides. While online dating allows us to zoom out from the narrow focus of our immediate social surroundings, it condemns us to being overwhelmed by the sheer number of potential matches, making us anxious as we seek to optimize. This tendency can leave us with unnecessary feelings of dissatisfaction with our partners. Surely the perfect match exists, right?

In our quest to optimize, we’re giving up the pretty good to search for the ideal. The result is higher anxiety, more confusion, and rising discomfort with trivial challenges within relationships. As one online dater put it, “I find it insanely overwhelming. At what point do you stop swiping?” Another confessed: “Sometimes I worry that the love of my life is on a different dating app.”40

Indeed, some research into online dating trends suggests exactly this. One study found that married couples who first met online were three times as likely to divorce as couples who met in person.41 Might this be because of a nagging feeling that an ideal match exists? There’s plenty of anecdotal evidence of the anxiety and corresponding FOMO that come with online dating, explored by Aziz Ansari in his book Modern Romance and by the New York Times opinion section, among others.42

Connectivity Conundrums and Data Distractions

People are so focused on their phones that it’s endangering their lives. According to the Daily Mail, 43 percent of young people in the United Kingdom have walked into someone or something while checking their mobile phone.43 The situation is so bad in Japan that a cellular operator ran a public service advertisement warning against mindless pedestrian smartphone use. The ad claimed that 66 percent of people have bumped into another person while using a smartphone, and that 3.6 percent have fallen from a train platform while texting and walking.44

This constant connection with our phones is not just endangering our own lives, it’s also putting public safety professionals at risk. An April 2019 study by the National Safety Council and the Emergency Responder Safety Institute found that 71 percent of drivers take photos or videos when they see an accident, and 16 percent of these drivers admitted (how many did so but wouldn’t admit it?) that they either struck or nearly struck a first responder in the process.45 The same survey revealed that 60 percent of drivers posted their videos to social media (presumably to generate likes?) and 66 percent sent emails—all while driving through a traffic-filled accident site. The need to stay connected and online at all times is literally threatening our lives and those of the people around us. And yet we continue.

Even if you wanted to partially disconnect to slow the tidal wave of information coming at you, growing social pressures lead people to distrust those with no social media presence. As Evgeny Morozov has pointed out,46 it is more and more common for journalists to casually imply that if you don’t have an online profile, you’re a deviant or have something to hide. You could be ostracized just for trying to avoid notifications!

We are worse off when we allow ourselves to be distracted by notifications. Cal Newport argues in Deep Work: Rules for Focused Success in a Distracted World that we cannot enter “deep work,” in which we focus on a cognitively demanding task for a long period of time, when we receive constant interruptions.47 Unfortunately, our workplaces value constant communication and accessibility, which works directly against enabling employees to produce cognitive products at their highest levels. The average employee receives over three hundred emails a week, checks email thirty-six times per hour, and takes up to sixteen minutes to refocus after handling a new message. Further, workers are interrupted on average fifty-six times per day, switch tasks twenty times per hour, and spend two hours per day recovering from distractions.48

Distraction is now a universal competency, as Joshua Rothman wrote in “A New Theory of Distraction” in the New Yorker in 2015. We see distraction as a way of asserting control, of regaining our autonomy from any situation that could trap us—a conversation, a movie, or a walk down the street.49 But there are real downsides to distraction. Researchers at the Institute of Psychiatry at King’s College London reported that “unchecked infomania” (the constant use of email and other social media) led to a temporary ten-point drop in the IQ of the study’s participants—twice the drop seen in marijuana users!50

Cognitive Crutches

Overwhelming choice generates a nagging sense of regret. But this is the complete opposite of how some economists believe humans act. In reality, we humans are not the rational, utility-maximizing robots that some economists believe we are. In fact, we are so consistently irrational that the whole field of behavioral decision making exists to better understand how we think. The godfathers of this field are Amos Tversky and Daniel Kahneman.

A key insight from their work is that how choices are presented to us impacts the selections we make. Kahneman describes this “framing effect” in Thinking, Fast and Slow.51 And it’s backed up by evidence. For example, researchers found that subjects had higher opinions of meat described to them as “75 percent lean” than of meat described as “25 percent fat.”52 The power of framing can affect almost all walks of life, from organ donations to medical treatments. For instance, in countries where people have to check a box to become an organ donor, between 4 and 28 percent of people become donors. By contrast, in countries where people must check a box to opt out of being an organ donor, between 86 and 100 percent become donors.53 How is this possible? Framing.

In medicine, doctors are more likely to choose treatments presented in terms of survival rates than those presented in terms of mortality rates. In one experiment, doctors were given information about outcomes for two different lung cancer treatments: surgery and radiation. When the doctors were told that for surgery “the one-month survival rate is 90 percent,” 84 percent of them preferred it to radiation. By contrast, when they were told “there is 10 percent mortality in the first month,” only 50 percent favored it.54 The options were fundamentally the same, and yet framing dramatically affected how expert doctors responded to them.

As we’ve seen in these cases, the framer sets our field of vision and, in doing so, can exert influence toward a specific choice. As Kahneman explains, “A physician, and perhaps a presidential advisor as well, could influence the decision made by the patient or by the President, without distorting or suppressing information, merely by the framing of outcomes and contingencies.”55 In a world of overwhelming choice, we throw up our hands in despair, hoping someone will present an authoritative optimization strategy; in so doing, we grant enormous power to the decision aids and focus filters that frame our choices.

Kahneman and Tversky also showed how we fixate on seemingly irrelevant numbers in ways that bias our decision making. In one experiment, a researcher spun a number wheel in front of an audience and then read the randomly selected number aloud. He then asked the audience what percentage of African countries were members of the United Nations. Audience members’ guesses tended to cluster around the randomly selected number, even though they knew it had no relevance to the question being asked. For instance, when the random number was ten, the median audience guess was 25 percent; when the random number was sixty-five, the median guess was 45 percent.56 People tend to stick with what they see and hear, even if it’s totally irrelevant. This phenomenon, which they called anchoring, confuses our decision making.

The anchoring effect illustrates the stakes of managing focus. Whatever baseline happens to sit in our field of view will profoundly affect our ultimate choice—even though we know in our rational minds that it shouldn’t. As we encounter new information, we adjust up or down from this baseline. But the psychologists’ research shows that we adjust insufficiently. If someone asks us whether global GDP per capita is higher or lower than $100,000, even if we later receive information suggesting an answer closer to $12,000, we will adjust downward insufficiently from that initial high baseline. How our baselines get set is thus key to how we decide whether something is high or low, good or bad—judgments that in turn drive our choices. As you might imagine, this reality allows those to whom we turn for advice or guidance to exercise tremendous power over our choices.

Another cognitive bias that focus managers exploit is loss aversion. In comparing outcomes, humans feel more pain from losses than they feel pleasure from equivalent gains. As Kahneman, Knetsch, and Thaler note, in gambling on coin flips, people often need the possibility of winning as much as $200 before they are ready to accept a potential loss of $100.57 Economists would call such behavior irrational.
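To make the asymmetry concrete, here is a minimal Python sketch of the arithmetic behind that coin flip. It is my own illustration, not something drawn from Kahneman, Knetsch, and Thaler’s paper:

# The gamble has a positive expected value, yet many people refuse it
# unless the potential win is roughly double the potential loss.
win, loss = 200, 100
expected_value = 0.5 * win - 0.5 * loss   # +$50 per flip, on average
loss_aversion_ratio = win / loss          # ~2: losses loom about twice as large as gains
print(expected_value, loss_aversion_ratio)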

Attempting to harness these cognitive biases, I often begin my classes by letting my students know that everyone in the class currently has an A. I then proceed to tell them that they will lose their A unless they do a spectacular job on each and every assignment, come prepared for each class, and engage in class discussions. Acknowledging that I have a biased and small sample, my trick seems to work. Most retain a good grade. Whether my framing matters is impossible for me to answer, but I do find students more engaged and committed than before I adopted this approach.

The bottom line is that we humans are not as rational as some decision-making models assume. In fact, homo sapiens differs from homo economicus in many ways, but most visibly when it comes to how we actually make choices. The cognitive crutches we’ve developed (called heuristics by psychologists) help us to make choices quickly and often correctly, but we can also be misled when we let our default methods take over and blindly follow them.

Nobel Laureate Herbert Simon suggested that we humans suffer from what he called “bounded rationality,” in that our ability to optimize is limited by (1) the information at our disposal, (2) the capacities of our mind to calculate tradeoffs and optimize with them, and (3) the limited time we all have to make decisions.58 This reality means optimization is almost universally unlikely; yet the very promise of it creates widespread decision dilemmas as the unachievable quest for the perfect decision continues unabated. Simon’s solution was simple: we should learn to “satisfice” rather than maximize,59 meaning we should be willing to make choices that are “good enough” to meet our needs, even if a better choice might be possible. We’ll return to this concept later in the book when we discuss goals-based investing.
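As a loose illustration of the difference, consider how a satisficer and a maximizer might scan the same list of options. This is my own sketch, not anything Simon wrote; the movie scores are hypothetical:

def satisfice(options, good_enough):
    # Simon's satisficer: take the first option that clears the aspiration level.
    for option in options:
        if option >= good_enough:
            return option
    return max(options)  # if nothing clears the bar, fall back to the best available

def maximize(options):
    # The maximizer must examine every option to find the single best one.
    return max(options)

movie_scores = [6.1, 7.4, 5.2, 8.9, 7.0]  # hypothetical satisfaction scores
print(satisfice(movie_scores, good_enough=7.0))  # 7.4, found after just two looks
print(maximize(movie_scores))                    # 8.9, but only after scanning them all

The satisficer forgoes the 8.9 but gets the evening back: good enough, and done.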

The proliferation of options does not appear to have increased our satisfaction; rather, the explosion of choices appears to have made us more anxious, paralyzed, and regretful. Choosing has gone from an expression of preference to an anxiety-filled drama in which even trivial selections can generate doubt and regret. This paradox of choice drives demand for anything that can mitigate its symptoms.

Is it any wonder that we willingly outsource our thinking to those who claim an ability to guide us through the deluge of data toward a seemingly optimal choice? Paradoxically, our faith in choice, which was supposed to empower us, has led us to rely more on others. And in so doing, we’ve given up an inordinate amount of control.
