© Tobias Baer 2019
Tobias Baer, Understand, Manage, and Prevent Algorithmic Bias, https://doi.org/10.1007/978-1-4842-4885-0_2

2. Bias in Human Decision-Making

Tobias Baer1 
(1)
Kaufbeuren, Germany
 

As you will see in the following chapters, algorithmic biases originate in or mirror human cognitive biases in many ways. The best way to start understanding algorithmic biases is therefore to understand human biases. And while colloquially “bias” is often deemed to be a bad thing that considerate, well-meaning people would eschew, it actually is central to the way the human brain works. The reason is that nature needs to solve for three competing objectives simultaneously: accuracy, speed, and (energy) efficiency.

Accuracy is an obvious objective. If you are out hunting for prey but a poorly functioning cognitive system makes you see an animal in every second tree trunk or rock you encounter, you obviously would struggle to hunt down anything edible.

Speed, by contrast, is often overlooked. Survival in the wild is often a matter of milliseconds. If a tiger appears in your field of vision, it takes at least 200 milliseconds until your frontal lobe—the place of logical thinking—recognizes that you are staring at a tiger. By that time, the tiger may very well be leaping at you, and soon after you'll have ended your life as the tiger's breakfast. Our survival as a species may well have hinged on the fact that nature managed to bring down the time for the fight-or-flight reflex to kick in to 30-40 milliseconds—a mere 160 milliseconds between extinction and, by some accounts, becoming the crown of creation! As John Coates describes in great detail in his book The Hour Between Dog and Wolf,1 nature had to go through a mindboggling array of tweaks and tricks to accomplish this. A key aspect of the solution: if in doubt, assume you're seeing a tiger. As you will see, biases are therefore a critical item in nature's toolbox for accelerating decisions.

Efficiency is the least known aspect of nature’s approach to thinking and decision-making. Chances are that you grew up believing that logical, conscious thinking is all your brain does. If you only knew! Most thinking is actually done subconsciously. Even what feels like conscious thinking often is a back-and-forth between conscious and subconscious thinking. For example, imagine you want to go out for dinner tonight. Which restaurant would you choose? Please pause here and actually do make a choice! Ready? Have you made your choice? OK. Was it a conscious or subconscious choice? You probably looked at a couple of options and then consciously made a choice. However, how did that short list of options you considered come about? Did you create a spreadsheet to meticulously go through the dozens or thousands of restaurants that exist in your city, assess them based on carefully chosen criteria, and then make a decision? Or did you magically think of a rather short selection of restaurants? That’s an example of your subconscious giving a hand to your conscious thinking—it made the job of deciding on a dinner place a lot easier by reducing the choices to a rather short list.

The reason why nature is so obsessed with efficiency is that your logical, conscious thinking is terribly inefficient. The average brain accounts for less than 2% of a person's weight, yet it consumes 20% of the body's energy.2 That means 20% of the food you obtain and digest goes to powering your brain alone! That's a lot of energy for such a small part of the body. And most of that energy is consumed by the logical thinking you engage in (as opposed to almost effortless subconscious pattern recognition). Just as modern planes and ships have all kinds of technological methods to reduce energy consumption, Mother Nature also embedded all kinds of mechanisms into the brain to minimize the energy consumed by logical thinking (lest you need to eat 20 steaks per day). Not surprisingly, it introduced all kinds of biases along the way.

If you collect all the various biases described across the psychological literature, you will find over 100 of them.3 Many of them are specific realizations of more fundamental principles of how the brain works, however, and several authors have therefore distilled the literature down to 4–5 major types of biases. I personally like the framework developed by Dan Lovallo and my former colleague Olivier Sibony:4 they distinguish action-oriented, stability, pattern-recognition, interest, and social biases. I will loosely follow that framework as I discuss, in the following sections, some of the most important biases required for an understanding of algorithmic bias.

Action-Oriented Biases

Action-oriented biases reflect nature's insight that speed is often king. Who do you think is more likely to survive in the wild: the careful planner who will compose a 20-page risk assessment and think through at least five different response options before deciding whether fight or flight is the better response to the tiger that just appeared five meters in front of him, or the daredevil who decides in a split second to fight the tiger?

A couple of biases illustrate the nature of action-oriented biases. To begin with, biases such as the von Restorff effect (focus on the one item that stands out from the other items in front of us) and the bizarreness effect (focus on the item that is most different from what we expect to see) draw our attention to the yellow fur among all those bushes and trees around us; overoptimism and overconfidence then douse the self-doubt that might cause deadly procrastination.

The bizarreness effect can bias our cognition much as outliers and leverage points can have an outsized effect on the estimated coefficients of an algorithm. This works through the availability bias—if we recall one particular data point more easily than other data points (e.g., because it stood out from most other data points), we overestimate the representativeness of the easy-to-remember data point. This can explain why, say, a single incident of a foreigner committing a spectacular crime can severely bias our perception of people with that foreigner's nationality, causing out-of-proportion hostility and aggression against them.
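
To make the statistical analogy concrete, the following minimal Python sketch (with simulated data; the numbers and the use of NumPy are illustrative assumptions, not taken from the text) shows how a single high-leverage observation can drag an estimated regression coefficient away from the true relationship, much as one bizarre, memorable incident drags our intuitive estimates.

# Illustrative sketch: one extreme data point distorts the fitted slope,
# analogous to how one memorable incident distorts our judgment.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + rng.normal(0, 1, 50)           # true slope is 2.0

slope_clean = np.polyfit(x, y, 1)[0]

# Add one bizarre, highly "available" observation far from the rest.
x_out = np.append(x, 30.0)
y_out = np.append(y, 5.0)                    # wildly off the true line

slope_biased = np.polyfit(x_out, y_out, 1)[0]

print(f"slope without the outlier: {slope_clean:.2f}")   # close to 2.0
print(f"slope with one outlier:    {slope_biased:.2f}")  # pulled far below 2.0

A robust estimator—or simply inspecting and questioning extreme observations—would dampen this effect; the point is only that a single extreme point can dominate the estimate.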

Overconfidence deserves our special attention because it also goes a long way toward explaining why not enough is done about biases in general and algorithmic biases in particular. Many researchers have demonstrated overconfidence by asking people how they compare themselves to others.5 For example, 70% of high school seniors surveyed believed that they have "above average" leadership skills, but only 2% believed they were "below average" (where by definition, roughly 50% each should be below and above average, respectively). On their ability to get along with others, 60% even believed themselves to be in the top 10%, and 25% in the top 1%. Similar results have been found for technical skills such as driving and software programming. Overoptimism is essentially the same bias but applied to the assessment of outcomes and events, such as whether a large construction project will be able to remain within its cost budget.

What does this mean for fighting bias? Even if people accept the fact that others may be biased, they overestimate their own ability to withstand biases when judging—and as a result resist efforts to debias their own decisions. With most people succumbing to overconfidence, we can easily have a situation where most people accept that biases exist but the majority still refuses to do anything about it.

Another fascinating aspect of the research on overoptimism: it has been found in Western culture but not in the Far East.6 This illustrates that both individual personality and the overall culture of a country (or company/organization) have an impact on the way we make decisions and thus on biases. A bias we observe in one context may not occur in another—but other biases might arise instead.

Note

An excellent demonstration of overconfidence is the fact that I observe that because of overconfidence, most people fail to take action to debias their decisions—but I write a book on debiasing algorithms anyhow, somehow believing that against all odds I will be able to overcome human bias among my readers and compel them to implement my suggestions. However, I also know that you, my dear reader, are different from the average reader and a lot more prone to actually take actions than others; therefore, let me just point out that in order to be consistent with your well-deserved positive self-image, you should make an action plan today of how you will apply the insights and recommendations from this book in your daily work and actively resist the tempting belief that you are immune to bias, lest you fail to meet the high expectations of both of us in our own respective skills.☺

Stability Biases

Stability biases are a way for nature to be efficient. Imagine you find yourself the sole visitor at the matinee showing of an art movie—you could therefore choose literally any of the 200 seats. What would you do: jump up every 30 seconds to try out a different one, or pretty much settle into one seat, changing it at most once or twice to gain more legroom or escape the cold breeze of an obnoxious air conditioner? From nature's perspective, every time you merely think about changing your seat, you have already burned mental fuel, and if you actually get up to change seats, your muscles consume costly energy—let alone that you might miss the best scene of the movie. A number of biases try to prevent this waste of mental and physical resources by "gluing" you to the status quo.

Examples of these biases include the status quo bias and loss aversion. You like the seat you are sitting in better than other seats simply because it is the status quo—and you hate the idea of losing it. This is a specific manifestation of loss aversion dubbed the endowment effect; experiments involving university coffee mugs and pens have shown that once an object is in your possession (i.e., you are "endowed" with the object), the minimum price at which you are willing to sell it might be roughly double the maximum price you would be willing to pay for it.7

While economists consider such a situation irrational and abnormal, from nature's perspective it appears perfectly reasonable—nature wants you to either take a rest or do more productive things than trading petty items at negligible personal gain! At times, however, this status quo bias overshoots. For example, corporate decisions in annual budgeting exhibit a very strong status quo bias, with one analysis reporting a 90% correlation in the budget allocations of individual departments or units from year to year. While this may avoid an acrimonious debate about taking budget away from some units, the stability comes at enormous economic cost: companies with more dynamic budget allocation grow twice as fast as those ceding to the status quo bias.8

Another important stability bias is the anchoring effect. Econometricians studying time series models often are surprised at how well the so-called naïve model works9—for many time series, this period's value is an excellent predictor of the next period's value, and many complex time series models barely outperform this naïve model. Nature must have taken notice, because when humans make an estimate, they often root it heavily in whatever initial value they have and make only minor adjustments as new information arises over time. At times, however, this bias leads us seriously astray—namely, if the initial value is seriously wrong or simply random. A popular demonstration of the anchoring effect involves asking participants to write down the last two digits of their social security or telephone number before estimating the price of an item, such as a bottle of wine or a box of chocolates. Even though there is obviously no relationship whatsoever between these numbers and the price of the item, those writing down high numbers consistently estimate prices 60 to 120 percent higher than those with low numbers.10
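
As a small illustration of the naïve model, consider the following sketch (with simulated data; the random-walk series and the 5-period moving average are assumptions made purely for illustration). It compares forecasting each value with the previous value against forecasting it with an average of recent values.

# Illustrative sketch: for a random-walk-like series, the "naive" forecast
# (use the last observed value) is hard to beat.
import numpy as np

rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(0, 1, 1000))        # simulated random walk

actual = y[1:]
naive_forecast = y[:-1]                      # predict each value with the previous one

# Alternative: forecast each value with the average of the previous 5 values.
ma_forecast = np.convolve(y, np.ones(5) / 5, mode="valid")[:-1]

mae_naive = np.mean(np.abs(actual - naive_forecast))
mae_ma = np.mean(np.abs(y[5:] - ma_forecast))

print(f"mean absolute error, naive forecast:          {mae_naive:.3f}")
print(f"mean absolute error, 5-period moving average: {mae_ma:.3f}")

For this kind of series, the naïve forecast typically wins—anchoring on the most recent value is often a sensible heuristic, right up until the anchor itself is wrong or random.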

Pattern-Recognition Biases

Pattern-recognition biases deal with a very vexing problem for our cognition: much of our sensory perception is incomplete, and there is a lot of noise in what we perceive. Think of the last time you talked with someone—probably it was just a few minutes ago; maybe you spoke to the train conductor or the flight attendant if you're reading this book on the go. Think of a meaty, information-rich sentence the other person said in the middle of the conversation. Very possibly a part of the sentence was completely drowned out by a loud noise (e.g., another person's sneeze), several syllables might have been mumbled, and you may also have missed part of the sentence because you glanced at your phone. Did you ask the person to repeat the sentence? Or did you somehow still have a good idea of what the person said? Very often it's the latter—because of an amazing ability of our brain to "fill in the gaps." Our brains excel at very educated guessing—but sometimes these guesses are systematically wrong, and this is the realm of pattern-recognition biases.

Pattern-recognition biases are particularly relevant to this book because pattern recognition is essentially what algorithms do.

In order to solve the problem of making sense of noisy, incomplete data (be it visual or other sensory perception, or actual data such as a management information system report full of tables in small print), the brain needs to develop rules. Systematic errors (i.e., biases) occur if either the rules are wrong or a rule is wrongly applied.

The Texas Sharpshooter fallacy is an example of a flawed rule. Your brain sees rules (i.e., patterns) in the data where none exist. This might explain many superstitions. If three times in a row a salesperson closes a deal while wearing the red tie she got from her husband for her birthday, the brain might jump to the conclusion that it is a "lucky tie." Interestingly, the brain may not be wrong—it's possible that the color red has a psychological effect on buyers that does increase the odds of closing the deal—it's just that three closed deals is a statistically insignificant sample and way too little data to make any robust inference. This illustrates that the way nature approaches pattern recognition is heavily driven by a "better safe than sorry" mentality—how many times does the neighbor's dog have to bite you before you conclude that you had better not get anywhere close to this cute pooch? By the same token, the brain is hardwired to think that even if there is only a small chance that the red tie helps, why risk a big deal by not wearing it?
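
To put a number on how little evidence three deals provide, here is a back-of-the-envelope sketch (the 50% base closing rate is an assumption purely for illustration, and SciPy is used only for the formal test):

# Illustrative sketch: three wins in a row is weak evidence for a "lucky tie."
from scipy.stats import binomtest

base_rate = 0.5                 # assumed probability of closing any given deal
wins, deals = 3, 3              # the evidence behind the "lucky tie" belief

p_by_luck = base_rate ** deals  # chance of the streak under pure luck
test = binomtest(wins, deals, base_rate, alternative="greater")

print(f"P(3 wins in a row by pure luck) = {p_by_luck:.3f}")    # 0.125
print(f"one-sided p-value               = {test.pvalue:.3f}")  # 0.125, nowhere near 0.05

A 12.5% probability of seeing the streak by chance alone is far above any conventional significance threshold, so the "lucky tie" hypothesis rests on essentially no evidence.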

Confirmation bias can be an accomplice of the Texas Sharpshooter fallacy and is nature's way of being efficient in the recognition of patterns. Confirmation bias can be seen as a "hypothesis-driven" approach to collecting data points. It means that where the mind has a hypothesis (e.g., you already believe that buying this book was a great idea), you tend to single out new data that confirms your belief (e.g., you praise the five-star review of this book for its brilliant insights) and reject contradictory data (e.g., you label the author of the one-star review a fool—of course rightly so, may I hasten to say!). Underlying the confirmation bias seems to be nature's desire to get to a decision quickly and to reduce cognitive effort. Laboratory experiments have shown that participants are much more likely to read news articles that support their views than contradictory ones. You'll therefore encounter confirmation bias as a central foe in Chapter 11 about algorithmic bias in social media.

Confirmation bias can also shape how we process "noisy" information. Imagine the above-mentioned interaction with a flight attendant or train conductor. She asked about the book you were reading, and you proudly showed her the cover of this book. Just as she replied, a loud noise drowned out part of her sentence. There really is no way to tell whether she said "I loved that book!" or "I loathed that book!"—except that most likely you "heard" her say that she loved it. This is because your brain would of course have expected her to say so, and an inconclusive sound is automatically and subconsciously replaced with the expected content.

Stereotyping is an extension of the confirmation bias and an example of a bias where a rule is applied overly rigidly. First, imagine that you are in a swanky restaurant. The waiter has just brought the check to the table next to you, where a stately, senior, white man now pulls a black object from his pants. What do you think it is? You probably imagined a wallet. Now imagine a police car passing a visibly distressed woman lying on the side of the street. As the police car passes, the woman shouts "my purse, my purse!" and waves into the air. At this moment, the police officers become aware of a young black man nearby running towards a subway station. They immediately run after the man, shouting "Stop! Police!" and aim their guns at him. As the man reaches the steps of the subway station entrance, he pulls a black object out of his pocket. What is it? If you imagined a gun (not the wallet containing the man's subway pass, which he needs to produce quickly if he doesn't want to miss his train and hence arrive late for his piano lesson), then you fell victim to stereotyping. Based on the situation's context, your brain already has some expectations of what could reasonably happen next. A person in a restaurant who has just received the check is likely to pull a wallet, credit card, or bundle of banknotes from his pocket; a person who appears to have committed a robbery is likely to pull a knife, gun, or hand grenade from his pocket when trying to escape from the police. When all the brain knows is that a "black object" is pulled from the pocket, it "fills in the gaps" based on these stereotyped views of what a person in such a context is most likely to have in his pocket. The dilemma is that this guess might be wrong. It is quite clear that a police officer who shoots a suspect the moment a black object is drawn from the pocket is less likely to be shot, and hence more likely to survive, than a more careful and deliberate officer who doesn't pull the trigger unless the suspect has without any doubt pointed a gun at him—so evolution has not exactly been the greatest fan of due process. However, if the officer ends up shooting an innocent because the innocent looks like the "stereotypical" robber in the officer's mind, nature's trickery and decision bias have tragically claimed a life.

This example also points to the sobering fact that creating and using algorithms can sometimes require us to resolve grave ethical dilemmas—be it the denial of human rights to suspected terrorists, the decision to grant parole to a convicted criminal, or the programming of self-driving cars for a situation where a deadly collision with pedestrians is unavoidable but the car could decide which of several pedestrians to run over. Just as in a classic tragedy where the hero has to choose between two equally disastrous paths, algorithms sometimes must be predisposed to go in one direction or the other, and the "bias" we ultimately decide to embed in the algorithm reflects our best assessment of what the most ethical decision is in that case.

Interest Biases

Interest biases go beyond mere heuristic shortcuts. Where action-oriented, stability, and pattern-recognition biases simply aim at making the "correct" decision as accurately, quickly, and efficiently as possible, interest biases explicitly consider the question "What do I want?" Think for a moment of a so-so restaurant in your area where you would go only reluctantly. Now imagine that a friend asked you to go there for lunch tomorrow. What thoughts came to your mind? Were you quick to think "rather not—how about we go to …?" Now imagine that your credit card company has made you a fantastic offer: if you charge a meal for two at this so-so restaurant to your card by the end of the week, you get a $500 shopping voucher. Now imagine again that your friend asks you out for lunch—however, this time it is to a different restaurant that you generally like a lot. What is your reaction now? Can you suddenly sense a desire to go to the so-so restaurant instead?

If you listen very carefully to your thoughts, you might realize that your subconscious influences your thought process in quite subtle ways. In our exercise, you may have thought not just about how you would suggest a different restaurant to your friend; your subconscious might also have supplied several talking points to buttress your suggestion (i.e., additional talking points against or in favor of the so-so restaurant). If I task you with not liking a given restaurant, your mind is bound to retrieve in particular those attributes of the restaurant that you don't like. By contrast, if I task you with sensing a desire to go to the same restaurant, your mind is bound to retrieve its more attractive attributes.

The point is not only that you might rationally prefer the so-so restaurant if going there promises a windfall gain of $500, but that this explicit or hidden preference influences your assessment more broadly (think of it as a confirmation bias)—you might therefore seriously believe that the restaurant is a good choice for your friend as well (who sadly won't get a shopping voucher). Interest biases therefore can considerably muddy the waters—rather than objectively stating that an option would be good for you but bad for everyone else, your own mind might wrongly convince you that your preferred option is objectively superior for every single stakeholder.

One can observe such behavior frequently in corporate settings where certain decisions have very personal implications. Imagine that your company is considering moving its offices to the opposite side of town. How do you feel about it? Do you think there is any correlation between your overall assessment of the move and whether you personally would prefer the office to be on this or the other side of town? Interest biases therefore greatly influence the behavior of data scientists as well as users of algorithms.

Social Biases

Social biases are arguably just a subcategory of interest biases—but they are so important that I believe it is justified to call them out as a separate group. Humans are social beings and, except for a handful of successful hermits, unable to survive outside of their group. Put bluntly, if a caveman annoyed his fellow cavemen so badly that they expelled him from the cave, he would soon be eaten by a wild animal. Humans therefore have an essential fear of ostracism that in essence is a fear of death—in fact, social exclusion causes pain (as a warning signal of imminent harm) that is stronger than most forms of physical pain (which also explains why increasing loneliness is a public health crisis and why the UK appointed a "minister of loneliness").

In decision-making, the mind therefore weighs the benefits of any possible action against the risk of being ostracized. Sunflower management is the bias to agree with one’s boss; groupthink is the bias to agree with the consensus of the group. Members of committees tasked with important decisions frequently admit that they have supported a decision they personally deemed gravely wrong, maybe even disastrous (think of M&A decisions that destroyed otherwise healthy companies) because they deemed it socially unacceptable to veto it.

Like other interest biases, social biases work on two levels. Frequently enough, people know perfectly well what the correct answer is and still go with an alternative decision because they believe it would be suicidal to speak the truth; but social biases also affect cognitive processes more broadly—for example, they may trigger a confirmation bias where committees strive to find numerous pieces of evidence supporting the chairman's views while subconsciously rejecting evidence that he may be wrong.

Specific Decisions vs. the Big Picture

Interest and social biases illustrate that nature often looks at a bigger picture and not just a single decision. This observation has also been used as a criticism of the very idea of cognitive bias—the argument goes that the concept of a bias is an illusion because we look only at one particular outcome of one particular decision (e.g., whether in a particular experiment the participant correctly estimated the number we asked for), whereas nature looks at a much bigger picture, in the context of which an individual's behavior and decision-making is perfectly rational and correct.

We don't need to get lost in debating the merit of every single cognitive effect described as a bias in the psychological literature; what matters is the recognition that "truth" (or, in statistical modeling, the definition of which outcomes count as a 1 rather than a 0) often is a surprisingly context-driven concept, and that where a decision appears biased it may simply reflect a compromise between competing objectives—be it a consideration of the importance of speed and efficiency, or a recognition that a single decision might be a negligible element in a much larger and more complex problem such as maintaining good social relationships.

This realization is important for the detection, management, and avoidance of bias. As you will see in subsequent chapters, not every time a predictive outcome (and by implication the responsible underlying algorithm) appears biased does this constitute a "problem" that needs to be fixed. At the same time, behaviors leading to (wrongly) biased outcomes still might be justified or at least rational, and an effective way to contain such a bias will therefore have to work around such behavior rather than expecting the behavior itself to change. And whenever a data scientist takes an action to explicitly eliminate a particular bias from an algorithm, he or she may have to assess whether doing so causes a different problem in the system at large, and whether the intervention is therefore warranted and really in the best interest of the stakeholders supposedly benefiting from debiasing the algorithm.

Summary

In this chapter, you saw how biases originate in the need for speed and efficiency in decision-making, as well as in personal interests—including the need for social inclusion—that sometimes trump the need for accuracy. These biases shape many phenomena that algorithms describe and predict; they also shape the behavior of the humans who create, use, or regulate algorithms. As you will see in subsequent chapters, the techniques used to develop algorithms can eliminate some of these biases, while other biases are bound to be mirrored by algorithms. Finally, this overview of human cognitive biases will also be useful in understanding other types of biases that are specific to algorithms due to the way they are developed and deployed.

The most important biases for our purposes are:
  • Action-oriented biases drive us to speedy action by focusing our attention and deflecting the procrastination that self-doubt can cause.

  • An availability bias is a particular action-oriented bias that lets specific data points—especially bizarre or otherwise noteworthy ones—inordinately shape our predictions.

  • Overoptimism and overconfidence are specific action-oriented biases that can cause developers and users of algorithms to disregard dangers and limitations and to overestimate the predictive power of their algorithms.

  • Stability biases minimize cognitive and physical efforts by gluing us to the status quo.

  • Anchoring is a specific stability bias that can compromise estimates by rooting them in completely random or seriously flawed reference points.

  • Pattern-recognition biases lead to flawed predictions by either forming decision rules from random patterns or by applying sensible rules inappropriately.

  • Confirmation bias is a particular pattern-recognition bias that compromises the data we consider when developing pattern-based decision rules.

  • Interest biases compromise objective judgment by blending it with our self-interest.

  • Social biases are a particular type of interest biases that focus specifically on safeguarding our standing within our social environment.

We now turn to algorithms and start unraveling the many ways they are affected by and interact with these biases. As a first step, we need to disentangle statistical algorithms in general from a particular type of algorithm, namely those developed through machine learning.
