6

Reductionist Research

Don’t be afraid to take a big step. You can’t cross a chasm in two small jumps.

—DAVID LLOYD GEORGE

So far we’ve looked at how the scientific and governmental understanding of nutrition is firmly rooted in the reductionist paradigm, and how that affects the way the public views nutrition. We’ve also seen how, when you look at it carefully, nutrition is a wholistic phenomenon that can never be fully comprehended within a reductionist framework. It’s too complex, with too many variables.

In this chapter I’d like to look a little closer at the differences between reductionist and wholistic scientific research, to show the various ways that the reductionist worldview inevitably fails us when it tries to comprehend and manipulate the amazingly complex system that is the human body.

REDUCTIONIST SCIENCE AND CAUSALITY

As we saw in chapter five, reductionism treats science like a math equation. It searches for cause and effect, and the more focused that search, the better. The holy grail of research is the ability to state with confidence that A causes B. Once you know this, if you want to reduce or eliminate B (liver cancer, for example), you simply look for ways to reduce or eliminate A (say, aflatoxin) or to block the process by which A causes B.

Baked into reductionist science is the assumption that the world operates in a linear way—that it operates on simple causality. What exactly do I mean by this? The classic conditions for proving that A causes B are three-fold:

   1.  A always precedes B.

   2.  B always follows A.

   3.  There is no C that could also cause B.

Not much wiggle room there. Certainly no room for messy, unpredictable, and complex interactions. No room for acknowledging systems that are too complicated to map out. No room for uncertainty of any kind. That’s why tobacco companies were able to get scientists to say that smoking doesn’t cause lung cancer: not all smokers develop lung cancer and not all lung cancers are attributable to smoking. In a reductionist universe, the statement “Smoking doesn’t cause lung cancer” is perfectly accurate. But it’s woefully inadequate when it comes to the practical issue of understanding the profound effect of tobacco on lung cancer, thus convincing people to stop smoking.

In the simple-causality reductionist view, the universe, ultimately, is as mechanical as a clock. Some reductionist philosophers of science have gone so far as to claim there’s no such thing as free will, since our very thoughts, emotions, and impulses are simply the result of chemical reactions that themselves were triggered by other chemical reactions, going back to the Big Bang itself.

As psychologist Abraham Maslow wisely observed, “If you only have a hammer, you tend to see every problem as a nail.” And if your only way of seeing assumes that the world operates on simple causality, you’ll see simple causality everywhere, even where it doesn’t exist; we see the world, not as it is, but as we expect it to be. Reductionist research naturally produces reductionist findings. It can be no other way. The flip side is also true: since reductionist research assumes that simple causality is the way the world works, if we can’t find simple causality in our research subject it just means we must not be looking at it the right way, or we don’t have sufficient observational or computing power to reveal it. The only way to see the miraculous complexity of nature is to allow ourselves to do so.

But looking for complexity is a much harder task. Single-factor causality is much easier to measure, and gives much more satisfying (if ineffective) answers, since no matter how complex the system and its interactions are in reality, a good reductionist scientist still assumes that just one factor among the hundreds, thousands, or billions in the system is necessary and sufficient to cause the end result under study. Smokers get more cancer? That proves nothing to reductionists until you can isolate the single chemical in the cigarette that invariably causes cancer. When the effects of smoking are mitigated by lifestyle, nutrition, or whether the cigarette is a pleasurable interlude or a guilt-raising addiction, reductionist research must steadfastly ignore these complexities.

In one way, though, looking for complexity is actually easier than seeking rigid causality. Reductionism may work from simple models of causation, but those models often provide unexpected and unexplained findings, eventually suggesting complex and confusing (and sometimes totally implausible) solutions. Wholism, on the other hand, presumes complex models of causation in a way that suggests simple solutions. (You can’t get much simpler than, “Solve most of our health problems by eating more whole, plant-based foods”!)

In other words, reductionist research often requires the invention of new complexities—especially more complicated methods of study and explanation. There’s an old joke about a dairy farmer who could not get his cows to produce enough milk. He asked the local university for advice, and they sent a team of professors, headed by a theoretical physicist. After weeks of intensive study, the team returned to the university, where they pondered potential solutions. Finally the physicist returned to the farm with an answer to the production issue. But he prefaced his presentation with a caveat: “This solution assumes spherical cows in a vacuum.” The physicist’s work, like that of reductionist nutritionists, is a whole lot of academic labor for a solution that doesn’t work in the real world. (No wonder one definition of the word academic is “moot”!)

Because I grew up on a real dairy farm, the study of spherical cows in a vacuum never occurred to me. When I entered academia, I tried to embrace the staggering complexity of biochemistry as the point and the challenge of my research. What could possibly be gained by trying to simplify it just to fit a theoretical framework?

I don’t want you to think that all of science is mired in reductionism. Particle physics, for example, chased and ultimately abandoned the reductionist dream of finding the “monad,” the elementary particle that could not be divided into anything smaller.

First physicists discovered atoms. Then the big subatomic particles that we learned about in school: protons, electrons, and neutrons. Then things started getting weird. Neutrinos, quarks, muons, bosons, fermions—each was anointed the elementary particle until theory or observation pointed toward yet another division. The closer the physicists looked, the more solid matter revealed itself to be mostly empty space with a tiny particle at its core. Now cutting-edge physicists see matter as simply a dense form of energy. It’s no accident that the recently discovered Higgs boson is nicknamed the “God particle.” Particle physicists realize that a comprehensive wholism underpins even the most reductionist mode of observation.

Many physicists point out in wonder the self-similarity between atoms, cells, planets, galaxies, and the universe as a whole (self-similarity among different levels is one of the hallmarks of a wholistic system). And the emergence of quantum theory in the twentieth century dealt a body blow to the reductionist paradigm by inserting uncertainty into what were supposed to be purely mechanical events. Theoretical physicist and popular author Stephen Hawking has written about subatomic particles that travel backward in time. The effect, known as retrocausality, suggests that certain effects can precede their causes. Talk about putting a nail in the coffin of cause-and-effect reductionism!

Yet many scientists still operate with both feet firmly planted in a seventeenth-century Newtonian universe—especially the ones (like nutritional scientists) responsible for studying human health and disease.

HOW DO WE KNOW WHAT WE KNOW?

Scientists can argue philosophy all day long, but what really counts is evidence. This raises the question: What counts as evidence? What ways of looking for answers are considered good or bad science? Which methods are appropriate for what subjects of exploration?

The answers to these questions are themselves quite subjective, even if science believes itself to be an objective, value-free pursuit. They depend heavily on the questions being asked, and also on how the answers are sought. Epidemiologists, those scientists who study the causes of human health and disease, refer more formally to the ways we explore scientific questions as “study designs.” Study designs fall along a continuum from highly wholistic to deeply reductionist. Let’s look at a few points along that continuum, at the types of evidence each design collects, and at how each shapes the conclusions we draw from the resulting research—especially when it comes to nutrition.

Wholistic Evidence Source #1: Ecological (or Observational) Research

One way to identify the optimal human diet, pretty obvious to all but fundamentalist reductionists, is to survey and compare populations as they already exist, and see what they eat and how healthy they are. Epidemiologists refer to this kind of study as ecological or observational. Its main characteristics include observation without intervention and looking at certain observable facts, like food intake and rates of disease, without trying to prove that one caused the other. Instead, researchers simply record the diet and disease characteristics of the populations as they are. If an ecological survey looks at those diet and disease rates in a group of people at more or less the same time, like a snapshot, it is called cross-sectional. The population under study can range in size from a small community of a few hundred people to a large country.

The results that ecological studies produce show associations between variables rather than proof that a particular input caused a particular output. These associations are often presented as correlations between input and output, the biological relevance and probable significance of which are determined statistically. Hence a study like this is also known as correlational.
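What such a correlation looks like in practice can be sketched with invented numbers. In this toy example (every figure here is mine, made up purely for illustration), hypothetical per-region averages of animal-protein intake are paired with hypothetical disease-mortality figures, and a Pearson correlation coefficient summarizes the strength of the association:

```python
# Toy ecological data: one pair of population AVERAGES per region
# (average animal-protein intake in g/day, deaths per 100,000).
# All numbers are invented for illustration.
regions = [(20, 40), (35, 55), (50, 70), (65, 80), (80, 95), (95, 110)]

def pearson_r(pairs):
    """Pearson correlation coefficient between the two columns of `pairs`."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(regions)  # close to 1.0: a strong positive association
```

A coefficient near 1.0 signals a strong association across regions—but because each data point is a population average, the result says nothing, by itself, about cause and effect for any individual.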

Since the data collected in these studies are averages for entire populations, it is not possible to conclude causality for individuals. If we try to read causality into the data, we make a mistake known as an ecological fallacy. We might observe for various populations, for example, that a higher concentration of cars, indicative of a richer society, is correlated with a higher risk of breast cancer, also present in richer societies. It doesn’t make sense to conclude that cars cause breast cancer, or to tell women fearful of breast cancer to avoid driving cars. Instead, it suggests that the two have something in common that warrants further study; the strength of an ecological study is its ability to highlight significant patterns and to compare the relative successes of different lifestyles. But because conclusions about specific causes cannot be made in this type of study, it is considered by reductionists to be a weak study design.

Our project in China (the main study highlighted in The China Study) was just such a cross-sectional, ecological study design. Using various kinds of evidence, we found that the higher the consumption of animal products in different regions of China, the greater the incidence of and mortality from a whole host of diseases, including various types of cancer, heart disease, stroke, and many others. Yet critics trumpeted that we could not claim that a plant-based diet had any effect on lowering disease rates based on that correlation, because our study design was not discriminating enough to make such a claim.

They’re right in one way, but they’re wrong in another. According to reductionist philosophy, it’s technically correct to say that we cannot claim that a WFPB diet reduces disease risk, any more than we could say that driving cars causes breast cancer. But on close examination, the analogy breaks down. We weren’t comparing one input (driving) with one output (breast cancer). Rather, we were looking at nutrition, which as we’ve seen is a staggeringly complex set of processes and interactions. There’s really no meaningful way to reduce nutrition to a single input. I constructed the China project on the hypothesis that the effects of nutrition on health are wholistic, not reductionist. In other words, I wasn’t interested in whether more vitamin C prevents the common cold; I wanted to determine, from a wholistic perspective, whether a particular diet led to markedly better health outcomes than other diets. One way to do that was to study the people in an entire ecosystem—the rural population of China—who ate in a way markedly different from populations in the West. Using the rural population of China allowed us to consider a large-enough number and variety of lifestyle factors and health and disease conditions to see the big picture—the elephant, not just the trunk or tusk. We were able to investigate hypotheses that certain groups of foods are associated with certain diseases that share similar biochemical bases. That then let us assess whether there was something about those groups of foods that might be causing or preventing and remediating those diseases.

Wholistic Evidence Source #2: Biomimicry

Another wholistic way of gaining insight into our “ideal” diet is to look at our nearest animal relatives—gorillas and chimps—and see what they eat, a strategy known as biomimicry. Primates’ diets haven’t changed much in tens of thousands of years, unlike those of humans. So we would expect a primate’s instinctual food choices to produce sustainably healthy outcomes. As well, primates in the wild haven’t been influenced by fast food commercials and government propaganda, so perhaps their instincts are more trustworthy than ours. Furthermore, wild primates don’t take drugs or undergo surgeries to deal with the effects of poor diets, so if a group of primates did eat unhealthy food, they probably would become too sick and obese to survive and reproduce.

According to Janine Benyus, author of Biomimicry, early humans probably used this wholistic research strategy to determine which plants were safe and which were toxic. After all, it makes evolutionary sense to let someone else serve as your taster!

While not conclusive, animal observation can give us a starting point for our own dietary explorations. For example, just noticing that chimps and gorillas have strong bones and muscles while eating WFPB undercuts the notion that humans need lots of animal protein to grow and maintain muscle mass. And of course we can point to the largest land animals in the world, elephants and hippos, whose 100 percent plant-based diets don’t seem to render them weak or scrawny.

In short, biomimicry reframes the issue of nutrition as one in which humans are seen as one species among many. Observing animals that resemble us can provide insight into diet in a way that observing human eating habits, which have been affected by human technologies from agriculture to refrigeration to processing, can’t. It also identifies areas of current research where we may be wrong (i.e., by casting doubt) as well as suggesting areas of further reductionist inquiry.

Wholistic Evidence Source #3: Evolutionary Biology

A third wholistic approach is that of evolutionary biology, in which we examine our physiology and determine what our bodies have evolved to ingest and process. For example, we can look at the length of our digestive systems, the numbers and shape of our teeth, our upright postures, the shape of our jaws, and the pH of our stomachs, among many other characteristics, and compare those elements to known carnivores and herbivores. (We see, by the way, that we share almost all the characteristics of herbivores, and have almost nothing in common with carnivores.) By doing so, we can use reverse engineering to discover possibilities for the kinds of foods our bodies are “built” to eat.

Reductionist Study Evidence Type #1: Prospective Experiments

The best-regarded (and therefore best-funded and most common) form of reductionist study design is prospective, meaning that information is recorded in real time, and effects are observed as they occur. In its simplest form, one group of subjects (the experimental group) is given an intervention, while the other group (the control group) is not. The gold standard of reductionist research is a form of prospective experiment known as the randomized controlled trial. The “random” part of the study refers to the way subjects are assigned to either the experimental or control group. The theory here is that random assignment eliminates the effects of potentially confounding variables by evenly distributing them across all groups. If you’re worried about whether being a heavy smoker might influence the results of an intervention, random assignment uses the power of statistics to spread this variable evenly across groups, theoretically making it irrelevant.
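The balancing act of random assignment can be sketched in a few lines of code. This toy simulation (the numbers and the code are mine, not drawn from any actual trial) assigns 1,000 hypothetical subjects to two groups at random and checks how evenly a confounder such as heavy smoking spreads out:

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

# Hypothetical population: 1,000 subjects, roughly 30% of whom are heavy smokers.
subjects = [{"heavy_smoker": random.random() < 0.3} for _ in range(1000)]

# Random assignment: shuffle the pool, then split it down the middle.
random.shuffle(subjects)
experimental = subjects[:500]
control = subjects[500:]

def smoker_rate(group):
    """Fraction of a group who are heavy smokers."""
    return sum(s["heavy_smoker"] for s in group) / len(group)

# The two rates land close to each other (and to 30%), so smoking status
# is roughly balanced across groups without anyone having to measure it.
print(round(smoker_rate(experimental), 2), round(smoker_rate(control), 2))
```

Notice that the balancing happens for every characteristic at once—including ones the researchers never thought to record—which is exactly why randomization is prized.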

Randomized controlled trials often include a double-blind feature, wherein neither the researcher nor the subject knows whether the subject is receiving the intervention being tested. In a drug trial, for instance, neither would know whether the pill the subject is taking is the actual substance or a lookalike placebo. That way, patients don’t get better just because they think they’re taking a wonder pill,1 and researchers don’t subconsciously treat a placebo subject differently than a subject taking the active compound.

Prospective experiments are seen as a “clean” form of study design, because they nail down the details with more precision, and because they minimize the messiness and “noise” of the real world. This allows researchers to isolate the effects of the intervention in which they’re interested. This isolation of a single variable (X) supposedly gives the researcher the right to say, “X causes Y,” where Y is an outcome that occurs after X and does not occur when X is not present.

This is most useful in cases where it makes sense to isolate a single factor, as when we need to assess the safety and effectiveness of a new drug. But even in the case of drug tests, there’s an inherent trade-off between that kind of certainty within a controlled environment and its applicability in the messy, noisy real world. The more perfectly controlled the experiment, the less it resembles reality.

While studying specific chemicals in isolation yields tidy findings, these research methods cannot provide predictive models for complex interactions with multiple causes and effects—in other words, life.

Reductionist Study Evidence Type #2: Case-Control Study

Another commonly used research design, regarded as less discriminating by reductionist researchers than the prospective experiment, is the case-control study. The cases—individuals who, for example, have a disease—are compared with the controls—individuals of the same sex, age group, and so forth, who do not have the disease, as researchers look for lifestyle differences between the two groups that could have influenced their different outcomes. Case-control studies typically examine influences that cannot practically or ethically be imposed on people: diets, lifestyle practices, and exposure to toxins are common examples. You wouldn’t force half of the people in your study to eat all their meals at McDonald’s, for example, but you could find people who choose this diet on their own and see what happens to them.

Case-control studies can be retrospective, when researchers use previously recorded observations to explain disease outcomes. They can also be prospective, in which cohorts of subjects with different lifestyles and diets are followed to see what will happen to them. Either way, because subjects aren’t randomly assigned to these cohorts, it’s impossible to prove that the differences caused the outcomes. The problem is, people who are alike on one characteristic are probably alike on many others, and it’s impossible to tell which characteristic or characteristics were the active agents leading to the varying outcomes. So researchers typically resort to a family of statistical procedures, called “adjusting for confounding,” to make this problem go away.

Here’s how statistical adjustment for confounding works. Suppose you are studying the relationship between breast cancer and dietary fat. You start with two groups, one made up of women who have been diagnosed with breast cancer (the cases), and one made up of women who have not been diagnosed with breast cancer (the controls). You question them about their eating habits to figure out if the cases are eating more dietary fat than the controls. But there’s a problem: the women with breast cancer carry a higher percentage of their body weight as fat. Assuming that there is a relationship between dietary fat and body fat to begin with, what’s causing what here? Is the dietary fat causing the breast cancer? Or are the women more prone to obesity also more susceptible to breast cancer?

The more questions we allow ourselves to ask, and the more possible interactions we entertain, the further we plunge into a reductionist nightmare. Maybe these women with breast cancer and a higher percentage of body fat have a genetic predisposition both to obesity and to breast cancer, so therefore we may not have to worry about how much fat women without that same genetic predisposition consume. Maybe there’s some other variable that we haven’t even thought about; perhaps heavier women exercise less, or are more depressed because of societal prejudice, and that’s the factor that leads to breast cancer. Or maybe they’re heavier because they’re depressed, and tend to eat more and exercise less. Or maybe they’re heavier because they are less educated about healthy eating, which sometimes correlates with less access to healthcare, which correlates to low income, which correlates to less access to fresh produce, which correlates to living in neighborhoods with higher concentrations of environmental toxins.

To deal with this uncertainty, reductionists use statistics to mathematically “hold constant” all these potential sources of data pollution and make their effects magically disappear—that is, they compare, in effect, small segments of each group whose confounding variables are nearly the same. Of course, you can do this only to those confounding variables you’re able to think of and then measure in some way. No study has unlimited time or money, so there will always be potentially confounding variables that don’t get neutralized by the statistical magic wand.
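The stratification behind such adjustment can be made concrete with an invented example. In this toy data set (every number is mine, chosen purely for illustration), body fat drives both dietary-fat intake and cancer risk, while dietary fat itself has no direct effect at all:

```python
# Toy illustration of "adjusting for confounding" by stratification.
# Invented world: body fat drives BOTH dietary-fat intake and cancer risk;
# dietary fat itself has no direct effect.
women = []  # each entry: (body_fat_high, dietary_fat_high, cancer)

def add(count, body_fat_high, dietary_fat_high, cancer):
    women.extend([(body_fat_high, dietary_fat_high, cancer)] * count)

# 5,000 women with high body fat: most eat high-fat diets; 10% get cancer.
add(350, True, True, True)
add(3150, True, True, False)
add(150, True, False, True)
add(1350, True, False, False)
# 5,000 women with low body fat: most eat low-fat diets; 5% get cancer.
add(75, False, True, True)
add(1425, False, True, False)
add(175, False, False, True)
add(3325, False, False, False)

def cancer_rate(rows):
    return sum(1 for _, _, c in rows if c) / len(rows)

# Crude comparison: high- vs. low-dietary-fat women, ignoring body fat.
high_fat_diet = [w for w in women if w[1]]
low_fat_diet = [w for w in women if not w[1]]
crude_gap = cancer_rate(high_fat_diet) - cancer_rate(low_fat_diet)  # 0.02

# "Adjusted" comparison: the same contrast within each body-fat stratum,
# where the confounder is held (nearly) constant.
gaps = []
for stratum in (True, False):
    rows = [w for w in women if w[0] == stratum]
    gap = (cancer_rate([w for w in rows if w[1]])
           - cancer_rate([w for w in rows if not w[1]]))
    gaps.append(gap)  # 0.0 in both strata
```

The crude comparison shows a two-percentage-point cancer gap between high- and low-fat eaters; within each body-fat stratum the gap vanishes. The crude association was carried entirely by the confounder—and, as the text notes, this trick works only for confounders you thought to measure.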

But the more we scientists try to disentangle the web of influences around a specific health outcome, the less useful the “results” of a study become. Suppose, in the breast cancer example, we “adjust” for every other influence we can think of, so that the only two variables that remain are rates of breast cancer and obesity. If we then say that obese women seem to get more breast cancer, the prescription to prevent breast cancer immediately collapses into “lose weight.” Any method that purports to take off the pounds then becomes a form of breast cancer prevention. Meal-replacement shakes, low-carb regimens, lemon juice fasts, and all manner of craziness would now be tied to a healthy outcome, regardless of the actual mechanism of the relationship between obesity and breast cancer. Suppose that increased rates of breast cancer and obesity are both functions of highly processed diets with lots of animal products and not enough whole-plant products. For many women who follow this weight-loss regimen, the “get thin by any means to prevent breast cancer” message could translate into diet choices that would increase, not decrease, their cancer risk.

It’s as if you noticed that happy people tend to smile more than unhappy people, so you invented a device that stretched the human face into a smile as a cure for depression. Yes, the smile is a good marker for happiness. Yes, there’s a correlation between smiling and happiness. Yes, it’s possible that reminding yourself to smile more can affect your mood. But isolating the smile and ignoring all other factors that might contribute to happiness and depression is patently ridiculous.

Think these examples sound unbelievable? We’ll talk more in chapter eleven about a real-world consequence of this kind of narrowly reductionist research when we look at the hype surrounding dietary supplements. In this hype, researchers have used statistical adjustment to conclude that certain nutrients are not just markers of good health, but the cause of it, ignoring clusters of factors surrounding those nutrients as if they didn’t matter or even exist. The result of this miscalculation isn’t merely a waste of vitamin-takers’ money; in some cases, the outcomes have been serious illness and even premature death.

WHOLISTIC VERSUS REDUCTIONIST RESEARCH

The reason wholistic ways of exploring reality come under fire from many contemporary scientists is that they all smack of fuzziness, of imprecision. They don’t narrow cause and effect to the point where everything is airtight, completely repeatable, and measurable to the fifth decimal place, the way reductionist experimental design does.

Reductionism by definition seeks to eliminate all “confounding” factors: any variables that might influence the outcome in addition to the main substance under investigation. But because nutrition is a wholistic phenomenon, it simply doesn’t make any sense to study it as if it were a single variable. Studying nutrition as if it were a single-function pill disregards its complex interactions.

The whole point of wholism is that you can’t tease out one contribution and ignore the rest. Of course body fat, dietary fat, education level, depression, socioeconomic standing, and so many more characteristics are interrelated and interactive with one another and with our bodies’ systems. While statistical adjustments can pretend to wrap up reality into neat little packages, they don’t explain the underlying reality at all.

You can’t study wholistic phenomena solely through reductionist modes of inquiry without sacrificing reality and truth in the process.

A NEW NUTRITIONAL RESEARCH PARADIGM

At its best, epidemiology draws conclusions from many different types of study design, just as the proverbial blind scholars pool their findings to increase their understanding of the whole elephant. Sadly, however, only reductionist studies are taken seriously and funded generously, so much so that the entire field of epidemiology is substantially biased in favor of reductionist philosophy. You wouldn’t give an electron microscope to someone studying elephants and expect them to tell you anything about the animals’ personalities or social structures. The only way to find wholistic answers is to allow for the possibility of seeing them.

Reductionist critics argue that the China Study was experimentally weak because it didn’t prove independent effects of single agents or show results applicable for individual people. As I hope I’ve shown in this chapter, this criticism is misguided. We don’t need to know the effects of single agents on health, because this is not the way that nature works. Nutrition has a wholistic effect on health; one that we consistently miss and misinterpret when we focus on isolated nutrients. Our project in China, when evaluated from a wholistic perspective as intended by the study’s design, provided unique evidence on cause-and-effect relationships between diet and disease through highly significant patterns of association between food consumption and health outcomes.

For drug trials, the most informative study is the randomized controlled trial. But for nutrition, the most informative study design is the wholistic study: one that allows us to see how unimaginably complex interactions can be influenced, and how radiant health can be achieved through simple dietary choices.
