1
Before We Cured Scurvy

What do we know about a person? If you asked Hippocrates, he might not have much to say. Hot or cold. Big or small. Dead or alive. Ask a physician today, and the answer is much more complex. There are thousands of medical tests we can run on a person, inside and out: blood chemistry, urinalysis, X-rays, Doppler ultrasounds, and more. We can track these results over time in various systems, or look up reference information online with powerful tools like Epocrates, a medical reference app. We can sequence the genome. Or we can count how many steps someone takes in a day.

Categorizing all of these observations about a person is important as we think about them as inputs to patient equations. Whether ancient or modern, these observations come with different levels of reliability and resolution. For example, movement and mood have been observed by physicians for centuries, but we can now check them digitally, reliably, and automatically—without the biases or endurance limitations of a human observer. Hippocrates could certainly count steps—but nowhere near the way a fitness tracker can.

A useful first step in our categorization comes from what most people learned in high school biology: the difference between genotype and phenotype. Before Gregor Mendel's experiments with the physical attributes of peas in the 1800s, we had little knowledge about inheritance from a medical perspective. And until James Watson and Francis Crick's famous work with DNA less than a hundred years ago, we had no notion of the mechanisms by which our genetic makeup was stored and transmitted to subsequent generations. Our genome is incredibly important in determining our health—but it is merely a starting point.

Phenotype, on the other hand, includes every observable aspect of ourselves that is not encoded in our DNA. Everything about us and how we exist in the world is phenotype: our hair color, eye color, height, weight, and so much more. The observation of phenotypes begins well before the days of Hippocrates. Imagine an ancient doctor simply using a hand to determine if a person had a fever. Or, not even a doctor—we should instead use the term “healer” in that example, since people were likely checking for fevers long before any notion of the structured discipline of medicine.

Of course, this technique continues today. Imagine a parent touching a child to check for the same. These kinds of observations certainly go under the heading of phenotype. But even what goes on in our heads—our cognition—and how those thoughts manifest in what we do every day—our behavior: it's all phenotype.

Over time, the precision with which phenotypes can be measured has continued to evolve. The hand, to start, was replaced by a thermometer to check for a fever. A modern mercury or alcohol-based thermometer can be read to a tenth of a degree of precision. 37.0° Celsius is the widely accepted average “normal” value of a healthy person's temperature. On a modern analog thermometer, that is distinguishable from 37.1° or 36.9°. A digital thermometer might be even more precise, perhaps to the level of hundredths or even thousandths of a degree.

These digital readings show a greater resolution—which is another useful dimension that we can use to categorize phenotypes. An inexperienced hand might be able to distinguish between two states: fever and no fever. For those familiar with the language of computers, we can represent this in binary as a zero or a one. Perhaps a more experienced nurse, physician, or mom can distinguish between a low fever and a high fever. Add hypothermia (the body becoming too cold for normal functioning) and we've got four possible outcomes of the measurement. The computer-literate will realize that this is now not one binary bit, but two digits, each a zero or one. If we want to know if a patient is recovering from a fever (or hypothermia), we probably need to grab that liquid thermometer and measure the temperature more precisely, so that we can see the value change over time.
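For the computationally inclined, we can make the arithmetic of resolution concrete. Below is a minimal Python sketch; the state counts and the 35.0–42.0°C range are illustrative assumptions, not clinical standards.

```python
import math

def bits_needed(num_states: int) -> int:
    """Minimum number of bits required to distinguish num_states outcomes."""
    return math.ceil(math.log2(num_states))

print(bits_needed(2))     # fever vs. no fever -> 1 bit
print(bits_needed(4))     # hypothermia, normal, low fever, high fever -> 2 bits

# An analog thermometer read to 0.1 degree C over a hypothetical 35.0-42.0 C
# range distinguishes 71 values -> 7 bits; a digital one reading to 0.001 C
# over the same range distinguishes 7,001 values -> 13 bits.
print(bits_needed(71))
print(bits_needed(7001))
```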

As we look at more complex problems in disease diagnosis, or, for instance, predicting fertility, we may indeed need the digital version. As we take these more-and-more-precise measurements (and need more and more computer bits to store them), you can start to see how the convergence of biology and digital technologies is inextricably linked to the resolution at which we measure phenotype.

Nanometers to Megameters

Beyond resolution or precision, we can think of the available knowledge about a person in terms of scale. Starting small: individual atoms combine to form molecules that define the tiniest end of our scale, at least when it comes to our current knowledge about how to observe our state of health. (A keen futurist—or a particle physicist—might predict that future editions of this book will reflect not-yet-uncovered findings about subatomic interactions being relevant to predicting or managing our health. But, for now, the atom is as small as it gets.)

Let's begin with our DNA, at a couple of nanometers in size, as the starting point. When our genes are turned on—activated as a first step in a cascade of observable phenotypes—they are transcribed to RNA. We're still talking nanometers. Ultimately, those genes produce proteins, protein complexes, organelles (just as our body has organs, so do the cells that make it up), and we reach the next milestone of scale: a cell, at tens of micrometers in size. Figure 1.1 illustrates this continuing progression of phenotypic scale.

Figure 1.1 A multiscale view of health, depicting the continuing progression of molecular, physiological, cognitive, and behavioral phenotypes

Our organs, in centimeters, are next. And if we look at the ways phenotype has been measured over time, the organs were the smallest level at which we could observe for many, many generations. The Greek anatomist Herophilus, around the year 300 BC, is said to be the first to systematically dissect and start to understand the human body.1 He described the cardiovascular system, the digestive system, the reproductive system, and more.

Perhaps embarrassingly, more than 2,000 years later, Herophilus's work still dictates much of how we divide up medical specialties. Doctors are trained in and specialize in the brain, the heart, the liver, and more—disciplines in medicine are largely still organ-based. But as more and more impactful observations and medical interventions happen at smaller and smaller scales, the need for specialization at those smaller dimensions will become obvious. It's not that one scale is more important than the others. Of course the brain and all of its complexities merit their own field of study. But as we look at cancers, and at how interactions at nanometer and micrometer scales determine which treatments will be most beneficial for different patients, specialization in molecules, in pathways, and in fields that allow us to recognize that cancer isn't one disease but many will all be critical.

Professor Paul Herrling—who, among several distinguished positions in academia and industry, was head of research at Novartis Pharma AG and a scientific advisor to Medidata—once told me that evolution is the ally of the drug discoverer. He was referring to the fact that once molecular mechanisms emerge in our bodies through evolutionary processes, they are reused, sometimes over and over again.2 They will perform the same—as well as sometimes different—functions in different types of cells and in different organs. This is a fact that life scientists ought to keep in mind: a drug that is useful for one particular purpose in treating a specific disease probably has other uses, in other diseases.

Imagine having no tools and deciding you need to tighten a particular bolt on a specific model of refrigerator (a somewhat ludicrous analogy, but I think also a useful one). You end up designing something to perform that function—much like creating a drug to treat a particular kind of cancer in a particular organ. Depending on the size of the bolt, the tool you create may well end up being able to tighten (and loosen) lots of other bolts as well, on lots of different models of refrigerators—not to mention on lots of other things too. Similarly, if that cancer treatment works in one specific instance, it may have the potential to be used in other cancers, as well as for noncancerous conditions.

As we move up the scale to our bodies, in meters, we realize that much of what we can see now has been detectable since the beginning of mankind—our moods are often quite obvious, our knowledge can be tested, our movements tracked—but not truly measurable in the way it is today. Going even bigger—if we start to not just count our steps but observe how our cognition drives the behavior of where we go and what we do in the world—we reach the scale of kilometers. Sometimes by the hundreds or thousands. Scaling up, sometimes what we think about or what we do can affect entire societies, entire countries, or the whole world.

We need to be open to these different levels of observation, these different scales. We need to look smaller and larger than the organ-based classifications modern medicine has often settled at. Joel Dudley, executive vice president for precision health for the Mount Sinai Health System as well as director of the Institute for Next Generation Healthcare and associate professor of genetics and genomic sciences at the Icahn School of Medicine at Mount Sinai, spoke about this at a recent Medidata event, explaining that humans are complex adaptive systems and we simply can't understand the entire person by looking at the individual parts.3

Organizing our study by symptoms and anatomy, he said, is like learning about the world from shadows. It is critical that we redefine our understanding of human disease with data—seeing clearly the overlaps between, say, brain disease and skin disease. Our assumptions about the relationships between systems in our body, and therefore the relationships between diseases, are outdated and incorrect, Dudley insists. We haven't even begun to define what health actually is, he says. Today, health is crudely defined as the absence of our flawed concepts of disease. But the remainder of what it truly means to be healthy is still to be fully figured out.

If we think about our journey since Herophilus, it is indeed only a few hundred years ago that we started to be able to look at things at a cellular level at all, with the invention of the microscope and the discovery of these tiny building blocks inside of us. And about halfway along the road between Anton van Leeuwenhoek viewing the first live cells in the late 1600s and the development of modern cell theory in 1839 (and the realization that everything in our body is made of cells), the world of clinical trials began.4 It is there that we could truly start building objective knowledge about how our bodies work.

Scurvy

James Lind, a surgeon in the British Navy in 1747, saw seaman after seaman dying of scurvy (on one voyage in the year 1740, almost three-quarters of the 1,900 sailors succumbed), and decided to test six potential remedies for the condition.5 He gave a different concoction to each of six pairs of sick seamen—vinegar, cider, mustard and garlic, seawater, sulfuric acid, and, to the final pair, two oranges and a lemon.6 The citrus eaters were the only ones to recover.7 Lind wrote up his findings, and they have stood the test of time as the first reported controlled clinical trial. (Although, interestingly, Lind misinterpreted his own results, believing there was no one cure for scurvy, and that the problem was a combination of environment and diet. It would take another 50 years before citrus was routinely given to sailors and the problem of scurvy on the high seas was eradicated, at least when the right fruits were available.)

The relevance here is that just as we have been learning more about the human body throughout history, we have also been learning more about how to test our hypotheses about the human body, how to develop treatments that work, and how to do good science. James Lind began with the null hypothesis: the assumption that none of what he was giving to the seamen was going to change their course of disease. And his experiment proved the null hypothesis wrong.

This is the most fundamental principle when designing a good scientific experiment. It is what we need to do today with patient equations. The null hypothesis tells us to start off with the assumption that there is no statistical significance to what we are testing. It asks us to assume that taking multiple observations, inclusive of genotype and varied resolutions of phenotype, and combining them to predict the onset of disease, the effective treatment, or any useful preventive courses simply won't give us any meaningful information. Then, like Lind, we need to prove that null hypothesis wrong. In doing so, we can prove the utility of our patient equations, and establish their worthiness in the future state of medicine.
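To make that concrete, here is a minimal sketch of how we might test the null hypothesis against results like Lind's today, using a Fisher's exact test in Python. (The test itself postdates Lind by nearly two centuries, and the 2×2 table below is a simplification of his six-arm trial, pooling the five non-citrus arms into one comparison group.)

```python
from scipy.stats import fisher_exact

# Rows: citrus arm vs. all other arms combined; columns: recovered vs. not.
# Simplified from Lind's data: both citrus eaters recovered; the other ten
# sailors, across five arms, did not.
table = [[2, 0],
         [0, 10]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.4f}")  # ~0.015: unlikely if the null hypothesis were true
```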

In this chapter and the ones that follow, I'll talk about all kinds of possible new data sources, all kinds of bits and pieces that we've been ignoring—not on purpose, but largely because there has simply been no way to measure them, at least not consistently or rigorously enough to involve them in the good science we've been trying to do. Then, starting with the null hypothesis, the game is figuring out what is in fact additive in value. We need to determine which newly measurable phenotypes, in what combinations with traditional measurements of phenotype and genotype, will be truly useful to our understanding of disease and show themselves to matter.

Our understanding of the human body has grown so much since the time of James Lind—but I should note that our clinical trials largely haven't. We haven't had the infrastructure, the connectivity, or the information to enable us to think differently about the way we do research. Now we do. Now we can learn so much more about a person, across so many more dimensions. The magic is figuring out which of those dimensions matter, and how. But back to history …

The False Promise of Genotype

Okay, that's a deliberately misleading section title. In 1953, Watson and Crick discovered the structure of DNA and launched the modern era of genetics. It was a monumental discovery, and it will continue to propel our ability to predict and treat an incredible number of conditions and diseases. But I think we often err on the side of thinking that genotype is the most important piece of knowledge we can have about a person. Twenty years ago, when sequencing the human genome started to look like something that could plausibly be done at a mass scale, it was far too easy to imagine that we'd understand the nature of and be able to cure virtually every disease. It would all be there in those nucleobases of our DNA—the adenines, cytosines, guanines, and thymines, the As, Cs, Gs, and Ts you might remember from high school. The only thing we thought we needed to do was decode it, and a future of longevity and robust health would be upon us.

The 1997 film Gattaca put onscreen the kind of genetic-determinist thinking that was emerging in society.8 In the movie, there are “valids,” who have been genetically engineered to perfection, and “in-valids,” whose genetic makeup has been left to chance. Valids are the privileged class; in-valids are left behind, denied opportunities, locked out of the best schools and jobs, and assumed to be inferior in just about every way. As the movie concludes, as one might expect from a Hollywood ending, the in-valid proves to be the better man. But it's not just a Hollywood ending—it's a very real illustration that genetics only gets you so far. The hero's drive—his cognition, and the behaviors it leads to—proves over the course of his life to be far more important than the makeup of his DNA at birth when it comes to his ability to become an astronaut.

To be explicitly clear, genotype is spectacularly important for the normal functioning of our bodies and for our overall health. It is quite literally the most important single source of information about us as organisms, and the main (albeit not the only) baseline from which virtually every other aspect of us—from molecular biology to behavior—emerges. Single variations in our DNA sequences can cause fatal inherited conditions like Tay-Sachs disease, and point mutations that occur during our lives can cause cancer. However, if we take a mathematical view of how much “genotype” versus “phenotype” knowledge we have, and how they change relatively over time (Figure 1.2), we can start to see how, and often why, phenotype trumps genotype.

As you can see from the graph, our genotype never grows, even as our phenotype becomes richer and richer, building more and more information about us over time. Even from the very beginning of life, the environment in and around us is hugely important in turning that consistent genotype into the beings we are, orders and orders of magnitude more complex than a sequence of DNA. We start out as a one-celled zygote, which then splits into two cells. One of those two cells is destined to ultimately produce (and become part of) our head, and the other is destined to become our feet. The most important factor that creates that differentiation is the local chemical environment within that zygote.

Figure 1.2 Data about you: how much “genotype” versus “phenotype” knowledge we have, and how they change relative to each other over time

Morphogens—signaling molecules present in cells at varying concentrations—are part of what makes cells differentiate. Different morphogen concentrations that are the result of gradients (relatively larger and smaller amounts of them) within that first zygote, and then their relative concentrations as cells continue to divide, are what define our axes of front-to-back, head-to-toe, and inside-to-outside. And the environment only gets more important from there, both inside and around our bodies.

It would be extreme to claim that your DNA (your genotype) can't tell me anything more about your state of health at any point in your life than it could have at conception—and it's not literally true, as mutations occur in our genes over time, and the proliferation of those mutations as some of our cells naturally die off, while others divide and grow, can be the cause of devastating diseases. But if we look at our original DNA—the germline sequence we get from our parents—there are things that can be derived from it, and we can make weighted guesses as to whether certain conditions might affect us at different times in our lives. Those derivations and guesses don't get more precise over time. The effect of phenotype, and how it evolves based on the changing environment in and around us (inclusive of all the genotypes and piled-on phenotypes of the organisms that live in and on us: our microbiomes), makes an outsized contribution to our health.

Even in the areas where we think genetics gives us a deterministic ability to derive something important about ourselves—normal or related to disease—we're realizing more and more that it simply isn't so straightforward. Yes, we know that certain genes or combinations of genes are involved in a wide range of different cancers, but the complex machinery—the activation and deactivation of different genes through multilayered feedback loops, intertwined pathways, intercellular communications, and additional complexity at every scale we can consider—means that a purely genetic view of health is often a useless oversimplification. Even the idea that genes can be “on” or “off” is a gross simplification of a much more involved system, in which anything from huge amounts of a protein being produced at a given moment to none of it at all is a possible scenario. As researchers at Stanford have written, traits, conditions, and diseases are “omnigenic.”9 Yes, the genes matter. But so many of them contribute to a given condition that trying to trace it back to a particular set of genes is an exercise in futility.

In sum, genetic information can enrich our models of disease, without question—but a good model will need much more than that. We need to combine the genetic information with physiological data, with behavioral data, and with information about our activity, our sleep, and our mood—information that we simply weren't able to measure objectively, at scale, until the past decade, but that can and does enrich our models of people and what is going on inside their bodies.

Figuring out how to think about all of these factors—seeing them as inputs in a formula whose output is a simplified but useful statement about our health, and about what treatment we should or shouldn't get—is what patient equations are all about. We may all be on a path toward clinical dementia. But as we proceed into this world of patient equations, remember that genetics are not destiny. So many of the factors that may influence whether or not we get a particular disease or condition—or at least how quickly we are progressing toward that disease or condition becoming a problem worth treating—are, at least to some extent, observable (if we know how to look) and in some cases even under our control. This includes what we eat, or where we live, or any of the dozens, hundreds, practically infinite number of things that we can either measure now, will be measuring soon, or would be able to measure if we could only identify what exactly they are.
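To give a flavor of what such a formula might look like in code, here is a deliberately toy sketch: a few measurable inputs combined, via a logistic function, into a single risk-like score between 0 and 1. Every input name and weight below is a hypothetical placeholder; a real patient equation would have its structure and coefficients learned from data and validated clinically.

```python
import math

# Hypothetical weights -- placeholders for illustration, not a real model.
WEIGHTS = {"daily_steps": -0.0003, "resting_hr": 0.04, "sleep_hours": -0.25}
INTERCEPT = 1.0

def toy_patient_equation(daily_steps: float, resting_hr: float,
                         sleep_hours: float) -> float:
    """Combine a few phenotype measurements into a 0-1 risk-like score."""
    z = (INTERCEPT
         + WEIGHTS["daily_steps"] * daily_steps
         + WEIGHTS["resting_hr"] * resting_hr
         + WEIGHTS["sleep_hours"] * sleep_hours)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash into (0, 1)

print(round(toy_patient_equation(8000, 62, 7.5), 3))  # ~0.311
```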

Your Very Own High-frequency Medical Device

I've said that now, for the first time in history, we can measure more than we've ever been able to measure, at scale, objectively. And I think it's just as much of a breakthrough as that of Herophilus, or van Leeuwenhoek, or Watson and Crick. We're measuring physiology, cognition, and behavior like never before, via sensors, in a world where sensors are literally everywhere. As I write this, I'm wearing a wristband and a chest patch—and, no, I know you probably aren't, at least not until you get to the end of this book and realize perhaps you ought to be, but I bet there's a smartphone either in your pocket, on your desk, or somewhere within reach. That smartphone, as I'll explain in just a moment, is a high-frequency medical device just as much as the patch that's tracking my heart rate, body temperature, and livestream ECG, and uploading it to the cloud in real time.

Devices like these are adding layers and layers of information onto what we can already know about people, and what we can use to add to treatment decisions and our models of disease. Doctors used to diagnose based only on physiology—things like temperature, skin color, hot and clammy versus cold and dry. Add blood chemistry, and the treatment decision becomes more reliable, maybe a hundred times as good. Now add imaging—X-rays, CT scans, MRIs. We can now diagnose and stage cancer, look at organs, and more. Maybe our diagnoses are another hundred times as accurate. Now, add sensors, some of which can do what we've always done—only with greater ease, frequency, and without needing a trip to the doctor's office or hospital (real-time continuous temperature readings, blood pressure, glucose measurements)—and some of which can measure things we weren't measuring before, like the number of steps we take or the places we travel.

There are two axes along which we can think about sensors and the devices they live in, or at least two axes along which we've thought about them in the past. First, we've had medical-grade devices and consumer-grade devices. It used to be that thermometers were a foot long and took 20 minutes to deliver a reading. They were cumbersome and hard to transport, and you were only getting your temperature taken in a clinic setting, certainly not at home. Obviously, that has changed. Blood pressure monitoring, glucose monitoring, and pretty much everything else has followed the same pathway. Doctors now send patients home with Holter monitors, we have devices that can analyze sleep outside of a sleep lab, and more.

Second, there are low-frequency devices and high-frequency ones, staccato measurements versus continuous feeds. My wristband is measuring my number of steps—a low-frequency piece of data. My chest patch is measuring my heart rhythm, which requires a lot more information.
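Some back-of-envelope arithmetic makes “a lot more information” concrete. A sketch, assuming 16-bit samples and a typical (but not product-specific) 250 Hz ECG sampling rate:

```python
SECONDS_PER_DAY = 24 * 60 * 60
BYTES_PER_SAMPLE = 2  # assumed 16-bit samples

def bytes_per_day(samples_per_second: float) -> float:
    return samples_per_second * SECONDS_PER_DAY * BYTES_PER_SAMPLE

# One daily step total vs. a continuous single-lead ECG at an assumed 250 Hz.
step_total = bytes_per_day(1 / SECONDS_PER_DAY)  # one sample per day
ecg_stream = bytes_per_day(250)

print(f"daily step total: {step_total:.0f} bytes/day")    # 2 bytes/day
print(f"continuous ECG:   {ecg_stream / 1e6:.1f} MB/day") # ~43 MB/day
```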

These distinctions matter less and less these days. Pretty much every low-frequency device that exists today has a high-frequency device inside of it. The chip inside my wristband is more sophisticated than the accelerometer that was used in the 1960s to put men on the moon. The devices that used to be medical-grade are now sold to consumers, and the ones that aren't yet will be—soon. And anyone with a smartphone—yes, even those of you who scoff, for now, at the wristband and chest patch I'm wearing—is walking around instrumented with a high-frequency consumer device that is capable of measuring physiological, cognitive, and behavioral elements that we could never measure before.

So we can measure things more easily and more objectively than ever in history. But what should we be measuring, and why? And how do we even figure out where to begin taking all of these new streams of data and incorporating them into the models of disease and diagnosis that already exist? How, in other words, does the iPhone matter to the life sciences business? To answer that question, we need to begin to look at just what we mean by patient equations.

Notes

  1. Noel Si-Yang Bay and Boon-Huat Bay, “Greek Anatomist Herophilus: The Father of Anatomy,” Anatomy & Cell Biology 43, no. 4 (2010): 280, https://doi.org/10.5115/acb.2010.43.4.280.
  2. Courtesy of Paul Herrling.
  3. Joel Dudley, conference talk at the Medidata NEXT event (November 2016).
  4. Paul Falkowski, “Leeuwenhoek's Lucky Break: How a Dutch Fabric-Maker Became the Father of Microbiology,” Discover magazine, June 2015, http://discovermagazine.com/2015/june/21-leeuwenhoeks-lucky-break.
  5. Milton Packer, “First Clinical Trial in Medicine Changed World History,” MedPage Today, August 15, 2018, https://www.medpagetoday.com/blogs/revolutionandrevelation/74568.
  6. Jeremy H. Baron, “Sailors' Scurvy Before and After James Lind—A Reassessment,” Nutrition Reviews 67, no. 6 (2009): 315–332.
  7. Michael Bartholomew, “James Lind's Treatise of the Scurvy (1753),” Postgraduate Medical Journal 78, no. 925 (November 1, 2002): 695–696, https://doi.org/10.1136/pmj.78.925.695.
  8. David A. Kirby, “The New Eugenics in Cinema: Genetic Determinism and Gene Therapy in GATTACA,” Science Fiction Studies 27, no. 2 (2000), https://www.depauw.edu/sfs/essays/gattaca.htm.
  9. Evan A. Boyle, Yang I. Li, and Jonathan K. Pritchard, “An Expanded View of Complex Traits: From Polygenic to Omnigenic,” Cell 169, no. 7 (June 2017): 1177–1186, https://doi.org/10.1016/j.cell.2017.05.038.