Chapter 1

The American Economy as an Efficient Machine

Professor Wassily Leontief and I almost overlapped at Harvard University. Leontief, who won the 1973 Nobel Prize for Economics, left Harvard in July 1975 to join the faculty of New York University, and I arrived at Harvard two months later, in September 1975, as a freshman in the undergraduate college. In point of fact, I had no idea at the time that Leontief existed, or that he had won the Nobel two years earlier, or that I would go on to study economics, something he had done half a century earlier.

The son of a Russian economics professor at the University of Saint Petersburg, Leontief entered his father’s university, renamed the University of Leningrad in the wake of the Russian Revolution, at the age of fifteen to study his father’s discipline, and graduated with the Soviet equivalent of a master’s degree at age nineteen. He chafed at the restrictions on academic freedom imposed by the newly created Soviet Union and, having convinced the Soviet secret police that he was dying of cancer, was allowed to leave the country to pursue his PhD in Germany.1 After completing his studies in Germany, he emigrated to America in 1931 to continue his endeavors at the National Bureau of Economic Research and to teach economics at Harvard University.2

In 1949, he began the work that would earn him the Nobel Prize twenty-four years later. He was interested in how the various pieces of the US economy worked together and, in due course, developed a technique for dividing the US economy into five hundred sectors, each of which could be linked to the others by modeling the inputs into and outputs out of each. He wove together these constituent “input-output tables,” as he called them, to describe the entire US economy.3 This work was seen as path-breaking and became sufficiently influential across the field of economics to earn Leontief the Nobel Prize.
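
Leontief’s five hundred interlinked sectors cannot fit on a page, but a toy version conveys the mechanics of an input-output table. The Python sketch below is a hypothetical three-sector illustration (the sector names, coefficients, and demand figures are all invented, not Leontief’s): given how much of each sector’s output every other sector consumes, it solves for the total output each sector must produce to satisfy a given level of final demand.

```python
# A minimal, hypothetical three-sector sketch of input-output logic.
# Sector names, coefficients, and demand figures are invented for illustration;
# Leontief's actual tables covered hundreds of sectors estimated from real data.
import numpy as np

sectors = ["agriculture", "manufacturing", "services"]

# A[i, j] = dollars of sector i's output needed to produce $1 of sector j's output
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.25, 0.10],
    [0.10, 0.15, 0.20],
])

# Final demand from households, government, and exports (illustrative $ billions)
d = np.array([100.0, 300.0, 500.0])

# Total output x must cover intermediate use plus final demand: x = Ax + d,
# so x = (I - A)^-1 d, the so-called Leontief inverse linking every sector to the others.
x = np.linalg.solve(np.eye(3) - A, d)

for name, output in zip(sectors, x):
    print(f"{name:>13}: total output required = {output:,.1f}")
```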

To Leontief, the US economy was a very big, complicated machine, fundamentally not unlike a car. A car has many subsystems—power train, steering, cooling/heating/ventilation, entertainment, safety, and so forth—each of which can be understood independently and then pieced together to produce the desired vehicle with the desired characteristics. Across the operation of that vehicle, the input-output equations are clear. When you push the gas pedal, the car speeds up. When you slam on the brakes, the car comes to a halt. As with a car, the inputs and outputs of the various subsystems of the economy could be mapped out and understood. Because he saw the economy as an understandable and analyzable machine, Leontief was a fan of planning, and during the economically challenged 1970s he argued that national planning might be the only hope for US economic policy.4

Though he left Harvard in 1975, Leontief’s legacy remained. My introductory economics professor, the late Otto Eckstein, used the increasing power of digital computing to create the most sophisticated computer model of the US economy of that era, an extension and enhancement of Leontief’s input-output work. In fact, Eckstein was a bit of a sensation, because in 1969 he had co-founded a thriving and influential business called Data Resources Inc. (DRI), which provided economic forecasts based on his computer model to governments and businesses. He would go on to sell DRI to publishing house McGraw-Hill for $100 million in 1979, a staggering amount in those days for a nerdy economics professor. But the business was well worth it. Suppose you were a policy maker or government official. You would surely want to know what effect pulling a given economic lever would have on the overall machine. Eckstein’s DRI made such forecasting possible. So, what was the problem?

The Power and Foibles of Models

I grew up in a tiny village and attended a regional high school in the middle of farm country in Canada but got it into my head that I wanted to study at Harvard College, across the border in the United States. I was somewhat terrified, but I survived the entry and quickly fell in love with economics.5 While I was taking my undergraduate economics courses, I was naive as to the power of models to shape actions and the power of metaphors to drive the adoption of models.6 I believed that the models I was being taught were descriptions of how the economy actually works. It took me a while to figure out that those models are no more than a theory of how the economy might work, if the economy were, in fact, like a car. And it is because we are all so convinced that the US economy is like a car that we stick with models like Leontief’s even when those models no longer generate accurate predictions concerning the economic outcomes that will follow our interventions.

Though not cognizant of it at the time, I was exclusively taught “neoclassical Keynesian economics.” The proverbial penny didn’t drop for me, because the subject wasn’t actually taught using that descriptor. It was taught to me as “economics”—the way the economy worked. I soaked it all in like a good little undergraduate. On our assigned reading lists, we occasionally found articles and books by University of Chicago economists, such as Milton Friedman or Robert Lucas. From what I could tell, we read these works simply to enable our professors to mock their “monetarist” thinking. It wasn’t economics. It was wrong. I was not being taught a way to think about economics: I was being taught the way to do so.

I graduated into the US economy of 1979, happy in the knowledge that I now understood how the economy worked, thanks to my Harvard economics training. But perplexingly for me, I observed that the 1979 economy featured two things happening at the same time, things whose coincidence my economics education said would be “impossible.” The US economy was suffering from exceedingly high inflation—11.3 percent, the seventh highest in the history of inflation measurement in the US economy. And it was suffering from high unemployment—in fact, the seventh consecutive year of unemployment above 6 percent, a level high enough for policy makers to feel the need for significant intervention. I had been taught, on the basis of the Phillips curve (which was presented as fact, at least in the Harvard economics department of that era), that a long period of high unemployment such as the US economy was experiencing could be accompanied only by slowing, not accelerating, inflation.

At that point I had an epiphany: even though my distinguished professors taught me economics earnestly as if it were the truth (i.e., how the US economy actually worked) and warned against people who taught wrongly about the workings of the US economy (e.g., the dreaded monetarists), my professors actually taught me only one model for how to interpret the workings of the US economy. I feel sheepish that I hadn’t figured this out before—but in my defense I was just a country boy!

It wasn’t a particularly bad model. I hadn’t come away from the experience thinking that the monetarists were more right than the neoclassical Keynesians or vice versa. But like all models, the neoclassical model was imperfect—and much more imperfect than my professors seemed to realize. I came away understanding that they had vastly oversold the veracity of their model, something that irks me to this day. But on the bright side, it was the best learning experience of my four years of Harvard undergraduate education, and I should be thankful for that. I learned to be careful not to place too much confidence in models—mine or anyone else’s.

Long after graduation, when I came across the work of MIT systems-dynamics professor John Sterman, I gained a greater appreciation for the foibles of models. Sterman points out in his aptly titled article “All Models Are Wrong: Reflections on Becoming a Systems Scientist” that all models are flawed.7 (This idea was first put forward in writing by statistician George Box in 1976.8) Because a model, by its very nature, simplifies the complexity of the real world so that we can use it to make decisions and take action, it leaves some things out and accentuates others, meaning that it is, in some profound sense, just plain wrong. Does that mean that we should stop modeling? No. Whether we realize it or not—and often we don’t—we always have a model. In fact, we can’t not model. As Sterman argues: “You never have the choice of let’s model or not. It is only a question of which model. And most of the time, the models that you’re operating from are ones you are not even aware that you are using.”9

I also came over time to understand the importance of metaphors in underpinning and motivating models—such as the machine metaphor of the US economy from Leontief’s work. This was impressed upon me by Craig Wynett, former head of innovation at consumer-products giant Procter & Gamble. Wynett studied the subject of analogic reasoning—reasoning by way of analogy or metaphor—to better understand consumer behavior. He came to the conclusion, based on brain-science research on the subject, that human beings comprehend new things almost entirely by making an analogy to something already familiar to them. That is why metaphors are so important. An idea is generally more compelling and better understood if it is presented as being like something familiar. This is even the case if the metaphor is not presented as such but observers are able nonetheless to make an analogy to something familiar. To my knowledge, Leontief himself didn’t use the metaphor of the economy as a machine, but his work can be more easily understood and is more compelling because his model strikes observers as implicitly relying on the machine metaphor.

In 1943, British Prime Minister Winston Churchill famously observed: “We shape our buildings, and afterwards our buildings shape us.”10 The same thing holds for metaphors: we select our metaphors, and afterward, our metaphors shape us. If our metaphor for our self is a lone wolf, we will feel isolated and will act alone instead of building coalitions to get goals accomplished. If our metaphor for life is a battle, then we will categorize those around us as either allies or enemies and attempt to work with our allies to defeat our enemies. If our metaphor—per Bob Dylan—is that “chaos is a friend of mine,” we will embrace a chaotic life rather than attempt to organize it. If a country’s metaphor for itself is a melting pot—as is America’s—it will produce citizens who think of themselves first, foremost, and dominantly as Americans. Their distinctiveness will melt into the American self-conception. If a country’s metaphor for itself is a mosaic—as is Canada’s—it will produce citizens who feel they are a distinctive and identifiable part of a complex mosaic (e.g., as Polish Canadians, Vietnamese Canadians, Punjabi Canadians, etc.). The distinctiveness of their mosaic chip will never melt into the mass. To be clear, I am not arguing that one metaphor is inherently superior to another. The important point is that these different metaphors drive adoption of different models that produce very different outcomes.

Models in Business and Public Policy

To understand both the use and power of models in shaping democratic capitalism, it is helpful to look at a few specific models at work, identifying the metaphors that made them compelling, the outcomes they hoped to produce, their presumed cause-and-effect structure, and the proxies used to measure their effects. Let’s start with an example from business.

Customer loyalty

In 1996, Fred Reichheld published a best-selling—and I think excellent—book called The Loyalty Effect, in which he proposed a model for business success with customers.11 The book explicitly prioritized maintaining the loyalty of existing customers rather than overfocusing on attracting new customers. I believe that the message resonated with readers because a compelling metaphor jumped to their minds: the leaky bucket. If your bucket has a leak, you have two ways to keep it full: add more water or patch the leak.

Reichheld’s insight was, in essence, that most companies choose to add water: they spend time, energy, and resources on attracting new customers, including initiatives such as reduced prices and/or special promotions for new customers that existing customers can’t access. This, he argued, had the effect of making the leaks worse, because the largely ignored existing customers became more likely to defect, which made it all the harder for companies to keep their existing-customer numbers up, let alone add to them. Far better, Reichheld believed, to patch the leaks first.

That is to say, if a company had 1,000 existing customers at the beginning of a given year, but during that year lost 100 of them (that is, those customers leaked out of the bucket), the first 11 percent of growth from the remaining base of 900 customers would only get the company back to where it was at the beginning of the year—with 1,000 customers (that is, the company would merely have refilled the leaky bucket). If the company’s goal had been to grow, say, 8 percent during the year in order to get to 1,080 by the end of the year, the company would now have to gain 180 new customers. Hitting the year-end goal would have been much easier had the company kept its existing customer base and needed only to acquire 80 new customers. Hence, says the customer-loyalty model, companies should focus more attention and resources on maintaining the loyalty of existing customers so that they don’t have to win as many new customers to power growth.
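
The arithmetic generalizes into a simple rule: the new customers a company must win equal its growth target plus whatever leaked out of the bucket. Here is a minimal sketch in Python, replaying the hypothetical figures above.

```python
# The leaky-bucket arithmetic from the example above, as a reusable rule.
def new_customers_needed(starting_customers: int, churned: int, growth_target: float) -> int:
    """Customers that must be acquired to hit the growth target after churn."""
    year_end_goal = round(starting_customers * (1 + growth_target))
    retained = starting_customers - churned
    return year_end_goal - retained

# 1,000 customers, 100 lost, 8 percent growth goal: 180 new customers needed.
print(new_customers_needed(1000, churned=100, growth_target=0.08))  # 180
# Patch the leak entirely and only 80 are needed.
print(new_customers_needed(1000, churned=0, growth_target=0.08))    # 80
```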

Reichheld’s book was highly influential, and many companies adopted his model. Its popularity increased when Reichheld published a follow-up article in Harvard Business Review in 2003 titled “The One Number You Need to Grow.”12 In it, Reichheld offered a helpful proxy for customer loyalty, which provided a practical answer to the question: How would you know whether you are engendering a level of loyalty that will result in existing customers staying rather than exiting? His proxy was intoxicatingly simple. Just ask customers to answer the following question, on a scale of 0 to 10: How likely is it that you would recommend our company/product/service to a friend or colleague? Reichheld told companies to give themselves one point if the customer answered 9 or 10, zero if the customer answered 7 or 8, and negative one for answers of 6 or below. The net total across a hundred surveyed customers (equivalently, the percentage of promoters minus the percentage of detractors) would be the company’s Net Promoter Score. That is, if sixty answered 9 or 10, eighteen answered 7 or 8, and twenty-two answered 6 or below, your Net Promoter Score would be sixty minus twenty-two, for a score of thirty-eight. Reichheld showed that the Net Promoter Score correlated highly with future customer loyalty (and other good things like actual recommendations to friends and colleagues).
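
Reichheld’s scoring rule is simple enough to express in a few lines of Python. The sketch below simply replays the hypothetical hundred-customer survey from the text; it is an illustration of the scoring arithmetic, not Reichheld’s own tooling.

```python
# Net Promoter Score: promoters (9-10) count +1, passives (7-8) count 0,
# detractors (6 or below) count -1; the score is the net percentage.
def net_promoter_score(responses: list[int]) -> float:
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# The example from the text: 60 promoters, 18 passives, 22 detractors.
survey = [9] * 60 + [7] * 18 + [5] * 22
print(net_promoter_score(survey))  # 38.0
```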

This example illustrates the core components of models: first, a desired outcome (less-expensive growth); second, a metaphor that animates a model (the leaky bucket); third, the cause-and-effect sequence intended to get you to that desired outcome (focus first on the loyalty of existing customers); and fourth, a proxy (or proxies) to measure your progress (Net Promoter Score).

In the policy realm, primary/secondary education provides another example of this sequence.

No Child Left Behind

At the turn of the twenty-first century, there was growing concern that, on average, American K–12 students were falling behind those of other leading nations. The desired outcome was to improve educational performance for American students in order to support the global competitiveness of the American economy.

The underlying metaphor was that of the negligent parent. The cause-and-effect model built upon it was the theory that America’s performance lagged because both schools and the teachers in them (the parents in the metaphor) were not being held accountable for delivering strong educational outcomes in the K–12 system, a system funded largely with American tax dollars. Teachers’ efforts and attention to the task were variable, and the weaker schools had neither the will nor the power to enforce standards and improve the work of underperforming teachers. If the federal government stepped in and enforced accountability—that is, forced those negligent parents to pay attention—the variability would diminish, and the average outcome would rise meaningfully.

Results on standardized tests given to students were used as the proxy for educational outcomes. By evaluating teachers on the test results of their students and holding the teachers accountable for producing acceptable results, the federal authorities responsible for managing education would be able to identify consistently underperforming teachers and schools and have a publicly acceptable rationale for taking disciplinary actions.

While this may sound like a typically Republican approach, the No Child Left Behind Act enjoyed overwhelming bipartisan support on its way to passage in 2001 and was signed into law by President George W. Bush in early 2002. The desired outcome was universally held, the metaphor helped everyone internalize the model, the model was compelling, and the proxy was powerful enough to produce a relatively rare burst of bipartisan agreement on a major policy initiative.

We engage in the process described in these examples not because we make a deliberate decision to do so but because we need a construct to guide our decisions as we manage a given system or function. As John Sterman says, modeling is automatic for humans. When we have a goal in mind, we create or choose a model to pursue that goal, and the model suggests proxies that we can use to measure progress. We do so whether or not we are conscious of building and deploying a model. And the model we create or choose is almost always grounded in a metaphor that we can easily relate to.

With this in mind, let’s look at how we think about the economy.

The Metaphor: The Economy Is a Machine

Thanks in substantial part to Leontief’s outsize influence—he is more famous for having four of his PhD students go on to win Nobel Prizes than for winning the Nobel Prize himself—our metaphor for the economy is a complicated machine. We did not always view the economy as a machine. For much of American history, in fact, people viewed the economy as a black box, whose workings were mysterious and unpredictable.

It was a mystery why the economy plunged into a terrible and persistent malaise during the twenty-three-year Long Depression that began in 1873. And the mystery was repeated, along with outright terror, a generation later in the Great Depression, when stock markets fell by 70 percent, half of all banks went out of business, and unemployment soared to a quarter of American workers. All these outcomes were far more extreme than anything experienced before: American families genuinely worried about the survival of their country.

So, when policy makers and academics came up with the idea that the economy was a machine that could be steered, the idea was reassuring. Unsurprisingly, the image has continued to grow in power and influence. The Federal Reserve Board sets inflation targets and expects its monetary policies to produce exact outcomes. The Congressional Budget Office, using the latest version of the kind of computer forecasting model that Otto Eckstein pioneered, is mandated to model the fiscal impact of every piece of legislation so that Congress can “know” its future budgetary consequences before voting on it.

It is not just government that sees the economy as a machine. Business does too. A classic representation of this is found in Ray Dalio’s widely viewed 2013 YouTube video, “How the Economic Machine Works.” Dalio speaks with a confidence befitting the billionaire founder of the world’s biggest hedge fund, Bridgewater Associates: “The economy is like a machine. At the most fundamental level it is a relatively simple machine. But many people don’t understand it—or they don’t agree on how it works—and this has led to a lot of needless economic suffering.”13 Dalio is hardly unique or original in using the machine metaphor.

The machine model is also integrated into job design. Companies are absolutely chock-a-block with piece-part specialists—financial specialists, tax specialists, marketing specialists, accounting specialists, production specialists, and so on. Very few employees of US businesses see their job as providing an integrative view and making integrative decisions. Each is a cog in the big machine that is their company—which in turn is a cog in the economic machine.

Businesspeople don’t get this way automatically or naturally. They are trained to be this way. Business schools break down the business machine into functional silos (marketing, finance, operations, human resources, etc.) and teach narrowly within those silos, with little or no attempt to integrate across them, all the while assuming that if someone adds up the narrow, independent answers produced in each silo, those answers will amount to a terrific comprehensive answer.

To be fair, not all business folk subscribe to the idea that business can be broken down, like a machine, into independent components. Perhaps the most influential business thinker of all time, Peter Drucker, certainly didn’t buy it, simply and clearly pointing out that, “There are no tax decisions . . . There are no marketing decisions . . . There are no specialty decisions . . . There are only business decisions.”14 But Drucker notwithstanding, the business schools of America (undergraduate and postgraduate combined) pump out over half a million narrow specialists per year into the US economy—a fifth of university graduates of all disciplines combined.15

Economists, whether they work in a business or in the public-policy apparatus, are similarly siloed. It is so much easier to work on one piece of the machine—labor economics, industrial organization, firm microeconomics, fiscal policy, etc. (partial-equilibrium economics)—than on the challenge of figuring out how all the pieces fit together (general-equilibrium economics). That is why there are a thousand partial-equilibrium economists for every one general-equilibrium economist. It is much handier to just assume that the constituent parts add up to a predictable whole than to contemplate the complexities of the whole.

The Desired Outcome: Growth

If the metaphor is the complicated machine, to build a model on it, we must next specify what we want the economy to deliver. The answer—a steady improvement in the standard of living, or in income—is at one level quite obvious. But hidden in that answer is an assumption about how “we” benefit from economic growth. That assumption is shaped by one of the most basic mathematical models used in the natural sciences: the famous Gaussian distribution, a construct that is based on the work of the early-nineteenth-century German mathematician and physicist Carl Friedrich Gauss.16

You will have met the Gaussian distribution early in your life. After your very first Stanford-Binet IQ test, typically given when you are about six years old, your teacher told your parents where your result placed you on the “bell curve”—the more colloquial name we use for the Gaussian distribution. Later on, when a teacher accidentally made the test too hard (or too easy) for your class, he told you that he adjusted the grades up (or down) based on that same bell curve. If you took a statistics course in college, it was dominantly about Gaussian distributions. Later still, when you took your four-year-old daughter to the pediatrician, he explained to you where her height and weight placed her on the bell curve for four-year-old girls.

What he meant, of course, was the Gaussian distribution of children’s weights and heights. It is referred to as a bell curve because when it is drawn on a piece of paper, it looks like the side view of a bell (see figure 1-1).

It is big in the middle and tapered dramatically at the edges. It is symmetrical, with both sides the same size and shape, like a proper bell. An implication of the shape and symmetry is that the greatest number of occurrences (the mode), the average of the occurrences (the mean), and the fiftieth-percentile occurrence (the median) are all the same. That is to say, the greatest number of observations are right in the middle, and there are equal numbers of observations to either side of the middle.

FIGURE 1–1

Gaussian distribution

Gaussian distributions are all around us in both the natural and man-made worlds. Factors such as human weight, height, and IQ, but also things like measurement errors or slight variations in the size and weight of manufactured products from a given assembly line, array themselves in Gaussian distributions. Because they are so prevalent, they are also called “normal distributions.” Our default assumption is that if we are measuring the distribution of any set of results, those results are more likely to be distributed normally than any other way.

The Gaussian distribution is a favorite of scientists because it is found so frequently in nature, and therefore the phenomenon has an apparent validity. It is not a bad default to assume that if you make a thousand observations of something—flips of a coin or sales of muffin types—the results might well follow a Gaussian distribution. (Though the latter won’t!) But in addition, Gaussian distributions have handy, analysis-friendly properties. A quantity like height is the sum of many small, independent influences, and the central limit theorem tells us that such sums pile up in a bell shape. So if you measure the height of 100 eight-year-old boys, the distribution will look vaguely bell-shaped; if you measure another 9,900, the 10,000 observations will look a lot more bell-shaped; and if you measure another 990,000, the million will trace the bell almost perfectly. As you add more observations, the mean becomes more stable and the dispersion around the mean—called the standard deviation by statisticians—also settles to a consistent level. For any Gaussian distribution, roughly 68 percent of the observations lie within one standard deviation of the mean and roughly 95 percent lie within two standard deviations.
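
A quick simulation illustrates these properties. In the sketch below, each simulated “height” is just the sum of many small, independent nudges (all of the numbers are invented); with enough observations the histogram settles into a bell, and the one- and two-standard-deviation shares land near 68 and 95 percent.

```python
# Simulate heights as sums of many small independent influences (invented numbers)
# and check the 68 percent / 95 percent standard-deviation rule.
import random
import statistics

random.seed(1)  # arbitrary seed, for reproducibility

def simulated_height() -> float:
    # 100 cm baseline plus 200 tiny independent nudges
    return 100 + sum(random.uniform(-0.1, 0.1) for _ in range(200))

heights = [simulated_height() for _ in range(20_000)]
mean = statistics.mean(heights)
sd = statistics.stdev(heights)

within_one = sum(abs(h - mean) <= sd for h in heights) / len(heights)
within_two = sum(abs(h - mean) <= 2 * sd for h in heights) / len(heights)

print(f"mean = {mean:.2f} cm, standard deviation = {sd:.2f} cm")
print(f"within one standard deviation:  {within_one:.0%}")   # close to 68%
print(f"within two standard deviations: {within_two:.0%}")   # close to 95%
```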

That is all really handy for doing analyses, so statisticians, epidemiologists, physicists, psychologists, sociologists, and economists all like working with Gaussian distributions. If they find an observation that is more than two standard deviations from the mean, they are prepared to say that this result is genuinely and meaningfully different from the mean. Imagine, for example, that the mean rate of remission from a certain kind of colon cancer is 33 percent and the standard deviation is 4 percentage points in a trial of a given size planned for a new experimental drug. If, in that trial of patients taking the new drug, the mean remission is 42 percent—more than two standard deviations above the mean—the researchers will declare that this new drug is a genuine, meaningful improvement over the existing best practice. If the mean remission is only 35 percent, they will say that the result is not significantly different from the mean and perhaps just the product of random fluctuations.
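
In code, that rule of thumb is nothing more than a distance measured in standard deviations. A short sketch, reusing the hypothetical trial numbers above:

```python
# The two-standard-deviation rule of thumb, applied to the hypothetical trial above.
def standard_deviations_from_mean(observed: float, mean: float, sd: float) -> float:
    return (observed - mean) / sd

BASELINE_REMISSION = 33.0  # percent (hypothetical)
TRIAL_SD = 4.0             # percentage points (hypothetical)

for observed in (42.0, 35.0):
    z = standard_deviations_from_mean(observed, BASELINE_REMISSION, TRIAL_SD)
    verdict = "meaningful improvement" if z > 2 else "not significantly different"
    print(f"{observed:.0f}% remission is {z:+.2f} standard deviations away: {verdict}")
```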

A key characteristic of the Gaussian distribution is that it is a product of the independence of the data being observed. Going back to our four-year-old girls, if I am measuring their height, the height of each is independent of that of all the others. That is to say, if I measure the height of Sally, it tells me absolutely nothing about the height of Reshmi. Because Sally happens to be short, I can’t assume that Reshmi will be tall because nature decided to take two inches from Sally and give them to Reshmi. The same holds for coin flips. If we flip a coin one hundred times and then repeat the sequence of one hundred flips numerous times, the mean, median, and mode will be a result of fifty heads and fifty tails, and the hundred-flip trials will array in a Gaussian distribution around the fifty-fifty split. That is because each coin flip is completely independent of the ones that came before. Even though the thought is fundamentally counterintuitive, if I get heads on ten consecutive tosses, I am not more likely than 50 percent to get tails on the next flip.
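
That counterintuitive claim about independence is easy to check by brute force. The sketch below (with an arbitrarily chosen seed and sample size) looks only at flips that immediately follow ten consecutive heads and reports how often the next flip comes up tails.

```python
# Independence check: after ten straight heads, is tails any more likely? (No.)
import random

random.seed(7)  # arbitrary seed, for reproducibility
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

STREAK = 10
next_flips = [
    flips[i]
    for i in range(STREAK, len(flips))
    if all(flips[i - STREAK:i])  # the previous ten flips were all heads
]

tails_share = 1 - sum(next_flips) / len(next_flips)
print(f"streaks of {STREAK} heads observed: {len(next_flips)}")
print(f"share of tails on the very next flip: {tails_share:.3f}")  # about 0.5
```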

Given that the Gaussian distribution is so common and handy to use, it should come as no surprise that a Gaussian distribution represents our expectation of economic outcomes such as those related to income and wages. It is enshrined in our economic language, in which a core assumption is that the largest part of the population is the “middle class,” which is clustered around the income mean. Most typically, we use Gaussian terminology to demarcate the middle class, often thought of as families earning between the twenty-fifth and seventy-fifth percentiles. The bulge in the middle is the middle class and the tapered end to the left represents the smaller number of poorer families and the tapered end to the right the smaller number of richer families.

To restate, then, the desired outcome of our model for the economy is economic growth, conceptualized as the Gaussian distribution of income moving steadily to the right over time, with the median income of middle-class families increasing over time and the economic fate of the poor getting absolutely better over time (see figure 1–2).

FIGURE 1–2

Desired advance in the income distribution

Beyond overall growth, the Gaussian construct, in this context, also presumes economic mobility. Since independence of the outcomes is assumed, families who show up in the lower end of the distribution in a given year would have a reasonable chance of showing up in the middle or the higher end in later years rather than being stuck in their spots in the income distribution. And by extension, we imagine that other actors can, through their efforts, exhibit a similar mobility. This decade’s worst-performing company can, through its own decisions and actions, turn itself into the following decade’s top performer. If instead of being independent, future outcomes were dependent on current outcomes, then such economic mobility would be curtailed.

The fact that we predicate economic outcomes on a Gaussian distribution is critical to explaining why American capitalism has enjoyed consistent democratic support. The fundamental characteristic of the democratic aspect of the equation is that the government of the jurisdiction in question—America in this case—is voted into office by a majority of its citizens. In other words, 51 percent or more of voting citizens need to vote for a government for it to be charged with managing the economy. It follows, then, that if the democratic-capitalist system is operated so as to cause a Gaussian distribution of economic outcomes to shift rightward over time, the median voter—who is on average getting ever more prosperous over time—will continue to vote to support political parties that promote democratic capitalism.

This Gaussian narrative was a reasonably good description of the American political economy, at least during the first two hundred years of the country’s existence. Incomes for all families moved remarkably steadily to the right in the vast majority of years, producing the largest economy in the world with by far the highest standard of living of any consequentially sized country.17 While not perfectly bell-shaped, the family-income distribution consistently looked reasonably normal throughout the period. There has, of course, always been a longer extension to the right than to the left, because the lowest income, on the left, by definition, is zero, while the incomes for the rich, on the right, extend up into the hundreds of millions. But broadly speaking, this distortion on the ends aside, the income-distribution curve was largely bell-shaped.

However, as I observed in my introduction to this book, in the past four-plus decades, the pattern has changed dramatically for the worse. We no longer see such a consistent move rightward of a bell-shaped prosperity curve. The median family income is increasing much less rapidly than before, and the distribution of incomes is becoming increasingly skewed. To understand why this is happening, we need to reexamine our theories about what “improves” the economy as well as the proxies we employ to measure progress and guide further action.

What Drives the Outcome: Efficiency

Machines that process inputs into outputs are judged and compared according to the efficiency with which they convert the one into the other. A car that travels farther than another on the same amount of fuel is, other factors being equal (road conditions, for instance, or weather), more efficient and therefore better by that measure. If we assume that the economy is a machine, then the same cause-and-effect sequence must apply. Greater efficiency drives growth—reflected in the progressively rightward shift of the Gaussian distribution of incomes. Based on that core principle, American economists, policy makers, and business managers have consistently pursued measures that encourage ever greater efficiency. They have been encouraged in this endeavor—and, more generally, in their beliefs—by the contributions of a handful of influential thinkers.

Adam Smith

It is an interesting and important historical coincidence that Adam Smith’s The Wealth of Nations was published in 1776, the same year that America declared its independence from Britain. In it, Smith provides two important underpinnings for a model of efficiency. First is the “invisible hand” of an unfettered market of buyers and sellers that produces, without explicit organization, a price that generates efficient use of resources to provide the optimal quantity of goods. Second is the pin factory he uses to illustrate the technique for enhancing the efficiency of production. The inefficient approach is to staff a factory with workers who independently fabricate entire pins. The efficient approach, in contrast, is to break down the pin-production process into distinct tasks—making the stem, making the head, putting the pieces together—and give each worker a single task. Through this division of labor, as Smith called it, pins would be produced much more efficiently. And if all companies in the economy pursued the division of labor and the attendant efficiency, the entire economic machine would operate more efficiently.

Smith’s work was well known to the Founding Fathers, including Thomas Jefferson, Alexander Hamilton, and James Madison, who led the formulation of the US Constitution and developed the basic principles of US economic policy in the quarter century following the Declaration of Independence. A full 28 percent of American libraries from 1777 to 1790 held The Wealth of Nations.18 Hamilton made specific reference to the benefits of the division of labor in his 1791 Report on the Subject of Manufactures, lauding the “greater skill and dexterity naturally resulting from a constant and undivided application to a single object” and arguing that it “has the effect of augmenting the productive powers of labor, and with them, the total mass of the produce or revenue of a country.”19 Both Smith’s invisible hand and division of labor were embraced as the economic policy of the US was formulated in this seminal period. Markets were largely left free to establish efficient prices and quantities, and as US businesses grew, they used the division of labor to produce goods ever more efficiently for those unfettered markets.

David Ricardo

Four decades after Smith, David Ricardo took the efficiency idea further with his theory of comparative advantage, arguing in On the Principles of Political Economy and Taxation that, since it is more efficient for Portuguese workers to make wine, thanks to their natural endowment of sunny weather, and English workers to make cloth, due to the cooler climate in which they operate, each would be better off were they to focus on their area of advantage and trade with the other.20
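
A small numerical sketch makes the gains-from-trade logic concrete. The labor requirements below are purely illustrative, loosely in the spirit of Ricardo’s wine-and-cloth example rather than historical data, and they show that even when one country is more efficient at producing everything, specialization and trade still raise the total output of both goods.

```python
# Toy comparison of self-sufficiency vs. specialization and trade.
# Labor requirements (worker-years per unit of output) are illustrative only.
labor = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}
workforce = {"Portugal": 170, "England": 220}  # just enough to make one unit of each good

# Self-sufficiency: each country makes one unit of wine and one unit of cloth itself.
without_trade = {"wine": 2.0, "cloth": 2.0}

# Specialization: each country devotes all its labor to its comparative-advantage
# good (wine for Portugal, cloth for England), then trades for the other.
with_trade = {
    "wine": workforce["Portugal"] / labor["Portugal"]["wine"],   # 2.125 units
    "cloth": workforce["England"] / labor["England"]["cloth"],   # 2.2 units
}

for good in ("wine", "cloth"):
    print(f"{good}: {without_trade[good]:.3f} units without trade, "
          f"{with_trade[good]:.3f} units with specialization and trade")
```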

The insights of Smith and Ricardo both reflected and drove the Industrial Revolution, which was as much about innovations in processes that reduced waste and increased productivity as it was about the application of new technologies. While America immediately embraced Smith’s invisible hand and division of labor, especially as the country’s manufacturing sector grew in size and scale during the nineteenth century, the United States was much slower to embrace Ricardo’s push for the efficiency boost that comes from freer trade. During the nineteenth century, the southern states, which were big agricultural-product exporters, wanted freer trade to make sure they had markets for their exports, while the northern states wanted protection for their nascent manufacturing industries. However, after the Civil War, America’s manufacturing sector grew to rival England’s, and by the turn of the twentieth century America had become both the world’s largest manufacturer and its greatest exporter.21

That accomplishment preceded what, in hindsight, is probably the dumbest economic policy initiative in American history: the Smoot–Hawley Tariff Act of 1930. Even though America, as the world’s largest exporter, had by far the most to lose in a trade war, it started one—and a monumental one at that—by dramatically raising tariffs on imported manufactured goods. Unsurprisingly, its trading partners responded in kind. The subsequent global trade war either caused or exacerbated (depending on which economic historians one chooses to believe) the Great Depression.

However, in 1947, the United States woke up to its economic place in the world and did a 180-degree turn. It took a forceful lead in pulling together chief exporting nations to create the original General Agreement on Tariffs and Trade (GATT) and continued to lead a succession of major negotiating rounds, which drove average tariffs in developed countries from approximately 25 percent in 1947 to 4 percent by 2000.22 The efficiency Ricardo argued for in 1817 became a central US policy thrust, if not obsession, though only after 130 years had elapsed.

Frederick Winslow Taylor

In the meantime, the drive for more efficiency proceeded apace on other fronts, which brings us to Frederick Winslow Taylor. Trained as a mechanical engineer, Taylor made it his life’s work to promote industrial efficiency, becoming the intellectual leader of what came to be known as the Efficiency Movement. His work was highly influential throughout the Progressive Era (1890–1920) and is encapsulated in his 1911 book, The Principles of Scientific Management.

Taylor is both famous (with managers) and infamous (with workers) for developing the technique of using time-and-motion studies to determine the optimal method and target time for accomplishing each task in a production process. For example, his studies would determine the optimal amount of coal with which a worker should aim to fill his shovel. Too little coal per shovel load would waste the strength of the worker. Too much coal per shovel load would tire the worker out too quickly.

To Taylor, increasing efficiency was management’s most central task, as illustrated by this passage from his 1911 book: “It is only through enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation that this faster work can be assured. And the duty of enforcing the adoption of standards and enforcing this cooperation rests with management alone.”23

Taylor’s work built what is now the huge field of industrial engineering and created a focus on the use of “scientific management” to drive greater and greater efficiency.

W. Edwards Deming

A generation later, after World War II, W. Edwards Deming helped pioneer what became the field of “total quality management.” Deming was born in the American heartland, in Sioux City, Iowa, at the turn of the twentieth century. He lived his first half-century in near total obscurity, working on statistical issues for the Department of Agriculture and the Census Bureau. But that all changed starting in 1947, when General Douglas MacArthur, who was overseeing the US efforts to rebuild postwar Japan, invited Deming to assist in carrying out the 1951 Japanese census. While doing this work, Deming gave a series of lectures on statistical process control and quality management that made him a legend and guru to Japanese companies attempting to become internationally consequential.

Deming created a managerial approach for organizing production so as to drive out waste and achieve both quality and efficiency, which famously influenced Toyota and what became known as the Toyota Production System. Thanks in part to Deming’s contributions, which are revered to this day in Japan, the laggard Japanese auto manufacturers became the bane of Detroit carmakers, so much so that American manufacturers, auto and otherwise, came to embrace Deming’s methods in the 1980s and helped drive an American manufacturing revolution. When he was awarded the National Medal of Technology in 1987, Deming was finally a hero in his home country, thirty-seven years after Japan created the prestigious Deming Prize in his honor.

By the final quarter of the twentieth century, America had become fully obsessed with and driven by increased efficiency. The collective belief of Smith, Ricardo, Taylor, and Deming in the virtue of efficiency has come to dominate American policy and business.

Where Does All That Leave Us?

What I hope the foregoing has established is that whether we realize it or not, our dominant model of the economy is that of a machine. The desired outcome is that the assumed Gaussian distribution of family prosperity in America moves smartly and consistently to the right over time. The cause-and-effect relationship is that more efficiency drives better functioning of the model and more progress toward the desired outcome. That implies that it is worth fighting hard to put in place fixes to the economic machine that drive it toward perfect efficiency.

Pursuit of efficiency is definitively not a bad thing. The rise in the standard of living of the average family in America from the Revolutionary War to the present is substantially the result of much higher efficiency today compared with that of centuries ago. But there is ample evidence that the pursuit of efficiency just isn’t working as well now—and hasn’t been for almost half a century.

The problem, as is the case in all facets of life, is that too much of a good thing is no longer a good thing. In the case of American democratic capitalism, the proxies that we have adopted for measuring and driving efficiency are turning our pursuit of efficiency into a destructive force. Let’s turn to look at how and why that has happened.
