Chapter 5

Achieving Balance in America’s Natural System

Toy lovers woke up September 19, 2017, to the shocking news that Toys “R” Us had filed for bankruptcy the previous day. Six months later, over twenty million radio listeners were similarly shocked when they learned that iHeartMedia, the owner of their favorite radio station, also filed for bankruptcy. We typically assume that bankruptcy is what happens to badly managed companies. But it is something of a stretch to think of America’s number-one toy retailer and its biggest radio-station operator as bad companies. Industry leaders are not usually badly managed.

What got those companies into trouble was not a failure to win customers. It was, quite simply, that both companies were owned by leveraged-buyout (LBO) firms, whose business model involves purchasing companies that appear to have a robust cash flow and then leveraging up their capital structures to replace expensive equity with cheap debt.1 As a result, Toys “R” Us was “saddled with debt” and iHeartMedia was put under a “crushing load of debt,” according to close observers.2 So, although the companies were reasonably well managed from an operational perspective, the massive debt loads they were forced to carry had sharply increased the risk of default should overall industry conditions deteriorate, and these were indeed tricky times in retailing and media. The LBO firms that purchased Toys “R” Us and iHeartMedia learned the hard way that greater apparent capital efficiency is not necessarily better, and their investments had to be written off.

Imposing debt on the scale that Toys “R” Us and iHeartMedia were required to carry is predicated on the ability of the LBO firms to predict those companies’ future cash flows fairly precisely. The belief that a high degree of precision is possible is itself rooted in a model in which companies operate as cogs in a machine. But as we have seen, the economy is not a machine in which the players are cogs. Businesses are part of a natural and highly complex system, characterized by a large number of interdependencies and contingencies, which makes estimating their future cash flows with anything resembling precision a fool’s errand. The system is adaptive as well, in that players inside the company modify their behavior to take into account the fact that the company is now operated with one goal in mind: maximizing the benefit to the LBO-firm investors, with no regard whatsoever for employees. What’s more, changes in parts of the system have ramifications outside those parts—cutting staff and capital investments translates into lower investment in customer satisfaction, which increases the risk to revenues.

The bottom line is that a relentless and unconstrained drive for capital efficiency saps LBO acquisitions of all of their resilience. Equity providers are not owed any fixed payment in any given year. So, if a firm does badly in the trough of a market, it doesn’t have to pay a dividend (or any other form of payment) to the equity holders. In contrast, debt payments must be made regardless of the performance of the firm. Any homeowner knows the story. It is never a pleasant thing to lose your job. But if your mortgage has been paid off, the experience is not likely to be financially catastrophic. However, if your house is mortgaged to the hilt, you will probably lose it and perhaps more. In similar fashion, having a relatively high proportion of equity in its capital structure increases a company’s resilience to downturns (like having a paid-off house and therefore no monthly mortgage payment). The LBO firms destroyed the resilience provided by having a buffer of equity, in order to achieve a financially efficient capital structure, and that is what sank Toys “R” Us and iHeartMedia. Their businesses weren’t bad, but their financial structures certainly were.
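
To make the arithmetic concrete, here is a minimal sketch in Python, using hypothetical numbers rather than the actual Toys “R” Us or iHeartMedia figures. It shows why the same downturn that an equity-heavy firm can absorb pushes a highly leveraged one toward default: interest is a fixed obligation, while a dividend can simply be skipped in a bad year.

```python
# Illustrative sketch with assumed, hypothetical numbers (in $ millions).
def surplus(operating_cash, debt, rate):
    """Cash left after mandatory interest payments.

    Interest must be paid in good years and bad; dividends to equity
    holders, by contrast, can be deferred, so they impose no fixed claim here.
    """
    return operating_cash - debt * rate

for structure, debt in [("equity-heavy", 500), ("post-LBO", 6000)]:
    for year, cash in [("normal year", 500), ("downturn", 300)]:
        s = surplus(cash, debt, rate=0.06)
        status = "covers its obligations" if s >= 0 else "default risk"
        print(f"{structure:12s} | {year:11s} | surplus {s:6.0f} -> {status}")
```

With these assumed figures, both capital structures look fine in a normal year; only the leveraged one is pushed underwater by the downturn. That gap is the resilience the equity buffer provides.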

The core design challenge for the future of American democratic capitalism is to achieve a much better balance between efficiency and resilience in the system than that exhibited by these LBO disasters. It is a delicate balance, as there is danger on both sides. Pursuit of all resilience and no efficiency is as problematic as pursuit of all efficiency and no resilience. The only difference is in the nature of the death.

Nonresilient systems tend to die explosively. The US capital markets blew up spectacularly in 2008–2009, and could well have gone extinct had it not been for intervention by the US government. In 2011, in Japan’s Fukushima nuclear-reactor disaster, two of the reactors, lacking resilient systems, melted down and then the buildings housing them exploded, spewing massive levels of radiation into the region. To be sure, the tsunami that caused the catastrophe was a low-probability event. But the downside to that weak resilience has been and will continue to be tremendous. The latest estimate of the cleanup costs alone is over $600 billion, and that doesn’t include the costs of having a still-uninhabitable zone around the plant, with citizens perhaps permanently displaced.3

In contrast, inefficient systems tend to fade away slowly, as systems with superior fitness replace them. There is no way to guarantee the resilience of a system that doesn’t pay attention to efficiency. It may appear to be resilient, but it will eventually be overwhelmed by a more efficient adversary. The authorities could have wrapped the Fukushima nuclear facility in layer upon layer of redundancy to make it particularly resilient. But the cost would likely have been so high that it would have made the power produced economically unviable.

So, what does the pursuit of that delicate balance involve in the context of democratic capitalism? Recall that in the economic context, more and more efficiency converts a Gaussian distribution of outcomes into a Pareto one, as the more efficient agents take a larger proportion of economic value. Since prior outcomes influence future outcomes in the economic context, the slide toward Pareto is accentuated, amplifying the direct efficiency effects. To correct for this imbalance, we need to explicitly retire the machine model of the economy and consciously adopt the model of the natural system. Hence, our design principles for achieving the delicate but more desirable balance have to take account of the three core features of a natural system: its complexity, its adaptivity, and its systemic structure. While doing so, we must not forget that we are modeling, and that whenever we model, we risk letting our proxies for progress become surrogates for progress itself. We therefore need to be more conscious both about how we model our complex adaptive system and about how we measure its progress.
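
The slide from Gaussian to Pareto can be illustrated with a toy simulation, a sketch under simplified assumptions rather than a model of the real economy. Agents start with small, normally distributed differences in “efficiency.” When rewards are collected once, outcomes mirror that bell curve; when prior outcomes compound into future outcomes, the same small differences produce a heavily skewed distribution in which the top sliver of agents captures a disproportionate share of total value.

```python
import random

random.seed(7)
n_agents, periods = 1000, 100

# Small, Gaussian-distributed efficiency differences (mean 5% per period, sd 2%).
efficiency = [random.gauss(0.05, 0.02) for _ in range(n_agents)]

# One-shot world: each agent is rewarded once, in proportion to efficiency.
one_shot = [1.0 + e for e in efficiency]

# Compounding world: prior outcomes feed future outcomes, period after period.
compounded = [(1.0 + e) ** periods for e in efficiency]

def top_share(outcomes, top_fraction=0.01):
    """Share of total value held by the top `top_fraction` of agents."""
    ranked = sorted(outcomes, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

print(f"top 1% share, one-shot:   {top_share(one_shot):.1%}")
print(f"top 1% share, compounded: {top_share(compounded):.1%}")
```

In the one-shot world, the top 1 percent of agents hold roughly 1 percent of the value; in the compounding world, with these assumed parameters, they hold many times that. The mechanism, not the particular numbers, is the point.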

Design for Complexity

The key challenge of complexity is that the operation of a complex system is so opaque and inscrutable that it is not feasible to predict performance with even reasonable accuracy, as illustrated earlier by economists’ forecasts of both America’s 2008–2009 economic performance and the consequences (or lack thereof) of the “fiscal cliff.” This means that forecasting just how much pressure a system can bear and then applying that exact amount of pressure is a more dangerous game than we imagine. LBO firms tend to think that having a high debt load—and hence high annual interest payments—puts productive pressure on management to perform. Tell that to the management teams of Toys “R” Us and iHeartMedia. Or tell the Wells Fargo managers who faced extreme pressure to get customers to open more accounts that the pressure was actually good for them. While pressure can produce more efficiency, when applied without constraints it produces Pareto results—and worse.

The design principle, therefore, is to balance pressure for more efficiency with friction to limit its damaging extremes. This will involve taking actions and establishing rules and practices that moderate and limit the amount of pressure that can be applied to a system, in order to protect against results that are both unpredictable and negative or even catastrophic in their consequences. This, I must emphasize, is not a radical or new concept. The voluntary imposition of limits and constraints in the interest of prolonging the lifespan of a system or economic agent is a survival strategy as old as humanity itself. Let me illustrate the deliberate introduction of friction in the context of America’s most iconic sport: baseball.

Mel Stottlemyre was a beloved and successful New York Yankees pitcher in the 1960s and 1970s. He won 164 games (214th all-time in major-league history) over his eleven-year career and was named an All-Star in five of those eleven seasons. He had two baseball-playing sons, Mel Jr. and Todd. Both were considered truly elite prospects as young, minor-league pitchers. However, Mel Jr. pitched only a small portion of one year in the major leagues, accumulating zero career wins, while his younger brother Todd pitched for fifteen years, won 136 games, and pitched for two World Series Champion teams. The only real difference between the brothers? Mel Jr. blew out his arm (torn rotator cuff, for baseball aficionados) while still in the minor leagues and Todd didn’t.

Baseball pitchers live in constant danger of overstressing their pitching arm to the point of career-ending injury. And in the complexity of elite pitching, it is hard to figure out when it might happen and, when it does, why. Clearly, some of it is genetic and some is random. But one factor that has come into focus in the time since Mel Jr. blew out his arm is the number of pitches a starting pitcher throws in a given game. It is not the first 80 pitches of the game that are likely to overstress a pitcher’s arm: it is the last 30 pitches in a 110-pitch game. But an aggressive pitcher will likely feel he can pitch forever—and an aggressive coach will be very tempted to believe him, especially in a crunch game. And that is precisely why in modern baseball, there are pitch-count restrictions (and innings-per-year restrictions) imposed on pitchers by their teams, especially with respect to young pitchers.4

These restrictions arguably hurt efficiency. Other things being equal, the team would want its best pitcher to pitch more innings of more games. And for a while, that might pay off. But with each game the risks will rise that the pitcher’s arm will be overstressed, forcing the pitcher to bow out for a protracted period—if not forever. Pitch-count restrictions are a guard against that—they make it more likely that the pitcher will be able to bounce back from each pitching start and will perform well for the entire year, and the next, and the next. I find it useful to think about pitch-count restrictions as a productive friction. By limiting the use of a particular resource or system component (the pitcher), the life and usefulness of that resource are prolonged, to the overall benefit of the system.

Similar friction is imposed in NASCAR stock-car racing. On the fastest, highest-banked tracks, the cars are mandated to use “restrictor plates” that reduce the airflow between the carburetor and the intake manifold and thus bring down the maximum speed of the car, because the highest speeds are too dangerous for those tracks. The plates serve as a constraint on the inherent efficiency of the engine. Drivers don’t love them. But they save lives.5

The balance of pressure with friction is not limited to the world of sports. The US Federal Reserve Board, for example, uses interest rates as its “restrictor plate.” If the Fed feels that the economy is heating up too much and in danger of creating an economic bubble and/or inflation, it can use monetary tightening to push interest rates higher—creating friction intended to slow growth. The previously mentioned Tobin tax on currency trading was explicitly viewed as an imposed friction that would slow down the most dramatic and damaging swings in currency movements. Adding a cost to the execution of each trade would discourage currency traders from trading in the unfettered way that can drive down a given country’s currency.

Of course, the principle isn’t to eliminate the pressure for efficiency. That would be equally disastrous. The principle is to create friction that ameliorates the damaging extremes of pressure, guarding against those times when more pressure is not better.

Design for Adaptivity

As we have seen above, the complexity aspect of a complex adaptive system means that the system in question is largely inscrutable, with causal relationships among elements in the system that are ambiguous and nonlinear. Even more challenging, those relationships aren’t stable. The actors in the system are continuously driving adaptation of the system. By the time we decide what to do, it is quite possible, if not likely, that the system has changed in a way that renders our decision obsolete by the time it is acted upon. And by the time we have figured that out, the system will have changed again. Because of that adaptability, our design principle must be to balance the desire for perfection with the drive for improvement.

In a machine model, the pursuit of perfection makes sense. It is sensible to analyze the machine in every detail in order to understand how to maximize its performance and, once that optimum performance level has been achieved, then defend against any attempt to change the way the machine works—because it is performing as well as it possibly can. At this point, any failure in the machine’s performance is likely to be interpreted as pilot error or not giving the machine enough input or time. This is what philosophers call a justificationist stance. There is a perfect answer out there to be sought, and when that perfect answer is found, the search is over. The task then turns from searching for the perfect answer to protecting the perfect answer against any attempt to alter it. It feels noble to aim for, fight for, and protect perfection.

However, in an adaptive system, there is no perfect destination; there is no end to the journey. The actors in it keep adapting to how it works. In nature, this happens reflexively, as with a tree that turns toward the sunlight and, by growing taller, obscures the sunlight for the trees in its lengthening shadow. In the economy, adaptation happens reflectively. People take in the available inputs and make choices, and those choices influence the choices and behaviors of the other humans in the system. This means that players will try to game any change in a system the moment that change is put in place. It is both natural and inevitable. If you are offered a bonus for achieving your sales budget, you will work hardest not to sell more of the product or service in question but rather to negotiate the lowest possible sales budget, to make achieving it easiest and most likely. If you are the Lehman Brothers CEO or CFO and you know that investors will punish you for reporting a high level of debt, then you will use the Repo 105 device in ways that were never intended, in order to disguise your dangerously excessive level of debt. Attempting to prevent gaming with an inspired design is therefore a fool’s errand. Gamers will exploit whatever solution is in place, and sooner or later the solution will become dysfunctional.

So, although the pursuit of perfection may seem like a noble goal, in a complex adaptive system it is delusional and dangerous. In a cruel paradox, seeking perfection does not enhance the probability of achieving said perfection. In a complex adaptive system, it is not possible to know in advance the organized, sequential steps toward perfection. Guesses can be made. Better and worse vectors can be reasonably chosen. But perfection is an unrealistic direct goal, with the problematic downside of creating a paradise for gamers. As justificationists staunchly defend a system they perceive to be perfect, gamers are only given more time and space to enrich themselves at the system’s long-term expense.

Hence, we must balance the understandable desire for perfection with an incessant drive for improvement. Given the impossibility of finding anything resembling a permanent fix, we need to engage the system not in a spirit of periodically and dramatically fixing it, despite the political attractiveness of seeming to offer “the answer,” but rather with the intention of tweaking it on a continuous, incremental basis. Those tweaks will never be perfect. In a complex adaptive system, there are no perfect answers, or even perfect directions. There are just better and worse ones in the moment. And the best ones will still be found wanting. Error is not a bug: it is a feature, an important and inevitable signal that it is time to initiate another intervention into the complex adaptive system. The experience of error is consistent with the march toward ever-better answers—ever-better models. Even the cleverest designs will be gamed. Like the economy, software works as a natural system in this respect as well: hackers will always hack, and patches will always be needed—again and again and again.

The approach of continuously tweaking rather than seeking singular perfection should resonate with the American legal community. America’s common-law legal system (borrowed, of course, from England), in contrast to the civil-law systems found in France, Germany, Japan, and China, for example, is based on the principle that the laws of the land will be tweaked through their interpretation by judges in the context of specific cases. American democratic capitalism should take a page from American jurisprudence in embracing relentless tweaking as the best way to progress toward a perfection that will never be achieved.6

Design for Systemic Structure

Recall that in a natural system there are numerous interdependencies among the elements in the system. The core design principle with respect to the systemic structure is to balance connectedness with separation. As with the overall balance between efficiency and resilience, the complexity balance between pressure and friction, and the adaptivity balance between perfection and improvement, there is danger on both sides of the desirable path.

With respect to connectedness, as we’ve seen, interdependence between outcomes over time amplifies the gravity effect of efficiency. It’s no surprise, therefore, that the transformation from a Gaussian distribution to a Pareto distribution is accelerating. We are currently connecting more and more things, in more and more tightly coupled ways.7 The internet of things (IoT) is the latest generation of enhanced connectedness. Untold billions of devices will be connected to provide real-time information, computer-to-computer. Systems everywhere are becoming tightly coupled. Lots about it is good, indeed very good. A connected world is more efficient. A connected world drives out transaction costs and unnecessary rework. Humans are already tightly connected at fractional costs; now machines will be too, and machines to humans, and vice versa.

But tightly coupled systems can fail catastrophically. In 2003, the entire US Northeast (plus Central Canada) experienced a power blackout because a single power line in Ohio came in contact with a tree branch. This relatively minor fault cascaded through the tightly coupled system and, once it hit a critical stage, 265 power plants went off-line in about three minutes. It took a week to put that particular Humpty-Dumpty back together, during which tens of millions of people and their businesses went without electrical power.8

The solution for avoiding this sort of calamity is not full separation. It is to attempt to understand and benefit from connectedness but balance it with separation, because when outcomes are interdependent, effects become the causes of more of the given effect and the proverbial sand pile collapses. That is precisely why tightly coupled systems are prone to total meltdowns. The most direct way to reduce dysfunctional connectedness is to install firebreaks, an idea that comes from forest management. In a dry season, a bolt of lightning may set a tree ablaze, and if there is a strong wind and all the trees are dry, the fire can spiral rapidly out of control. To reduce the risk of losing commercial timber to fire, forest managers introduce gaps across which the fires cannot easily pass, whether an existing road, a stream, or a path bulldozed to impede a fire already underway. A firebreak creates productive separation in a system that would otherwise be too tightly coupled to exhibit the requisite resilience.

Productive separation has long been a tool in financial regulation. The Glass–Steagall Act of 1933, which was part of the New Deal legislative agenda, was designed to create a firebreak between the banking and securities industries after the crash of 1929. To be sure, some historians do not believe that the crash of 1929 was driven by commercial banks inappropriately participating excessively in the securities business, but that was certainly the assumption of the framers of Glass–Steagall, which prevented commercial and investment banking institutions from venturing into each other’s businesses. The idea was that if one of the industries experienced a meltdown, the other would remain relatively unaffected. In 1956, the Bank Holding Company Act plowed a further firebreak, between banking and insurance.

These firebreaks were removed starting in the 1980s in the spirit of efficiency and connectedness. The theory was that a firm that spanned these industries would be able to serve customers in a broader and more seamless fashion and would therefore be a more efficient delivery mechanism. Once again, experts disagree on the consequences of removing these firebreaks. Some view the removal as a factor contributing to the global financial crisis.9 Others view it as not relevant. It is never possible to be certain in such a complex adaptive system, but it is clear that adaptation of a sort not necessarily anticipated by those removing the firebreaks ran rampant. The largest commercial banks in the country used retail deposits to fund risky securities-industry activities that would have horrified those depositors had they known what the capital base provided by their deposits was being used to underwrite.

Another bit of productive separation is the circuit-breaker system used today on most stock markets. The idea of the circuit breaker was spurred by the October 19, 1987, fall in the Dow Jones Industrial Average of 508 points. At 22.6 percent, it remains the biggest single-day relative drop in the index’s history, although its absolute size has since been exceeded. The problem was that the effect (dropping stock prices) became the cause (I should sell now before the market drops further) of more of the effect (dropping stock prices), and so on, for 508 points of decline.10 The tight connection of instant data and instant reaction producing more data and more instant reactions had to be separated. That was achieved by way of a circuit breaker. Following a drop of predetermined magnitude, trading would be halted to give the participants a chance to think more carefully, rather than simply racing to get their trades in before everyone else and losing even more.
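
The logic of a circuit breaker is simple enough to sketch in a few lines of Python. The threshold and index levels below are assumed for illustration; the actual market-wide rules use specific levels and halt durations that vary by severity.

```python
# Hypothetical threshold for illustration only.
HALT_THRESHOLD = 0.07  # halt trading once the index has fallen 7% from its reference level

def should_halt(reference_level, current_level, already_halted=False):
    """Halt trading when the decline crosses the threshold, breaking the feedback loop."""
    decline = (reference_level - current_level) / reference_level
    return already_halted or decline >= HALT_THRESHOLD

reference = 2700.0                                       # assumed prior-day close
halted = False
for price in [2695.0, 2640.0, 2560.0, 2505.0, 2480.0]:   # an assumed intraday slide
    halted = should_halt(reference, price, halted)
    print(f"index at {price:7.1f} -> {'TRADING HALTED' if halted else 'trading continues'}")
```

The halt does nothing to change the underlying news; it simply inserts separation in time, so that the effect (falling prices) stops feeding the cause (panicked selling) long enough for participants to think.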

While there are examples of firebreaks designed to balance connectedness with separation, there simply aren’t enough. Some, as in the case of Glass–Steagall, have been eliminated in pursuit of higher connectedness for efficiency. That might be justifiable in individual cases, because balance, not an extreme of separation, is the goal. At present, however, the balance between higher connectedness for efficiency and greater separation for resilience needs to be redressed in favor of separation.

As you manage this balance, be aware that connectedness is an inherent property of natural systems and a source of efficiency, which needs encouragement as much as resilience needs protection. It is important to look out for and recognize these connections, because failing to do so risks a reductionist’s trap: missing the ways in which dysfunction in one part of the system can spread to another. A further complication is that many important connections hide in plain sight.

To illustrate, let me trace a connection that contributed to the collapse in the mortgage-backed bond market in 2008. In the capital-markets system, investors who buy bonds, including portfolios of derivatives based on bonds, count on bond-rating agencies—Standard & Poor’s (S&P), Moody’s, and Fitch—to help them judge the riskiness of the bonds in which they are investing. Some investors, like pension funds, are required to hold only bonds that are rated “investment grade” (BBB− or higher for S&P and Fitch, Baa3 or higher for Moody’s), a requirement intended to make sure that these funds are not taking risks that jeopardize the retirement payments owed to pensioners, current and future. Hence, they must use the bond raters. Other investors choose to avail themselves of the reports of bond raters in order to manage their risks.

To do a useful job in rating bonds, bond raters at any one of these rating agencies need to be very talented and experienced at understanding with great precision the inherent risk associated with the entities issuing the bonds they are rating, and the degree to which the bond holder is or is not protected from those risks by the covenants in the bonds’ debenture agreements.

The interesting thing about anybody who happens to be highly skilled at that core and necessary task is that they would also be great at trading bonds. And trading bonds—whether on one’s own behalf or for a bond-trading company or the bond-trading desk of a larger financial institution—is far more lucrative than working as a bond rater for one of the three bond-rating agencies. Hence, while there will be inevitable exceptions to the rule, most people who are highly skilled at understanding bonds well enough to rate them will be sitting at a bond-trading desk, not a bond-rating desk. Those who aren’t so good at rating bonds will occupy the majority of the desks at the bond-rating agencies.

Observers and participants were surprised and outraged at how dreadful the bond-rating agencies had been at rating the packages of mortgage-backed bonds that triggered, or at the very least exacerbated, the global financial crisis. In our system for ensuring the stability of the capital markets, we assumed that bond-rating agencies were staffed by experts in rating bonds, who would consistently provide high-quality advice to bond investors. However, during the financial crisis, large volumes of mortgage-backed bonds rated AAA by the bond raters defaulted despite the reasonable expectation of investors that the default rate for AAA bonds is virtually nil. It should have been impossible for bundles of mortgages expertly rated AAA—metaphorical miles away from the marginal BBB– rating—to default and in some cases lose all their value. And yet they did, and in large volumes.

The fundamental problem was that the truly expert bond raters weren’t rating bonds at S&P, Moody’s, or Fitch. They were at bond funds and investment banks, busily creating structured bond portfolios designed to fool less savvy bond investors and their overmatched bond raters into believing that the structured bond products were AAA-grade when they were actually thinly disguised junk. Why did this happen? Fundamentally it is because the players involved in the system, from investors to regulators, assumed the absence of any connection between the system by which bond experts chose their place of employment and the system by which bond ratings of a given assumed quality are generated. In reality, those two systems were connected in a subtle but fundamental fashion, the ignorance of which created significant damage.

Designing to avoid this underappreciation of connectedness—i.e., too much assumed separation—is tricky. The complexity element of a complex adaptive system ensures that it is not a straightforward task to identify all the relationships in the system. Once again, systems theorist and professor John Sterman provides useful perspective: “There are no side effects—only effects. Those we thought of in advance, the ones we like, we call the main, or intended, effects, and take credit for them. The ones we didn’t anticipate, the ones that came around and bit us in the rear—those are the ‘side effects.’”11

It would be unhelpful to admonish designers to figure out all the effects in advance so that the system doesn’t fail because of “side effects,” since there will always be unexpected effects. On September 11, 2001, the majority of large organizations located in lower Manhattan had carefully designed backup telecommunications facilities to take over in case their primary facilities failed. However, none of those organizations anticipated that the majority of the redundant communications cables in lower Manhattan ran directly under the World Trade Center and would be wiped out when the Twin Towers collapsed.12 To ask the designers to have anticipated a terrorist attack that would destroy New York’s two biggest buildings is entirely unrealistic.

But it is not unrealistic to ask designers of systems to pay close attention to unexpected effects when they show themselves, rather than ignore them because they don’t fit the model. I learned an important lesson in this when I studied what helped Dr. Stephen Scherer become one of the world’s preeminent researchers of autism spectrum disorder and genomics. For Scherer, anomalies are a treasure trove: “My belief is that answers to really difficult problems can often be found in the data points that don’t seem to fit existing frameworks. To me, those little variations are like signposts saying: Don’t ignore me!” He believes that all of his research successes stem from paying attention to “the data that everybody else was throwing away.”13

While Dr. Scherer may be in the minority in terms of keeping an eagle eye on effects that don’t comport with the model of the day, he isn’t alone in doing so. Sometimes the financial regulators do pay attention. When the consensus estimates of analysts and their buy/hold/sell recommendations became more important and central to the capital markets in the wake of the creation of First Call and the passage of the Private Securities Litigation Reform Act, as described in chapter 4, regulators noticed that the analyst ratings were becoming ever more dramatically weighted toward “buy” recommendations. Why would that be? Well, it was very helpful to the analysts’ colleagues in the investment-banking department, when they were attempting to win business from a given company, if there was a positive, optimistic buy recommendation out on that company’s stock. While regulators had hoped that the analysts would do rigorous evaluations of each company’s stock and provide unbiased recommendations, it turned out that there was an unforeseen connection between the hopes and desires of the investment bankers and the preponderance of buy recommendations on the part of their analyst colleagues.

In 2002, in response to understanding this connection, the Securities and Exchange Commission approved what was then NASD Rule 2711 (now administered by the Financial Industry Regulatory Authority, or FINRA), mandating that “a member must disclose in each research report the percentage of all securities rated by the member to which the member would assign a ‘buy,’ ‘hold/neutral,’ or ‘sell’ rating.”14 While it is unlikely that this has fixed the problem entirely, at least it adjusts for connectedness where it was shown both to exist and to produce problematic outcomes.

In these ways, designing for systemic structure can create productive separation where tight coupling threatens to produce extreme Pareto outcomes and can adjust for nonobvious connectedness that would otherwise produce equally problematic outcomes.

Summary, and Introduction to the Agendas for Change

To summarize, the overall design principle for repairing the dysfunctions in our democratic capitalist system is to balance efficiency with resilience. To achieve that overall balance, we need to pursue three individual balances: first, between the application of pressure and the imposition of productive friction; second, between the desire for perfection and the drive for improvement; and third, between the march toward connectedness and the enforcement of separation. The prescriptions for a broken system need to be consistent with the nature of that system—in this case a complex adaptive system, not a complicated machine.

As we embark on our journey to repair our broken system, we need to recognize that no single actor can create the necessary shift on his or her own. Unsurprisingly, in this complex system, there are many important and interrelated actors. Given the diagnosis to date, it would be silly and unrealistic to imagine that one actor could fix the problems identified. All the stakeholders in democratic capitalism have their parts to play. Business executives need to embrace that they are participating in a complex adaptive system rather than operating a machine—and manage accordingly. Political leaders need to recognize both that their job is not to promote unalloyed pursuit of efficiency and that they are overseeing a complex adaptive system rather than a perfectible machine. Educators need to understand that their job is to prepare graduates to operate effectively in a complex adaptive system—and teach accordingly. Citizens have to take on the assignment of aiding the functioning of the complex adaptive system by enforcing more friction in order to encourage more resilience.

As mentioned earlier, in a break with tradition in books of this sort, I will recommend only prescriptions that have already been used or are in place. They may come from a different context or jurisdiction or time. But they are not theoretical; they are actual. They aren’t speculative; they are doable. There are lots of them. We don’t necessarily need all of them, but we need most of them together to restore balance and save American democratic capitalism.

That will have a particular implication for readers. A business executive, political leader, educator, or citizen may dismiss a recommendation by saying: “This isn’t a useful recommendation. I am doing that already.” But every recommendation I include in the next four chapters was chosen precisely because someone, somewhere, can say: “I am doing that already.” Yet that doesn’t negate the recommendation. It means instead that more people in more places have to be doing what you are doing, and I would encourage you to help those people, by your example, to adopt the behavior in question.

On to chapter 6, an agenda for business executives . . .
