Chapter 7

An Agenda for Political Leaders

September 2008 was a particularly bad month for American political leaders, whether elected politicians or senior government administrators. The financial turbulence that began in 2007 reached a crescendo in September 2008. On September 7, the Treasury Department was obliged to mount a bailout of Fannie Mae and Freddie Mac, committing up to $100 billion to each of the federally backed mortgage-guarantee agencies Congress had created in 1938 and 1970, respectively, to promote American home ownership. On September 15, Lehman Brothers (of Repo 105 fame) declared bankruptcy due to its heavy exposure to mortgage-backed bonds, having failed to obtain a bailout from the Federal Reserve and the Treasury, which deemed the investment bank small enough to fail. The very next day, the impending collapse of insurance giant American International Group (AIG) forced the Federal Reserve to intervene with an initial $85 billion bailout (later increased). On September 25, the Office of Thrift Supervision seized Washington Mutual, the country's largest savings-and-loan, and placed it into receivership.

Four days later, on September 29, the House of Representatives rejected a more comprehensive bailout program, the Troubled Asset Relief Program (TARP). That rejection precipitated a 777.68-point drop in the Dow Jones Industrial Average the same day, the biggest single-day point drop in the index's history to that point, forcing Congress to reverse course and pass TARP on October 3. But the damage to confidence had already been done. Between September 19 and October 10, the Dow dropped 31 percent—the biggest one-month drop since another miserable fall month on Wall Street: September 1929. Banks stopped lending, even to their best clients. Commercial-paper programs, which had become a key source of short-term funding for large companies, lost the support of their financial intermediaries, threatening the solvency of even blue-chip industrials.1 America's six largest banks all received TARP assistance ranging from $10 billion to $25 billion to ensure their solvency and their continued ability to serve their customers.2 Nonetheless, the economy went into a deep recession, unemployment soared, and what followed was one of the slowest economic recoveries in US history.

The six biggest US banks that received TARP funding are the first, second, third, fourth, seventh, and eighth largest banks in North America. On the same list, Canada's "Big 5" banks rank fifth, sixth, ninth, tenth, and eleventh.3 But unlike the six big US banks (and many smaller US banks), the five big Canadian banks did not receive or require bailout money from their government. Despite its proximity to and interdependence with the US economy, the Canadian financial system did not experience a crisis of confidence and did not require the huge bank bailout programs that took place in the United States and most of the world's other advanced economies—from the United Kingdom to most of Europe and Australia.4

There were of course many factors behind this difference; there is never just one. But at the very least Canada’s policy approach to this vital sector has to be considered as one important factor, for it is very different from the US approach—in ways that implicitly recognize the financial system as a complex adaptive system and in ways that inform the agenda I propose here for political leaders. The first point of difference is that regulation is not seen as providing solutions so much as improving financial practice.

From Solutions to Improvements

The most important and comprehensive financial regulatory legislation in Canada is the Bank Act, which was enacted in 1871, four years into Canada’s life as a sovereign country. The Bank Act came with a very unusual provision: a required decennial review. Regardless of the situation, regardless of the political context, the act was to be formally reviewed every ten years. While historians are not clear on how the Bank Act came to contain the decennial-review provision, they believe it was requested by the banks themselves, with speculation that bank leaders at the time felt that the periodic review would keep them better connected to their customers.

In any case, ever since 1871, the Bank Act has been periodically reviewed and tweaked to maximize its future effectiveness. In fact, in 1992 the review interval was shortened to five years. As a consequence, there will never be a big partisan political fight over whether or not to “repeal and replace” the Bank Act. It is thoroughly reviewed just because five years have passed. This has enabled Canada’s regulators to balance continuity with change, tweaking regularly so that the system never becomes unbalanced.

In addition, the regulatory oversight in Canada is much more relationship-based than rule-based. Canada’s top regulator during the global financial crisis and its aftermath was Julie Dickson, Superintendent of OSFI (the Office of the Superintendent of Financial Institutions). She was well known for her habit of flying regularly from her Ottawa office to Toronto to meet with the CEOs of the five big Canadian banks. No one but Dickson and the CEOs knows exactly what went on during those one-on-one discussions. But the rumor on Bay Street (Canada’s equivalent of Wall Street) was that Dickson was not shy in offering her opinions, most notably in suggesting that the banks would be unwise to emulate their US counterparts in fueling a housing bubble by offering easy long-term fixed-rate mortgages. In a 2010 article, Dickson herself stressed the importance of informal oversight of this kind: “Rules—such as minimum capital requirements, leverage ratios, limits on activities—are important, but in the Canadian experience, the actual day-to-day supervisory oversight of financial institutions is just as significant . . . Stricter rules, like substantially higher capital requirements, can create a false sense of security; an institution will never have enough capital if there are material flaws in its risk management practices. That is why supervision matters.”5

From the Parts to the Whole

In the United States, the various financial institutions are regulated by a veritable crazy quilt of narrowly focused federal and state bodies, all operating independently of one another and jealously guarding their separate patches. Nobody has the mandate to take a holistic, systems approach to the oversight of US financial institutions. If the system fails, everybody can and will argue that it wasn't because they failed to manage their own little part of it.

Canada is very different. Since 1987, there has been just a single regulator of financial institutions: OSFI. To be sure, within OSFI there are specialized departments focusing on the different parts of the system, but the fact that they are all part of the same agency makes coordination and information sharing much easier and largely eliminates the federal/state/sector distinctions characterizing the US regulatory system. As a result, OSFI can take a broader system view on financial institutions than any US regulator can. In effect, the Canadian regulatory system mirrors the connectedness of the different parts of the real financial system—not the partitioned, reductive construct assumed in US legislation—and that makes it easier for Canadian regulators to see when a dysfunction in one part may have systemwide effects.

From Global Pressure to Local Friction

Finally, the big Canadian banks were not allowed to merge. Two pairs of the five—Royal Bank of Canada and Bank of Montreal, TD Bank and Canadian Imperial Bank of Commerce—attempted to do so in 1998. They argued that in order to compete in an ever more intensely competitive global market, they needed the increased efficiency that would come from merging, taking out redundant costs, and increasing their competitive heft. The Finance Minister at the time, Paul Martin, thought otherwise and refused to allow the two mergers—which would have resulted in two banks controlling approximately 70 percent of Canada's banking assets.6 And despite the bank leaders' dire warnings at the time that disallowing the mergers would make it hard to keep up with global competitors, the Canadian banks in question rank higher in the league tables now than they did in 1998, in part because in the lead-up to and through the global financial crisis, they were focused on banking while the big US banks were focused on merging.

Public policy in Canada with regard to banking provides a number of pieces of the agenda for balancing efficiency and resilience in a complex adaptive system. It features continuous tweaking instead of fixed, permanent rules. It takes a holistic perspective rather than many reductionist ones. And it is willing to enforce productive friction to offset the constant pressure for ever more efficiency. Let’s look in more detail at these and other shifts in the Canadian approach to policy making and regulating that US political leaders should consider as they contemplate how they can restore balance to democratic capitalism.

Write Revision into the Laws You Make

When Americans identify a problem, their typical reaction is that there should be a law or a rule about it. The United States passes lots of laws and issues many rules as a result. Sadly, the economy doesn’t appear to present problems that are permanently fixable. As I pointed out in chapter 4, US policy makers imagined that the Sarbanes–Oxley Act, or SOX, would prevent large-scale corporate abuse. We saw how that worked out, so why should we have assumed that the Dodd–Frank Act of 2010 would succeed in fixing the system where SOX had failed? It, too, was designed to be the comprehensive fix. Yet it has already been dramatically altered, with much controversy around the alterations, in part because it was designed to be permanent.

Once again, the explanation and the solution lie in realizing that the economy and the business world are complex adaptive systems, which quite simply adapt. In other words, when problems recur in natural systems, they are never quite the same as they were before. We can protect people against all the flu viruses we know, but we can’t guard against the next adaptation of the virus. We can similarly protect people against the exact repeat of past corporate abuses. But because the players in the system adapt their behavior to game the new legislation, the next banking crisis is not going to be an exact replica of the last. What’s more, establishing complicated rules for a game understood in its entirety by no one will inevitably offer opportunities for gaming from the start—in part because the consent of would-be gamers is necessary for the law to pass in our democratic system. Even if that can be avoided and the rules do work at first, players will sooner or later find ways around them because they will have many more years to benefit from the gaming, since no one will have the time, patience, or energy to reengage with lawmaking on the subject again—absent another crisis. Legislative permanence plays right into the hands of the gamers and creates more investment in gaming than would otherwise be the case.

In the natural model of the economy, regulation should be treated less as a cure and more as an exercise in learning and development. And that was the extraordinary insight of the framers of the first Canadian Bank Act. In building in a requirement for regular revision, Canadian lawmakers were implicitly treating the new law not as a solution but as a prototype from which learning would take place, based on the interaction between the prototype and the complex adaptive system it enters. The idea is to take encouragement from the learning that comes from experiencing and observing the flaws in action, and then to tweak and tweak to make the prototype better and better until it is genuinely good—maybe even close to perfection. In essence, it means policy makers should act like software companies: promise imperfection followed by speedy fixes. In that industry, customers who prefer to get the software early receive it knowing it will come with bugs; that isn't a surprise. Customers who don't want bugs are free to wait for later releases. That system works for all customers and for the software producers.

Every new piece of legislation dealing with the economy should be made subject to periodic review and sunsetting if it doesn’t pass muster in such a review. This will raise the cost and lower the value of gaming by shortening the period during which the profits from gaming can be accumulated.

On occasion, America does engage in tweaking or sunsetting legislation. The aforementioned TARP is a good example. Under TARP, the Treasury was authorized to spend up to $700 billion to purchase troubled assets, originally defined as residential or commercial loans in default that were threatening the solvency of the financial institutions holding them. The definition was later expanded to include any financial instrument whose purchase would promote financial-market stability. The program's authorization was later cut back to $475 billion, and it ended up disbursing a little over $400 billion.7 To the surprise of almost everyone involved, TARP ultimately didn't cost the taxpayers a penny, because the Treasury turned a $15 billion profit on the program—even though the assumption going in had been that all $700 billion would be spent without any recoveries.8 And, once the crisis had passed, TARP came to an end.

Other domains provide even better examples, notably in sport. While the National Football League (NFL) is of course no paragon of virtue, with its misbehaving players and physical brutality, it does understand and take seriously the detrimental gaming of its very valuable game—even totally legitimate gaming by clever coaches. Its standing Competition Committee meets after every season to take stock of how the rules in place did or didn't produce an optimal outcome on the field for the fans. It has the mandate to tweak—and in fact routinely does tweak—the rules that govern play on the field every year. The committee tweaks the rules to keep offense and defense in relative balance, because if the two get out of balance, the game becomes more predictable and less exciting for the fans. If offenses start to dominate defenses, games become back-and-forth offensive races down the field, while if defenses dominate offenses, games end with low or no scores. Both are less enjoyable outcomes than a rough balance between offense and defense. Thanks in part to this relentless tweaking of the game, the NFL has become the most popular and lucrative sports league in America.

Nothing lasts forever—but much legislation implicitly assumes that it does. The assumption should be that all games get gamed and need to be designed for continuous tweaking. The Canadian Bank Act, TARP, and the NFL Competition Committee show that it can be done successfully, and any government actor can follow suit.

Seek Mental Proximity When Designing Policy

Systems-theory professor John Sterman reminds us that everything we think, do, or say is based on a model. Hence policy makers have no choice but to model their citizens when they design policy for them. They do, however, have a choice about how to do so: their model of the people can be informed either by mental distance from those citizens or by proximity to them.

In the former camp, we can make general assumptions about “what poor people need” or “what single mothers need” or “what will influence Wall Street executives,” and then build complex and highly theoretical models on those assumptions to determine how to legislate and regulate. Unfortunately, although this approach has a pleasing formal rigor about it (it is, after all, how we construct mathematical models), what comes out of it is often highly flawed.

I recall vividly a small example from my time serving on the board of a world-leading nonprofit pediatric hospital in Ontario. The Ministry of Health, which oversees all public hospitals in Ontario, had mandated that hospital CEO compensation packages contain some incentive compensation based on measurable patient objectives, and the board was considering a package for our CEO that contained measurable objectives for the rate of infant patient mortality at the hospital. If she achieved a sufficiently low infant-mortality rate, she would earn a significant additional bonus. By the conclusion of the board discussion, I had little confidence that the compensation committee had determined that our CEO would actually find this incentive motivational.

The CEO had begun her illustrious career low on the hospital totem pole as a nurse, and through hard work, dedication, and native intelligence, had made her way up to the top. By the time of this compensation decision, she was a very successful and revered leader. When I later asked her whether she would be motivated by the infant-mortality-rate target, I knew what her response would be before she gave it: “I can understand why the board would want to install performance incentives into our compensation system. The Ministry of Health is asking for something on that front. But since I began here as a twenty-three-year-old nurse, I have dedicated my life to minimizing the number of babies who die in our care. Nothing the board says or does, no amount of compensation, will change by one iota the dedication with which I pursue my goal of helping every baby who arrives at our hospital leave as healthy as possible.”9

To be sure, the ministry policy change made sense and was consistent with prevailing compensation theory. But what resulted didn’t take into account the real person sitting in the CEO chair, a real person who would have been happy to answer any question we asked her in advance of designing a compensation package that insulted her more than it motivated her.

Given the likelihood that traditional, theory-based approaches will produce deeply flawed outcomes such as this, policy makers will be well advised to shift to an alternative approach, one that is informed by up-close observation and interaction with real citizens. Such an approach is much more likely to generate adaptable prototypes for legislation and regulation that will fit the needs and features of the target population.

It's relatively easy to go talk to a CEO about a compensation plan, of course. But what about a big government initiative affecting thousands, even millions, of people? That's been successfully done. In 2010, when the UK government was putting together what turned out to be the massively successful GOV.UK website service, which went live in 2012, the Cabinet Office created a unit called the Government Digital Service (GDS). The new unit was led by Michael Bracken and Tom Loosemore, whose first task was to convince the various government departments and agencies, which collectively produced hundreds of UK government websites, that the true customer of those websites wasn't the departments and agencies themselves but rather the citizens they were set up to serve.10

Even then, the customers that the website producers cared almost exclusively about were those who could make their lives most miserable: reporters and think tanks, who from time to time would criticize what they either saw or failed to see on a government website. Because they had a voice, they scared the government officials. Bracken and Loosemore set out to find out what citizens who didn’t have such a voice wanted and needed from government websites, rather than design their solution based on satisfying the agencies and departments or mollifying reporters and think tanks.

Their first initiative was a quantitative study of the search data produced by ordinary citizens' visits to UK government websites. When they published their initial results, they came under withering attack in a blog post by a qualitative-research expert named Leisa Reichelt. Rather than defend their model or double down on it, they did the opposite and hired Reichelt as GDS's head of user research. Bracken and Loosemore credit Reichelt with integrating deep qualitative user research into all GDS work. Through the first year of design, GDS conducted in-depth ethnographic interviews with hundreds of citizens, watched them use the existing sites, and had them try out initial designs of new ones. Reichelt insisted that the whole team—from developers to designers to product managers—watch the full videos of the relevant citizen interviews before coming up with design plans. At first Bracken and Loosemore worried that sitting through entire videos would be excessively time-consuming. But they soon came to understand the profound value of doing so.

Loosemore tells of a particularly striking insight from an interview with a woman in her forties from the north of the United Kingdom on the topic of replacing her passport—one of the many reasonably high-volume citizen use cases. As he explained, there are two very different processes to follow, depending on whether one has lost a passport or had it stolen. The latter, for example, involves filing a police report, which is not part of the process for the former. Because of the major differences, the beta version of this service had two entirely separate landing pages: one for users who typed stolen passport into the search box, and another for those who typed lost passport.

The woman in question told the interviewer the traumatic story of having recently had her passport stolen, so the interviewer encouraged her to test the system, unprompted, by proceeding with a search. Loosemore watched in amazement as she typed in lost passport and was taken to the newly designed landing page for lost passports. Only then did he realize that in her mind she had indeed "lost" her passport: she had it before it was stolen and didn't have it now. Thus, it was "lost" to her. She had no way of knowing in advance that typing the word lost rather than stolen would guide her down a track that would not result in a new passport anytime soon. After lots of online form filling, she would have discovered that she needed to find the process for a stolen passport. As a result of watching this user actually interact with the design, the team reversed course and created a single landing page for anyone typing either lost passport or stolen passport. On that page, users would find detailed descriptions to help them clarify which button to click—stolen or lost—to get to the process that matched their needs.

Interestingly to me, Loosemore used the term friction to describe what they did in that instance. They had initially aimed to take out all the friction and get visitors directly to "lost" or "stolen." The additional combined landing page was what he called "a speed bump on the way." Going a bit slower ended up saving time! The insight from the passport case was applied to other services. For example, for the service of acquiring power of attorney (typically obtained by a son or daughter on behalf of an aging parent), they put in a speed bump that forced applicants to go back to their family and ask a number of questions before plowing ahead with the application, because the user researchers had found that if those conversations didn't happen, there could be deep resentment over who got power of attorney over what. Here again, the pursuit of unalloyed efficiency was suboptimal. Balancing the pressure for the fewest possible clicks against the friction of speed bumps produced a high enough level of efficiency while also building in resilience, by not inadvertently leading people down unproductive paths.

Achieving mental proximity to users helped the GDS team reduce the hundreds of UK government websites to a single site—GOV.UK—whose usability is now considered a global standard. In 2013, the site won Design of the Year from the Design Museum, the United Kingdom's most prestigious design award, which is open to entrants of every kind across the entire country, not just government. In fact, in the award's eleven years thus far, it has gone to a government agency only once: for GOV.UK.

Dial Up Productive Friction in Trade

As we saw earlier, the prevailing wisdom in policy circles has long been that lowering trade barriers is an unalloyed good. That is why, since 1947, the United States has pushed for freer trade, through the General Agreement on Tariffs and Trade as well as through bilateral and (with Canada and Mexico) trilateral agreements. The country, moreover, has led by example, making its own economy perhaps the world’s most open marketplace. By and large that leadership has been necessary, because back in 1950, when average trade tariffs stood at 25 percent of the value of goods traded, the balance between efficiency and friction in trade was unquestionably too far in the direction of friction. Productive countries were operating way below capacity, to the detriment of all but a very few of their citizens.

At today's 4 percent, however, it is legitimate to question whether America has overcompensated. To begin with, although free trade does benefit the overall economies of the trading partners, it most certainly does not benefit everyone in the economies in question. As economist Dani Rodrik points out, there are always losers from trade, and in a developed country like the United States, the loser is overwhelmingly unskilled labor. Although economists have historically judged such losses to be small and inconsequential, more recent work demonstrates that they are meaningful and permanent. For example, in the decade following the passage of the North American Free Trade Agreement, high-school dropouts in locales whose employers were heavily affected by imports experienced wage growth 8 percentage points lower than that of their counterparts in locales whose industries were not similarly affected. Overall wage growth in industries that lost protection from imports fell 17 percentage points relative to that in industries that did not.11

Contributing to this imbalance is the fact that American trade policy is in fact far freer than that of its trading partners. Virtually every major developed country restricts competition in sensitive sectors of its economy in numerous ways. Canada, for example, notionally allows foreign banks to "compete" in its market, but a subtle array of restrictions on those banks has enabled the five big Canadian banks to dominate their home market—while growing to substantial size in the attractive, sizable, and quite open US banking market. And while US automakers can in theory export cars into the very large Japanese car market duty free, US car exports to Japan are close to nil. Myriad nontariff restrictions dramatically raise the cost of selling an American-made car in Japan, from safety inspections to zoning regulations for dealers. As a result, while Japan exports over 1.7 million automobiles to the United States in a year, the United States exports just 17,000 to Japan.12 Some argue that this is because American vehicles are not competitive in Japan. But it is not just American vehicles. Japan has managed to keep foreign-vehicle penetration of its home market under 10 percent—a strikingly low share to be sure, and one out of step with all other advanced markets, including that of car-producing neighbor South Korea.13 Yes, Japan makes great cars, but not that great! The list goes on and on. The European Union protects farmers. France protects yogurt. China protects whatever it feels like protecting, with absolutely no apology.

The point is that, going forward, America’s political leaders should become more careful about removing trade frictions than they have been. In a complicated world, what seems a barrier to efficiency to one group of players is a necessary protection for the livelihoods of another group of players. For America, it probably is time to take a pause on further opening of its economy to more unfettered trade. It is the world’s biggest internal market and already the most open economy of any size in the world. America doesn’t need more free-trade deals. It needs to reestablish the faith of its citizens in democratic capitalism with more balance between pressure and friction when it comes to trade policy.

Fight the Giants

Antitrust policy was initiated with the passage of the Sherman Antitrust Act in 1890, which sought to counteract industry-based Pareto outcomes in which one participant gained such a dominant position that it could reap a disproportionate share of the rewards from that industry. Now, when we need it more than ever, antitrust enforcement is weaker than it has been since its inception, largely thanks to the growing perception that today's would-be monopolists will be so efficient that we will all benefit.

What that argument ignores is that the motive behind the original antitrust legislation was not the impact of monopolization on efficiency, whether positive or negative. In fact, the assumption behind public utilities, like power and water, has always been that having a monopoly will surely be more efficient than having a number of smaller competitors replicating one another’s investments and cost structures. Rather, the fundamental concern was the accumulation of power by the producer over the customer. So, our policy was to allow public-utility monopolies in domains such as electricity and water, but then strictly regulate those monopolies in order to counterbalance their market power. To remove regulations purely on the basis of efficiency, therefore, is to misunderstand and even negate what antitrust legislation was designed to do in the first place, which is to prevent the most efficient player from abusing its power.

And I'm afraid those players are doing exactly that. The argument that the technology companies that preside over two-sided monopolies aren't hurting consumers because their service is free is misguided. In a two-sided market, there are—as the name indicates—two sets of customers. For Google, for example, on one side is the search-engine user and on the other side is the advertiser seeking to communicate with that searcher. To say that Google couldn't be causing harm as a monopolist because it doesn't charge anything to the first set of customers is embarrassingly naive.14 The other set of customers—the advertisers, who increasingly get gouged as Google becomes more of a monopolist (or in fact a duopolist with Facebook) in the online-advertising market—matters just as much in a two-sided market. It is no virtue to give something away to one audience so as to be able to gouge another.

On top of this, as pointed out previously, monopolies, whether in the private or public sector, tend to serve their customers increasingly badly as time passes, because they cannot learn from their customers. In addition, monopolies produce brittle monocultures that are vulnerable to an external shock—often from the development of a new technology that the monopolist has ignored for too long because of excessive investment in the status quo.

Robust antitrust enforcement can reduce these risks, because it protects innovative firms the monopolist would otherwise acquire. This may result in some inefficiency in the short term, but giving customers a choice forces the efficient incumbent to listen to consumers, thereby protecting its own dynamic efficiency over the long term and making all companies more alive to the opportunities presented by innovation. The sacrifice of efficiency today is, therefore, an investment in a more robust, resilient, and innovative system over the long term. The efficiency defense should be relegated to the dustbin of history, and policy makers should ensure that antitrust laws return to their original purpose of serving as a deterrent to monopoly outcomes, regardless of short-term efficiency gains.

On this front, the European Union can be seen as a positive example. Despite, as a matter of policy, having adopted the efficiency defense around the time America did, it has been much more forceful in tackling monopolistic behavior on the part of the technology giants.

It has taken on Google's monopolistic practices in three matters over the past three years, fining the company a total of €8.2 billion in the process. The most recent fine, in March 2019, was €1.5 billion for the classically monopolistic practice of forcing customers of its AdSense service to sign contracts agreeing to accept no advertising from Google's search-engine rivals.15 The practice plainly and simply sought to eliminate competition through the use of market power; it did indeed make Google more efficient at extracting the maximum level of earnings from the EU online-advertising market. This fine followed two others in the preceding two years. One, a record fine of €4.3 billion, was for abusing its dominant position in mobile. The other, a fine of €2.4 billion, was for surreptitiously manipulating search results to the benefit of Google itself and at the cost of unsuspecting searchers. Together the three fines total €8.2 billion, or nearly $10 billion, for practices that benefited Google and its efficiency but produced more Pareto outcomes that are bad for democratic capitalism, not only in the European Union but also in the rest of the world.

The above enforcement is in the services sector. In addition, the European Union recently took on a big technology manufacturer: Qualcomm, the world's leading maker of mobile-phone chips. In 2018, the European Union fined Qualcomm €997 million for paying Apple to use only Qualcomm chips—an attempt to drive Intel and others out of the mobile-phone-chip business. In 2019, it fined Qualcomm another €242 million for knowingly and purposely selling certain of its chips below cost in order to drive the British phone-chip maker Icera out of business. In the end, though, Qualcomm succeeded in its aim: the struggling Icera was acquired by the larger chipmaker Nvidia, which then shut down the targeted chip business.

The European Union demonstrates that if a jurisdiction reaches beyond the short-sighted efficiency defense, it can successfully target modern monopolists, including the technology giants. America could do the same and strike a blow against Pareto outcomes, if it just made the effort.

Extend Time Horizons

As presently constructed and regulated, the capital markets are feeding, encouraging, forcing, and rewarding short-term, antiresilient behavior. I described earlier how the pressure for company executives to meet or beat their quarterly-guidance, analyst-consensus earnings has become unrelenting. In a downright scary study, finance professors John Graham, Campbell Harvey, and Shiva Rajgopal confirm this. They surveyed four hundred financial executives from large US public companies and found that a majority of the executives agreed that in order to meet the current quarter's analyst-consensus earnings, they would defer or cancel attractive projects.16 These managers live in fear of an "activist investor" showing up in their share register and exerting pressure on the company to improve its financial performance.

And while these investors usually claim to be interested in the long-term performance of their targets, they actually don't care. Their holding periods are short—422 days on average for American activist hedge funds.17 At the same time, executive tenure is getting shorter. Median CEO tenure in large public companies has continued to drop, falling from six years to a new low of five years between 2013 and 2017.18 So both predator and prey are becoming more oriented to the short term. The result shows up in the pursuit of short-term efficiency proxies like workforce reduction, outsourcing, and offshoring—proxies that destroy companies' longer-term competitiveness and resilience to external shocks.

For this reason, policy makers need to encourage capital providers to pursue longer-term rewards from the companies in which they invest and to utilize more—and more intelligent—long-term proxies for measuring those companies' progress. Here the Securities and Exchange Commission (SEC) has recently set a good example with its approval, in May 2019, of the Long-Term Stock Exchange (LTSE) as the nation's fourteenth stock exchange. Founded by Eric Ries, the Silicon Valley entrepreneur and best-selling author of The Lean Startup, the LTSE will explicitly require companies that list on the exchange and investors that trade on it to follow practices oriented toward the longer term.19 While the exact listing rules had not been made public as of this writing, they hold the promise of an alternative stock market for both companies and investors who would like to think longer term.

The SEC’s approval didn’t come easily. There was a fierce and protracted resistance from industry players who benefit from the existing market setup. These were the players who had fought the SEC’s earlier approval of the Investors Exchange (IEX), in 2016.

The IEX was specifically set up to offer an exchange that did not enable high-frequency trading. The founding group, led by CEO Brad Katsuyama, believed that high-frequency traders (HFTs) enjoyed an unfair advantage over ordinary investors by setting up trading systems—as exemplified by the leased server space in the NYSE’s Mahwah facility—that routed their trades to exchanges faster, thereby enabling HFTs to profit before other traders could react.20 That is, HFTs had gamed the game for their own advantage, to the disadvantage of traders like the mutual-fund companies and pension funds that invest on behalf of average American families. In addition, the impact of the wild trading activity on the stock prices of companies makes it harder for executives to manage companies for the long term.

To put a constraint on this hyperefficient trading process, the IEX installed a “speed bump.” All trades to the IEX need to be routed through a thirty-eight-mile coil of fiber-optic cable that slows quotes and trades by 350 millionths of a second, which is long enough to eliminate the advantage of the HFTs.21 While IEX is still small—approximately 3 percent of US stock-trading volume—it provides a viable alternative to the traditional big exchanges (the NYSE and NASDAQ) in which HFTs have an advantage over traditional investors.
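For readers curious why a modest coil of fiber produces a delay of that magnitude, a rough back-of-envelope calculation, sketched below in Python, is enough to check the arithmetic. Only the thirty-eight-mile length and the 350-microsecond figure come from the account above; the refractive index is an assumed, typical value for silica fiber rather than a number published by IEX.

    # Back-of-envelope check of the IEX speed bump: light moves through optical
    # fiber at roughly two-thirds of its vacuum speed, so 38 miles of coiled
    # fiber imposes a delay on the order of a few hundred microseconds.
    SPEED_OF_LIGHT_VACUUM = 299_792_458   # meters per second
    FIBER_REFRACTIVE_INDEX = 1.47         # assumed typical value for silica fiber
    COIL_LENGTH_MILES = 38
    METERS_PER_MILE = 1_609.344

    coil_length_m = COIL_LENGTH_MILES * METERS_PER_MILE
    speed_in_fiber_m_per_s = SPEED_OF_LIGHT_VACUUM / FIBER_REFRACTIVE_INDEX
    delay_microseconds = coil_length_m / speed_in_fiber_m_per_s * 1_000_000

    print(f"One-way delay through the coil: ~{delay_microseconds:.0f} microseconds")
    # Prints roughly 300 microseconds, the same order of magnitude as the
    # 350-microsecond figure IEX reports; the gap reflects cable and hardware
    # details not modeled here.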

Another tool that US policy makers could borrow from European countries, including France, Italy, and Belgium, is tenure-based voting rights. In France, for example, the 2014 Florange Act gives shareholders two votes for every share of stock in a given company if they hold that stock for more than two years.22 The theory behind tenure-based voting rights of this kind is that common equity is supposed to be a long-term stake: once it is given, the company notionally has the capital forever. In practice, anybody can buy that equity on a stock market without the company’s permission, which means that it can be and often is a short-term investment. But long-term capital is far more helpful than short-term capital to a company trying to create and deploy a long-term strategy. If you give me one thousand dollars to invest in my business but say that you can change with twenty-four hours’ notice how I am allowed to use it, that capital isn’t nearly as valuable to me as if you say I can use it as I see fit for ten years. If Warren Buffett’s desired holding period for stock is, as he jokes, “forever,” while the quantitative arbitrage hedge fund Renaissance Technologies holds investments for only milliseconds, Buffett’s equity capital is more valuable than that of Renaissance.

The difference in value to the company notwithstanding, the two types of equity investments are given exactly the same rights, which is a mismatch that tenure-based voting rights are designed to remedy. The tenure-based voting-rights measures in Europe are fairly new, and companies are generally able to opt out of the provisions, so the data on the relative success of these measures is not yet clear. However, these examples show that such initiatives can be put into place legislatively.

I would go further than France, because two years is not long-term enough to enable management teams to take consequential action, and a doubling of voting rights is not enough to make a meaningful difference. Instead, I would give the owner of each common share one vote per day of ownership, up to four thousand days, or just under eleven years. If you held one hundred shares for four thousand days, you could vote four hundred thousand shares. If you sold those shares, the buyer would get one hundred votes on its day of purchase. If the buyer became a long-term holder, that number would eventually rise, to a maximum of four hundred thousand votes. But if the buyer were an activist hedge fund like Pershing Square, whose holding period is measured in months, the interests of long-term investors would swamp its influence on strategy, quite appropriately. Allocating voting rights in this way would reward long-term shareholders for providing the most valuable kind of capital. And it would make it extremely hard for activist hedge funds to take effective control of companies, because each time the hedge funds turned over their stock holdings, their voting rights would be minimized.
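To make the arithmetic of this proposal concrete, here is a minimal sketch, in Python, of how such tenure-based voting power could be computed. The function and variable names are mine for illustration; only the one-vote-per-day rule and the four-thousand-day cap come from the proposal above.

    # Illustrative sketch of the proposed tenure-based voting rule: one vote per
    # share per day of continuous ownership, capped at 4,000 days (just under
    # eleven years). A buyer starts over at one vote per share on the day of purchase.
    VOTE_CAP_DAYS = 4_000

    def voting_power(shares_held: int, days_held: int) -> int:
        """Total votes for a block of shares held continuously for days_held days."""
        days_credited = max(1, min(days_held, VOTE_CAP_DAYS))
        return shares_held * days_credited

    # A holder of 100 shares for 4,000 days casts 400,000 votes ...
    print(voting_power(shares_held=100, days_held=4_000))   # 400000
    # ... while an activist fund that has held the same 100 shares for 90 days casts 9,000.
    print(voting_power(shares_held=100, days_held=90))      # 9000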

Some argue that the net effect of such a regime would entrench bad management. It would not. Currently, investors who are unhappy with management can sell their economic ownership of a share along with one voting right. Under the proposed system, unhappy investors could still sell their economic ownership of a share along with one voting right. But if many shareholders are happy with management and yet a single activist hedge fund wants to make a quick buck by forcing the company to sell assets, cut investment in research and development, or take other actions that could harm the company’s future, that activist would have a reduced ability to collect the voting rights to push that agenda. Instead, management would be empowered to pursue more-resilient long-term proxies for performance.

Return to More-Progressive Tax Rates

This agenda item is different from the others, all of which are aimed at preventing Pareto outcomes in the first instance. This one is aimed at moderating Pareto outcomes that already exist. As such it is inherently less effective, but I believe that it should be part of the overall agenda for policy makers. It is time to accept that the current, four-decade-long experiment with historically low marginal effective tax rates on very high incomes hasn’t produced the promised results.

Let me begin with a brief recap of the history of the personal income tax in the United States. The federal income tax came into existence in 1913 with a top rate of 7 percent; by 1916 the top rate had risen to 15 percent, and it then climbed rapidly to 77 percent by 1918 in order to fund US participation in World War I. After the war ended, the top rate fell back to 25 percent during the 1920s. Then, during the Great Depression, in order to fund the New Deal programs designed to stimulate the economy and provide more assistance to the disadvantaged, the top rate soared again, to 63 percent. Arguably, during this period, taxing the incomes of the rich in order to fund programs for the poor came to be seen by the majority of the electorate, though by no means all of it, as a legitimate function of government.

When the Second World War came along, rather than being in a position to raise income taxes from a low base (15 percent), as had been the case during the First World War, America needed to raise the top income-tax rate from its already elevated Great Depression level to new highs—to a peak of 94 percent in 1944. As happened after the First World War, the top rate drifted downward thereafter, but it remained at or above 70 percent through the 1970s. Then, during the Reagan presidency, 1981–1989, the progressivity of the personal-income-tax system was dramatically lessened. The top rate plummeted from 70 percent, which had been its level from 1965 to 1980 (with slight short-term bumps upward in 1968, 1969, and 1970), to 28 percent by the end of Reagan's presidency. That was a level not seen in America since the period spanning 1925–1931.23

Thus, for the half-century from 1932 to 1981, the federal personal-income-tax system featured a top rate of at least 63 percent, averaging 80 percent. This was a period of exceptional US economic growth and progress for the median family in America. Arguably, that half-century experiment with progressive taxes worked well. Yet despite this economic success, the view emerged that high marginal tax rates for the highest-earning Americans discouraged their work effort and hurt American growth. This led to the Economic Recovery Tax Act of 1981, which—among other measures—cut the top marginal personal-income-tax rate from 70 percent to 50 percent. That was followed by the Tax Reform Act of 1986, which cut the top rate to 28 percent, roughly a level last seen in 1931.

It is easy to see these two acts as the doings of a conservative Republican President—and indeed the tax-cutting movement was spurred by Ronald Reagan, who campaigned on it in both 1980 and 1984. However, when both acts were passed, the Democrats held large majorities in the House of Representatives (242 to 192 in 1981, and 253 to 182 in 1986). The Republicans had a small majority (53 seats) in the Senate at the time of both acts. The 1981 act was passed by a vote of 282 to 95 in the House and 67 to 8 in the Senate, meaning that almost 100 House Democrats voted in its favor.24 The 1986 act was the deeper cut, taking the rate down from 50 percent to 28 percent. For that bill, the House vote was a huge majority, 292 to 136, and the Senate vote was 74 to 23. Both votes were overwhelmingly supported by the respective Democratic caucuses. Democrats voted 176 to 74 in favor in the House and 33 to 12 in the Senate. In fact, House Democrats were more favorably inclined toward the act (at 70 percent) than were House Republicans (at 65 percent).25

The votes illustrate the power of metaphors and models. The metaphor that carried the day was “trickle down.” In the model based on this metaphor, wealthy Americans would work harder and invest more, which would create significantly more economic activity, and the benefits of that increased activity would trickle down to the rest of the American income distribution. That is, those in the rest of the income distribution would benefit more from top-income Americans keeping a much larger portion of their income and generating economic activity than they would if those Americans turned it over to the government to provide it to the rest of the income distribution through transfers.

It was a compelling model, compelling enough to cause a majority of the Democratic Congressional Caucus to vote to support it. But in due course, as we have seen, the model was overwhelmed by the reality that having wealth is causally linked to acquiring more wealth still. And as has been pointed out, when effects become the causes of more such effects, the outcome migrates toward Pareto.

In the modern economy, high-end talent is becoming ever more capable of extracting high pretax earnings by utilizing that talent.26 When that is combined with a personal-income-tax regime that allows that talent to keep a much larger percentage of its pretax income (relative to the rates that prevailed during America's greatest period of sustained growth), a Pareto distribution of income and wealth is the unsurprising and persistent outcome. While far from the only factor driving the current Pareto distribution of wealth in America, the historically low marginal effective tax rate on very high incomes has undoubtedly contributed to increasing inequality, and that contribution will continue.

Policy makers simply must increase the tax rates at the high end of incomes. Since 1987, at the federal level, the top marginal personal-income-tax rate has been below 40 percent and has averaged 36 percent—in contrast to the half-century preceding 1987, during which the top rate averaged 80 percent.27 This is by no means an untried, experimental idea. It is an idea that was tested and proved effective in this very country for half a century. By how much, and starting at what income level, the progressive rate should rise is open to different theories and arguments. In addition, the approach is complicated by the great variance in state personal-income-tax rates: top rates vary from nil (in nine states) to 13.3 percent in California, and the population-weighted median across the fifty states is a top rate of 5 percent.28 My recommendation is a top federal rate of 45 percent for incomes between $500,000 and $5 million, 55 percent between $5 million and $10 million, and 65 percent above $10 million (implying median top combined rates, factoring in state income taxes, of roughly 50 percent, 60 percent, and 70 percent, respectively). What is clear is that we have gone past the point at which either logic or data supports the notion that the current personal-income-tax structure is contributing positively to the future prospects of American democratic capitalism.
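As a rough illustration of how the proposed top brackets would stack, here is a minimal sketch in Python. The bracket boundaries and marginal rates come from the recommendation above; the function and the $12 million example are mine, and both the existing schedule below $500,000 and state taxes are deliberately left out.

    # Sketch of the proposed top federal brackets only: 45% on income between
    # $500,000 and $5 million, 55% between $5 million and $10 million, and 65%
    # above $10 million. Income below $500,000 (taxed under the existing
    # schedule) and state income taxes are not modeled.
    PROPOSED_TOP_BRACKETS = [              # (lower bound, upper bound, marginal rate)
        (500_000, 5_000_000, 0.45),
        (5_000_000, 10_000_000, 0.55),
        (10_000_000, float("inf"), 0.65),
    ]

    def proposed_tax_above_500k(income: float) -> float:
        """Federal tax on the portion of income above $500,000 under the proposed rates."""
        tax = 0.0
        for lower, upper, rate in PROPOSED_TOP_BRACKETS:
            if income > lower:
                tax += (min(income, upper) - lower) * rate
        return tax

    # Someone earning $12 million would owe, on the slice above $500,000:
    # 0.45 * $4.5M + 0.55 * $5M + 0.65 * $2M = $6.075 million.
    print(f"${proposed_tax_above_500k(12_000_000):,.0f}")   # $6,075,000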

Getting Started

Getting started will not be a trivial undertaking for political leaders, especially elected ones, given the powerful duopoly in which they currently operate. The easiest thing for them to do is continue along the current path and maintain the comfortable duopoly—until it crashes cataclysmically. Hence, they should start with initiatives that don't make them feel too uncomfortable—anything else they will reject as too painful. For example, taking on powerful Wall Street interests to fight short-termism is probably not the best place to attempt to build momentum.

Getting close to the real citizens who are the target beneficiaries of legislation during the process of designing it, as GDS did on the GOV.UK initiative, may be more comfortable. Plus, it will have the beneficial side effect of helping political leaders empathize with and show greater compassion for the electorate, which will make the legislative ideas more compelling. In addition, it is probably not terribly intimidating to set expectations low, by indicating that initiatives won’t be designed perfectly and will need to be tweaked to improve them after launch. Finally, attaching a requirement for periodic review to new legislative initiatives isn’t a terrifyingly difficult thing to imagine.

If political leaders get their feet wet on these sorts of initiatives, they can dive into the more substantive trade, antitrust, and tax issues, possibly with encouragement from their electorate.
