Chapter 11. Banks Too Big to Fail

“Any fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.”

Albert Einstein

Global mega-mergers left many banks “too big to fail”—so important to the economy that governments could not let them fail. This created “moral hazard”—with no fear of failure, they had less incentive to avoid risks. Subsequent regulations gave banks further incentives to inflate bubbles—and as truly global institutions, they had an impact across the world.

It was an unholy Holy Week. On April 5, 1998, Palm Sunday, lawyers thrashed out a merger that revolutionised finance. Citicorp, long the most truly international bank, was to merge with Travelers Group, a sprawling mélange of financial services companies pieced together by the entrepreneur Sandy Weill that included big investment banks, consumer credit companies, and an insurer. The deal signaled the final abandonment of the post-Depression attempt to keep US banks tightly regulated, and replaced it with an environment where banks were free to grow with impunity.

Weill and his Citicorp opposite number John Reed announced the deal under a hastily put together name and logo for the new entity—the word “Citigroup” sat under Travelers’ umbrella logo. With total assets of almost $700 billion, bigger at the time than the gross domestic products of all but seven countries, this was a financial company on a scale previously undreamt of. But it was not just the size of the new “Citigroup” that caused a sensation. The new company, as constituted, was plainly illegal.

Under the Glass-Steagall Act, in force since 1933, a deposit-taking commercial bank could not sit under the same roof, or umbrella, as an insurer or an investment bank. Now Citigroup would combine all three. Bankers had been finding ways around Glass-Steagall for a while, with regulators often helping them find loopholes. It seemed anachronistic, and put the US out of step with the rest of the world. In Europe, “bancassurance,” with large banks offering insurance as part of a “one-stop shop” to their customers, was the norm. Weill now seemed confident he could end Glass-Steagall once and for all.

In another political environment, Weill’s confidence might have been taken for hubris, but in the late 1990s no politician wanted to stand in the way of the stock market. Republicans in Congress soon thrashed out a deal with President Bill Clinton, whom they had tried to impeach only months earlier. Glass-Steagall was repealed, with weak guidelines to replace it, and Weill joined the president at the ceremonial signing.

Citigroup’s merger was only the first big deal of Holy Week in 1998. Over Good Friday weekend, two more massive banks were hatched. NationsBank, a regional bank based in Charlotte, North Carolina, took over BankAmerica, a huge Californian bank with operations around the world, for $66.6 billion, while First Chicago NBD, long the most powerful bank in the Midwest, sold itself to Bank One. These deals also showed that the post-Depression banking structure was over. For decades, US banks were barred from doing business outside their own states, leading to the balkanization of the industry. In the 1980s, there were as many as 15,000 banks in the US, many of them with only one branch. As the mega-mergers hit, there were still as many as 9,000. At this time, Britain and Canada, similar countries in many ways, had 212 and 53 banks respectively. If the American banking system were to become as concentrated as Britain’s, it would have, at most, 1,000 banks; if like Canada’s, just 500. With computers, bigger banks created economies of scale, and so, the banks argued, they could provide better value for clients.

But their scale raised the issue of what economists call “moral hazard.” Banking regulation ultimately rests on capitalism’s application of human fear to the actions of banks’ managers: if they take too many risks, they should face the risk that the bank goes bust. But at some point as a bank grows, its collapse would wreak so much damage on the economy that it could not be allowed to happen. There is no hard and fast measure of when this happens, but you know it when you see it: the new Bank of America was plainly too big to fail. Once a bank knows it cannot fail, it has no incentive to avoid excessive risks. This is moral hazard.

This unholy Holy Week thus forced the US government to make fateful decisions, but the banks got what they wanted. Both big mergers, and several others, were waved through swiftly, and completed within months.

Not only American banks felt the urge to merge. European banks also grew bigger, and found footholds in the US. With Glass-Steagall repealed, they could swallow up Wall Street brokers, as when UBS bought Paine Webber in 2000. Because Wall Street was so important to them, this meant that the US became the de facto regulator for all the world’s biggest banks—as shown when US regulators held up the merger of the two giant Swiss banks, Union Bank of Switzerland and Swiss Bank Corporation, after revelations that they were still holding on to money deposited with them before the Second World War by Holocaust victims.1

While America’s conservative banking regulations had avoided the problem of “too big to fail” for more than half a century, Europe already had this problem. The splurge of deals as the US finally relaxed its rules created an even greater problem for Europe. By the time the crisis hit in 2007, the total bank assets in the US were roughly equal to GDP. In Iceland, which had the world’s most overblown financial system, bank assets were 8.9 times the size of the economy, while for Switzerland the figure was 7.8. The UK’s bank assets were five times the size of its economy, France’s were four times, and even Germany’s were almost three times. In these jurisdictions, large banks tended to be a source of national pride and income, and their growth had historically been encouraged.2 The conservative regime in the US had somewhat inhibited the excess, but European banks now became not only too big to fail, but arguably also too big for their own governments to rescue.

Another trend also reached its completion. Aside from being separate from commercial banks, investment banks had been run as partnerships. The senior bankers, who took decisions on all the biggest deals, took a share of the profits, and also stood to be liable for losses. This balanced fear and greed in their minds—in the event of collapse, they stood to lose all that they had gained before.

But starting in the 1970s, Wall Street’s partnerships decided to float on the market. On the other side of the Atlantic, many of the old London investment banks and brokers disappeared inside bigger public banks after the City of London’s 1986 “Big Bang”, which did away with old distinctions between stockbrokers and marketmakers.

Bankers still had an incentive to make profits, through the bonus system. But they did not stand to lose what they had already gained in a collapse. That fate belonged to shareholders. As bankers could easily set themselves up for life with a few good years on Wall Street, this significantly lowered fear in their calculations, and aided greed.

In May 1999, a year after the Citigroup deal, Goldman Sachs, Wall Street’s most prestigious and successful investment bank, finally took itself public. The firm had a cherished 130-year-old partnership culture, but it could now raise money from shareholders. Going public was an instant one-off boon for the men then in charge, several of whom played key roles once the crisis hit a decade later. At the price at which the bank offered its shares, the stake of Hank Paulson, then the chief executive, was automatically worth $206 million.3

The move towards laissez-faire from banking regulators came to its logical conclusion in 2004 with the Basel II accord, named after the Swiss city that hosts the Bank for International Settlements. This body is a central bank for other central banks, lending money to the central banks that need it, and coordinating banking regulations across borders. Basel II effectively gave the big US and European banks all that they wanted.

A key task of regulators is to set how much capital banks must put aside. The more capital they have, the more they can withstand losses from bad loans or bad trades. Banks might think they are engaged in low-risk activities, but if the regulator disagrees, they can be required to keep extra money in reserve. In Basel II, regulators put more emphasis on banks’ own assessments of the risks that confronted them, even though they were now too big to fail. It unwittingly drove the ultimate credit crisis by requiring far less capital to be held against mortgage-backed securities, or against securities with the top available credit rating of AAA—moves that prompted banks to buy mortgage-backed securities.4

A perverse consequence was to vest immense power in the rating agencies, such as Moody’s, Standard & Poor’s, and Fitch, which give debt offerings a “rating” before they are launched. The trick was to persuade the agencies to rate securities AAA. The danger in the new system came in the event of a downgrade from AAA: even if the bonds or debts in question had still not defaulted, the downgrade would force banks to raise more capital to stay within the Basel II regulations.
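The mechanics can be sketched in a few lines of code. The risk weights below are illustrative rather than the precise Basel II schedules (which varied by approach and exposure type), but they capture the logic: capital is a fixed percentage of risk-weighted assets, so a highly rated security demands far less capital, and a downgrade multiplies the requirement overnight.

```python
# Simplified sketch of Basel II-style capital requirements.
# The risk weights here are illustrative, not the exact regulatory schedule.
RISK_WEIGHTS = {"AAA": 0.20, "A": 0.50, "BBB": 1.00, "BB": 3.50}
MIN_CAPITAL_RATIO = 0.08  # capital must be at least 8% of risk-weighted assets

def required_capital(exposure, rating):
    """Minimum capital to hold against an exposure with a given rating."""
    return exposure * RISK_WEIGHTS[rating] * MIN_CAPITAL_RATIO

# $1 billion of AAA-rated mortgage-backed securities ties up little capital...
before = required_capital(1_000_000_000, "AAA")
# ...but a downgrade to BB, even with no default, multiplies the requirement.
after = required_capital(1_000_000_000, "BB")
print(f"AAA: ${before:,.0f}   after downgrade to BB: ${after:,.0f}")
```

Under these assumed weights, the same $1 billion position jumps from a $16 million capital charge to $280 million on downgrade, which is why a wave of downgrades could force banks to raise capital at the worst possible moment.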

Compared to the 1950s, the banking industry had changed totally. It was now peopled by investment bankers betting with other people’s (shareholders’) money, not their own. Commercial bankers were divorced from the consequences of their decisions by the commercial paper market, mortgage securitization, and money market funds. Mutual funds, largely tied to the main indices, had taken their place as the main allocators of capital. But even though many of their businesses were no longer covered by deposit insurance, the bigger banks operated as though they had government protection.

Under economic rationality, commercial banks would have wound themselves down and closed as they lost their businesses. But with human nature governing them, executives did something different. The business of taking deposits in non-interest-bearing accounts is a good one—from the banker’s perspective, clients are effectively lending the bank money for free. With Glass-Steagall gone, it was easier to put that money to work in the securities markets.

Few sounded the alarm at the time, but Henry Kaufman, a Wall Street economist known as Doctor Doom in the 1970s for his successful predictions of high interest rates, tried to spoil the party. In words that now sound prophetic, he warned that the new huge banks should be “treated more and more like public utilities” rather than entrepreneurially run private enterprises.5

For him, the problems of managing the “new behemoth banking institutions” and the “fallout when the US financial bubble bursts” posed latent threats to US prosperity. At times in the years since, Citigroup and other big banks have indeed seemed too big and complex for anyone to manage. But he contended that the money in mutual funds had created a political imperative for the central bank to begin “acquiescing to increases in asset prices but taking policy measures to stop, or at least contain, asset price declines.” That prediction came spectacularly true within months, taking moral hazard to a dangerous new level.

In Summary

• Mergers and the removal of post-Depression regulations created a group of big and complex international banks that were far too large for any government to allow them to fail.

• This created moral hazard—the tendency to take more risks when you know you have protection if something goes wrong.

• The loss of traditional businesses and new international capital rules gave banks extra incentives to expand into new and riskier markets—most blatantly subprime mortgages.
