Chapter Seven

Securing Debt in an Insecure World

CREDIT CARDS AND CAPITAL MARKETS

IN 1978, DONALD AURIEMMA, then the vice-president of Chemical Bank’s personal loan department, doubted the wisdom of Citibank’s aggressive expansion into the credit card business that had begun a few years earlier. Auriemma expected that the expansion would not pay off for Citibank since he believed that “the credit card business is marginal, [and] it’ll never make big money for banks.”1 Yet within a few years, these marginal profits on credit cards would become the center of lending. By the early 1980s, credit cards metamorphosed from break-even investments to leading earners. Because credit cards offered much higher profits than commercial loans, financial institutions began to lend as much money as they could to consumers on them. By the early 1990s, investments in credit cards were twice as profitable as conventional business loans. Drawing on newfound ways to access capital markets, lenders borrowed funds from the markets, supplemented by their own money, to fund consumer debt rather than business investment, remaking the possibilities of the American economy. Using new mathematical, marketing, and financial techniques, issuers tipped the scales of capital allocation in the U.S. economy toward consumption over production. For banks to lend, consumers had to borrow. And borrow they did—in record amounts on their credit cards and against their homes. In 1970, only one-sixth of American households had bank-issued revolving credit cards, compared with two-thirds of households in 1998.2 Increasingly, the now plentiful credit cards allowed consumers to borrow more money, and with greater flexibility, than they had before. For home owners, home equity loans also offered a new way to borrow by tapping into the value of their homes. Like credit cards, home equity loans allowed borrowers to pay back their debt when they wanted, without a fixed schedule.

Credit cards and home equity loans—though both revolving debts—still appeared quite distinct from one another in how consumers used them and thought about them. Credit cards bought pants, dinners, and ski vacations. Mortgages bought houses. Credit card debt was unsecured by any claim on real property, while the installment debt of mortgages was secured by a claim on unmovable property. While these two different financial practices—borrowing against a home’s value and borrowing on a credit card—appeared very different for consumers, the business logic and financial practices that underpinned them both grew more and more similar over time. By the late 1980s, these two different debts converged and became financially indistinguishable in how they were funded—by asset-backed securities. By the mid-1990s, even consumers began to use home mortgages and credit cards interchangeably, consolidating the debt of credit cards into the debt of mortgages. How the mortgage and the credit card became indistinguishable defined a key aspect of the financial transformation of this post-1970s world, reshaping the relationships of lenders and borrowers.

As lenders sought to expand their loans, consumers had new reasons to borrow. While protestors demanded fair credit access for all Americans, innovations in basic financing and lending techniques enabled lenders to profitably extend that access. Home equity loans enabled home owners to borrow against the rapidly inflating values of their houses. Credit cards, using new statistical credit scores, allowed more Americans access to plastic than ever before. Mortgage-backed securities, created in the Housing Act of 1968 to fund development in the inner city, paved the way for the easy resale of debt as a financial investment, even as the original development intentions were forgotten. All forms of personal debt came to be sold as securities, allowing the world to invest profitably in American debt. As wages fell, Americans continued to borrow ever-greater amounts, making up the gap between incomes and expectations. Through home equity loans, consumers could easily turn their rising home prices—growing even faster than the inflation that eroded their wages—into money to pay off their credit card bills. Historians who have seen the increase in debt outstanding in the 1970s as a result of increased borrowing, rather than a decrease in ability to repay, have interpreted this rise as a strategic response to inflation. Borrow today and pay back tomorrow when the money is cheaper. Yet few borrowers self-consciously responded to the rise in interest rates. While inflation did not have this strategic consequence for borrowers, it did have an important strategic consequence for lenders, who, in their fixed-rate portfolios, felt the rising interest rate most keenly as they watched their profits fall. Lenders’ business responses to inflation, more than consumers’ responses, pushed household debt in new directions that previously, in the most literal sense, had been impossible.

Personal debt after the 1970s was made possible through the global connections of capital that arose after the fall of Bretton Woods. The story of high-flying finance divorced from the everyday lives of Americans, a viewpoint through which financial history is too commonly told, makes as little sense as telling the story of the 1970s only from the viewpoint of consumers. Borrowing and buying, after 1970, took place in a very different world than that of the postwar period, a world where employment and prices were more volatile, where median real wages had fallen for thirty years, and where wealth inequality, which had contracted in the postwar period, had once again begun to widen. The aberration of postwar prosperity had ended and the true face of American capitalism—unequal and volatile—had returned with the vengeance of the repressed. The proliferation of revolving credit and home equity loans, as well as securitization, reflected, and to some degree enabled, this changing economic order. While the expansion of debt occurred because consumers were less and less able, on average, to pay back what they borrowed, the massive investment necessary to roll over that outstanding debt required lenders to use capital markets in innovative ways. Moving beyond the resale networks of the mid-twentieth century, new ways to sell debt anonymously on national and even international capital markets inaugurated a new relationship between consumer credit and investor capital. In an insecure world, unsecured debt came of age.

Image

Figure 7.1. Median Male Wages. Source: Robert A. Margo, “Median earnings of full-time workers, by sex and race: 1960–1997,” Table Ba4512 in Historical Statistics of the United States, Earliest Times to the Present: Millennial Edition, eds. Susan B. Carter, Scott Sigmund Gartner, Michael R. Haines, Alan L. Olmstead, Richard Sutch, and Gavin Wright (New York: Cambridge University Press, 2006). In 1994 dollars.

Image

Figure 7.2. Income Inequality. Source: Peter H. Lindert, “Distribution of money income among households: 1947–1998,” Table Be1-18 in Historical Statistics of the United States, Earliest Times to the Present: Millennial Edition, eds. Susan B. Carter, Scott Sigmund Gartner, Michael R. Haines, Alan L. Olmstead, Richard Sutch, and Gavin Wright (New York: Cambridge University Press, 2006).

Image

Figure 7.3. Consumer Credit Outstanding. Consumers of the 1970s continued to borrow as they had since World War II, but as inequality widened they were increasingly unable to repay what they borrowed. Repayments, not borrowing, were what changed in the 1970s. To fund this borrowing, financiers developed new ways to invest in debt. Source: Federal Reserve, “G. 19, Consumer Credit, Historical Data,” http://www.federalreserve.gov/releases/g19/hist/cc_hist_sa.txt, accessed December 2009; in billions of seasonally adjusted dollars.

Mortgage-backed Securities and the Great Society

The financial innovation that ultimately allowed capital markets to directly fund any form of debt began with the federal government, not business. In the late 1960s, the federal government sought a way to channel capital into America’s rioting cities. Capital would make possible the Great Society ambition of saving America’s cities, and the newly rising pension funds needed somewhere to invest. Ironically, pension funds born of strong union movements helped provide the justification for policies based on remedying poverty through better access to capital, rather than better access to wages. For Great Society policymakers and promoters, the problems of inequality were framed as a problem of credit access rather than job access. More credit, and not higher wages, would be enough to solve the problems of America’s cities. Toward that end, federal policy fashioned the financial innovation that made possible America’s debt explosion—the asset-backed security—which expanded well beyond its original purpose.

Solving the urban crisis would require solving the housing crisis. But to fix the housing crisis, radical financial innovation would have to occur to maintain the capital flows into mortgages. As the urban riots became the urban crisis, however, mortgage markets had a crisis of their own. American mortgage markets had abruptly frozen—the so-called Credit Crunch of 1966—as investors rapidly withdrew their deposits from banks and put their money in the securities markets. Stocks and bonds offered greater returns than the Federal Reserve-regulated rates available at banks.3 Without these deposits, banks could not lend mortgage money. FHA Commissioner Philip Brownstein believed that “our innovations and aggressive thrusts against blight and deterioration, our massive efforts on behalf of the needy, will be lost without an adequate continuing supply of mortgage funds.”4

In a novel move, policymakers seeking a way to fund an expansion of stabilizing home ownership in the cities turned to those same securities markets for new sources of mortgage funds.5 Using markets as sources of capital defined the Great Society approach. Rather than distributing existing mortgages through resale networks as New Deal–era institutions did, markets would guarantee that credit crunches would not interrupt urban development. With the mortgage-backed security, Great Society policymakers tried to harness changes in capitalism to fit their programs rather than trying to regulate capitalism to fit their agenda.

Beyond the immediate crisis of the Credit Crunch of 1966, the old system of buying and selling individual government-insured mortgages through personal connections had already begun to break down over the 1960s. The instruments and institutions through which Americans saved had changed. Beginning in the late 1950s, the big growth in American savings was through pension funds. Pension funds, unlike insurance companies, had little interest in buying mortgages. Whereas insurance companies had large mortgage department staffs whose job was to buy, sell, and collect on mortgages, pension fund managers preferred to invest in stocks and bonds, which could be easily tracked and managed. In 1966, pension funds held $64 billion in assets, according to the Federal Home Loan Bank Board, 60 percent of which were invested in stocks and 25 percent in corporate bonds.6 While pension funds did not have the mortgage departments of insurance companies, they shared the insurance companies’ interest in safe, long-term investments like bonds. To create a new flow of funds, a new financial instrument would have to be fashioned to meet the needs of large institutional investors without the desire or capacity to oversee the collection of a mortgage. If mortgages could be fashioned into an easy-to-invest-in form such as bonds, then pension funds, policymakers believed, would flock to mortgages, which promised a slightly higher return. Making mortgages bond-like, bankers and policymakers realized, would radically expand their investor base—and that became the goal of such an instrument.

President Johnson announced that to fulfill the aims of his urban housing agenda, he would “propose legislation to strengthen the mortgage market and the financial institutions that supply mortgage credit.”7 Government studies in the aftermath of the “deplorable” credit crunch insisted, in the words of Senator John Sparkman, chairman of the Senate Committee on Banking and Currency, that action be taken to “insure an adequate flow of mortgage credit for the future.”8 The solution, Sparkman asserted, lay in “correcting deficiencies in our financial structure.”9 FHA officials, like Philip Brownstein, believed the mortgage-backed security “may very well be the break-through we all have been seeking for many years to tap the additional sources of funds which so far have shown little interest in mortgages.”10 Disparate groups, from bankers to unions, demanded Congress fashion a “new security-type mortgage instrument” to channel the money invested in the securities markets into mortgages.11 With the support of the mortgage industry, as well as politicians, this mortgage instrument encountered few obstacles. Even the Fed, whose authority would be hampered by such an invention, encouraged Congress to consider creating debt “instruments [issued] against pools of residential mortgages” to “broaden [the] sources of funds available for residential mortgage investment,” so as to “rely less on depository institutions that tend to be vulnerable to conditions accompanying general credit restraint.”12 Creating such instruments would undermine the Fed’s ability to affect mortgages through its monetary policy, and in turn weaken its control over the money supply, but quarantining mortgages from the rest of the economy would also put the Fed, if need be, beyond public blame.

The Housing Act of 1968, which implemented this vision, remade the American mortgage system in a way that had not been done since the New Deal. Congress privatized the Federal National Mortgage Association (FNMA) and created its signature financial instrument—the mortgage-backed security.13 At the same time, the Housing Act inaugurated a short-term program, called Section 235, which used these mortgage funds to loan money to low-income borrowers, whose interest would be directly subsidized by the government.14 Through mortgage-backed securities and these low-income loans, policymakers hoped to stabilize the unrest of the American cities.15

These Section 235 loans allowed low-income buyers with little or no savings to buy new and pre-existing homes. The program provided billions of dollars in financing for millions of homes during its operation. The federal government’s role in housing in 1971, when federal programs subsidized 30 percent of housing starts, was shockingly higher than in 1961, when only 4.4 percent did, with “much of the increase in housing units . . . occur[ing] in section 235,” according to Nixon administration officials.16 Government-sponsored mortgage debt accounted for 20 percent of the overall increase in mortgage debt in 1971.17 While in operation, the Section 235 program marshaled new financial instruments to transform hundreds of thousands of Americans from renters to owners. Section 235 created such an upswing in housing that by 1972 the president of the Mortgage Bankers’ Association could pronounce it the “principal system” for low-income housing.18 One prominent mortgage banker declared that Section 235 “answered the cry, ‘Burn, baby, burn’ with ‘Build, baby, build!’”19 Eighty percent of the funds were earmarked for families at or near the welfare limit. A home buyer who qualified for the program would receive an interest subsidy every month such that the government would pay all the interest above 1 percent. Sliding scale down payments, which reached as low as $200—two weeks’ income for the median Section 235 buyer—would enable even the very poor to own a house.20 If the borrower defaulted, the government would pay off the balance of the loan. Home buyers could borrow up to $24,000 as long as FHA house inspectors declared the property to be in sound condition. Once they bought a home, the monthly payments that had gone to rent would build equity instead. Section 235 would build wealth. FHA administrators like Brownstein believed the Section 235 program “[broke] down the remaining barriers to the fullest private participation in providing housing for those who are economically unable to obtain a decent home in the open market.”21 By definition, Section 235 lent to borrowers who could not get a mortgage from conventional lenders. The program intentionally sought out the riskiest borrowers, whom Brownstein described as “families who would not now qualify for FHA mortgage insurance because of their credit histories, or irregular income patterns.” Section 235 buyers had no normal access to home financing. The program offered them their only chance for home ownership. The Section 235 program lasted only a few years, eventually brought down by scandals eerily reminiscent of today’s subprime crisis, as realtors, builders, home inspectors, and mortgage bankers colluded in unsavory ways to defraud trusting first-time buyers without alternatives for home ownership.22 Nonetheless, the mortgage-backed security invented to fund the program persisted, and in the long run, exceeded the reach of its original purpose, enabling new sources of mortgage capital for home buyers of all incomes.
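The arithmetic of the interest subsidy can be made concrete with a short sketch in Python. The loan amount, term, and market rate below are hypothetical, and the subsidy is approximated, following the program’s description above, as the gap between the payment at the market rate and the payment the borrower would owe if the rate were 1 percent.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard level-payment amortization formula."""
    r = annual_rate / 12              # monthly interest rate
    n = years * 12                    # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical Section 235-style loan: $20,000 over 30 years at an 8 percent market rate.
market_payment = monthly_payment(20_000, 0.08, 30)     # payment at the market rate
floor_payment = monthly_payment(20_000, 0.01, 30)      # payment as if the rate were 1 percent
subsidy = market_payment - floor_payment                # portion the government covers each month

print(f"market payment:  ${market_payment:7.2f}")       # roughly $147
print(f"borrower pays:   ${floor_payment:7.2f}")        # roughly $64
print(f"monthly subsidy: ${subsidy:7.2f}")              # roughly $82
```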

While in theory the mortgage-backed security allowed borrowers to bypass financial institutions and borrow directly from capital markets, in practice, a long chain of financial institutions still mediated the connection between borrower and lender, and it was the way in which the mortgage-backed security fit those institutional needs that made it such a success. Making the mortgage-backed security work required adjusting the financial institutions that constituted the mortgage market—mortgage companies, institutional investors, and the FNMA. The FNMA existed before the credit crunch, but the Congressional response to the credit crunch remade FNMA into a new kind of institution, even more privatized and market-oriented—with a new kind of financial instrument containing great possibilities. Created in the New Deal to buy and sell government-insured mortgages across the country, FNMA had forged a national secondary market for mortgages offered through the FHA. During the 1960s, however, the federal government had created more and more socially oriented, specialized housing programs that relaxed the FHA’s lending requirements, especially in the inner city. FNMA had resold these mortgages alongside the other mortgages. Many policymakers believed that the “credit requirements” used by federal lending programs were “too stringent,” overlooking potential borrowers’ “true merits.”23 Only “liberalized” mortgage financing, which relaxed the FHA’s strict standards, would provide financing to low-income buyers or in low-income neighborhoods.24 Though mortgage officials at the FHA were critical of this policy, loose lending policies found support across the aisle in Congress.

Though FNMA was now privatized, the Housing Act still provided extensive federal oversight, explicitly ruling that the secretary of the Department of Housing and Urban Development (HUD) could still require FNMA to purchase low-income mortgages.25 FNMA’s internal matters would be private, but its larger market actions would remain partially under government control. The Act spun off a new agency, the Government National Mortgage Association (GNMA or Ginnie Mae), which would handle all the subsidized mortgage programs. Splitting FNMA into two organizations—FNMA and GNMA—would cordon off the welfare programs from the market programs, and privatization would take the welfare expenses, to a large degree, off the federal budget, because mortgages bought and sold would not look like a government expense on the accounting sheets, enabling the expansion of federal mortgage lending. Only the subsidies to GNMA, and not the total mortgages bought, would go on the books as a federal expense.26

Mortgage-backed securities initially came in two forms: the “modified pass-through” security and the “bond-like” security. Both forms gave the investor a claim on the monthly principal and interest payments of a large, diversified portfolio of mortgages. Both forms rendered the investor’s connection to the underlying assets completely anonymous and secondhand. The differences between them initially irked the mortgage banking industry, however. The pass-through security delivered the real monthly principal and interest payments of the portfolio, minus a servicing fee, to the investor. The bond-like security provided a steady, even payment of principal and interest to the investor. The monthly variance for the pass-through security made it different than a normal bond, which mortgage bankers thought would reduce demand. The pass-through security could vary because of mortgage prepayments, defaults, and any of the other risks incurred with a mortgage. The bond-like security hid those events. While the pass-through mortgage-backed security provided new opportunities for tapping new institutional investors, mortgage bankers remained disappointed.27 What they had wanted was a true bond, guaranteed by GNMA, with fixed guaranteed payments, not a pass-through instrument that merely resembled a bond, in which they doubted investors would be as interested. Mortgage bankers had envisioned trading their mortgages for a bond, which would be sold at auction. Such a bond, with its underlying assets completely hidden from the buyer, would make mortgage reselling truly competitive, that is to say interchangeable, with other forms of bond issues.
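A minimal sketch, with entirely invented numbers, illustrates the difference that worried the mortgage bankers: the pass-through forwards whatever the pool actually collects, minus a servicing fee, so the investor’s check varies month to month, while the bond-like security pays a level amount regardless.

```python
# Hypothetical monthly collections from a small mortgage pool (scheduled principal
# and interest, plus irregular prepayments in months 3 and 6).
collections = [16_800, 16_750, 31_200, 16_650, 16_600, 48_900]

SERVICING_FEE = 850      # hypothetical flat monthly servicing fee kept by the mortgage company
BOND_PAYMENT = 16_000    # hypothetical level payment promised by a bond-like security

for month, collected in enumerate(collections, start=1):
    pass_through = collected - SERVICING_FEE   # investor receives whatever the pool produced
    print(f"month {month}: pass-through ${pass_through:>7,} vs bond-like ${BOND_PAYMENT:,}")
```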

In August 1969, the newly founded GNMA announced that it would be offering mortgage-backed securities for the first time.28 After receiving suggestions from financiers, policymakers, and potential investors for the regulations surrounding the securities, GNMA, in association with FNMA, issued the first mortgage-backed securities on February 19, 1970. Three New Jersey public-sector union pension funds bought $2 million worth of pass-through securities from Associated Mortgage Companies, a sprawling interstate network of mortgage companies.29 Soon thereafter, in May 1970, GNMA had its first sale of bond-like mortgage-backed securities, selling $400 million to investors.

While at first sales of the bond-like mortgage-backed securities outweighed those of the pass-through mortgage-backed securities, the tables quickly turned. Within the year, in 1971, GNMA and FNMA sold over $2.2 billion in pass-through mortgage-backed securities and $915 million in bonds. Within a few years, in fact, GNMA stopped offering the bond-like mortgage-backed securities entirely. For the bond-like mortgage-backed securities to work, the pools had to be enormous, at least $200 million, and the mortgage company had to have enough capital to guarantee the payments in case of default. Few private companies sold so many mortgages and none had the requisite capital. The pass-through mortgage pools could be much smaller—only $2 million. Private mortgage companies had to content themselves with pass-through mortgage-backed securities, but they could acquire and bundle mortgages more easily than GNMA or FNMA could.

The drawbacks that mortgage bankers initially feared turned out to matter little. In many ways how the mortgage-backed security fit the institutional needs of investors mattered as much as the rate of return. While not a true bond, the pass-through security completely hid the hassle of mortgage ownership—the paperwork and collection—while still providing higher returns than government securities, and unlike corporate bonds, the mortgage bonds had foreclosable assets backing the debt. The market mediation that made the mortgage-backed security easier for mortgage companies—No personal networks! No salesmanship! Just buy a typewriter and some GNMA application forms!—made even the pass-through security much more appealing to investors than directly owning the underlying mortgages. Investment required no specialized knowledge of mortgages or even housing—the mortgage-backed securities could be compared to other bonds, whose safety was rated by Standard & Poor’s. The mortgage-backed security eliminated the need to know about the underlying properties or borrowers. As Woodward Kingman, president of the GNMA, noted, “this instrument eliminates all the documentation, paperwork problems, and safekeeping problems that are involved in making a comparable investment in just ordinary mortgages.”30 Institutions, big and small, did not need a mortgage department to track payments, check titles, or attend to any of the myriad other details involved in mortgage lending. The institution just needed to file the security. Instead of tracking fifty individual $20,000 mortgages, an investor could just buy the mortgage-backed security for $1,000,000. The mortgage-backed security lowered the accounting costs by making the investments enough like a bond to attract the notice of institutional investors.31 Investors embraced the new securities, although not always the investors that the creators of the mortgage-backed securities had originally intended.

At first, surprisingly, the biggest buyers of mortgage-backed securities were not pension funds but local savings and loan banks. Mortgage-backed securities turned out to be a great way for local banks, legally limited in their geographic scope, to invest in distant places.32 Many states forbade local banks from lending money beyond a certain distance, but had no such provision against the buying and selling of bonds. Mortgage-backed securities allowed capital mobility for all financial institutions, but in allowing savings and loan banks such access, they did not increase the nation’s net supply of mortgage funds, since savings and loan banks already invested in mortgages. Such purchases could, however, move funds to capital-poor areas.

With the declining investment in FHA loans in the 1960s, local home buyers outside of the capital-rich east found it harder to find funds for a mortgage since there was no comparable national market for conventional mortgages as there was for federally insured mortgages.33 Every house was unique, which made reselling mortgages difficult. FHA loans had established a secondary market and national lending for distant mortgagees because, through its guarantee and its standards, the FHA created a homogeneity that allowed those loans to be sold as interchangeable commodities.34 All loans that were not federally insured, so-called “conventional mortgages,” had no such secondary market. No conventional mortgage, by itself, could be so homogeneous as to be traded across the country. Mortgage lending, for the conventional market, required local knowledge that no distant mortgage banker could have.

While FNMA could resell federally insured mortgages, such loans grew less important each year in the 1960s. Though the mortgage banking industry had flourished through its ability to originate FHA and VA loans, and then resell them on the secondary markets through FNMA, this reselling could not be done for conventional loans. As long as federally insured mortgages made up a large portion of American borrowing, this was a good business model, and it helped move substantial amounts of capital across the country. After World War II, the use of conventional mortgages had fallen as Americans turned to FHA and VA mortgages to finance the suburban expansion. Around the late 1950s, however, the use of conventional mortgages stabilized at about half of all mortgages issued, and then began to grow again. By the mid-1960s, conventional mortgages accounted for two-thirds of all mortgages. In 1970, conventional mortgages—though double the volume of federally insured mortgages—had no national secondary market. For mortgage companies, this rise in conventional mortgages was dire. While mortgage companies originated 55 percent of federally insured loans, mortgage companies originated only 5 percent of conventional loans.35 And with fewer investors buying federally insured mortgages, demand fell along with supply. The possibilities of a mortgage-backed security for conventional mortgages excited mortgage bankers, because as American consumers moved away from federally insured mortgages, their core business shrank.

To create a secondary market for conventional mortgages, Congress, in the Emergency Home Finance Act of 1970, authorized the creation of the Federal Home Loan Mortgage Corporation (FHLMC or Freddie Mac), which drew on the mortgage-backed security financing techniques developed in the Housing Act of 1968. Like FNMA, FHLMC could buy and sell mortgages and issue mortgage-backed securities. Unlike FNMA, Congress intended FHLMC to buy its mortgages primarily from savings and loan banks rather than mortgage companies. Otherwise, the two corporations were largely identical. By this point, mortgage experts, like FNMA executive vice president Philip Brinkerhoff, recognized that finding new sources of capital could “be accomplished more efficiently through the issuance and sale of mortgage-backed securities than through direct sale of mortgages.”36

FHLMC learned from FNMA and, in November 1970, almost immediately after its inception, issued mortgage-backed securities. While this first group of loans was federally insured and not conventional, FHLMC demonstrated to skittish investors that it could buy mortgages and issue mortgage-backed securities. Building its first portfolio out of insured loans guaranteed that existing mortgage-backed security investors would buy the first issue. Thereafter, FHLMC began to transition into conventional mortgages, developing innovative methods to standardize conventional mortgages. Even if the homes differed, standardization of information helped their commonalities come to the fore. FHLMC developed a national computer network called AMMINET to provide up-to-the-minute information on mortgage-backed security trades and issues, creating a real national “market” with national information.37

By 1972, FHLMC, with established procedures for credit evaluation, loan documents, appraisals, mortgage insurance, and mortgage originators, began to issue mortgage-backed securities backed entirely by conventional loans—creating the first national conventional mortgage market. More than just standardization, however, conventional mortgages could be traded because they were issued through mortgage-backed securities and not the old assignment system of the 1950s. While the FHA mortgage reduced investors’ risk by homogenizing standards, the FHLMC reduced risk by heterogeneous diversification. The mortgage-backed security came with a pre-diversified portfolio for a given interest rate, so that the investor did not need to cherry-pick mortgages across regions and neighborhoods. The risk of one bad loan could be diluted across many good loans in a mortgage-backed security’s underlying portfolio. Mortgage portfolios backing the securities brought enough diversification, it was believed, to overwhelm any outlying bad loan. For investors who would never see the property, such risk-reduction was essential. FHLMC substituted risk-reducing portfolio diversification for risk-eliminating federal guarantees.

For GNMA and FNMA, the federal government lent its authority to their operations, and FHLMC, in mimicking them, acquired their patina of government insurance. Beyond the portfolio diversification, as a last resort, the Housing Act of 1968 had also authorized the Treasury Department as a “backstop,” or buyer of last resort to the market, enabling it to buy up to $2.25 billion of FNMA mortgage-backed securities if they could not be sold.38 Otherwise, FNMA was considered to be private—paying taxes and earning profits.39 But the amount the Treasury could “backstop” was more important symbolically than practically, as it amounted to only about half of FNMA’s annual mortgage purchases in 1972. Investors wanted the reassurance that the unlimited tax-collecting resources of the federal government stood behind the securities, even if, legally, there was not an unlimited backstop. While GNMA and FNMA announced in their publications that the “full faith” of the U.S. government stood behind their issues, the reality fell far short of the promise—but for investors, it was close enough. Dangling promises, diversified portfolios, and foreclosable houses convinced many investors.

The mortgage-backed security had come into its own and quickly began to define how mortgage funds flowed in the United States. By 1973, FHLMC was buying three times as many conventional mortgages as federally insured mortgages—nearly $1 billion in conventional mortgages.40 The next year, 1974, FHLMC further doubled its conventional mortgage activity to nearly $2 billion and shrank its purchases of federally insured mortgages to $261 million.41 This rapid expansion into an uninsured market was made possible through FHLMC’s assiduous mimicry of the debt instruments of GNMA and FNMA, which continued through 1972 to deal primarily in federally insured mortgages. By the end of 1973, FNMA was, next to the Treasury, the largest debt-issuing institution in U.S. capital markets.42

Mortgage-backed securities rescued the mortgage banking industry and preserved the easy access to mortgage funds that middle-class Americans had come to expect. Capital markets became a central source of funds, as the older institutional investor and small depositor arrangements had collapsed in the face of rising interest rates and shifting savings practices. By 1970, withdrawals at savings and loan institutions exceeded deposits nearly every month.43 The president of the Mortgage Bankers of America, Robert Pease, declared at their annual convention that, “except for FNMA, there is almost no money available for residential housing. We are in a real honest-to-goodness housing crisis!”44 On average in 1971, $50 million worth of mortgages flowed from the capital markets through mortgage-backed securities into American housing each month.

In both the cities and suburbs, mortgage-backed securities provided new sources of mortgage funds. While direct mortgage assignment collapsed, mortgage-backed securities provided the financing to dramatically increase the new housing programs in America’s cities. Federally subsidized mortgages, resold as FNMA mortgage-backed securities, propelled the American building industry in 1970, accounting for 30 percent of housing starts (433,000) and 20 percent of the mortgage debt increase.45 In the first year of sales, GNMA issued over $2.3 billion in mortgage-backed securities, funneling money back into federal housing programs.46 Typifying the connection in many ways, the first company to bundle enough mortgages for resale, as discussed earlier, was Associated Mortgage Companies, Inc., whose advertisement for its pool of mortgages, “Ghetto ready,/ghetto set,/go!,” illustrated the explicit connection between funds for inner-city America and mortgage-backed securities.47 Mortgage company lending surged as well, providing 90 percent of FNMA’s purchases.48 By January 1973, mortgage companies originated more conventional mortgages than FHA or VA loans.49 Moving away from a reliance on bank deposits, the mortgage industry had been rebuilt atop a new foundation of securities.

The mortgage-backed securities also promoted the lending of mortgage dollars even further down the economic ladder, to those borrowers outside the federally subsidized programs. By November of 1972, FNMA had begun to emulate FHLMC, buying mortgages that covered up to 95 percent of a house’s price.50 FHLMC had offered such mortgages earlier; though the two firms had been created and privatized to compete with one another and to serve different kinds of financial institutions, competition drove them both toward similar lending programs. Mortgages with as little as a 5 percent down payment could now be repackaged and sold as a security. FNMA actuaries calculated that the rate of default on a 95 percent mortgage was three times higher than on a 90 percent mortgage. The higher risk required a higher yield, but investors trusted the U.S. government to make good on the payments, even when the American borrowers could not. Yet, the assurance of payment was not sufficient to draw pension funds to invest in the mortgage-backed securities in the amounts that the creators of the securities had imagined they would.

Mortgage-backed securities, by the mid-1970s, sold in great numbers, but not exactly as the framers of the instrument had intended. While pensions had bought over half of the bond-like mortgage-backed securities (52.72 percent), such bonds were no longer sold after the first few years, and pension funds resisted buying the far greater volume of pass-through mortgage-backed securities, which the savings and loan banks bought instead.51 As Senator Proxmire remarked in Congressional hearings on the secondary mortgage markets, the increase in mortgage-backed security buying by pensions, while “commendable,” was still a low relative share of the market.52 Pension funds, intended to be the primary source of investment for mortgage-backed securities, accounted for only 21 percent of FNMA mortgage-backed security ownership.53 Though GNMA actively sought investment from pension funds, as late as 1975 such funds accounted for only 8.29 percent of that year’s purchases of pass-through mortgage-backed securities. Savings and loan institutions bought 41 percent of the pass-through mortgage-backed securities. Pension funds’ share of purchases rose, but did not take on the leading role policymakers had hoped for in providing a new source of mortgage capital. FNMA and FHLMC did, however, provide leadership to the U.S. mortgage market. With the creation of mortgage-backed securities, FNMA’s centrality to American mortgage markets increased, sometimes supplying by the mid-1970s as much as half of all new mortgage funds in a given quarter.54

Mortgage-backed securities offered institutional investors stable, bond-like investments in mortgages and provided American borrowers a growing source of mortgage capital. Low-income mortgage lending, funded through those mortgage-backed securities, contained the possibility of giving inner-city renters a stake in their cities to quell, as legislators hoped, the urban unrest. While low-income mortgage lending, in the Section 235 program, quickly fizzled in scandal, not to return in great numbers again until the end of the century, mortgage-backed securities equally quickly assumed a central role in the economy. Savings and loan banks bought these early mortgage-backed securities, substituting their easy administration, low risk, and higher yields for their own mortgages, which did not add, on net, to the level of available mortgage capital. Pension funds—the investors for whom the mortgage-backed security was constructed—still remained minority buyers, but through changes over the next few years would find other ways to invest in American mortgages.

Home Equity Loans and Adjustable-rate Mortgages

While critics gaped at the rising levels of outstanding debt in the late 1970s, economists and social critics always seemed to exclude mortgages from these calamitous computations. These numbers were supposed to reflect the dangerous debt, not the responsible debt. Mortgages, after all, were good debt, helping Americans “own” their homes. Home owners “built equity” by repaying their principal—along with the interest on the debt—every month. And if the value of the home rose, as home values had in every year since the Great Depression, then home owners would reap 100 percent of that increase. Houses were the easiest way for people to leverage their equity—multiplying the reward on an increasing value of an asset. Though home owners paid interest on the mortgage, they could get the entire increase in house price. While borrowing on the margin to buy stocks was seen as risky, buying a house with a mortgage was seen as prudent. Homes, for most Americans, were the only kind of financial leverage to which they could have access. For many, such leverage paid off handsomely in the late 1970s. While the average price of houses doubled between 1970 and 1978, the overall consumer price index rose only 65 percent.55 Home owners’ equity nearly doubled from $475 billion in 1970 to $934 billion in 1977.56 Housing prices rose faster than the prices of other consumer goods. The inflation-driven rise in house prices provided a broad spectrum of home owners with substantial equity. This paper wealth of equity, however, mattered little since home owners of the 1970s could not use it without selling their home.
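A worked example with hypothetical figures shows how a mortgage leveraged a home owner’s equity: the owner put down only a fraction of the price yet captured the entire rise in value (interest costs are ignored here for simplicity).

```python
house_price = 40_000
down_payment = 8_000           # the owner's equity at purchase (20 percent down)
appreciation = 0.25            # suppose the house rises 25 percent in value

gain = house_price * appreciation          # $10,000 increase belongs entirely to the owner
return_on_equity = gain / down_payment     # the gain measured against the owner's own money

print(f"house gains ${gain:,.0f}; return on ${down_payment:,} of equity: {return_on_equity:.0%}")
# -> house gains $10,000; return on $8,000 of equity: 125%
```

That gain, however, remained on paper; before home equity loans, it could be reached only by selling the house.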

Bankers first offered home equity loans in the 1970s to fill just this need. Home equity loans made the value of a home owner’s house more accessible. The equity could be spent while still living in the home. Second mortgages had existed since the nineteenth century—though they became less common with the expansion of FHA loans, which forbade such “junior mortgages”—but home equity loans were more like credit cards than a junior mortgage.57 Home owners could arrange a line of credit and then borrow up to that limit as they liked, repaying the debt irregularly. With flexible access to the credit line, home equity allowed consumers to move money in and out of their house as they saw fit. Easy access to home equity meant that home owners could use the equity of their house to consolidate their other debts, and unlike credit cards, if the borrower did not repay the debt, the lender could foreclose on the house.

The artificial distinction between non-mortgage and mortgage debt, underpinned by this idea of inevitably rising house prices, obscured the ever-growing equivalence between these forms of debt, and legitimated home owners’ borrowing against the value of their houses. Borrowing against a house, on some level, required less financial reasoning than comparing two credit card offers. Comparing interest rates required mathematical skills to calculate the costs and benefits of switching, and the answer was strictly numeric. Borrowing against a house was rooted as much in ideas of ownership as in cold calculations. Home owners already “owned” the equity. It was theirs to spend. The feeling of ownership made the decision easier than the choice between credit cards, and less a matter of numbers. Yet, the danger of foreclosure remained. Even in 1983, a banking journal wrote that “the public hasn’t taken too kindly to resales, refinancings, and second mortgages.”58 Caught between “the conflicting desires of minimizing taxes and owning their homes outright,” many debtors resented the home equity loan, even as they took greater advantage of it.

These home equity loans were unlike traditional fixed rate mortgages in other ways too. At the center of lenders’ innovations in the late 1970s was the floating interest rate. In an era of stable, low interest rates, like the postwar period, lenders could comfortably extend credit at a fixed rate of interest. The sharply rising interest rates of the 1970s, however, made many banking practices unprofitable. Lending long-term mortgages at low rates while forced to borrow from depositors at high rates, bankers sought out a new way to lend money. A floating interest rate solved their problem. Mortgages with adjustable rates allowed banks to lend money without incurring interest rate risk. Such adjustable rates shifted the risk of a rising interest rate to the borrowers, who, with incomes that were themselves largely fixed, would be even less able to weather such a shift in their payments than institutions.
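A small sketch with invented rates shows the squeeze, and how the floating rate shifted it: as the bank’s cost of funds climbs, the spread on a fixed rate loan collapses, while an adjustable rate loan resets and passes the increase on to the borrower.

```python
loan_balance = 50_000
FIXED_RATE = 0.07          # rate locked in when the mortgage was made
ARM_MARGIN = 0.02          # hypothetical spread an adjustable loan charges over the cost of funds

for cost_of_funds in (0.05, 0.08, 0.11):    # illustrative rising deposit rates
    fixed_spread = loan_balance * (FIXED_RATE - cost_of_funds)   # fixed-rate lender absorbs the squeeze
    arm_rate = cost_of_funds + ARM_MARGIN                        # adjustable loan rate resets upward
    arm_spread = loan_balance * ARM_MARGIN                       # adjustable lender keeps its spread
    print(f"cost of funds {cost_of_funds:.0%}: fixed-rate lender earns ${fixed_spread:>7,.0f}/yr, "
          f"ARM lender earns ${arm_spread:,.0f}/yr while the borrower's rate rises to {arm_rate:.0%}")
```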

In the early 1980s, adjustable rate mortgages (ARMs) and secondary markets made commercial banks’ re-entry into mortgages profitable again—and easier.59 If banks wanted mortgages in their portfolios, ARMs allowed them to hold those mortgages without interest rate risk. If they wanted a quick resale, secondary markets, including mortgage-backed securities, were deep and easy to use, which was necessary if banks were to lend money for mortgages. While bankers embraced the variable rate mortgages, relatively few borrowers did. Attempting to switch borrowers from fixed to variable rate mortgages, banks offered introductory teaser rates, as much as 5 percent lower than the market fixed rate mortgage.60 Despite teaser rates, ARMs comprised only 11 percent of all mortgages in 1984.61

Fixed rate mortgages continued to be the most popular mortgages for borrowers, but presented unacceptable risks for lenders. While it was possible for bankers to offer fixed rate mortgages and hedge the risk of interest rate changes through derivatives, such hedging was outside the skills of most bankers. Even the slightest miscalculation or misunderstanding of how the derivatives functioned could expose a bank to serious losses. Few commercial bankers could carry out a complex hedge strategy against interest rate fluctuations. While mortgage companies with limited capital had securitized mortgages since the early 1970s, banks increasingly used mortgage-backed securities to reduce their interest rate risk. By securitizing the mortgages, banks could collect a steady, interest-rate-independent stream of servicing income and leave the risk of interest rate fluctuations to someone else.62 Once banks began to resell loans, the advantages were overwhelming. Only a third of commercial banks, in 1984, resold mortgages to the secondary markets.63 But those that did sell sold nearly all—90 to 100 percent—of their mortgages.64

If banks sold off fixed rate mortgages whenever possible, they held onto home equity loans. While lenders struggled to move borrowers into variable rate mortgages, only 4 percent of creditors offered fixed rate home equity loans.65 Forty percent of creditors even offered interest-only loans, unheard of at the time in standard mortgages, because the principal was never repaid.66 As one banker in Fort Lauderdale remarked, “the yield is so good on these [home equity] loans that my parent company doesn’t want to sell any.”67 More than half of all bank advertising dollars were directed at home equity loans by 1986.68 Eighty percent of home owners knew about home equity loans and 4 percent of home owners had them.69 Large consumer finance companies, like Household, also began in 1980 “redirecting assets from less profitable areas into more profitable activities[,] in particular, real estate secured loans.”70 In the next two years, home equity loans, with variable rates, rose from 34 percent of Household’s portfolio to 50 percent. Household’s reallocation of capital was understandable since it realized a return on its loans of 5.6 percent.71 By the early 1980s, faced with a rising cost of funds, variable rate home equity loans appeared ideal, even for the larger banks. In a joint interview with leading bankers, the consensus on the great challenge to consumer banking was the same: the cost of funds. George Kilguss, senior vice-president of Citizens Bank, remarked that “unless you have variable-rate installment loans, you run into a problem.”72 Kilguss expected Citizens would begin to offer an “open-end credit line with a variable rate in 1983. Equity mortgages will secure these lines, which we expect to be large.” Home equity loans offered banks a way to offer consumers variable rate credit, which solved their cost of funds problem, and offered banks more secure collateral. While banks explored the possibilities of secured lending, they also expanded the boundaries of unsecured lending through innovations in credit card lending.

Collateralized Mortgage Obligations, Tranches, and Freddie Mac

Extending the pass-through mortgage-backed security into other forms during the 1980s, financiers opened up the financing of consumer mortgages. While pass-through mortgage-backed securities offered investors a more bond-like investment, they were still not bonds. And the mortgage-backed securities still had other drawbacks: time and risk. Not all investors wanted a long-term investment over the life of the mortgages, yet they wanted the security and return of investing in house-backed securities. In June 1983, FHLMC, in association with the investment banks Salomon Brothers and First Bank of Boston, issued the first collateralized mortgage obligation (CMO).73 The CMO worked just like a mortgage-backed security of the 1970s except that instead of a single kind of bond, each mortgage pool was split up into several different kinds of bonds. These kinds of bonds were called “tranches,” from the French word meaning “slice.” Rather than a mortgage-backed security having a single maturity and interest rate, the CMO sliced the mortgage-backed security into multiple bonds, each with a different maturity date and interest rate. The first CMOs offered by FHLMC had three tranches, arbitrarily named A-1, A-2, and A-3, each of which had a different maturity and interest rate.74 The first tranche had a five-year maturity, the second a twelve-and-a-half year maturity, and the third a thirty-year maturity. All tranches received interest payments, but principal payments only went to the tranche with the shortest maturity.75 The shortest maturities had the lowest risk of default or prepayment, since they received the principal payments first, and they also received the lowest interest rates.

Tranches made investing in mortgages, especially the short-term tranches, a more certain investment. Early prepayment risk, when interest rates fell and borrowers refinanced, upset the calculations of investors, as did the uncertainty of defaults. The longest maturities, with the highest risk of default or prepayment, commanded the highest interest rates. In CMOs, investors could find what they needed to match their investment needs. The CMO allowed the staid mortgage to split into a variety of securities, each with a unique rate of return different from that of the original mortgage. A mortgage could be a high-risk, high-return investment. A mortgage could be a quick-paying, low-risk investment. With the right math, a mortgage could be turned into anything.
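The sequential-pay structure described above can be sketched in a few lines of Python; the balances, coupons, and principal collections below are invented for illustration. Every tranche collects interest on its remaining balance, but principal from the pool retires the shortest tranche first, then the next.

```python
# Hypothetical three-tranche CMO; balances and coupons are made up for illustration.
tranches = [
    {"name": "A-1", "balance": 30_000_000, "coupon": 0.09},   # shortest maturity, lowest rate
    {"name": "A-2", "balance": 40_000_000, "coupon": 0.10},
    {"name": "A-3", "balance": 30_000_000, "coupon": 0.11},   # longest maturity, highest rate
]

def distribute(principal_collected):
    """Pay interest to every tranche, then apply principal sequentially."""
    for t in tranches:
        interest = t["balance"] * t["coupon"]             # interest on whatever is still outstanding
        paydown = min(principal_collected, t["balance"])  # principal retires the earliest tranche first
        t["balance"] -= paydown
        principal_collected -= paydown
        print(f'{t["name"]}: interest ${interest:>11,.0f}, principal ${paydown:>11,.0f}, '
              f'remaining ${t["balance"]:>11,.0f}')

# Year one: the pool returns $12 million of principal (scheduled payments plus prepayments).
distribute(12_000_000)
```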

Slicing the mortgage-backed security into tranches expanded the potential investor pool. Institutional investors wanted investments that came due when their obligations came due, like an insurance company paying death benefits or a pension fund beginning to fund a retirement.76 Mortgages and mortgage-backed securities had only long-term maturities. With different maturity dates, the tranches allowed investors to match the dates of their obligations with the maturity of their investments. Insurance companies, for instance, would statistically know what fraction of their life insurance policies would come due, hypothetically, on January 3rd, 1987. The company would want enough of its investments to come due on that day to cover those expenses, but not more and not less. If the investment matured earlier, then the insurance company would have to find another investment, which cost money. If the investment matured later, then the insurance company would not have the cash to meet its obligations. Different investors—insurance companies, pension funds, banks, etc.—all had different time frames and the tranches enabled mortgage investments to fit these time frames, from just a few years to several decades, rather than the real timeframe of mortgage repayment.

Tranches allowed a wide spectrum of investors to put their money in mortgages and tested the limits of the charters of the government-sponsored enterprises that created them. FHLMC President Kenneth Thygerson, upon his retirement in 1985, proudly claimed to have “tried to extend the barriers to the limits of the corporation’s charter. Future opportunities will require an act of Congress, so this is the time for me to look to the private sector.”77 These limits could only be extended by using the most recent technologies. Slicing mortgage-backed securities into tranches required elaborate payment calculations—not only for Freddie Mac, but for investors as well. As Dexter Senft, a First Boston investment banker who worked on the first CMO, remarked, “these products couldn’t exist without high-speed computers. They are the first really technologically-driven deals we’ve seen on Wall Street.”78 Pricing all those tranches, and paying them, required computing power unavailable only a few years earlier. Innovations like the CMO gave Freddie Mac access to new sources of profit as well as new investors. In the three years Thygerson was with Freddie Mac, its portfolio increased four-fold from $25 billion to $100 billion, as its profits increased nearly five times. In the private sector, the rules governing Freddie Mac would not apply, allowing Thygerson, and others like him, to extend the boundaries of finance to places unimagined. That the next appointed president of Freddie Mac, Leland Brendsel, had been directly responsible for the first CMO as Freddie Mac’s CFO reflected the importance of the CMO to Freddie Mac’s future.79 Other financial institutions began to offer CMOs, substituting their own backing, or that of credit insurance companies, for the government’s. Citibank, for instance, offered its first CMO in 1985—using its large resources to smooth out the repayment schedule—by providing minimum guaranteed principal and interest payments.80

For the home owners financed through mortgage-backed securities and CMOs, however, the complex debt instruments remained largely opaque, and unimportant. When John and Priscilla Myers of Lancaster, PA, bought their $47,000 two-bedroom split-level in 1984, they went to their local savings and loan for a fixed mortgage.81 Since their local savings and loan, like all banks of the 1980s, feared holding onto fixed rate mortgages, the mortgage was resold and pooled into a CMO. For John Myers, the actual owner of the mortgage did not “make a difference . . . as long as Priscilla and I were able to get the money for the house.” The flood of money from pension funds and other nontraditional investors into mortgage-backed securities gave the Myers the ability to buy their home. While CMOs transformed the mortgage industry, and the amount of capital available to borrow, they also opened the door for turning any other kind of steady stream of income into a security. This alchemical science of turning assets into securities, after it had been perfected with the CMO, underpinned the expansion of many other debt instruments of the 1980s, such as credit cards. Soon the Myers family, and millions of other Americans, would be able to borrow much more than just a mortgage from capital markets.

Credit Cards in the 1970s

Despite their high interest rates, credit cards have fluctuated in profitability over the past thirty years, sometimes eking out only marginal profits for their issuers. While Americans always paid higher interest on credit cards than on any other form of debt, the credit card companies—“issuers” in the industry-speak—faced many challenges: the cost of funds, borrower default, finding new creditworthy borrowers, and firm competition. Making credit cards profitable required clever business strategies that challenged conventional ideas of creditworthiness as much as conventional understandings of capital. During the 1970s, those who had access to revolving credit shifted their debts away from installment credit.82 But only the most creditworthy households (35 percent) had bank cards in 1977—double the number in 1970 but still not a majority of American households. Before 1975, retailers continued to serve as the primary source of revolving credit, but such credit was limited in its use to individual stores.83 Only ten of the one hundred largest department stores in the United States took bank credit cards before 1976, continuing to rely on retail credit cards to maintain store loyalty.84

Universal credit cards that could be used anywhere had existed in various guises since the 1950s, but were not widespread until the late 1970s. In the early 1960s, merchant billing networks were proprietary—Diner’s Club and American Express each billed merchants for their own cards. In 1966, banks set up two separate networks that separated the billing of merchants from the lending of consumer credit. Bank of America, in 1966, began to allow other banks to use its billing system—BankAmericard. Also in 1966, another group of banks started the Interbank Card Association, which became MasterCard in 1980, in a bid to share the costs and difficulties of expanding merchant participation. Bank of America spun off BankAmericard in 1970 to the banks that used the system, eventually rebranding itself by 1977 as VISA. These two systems standardized merchant fees, but allowed issuers to charge borrowers whatever they liked, which allowed banks to focus on lending to consumers and not selling their card systems to merchants.85 At first member banks could only use either VISA or MasterCard, but by 1975, after a lawsuit, such restrictions were dropped.86 The proliferation of VISA and MasterCard allowed credit cards to be used by more merchants, which in turn made them more useful for consumers.

In the late 1970s, bank cards were issued only to the most creditworthy borrowers, who tended to repay what they borrowed, and consequently banks’ profits were meager. The primary challenge for credit card issuers was that the borrowers least likely to default also tended to pay off their debt every month—denying the creditors any interest income. These “non-revolvers,” who did not revolve their debt from month to month, treated credit cards like mid-century charge cards. In contrast, borrowers who might not pay off their debt every month had a higher chance of not paying at all, leading to “charge-off,” or a complete loss on the loan. Between the non-revolver and the defaulter was the much sought-after “revolver,” who paid the interest every month but not the principal. This sweet spot of revolving debt promised the highest profit rates for credit card companies, but differentiating the revolver from the freeloading non-revolvers and the defaulters was extremely difficult. For the credit card companies of the 1980s, revolvers were profitable, but lending to them went against the risk management models, premised on guaranteed repayment, that had been developed in the era of installment credit.

Issuers also faced the challenge of consumers’ expectations of how they should behave. Unlike department stores, where most consumers used revolving credit until the mid-1970s, credit card issuers sold no goods. The income from the cards was their only income. Consumers had always been told by retailers to pay their charge account bills on time, which elevated a particular connection between profit and repayment into a general moral principle. What these credit card companies called “revolvers” had historically been called “slow-payers” and had been the bane of all earlier creditors. Slow-payers tied up retailers’ expensive capital. Though borrowers who paid their debts on time still thought of themselves as “good customers,” the logic of revolving credit was different—profitable customers revolved their debt. In 1980, 37 percent of VISA customers, accounting for half of VISA’s credit volume, paid their bills in full and thus incurred no finance charges.87 Consumers believed they were good customers when they paid their bills, but they actually were bad customers, at least from the perspective of the lender.

Proper consumers, but not profitable ones, resisted the idea that they were doing something “wrong.” Non-revolvers abided by the compact created through generations of credit use, now firmly inscribed in common sense. But these proper customers lost money for the card issuers, and the easiest way to rectify that was simply to charge them a fee for their unprofitable behavior. Citibank in April 1976, for instance, attempted to charge a 50 cent fee to customers who did not maintain a balance.88 Incensed that good customers were charged fees, the House Consumer Affairs Subcommittee conducted investigations, at which William Spencer, the soon-to-be president of Citibank, told the committee, “you obviously do not believe that we are, in fact, losing money on this portion of the business. Let me assure you the contrary.”89 Whether because of threatened legislation or “competitive pressure,” as a Citibank spokesperson claimed, Citibank dropped the fee in December 1976. The fee itself, it turned out, was struck down two years later, in 1978, by the New York Supreme Court, and Citibank was forced to return the fees to its customers.90 The business solution, in the 1970s, to non-revolvers could not simply be a fee, but something that satisfied both the bottom line and the moral expectations of customers. Rather than charge fees afterwards to those who were not revolvers, credit card companies would have to find the revolvers ahead of time. The entire system by which lenders conceived of “creditworthiness” was geared, however, to screening out revolvers. To make revolving credit more profitable for bank lenders, new criteria would have to be developed.

Discrimination and Discriminant Analysis

The Equal Credit Opportunity Act, implemented through the Federal Reserve’s Regulation B, pushed lenders toward more “objective” models of lending that excluded race, sex, and other protected categories—defined by the Fed as “demonstrably and statistically sound” models. These statistically sound models were required to avoid any inkling of sex or race discrimination. Understandably, lenders quickly developed in-house models, subcontracted them, or bought them from third-party companies, in an effort to avoid legal tussles. The credit scoring system offered by GECC, for instance, promised to be “discriminating enough to accurately determine credit worthiness, yet objective enough to avoid discrimination.”91 While these models were kept secret from the public—though not from the Fed, which required proof of their objectivity—academics attempted to develop their own models using the same available techniques. If academic and commercial credit systems were similar, which the academics at the time certainly thought they were, then the problems facing academic systems would also appear in the commercial credit scoring systems.92 Of course, while the corporate models were secret, the academic models were not. And neither were their shocking findings.

If the antidiscrimination laws of the 1970s hoped to guarantee women and African Americans access to credit, the models developed in the early 1980s confirmed that not only would this be possible—it would be profitable. The models relied on a statistical method called “discriminant analysis” that, despite its exceedingly confusing name, grouped potential lending populations on the factors that distinguished them—without human prejudice. Using discriminant analysis, statisticians could group borrowers into good and bad default risks based on observable characteristics (like phone ownership or income) using data provided by lenders or credit bureaus.93

The great challenge, of course, was not just finding defaulters and nondefaulters, but revolvers and non-revolvers. These models, while better than random guessing, were not nearly as accurate as the imposing mathematical apparatus might lead one to expect. An academically constructed multidiscriminant model correctly placed 67 percent of the sample into the correct groups of revolver and non-revolver. While 17 percentage points better than a random 50-50 guess, that still left one-third of the sample incorrectly placed.94 Unlike a human loan officer, such models could be objective, and ostensibly ignore protected categories. But in general, these models worked no better, and often less well, than human loan officers in differentiating between borrowers—with or without prejudice.95 A lender could still not afford to trust these kinds of models to find the sweet spot of revolvers. The risk of default remained too high.
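For readers curious what such a classifier looks like in practice, a minimal sketch using modern tools follows. The borrower attributes, the synthetic sample, and the resulting accuracy are illustrative assumptions, not the variables or data of the 1980s studies.

```python
# A toy discriminant analysis, in the spirit of the early credit scoring models.
# The data are synthetic; real models used attributes reported to lenders and bureaus.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 2000

# Hypothetical observable characteristics: income (thousands of dollars),
# years at current address, and a 0/1 flag for phone ownership.
income = rng.normal(25, 8, n)
years_at_address = rng.exponential(5, n)
has_phone = rng.binomial(1, 0.8, n)

# Synthetic "revolver" label, only loosely related to the observables,
# so the model ends up only modestly better than a coin flip.
latent = -0.04 * income + 0.05 * years_at_address - 0.3 * has_phone + rng.normal(0, 1, n)
revolver = (latent > np.median(latent)).astype(int)

X = np.column_stack([income, years_at_address, has_phone])
model = LinearDiscriminantAnalysis().fit(X[:1000], revolver[:1000])

accuracy = model.score(X[1000:], revolver[1000:])
print(f"share correctly classified: {accuracy:.0%}")  # roughly 60-70%, versus 50% for guessing
```

The point of the sketch is the structural one made above: with only weakly informative characteristics, even an objective classifier lands well short of the precision a lender would need to single out revolvers.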

The very groups that credit card issuers tended to lend to—affluent households—turned out to be the worst revolvers. The higher the level of education and income, the lower the effective interest rate paid, since such users tended more frequently to be non-revolvers.96 The researchers found that young, large, low-income families, who could not save for major purchases, paid finance charges, while older, smaller, high-income families, who could save, did not. Effectively, the young and poor cardholders subsidized the convenience of the old and rich.97

And white.98 The new statistical models revealed that the second-best predictor of revolving debt, after a respondent’s own “self-evaluation of his or her ability to save,” was race.99 What these models showed was that the very group—African Americans—to which politicians wanted to extend credit access tended to revolve their credit more than otherwise similar white borrowers. Though federal laws prevented businesses from using race in their lending decisions, academics were free to examine race as a credit model would and found that, even after adjusting for income and other demographics, race was still the second strongest predictive factor. Using the same mathematical techniques as contemporary credit models, the academic models found race to be an important predictor of whether someone would revolve their credit. But while politicians of the 1970s worried that black Americans would be denied credit on account of their race, creditors desperate to find revolving borrowers, if they read the academic papers, found exactly what they needed. Based on the data, the most profitable group to lend to, if a bank were maximizing finance charges, would be black Americans. According to research done with the Survey of Consumer Finances in 1977, black borrowers were three times as likely as white borrowers to revolve their debts.100 Nonwhite, nonaffluent borrowers held out the promise of the greatest profit, but the models circa 1980 remained too inaccurate to base a lending program upon. The interest rates available to lenders simply could not cover the losses that such lending would incur.

For every loan, except those that will always default, there is a price that can be charged to make that loan profitable. The profit on a loan is determined by subtracting the cost of lending the money to the borrower from the price of the loan. Lenders of the late 1970s were squeezed from below by the rising cost of funds and from above by state-regulated caps on interest rates. Before lenders could pursue the sweet spot of revolving credit, they would have to find a way out of this squeeze. In 1978, lenders would finally have their chance.
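The squeeze can be expressed as simple arithmetic. The rates below are illustrative assumptions, not figures from any particular lender.

```python
# Illustrative loan economics: profit per dollar lent = interest charged
# - cost of funds - operating cost - expected charge-offs. All figures are hypothetical.
def profit_per_dollar(rate_charged, cost_of_funds, operating_cost, charge_off_rate):
    return rate_charged - cost_of_funds - operating_cost - charge_off_rate

# Under a hypothetical 12 percent state usury cap, with funds costing 10 percent,
# even a modest default rate makes the loan a loser.
print(profit_per_dollar(0.12, 0.10, 0.03, 0.03))   # -0.04: a 4-cent loss per dollar lent

# Free of the cap, a 19.8 percent rate absorbs the same costs with room to spare.
print(profit_per_dollar(0.198, 0.10, 0.03, 0.03))  #  0.038: a 3.8-cent profit per dollar
```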

End of Rate Caps and the Marquette Decision

Part of the reason that bankers were loath to lend to less creditworthy borrowers, and those more likely to revolve, was that many states capped interest rates at too low a level to overcome the costs of default. For unsecured lending, the risks were much higher than for the secured mortgage and installment lending for which the caps had been established. The rate caps established for secured lending precluded all but the most creditworthy of borrowers from getting credit cards.

Economists of the 1970s, and today, have found it difficult to understand the appeal of interest rate ceilings. For economists, the interest rate contained no moral overtones, but was simply the price of borrowing money, taking into account the risk of the borrower and the relative demand for that money. Money was like any other commodity, and its price ought to have been set by supply and demand. For many people, however, high-interest lending smacked unethically of getting something from nothing. Profit without production seemed profoundly unnatural, as it had for centuries. But profit—not production—continued to be the ambition of the capitalists. In 1979, David Roderick, the chairman of U.S. Steel—the company most aligned in the American consciousness with real production—famously pronounced that “the duty of management is to make money, not steel.”101 If that was true for U.S. Steel, it was certainly true for Citibank. The repeal of interest rate caps, which would allow interest rates to rise to the level set by supply and demand in the market, came not from the state or federal government but from the Supreme Court—refashioning the scope of the credit card.

In 1978, in a seemingly insignificant case—now called the Marquette decision—the Supreme Court ruled that interstate loans were governed by the bank’s home state rather than the borrower’s home state.102 A Nebraska bank had been soliciting credit cards in Minnesota with interest rates above the state’s cap. The Court, in a unanimous decision, ruled that since residents of Minnesota could legally go to Nebraska and borrow money there, the residents of Minnesota should not be penalized, as Justice William Brennan wrote, for “the convenience of modern mail.”103 The National Bank Act had long allowed the interest rate to be determined by the regulations of the state where a bank was located, rather than the home state of the borrower. As the case was decided by an interpretation of federal law, rather than constitutional law, however, Justice Brennan emphasized that Congress had the power to alter the law if it desired.

In 1980, a Chase Manhattan banker predicted that the credit card, for the foreseeable future, would have a “low margin that slips back and forth between profitability and unprofitability.”104 Though the high interest rates of 1980 legitimated high interest rates on credit cards, profits were decimated by the comparably high costs of funding that debt, both from operations and from the expense of capital. A Federal Reserve study in 1981 found that operating costs were 46 percent of total costs for consumer credit operations, compared to 16 percent for commercial credit operations.105 If banks were paying 8 to 10 percent for deposits in the new money market accounts, as Chase Manhattan’s Paul Tongue suggested, then banks could not reduce their interest rates much below 19 percent and still be profitable.106 As one Minneapolis banker, who ran his bank’s credit card division, remarked, “the cost of money is going nowhere but up.” For commercial banks, the problem was finding a lower-cost source of funds than consumer deposits. Between the credit controls and the negative yields, many banks sold off their credit card operations to other banks, with hundreds of thousands of accounts and millions of dollars of outstanding debt.107

For every credit portfolio sold for fear of losing money, however, that same portfolio was bought. Bankers who were bullish believed that economies of scale and correct pricing could make cards profitable. While small banks, with less than $25 million in assets, averaged 203 loans per credit employee, banks with more than $500 million averaged 1,702 loans per credit employee—or eight times as many loans, substantially lowering labor costs.108 By the early 1980s, the top fifty issuers owned 70 percent of the outstanding balances. With these savings, the largest card companies could offer interest rates 4 percent lower than their smaller competitors.109 Smaller banks could do little to compete with the interest rate difference or the lower operating costs.

But even the big banks continued to lose money because of the high costs of funds. Competitive, yet profitable, pricing—fees and interest—would be the key to making credit cards profitable, but that relied on knowing the risk potential of a borrower and finding a cheaper source of capital than savers’ deposits. Citibank’s earnings fell by one-third in the first quarter of 1980, largely from negative yields in credit cards that stemmed from the high cost of funds.110 Despite 18 percent credit card interest rates, a Federal Reserve study in 1978 found that Citibank’s woes were widespread in the credit card industry. Small banks, with deposits under $50 million, actually lost money equal to 1 percent of outstanding debt, and the largest banks, with deposits over $200 million, had net earnings of only 2.9 percent of outstanding debt. As the cost of funds increased even more in 1979, analysts expected disastrous losses and a retreat of small banks from the bank card business.111

While the Marquette decision made relocation to other states possible, the rising cost of funds made it compulsory. After the decision, large credit card issuers relocated to South Dakota and Delaware, states that lacked interest rate caps, where they could issue cards across the country. The rewards of deregulation for Delaware and South Dakota were considerable. Card receivables in Delaware grew 24,375 percent. In South Dakota receivables grew a staggering 207,876 percent. More importantly, perhaps, tax revenues grew as well, from $3 to $27 million in South Dakota and from $2 to $40 million in Delaware.112 The states acquired a new tax base, and every other state saw its sovereignty undermined by its inability to regulate credit card companies within its borders. Uber–New York Citibank moved its credit card operations to Sioux Falls, South Dakota, under duress. Moving credit card operations to another state cost money. Staff had to be trained; buildings had to be found. Such moves were limited to only the largest firms, which could afford to uproot or branch their operations across the country. Smaller banks, still constrained by usury laws, in turn felt pressure to sell their operations to larger, more efficient banks. Local politicians and customers could and did protest such moves, but without federal support such protests were effective only for a short time. Minnesota’s First Bank System, for instance, a bank holding company with ninety-two subsidiaries, planned to consolidate all of its credit card operations in a South Dakota affiliate.113 But between bad publicity and the threat of legal action by the state’s attorney, the bank stopped. Such political and consumer pressures, however, could not end the appeal of a lack of usury rates or curb the cost of funds.

Despite scattered counterexamples, like Minnesota’s First, states were largely unable to stop the movement of big banks to deregulated states. During the next five years, two-thirds of states, including Minnesota, removed their interest rate ceilings or raised them far above market levels. Following Citibank’s move, New York, no doubt fearing the loss of other major banks, removed its usury laws. As expected, in states without ceilings, where risk and return could better equilibrate, charge-offs rose alongside interest rates. In 1984, states without interest rate controls had a charge-off rate of 1.38 percent compared to 0.85 percent in states with strict interest rate controls, as lenders sought out riskier customers.114 Consumers also equilibrated their use of credit card debt and home equity debt, as higher interest rates pushed them from credit cards to home equity loans. Though such practices were still not common, economists found that in states that did raise their interest rate caps, home equity borrowers tended to use more of their borrowing to purchase consumer durables, since mortgage credit was cheap relative to credit cards.115 Rate deregulation gave millions of Americans access to credit who otherwise would have been denied, but this access came at a higher price. Without rate caps, issuers could explore new, riskier markets for credit cards, and Citibank, now in South Dakota, expanded its credit card operations to thirty-five states.116

Credit Cards and Class Performance in the 1980s

One of the riddles of credit cards is how they fell from the height of exclusivity at the beginning of the 1980s to the depth of opprobrium by the middle of the 1990s. To have a credit card defined what it was to be rich. Take, for instance, the film Trading Places (1983), where Dan Aykroyd, as the commodity trader Louis Winthorpe III, is turned out from high society and a high-paying job after his bosses, evidently amateur sociologists as well as commodities brokers, offhandedly bet on whether Aykroyd would descend to crime when deprived of his money and his social networks. After he has been turned out of the Heritage Club, arrested for drug possession, and humiliated in front of his prim fiancée, his last-ditch attempt to show that he is wealthy and upstanding is to show his credit cards to a recent prostitute acquaintance (“You don’t think they give these to just anyone, do you? I can charge goods and services in countries around the world!”) in an attempt to borrow money from her. When the cards are taken from him minutes later by a bank employee (“You’re a heroin dealer, Mr. Winthorpe. . . . It’s not the kind of business we want at First National”), the last vestige of his class identity is taken away. Credit cards were the most basic tool in his performance of wealth; without them, Aykroyd most certainly could no longer be who he wanted to be.

Credit cards in the 1980s symbolized the carefree consumption of the affluent. Instant gratification became possible on plastic. More than symbolic, however, affluence was the reality of who owned credit cards in 1980. In the early 1980s, as economist Peter Yoo points out, a household in the top decile was five times as likely to have a credit card as a household in the lowest decile.117 Today, credit cards have acquired an air of the disreputable, associated with the broke and irresponsible, but that shift occurred rather quickly during the late 1980s and early 1990s. Before this loss of status, however, credit card companies traded on their exclusivity to expand their market shares.

In the 1970s, it was easy to market cards. Without fees and with relatively few households holding cards, the plastic marketed itself. By the 1980s, however, the most reliable borrowers already had them.118 And it was hard to attract new customers because of the fees. As bank card profitability fell with rising interest rates, banks looked for ways to increase their revenue, including raising their annual fees. Borrowers, it turned out, were far more attuned to fees than to interest rates. The annual fees, while accounting for a much lower share of the bank’s revenue than the interest on revolving debt, were far more noticeable and, to the customers, objectionable. A MasterCard survey in 1981 found that in response to higher fees, 9 percent of cardholders canceled at least one card. In contrast, while 54 percent of cardholders had their interest rates go up, only 19 percent noticed. Raising annual fees simply encouraged borrowers to keep only a single card, on which they borrowed more.119 As the credit card industry concentrated and the cost of funds began to wane, credit cards expanded through the middle and upper class, relying on annual fees and a wide spread between interest rates and funding costs to provide profits. And profits for banks more than doubled, from 2 percent to 5 percent.120 By the beginning of 1983, as the prime rate fell from 20 percent to 10.5 percent and credit card rates remained high, the profitability of credit cards returned and issuers began to aggressively expand.121 Despite the desire for revolvers, credit card issuers, fearful of defaults, did not lend to riskier borrowers. Some bankers believed that the credit card market had achieved saturation in 1984, with 53 percent of households holding cards. As one banker writing in the ABA Journal noted, “the remaining households do not qualify for ownership.”122 Getting those who qualified for ownership to use a particular issuer’s card became the challenge of the mid-1980s. In some ways, the credit card business in the mid-1980s was a zero-sum game: either you used one particular card or another. With rising fees, consumers tended, more and more, to consolidate their cards.

Convincing creditworthy borrowers to get another card required more than just offering a slightly lower interest rate or fee. A creditworthy household would be besieged by solicitations and would soon stop accepting additional fee-charging cards. Luckily for credit card issuers, credit cards were as much social performance as financial tools for consumers. Issuers looking to expand their market share had to rely on the social meanings of credit cards. Whipping out the right credit card after a business dinner or in front of “frenemies” at the mall enabled consumers to display their social position. Credit cards operated doubly as financial tools and social tools, enabling the performance of class-as-wealth—the prestige card—and class-as-occupation—the affinity card. The social performance of credit enabled credit card companies to overcome the apparent market saturation of creditworthy households in the early 1980s.

The performative affluence of credit cards helped make them desired across all economic strata. And as credit cards became more commonplace, credit card companies used evermore exclusive cards—gold, black, platinum—to wrest market share from their competitors and higher fees from their borrowers. In the early 1980s, credit card issuers decided who would receive the cherished plastic, and then gave those who were the most creditworthy a way to show it off. The emergence of revolving credit “prestige” cards in 1982 marked a turning point in credit card usage, as MasterCard and VISA card issuers sought to displace American Express from the lucrative travel and entertainment segment.123 While a “gold” AmEx had existed since 1966, VISA and MasterCard credit cards had not distinguished between borrowers.

The prestige cards were more than just branding. Borrowers had higher minimum limits (at least $5,000) and sometimes ridiculously high limits of $100,000. Banks aimed the cards at individuals with high incomes and hoped that with higher income would come higher balances—and they were correct. Many bankers found that prestige cards carried double the balances of conventional cards, as their users moved their travel expenses onto the cards. Florida banker Michael Clements spoke for many: “the premium cards have exceeded all our expectations so far.”124 Clements observed that “we are dealing with a far different market than that in standard bank cards. Most of the premium cardholders have solid banking records and they aren’t afraid to spend.” These higher-income consumers paid up to $45 a year for the prestige cards, getting the travel and entertainment (T&E) services that traditionally only American Express and Diner’s Club offered, as well as the prestige of having a gold credit card.125 Providing many of the travel services that American Express traditionally provided, but with more participating merchants, the new prestige cards worked just as their issuers hoped. Merchants embraced the shift, since VISA and MasterCard charged them less than AmEx.126 Between the higher balances and the fees, the prestige cards could be very profitable. One banker calculated that his customers were earning the bank a gross margin of 23 percent.127

Such affluent customers were relatively indifferent to changes in interest rates. Though the prime rate fell by 14 percent from 1982 to 1987, credit card interest rates remained steady. The falling cost of funds would seem to allow a competitive firm to offer a lower interest rate and steal market share.128 In 1987, American Express attempted just such a move, fighting back against the revolving credit offerings of VISA and MasterCard with its first revolving credit card, the Optima. At 13.5 percent, the Optima carried an interest rate far below the national average, which was closer to 18 percent. Yet consumers proved shockingly insensitive to interest rates. Non-revolvers did not care what the interest rate was. A quarter of cardholders did not even know their interest rates, and 60 percent believed that most rates were about the same. Even banks that attempted to charge less, like the medium-sized Central Bank of Walnut Creek, California, found that customers were indifferent. When Central Bank promoted its 18 percent card, which was 3 percent less than the average California credit card, with $250,000 in advertising, it increased its accounts only from 24,000 to 26,000. In the end, Central Bank simply sold off its portfolio. In 1986, Manufacturer’s Hanover Bank, one of the largest issuers, cut its rate to 17.8 percent from the standard 19.8 percent. Other banks, except Chase Manhattan, simply ignored the move, and within three years, both Chase and Manufacturer’s went back to the standard 19.8 percent.129

Americans, it turned out, were insensitive to relatively small differences in interest rates, if they already had a card and were not worried about repayment. American Express researchers found that the hassle of switching credit cards was offset only when there was a difference of at least 4 percent. Though the Optima won customers, it did not overtake the revolving credit market of VISA and MasterCard. Indeed, the customers most sensitive to interest rates seemed to be the riskiest, whatever their other characteristics. Optima, with its lower interest rate, reported twice the default rates of VISA or MasterCard.130 By 1992, American Express had lost over $1 billion on the card. Only those worried about paying back their debts switched for less than 4.5 percent.

While prestige cards captured the American Express and Diner’s Club T&E market, card companies invented a new way to sell credit: occupational and, its close kin, educational identity. Beginning in 1982, Maryland Bank, N.A. (MBNA) pioneered the marketing of so-called “affinity cards” to sell credit cards to self-identified professional groups.131 In a little over a year, MBNA had $230 million in outstanding balances and over 200,000 accounts. By aiming high—75 percent of cardholders were professionals, and the average income was $75,000—MBNA enjoyed the high profitability that prestige cards afforded. MBNA prestige cards were used twice as frequently as regular cards and carried balances 70 percent higher.132 Identity and institutions converged in the affinity cards. Professional groups, from dentists to soldiers, enthusiastically embraced affinity cards.133 A New York bank, for instance, had a 32 percent response rate to a solicitation to military officers, many times higher than the typical credit card solicitation. College students, despite their current lack of income, emerged as a growth market as well. Anticipating higher income after graduation, issuers, with the assistance of the universities, began to offer credit cards to students. The card programs, compared to today, appear shockingly conservative. A Milwaukee banker wrote of how he captured the “attractive market segment” of college graduates in his state. A college student would receive a solicitation, and if that graduate had secured a “career-oriented job paying $12,000 or more a year,” possessed a “permanent Wisconsin home address,” and attended a Wisconsin school, then the bank would give that student a credit card, even though the student did not “qualify for credit under [the] usual criteria.”134 The alma mater frequently got a cut of revenue from such cards for providing access and mailing lists.135 While class identity flagged, perhaps, as a way to organize labor, it rejuvenated the organization of capital among the professional classes.

1986: Tax Reform and Securitization

In 1986, two events made debt more expensive for consumers to borrow and cheaper for banks to lend. While these two events, the Tax Reform Act of 1986 and the first credit card asset-backed security, had nothing to do with one another, they both pushed all forms of consumer debt, in unexpected ways, toward complete interchangeability. Though the Tax Reform Act sought to differentiate credit card debt from mortgage debt, market forces and financial innovation like asset-backed securities pushed them back together.

By the middle of the 1980s, credit cards and other non-mortgage debts were starting to be seen as something not to be encouraged. Owning a house, arguably, served a valuable social function by rooting home owners in a community, but auto loans, much less credit cards, did not. Yet taxpayers could deduct the interest that they paid on any and all consumer debt. The mortgage deduction on the income tax, commonly believed to have been intentionally invented to encourage home ownership, existed more as a residual of an older nineteenth-century idea of borrowing than as an intentional policy. When Congress created the U.S. income tax in 1913, borrowing and interest were conceived as strictly business activities. For small businesses, the personal and the entrepreneurial were indistinguishable. Borrowing was done for investment—like buying property or inventory—and this interest could be written off as a business expense. As debt became legal and widespread for personal consumption, rather than business investment, this aspect of the tax code remained unchanged. Borrowing for cars, as well as houses, remained deductible throughout the postwar period. The interest on credit cards, when they became widely available, could be deducted as well.

Until 1986, that is. Congress passed a tax reform law that phased out the interest deduction on all forms of consumer borrowing except for mortgages. Wrapped up in the tax reform act that Ronald Reagan called the “second American revolution” was a provision to end the long-standing interest deduction for nearly all types of consumer credit.136 Other features of the Tax Reform Act of 1986 received more attention at the time—the top marginal tax rate was dropped from 50 percent to 33 percent while the lowest rate rose from 11 percent to 15 percent—but leaving consumers able to deduct only the interest on their home borrowing radically altered the terrain of consumer credit, transforming the relationship between home equity loans and other forms of consumer credit, as well as making debt absolutely more expensive.

Because mortgages were still seen as a good form of debt, the mortgage interest deduction could continue, but other forms of unsecured debt would lose their protected status under the tax code. By actively discouraging credit card borrowing through the Tax Reform Act, policymakers provided home owners with a strong incentive to remove the equity from their homes to pay off their other debts. In theory, this should have lowered consumers’ interest costs, replacing credit card balances with tax-deductible, lower-interest home equity loans. In practice, however, many borrowers found it difficult not to run up the debts on their credit cards again.

Debtors used the equity from their houses to pay off their credit cards. While some politicians pushed to restrict the mortgage deduction to expenses related to housing, such constraints were ultimately dropped, as enforcement would have been impossible.137 While maintaining the deduction on home equity was justified through the American dream of home ownership, consumers in practice could use home equity loans for anything. Though in 1986 home equity lines were only a tenth the size of the second mortgage market, lenders expected that home equity loans would grow quickly in the aftermath of the Tax Reform Act.138 Not until 1991, however, would the deduction for all non-mortgage interest be fully phased out. The interest deduction, of course, only mattered to those who paid interest on their consumer credit debts—installment borrowers and credit card revolvers. For those who paid off their cards every month, there would be no interest deduction. For those who paid interest, however, the deduction would push them toward a new way of thinking about their home finance.

While the Tax Reform Act caused borrowers to think in new ways, lenders were also thinking in new directions in securitizing debt. More borrowers rolling over their debt meant lenders needed more capital to finance it. While the cost of borrowing funds had abated somewhat by the mid-1980s, the bottleneck of capital had not been fully solved. The lack of new sources of capital constrained lenders’ expansion and they searched for alternatives to the traditional saver’s deposit. Banks mostly lent money from these deposits, which continued to decline as savers put their money in pension funds, money market accounts, and mutual funds instead of the traditional savings account. Commercial banks’ share of deposit assets had fallen from half in 1949 to one-third in 1979.139 Finance companies, in contrast, could issue short-term bonds, called commercial paper, to fund their credit card operations, accounting for up to half of the lent capital of the larger firms.140

Some commercial banks, in an effort to remain competitive, mimicked the finance companies by opening non-consolidated subsidiary corporations.141 Because a subsidiary was not officially part of the bank, its operations would not appear in the bank’s accounting reports. Rather than depend on high-interest certificates of deposit (CDs) to attract more savings, banks could create a subsidiary corporation that would buy the consumer debt from the bank. This subsidiary, in turn, would issue commercial paper and use the proceeds to pay the bank for the debt. The subsidiary would then use the repayments on the consumer debt to pay off the commercial paper. This paper carried a lower interest rate than the CDs, which increased profits overall and skirted the reserve requirements faced by the bank. In Nebraska, for instance, First National Bank of Omaha created First National Credit Corporation in 1980, moving its credit card debts to the subsidiary. Though First National Bank of Omaha was only in the top 300 banks nationally for its size, it was in the top 20 banks nationally in terms of outstanding credit card debt, which grew from $40 million to $110 million in only three years.142 Capital markets and commercial paper could finance expanding consumer demand in ways that traditional bank deposits could not.143
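A rough sketch of why the structure paid off follows; the CD and commercial paper rates, and the portfolio size, are illustrative assumptions, not First National’s actual figures.

```python
# Hypothetical comparison of funding $100 million of credit card receivables
# with high-interest CDs versus commercial paper issued by a subsidiary.
receivables = 100_000_000   # credit card debt to be funded
card_rate   = 0.18          # interest earned on the receivables

cd_rate     = 0.11          # illustrative cost of attracting deposits via CDs
cp_rate     = 0.09          # illustrative cost of the subsidiary's commercial paper

margin_with_cds   = receivables * (card_rate - cd_rate)
margin_with_paper = receivables * (card_rate - cp_rate)

print(f"funding with CDs:              ${margin_with_cds:,.0f} gross margin")
print(f"funding with commercial paper: ${margin_with_paper:,.0f} gross margin")
# The two-point difference in funding cost is worth $2 million a year on this
# portfolio, before counting the reserve requirements the structure also skirted.
```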

Structured finance innovations in 1986 ended the last bottleneck on funding consumer credit by providing a direct connection from capital markets to lenders. Between the cost of funds, experience with securitized mortgages, and the growing profitability of credit cards, banks tried to develop ways to securitize credit card debt. While other forms of consumer credit had been securitized earlier, the backing assets of these other consumer debts were all installment credit with fixed repayment schedules similar to mortgages. Mortgage repayments had nowhere near the volatility of credit cards, where payments in any given month could vary from nothing to everything. A mortgage could be prepaid, but that occurred in response to observable falls in interest rates, and when a mortgage wasn’t refinanced, borrowers rarely paid more than the minimum. The ability to make the irregular repayment schedules of consumer credit appear like the regular payments of a bond represented a breakthrough in finance.

The conditions for the invention of credit card securitization had existed for a few years—the rapidly growing demand for revolving credit and the example of collateralized mortgage obligations—but it was a mistake, not a deliberate plan, that made the invention of credit card securities necessary. In 1984, Columbus-based Bank One had paired with a television shopping channel to provide credit to its clientele. The charge-off rates for this portfolio were expected to be 5 to 6 percent, with an initial crop of high charge-offs. A high default rate followed by a decline was common in all forms of lending, and older, stable accounts, called “seasoned” accounts, could be expected to be significantly more reliable. In this case, however, the fall-off never came. Charge-offs remained high (around 11 percent), and with $2 billion invested in the portfolio, executives at Bank One were desperate to find another way to fund it.144 As William Leiter, who headed the project at Bank One, stated, “our credit card portfolio was growing more rapidly than we felt comfortable funding.”145 Leiter’s public statement was, to some degree, an understatement, but Bank One figured out a way to fund that debt outside the bank, by passing the portfolio’s risk to outside investors.

When Bank One securitized $50 million in credit card receivables in 1986, a new era in consumer credit opened.146 Pricing and structuring these irregular payments had taken Bank One two years to figure out, but at the end of those two years it had found a way out of that particular situation and, more generally, out of the capital bottleneck for credit card issuers. Securitizing credit card debt moved the debt off the books of the bank, treating the credit card debt like a sold-off mortgage and forgoing the need for new deposits. Securitization allowed banks to expand their lending much faster than their capital would ordinarily allow, and to do so without putting their own money at risk. The Bank One securitization augured another aspect of credit card funding that went unreported in the trade press.

The securitization of this credit card debt allowed Bank One to fund its risky portfolio from capital markets, retaining the profitable fee income for itself. Risky lending and securitization demanded one another. Letting capital markets fund the debt, and take on the risks of default, allowed Bank One to focus on servicing the cards and rapidly expanding its customer base. While banks had expanded their deposits since the 1960s by selling negotiable CDs, securitization allowed them to directly sell assets. Though investors in the security would get the interest and principal payments, the bank would still receive servicing income.

Within a year, other card companies issued their own credit card–backed securities, or “card bonds,” and brought closer the relationship between investment banks, which facilitated the issue of stocks and bonds, and commercial banks, which lent money to businesses and consumers. Banking institutions, formally separated since the Glass-Steagall Act of 1933, began to work ever more closely together. The credit card–backed security required the skills and assets of both commercial banks and investment banks, in this case Bank One and Salomon Brothers. Not to be outdone, by January of 1987 MBNA securitized its first pool of credit card receivables through the investment bank Morgan Guaranty.147 Unlike Bank One’s security, which paid a fixed rate, MBNA’s security paid a floating rate pegged to the London Interbank Offered Rate—a standard internationally recognized interest rate. Such a feature eliminated the interest rate risk in the security, something that could threaten the value of a fixed-rate security like a traditional mortgage. But the riskiness of unsecured debt, compared to the secured debt of mortgages, forced the creators of card bonds to look for ways to decrease the risk to investors. The tranching of collateralized mortgage obligations was not, by itself, enough to get the card bonds to AA or AAA investment-grade ratings. Credit insurance made up the difference in risk and, in turn, made securitization possible. Insurance companies insured the portfolio against calamitous default. Only by insuring the security against loss could banks get the necessary AA or AAA credit rating that most institutional investors needed.148

For the largest holders of credit card debt, securitization offered an easy way to expand their lending. Citicorp, by the mid-1980s, was by far the largest holder of bank credit card debt, with more than $10 billion in receivables—double the holdings of runner-up Bank of America.149 Through a securitization deal with Goldman Sachs, Citicorp could move some of those billions off its books and into the market, freeing up capital for other investments.150 Securitization could also increase income. First Chicago began to securitize its credit card receivables more frequently through the late 1980s and increased its operating earnings by 22 percent in just one quarter of 1989. Credit card fee revenue increased, in one quarter, from $54.4 million to $82.7 million.151 Securitization offered higher income and freed up capital, allowing credit card issuers to finance ever-higher levels of outstanding debt. While at first investors were “leery of these assets,” as the Morgan Guaranty treasurer who headed the issue remarked, they soon overcame their initial hesitation.152 Though the debt was unsecured, the repayment proceeded steadily.

While securitization expanded, subsidiary banks, developed in the early 1980s, persisted. In 1988, Bank One, which had pioneered securitization, still resold $1 billion of its credit card debt to a subsidiary, Banc One Funding Corp, which issued short-term commercial paper to finance the debt.153 Credit card securitization had been created to solve one particular problem, but commercial paper remained the solid standby. Commercial paper, though it did not transfer the risk of default to market investors, remained cheaper and more flexible than securitization. Commercial paper markets were deep, allowing commercial paper issues to be easily resold, which gave investors the liquidity that they prized. Without that liquidity, card bonds commanded a premium. Banks could move large or small amounts of receivables off their books to the subsidiary, which could flexibly issue paper against the receivables. Securitization, with a shallow market and higher transaction costs, still posed obstacles. But that would change quickly, as a recession collided with changes in banking regulation to bring securitization to the fore of consumer debt financing.

While in the late 1980s securitization offered a clever maneuver to find additional funds, by the early 1990s it became necessary for banks’ profitability if they were to comply with the new regulations resulting from the Basel Accord.154 Securitization was not required by Basel, but to comply and still profit, securitization was necessary. The Basel Accord, an international banking regulation agreement among the G-10 countries under negotiation for most of the 1980s, required banks to hold enough capital to secure their loans, but with a twist: the amount of capital a bank would have to hold depended on the riskiness of its loans, so-called “risk-weighted capital.” Like the risk models developed for consumer lending, banks would hold different amounts of capital for different default risks of loans. The capital ratio would be determined not by the absolute ratio of capital to loans, but by the weighted ratio. Loans were multiplied by their riskiness. Buying a U.S. treasury bond would have a zero weight, as it was considered the safest of bonds, while lending to a corporation would have a 100 percent weight.155 The full range of investments had risk weights defined by the agreement, but those weights did not reflect an objective reality. Was a business loan exactly double the risk of a home mortgage? Moreover, if a loan was deemed riskless—with a zero weight—then there would be no need to hold capital against it. Banks could buy, for instance, as many treasury bonds as they wanted without consequences for their capital requirements.

With a required capital-to-asset ratio of 8 percent, the largest banks in the United States and throughout the world would require much more capital to meet the new standards.156 Reallocating a bank’s loan portfolio to comply with the Basel Accord meant searching for ways to get as much return out of those low-risk assets as possible. Securitizing credit card loans gave the bank an effective risk weight of zero, since it no longer “owned” the loans. Similarly, banks faced a 50 percent risk weight on the mortgages they extended, while mortgage-backed securities offered a zero weight, which freed banks from the capital ratio. Securitizing the debt allowed banks to make as many credit card loans, or mortgage loans, as they possibly wanted, as if they were treasury bonds.157 The capital requirements meant to hedge risk simply pushed banks toward securitization rather than reducing their lending.
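A minimal sketch of the risk-weighting arithmetic described above follows; the weights and the 8 percent ratio come from the text, while the portfolio amounts are invented for illustration.

```python
# Risk-weighted capital under the Basel Accord: required capital is 8 percent
# of the risk-weighted, not the face, value of assets. Portfolio figures are hypothetical.
RISK_WEIGHTS = {
    "treasury_bond": 0.0,    # zero weight: no capital required
    "mortgage":      0.5,    # half weight
    "credit_card":   1.0,    # full weight, like a corporate loan
}
CAPITAL_RATIO = 0.08

def required_capital(portfolio):
    """portfolio: dict of asset class -> dollars held on the balance sheet."""
    rwa = sum(amount * RISK_WEIGHTS[asset] for asset, amount in portfolio.items())
    return CAPITAL_RATIO * rwa

on_balance_sheet = {"credit_card": 1_000_000_000, "mortgage": 500_000_000}
print(f"${required_capital(on_balance_sheet):,.0f}")    # $100,000,000 of capital tied up

# Securitize the credit card loans and the mortgages: they leave the balance
# sheet, carry an effective weight of zero, and free up that capital.
after_securitization = {"credit_card": 0, "mortgage": 0}
print(f"${required_capital(after_securitization):,.0f}")  # $0
```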

In contrast, off–balance-sheet entities such as non-consolidated subsidiary corporations like Banc One Funding Corporation would have a risk weight for their lending that would have to be capitalized against. The risk weight for these loans would be 100 percent. The new capital rules made securitization the clear winner over subsidiary corporations, despite its higher costs. Instead of acquiring new capital, or restricting lending, securitizing debt allowed banks to comply with the new requirements. Federal accounting regulations, still sovereign in the United States, could have been even stricter, but they were not. Pushing loans off the books into securities, according to Generally Accepted Accounting Principles (GAAP) conventions, lowered the required capital ratio. The confluence of GAAP and Basel made greater securitization necessary for banks. By increasing capital requirements, the regulation unintentionally accelerated and solidified the shift to securitization. Instead of locking up 8 percent of their capital, banks could avoid locking up any capital, since securitization required none. Increasingly common, the now-deep card bond markets transformed the American debt industry in other ways as well.

After the tumultuous years of the late 1970s and early 1980s, when returns hovered around zero or below, credit cards had become breathtakingly profitable. Between 1983 and 1990, according to a General Accounting Office (GAO) study, the average return on assets—which for banks are primarily loans—was 0.57 percent.158 The return on credit card loans was 4.68 percent—or 8.2 times as great! While credit card loans generally had higher operating costs than conventional business loans, with 8.2 times the return they more than made up the difference. Banks frantically sought new sources of capital, like the card bond market, because the profits of credit cards were so much higher than those of their other investments. Such profits, as always, lured in new competitors, who could also rely on these new methods of securitization to fund their loans. As conduits for capital, rather than sources of it, such lenders no longer faced the greatest barrier to entry in lending—money to lend.

The Rise of Pure Play Credit Card Companies

With the expanding market for securitization, credit card banks with no other business—so-called “pure play” or “monoline” issuers—found that their business model worked and expanded rapidly. Securitization lowered, if not eliminated, the capital barrier to lending.159 Fast-growing issuers like First USA, Advanta, and Capital One all used securitized receivables to fund their growth, rather than internal capital. Such pure play companies, unlike commercial banks, did not have access to deposits; they had no other source of funding. Credit rating agencies rated them BBB, which, if they had not used tranched securitization, would have made their funding prohibitively expensive.160 Tranched securities were the only way that these companies could get AAA ratings on at least part of their securities. First USA, for instance, securitized two-thirds of its debt, according to its CFO Peter Bartholow. In 1994, as First USA’s portfolio doubled to $11 billion, its securitized debt rose from $2.6 billion to $7.2 billion.161 With tranches, First USA’s rival Advanta could offer a tranche of its security as a AAA bond and pay only 0.18 percent more than the London Interbank Offered Rate (LIBOR). Advanta would have to pay a much higher rate on its last tranche, and use credit default insurance, but for the other tranches the rate was very low. Instead of being unable to sell any of its debt as a BBB bond, it could sell nearly all of its debt with an AAA rating. By compressing the bulk of the risk into the last tranche and then using insurance to offset that risk, companies with few assets could sell AAA bonds. Card bonds could get higher ratings than an issuer’s own debt.162 Investor demand for high-quality bonds outstripped supply, said Murray Weiss, senior vice president at the investment bank Lehman Brothers, leading many investors to “cross-over buy” similarly rated credit card receivables. High investor demand kept the price of the securities high. The risk premium over treasury securities fell to an all-time low, as investors began to believe in the AAA ratings accorded such debt and the spread narrowed between corporate AAA bonds and credit card AAA bonds.163 Advanta, using such cheap funding, securitized all of its new debt in 1994—$2 billion.164 Without securitizing their debt, these companies could never have expanded so rapidly.
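A stylized sketch of the tranche arithmetic follows. Only the 0.18 percent senior spread over LIBOR comes from the text above; the deal size, the tranche split, the LIBOR level, and the junior-tranche rate are assumptions.

```python
# Stylized two-tranche card bond: a large senior tranche sold as AAA at a thin
# spread over LIBOR, and a small junior tranche that absorbs losses first and
# therefore pays much more. Sizes, LIBOR level, and junior rate are hypothetical.
libor = 0.055                    # assumed LIBOR level
deal_size = 500_000_000

senior_share, senior_rate = 0.92, libor + 0.0018   # AAA tranche, LIBOR + 18 bps
junior_share, junior_rate = 0.08, libor + 0.0400   # subordinated tranche, assumed rate

blended = senior_share * senior_rate + junior_share * junior_rate
print(f"blended funding cost: {blended:.2%}")       # ~5.99%, far below BBB borrowing costs
print(f"annual interest owed: ${deal_size * blended:,.0f}")
```

However the particular numbers are set, the design choice is the one Advanta exploited: concentrate the default risk in a small junior slice, insure against catastrophe, and raise the bulk of the funding at near-AAA rates.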

By 1995, a quarter of all credit card receivables were securitized.165 The new pure play companies, relying on securitization to fund their expansion, grew much faster than the rest of the credit card industry, with 32.6 percent growth in receivables in 1995, accounting for more than half of the growth of the entire industry ($31.5 billion of $56.5 billion total growth in receivables outstanding).166 In 1990, MBNA was the first pure play credit card issuer to go public, and within a few months its stock price doubled.167 While these firms grew quickly, they also realized lower profits, receiving only 10.9 percent interest income compared to the industry average of 12.3 percent. Relying on securitization rather than deposits cost the new companies 1.5 percent more in funding than the banks paid.

With securitization, capital—that rarest of decidedly unnatural resources—suddenly became plentiful. In 1990, 1 percent of U.S. credit card balances were securitized. By 1996, 45 percent were securitized.168 Of the increase in balances from 1990 to 1996, from $165 to $395 billion, securities funded 77 percent of the difference. Without card bonds, credit card debt could never have grown to the scale it achieved in such a short time. By 1997, 51 percent of credit card debt was securitized, marking a turning point when more debt existed in capital markets than on banks’ balance sheets.169 Americans charged three times as much in 1998 as in 1988. Investors, not bankers, lent Americans this money. Unlike traditional investments, however, the borrowed money created no additional production, only additional demand. Capital invested in card bonds was not turned into shoe factories, but shoes. Card bonds necessarily crowded out productive investments. Every dollar funding a credit card was, literally, a dollar not funding a new factory—or any other productive investment. At the same time, card bonds created a supply of capital for consumer credit that allowed lenders to lower interest rates and intensified competition among firms, enabling ever more marginal borrowers to gain access to greater amounts of credit.
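The 77 percent figure follows from the securitized shares cited at the start of this paragraph; a quick back-of-the-envelope check, using the rounded published numbers:

```python
# Back-of-the-envelope check of the 77 percent figure, using the shares above.
balances_1990, securitized_share_1990 = 165e9, 0.01
balances_1996, securitized_share_1996 = 395e9, 0.45

growth_in_balances   = balances_1996 - balances_1990                       # $230 billion
growth_in_securities = (balances_1996 * securitized_share_1996
                        - balances_1990 * securitized_share_1990)          # ~$176 billion

print(f"{growth_in_securities / growth_in_balances:.0%}")  # ~77%
```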

Figure 7.4. Charge Account Volume by Year. Even by 1980, retail charge volume still outstripped charges on American Express, Visa, or MasterCard. By 1985, retail charge cards had lost their dominance, and by 1990 even American Express had a greater charge volume. Source: Wertheim Schroder & Co., “US Credit Card Industry: Second Annual Review-Industry Report,” 9–10. In billions of dollars.

D’Amato’s Gambit

In 1991, the economy was once again in recession, and while no decisive policies were implemented, politicians still had to eat, preferably at well-connected fundraisers. At one such fundraiser, on November 13, 1991, President Bush, speaking to an audience of New York Republicans, made an off-hand comment, unapproved by his speech writers, that he would “like to see the credit card rates down,” believing that lowered rates would help the economy recover.170 This economic reasoning was, at best, obscure, but the vague sense that high interest rates were to blame possessed a broad appeal, as the recession heightened the criticism of credit cards as unnecessary and expensive indulgences. The junior senator from New York, Alfonse D’Amato, heard the remark and, facing a tough re-election race, seized on it as a political opportunity. The next day, as the houses of Congress negotiated the final stages of a bill to resolve the savings and loan (S&L) crisis, D’Amato, along with then-Democrat Joseph Lieberman of Connecticut, proposed an amendment to the S&L bill to cap credit card interest rates nationally at 14 percent.171 In the Senate, the amendment passed quickly, 74 to 19. Lowering credit card rates seemed to fit the political calculus of both Democrats and Republicans.172

While the measure passed quickly, as the reality of what was about to happen sank in, pundits, lobbyists, and policymakers just as quickly denounced the move. Treasury Secretary Nicholas Brady cautioned that cutting back credit access, since issuers would no longer be able to lend to riskier borrowers, would reduce consumer demand exactly when it was needed to strengthen the economy.173 Such a cap, Brady claimed, would result “in credit cards which are elitist.”174 According to the American Bankers’ Association, restricting lending to customers who would be profitable to lend to at 14 percent would have eliminated between $60 billion and $150 billion, or 26 to 66 percent, of the outstanding credit card debt. Such a violent reduction in outstanding debt would have, according to industry spokespeople, collapsed the profits of leading banks and pushed the United States further into recession. The Wall Street Journal estimated that such a cap would have turned Citicorp’s earnings of $1.50 per share into a loss of $0.58 per share.175

Issuers would also have been forced to unsecuritize their debt, nightmarishly bringing it back onto their books. Citibank, for instance, would have found itself 0.75 percent below its capital requirement. Such a cap would have destabilized capital markets as well as issuers. For pure play issuers, who used securitization more heavily, the cap would have spelled demise. Recognizing the danger of destabilizing a banking system already weakened by the S&L crisis, House Speaker Thomas Foley, through procedural adeptness, managed to split the measure from the larger banking bill and let it die in the House. The session ended without a vote, which was, by then, seen as the sensible thing to do. While lower interest rates might have pleased many cardholders, reducing the number of Americans with credit cards by two-thirds would certainly have offset the political gain from reducing interest rates.

Such high rates, D’Amato and his allies surmised, could only be possible in the absence of competition. If an industry was competitive, it was deemed efficient, and therefore fine. Only monopoly, evidently, could justify regulation. While 6,000 firms issued credit cards, 57 percent of the balances were controlled by the top ten firms. Citicorp alone had 18.3 percent in 1992.176 The exact same rate—19.8 percent—at seven of the largest ten banks, D’Amato asserted, was damning evidence of oligopolistic collusion, demanding, at the very least, an extensive investigation.177 Chuck Schumer (D-NY) pushed for an 18-month study period, which the banking industry opposed, since the uncertainty surrounding the outcome of the investigation would make securitizing debt, in the meantime, much more difficult. Nonetheless, the GAO launched an investigation whose results were published in April 1994.178 The credit card industry, it found, was competitive despite the stability of its interest rates.179 With six thousand issuers nationwide, it was hard to imagine coordination.180 The seemingly collusive interest rates belied a frenzy of competition.

The low barriers to entry fostered by securitization allowed competition between issuers, despite the staggering sums seemingly required to achieve economies of scale. To be profitable, the industry had to be concentrated, but a company could easily be dislodged by an upstart with easy access to capital and new ways of lending. The credit card industry in the early 1990s was competitive but concentrated. Though the largest lenders controlled the bulk of the industry, they could not easily dictate prices like an oligopoly. Ironically, the threat of a cap actually made the industry less competitive, since securitization became harder during the 18-month investigation. Investors feared that the United States would legislate a lower interest rate, threatening the returns on the underlying portfolios behind their card bonds. Securitization, in early 1992, was at one-third the level of a year earlier, with only $1 billion in securities issued compared to $3 billion.

While the interest rate cap failed, the threat of such a cap pushed banks to find ways to lower their interest rates. In 1992, the percentage of credit card accounts with interest rates of 16.5 percent or less grew from 9 to 39 percent.181 Credit card accounts with interest rates of 18 percent or more fell from 69 percent of all cards to 43 percent. D’Amato did not get his ceiling, but he fashioned a new discourse critical of credit cards that shifted the focus away from yuppie-era affluence to recession-era indebtedness, and the threat of government regulation was sufficient to push issuers to lower their rates. While the legal cap failed, a new “moral cap” succeeded, according to the head of AmEx’s Optima card division.182 Citicorp, in 1992, began to offer a variable rate credit card, whose price moved up or down with the market rather than being set by a supposed monopolist—a design that seemed calculated to placate policymakers.183 The securities that funded these cards were also variable rate. Alongside the new card, Citicorp issued a massive ($1.33 billion) offering of floating-rate credit card securities—nearly doubling the global volume of such securities and the first to be backed by variable rate credit cards.184 By 1993, market fears of a rate cap had abated, yet variable rate cards rapidly took over the industry.185

If securitization underpinned the cheap capital that allowed for a competitive market, perhaps more consequential than D’Amato’s failed attempt to cap the interest rate was the failure of the Financial Accounting Standards Board (FASB)—the group that sets the generally accepted accounting principles (GAAP)—to put securitized debt back on the books of issuing companies. In November 1994, a new FASB director proposed to change the way banks accounted for the securitization of revolving credit. Issuers would have had to hold capital against possible losses. The measure was voted down 6 to 1.186

If this measure had passed, securitization would have died. With such an accounting change, issuers could not have sold off debt as quickly as they lent it, since they would have had to grow their capital at the same rate as their loans. While doubling loans was possible, doubling capitalization was considerably more difficult. The pure play companies would have gone out of business; no alternative source of funding for their portfolios existed. The continued expansion of securitization thus rested, in part, on an obscure accounting convention. This FASB proposal, as much as D’Amato’s interest rate cap, could have quashed the expanding levels of consumer credit in the 1990s.

D’Amato’s push for a cap reflected a greater cultural ambivalence toward credit cards. By the early 1990s, credit cards were no longer seen as the playthings of the glamorous, but the trap of the profligate. The short-lived Debt game show, hosted by the iconic Wink Martindale, showed how differently credit card debt was viewed by the mid-1990s. While in the mid-1980s credit cards marked high social status, by the mid-1990s they had acquired a patina of desperation and bad choices. Paying off one’s debt seemed impossible, except through the deus ex machina interventions of game-show hosts. Produced by the Walt Disney Company, Debt envisioned a world where a tuxedo-clad Wink Martindale descended from the heavens to erase one’s debt, as fantastic a tale as Cinderella herself.187 Though credit card companies and consumer advocates condemned the show as irresponsible, 5,000 soon-to-be bankrupt debtors clamored for the show’s four hundred spots. The show was canceled after only a few seasons. Americans evidently wanted their own debt divinely erased, but did not want to watch it on television.

Reshuffling Prime and Profitability Models

If D’Amato and the FASB failed to change the rules governing lending, the recession nonetheless changed the ways different classes of Americans borrowed. While banks discovered new sources of capital to fund credit cards, Americans faced the first national recession in a decade. Higher charge-offs cut into the banks’ rate of return, with the return on credit card debt falling by a quarter to 3.55 percent in 1991.188 Unemployment rose again, as it had not since the 1982 recession, triggering a wave of defaults as well as other changes in debt practices. While unemployment rose, most Americans surprisingly cut back on their debt. Credit card companies feared that Americans’ prudent use of credit—not borrowing more while in recession—was a dire predictor of things to come. Was the recession a return to frugality?189 Payments were up, but volume was up as well. While consumers continued to spend more on their credit cards—Visa spending grew 7.6 percent in 1991—consumers paid off this increase. Unlike during the recession of 1981, the average borrower did not increase his or her debt load.190 And according to a senior vice president at First Union Bank, “it’s getting worse.”191 Revolving debt grew 15.5 percent per year in the late 1980s, but its growth slowed to 5.1 percent in 1991.192 Balances did not grow at all in 1992.

Meredith Layer, a senior vice president at American Express, lamented that “an anti-materialism movement is forming. Some call it the New Frugality.”193 The head of AT&T’s credit card operation claimed, “the age of Yuppiedom is gone.”194 “In the ‘80s, interest rates were not the topic of cocktail conversation,” the president of a credit card research firm noted, but rather “what they were spending money on with their cards.”195 While credit card executives saw a possible decline in consumption, Americans as a whole felt less prosperous. A lifestyle survey conducted by a prominent advertising company found that between 1975 and 1992, the share of households who felt their incomes satisfied “nearly all our important desires” fell from 74 percent to 60 percent.196

The new frugality, though true at an aggregate level, broke down when examined by income level. Economists found that during the 1980s, poor households had expanded their access to credit, but did not expand their debt.197 During the expansion, these poorer households reduced their debt, even as the more affluent expanded their debt. While average card balances for all households doubled from $751 in 1983 to $1,362 in 1989, poor households halved their balances from $723 to $352, using the expansion to reduce their debt burden.198 Fed economist Peter Yoo explained that the growth in credit card debt from 1983 to 1992 was from “households with previous credit card experience and with above-average incomes, not [from] inexperienced, low-income households.”199 The poor were the most prudent users of credit. Middle-class households expanded both their access to credit and their debt burdens. More than simply an increase in the number of households with credit cards, the rising debt was the result of households borrowing more intensely—average household debt rose 117 percent.200

During the recession of 1991, however, the behaviors of the poor and middle class switched. Middle-class households cut back on their debt while poor households used their newly available credit lines to increase their debt burden and weather the recession. Poor households increased their debt burden, on average, to $917, while households overall remained unchanged ($1,366). The number of poor households revolving their credit card debt increased from 54 to 72 percent. After the recession ended, behaviors across income groups converged, as poor households continued to expand their debt burdens like middle-class households, rather than reducing them as they had during the 1980s expansion. Credit card issuers, in the end, had nothing to worry about with the “new frugality.” The recession marked a turning point in the credit card borrowing habits of poor Americans, bringing their borrowing more in line with the more affluent—at least in terms of debt burden. The numbers, however, reflected the considerable stringency with which the poor had been given credit cards. Only the most prudent had been given cards, since their incomes were so much lower. It makes sense, then, that such borrowers would not run up the higher balances of the more lax, more affluent borrowers. All that would change in the 1990s, as seen not only in the labor market pressures of the recession but in the increasingly relaxed lending standards that followed it. While the debt practices of the poor and the affluent differed in the 1980s, issuers tried their best not to repeat that mistake in the 1990s—all classes would revolve their debt.

In the early 1990s, credit card issuers faced a greater challenge than potential regulation: running out of borrowers. New creditworthy—or “prime”—borrowers were in short supply by the early 1990s. Rejections of prime borrowers fell in the 1990s, from 18.5 to 7.9 percent, as lenders competed keenly for their balances.201 By the mid-1990s, Americans with good credit records had credit cards—the market was saturated. With the advent of prime market prescreening software, which quickly told lenders who would be a good credit risk, credit card companies had fully saturated the pool of creditworthy borrowers. Seventy-four percent of households had credit cards in 1994. More credit cards (313 million) existed in the United States than people.202

Running out of prime customers, credit card companies aggressively targeted other firms’ best customers, luring them away with introductory teaser rates to transfer balances. With competitors offering low interest rates and no fees, balance transfers could quickly erode a lender’s portfolio. Competition for balances, even before the recession, had been fierce. Beginning with AT&T in 1990, issuers rolled back the fees on which credit cards had depended since their inception. AT&T boldly expanded its market share with its Universal Card by being the first issuer to offer no-fee credit cards.203 Other card companies had to drop their fees or lose their customers to AT&T. In the midst of the recession, as balances stagnated, the competition grew even more intense. Bank of America, slow to catch on to the no-fee card, lost 11 percent ($955 million) of its balances from 1992 to 1993 to competitors.204 Some issuers resisted dropping their fees, attempting to offer increased benefits in exchange for annual fees, such as GE Capital’s rewards card in September 1992.205 GE’s card flopped, and it was forced to continue offering benefits but without a fee.206 Companies that did not eliminate fees saw their balances flee. The dropping of fees and the creation of teaser rates did not stop the consumer capital flight from card to card. So-called “card surfers” shifted their balances from one card to another, following teaser rates.207 The easy movement of revolving balances only hurt profits. Citicorp, for instance, lost only 2 percent of its customers in the fourth quarter of 1997, but lost 26 percent of its revenue, since those it lost carried revolving balances.208

Though balance transfers seemed an easy way to expand, they were a lose-lose proposition. The “musical chairs” of balance transfers, as one bank executive termed it, created a game that few companies could win.209 While balance transfers rapidly grew receivables, they were not the best way to sustain long-term growth. Balances transferred under introductory teaser rates could just as easily be transferred again six months later to another card. Even worse, if a borrower could never repay the debt at a higher rate and could not find another card to transfer the balance to, the creditor would be stuck with a massive default. If the customer had a choice, the balance would simply leave after an unprofitable six months; if the customer had no choice, the balance would be defaulted on.210 Such gains were short-term at best. Either way, the card company would lose.

Banks created clever promotions to hold the balances, offering declining interest rates on greater balances and increasing rebates if borrowers deferred their redemption. Rebates on credit cards, where borrowers would get a percentage of their spending back, had emerged during the late 1980s. Mellon Bank took the rebate to a new level, offering a card on which borrowers who did not claim their rebate would get an additional 5 percent of their interest back every year. After twenty years, all of their interest would be returned! Such schemes neglected a fundamental truth: customers who intensively managed their balance transfers were already financially precarious. Unfortunately for Mellon, its card attracted precisely the revolvers who believed they would never be able to pay off their bills—for whom the 100 percent rebate was attractive. Despite their prime credit ratings, borrower defaults reached 27.6 percent of receivables, compared to a normal 4 percent charge-off for borrowers with similar FICO scores.211 Mellon Bank lost nearly 30 percent of the money it lent. Revolvers who revolved for convenience made for good business. Revolvers who borrowed because they spent more than they earned—persistently—made for bad business. No model could screen for these kinds of revolvers. But computer models, in general, had begun to acquire an accuracy that had been impossible only a decade earlier, and it was on the basis of these models that lenders delved further down the risk/return curve, relying on their ability to transfer that default risk to the holders of the credit card securities and, ultimately, the insurance companies that backed those tranches.

The Seduction of the Risk Model

In 1987, Household Finance Company, by then one of the largest credit card issuers in the United States, began to segment its existing portfolio ever more finely, building on the discriminant analysis techniques of the 1970s. Carefully raising credit limits and lowering pricing to encourage high-revolving but low-defaulting borrowers, Household grew more profitable.212 But such partitions were based on past behavior and on existing customers. The question remained: how to predict future behavior, especially the behavior of non-customers? In the late 1980s, credit card and credit rating firms attempted to develop software that would predict the borrowing behavior of debtors better than the slightly-better-than-coin-flipping models of the late 1970s. Not until 1992 could commercially available software predict the future profits of a borrower accurately enough to make a lending decision. Armed with such models, however, lenders could rely on securitization to provide all the capital they would need.

These models went by a variety of names: behavioral models, risk models, profitability models. At the center of all of them was an attempt to maximize profits by lending to revolving customers while avoiding the losses of defaults. The revolving customer inhabited a narrow band between defaulter and non-revolver. Finding that sweet spot of revolving balances for each customer was extraordinarily difficult—much harder than just avoiding defaults. Behavioral models, the first to be widely used, focused on avoiding default. Fair, Isaac, the foremost credit modeling firm and creator of the FICO score, estimated that issuers used such models on only 14 percent of credit card accounts in 1989, but on 75 percent by 1992.213 To a large degree, this timing was technologically determined. In only a few years, computing power doubled, software prices fell by half, and data availability grew, allowing the vast majority of issuers to employ such techniques. Companies frequently developed their own software, often in concert with outside advisors. Banc One, for instance, turned to Andersen Consulting for help developing its proprietary system, Triumph, which used the latest data-mining techniques to price risk. But software developed outside a company was often superior. Credit rating agencies had an advantage over individual lenders; they knew all of a debtor’s history. TRW, for instance, developed its own in-house probability models that generated repayment probabilities based not only on one particular debt, but on a borrower’s entire portfolio of debts—information that individual lenders did not always have. The challenge with such software was that if one lender knew that information, then every lender could, leading to excessive lending to a borrower. Fair, Isaac, not to be outdone, released PreScore in 1992 to prescreen credit bureau data for prospective borrowers.214

When a borrower missed a payment, behavioral software calculated the probability that a borrower would resume payments or whether the missed payment was an aberration. Rather than waiting 180 days before referring an account to a collection agency, risk models helped debt collectors spot accounts far earlier than ever before.215 A good model could help a lender collect the debt before any other lenders. By focusing on risk rather than time overdue, collections could be more efficiently conducted. Rather than call clients who would probably pay anyhow—and potentially lose their business by offending them—collectors could focus on the clients who would not.

Behavioral models allowed credit card companies to be stricter with those likely to default and to appear laxer toward those who would have paid anyway. This balance of strictness and laxness was necessary to maximize the profitability of revolvers. Credit card issuers conducted in-house experiments with control and test groups to find out how their collection techniques affected repayment. Methods that were too harsh produced repayment, but also lower subsequent balances, which lowered profits. Retaining the revolvers, who might miss an occasional payment, and keeping a watchful eye on them, helped boost profits.

Calculating the expected future profit of an account depended on more than just the likelihood of default. Risk, after all, was not the same as profit. Software had to know a particular company’s costs and prices. The commercial release of software to calculate the potential profit of a borrower, rather than simply the probability of default, took until 1992.216 So-called “profitability scoring” required computing resources and data that most companies did not possess. Companies needed at least three years of detailed credit data, along with the computing power and software to analyze it. Such a service did not exist commercially before 1992, when MDS Group, the second largest credit scoring company behind Fair, Isaac, offered it for $40,000 to $60,000. Mark Argosh, a vice-president at the consulting company Mercer, estimated that profit-per-account rose 2.5 times by using such techniques.217
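In stylized form—shorthand for exposition, not the proprietary formulas of MDS Group, Fair, Isaac, or any other vendor—profitability scoring weighed the expected revenue of a revolving account against its expected losses and costs:

\[
E[\text{profit}] \approx p_{\text{revolve}}\,(r - c)\,B + F - p_{\text{default}}\,L - S
\]

where \(p_{\text{revolve}}\) and \(p_{\text{default}}\) are the modeled probabilities that an account revolves or defaults, \(r\) the rate charged, \(c\) the issuer’s cost of funds, \(B\) the expected balance, \(F\) expected fee income, \(L\) the loss given default, and \(S\) servicing and acquisition costs. The expression makes plain why risk alone was not enough: an account with a negligible chance of default but no revolving balance contributed little, while the most profitable accounts sat just inside the band where the probability of default began to climb.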

Profitability was not just about potential defaults but also potential attrition. Lenders normally lost between 10 and 20 percent of their cardholders each year to competing firms.218 Knowing in advance what the warning signs of such losses were, and which borrowers to hold on to, would save lenders a great deal of money. Keeping a customer was always cheaper than finding a new customer. Such profitability models helped lenders decide where to aim their retention efforts. Retention efforts cost money, however, and selectively aiming efforts at the most profitable could help the bottom line. While lenders and credit rating companies developed these models to handle the risk of the prime market, the models would prove to be essential after the 1992 recession and in the subsequent expansion of the credit card industry to those with less than perfect credit.

This riskier market would prove, through changes in law and technique, to be the most lucrative during the recovery. The death of the fee card forced issuers to use the profitability models to make better choices in lending. By 1996, only 2 percent of bank card revenue came from annual fees, compared to 76 percent from interest income.219 GE, for instance, provided a 2 percent rebate against charge volume. Without finance charges, such a rebate quickly became a loss. In 1996, GE instituted a “maintenance fee” for customers who had less than $25 in finance charges annually. When issuers offered to waive fees for revolvers, consumers in focus groups took it badly. Such a move appeared “self-serving” for the issuer—which it was. The pure play, securitization-driven issuers like Advanta, the tenth largest issuer by 1996, opted for a more focused approach than GE’s. Using profitability modeling, Advanta attempted to increase the profits on its non-revolving accounts—resorting to fees only as a last step.220 First USA did the same. A former account-acquisition specialist with First USA claimed that while 60 percent of new accounts began as non-revolvers, First USA was “very good at turning convenience users into revolvers.” Profits from revolving credit seemed fair, while punishing borrowers who paid on time seemed unfair. Fairness and profitability alike dictated that the sweet spot of revolving debt be made rather than just discovered. One credit card consultant opined that “the challenge is to instigate, accentuate, and accelerate desirable behavior.”221

Subprime Lending after the Recession

As the economy recovered, the business of credit cards looked very different in the mid-1990s than it had in the mid-1980s.222 While credit card debt’s relative profitability fell from the heights of the late 1980s, such debt continued to be much more profitable than other commercial bank investments, leading banks to continue to throw as much money in their direction as possible.223 Larger organizations had greater resources and expertise to devote to the risk management necessary for continued expansion. Over the late 1990s, balances continued to concentrate in fewer and fewer lenders. But, at the same time, new risk models made these large lenders confident that they could safely lend to ever-riskier borrowers.

The pure play companies had come of age. What the Federal Reserve called “credit card banks”—banks with assets primarily in consumer loans, 90 percent or more of which were credit card balances—accounted for 77 percent of all balances in 1996. These banks, in 1996, reported net earnings of 2.14 percent of outstanding balances, below the peak of 1993 but still higher than the average return of 1.86 percent on all commercial bank assets.224 The credit card in the post-recession world functioned differently than it had in the 1980s. Issuers estimated that 80 percent of new accounts were variable rate.225 In 1991, 23 percent of cards floated; by 1998, 68 percent did.226 No-fee cards were now standard.227 Though the cost of funds fell 1 percent in 1993, the average margin on interest income fell only 0.5 percent. Market competition drove borrower interest rates down. Market competition also drove borrowers to the most efficient lenders, who could cut interest rates the most, further concentrating the industry. Attrition rates continued to run 11 to 13 percent, despite all the attempts to stanch balance transfers and canceled cards. Continued expansion meant that new markets would have to be conquered.

If prestige cards had targeted the wealthy or soon-to-be wealthy, other bankers were beginning to discover how to lend to the other half. The cost of finding new borrowers had grown immensely. One direct-mail credit card vendor, marketing to prime customers, recounted that while he had gotten a 6 to 7 percent response rate in the 1980s, he now got 0.02 percent.228 The market was saturated. With such low response rates, costs per new prime customer grew incredibly.

Controlling the risk of lending to less sound borrowers—the other 26 percent without credit cards—meant relying ever-more on risk models. To keep growing, lenders had to make riskier and riskier loans. By 1995, 58 percent of households earning under $20,000 received credit card offers in the mail each month, up from 40 percent in 1993. Edward Bankole, vice-president in Moody’s structured finance group, said that “more and more of the new wave of cardholders tend to be the ones who are on the low end of the credit risk spectrum.”229 By 1998, with average losses of 6 cents on the dollar, risk management was essential to a successful business. At the same time, however, card companies keenly felt competitive pressure on interest rates and special promotions from other lenders. These twin challenges of risk and return were answered by expanding into the terrain of more risky borrowers—the so-called subprime market. Willing to pay higher fees and interest rates for credit access, those who were traditionally denied appeared to offer a lucrative opportunity when viewed through risk models, and lenders believed they could lend profitably to such borrowers.

The models made possible the expansion of lending to minority groups, for both financial and legal reasons. Lending to minorities was at the center of the subprime expansion, since this group tended to have no prior relationship with a bank. One-fifth of Americans had no relationship with a bank—the so-called “unbanked”—and that one-fifth overlapped strongly with the 26 percent of American households without a credit card. Only 45 percent of lower-income families had a credit card in 1995.230 Unbanked Americans were disproportionately African American and Latino.231 Convincing these groups to apply, and then correctly screening them for risk, would provide immense profits for the firm clever enough to figure out how to do it. The “uncarded ethnics,” as an article in Credit Card Management referred to minorities without credit cards, were seen as a risky but lucrative growth field.232 Lending to such groups relied heavily on the computerized risk models, but these groups were not like previous populations without credit cards, who had already had other forms of credit. While traditional credit card holders had developed their credit histories at department stores, these populations had no credit histories at such stores. Lenders looked to nontraditional credit records, based on payments to utilities and phone companies rather than to retailers.233 Such “thin file” customers, minority or not, made issuers rely on factors other than individual credit histories. As many as 20 percent of applicants had thin-file histories and would have been denied credit in the 1980s, but with the new models these applicants could receive credit as well. With “behavioral models and other risk-management tools,” lenders could take a “calculated gamble” on these thin-file borrowers, Credit Card Management reported.

Defaults could still be terminal for issuers, however. Not every potential borrower could be given a card, no matter how great the returns on fees. Accessing riskier customers—who were more likely to pay late and more likely to revolve—required the most cutting-edge risk management tools. The risk models allowed lenders to anticipate when a borrower would default. A better prediction model would enable the lender to get to that borrower before the other 4.5 creditors to whom the average defaulting credit cardholder owed money.234 Firms like Baltimore’s Neuristics, which grew out of Johns Hopkins University research in artificial intelligence, combined all the varieties of risk modeling into one product designed for subprime lending. Eight of the top 25 credit card firms used the product, Neuristics Edge, to send preapproved solicitations to potential borrowers with FICO scores under 650—the boundary between prime and subprime.235 The CEO of Neuristics, Richard Leavy, explained, “25 percent of the people in that population are ‘bad,’ but that means 75 percent are good.” Finding that 75 percent would “eliminate most of the fat and deliver the profitable customers.” But if a company lent to that 25 percent as well, it would no doubt take devastating losses.
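A hypothetical illustration of why the whole pool could not simply be carded—the per-account figures here are assumed for the sake of the arithmetic, not drawn from Neuristics or any issuer: if each of the 75 percent of “good” accounts netted, say, $75 a year while each of the 25 percent of “bad” accounts eventually charged off $900, the blended expectation per solicited account would be

\[
(0.75 \times 75) - (0.25 \times 900) \approx -169 \text{ dollars},
\]

a loss despite three-quarters of the customers being profitable. Only by screening out most of the bad quarter could the population be made to pay.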

What made borrowers a good credit risk—income, job stability—was homogeneous, while what defined a bad credit risk was heterogeneous.236 The subprime market was made up of two groups: emerging credit and recovering credit. Emergers had little credit history, whether because they were students or simply unbanked. For recent immigrants and racial minorities, who, because of discrimination or preference, did not shop at the stores that offered transitional forms of credit like store charge cards, lenders had little data for their models to rely on. Recoverers, in contrast, had an extensive, but bad, credit history. Some bad credit histories, though, resulted not from bad choices but from cataclysmic events outside the borrower’s control, like illness or job loss. These borrowers, now healthy and employed, could actually be good candidates for credit, despite their poor repayment histories. Still, while some borrowers had fallen on hard times, others were simply unable to manage their finances, moving in and out of bankruptcy multiple times. These borrowers would always be a credit risk. Differentiating that heterogeneous risk would be the source of all subprime profits.

Pattern recognition in these models grew more complex, aggregating data from a variety of sources. New transaction models examined what a customer purchased and, based on that data, predicted bankruptcy. Many cash advances late in the month, for instance, could indicate a borrower running out of money before payday. Meals charged at casinos could indicate gambling. Put together, such a borrower looked far riskier than one who merely charged meals or took an occasional cash advance.237 Such analysis probed deeply into the everyday buying patterns of borrowers, something creditors had never had full knowledge of before. While cash loans were fungible, credit on plastic was not. The loans were always for specific ends. While creditors gave credit card holders greater freedom in their borrowing, their knowledge of what borrowers actually bought expanded. The appearance of freedom was deceiving, however, since credit lines could be revoked at any point.

The models enabled less experienced lenders to be overconfident, taking the model for reality. Modelers expected smooth continuity, not abrupt discontinuity. While such models took account of individual account demographics and history, they did not take account of future economic forecasts or unexpected population differences. Lending in an expanding economy is always less risky. Perhaps the marginal borrower would be more susceptible to a downturn than the traditional borrower. If the data used for the model covered less than the length of a business cycle, the lender would not have a stress-tested portfolio.238 The senior project manager for Fair, Isaac’s Horizon system explained that the challenge of risk models was that “the borrowers who tend to go bankrupt look just like a lender’s most profitable customers.”239 And the markers for default among subprime borrowers might differ from those of prime market borrowers. As the vice president of a subprime credit card issuer remarked, “the customers are different (from the traditional market) and the customer’s behavior is really different.”240 If this was true, then risk models developed with the data of prime market borrowers would fail.

The models also used data born of a long period of falling interest rates. While interest rates had fluctuated since the early 1980s, the trend had been steadily downward. As one credit card banker remarked in 1995, “the fundamentals of the business are being sustained by the Fed, not by the underlying consumer proposition.”241 Because rates had fallen for so long, when then Chairman of the Federal Reserve Alan Greenspan briefly raised rates in 1995, credit card issuers were uncertain what would happen. Issuers feared a backlash from the public, who, by then, commonly switched cards for small interest differences.242 Though consumers had enjoyed the downward flexibility of floating rate cards, their response as rates went up was more uncertain. More important, for the issuers, was how other issuers would respond. A rising cost of funds could allow companies with deep pockets to subsidize their borrowers and watch the balances flow in as customers switched to their lower rates. Issuers like GE, Capital One, and NationsBank all offered teaser rate cards with fixed rates 3 percent below prime, to encourage just such movements.243 Interest rates, despite ups and downs, continued to fall until the early 2000s, when they could fall no further, effectively driving the real cost of borrowing to less than zero. The reality of floating rates was, however, much more volatile for the economy as a whole than just a shifting balance of market share among credit card companies.

Collective hazards resulted as well from the automated screening systems pitching cards to the same marginally creditworthy borrowers. The aggressive expansion campaigns netted issuers even riskier customers than they had anticipated.244 Because so many issuers competed simultaneously for the same clients, borrowers accepted multiple cards at the same time. Unbeknownst to other lenders, borrowers went from zero to many credit cards all at once. As Robert Hayer, a director at Smith Barney, explained, “You [the borrower] may be able to handle what I gave you from a credit perspective, but that doesn’t protect me from X-Y-Z bank down the street issuing more credit to you.”245 Issuers like Capital One sometimes nearly doubled the interest rates of customers who opened additional credit lines.246 Customers complained that they paid their bills on time and ought not be charged more, even though the risk model predicted a higher rate of default. Nigel Morris, president of Capital One, defended the practice, suggesting that “the credit card business [of the future] will look more like the insurance business, with pricing based on likely outcomes instead of one price for everybody.” Yet while death was a certainty for everyone, catalogued on tables with billions of data points, the economic futures of borrowers were far more idiosyncratic and less susceptible to the prognostications of actuaries.

More broadly, though, Capital One recognized the danger that expansive access to credit could pose for a household. Smaller regional banks took more time than large banks between mailing and approving credit, leading to serious lags in credit information, which left them overexposed to a decrease in creditworthiness. These issuers could not handle the risk of these new borrowers and began, in 1995 and 1996, to sell off their receivables as they had during the consolidations of the late 1980s.247 Larger issuers, with cheaper collection mechanisms, could then profitably buy the distressed portfolios of these smaller banks. For the larger issuers, who then securitized their expanding portfolios, the models seemed to work. Losses on securitized credit card debt had fallen, by 1996, to a six-year low.248 Lenders that tried to set aside capital, like Bank of New York, were punished as their stock prices fell. Investors, trusting the validity of the lending models, presumably believed that capital was better invested than conserved against possible loss.249 Subprime lending, relying on ever-more clever models and underpinned by the securitization of debt, drove pure play stock prices and American indebtedness ever higher.

The Marquette decision of 1978 had allowed credit card companies complete freedom with interest rates, but those rates, while not subject to regulated caps, were still subject to market competition, much to D’Amato’s chagrin. With so many issuers, and such liberal access to securitized capital, interest rates were very competitive. Set a rate too high, and a borrower would jump ship. Fees, however, were harder to compare between lenders. Unfortunately for credit card companies, fees, unlike interest rates, were still regulated by individual states—until 1996, that is. In 1996, a Supreme Court decision, Smiley v. Citibank, extended the Marquette decision to allow banks to charge late fees if the card was issued in a state that allowed such fees.250 Barbara Smiley, a California homemaker, had two credit cards through Citibank in South Dakota.251 In 1992, she brought a class action lawsuit against Citibank on behalf of California borrowers charged $15 late fees, which she saw as illegal under California law. The Court upheld the position of the Comptroller of the Currency that fees were simply another form of interest. While this might seem like a broad interpretation of the word “interest,” such reasoning had underpinned all the usury laws of the twentieth century, which treated fees as a surreptitious form of interest that drove up the real cost of borrowing. If fees were a form of interest, then the Marquette decision held that individual states could not regulate issuers in other states. While such flexible definitions made sense to lawyers, for consumers the difference between fee and interest was more than financial. Interest rates were public and perceptible, while penalty fees were concealed in the fine print of contracts, making it harder for consumers to compare credit offerings. In practice, Smiley undid thirty years of truth-in-lending legislation for consumer credit. Consumers, moreover, generally expected to pay on time and tended not to include fees in their comparisons. Credit card companies, hamstrung by competition on interest rates, began to focus on maximizing fee income.

The Smiley decision came just in time for the credit card industry. As the profitability of interest income fell through the mid-1990s, penalty fee income had risen, but was still subject to state laws. By the late 1990s, profits in the credit card industry, while still higher than commercial loans, continued to slide. By 1996, after-tax return on assets was at 1.2 percent, nearly half of what it was in 1993.252 The low barriers to entry and the frenzied competition had driven profits down. A senior analyst at Moody’s remarked that the “competition of the past five years has changed the nature of the industry.”253

What profits remained were sustained by the increased interest and fee income derived from subprime lending, lower charge-offs derived from home equity loans, and higher fee income from penalties and merchant fees.254 One-quarter of all customers paid their bills late, and issuers happily charged them an industry-average $26 fee.255 As customers used their cards for more mundane spending, income from merchants rose, enabled by the Smiley decision.256 While many credit card analysts were ambivalent about the subprime expansion, credit card issuers had little choice if they were to grow. While bond rating analysts might argue that the issuers who were “moving down the credit spectrum . . . are moving too far down,” the prime customer market was saturated and issuers took what was left.257 For subprime borrowers, issuers began to offer interest rates lower than a borrower’s actual riskiness justified, to act as loss-leaders for the more profitable fees.258 Lenders lost money on the interest rates but recouped those losses with fees. Effective interest rates, therefore, were much higher and much harder for borrowers to compare, even if borrowers believed they were paying only a few percentage points more than prime borrowers—which allowed even riskier lending.
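A hypothetical example of how fees inflated the true cost of subprime borrowing—the $26 fee is the industry average cited above, while the $500 revolving balance, 15 percent stated rate, and four late payments a year are assumed purely for illustration:

\[
\frac{(0.15 \times 500) + (4 \times 26)}{500} = \frac{75 + 104}{500} \approx 36\%.
\]

A borrower who believed she was paying only a few points more than a prime customer was, in effect, paying more than double the stated rate—and, unlike that rate, the fee portion was buried in the fine print and nearly impossible to compare across issuers.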

While D’Amato had been concerned about interest rates being too high at the beginning of the 1990s, after Smiley he should have been concerned that rates were too low! By the end of the 1990s, interest rates, especially for subprime lenders, were mere loss-leaders for the more lucrative fees. As consumers loaded up on credit, many debtors turned to debt consolidation offered by home equity loans to get them out of their situation.

Home Equity Loans Revisited

In 1991, a frustrated wife wrote a letter to Leonard Groupe, a financial advice columnist at the Chicago Sun-Times, a tabloid daily. After her husband had attempted to convince her to take out a home equity loan to consolidate their debt, she remained skeptical. Though the home equity interest would be deductible and the rate lower, the thought of a home equity loan still “scare[d]” her. Home equity, in the 1990s, had begun to be used more frequently to consolidate credit card debt. The four-year phase-out of deductible credit card interest, authorized by the Tax Reform Act of 1986, culminated in 1991—leaving only the interest on mortgages and home equity loans deductible. Debt consolidation did not become the leading use of home equity loans until 1991, when the tax deduction on other forms of debt was fully eliminated. Unlike second mortgages, home equity loans revolved, making them easier to add to, even though many had minimum borrowing amounts. Once the paperwork was finished, borrowers could borrow more or less against the house as they wished.

The columnist’s advice to this fearful wife echoed that of many consumer advocates. Though the interest was deductible, a home equity loan carried other dangers. An auto payment might be more expensive, but defaulting on an auto loan ended only with the auto being repossessed. If payments were missed on a home equity loan, borrowers could lose their homes to foreclosure. In the short run, such a foreclosure was unlikely, since the payments would be lower than what the husband and wife currently paid. More dangerous, the columnist wrote, in keeping with the standard advice, was that “people like your husband, who apparently has grown accustomed to accumulating installment debt, probably lack the discipline to genuinely profit from a second mortgage home-equity debt-consolidation loan.” The couple would probably run their credit card debt back up, and then be stuck with that debt on top of their increased mortgage. Consumer rationality and spending habits rarely go hand-in-hand.259 Incurring more mortgage debt was difficult, but charging was easy. If the debt could be easily shifted, hard-won equity could be easily depleted. Paying off the credit cards might have been a good idea, but doing so left home owners with tempting lines of unused credit on those cards. Credit counselors saw that many of their customers who paid off their credit cards with their home equity soon found that a “transmission” or “battery” went out and they did not have the cash.260

Despite the advice of this columnist, and nearly every other popular writer on the subject, from 1996 to 1998 four million households paid back $26 billion in credit card debt with their homes’ equity.261 Forty percent of home equity loans were for debt consolidation—nearly twice the percentage of the next most frequent use, home improvement.262 The advantages of home equity loans over credit card debt were clear. In addition to the tax deductibility of the interest, the rates on home equity loans were considerably lower than those on credit cards. In 1997, the average home equity loan charged 1.27 percent over prime, or 9.77 percent—much less than even the lowest credit card rates. Even subprime home equity lenders, like Household and The Money Store, offered home equity rates lower than the average credit card’s.

The transubstantiation of credit card debt into home equity debt, which resulted in few charge-offs, made card bonds appear to be better risks than they actually were. In 1998, consumers increased their credit card repayment rates, but did so largely by using the equity in their homes; borrowers increasingly relied on this equity rather than their income to repay their debts. Eventually, however, home equity runs out. While home equity made credit card borrowers appear to be better risks, bond ratings experts at Moody’s attributed the drop in charge-offs to tighter underwriting standards and accurate models.263 Sapping the equity savings of America made the models appear more reliable than they actually were. While credit card lending was intended to rely only on income, it actually relied on asset appreciation.

These home equity loans went well beyond traditional first or even second mortgages. Lenders began to provide extremely high loan-to-value mortgages. While FHA lenders of the 1930s had winced at the idea of lending more than 80 percent of a home’s value, some lenders now provided 100 percent, 125 percent, even 150 percent of that value. The “equity” being borrowed against served only as a pretense of collateral. By definition, home owners could not have more equity in a house than it was worth.264 The average home equity borrower, like the ideal credit card borrower, had income but not too much. The average home equity household earned $62,664, had children (2.6), a steady job (7.6 years), a steady address (7.6 years), and was relatively young (35 to 49 years old). In 1997, this average home equity borrower accessed his or her account 4.4 times over the course of the loan, for an average withdrawal of nearly $30,000.265 While credit card companies jockeyed to avoid being the unused “fifth card in a wallet,” home equity lenders had no such problem.266 With high up-front fees already paid, 75 percent of home equity borrowers tended to use the line. Home equity loans, which had no grace period, also offered lenders a way to avoid the punishing non-revolver problem of credit cards. As soon as the money was lent, interest began accruing. Still, if a bank sold home equity loans to its own customers, the loans would eliminate the most profitable segment of its portfolio—the revolvers. If the bank refused to consolidate the loans of the revolvers, however, another bank would step in. Market competition drove banks to consolidate their own credit card customers’ debts, even though, at the lower interest rate, the bank would actually lose money through the consolidation.267

Credit card issuers, especially commercial banks, noticed the resurgent interest in home equity loans. While home equity loans accessed through credit cards had flopped in the 1980s, banks again began to experiment with such novelties following the recession.268 Interest rates on such cards were much lower. Los Angeles’s Sanwa Bank, for instance, offered a Visa Gold card with a variable interest rate of 6.38 percent. Credit cards were also much easier to use by the 1990s; retailers now preferred cards to checks, which had been an obstacle during the 1980s. By the end of the 1990s, larger and larger banks attempted to combine home equity and credit cards. In April 2000, Washington Mutual (WaMu) offered its “On the House” card with a teaser rate of 5.99 percent, as well as a picture of a generic house.269 The credit limit was constrained only by the value of the borrower’s house. Consumer advocates, like Stephen Brobeck of the Consumer Federation of America, continued to question the ease of borrowing inherent in such a card, since consumers could so easily “spen[d] the hard-earned savings that [they had] accumulated in [their] home, savings that most people need for their retirement years.”270 But for consumers living through the late 1990s, house prices seemed to go in only one direction—up. In 2000, 44 percent of credit cardholders paid their bills in full, compared to 29 percent in 1991—mostly not out of their paychecks but out of their home equity.271 While home equity loans and credit cards could still appear different, in the capital markets the two became ever more indistinguishable.

Many home equity lenders, like The Money Store, operated much like pure play credit card companies, lending to home owners and then securitizing the loans for sale in the market. By 1996, home equity securities had overtaken auto loans to become the second largest volume of asset-backed securities, behind credit cards.272 The big issuers in the home equity securities market were subprime lenders, like Household Finance, which by itself was one-quarter of the market, followed by The Money Store, Oldstone, Alliance, and Advanta.273 As with credit cards, insurance and the tranche structure made these securities possible. The high ratings of the bonds came not from the inherent credit quality of the mortgages but from the insurance underpinning the issues. As the managing director of Moody’s structured finance group said in 1996, “most transactions in today’s market are rated AAA because they are insured by one of the major bond insurance companies.”274 Bond insurance companies made the securitization of these loans possible, which, in turn, made the expansion of home equity debt possible. Home equity loans and credit cards, available to consumers on plastic and funded by securitization, became ever more indistinguishable.

In turn, the boundaries between the institutions that made this profitable convergence possible—insurance companies, commercial banks, and investment banks—were blurred as well. Ratifying this fact in law, the Gramm-Leach-Bliley Act of 1999 removed the official boundaries between these types of firms, boundaries that had been put in place during the Great Depression to prevent excessive speculation but had broken down through numerous exceptions over the 1980s and 1990s. Commercial bank Citibank joined with the insurance giant Travelers Group to become Citigroup. The financial services industry could now formally consolidate itself—issuing, underwriting, and insuring all the debt Americans could possibly borrow.

Conclusion: To Float and To Fall

By the end of the 1990s, credit cards and mortgages alike had floating interest rates. Both credit cards and houses were financed by capital found in securities markets. Flexible, adjustable, and managed with the most cutting edge of risk management tools, credit card debt and mortgage debt were predicated on models purporting to accurately reflect reality. And yet those models depended on data collected for only a few years—considerably less than the shortest business cycle, and much less than the long-term oscillations in the economy. Indeed, even the terms “cycle” and “oscillation” imply a certainty to the motions of the economy more appropriate to the motions of the heavens. At least astronomers’ models have billions of years of data.

The convergence of different kinds of credit and the fluid transfer of capital from markets to consumers presented an unprecedented danger for the economy. While there had always been doomsayers surrounding credit and the economy, the debt economy of the 1990s was something new. Pushing capital from across the world into the sweet spot of revolving credit allowed no wiggle room, and few lenders appreciated how aberrant the data were on which they based their models—just the past few years. These relatively new loan types, as well as laxer lending practices, put subprime and prime borrowers alike at increased risk, in addition to lenders. In an era of variable rates, a sudden rise in interest rates would have a much more dire effect in 1995 than it would have had in 1975. American households could now finance all their major borrowings with a floating rate—cars, houses, home equity loans, credit cards—and a rise in interest rates would hit all of their debt obligations at once. While floating rate loans made sense to individual lenders concerned about the cost of their funds, as a collective policy floating rates could lead to calamity. All debts would rise at once. Rather than reducing interest rate risk, variable rates increased it—for both borrowers and lenders.

The instruments of debt changed after 1970, but the more essential difference for borrowers was their place in the productive economy. While critics of the credit card industry have pointed to the Marquette decision as harmful for consumers, the decision allowed a more competitive industry to develop that ultimately lowered interest rates for consumers. The balance of power in capitalism was determined not by interest rate caps for consumers, but by whether they were able to pay back what they borrowed. Thirty years of wage stagnation made paying back those debts impossible through anything but accidental asset inflation—homes and home equity. In the 1990s, the full flower of deindustrialization pummeled not only blue-collar but white-collar America as well.

While a generation of postwar consumers could safely borrow against rising incomes, the promise on which American prosperity had been built now cracked. The evanescent promise of a good-paying job—which a generation earlier would have been seen as a sure path to upward mobility—led in the 1990s only to increased debt and, once that job was downsized, certain bankruptcy.275 Even those with college educations found themselves downsized in the 1990s as information technology increased the efficiency of office work, and their wages converged with those of high school graduates. The only educational group that received a substantial increase in income during the 1990s was those with graduate degrees. These best-educated workers, who could multiply their efforts through the new information technology of models and data, produced tremendous profits for their firms and were amply rewarded.276 If their models pushed others to the brink of bankruptcy, for those who created them the models produced untold fortunes.

Credit issuers may have found the revolvers, and even pressured them to borrow, but the labor markets kept them in debt. While issuers struggled to push and prod borrowers into not paying their bills in full, capitalism did their job for them—and did so effortlessly. Consumers borrowed more because their lives were more volatile and because more credit was made available to them. More credit was made available because credit card debt, on average, was a more profitable investment for banks than other investments. The same banal and brutal process of allocating capital that had made postwar America prosperous had come to undermine its long-term viability.
