7

COLLEGES AND UNIVERSITIES

Let’s take as our first case study the realm of higher education, the ground zero of my own investigations of metric fixation. A huge sector of the national economy and a central institution of all advanced societies, colleges and universities exemplify many of the characteristic flaws and unintended consequences of measured performance, as well as some of its advantages.

RAISING THE METRIC: EVERYONE SHOULD GO TO COLLEGE

Once we become fixated on measurement, we easily slip into believing that more is better.

More and more Americans are going on to post–high school education, encouraged to do so by both governments and nonprofit organizations. According to the U.S. Department of Education, for example, “In today’s world, college is not a luxury that only some Americans can afford to enjoy; it is an economic, civic, and personal necessity for all Americans.”1

One of many nonprofit organizations that convey the same message is the Lumina Foundation. Its mission is to expand post-secondary educational attainment, with a goal of having 60 percent of Americans hold a college degree, certificate, or other “high-quality postsecondary credential” by the year 2025. Its “Stronger Nation” initiative, as the foundation declares on its website,

is all about the evidence of that learning—quantifying it, tracking it, pinpointing the places where it is and isn’t happening…. Lumina is also working with state policy leaders across the nation to set attainment goals and develop and implement strong state plans to reach them. So far, 26 states have set rigorous and challenging attainment goals—15 in the last year alone. Most of these states are taking concrete steps—such as implementing outcomes-based funding, improving developmental education, and making higher education more affordable—to increase attainment and reach their goals.2

The Lumina Foundation is steeped in metrics and proselytizes on its behalf: its website proclaims, “As an organization focused on results, Lumina Foundation uses a set of national metrics to guide our work, measure our impact and monitor the nation’s progress toward Goal 2025.”

The Lumina Foundation’s mission comports with a widely shared conviction about the role of higher education in American society: the belief that ever more people should go on to college, and that doing so not only increases their own lifetime earnings but also fuels national economic growth.

RAISING THE NUMBER OF WINNERS LOWERS THE VALUE OF WINNING

That article of faith, and the performance targets to which it gives rise, may simply be mistaken. As Alison Wolf, an educational economist at the University of London, has pointed out, it is true that those who have a B.A. tend to earn more on average than those without one. Thus, on the individual level, the quest for a B.A. degree may make economic sense. But on the national level, the idea that more university graduates means higher productivity is a fallacy.3

One reason for that is that to a large extent education is a positional good—at least when it comes to the job market. For potential employers, degrees act as signals: they serve as a shorthand that allows employers to rank initial applicants for a job. Having completed high school signals a certain, modest level of intellectual competence as well as personality traits such as persistence. Finishing college is a signal of a somewhat higher level of each of these. In a society where a small minority successfully completes college, having a B.A. signals a certain measure of superiority. But the higher the percentage of people with a B.A., the lower its value as a sorting device. What happens instead is that jobs that once required only a high school diploma now require a B.A. That is not because the jobs have become more cognitively demanding or require a higher level of skill, but because employers can afford to choose from among the many applicants who hold a B.A., while excluding the rest. The result is both to depress the wages of those who lack a college degree, and to place many college graduates in jobs that don’t actually make use of the substance of their college education.4 That leads to a positional arms race: as word spreads that a college diploma is the entry ticket to even modest jobs, more and more people seek degrees.

Thus, there are private incentives for increasing numbers of people to try to obtain a college degree. Meanwhile, governments and private organizations set performance measures aimed at raising college attendance and graduation.

HIGHER METRICS THROUGH LOWER STANDARDS

But the fact that more Americans are entering college does not mean that they are prepared to do so, or that all Americans are capable of actually earning a meaningful college degree.

In fact, there is no indication that more students are leaving high school prepared for college-level work.5 One measure of college preparedness is student performance on achievement tests, such as the SAT and the ACT, which are used to predict likely success in college (they are, in part, aptitude tests). For the most part, these tests are taken only by high school students who have some hope of going on to higher education, though in an effort to boost student achievement, some states have taken to mandating that ever more students take such tests. (This is probably a case of mistaken causation: students who took the tests tended to have higher levels of achievement, so it was reasoned that getting more students to take the tests would raise achievement. But better-performing students were more likely to take the tests in the first place; policymakers mistook cause for effect.) The ACT tests four subject areas: English, math, reading, and science. The company that produces the ACT has developed benchmark scores that indicate that the test taker has a “strong readiness for college course work.” Of those who took the ACT most recently, a third did not meet the benchmark in any of the four categories, and only 38 percent met the benchmarks in at least three of the four areas. In short, most of those who aspire to go on to college do not have the demonstrated ability to do so.6

The results are predictable—though few want to acknowledge them. Since more students enter community colleges and four-year colleges inadequately prepared, a large portion require remedial courses. These are courses (now euphemistically rechristened “developmental” courses) that cover what the student ought to have learned in high school. A third of students who enter community colleges are placed in developmental reading classes, and more than 59 percent are placed in developmental mathematics courses.7 Students who are inadequately prepared for college also make additional demands on the institutions they attend, thus raising the costs of college education: the growth on campuses of centers of “educational excellence” is a euphemistic response to the need for more extracurricular help in writing and other skills for students inadequately prepared for university-level work.

Colleges, both public and private, are measured and rewarded based in part on their graduation rates, which are one of the criteria by which colleges are ranked, and in some cases, remunerated. (Recall the Lumina Foundation’s encouragement of state governments to engage in “outcomes-based funding.”) What then happens is that outcomes follow funding. By allowing more students to pass, a college transparently demonstrates its accountability through its excellent metric of performance. What is not so transparent is the lowered standards demanded for graduation.8 More courses are offered with requirements that are easily fulfilled. There is pressure on professors—sometimes overt, sometimes tacit9—to be generous in awarding grades. An ever-larger portion of the teaching faculty comprises adjunct instructors—and an adjunct who fails a substantial portion of her class (even if their performance merits it) is less likely to have her contract renewed.

Thus, more students are entering colleges and universities. A consequence of students entering college without the ability to do college-level work is the ever larger number who enroll but do not complete their degrees—a widespread and growing phenomenon that imposes substantial costs on the students who drop out, in tuition, living expenses, and earnings foregone.10 High dropout rates seem to indicate that too many students are attempting college, not too few.11 And those who do obtain degrees find that a generic B.A. is of diminishing economic value, because it signals less and less to potential employers about real ability and achievement.12 Recognizing this, prospective college students and their parents seek admission not just to any college, but to a highly ranked one.13 And that, in turn, has led to the arms race of college rankings, a topic to which we will return.

Lowering the standards for obtaining a B.A. means that using the percentage of those who attain a college degree as an indicator of “human capital” becomes a deceptive unit of measurement for public policy analysis. Economists can evaluate only what they can measure, and what they can measure needs to be standardized. Thus economists who work on “human capital” and its contribution to economic growth (and who almost always conclude that what the economy needs is more college graduates) often use college graduation rates as their measure of “human capital” attainment, ignoring the fact that not all B.A.’s are the same, and that some may not reflect much ability or achievement. This lends a certain air of unreality to the explorations of what one might call the unworldly economists, who combine hard measures of statistical validity with weak interest in the validity of the units of measurement.

One assumption that lies behind the effort to boost levels of college enrollment and completion is that increases in average educational attainment somehow translate into higher levels of national economic growth. But some distinguished economists on both sides of the Atlantic—Alison Wolf in England, and Daron Acemoglu and David Autor in the United States—have concluded that that is no longer the case, if it ever was. In an age in which technology is replacing many tasks previously performed by those with low to moderate levels of human capital, national economic growth based on innovation and technological progress depends not so much on the average level of educational attainment as on the attainment of those at the top of the distribution of knowledge, ability, and skill.14 In recent decades, the percentage of the population with a college degree has gone up, while the rate of economic growth has declined. And though the gap between the earnings of those with and those without a college diploma remains substantial, the falling rate of earnings for college graduates seems to indicate that the economy already has an oversupply of graduates.15 By contrast, there is a shortage of workers in the skilled trades, such as plumbers, carpenters, and electricians—occupations in which training occurs through apprenticeship rather than through college education—who often earn more than those with four-year degrees.16

To be sure, public policy ought to aim at more than economic growth, and there is more to college education than its effect on earning capacity, as we will explore in a moment. But for now, it is worth underscoring that the metric goal of ever more college graduates is dubious even by the economistic criteria by which higher education is often measured.

PRESSURE TO MEASURE COLLEGE PERFORMANCE

In the decades since Elie Kedourie penned his critique of the centralizing policy of Margaret Thatcher’s Conservative government, central government control over British institutions of higher education has expanded and intensified. Much of that control takes the form of management through performance metrics. For scholarship in many fields, the results have been deleterious.

In England, as elsewhere, an ever larger proportion of the population is attending university, in keeping with the government’s aims. In 1970 less than 10 percent of men and women in each age cohort attended university. By 1997, it was close to a third, and by 2012, 38 percent of nineteen-year-olds were enrolled in some form of tertiary education.17 Paying for all these students is an ever more onerous task, and in recent years the costs have been increasingly shifted to the students themselves (or their families) in the form of tuition fees. But government expenditure remains substantial, and in an effort to control expenses and achieve “value,” government support increasingly takes the form of payment for purported results. Performance is evaluated through metrics that focus upon the measured output of each department and institution.

In an attempt to obtain “value,” successive British administrations have created a series of government agencies charged with evaluating the country’s universities, with titles such as the “Quality Assurance Agency.”18 There are audits of teaching quality, such as the “Teaching Quality Assessment,” evaluated largely on the extent to which various procedures are followed and paperwork filed, few of which have much to do with actual teaching.19 But one clear result has been that professors are forced to devote more and more of their time to paperwork rather than to research or teaching. And there has been a ballooning of the number of professional staff, including the newly created post of “quality assurance officer,” dedicated to gathering and analyzing the data for what was once known as the Research Assessment Exercise, since rechristened as the Research Excellence Framework.20 The cost of these exercises in metrics in England alone was estimated at £250,000,000 in 2002.21 A mushroom-like growth of administrative staff has occurred in other countries that have adopted similar systems of performance measurement, such as Australia. In most such systems, metrics has diverted time and resources away from doing and toward documenting, and from those who teach and research to those who gather and disseminate the data for the Research Assessment Exercise and its counterparts.22 The search for more data means more data managers, more bureaucracy, more expensive software systems. Ironically, in the name of controlling costs, expenditures wax.

The closest parallels in the United States are the accrediting organizations that grant legitimacy to American colleges and universities. They are regional in scope, but since receiving federal funds requires accreditation by such agencies, they also serve as instruments of the federal government.23 While they do not control funding in the manner of their British counterparts, they play a major role nevertheless. And in recent decades, that role has been to pressure the colleges and universities they accredit to adopt ever more elaborate measures of performance, under the rubric of “assessment.”24

Reward for measured performance in higher education is touted by its boosters as making universities “more like a business.” But businesses have a built-in restraint on devoting too much time and money to measurement—at some point, it cuts into profits. Ironically, since universities and other nonprofit institutions have no such bottom line, government or accrediting agencies or the university’s administrative leadership can extend metrics endlessly.25 The effect is to increase costs or to divert spending from the doers to the administrators—which usually suits the latter just fine. It is hard to find a university where the ratio of administrators to professors and of administrators to students has not risen astronomically in recent decades.26 And the same holds true on the national level.

THE RANKING ARMS RACE

Another increasingly influential set of performance metrics in the field of higher education is university rankings. They take a variety of forms. On the international level, there is the Shanghai Jiao Tong “Academic Ranking of World Universities” (which was developed to provide the Chinese government with a “global benchmark” against which Chinese universities could assess their progress in an attempt to catch up on “hard scientific research” and hence gives a 90 percent weighting to publications and awards in the natural sciences and mathematics)27 and the Times Higher Education Supplement “World University Rankings,” which tries to include teaching, research (including volume of publications and citations), and “international outlook.” Within the United States, the most influential ratings are those of US News and World Report (USNWR), with competition from Forbes, Newsweek, Princeton Review, Kiplinger (which tries to balance quality with affordability), and a host of others. These rankings (or “league tables” as they are known in Britain) are an important source of prestige: alumni and members of the board of trustees are anxious to have their institutions rank highly, as are potential donors and, of course, potential students. Maintaining or improving the institution’s rankings tends to become a priority for university presidents and their top administrators.28 Indeed, some American university presidents are awarded contracts that specify a bonus if they are able to raise the school’s rank. So are other top administrators: since one factor that affects rankings is the achievement scores of incoming students, the dean of admissions of at least one law school was remunerated based in part on the scores of the admitted students.29

Recently I was puzzled to find that a mid-ranked American university was taking out full-page advertisements in every issue of The Chronicle of Higher Education, touting the important issues on which its faculty members were working. Since the Chronicle is read mostly by academics—and especially academic administrators—I scratched my head at the tremendous expenditures of this not particularly rich university on a seemingly superfluous ad campaign. Then it struck me: the USNWR ratings are based in good part on surveys of college presidents, asking them to rank the prestige of other universities. The criterion is of dubious validity, since most presidents are simply unaware of developments at most other institutions. The ad campaign was aimed at raising awareness of the university, in an attempt to boost the reputational factor of the USNWR rankings.

Universities also spend heavily on glossy brochures touting their institutional and faculty achievements. These are mailed to administrators at other universities, who vote on the USNWR surveys. Though universities (and schools within them, such as law schools) spend untold millions on these marketing publications, there is no evidence that they actually work. Most, in fact, are tossed, unopened, into the recycling bin by their recipients.30

In addition to expenditures that do nothing to raise the quality of teaching or research, the growing salience of rankings has led to ever new varieties of gaming, from creaming to improving numbers through omission or distortion of data. A recent scholarly investigation of American law schools provides some examples. Law schools are ranked by USNWR based in part on the LSAT scores and GPAs of their admitted, full-time students. To improve the statistics, students with lower scores are accepted on a “part-time” or “probationary” basis, so that their scores are not included. Since the scores of transfer students are not counted, many law school admissions offices solicit students from slightly lower ranked schools to transfer in after their first year. Low student-to-faculty ratios also contribute to a school’s score. But since those ratios are measured during the fall term, law schools encourage faculty to take leaves only during the spring term.31 These techniques for gaming the rankings system are by no means confined to law schools: much the same goes on at many colleges and universities.32

Is it all worthwhile? Some recent research shows that small differences in college rankings have much less effect on enrollment than college administrations believe, and that the resources expended to raise rankings are not commensurate with their actual impact.33 If so, that message has yet to filter down to many university officials.

MEASURING ACADEMIC PRODUCTIVITY

In the attempt to replace judgments of quality with standardized measurement, some rankings organizations, government institutions, and university administrators have adopted as a standard the number of scholarly publications produced by a college or university’s faculty, determining that number from commercial databases that aggregate such information.34 Here is a case where standardizing information can degrade its quality.

The first problem is that these databases are frequently unreliable: having been designed to measure output in the natural sciences, they often provide distorted information in the humanities and social sciences. In the natural sciences, and some of the behavioral sciences, new research is disseminated primarily in the form of articles published in peer-reviewed journals. But that is not the case in fields such as history, in which books remain the preeminent form of publication, and so a measurement of the number of published articles presents a distorted picture. And this is only the beginning of the problem.

When individual faculty members, or whole departments, are judged by the number of publications, whether in the form of articles or books, the incentive is to produce more publications, rather than better ones. Really important books may take many years to research and write. But if the incentive system rewards speed and volume of output, the result is likely to be a decline in truly significant works. That is precisely what seems to have occurred in Great Britain as a result of its Research Assessment Exercise: a great stream of publications that are both uninteresting and unread.35 Nor is the problem confined to the humanities. In the sciences as well, evaluation solely by measured performance leads to a bias toward short-term publication rather than long-term research capacity.36

In academia as elsewhere, that which gets measured gets gamed. Take the practice of “impact factor measurement.” Once it was recognized that not all published articles were of equal significance, techniques were developed to try to measure each article’s impact. This took two forms: counting the number of times the article was cited, either on Google Scholar or on commercial databases; and considering the “impact factor” of the journal in which it was published, a factor determined in turn by the frequency with which articles in the journal were cited in the databases. (Of course, this method cannot distinguish between the following citations: “Jerry Z. Muller’s illuminating and wide-ranging book on the tyranny of metrics effectively slaughters the sacred cows of so many organizations” and “Jerry Z. Muller’s poorly conceived screed deserves to be ignored by all managers and social scientists.” From the point of view of tabulated impact, the two statements are equivalent.) The journals were grouped by disciplines, and for most purposes, only citations in the journals within the author’s discipline were counted. That too was problematic, since it tended to shortchange works of trans-disciplinary interest. (Such as this one.)

Moreover, in another instance of Campbell’s Law (explained in chapter 1), in an attempt to raise their citation scores, some scholars formed informal citation circles, the members of which made a point of citing one another’s work as much as possible. Some lower-ranked journals actually requested that authors of accepted articles include additional citations to articles in the journal, in an attempt to improve its “impact factor.”37

What, you might ask, is the alternative to tallying up the number of publications, the times they were cited, and the reach of the journals in which articles are published? The answer is professional judgment. In an academic department, evaluation of faculty productivity can be done by the chair or by a small committee, who, consulting with other faculty members when necessary, draw upon their knowledge, based on accumulated experience, of what constitutes significance in a book or article. In the case of major decisions, such as tenure and promotion in rank, scholars in the candidate’s area of expertise are called upon to provide confidential evaluations, a more elaborate form of peer review. The numbers gathered from citation databases may be of some use in that process, but numbers too require judgment grounded in experience to evaluate their worth. That judgment grounded in professional experience is precisely what is eliminated by too great a reliance on standardized performance indicators.38 As one expert in the use and misuse of scientific rankings puts it, “[A]ll too often, ranking systems are used as a cheap and ineffective method of assessing the productivity of individual scientists. Not only does this practice lead to inaccurate assessment, it lures scientists into pursuing high rankings first and good science second. There is a better way to evaluate the importance of a paper or the research output of an individual scholar: read it.”39

THE VALUE AND LIMITS OF RANKINGS

Public rankings of the sort offered by USNWR do have some real advantages. For the uninformed, they provide at least some preliminary indication of the relative standing of various institutions. And they have prompted colleges and universities to release information of possible utility to potential students, such as the college’s retention and graduation rates. What they generally fail to do is provide information that might explain why rates of retention and graduation are particularly high or low. A college that admits students who are well prepared will tend to have high rates of retention and graduation. But for institutions that aim to educate students who are less well prepared to begin with, “transparent” metrics make them seem to be failures, whereas they may be relatively successful given the students they have admitted. Their students are more likely to need remedial courses, are less likely to acquire a degree, and are also likely to do less well in the job market. As in the case of hospitals in impoverished areas that are penalized for their relatively high rate of readmissions (which we will examine in chapter 9), colleges that serve low-income students are likely to be penalized for dealing with the particular populations whom it is their mission to serve. Rankings create incentives for universities to become more like what the rankings measure. What gets measured is what gets attention. That leads to homogenization as they abandon their distinctive missions and become more like their competitors.40

GRADING COLLEGES: THE SCORECARD

Among the strongholds of metrics in the United States has been the Department of Education, under a succession of presidents, Republican and Democratic. During President Obama’s second term, his Department of Education set out to develop an elaborate “Postsecondary Institution Ratings System.” It was intended to grade all colleges and universities, to disaggregate its data by “gender, race-ethnicity and other variables,” and eventually to tie federal funds to the ratings, which were to focus on access, affordability, and outcomes, including expected earnings upon graduation. “The public should know how students fare at institutions receiving federal student aid, and this performance should be considered when we assess our investments and priorities,” said Department of Education Under-Secretary Ted Mitchell. “We also need to create incentives for schools to accelerate progress toward the most important goals, like graduating low-income students and holding down costs.”41 The administration’s plans for a comprehensive rating system ran into opposition from colleges and from Congress. In the end, the Department of Education settled on a stripped-down version, the “College Scorecard,” which was made public in September 2015.

It was the product of good intentions, meant to address real problems in the provision of higher education. One such problem was the extremely spotty record of for-profit institutions, which had been expanding by leaps and bounds, offering career-oriented education in fields like culinary arts, automotive repair, or health aides. Some of these companies (such as Corinthian and ITT, both of which were ultimately closed down by the government) were predatory by any standard, preying upon the least informed potential students and promising that the degrees they could obtain would lead to lucrative jobs. In fact, the quality of education was often deficient, and graduates had little success in the job market. Moreover, some 90 percent of tuition flowed from the Department of Education into the coffers of the for-profit corporations, in the form of loans that were to be paid off by the student borrowers. But in reaction to a genuine problem at the low end of the for-profit sector, the department issued far-reaching demands with consequences for all colleges and universities.

What the advocates of greater government accountability metrics overlook is that the very real problem of the increasing costs of college and university education is due in part to the expanding cadres of administrators, many of whom are required in order to comply with government mandates. One predictable effect of the new plan would have been to raise the costs of administration, both by diverting ever more faculty time from teaching and research into filling out forms to accumulate data, and by increasing the number of administrators to gather the forms, analyze the data, and hence supply the raw material for the government’s metrics.

Some of the suggested objectives of the original plan (the Postsecondary Institution Ratings System) were mutually exclusive, while others were simply absurd. The goal of increasing college graduation rates, for example, was at odds with increasing access, since less advantaged students tend to be not only financially poorer but also worse prepared. The better prepared the students, the more likely they are to graduate on time. Thus community colleges and other institutions that provide greater access to the less prepared would have been penalized for their low graduation rates. They could, of course, have attempted to game the numbers in two ways. They could raise the standards for incoming students, increasing their likelihood of graduating—but at the price of access. Or they could respond by lowering the standards for graduation—at the price of educational quality and the market value of a degree. It might be possible to admit more economically, cognitively, and academically ill-prepared students and to ensure that more of them graduate; but only at great expense, which was at odds with another goal of the Department of Education, namely holding down educational costs.

Another metric that the colleges and universities were to supply was the average earnings of their students after graduation. That makes sense for occupationally focused, for-profit institutions, which, as we’ve seen, are particularly prone to overpromising and graduating students with degrees of dubious quality. But for most colleges and universities, not only is this information expensive to gather and highly unreliable—it is downright distortive. For many of the best students will go on to one or another form of professional education, ensuring that their earnings will be low for at least the time they remain in school. Thus a graduate who proceeds immediately to become a greeter at Walmart would show a higher score than her fellow student who goes on to medical school. But there would be numbers to show, and hence “accountability.”

Then there is the broader problem of the growing costs of college education, costs that have continued to rise well beyond the rate of inflation. The issue of affordability has been exacerbated by the tendency of many states to cut back their financial support for state colleges. Perhaps the least transparent element of college affordability is the actual cost of attending a particular institution, because of the gap between the sticker price and the net price. The sticker price is the official cost of tuition, room, and board; the net price is the actual amount paid by students and their parents, after accounting for financial aid based on economic need or on academic merit. The difference is often substantial, and for many people counterintuitive: because the most prestigious institutions tend to be the best endowed, they can afford to subsidize much of the undergraduate education of the students they admit. Thus a student poor in economic resources but rich in promise may find the actual costs of attending an elite college lower than those at a less prestigious, and nominally cheaper, college. To the extent that rankings convey such information, as the College Scorecard tries to do, they provide a real service.

In keeping with Obama’s announced goal of helping students and their parents to “get the most bang for your educational buck,” the Scorecard highlighted three metrics: the rate of graduation, average annual cost, and “salary after attending” measured at ten years after entering college, rather than immediately after graduation.42 The figures were problematic, in that they included only data from students who had received federal aid, which meant that the results applied only to those from lower economic backgrounds. Since those of wealthier parentage are more likely to attain greater earnings,43 the salary figures are skewed, albeit in different directions for various colleges, depending on the mix of backgrounds of the student body. More worrisome yet is the fact that the Scorecard “makes no effort to isolate the school’s contribution to earnings from what one could reasonably expect based on family incomes and test scores of its students or the level of degrees it offers.”44 Yet college outputs tend to be highly correlated with inputs: students who enter with higher levels of academic ability (and who are more often the offspring of parents with high levels of educational achievement or income) tend to be more successful on standardized assessments of college outcomes.45 The Brookings Institution has tried to overcome this hurdle by using additional information to calculate the “value added,” by which it means the increase in income provided by each college, in light of the available data on the backgrounds of the students entering each institution. The hope is that such metrics “will benefit the many people interested in knowing how well specific colleges are preparing students for remunerative careers.”

THE MESSAGE OF THE METRICS: COLLEGE IS TO MAKE MONEY

Let us leave aside the accuracy and reliability of these metrics to explore a more important issue: the message conveyed by the metrics themselves. The College Scorecard treats college education in purely economic terms: its sole concern is return on investment, understood as the relationship between the monetary costs of college and the increase in earnings that a degree will ultimately provide. Those are, of course, legitimate considerations: college costs eat up an increasing percentage of familial income or entail the student taking on debt; and making a living is among the most important tasks in life.

But it is not the only task in life, and it is an impoverished conception of college education that regards it purely in terms of its ability to enhance earnings.46 Yet that is the ideal of education that the College Scorecard embodies and encourages, as do similar metrics. If we distinguish training, which is oriented to production and survival, from education, which is oriented to making survival meaningful, then the College Scorecard is only about the former.47 And indeed, the Scorecard and Brookings systems tend to rank most highly institutions that are focused on engineering and technology—the stuff of production. The sort of life-long satisfaction that comes from an art history course that allows you thereafter to understand a work of art in its historical context; or a music course that trains you to listen for the theme and variations of a symphony or the jazz interpretation of a standard tune; or a literature course that heightens your appreciation of poetry; or an economics course that leaves you with an understanding of key economic institutions; or a biology course that opens your eyes to the wonders of the structures of the human body—none of these is captured by the metrics of return-on-investment. Nor is the fact that college is often a place where life-long friendships are made, often including that most important of friendships, marriage. All of these should be factored in when considering “return on investment,” but because they are not measurable in quantifiable terms, they are not included.

The hazard of metrics so purely focused on monetary return on investment is that, like so many metrics, they influence behavior. Already, universities at the very top of the rankings send a huge portion of their graduates into investment banking, consulting, and high-end law firms—all highly lucrative pursuits.48 These are honorable professions, but is it really in the best interests of the nation to encourage the best and the brightest to choose these careers? One predictable effect of the weight attributed to future income in college rankings will be to incentivize institutions to channel their students into the highest-paying fields. Those whose graduates go on to careers in less remunerative fields, such as teaching or public service, will be penalized.49

A capitalist society depends for its flourishing on a variety of institutions that provide a counterweight to the market, with its focus on monetary gain. To prepare pupils and university students for their roles as citizens, as friends, as spouses, and above all to equip them for a life of intellectual richness—those are among the proper roles of college. Conveying marketable skills is a proper role as well. But to subordinate higher education entirely to the capacity for future earnings is to measure with a very crooked yardstick.
