CHAPTER 3

The Overlooked Economics of Technology

A box without hinges, key, or lid, yet golden treasure inside is hid!

—J.R.R. Tolkien

There have been a number of previous instances where talk of a “new economy” emerged, only for the idea to be shot down wholesale when the subsequent market crash arrived. This condemns valid observations to being buried in the frenzy of retroactive rejection, hedging, and caveats. More specifically, the missing ingredient in most prior debates and analyses of technological economics is a sufficient examination of the technology-driven convergence of previously unrelated forces.

We established earlier that while people have grown accustomed to seeing all forms of consumer technology continuously decline in price, very few take the next step and observe the ever-widening array of products merging into this river of technological deflation. Fewer still contemplate the effect this has on the broader macroeconomy, and why it was too insignificant to matter until recently, but no longer is. It is surprising how little thought is given to this even by established economists and governments, despite how it affects nearly everything of economic and social consequence. Why might this be?

The Intertwining of Disparate Phenomena

To approach the nexus that this whitepaper seeks to address, we must first map the roads leading to it. There are three unrelated groups of experts who do not yet see that their fields are beginning to overlap significantly for the first time.

The first are the futurists and technology forecasters watching technological progress and predicting technological disruption (Ray Kurzweil being the most illustrious among them). They have done yeoman work in evangelizing why the rate of technological change is exponential and accelerating, and tracking examples that demonstrate this.

The second group consists of monetary policy experts who observe every word uttered by the central banks of the world. They try to assess the impact of various monetary expansion programs, and whether the style administered by one central bank is as effective as that of another.

The third group consists of macroeconomists and fiscal policy experts who keep track of government spending, taxation, debt levels, the Laffer curve, bond yields, and so on, across each major nation-state. The budgetary process of their government is central to their professional work and annual calendar.

But here we are in 2016, with each of these three groups growing increasingly baffled as to why their models and assumptions can no longer explain the peculiar disconnects that are appearing across financial markets, central bank liquidity actions, and economic indicators that steer government fiscal policy. The latter two groups are part of the establishment and prevailing zeitgeist, while the first group is small, seen as eccentric, mostly tied to the field of computer science, and has insufficient marketing expertise to generate mainstream awareness of their work. To my knowledge, no Western politician or central banker has ever uttered a single sentence about the accelerating rate of technological change and how policy has to mirror it in both agility and scope.

I have seen brilliant and acclaimed thinkers in each discipline work out a fraction of the composite body of knowledge presented here, but never the entire holistic view, much like the old story of the blind men and the elephant. Part of this is due to not knowing where to continue the investigation. Why should a budgetary analyst read about accelerating technological change and Moore’s Law? Why should an AI expert dive deeply into central bank balance sheets?

When the spaces between previously unrelated fields begin to ignite the sparks of new knowledge, it is usually through an outside agent. I am not part of the formal establishment in any of these three groups, and perhaps that is exactly what enables a vision of what is to come.

Accelerating Technological Deflation, and the Federal Reserve

The primary discovery on which every recommendation in this whitepaper rests is that if rapidly deflating technological products are now 2% of GDP, there must be some deflation affecting the broader economy. To detect it, we turn to the customary actions that governments take when they find inflation to be too low. If the government has been taking actions to fight deflation, and this deflation appears to be exponential, perhaps it originates in the spread of technology through the economy.

In the United States, the Federal Reserve controls the Fed Funds (FF) rate, which it raises when it expects inflation to be higher in the future, and lowers when the economy is weakening and/or inflation is trending too low. Until the end of the 20th century, this process was relatively straightforward, with the FF rate very rarely going below 3% or so. Inflation was discussed as though it could fall into only two categories: “high” and “very high.” It was further assumed that whenever employment reached the threshold of “full employment,” inflation was certain to follow. Many practices that require inflation to succeed, such as taking on mortgage debt, were assumed to be indisputable wisdom, with no recognition of that dependency.

However, after the technology boom and bust at the turn of the century, inflation was conspicuously missing. The Federal Reserve had the freedom to lower the FF rate all the way down to 1% in 2003–04, and while observers expected this would finally cause inflation, it still did not. Relatively few economists were particularly curious about why that might be, since the rate was still above zero, and the possibility of rates at zero did not seem realistic. Japan had lowered its own rate to zero and still struggled with deflation. But since Japan has lower birth and immigration rates than the United States, this explanation was deemed sufficient, and Japan was not seen as an indicator of a broader phenomenon that could also emerge in the United States.

As the economy strengthened, the Federal Reserve, expecting inflation, steadily increased the 1% FF rate all the way up to 5.25% by 2006, only to find that this was too high and that the housing market, and with it the entire economy, was weakening precipitously. The Fed reacted with a rapid reversal of rates, all the way down not just to the 1% of 2003–04, but to nearly 0%. However, to the surprise of observers, even 0% was not enough to create inflation, so the Fed began a form of monetary expansion known as “Quantitative Easing” (QE). QE was designed to simulate the conditions of negative rates without deposits actually being docked an interest charge by banks. Some liken QE to “money-printing,” but that is not quite accurate, as the impact of each dollar can vary based on the method of QE.

Effectively, the Federal Reserve embarked on a campaign to expand the money supply via a process of asset purchases. It would buy bonds and hold them on its balance sheet, with the implied understanding that the bonds would be “sold” into the open market at some future time. By purchasing bonds, the Federal Reserve lowers interest rates even on longer-term loans, making borrowing attractive for consumers and corporations. The Federal Reserve thought that the first program of QE would be the only one, but when equities could not sustain any gains after the conclusion of the easing program, economic indicators weakened. In response, the Fed had to embark on a second program, calling it QE2. When the conclusion of QE2 promptly led to yet another major equity correction, a third bout, QE3, was ramped up. As of early 2016, there is still an assumption that QE3 is the final round of expansion that the Federal Reserve will conduct, and that even the FF rate can be increased and kept above zero. This will most certainly not be the case.

Traditionally, money-printing caused inflation in the eras before technology was an offsetting force. The Weimar Republic of Germany (1919–33) is often cited as an example of the peril. When the first round of QE started, a crowd of hyperinflation fearmongers arose, committed to a narrative that we were doomed to repeat the Weimar experience if we embarked on this slippery slope. This group found a natural synergy with the technophobe movement, which is built around an insistence that technology has not created any real economic change in the last century. Strident opposition to QE became quite fashionable, with all QE equated to the mismanagement of Venezuela under Hugo Chávez. Some expended considerable effort to assert their supposed expertise by insisting that inflation was much higher than the data indicated.

As QE commenced, however, inflation was minimal and transitory at best. There has certainly not been any sustained “high” inflation to this day, despite the immense amount of QE. Whether one looks at the official Consumer Price Index (CPI) or the MIT Billion Prices Project, inflation is far below the zone where it could be considered adequate, let alone high. The hyperinflation cult has seen its membership shrink, but new questions have emerged amidst the ashes of their failed predictions. Where is all that QE vanishing to? At what rate? Is this pattern of disappearance permanent? Is the QE turning up somewhere else?

Cynthia Wu and Fan Dora Xia have published research on what is termed the Fed Funds shadow rate. While this research is little known outside the immediate field, the discovery has profound significance, perhaps even greater than Ms. Wu and Ms. Xia realize. The shadow rate, which was updated monthly while the FF rate was near zero, roughly tracks the effect of U.S. QE in generating effectively negative FF rates.

This shadow rate reveals that increasing levels of QE still did not generate noteworthy inflation, and this may reflect concurrent ATOM deflation. The rounds of QE temporarily pushed the Wu-Xia shadow rate not merely to zero, but negative. The movement from 0% to −1% and −2% was swift, and the trajectory indicated that the trend of increasingly negative rates was not linear, but exponential. When the United States stopped QE, the Wu-Xia shadow rate quickly rose back to 0%, which coincided with increased deflation and a massive crash in the price of almost every commodity. This crash occurred despite the fact that, excluding the United States, the other central banks of the world were creating a combined total of over $200 billion/month as of early 2016.

Hence, if the Wu-Xia shadow rate is a tool to indirectly estimate the current ATOM deflation rate, then perhaps the measure of sufficient versus insufficient QE is the gap between the two. When the shadow rate is above the level that the ATOM indicates technological deflation to have reached by that point, liquidity is insufficient and deflation manifests. When the shadow rate is at or below the ATOM deflation rate, there is sufficient liquidity and a proportional level of inflation. This means that if there is ever to be significant inflation, the Wu-Xia shadow rate has to be more deeply negative than the estimated ATOM deflation rate, which is impossible while the FF rate is above zero, since a positive FF rate pins the shadow rate to itself.
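To make the rule concrete, here is a minimal sketch in Python. The shadow-rate and ATOM-deflation values are hypothetical placeholders, and the sufficiency test is simply the gap rule stated above, not an official Wu-Xia calculation:

```python
# Hedged sketch of the liquidity-sufficiency rule described above.
# All numbers are illustrative assumptions, not measured data.

def liquidity_assessment(shadow_rate: float, atom_deflation_rate: float) -> str:
    """Compare the Wu-Xia shadow rate against the estimated ATOM deflation rate.

    Both rates are decimals (e.g., -0.02 for -2%). Inflation can only emerge
    when the shadow rate is pushed below the ATOM deflation rate.
    """
    gap = shadow_rate - atom_deflation_rate
    if gap > 0:
        return f"Insufficient liquidity: deflation manifests (gap {gap:+.2%})"
    return f"Sufficient liquidity: proportional inflation (gap {gap:+.2%})"

# Example: shadow rate pinned at 0% while ATOM deflation runs at -2%.
print(liquidity_assessment(0.00, -0.02))   # insufficient -> deflation
print(liquidity_assessment(-0.03, -0.02))  # sufficient -> mild inflation
```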

Now, if technology is rising as a percentage of world GDP, this could mean that the progression of the ATOM deflation rate from −1% to −2% is an ongoing trend. The rate could similarly double again from −2% to −4%, and, amazingly, from −4% to −8% by the 2030s, merely by technology rising from the current 2% of GDP to 4%, 8%, and beyond. This sounds extraordinary, but unless one thinks that technology will shrink as a share of GDP, it is the course we are presently on. The level of monetary expansion needed to truly generate inflation is thus far higher than most economists think.
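A short sketch of that progression, assuming (hypothetically) that technology's share of GDP doubles roughly every eight years, and using the 2% of GDP ↔ −2% deflation mapping implied above:

```python
# Illustrative projection: ATOM deflation scaling with technology's share of
# world GDP. The doubling period is an assumption for illustration only.

tech_share = 0.02           # ~2% of world GDP as of 2016, per the text
deflation_per_share = -1.0  # assumed: each 1% of GDP share ~ -1% of deflation
doubling_years = 8          # hypothetical doubling period

year = 2016
while tech_share <= 0.08:
    atom_deflation = deflation_per_share * tech_share * 100  # in percent
    print(f"{year}: tech ~{tech_share:.0%} of GDP -> ATOM deflation ~{atom_deflation:.0f}%")
    tech_share *= 2
    year += doubling_years
```

Under these assumptions, the −8% rate arrives in the early 2030s, consistent with the timeline above.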

This theory, while still somewhat speculative, is supported by the fact that cumulative “QE” by all the central banks of the world appears to be accelerating exponentially, despite no aggregate quota having been agreed to by the banks. Each central bank is reacting to the conditions in its own country, but as the ATOM is global, the deflationary effect concentrates in countries with high technology density.

Even if one particular bank, such as the U.S. Federal Reserve, declares that it will not conduct more QE, other central banks fill the gap, inadvertently ensuring that the combined total continues to rise. Despite over $16 trillion in monetary expansion as of 2016, the crash in commodity prices emphatically buries the fears of inflation, “peak oil,” and “a return to the gold standard” that incorrectly arose from outdated assumptions about such massive monetary action. It is obvious that all this newly created money has merely offset deflation. As structural deflation accelerates, the level of world QE has to keep rising and be more diffuse than current programs.

While not every type of monetary creation has the same impact per dollar, the rising total is indicative of an all-important phenomenon. Note how the chart bears an uncanny resemblance to the exponential curves found in the writings of Mr. Kurzweil and other futurists. If this exponentially rising monetary expansion is associated with the trend of technological deflation, then monetary expansion, far from ending, has to be made permanent across all major world economies, be declared as such, and rise at a rapid rate each year. From the chart, it is apparent that the notion of ever selling purchased assets on central bank balance sheets back into the market (a reversal of monetary expansion) is entirely out of the question, making the balance sheet itself a moot concept.

[Figure: Cumulative monetary expansion by the world’s central banks, tracing an exponential curve.]

So if all this newly created money does not cause inflation, is it utterly vanishing? On the contrary: the nature of technology is such that the liquidity is being metabolized by the ATOM. This increases the size and scope of the ATOM, which in turn demands more liquidity, which then produces yet more technology. This self-reinforcing process generates new productivity and economic growth, and is in fact an indicator of macroeconomic growth seeking to return to its long-term trendline. Hence, this pattern of exponentially rising monetary expansion is itself the fuel that will keep the economic growth trend going. Over time, as technology becomes a sufficiently large portion of the economy, these two exponents will begin to merge.
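A toy simulation of this loop follows. The starting QE figure and the annual growth rate are invented for illustration and carry no empirical weight; the point is only that the two quantities rise exponentially together:

```python
# Toy simulation of the self-reinforcing loop described above: liquidity is
# metabolized by the ATOM, enlarging it, which in turn demands more liquidity.
# Every parameter here is a hypothetical assumption for illustration only.

atom_share = 0.02      # technology's share of GDP (~2% as of 2016, per the text)
qe_per_month = 100.0   # required monetary creation, $B/month (arbitrary base)
atom_growth = 0.10     # assumed annual growth of the ATOM's share, fueled by liquidity

for year in range(2016, 2026):
    print(f"{year}: ATOM ~{atom_share:.1%} of GDP, required QE ~${qe_per_month:,.0f}B/month")
    atom_share *= 1 + atom_growth    # liquidity is metabolized: the ATOM enlarges...
    qe_per_month *= 1 + atom_growth  # ...and the larger ATOM demands more liquidity
```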

As we will see in a later section, this perpetual process can be shaped into an exceptionally good circumstance and inaugurate a new age of prosperity. Unfortunately, the central banks of the world are very far from internalizing this ATOM-reinforcing paradigm, on multiple levels. Current monetary easing programs lead to the money accumulating disproportionately in the largest banks and technology companies, leaving most other sectors, and the individuals within them, missing out. This narrow concentration is part of the reason that the various central bank actions are not as effective as they could be. Furthermore, none of them are ready for the unprecedented technological deflation that is soon to arrive from AI.

The Economics of Artificial Intelligence

The first item in the earlier “Panoply of Creative Destruction” list was AI, and it is important enough to warrant a full section devoted to it. While this whitepaper will not enter the debate about what meets the increasingly stringent yet strangely fluid definition of AI, there are some crucial factors that most factions in the AI debate have failed to consider. This leaves them and those who follow their guidance unprepared for some of the largest ripple effects of AI.

AI is a field that gets insufficient credit for the advances it has made. Invariably, each new threshold set for AI capabilities becomes a nonevent once met (such as when an AI defeated the top-ranked human chess player in 1997). Additionally, each major new AI advance gets reclassified into its own industry (robotics, high-frequency trading, intelligent search engines, etc.) and is no longer counted as AI. These factors contribute to a broad underestimation of how pervasive early AI has already become, feeding a doubly false narrative that AI is both a swallower of jobs and a sudden arrival out of nowhere.

[Figure: Jobs ranked by vulnerability to AI replacement (chart from Business Insider, built from data via The Economist).]

There has been a recent torrent of articles ranking jobs by their vulnerability to AI replacement (see the chart above, from Business Insider, built from data via The Economist). This treatment of the topic is both incomplete and oversimplified. Even those who recognize that past technological disruptions have always created a net increase in output and employment somehow worry that this time, the speed of replacement and the widening skill-mismatch chasm portend massive dislocation and permanent unemployment. This is not only an incorrect prediction that fails to recognize how much more output will be generated per unit of input, but it also distracts the debate from the other side of the coin. The simple fact is that for each job an AI can perform at lower cost than a human employee, an entrepreneur saves that payroll expense relative to the previous cost structure, enabling either wider margins or more hires elsewhere. Hence, job displacement through AI can only increase new business formation by the same or greater proportion. That is, unless overt human meddling (whether through government or otherwise) unwittingly prevents this process from occurring.

A recent spate of articles discusses why AI is back in the spotlight after over 20 years of hibernation. Common topics include what various subcategories of AI could look like, and how AI may augment human abilities in some areas while being invisible in others, becoming a utility of sorts within a new status quo. I generally agree with this conclusion, but as far as AI competing with human jobs, these articles overlook the largest factor of all: the AI’s borderless and untaxable nature.

Whether an AI performs only the most repetitive work or has capabilities that surpass those of any human, it can operate from anywhere. The AI can be owned by a corporation located in the most tax-friendly place available, changing its country of domicile in an instant if necessary. The AI does not care about the weather, commute distances, parking spaces, or holidays. The AI is not governed by cost-of-living constraints beyond the minimal cost of running the hardware that hosts it. By contrast, human output is taxed at marginal rates that often exceed 50%, and the higher-paying human jobs are concentrated in very expensive areas.

Hence, the primary handicap to human competitiveness in the face of AI is not the raw output of the human, but the taxation of the human’s productivity, and the high operating costs that a human incurs. This additionally means that tax increases on higher-income workers are more likely to hasten their marginalization in the face of AI. The state, instead of increasing taxes on productivity, has to figure out a way to move policy in the opposite direction. Tax immunity means that AI enables technology to start tightening the screws on government revenue as well, which we will elaborate on in the next chapter. This process will be irreversible long before governments even notice the cumulative revenue erosion.

But as enormous a factor as unfavorable taxation and megacity living costs may be, they are not the only reasons human workers may be uncompetitive with AI. Human employees demand medical, dental, and vision coverage from their employers. Humans have to interrupt their work several times a day for various aspects of personal maintenance. An AI that can do the work of a thousand humans can reside on hardware that fits in a single room in a remote location and consumes just a few hundred dollars of electricity per month. By contrast, each of those thousand human workers requires a house, a cubicle or office, a car, roads for the car, a food production chain, schools for their children, and so on. If that were not enough, human workplaces have recently come under siege by extortionists demanding various politicizations of hiring, even at the cost of company productivity. Taking all these disadvantages into account, it may appear that humans stand no chance whatsoever, which is the basis for many pessimistic statements about the impact of AI, including from Bill Gates and Elon Musk. If even these luminaries of technology are apprehensive about what AI may do to human well-being, is this the beginning of the end?
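To make the cost asymmetry concrete, consider a rough comparison. Every figure below is a hypothetical assumption, except the “few hundred dollars of electricity per month” and the >50% marginal tax rate cited above:

```python
# Rough cost comparison between a human employee and an AI performing the
# same function. All figures are hypothetical assumptions for illustration.

gross_salary = 100_000    # assumed annual salary
payroll_overhead = 0.30   # assumed benefits, insurance, office space, etc.
employer_cost = gross_salary * (1 + payroll_overhead)

marginal_tax = 0.50       # the >50% marginal rates cited above
take_home = gross_salary * (1 - marginal_tax)

ai_hardware = 5_000       # assumed annualized hardware cost
ai_electricity = 300 * 12 # "a few hundred dollars of electricity per month"
ai_cost = ai_hardware + ai_electricity

print(f"Employer pays:   ${employer_cost:,.0f}/yr for the human")
print(f"Worker receives: ${take_home:,.0f}/yr after tax")
print(f"AI equivalent:   ${ai_cost:,.0f}/yr, domiciled wherever taxes are lowest")
```

The wedge between what the employer pays and what the worker keeps is precisely where the AI’s tax immunity and negligible operating costs bite.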

One way to approach the question is to recognize that technological displacement of jobs in the course of productivity improvements has already been underway for centuries. There was once a time when 70% of the U.S. population worked in agriculture; now just 2% does. Despite this, there is far higher production of calories per person and far greater overall employment in the economy (mostly indoors). But this framing is somewhat imprecise, as what occurred was a productivity revolution in agriculture; job creation in other sectors was a subsequent by-product of that revolution.

Instead, the most accurate measurement technique is to chart input costs relative to output generated, and observe that human jobs tend to sprout up around this output over time in the process of managing, transacting, and consuming it. Continuing the prior example of agricultural employment and output moving in opposite directions, the next sector, manufacturing, has been the subject of countless agonized analyses over the last 45 years of American economic media coverage. Everyone knows that manufacturing jobs have vanished and that some categories of the working class have seen hardship. Yet the overlooked fact remains that U.S. manufacturing output never stopped rising, as per this chart from Prof. Perry (that parabolic exponential curve shape appears yet again).

Advances in automation have greatly increased output per worker, and the shortening time between doublings of output is yet another example of exponential and accelerating productivity. The running joke in these circles is that the continuation of these trends implies an imminent outcome in which the United States produces $10 trillion of manufactured goods while employing just one person. Additionally, despite the perception that U.S. manufacturing jobs have moved to China, the reality is that China has lost even more manufacturing jobs than the United States, while simultaneously increasing its own output through robotic replacement of human workers.

Anecdotes about manufacturing job loss can easily be used to whip up emotion, but comprehensive data proves that the average American lives in much higher prosperity than during the supposed manufacturing heyday of 1946–69. This is true even when measured in purchasing power over any standard manufactured good, without even counting how many categories of manufactured products did not exist then.

[Figure: U.S. manufacturing output over time, which never stopped rising and traces an exponential curve (chart from Prof. Perry).]
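The “shortening doubling time” point follows directly from the rule of 70, sketched here with hypothetical growth rates rather than figures read off the chart:

```python
# Rule-of-70 check on the "shortening time between doublings" claim.
# The growth rates are hypothetical inputs, not figures from the chart above.

for label, annual_growth_pct in [("early period", 2.0),
                                 ("middle period", 3.5),
                                 ("recent period", 5.0)]:
    doubling_years = 70 / annual_growth_pct
    print(f"{label}: {annual_growth_pct}%/yr growth -> output doubles every ~{doubling_years:.0f} years")
```

As the growth rate accelerates, the doubling time compresses, which is what an exponential-and-accelerating output curve looks like in practice.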

As the ATOM transformed the agricultural and manufacturing sectors, the service sector was the beneficiary. But the ATOM is now at the service sector’s doorstep, and services will undergo an acceleration of the churning process that removes tasks (and some jobs) in the lower rungs to create new tasks and jobs in the higher rungs. The chart from a McKinsey study, shown below, indicates that the outcome is not binary, with a job either surviving or being eliminated, but rather a percentage of tasks per job that can be automated. The study estimates that 45% of contemporary tasks can be automated with existing AI, without even waiting for upcoming advances. This data indicates that there are already many instances where two jobs can be compressed into one, with perhaps a higher salary than either. It additionally indicates that unless you have adopted as much AI as is possible for your profession, you are, or soon will be, a laggard.

Fears over “outsourcing” have been a distraction, since by the time a job can easily be outsourced to a low-cost country, it is already on the verge of displacement by AI. But most importantly, this chart indicates which functions an entrepreneur can now have done at a fraction of the previous cost through use of an AI instead of a human. Indeed, some entrepreneurs may see charts like this, select the functions that are most completely assimilated into AI, and build a business entirely from only the functions AI can perform. Once thousands or even millions of entrepreneurs migrate in this direction, far more output is generated per human. Within this process, the focus should be on the much higher aggregate output that AI will soon generate.

The hourly-wage rate alone is not a strong predictor of automatability, despite some correlation between the two.

[Figure: Comparison of wages and automation potential for U.S. jobs. Vertical axis: ability to automate, as the percentage of time spent on activities(1) that can be automated by adapting currently demonstrated technology. Horizontal axis: hourly wage, in $ per hour(2).

(1) The analysis used “detailed work activities” as defined by O*NET, a program sponsored by the US Department of Labor, Employment and Training Administration.

(2) Using a linear model, the correlation between wages and automatability in the US economy is significant (p-value < 0.01), but with a high degree of variability (r² = 0.19).

Source: O*NET 2014 database; McKinsey & Company analysis.]

Fundamentally, if AI can produce the same $10 trillion of economic output that today takes 100 million workers, then those 100 million people can transition (with varying levels of ease) to producing an additional $10 trillion of output elsewhere. Hence, a total of $20 trillion is now generated across the same number of people. Note the difference between “output generated” and “jobs created,” a distinction that often escapes participants in this debate, yet will soon be too pronounced to overlook. It will not be uncommon to see new types of small businesses earning $10 million in annual revenue with only three highly paid human employees.
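Restating that arithmetic as a minimal sketch (the full-redeployment assumption is the text’s, and is of course idealized):

```python
# The arithmetic behind the $10T -> $20T claim above.

workers = 100_000_000           # workers currently producing the output
output_now = 10e12              # $10 trillion, produced by those workers today

ai_output = output_now          # AI takes over the existing $10T of production
redeployed_output = output_now  # the freed workers produce $10T elsewhere

total = ai_output + redeployed_output
print(f"Total output: ${total/1e12:.0f}T across the same {workers:,} people")
print(f"Output per person: ${total/workers:,.0f} (double the prior ${output_now/workers:,.0f})")
```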

This effect is certain to extend both the breadth and depth of globalization. What many a globalization pundit gets wrong is the discussion of outsourcing, as if jobs are finite and employers are wrong to seek lower costs. In reality, by the time any job category can be outsourced en masse, it is already very near to replacement by automation. But from the perspective of the employer or entrepreneur, the situation is inverted. If a highly paid professional in an advanced economy can be replaced by an AI, that same capability is now available in backwater countries that never had any such human professionals before. The expertise gap between two economies may narrow in some areas and widen in others, as the ability to harness AI will be the greatest determinant of competitiveness. Fountains of productivity may erupt in the most unexpected places. As a widening array of tasks can be performed through AI, new business models from agile entrepreneurs will keep emerging.

Not everyone, of course, is built for entrepreneurship, or is at a stage in life where it can be entertained on short notice. In addition, our educational system is not structured to teach a child to think like an entrepreneur; quite the opposite, in fact. Therefore, the practical obstacle in this theoretical ascension of AI is the widening skills mismatch across the human workforce, both vertical and horizontal. Humans are not reprogrammable the way computers are, where one program can be uninstalled and another installed in mere minutes. As of early 2016, there are almost six million open positions in the United States according to the Bureau of Labor Statistics (BLS), even as several million people remain unemployed, some of them for years. A midcareer accountant or dermatologist cannot simply become a software engineer, let alone an experienced one, after just three months of training. Even when rapid retraining is possible, employers have to adapt correspondingly and accept the retraining as valid, or other prospective employees will be discouraged. The subjective cost of stress from career uncertainty should not be dismissed either. These challenges, along with the aforementioned squeeze that AI will inflict on the tax base of all high-tax locations, are ones for which I present a solution later in this chapter.

Lastly, there is a recurring fear that AI will subjugate or even exterminate humans over resource competition, as depicted in many science fiction works. I believe that this is not a risk, since AI does not consume the same fuels that humans do, other than electricity, which itself is becoming cheaper as described earlier. However, there is reason to believe that AI might elect to force humans into more productive, technology-advancing behavior, as determined by the goals of the AI. How this unfolds remains to be seen.

The Tyranny of Insufficient Nominal GDP

A curious thing happened several decades ago, when the metrics to measure the output of a nation were being devised. Gross national product (GNP), and later gross domestic product (GDP), measured economic growth by consumption and investment, without particular emphasis on productivity. Unfortunately, this meant that high inflation could make many economic statistics appear better than true economic conditions warranted. While some bouts of high inflation were due to one-time demographic factors (such as in the United States during the 1970s), others were due to outright mismanagement. Some infamous examples were deliberate actions by corrupt governments.

In reaction to the effect that inflation occasionally had in boosting GDP without true increases in living standards, a mechanism to deduct inflation from raw (nominal) GDP was devised. This inflation-adjusted GDP was given the credibility-enhancing prefix of “real.” Real GDP worked well for a while, as it stripped out inflation and thus was more closely tied to true gains in productivity, and hence living standards. However, economists got carried away with real GDP, which is only useful when measured over lengthy periods of time. Measuring real GDP on a quarterly basis has no value outside of academia, yet it is headline news in the financial media each of the three or more times it is released and revised for a given quarter.

At the same time, nominal GDP (NGDP) is not even reported by the financial media. If someone wants to see an official report on the latest nominal U.S. GDP, they have to go to a government website and download an Excel file. Hence, the NGDP release does not show up in Google searches, so the notion of using the data is that much further from occurring to anyone. By training generations of economists, journalists, and financiers to look only at real GDP, a huge cognitive dissonance has formed around the fact that most other economic indicators are tied to NGDP, as is the performance of every investment vehicle. Real estate, mutual funds, art, wine, and corporate valuations certainly rise in tandem with NGDP, not “real” GDP, and given how heavily leveraged most real estate is, this is critically important. Major economic indicators such as auto sales, home sales, job growth, and retail sales are similarly tied more to NGDP than to real GDP.

Inflation is similarly viewed through an outdated lens. Trauma from decades-old predicaments gave rise to economic assumptions that are becoming obsolete. The high inflation of the 1970s created a tribe of “inflation hawks” who continue to overrate the imagined horrors of less-than-terrifying inflation rates of 4%. Intellectually lazy metrics like the “misery index” emerged (a straight sum of the inflation rate and unemployment rate). Such a metric not only presumes that a 1% rise in the inflation rate causes as much hardship as a 1% rise in the unemployment rate, but implies that nontechnological deflation is a good thing. A society with a 5% unemployment rate and a 3% inflation rate is scored the same as a society with a 9% unemployment rate and a −1% inflation rate, when in fact the latter climate is vastly worse for almost every socioeconomic class. A society that has steered a majority of households into acquiring debt to purchase real estate on leverage should be vastly more worried about deflation than inflation, even if 4% inflation were to appear.
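The critique can be made concrete in a few lines; the two societies are the ones just described:

```python
# The misery index critique above, made concrete. The index scores these two
# societies identically, despite the second being vastly worse off.

def misery_index(unemployment_pct: float, inflation_pct: float) -> float:
    """Straight sum of the unemployment and inflation rates."""
    return unemployment_pct + inflation_pct

society_a = misery_index(unemployment_pct=5.0, inflation_pct=3.0)   # 8.0
society_b = misery_index(unemployment_pct=9.0, inflation_pct=-1.0)  # 8.0

print(f"Society A (5% unemployment, 3% inflation):  misery index {society_a}")
print(f"Society B (9% unemployment, -1% inflation): misery index {society_b}")
```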


This brings us to an extension of the prior discussion of how deflation can be problematic. Some observers have noted that recent economic recoveries in the United States have grown progressively weaker, and that this has constrained job growth (“jobless recoveries”). But these observers still focus on how U.S. real GDP growth has fallen from 3% to 2% annually, overlooking the far more worrisome shadow trend of NGDP growth falling from 7% to 4% annually. There is evidence that insufficient NGDP contributes to financial crises, which are the more common type of recession in the current era, rather than manufacturing-based production recessions. It was assumed that low inflation did not constrain real GDP, but apparently both inflation and real GDP are trending lower in tandem, suggesting that the two have become correlated.

Think of sufficient NGDP as the speed at which a bicycle moves forward smoothly; insufficient speed makes the bicycle wobble. An important component of NGDP is the velocity of money (VM): how often the same dollar is transacted per unit of time. Sluggish NGDP has greatly slowed VM, which in turn further retards future NGDP. This vicious cycle is difficult to break, for when the economic commentariat fixates exclusively on real GDP, it underestimates how much VM has in fact slowed along with the NGDP erosion.
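In equation-of-exchange terms, NGDP = M × V, so velocity can be backed out from nominal GDP and the money stock. The figures below are illustrative placeholders, not actual Federal Reserve data:

```python
# Backing out the velocity of money from the equation of exchange:
# NGDP = M x V, so V = NGDP / M. Figures are illustrative placeholders.

ngdp = 18.0e12          # assumed nominal GDP, $
money_supply = 12.0e12  # assumed money stock (e.g., an M2-like aggregate), $

velocity = ngdp / money_supply
print(f"Velocity of money: {velocity:.2f} turnovers per year")

# If NGDP growth stalls while the money stock keeps expanding, velocity
# mechanically falls, which is the vicious cycle described above.
```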

Corporations make capital expenditure and hiring decisions based on the expected growth trajectories of revenue and profits, which are a function of NGDP, not “real” GDP. No corporation reports its quarterly results in both nominal and inflation-adjusted terms, so academics are baffled as to why businesses are not hiring or spending just because real GDP has decelerated slightly from 3% to 2%. As we can see from the BLS chart below, percentage job growth is indeed trending lower in tandem with NGDP growth. Paradoxically, NGDP is more “real” (and certainly more relevant in real time) than what is termed real GDP.

[Figure: BLS data showing percentage job growth trending lower in tandem with NGDP growth.]

Additionally, insufficient NGDP has greatly constricted the technology industry, and hence technological progress. For one thing, valuation multiples are not as high as they would be in a higher-NGDP economy, since earnings growth rates would be higher. While safer value stocks perhaps saw their forward price-to-earnings (P/E) ratios compress from 12 to 10, high-growth companies saw their forward P/E ratios compress from 60 to 30. This leads some corporations (such as Comcast) to prioritize a dividend payout ahead of innovation, since dividends are valuable in a low-inflation climate.

You may think that technology startup valuations are high now (most people only notice them in the topmost years of the cycle, not during the other three-fourths of it). But even these levels are lower than they naturally would be under a more optimal NGDP growth rate. These lower valuation multiples lengthen the duration from inception to liquidity for many tech startups, keeping investor money illiquid for longer and making it hard for the entire start-to-exit process to complete within a single economic growth cycle of six to nine years. Such malaise has worsened the risk/reward profile of prospective venture capital rounds and moved the entire curve downward, ensuring that medium-risk is the new high-risk, and low-risk the new medium-risk. Technology ventures with negligible sunk costs and no inventory builds get favored, while more profound projects with large upfront costs become too risky and take too long to break even. For those dismayed by a technological future of social media addiction and underwhelming apps rather than space exploration, this is precisely the reason. Aside from Elon Musk and Google, very few entities are willing to risk the upfront costs of ambitious ventures such as private spaceflight and electric cars.

Funding of low-capex “fluff” at the expense of more serious technology reduces long-term, inflation-offsetting productivity gains. Over time, technological progress slows and falls further and further behind its long-term trendline. At present, my proprietary calculations estimate that after the 2001 technology bust, technological progress has proceeded at only 60 to 70% of its natural rate, due to insufficient NGDP. This happens to be why many technological predictions made in 1999-2000 for circa 2016, including by Ray Kurzweil, are consistently five to eight years behind schedule across many seemingly unrelated subsectors of technology. The impedance is holistic and pervasive. There is thus a tremendous opportunity cost in this excessive fear of even 3% inflation, a rate not seen in two decades, and a fear originating from conditions that can no longer arise in a world where technologically deflating products are prevalent.

Some members of the Federal Reserve have indicated that monetary policy should target NGDP instead of inflation, and that the NGDP target should be 5%. This policy, if formalized, would be a huge step in the right direction, but the NGDP target should be 6 to 7%, for economists will be surprised to see that even that much NGDP leads to just 2% inflation. Thus, their precious real GDP will in fact register a superb 4 to 5% growth rate. Higher NGDP means more technology, which keeps inflation low even at that higher NGDP, which produces more technology. This virtuous cycle can begin if the current vicious cycle is decisively attacked and broken.
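The arithmetic behind that claim, using the standard approximation that real growth is roughly nominal growth minus inflation:

```python
# Growth accounting behind the proposed target: real GDP growth is
# approximately NGDP growth minus inflation.

ngdp_growth = 0.065  # proposed 6-7% NGDP target (midpoint)
inflation = 0.02     # the ~2% inflation the text predicts at that NGDP level

real_gdp_growth = ngdp_growth - inflation
print(f"Implied real GDP growth: ~{real_gdp_growth:.1%}")  # ~4.5%, in the 4-5% range
```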

Equity Valuations as Harbingers of Future ATOM Growth

There is a robust and highly visible indicator that corroborates the centuries-proven accelerating rate of economic growth, and its concentration within technology. That indicator is the percentage of equity market capitalization comprised of companies selling products experiencing rapid technological deflation. How much can it reveal to us about future technological diffusion and the resultant growth acceleration?

The S&P500 is a broad equity index in the United States weighted by market capitalization (unlike the Dow Jones Industrial Average, to which knowledgeable investors give far less importance). The S&P500 contains about 92 to 94% of the market cap of the entire U.S. equity market, and with almost half of the profits of its companies derived from overseas, it is a very comprehensive index. There was a time when companies categorized as part of the technology sector were not selling products that deflated in price so quickly (“high-tech” was just electrical equipment and motor vehicles). But once semiconductors and software advanced in sophistication and scope, business models built around such rapidly deflating products proliferated, and some became incredibly profitable. At first, only one or two such companies were large enough to be included in the S&P500 index. More followed as computing percolated throughout the economy. Even after the technology bust of 2001–03, technology companies returned to being among the most valuable and highest-earning in the entire market.

As of 2016, the technology sector constitutes about 20% of the market cap, and contributes 20% of the earnings of the S&P500. The most purely deflating and materially efficient product category of all, software, emerged as the dominant product category sold by the most profitable companies. The other essentials of computing such as semiconductors and storage also feature prominently. Biotechnology is another subsector built around price-deflating products slowly penetrating the healthcare and pharmaceutical fortress. One might think that rapidly deflating product prices would have an adverse impact on revenue, life-cycle management, and inventory, yet the companies producing and selling these products generate 20% of the profits of the entire S&P500. Within these new business models resides a window into the future of the entire economy, for these economic fundamentals, forged in the crucible of tech companies, are propagating outward.

Companies established enough to be part of the S&P500 have a market valuation derived from expected future earnings, with a net present value (NPV) calculation applied to weight the near future more heavily than the distant future. As the P/E ratio of the technology sector is no higher than that of the broader index despite the sector’s higher earnings growth rates, the price-to-sales ratio of the technology sector may eventually converge with that of the rest of the S&P500. This could occur from either direction: technology revenue rising greatly, or the prices of other sectors rising to synchronize their price-to-sales ratios with the technology sector’s. Remember that some current technology companies may no longer be categorized as such in the future, even if their products are of a rapidly deflating nature. The NPV method and standard discount rates place this time horizon at about 10 to 15 years, for any years further out carry too small a weight in the NPV calculation. We hence have an approximate timeline for this rise in structural valuation, even amidst the booms and busts that will certainly occur along the way.
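To see why the horizon lands at roughly 10 to 15 years, consider the NPV weights under a typical discount rate (the 8% rate is an assumed, typical value, not one specified in the text):

```python
# Why the NPV horizon lands at roughly 10-15 years: under a typical discount
# rate, earnings beyond that point carry little weight in today's valuation.

discount_rate = 0.08  # assumed standard discount rate

for year in (1, 5, 10, 15, 20, 30):
    weight = 1 / (1 + discount_rate) ** year
    print(f"Year {year:>2}: a dollar of earnings is worth {weight:.2f} today")
```

At this rate, a dollar of earnings 15 years out is worth about a third of a dollar today, and one 30 years out only about a tenth, which is why the convergence window effectively closes at 10 to 15 years.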

While this methodology is highly speculative, this timeline coincides with the period in which the technological percentage of world GDP is anticipated to reach 8% or higher, and it provides independent support for that prediction. It is also quite consistent with the exponential, not linear, deflationary trend we see in the exponentially rising world QE totals. The trend we observed in both the computing and economic growth sections of this whitepaper is further supported, and we are indeed very near the “knee” of the curve.

Do you remember the earlier mention of nation-state risk to exponential, accelerating economic growth? It is time to elaborate on what that means, and on what forward-thinking governments can and must do to manage the risk.

Characteristics of the ATOM

Tying all of these observations and analyses together, the comprehensive definition of what the ATOM is and how it behaves can be summarized as follows, and in the attached PowerPoint:

1. Technological change, despite occasional deviations from the trendline, is exponential and accelerating.

2. Economic growth is driven by technology, and has always been exponential and accelerating. Half of all world economic growth that has ever occurred has happened after 1997.

3. Technological disruptions generally displace one set of industries and workers, while creating more wealth elsewhere. More wealth is created than destroyed, but often in different places.

4. Technology invariably finds a way to displace a commodity, organization, or industry that is resistant to technology or otherwise obstructs the progress of technology, whether directly or very indirectly.

5. No industry is immune to technological disruption, and industries that resist this process merely experience a sharper disruption at a later date.

6. Technological disruptions tend to be interconnected with each other, and a rapid disruption in one area exerts a strengthening force on other nascent disruptions.

7. Artificial Intelligence (AI) will eliminate many jobs, but will also create a vast category of new business models and careers. Media coverage of AI focuses only on the former effect, ignoring the latter.

8. Technology is inherently deflationary. While this effect was too minor to matter until recently, with technologically deflating products now comprising 2% of annual world GDP, this deflation now has significant (and still rising) macroeconomic effects. AI in particular will be exceptionally deflationary.

9. An increasingly outdated focus on “Real” GDP, instead of NGDP, has led to a primary cause of economic sluggishness and weak job creation being overlooked. It is erroneous to assume that low inflation does not correspondingly decrease real GDP growth.

10. The Federal Reserve should aim for an NGDP target, rather than an inflation target. Inflation will still be just 2 to 3% within a 6 to 7% NGDP environment.

11. The central banks of the world have been generating new money in a pattern that is rising exponentially, contrary to what they expected. This is due to the need to offset technological deflation.

12. Despite talk of QE and other expansion programs ending, they cannot end; in fact, the amount of QE has to increase each year.

The next question becomes how the governments of the world should transition to this new reality. Policy inertia and status quo bias are the default situation for most countries. This has introduced a variety of imminent risks.
