Chapter 15

Quantitative Trading in Equities

Kun Gao

Tudor Investment Corporation

INTRODUCTION

Equity trading began when the Dutch East India Company issued the first stock in 1602. During most of the more than 400 years that have followed, equity trading was treated more like a game than a science, and many of the more famous players in this game were speculators. But over the last 60 years, advances in financial theory set the stage for a more scientific approach. Together with the rapid development of computer technology and the increasing speed with which information is disseminated, these theoretical advances sparked a quantitative revolution over the past 30 years. The combination of advances in financial theory, mathematics, computer technology, and informational access, together with dramatic reductions in trading costs, has inspired a new scientific approach to the trading of equities that has come to be known as quantitative trading.

Quantitative trading is the systematic trading of securities using rule-based models and executed through computer algorithms. These computer models, sometimes called systems, are often based on economic theory or patterns observed in the market, fully backtested using historical financial data on a large number of stocks across a long period of time, and encoded in programs to be traded automatically via computers with little or no human intervention.

Selected Key Events in Quantitative Equity Trading

1982: James Simons, a noted professor of mathematics at Stony Brook University, founds Renaissance Technologies. Prior to that, Simons had set up his first investment management firm, Monemetrics, in 1977. Renaissance would later prove to be one of the most successful quant-driven hedge fund management firms of all time.

1983: Gerry Bamberger starts trading stock pairs, now known as “pairs trading,” at Morgan Stanley with $500,000 and a small group of traders. Later Nunzio Tartaglia would take over the group and rename it Automated Proprietary Trading, or APT. In 1986, APT pulled in what was then an eye-popping $40 million. It pulled in another $50 million in 1987.

1986: David Shaw is hired into Tartaglia's APT group at Morgan Stanley.

1988: David Shaw starts up his own investment firm, D. E. Shaw, with $28 million in capital.

1990: After a few less-than-stellar early years, Renaissance Technologies’ flagship fund Medallion gains 55 percent after fees. In 1993, with $280 million in assets, Medallion was closed to new investors.

1992: Peter Muller joins Morgan Stanley. By 1994, Muller put together a team of math and computer experts known as the Process Driven Trading (PDT) group. During the late 1990s and early 2000s, PDT accounted for one-quarter of Morgan Stanley's net income.

1994: Clifford Asness joins Goldman Sachs and launches the Quantitative Research Group.

1995: Asness starts Global Alpha, a Goldman Sachs internal hedge fund. By late 1997, the Quantitative Research Group was managing $5 billion in a long-only portfolio and nearly $1 billion in Global Alpha.

1998: Asness leaves Goldman Sachs and starts his own hedge fund, AQR, with $1 billion in start-up capital. It represented one of the largest hedge fund launches on record to that point, and three times as much as the founders had originally projected they could raise.

2000: The dot-com bubble begins to burst in March, and quant funds suffer huge losses. Renaissance's Medallion fund lost $250 million in three days, nearly wiping out its year-to-date profit. AQR was on life support, with roughly $600 million of its $1 billion in seed capital gone, in part because investors pulled out of the fund.

2007: Quant funds experience an August meltdown. The Renaissance Institutional Equities Fund (RIEF), which managed about $26 billion in assets, was down 8.7 percent from the end of July to August 9, 2007, a loss of nearly $2 billion. On a percentage basis, the Medallion fund suffered worse, losing a whopping 17 percent in the same period, which translated to a loss of roughly $1 billion. Goldman's Global Alpha was down nearly 16 percent in August, a loss of about $1.5 billion. AQR and PDT lost about $500 million and $300 million, respectively, on August 8, 2007.

This timeline is adapted from Scott Patterson's excellent book The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It (New York: Crown Publishing, 2010).

Why do some people trade equities using highly quantitative models? The short answer is “Because these models afford benefits that are not available with less quantitative, more traditional approaches.” First, quantitative models are rule-based, which means they can be backtested using historical data. Backtesting allows us to investigate the model's performance, and by implication the ideas that motivate the model, in a reproducible and, therefore, more scientific way. This has considerable appeal over more traditional methods that tend to be much more ad hoc. Second, the quantitative approach allows us to explicitly incorporate risk modeling into the backtesting regime, and this, in turn, can lead to better risk-adjusted returns. The result is that quantitatively driven portfolios often have much lower volatilities than traditional portfolios. Third, computerized models can evaluate thousands of securities and discover market mispricing that human traders are likely to miss, thus enriching trading opportunities. Fourth, quantitative models are more disciplined, so they significantly reduce the trading mistakes that often accompany bursts of human emotions such as greed and fear. Indeed, emotion-based trading by non-quantitative traders often causes behavioral anomalies in the markets that can be exploited readily by quantitative models. Lastly, quantitative models can trade more cheaply and more efficiently because of the inherent economies of scale and the lower risk of human error. Compared with more traditional investment styles, quantitative trading offers investors products with moderate, but stable, returns that often have low correlation to the equity market. This low correlation is itself a diversification benefit, even for investors who shift only part of their allocation away from more traditional equity approaches.

As already noted, the seed for quantitative equity trading lies in advances in financial theory, especially modern portfolio theory (MPT). Modern portfolio theory began with the work of Harry Markowitz who, in the early 1950s, developed the foundations of optimal portfolio selection in a mean-variance context. Markowitz's (1952) work attracted little attention at first. But over time it led to broad advances in academic research, inspiring later developments such as the capital asset pricing model (CAPM), other asset pricing models, and optimal execution strategies. Before MPT, the decision to include a particular security in a portfolio was made either on speculation or some, often crude, fundamental analysis of the firm based on the firm's financial statements and its dividend policy. Markowitz's breakthrough was his insight that the value of a security to an investor might best be evaluated by its expected future returns, its risk, and its correlation to other securities in the portfolio. Assets’ expected returns are directly related to certain components of their risk. Given a group of stocks’ expected returns and their full covariance matrix, Markowitz showed that one can use mathematics, specifically mean-variance optimization, to select a portfolio with the highest possible expected return for a target level of risk. Or, for a target future return, one can select a portfolio with the lowest possible risk. Investing is essentially a careful balancing act between risk and expected return. These principles are also the theoretical foundation for quantitative equity trading.
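
In the notation now standard for this problem (the symbols below are introduced here for illustration and do not appear in the original text), with portfolio weights w, expected returns mu, and covariance matrix Sigma, Markowitz's selection rule can be written as

\[
\max_{w}\; w^{\top}\mu
\quad\text{subject to}\quad
w^{\top}\Sigma\, w \le \sigma_{\mathrm{target}}^{2},
\qquad w^{\top}\mathbf{1} = 1,
\]

with the dual formulation minimizing \(w^{\top}\Sigma\, w\) for a target expected return \(w^{\top}\mu\).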

STRUCTURE OF QUANTITATIVE EQUITY MODELS

To this day, the general framework for quantitative research is a two-step process: estimation and implementation. Estimation means finding signals that forecast the key statistics identified by Markowitz: expected returns, risks, and transaction costs. Implementation means using those statistics to construct and trade portfolios that optimally balance risk and return.

Estimate Key Statistics

Developing accurate forecasts of key statistics is the first and most critical step in the quantitative investment process. Due to the large number of stocks typically traded, it is impractical for quantitative managers to conduct detailed research on individual stocks to estimate their expected returns, risks, and costs of trading. Instead, quantitative managers heavily rely on statistical models to forecast such metrics.

Forecast Expected Returns

Expected future returns are perhaps the most important statistics to estimate. To estimate expected returns, quantitative managers first identify a set of signals that might be able to forecast future returns. Next, they backtest the performance of each signal both by itself and in combination with other signals. Finally, they blend the effective signals to generate the expected returns.
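
As a minimal sketch of the final blending step, the Python fragment below cross-sectionally z-scores a few signals and combines them with fixed weights. The signal names, the weights, and the toy data are hypothetical illustrations, not signals or parameters from the chapter:

import pandas as pd

def blend_signals(signals: pd.DataFrame, weights: dict) -> pd.Series:
    """Cross-sectionally z-score each signal and combine with fixed weights.

    `signals` has one row per stock and one column per signal; `weights`
    maps signal names to blend weights. Both inputs are hypothetical.
    """
    zscores = (signals - signals.mean()) / signals.std(ddof=0)
    combined = sum(w * zscores[name] for name, w in weights.items())
    return combined.rename("expected_return_score")

# Hypothetical usage: two invented signals on three invented stocks.
raw = pd.DataFrame(
    {"value_signal": [0.8, -0.2, 0.1], "momentum_signal": [0.3, 0.5, -0.4]},
    index=["AAA", "BBB", "CCC"],
)
print(blend_signals(raw, {"value_signal": 0.5, "momentum_signal": 0.5}))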

There are many starting points in the search for signals that cost the modeler very little. Among others, good places to start include academic papers, sell-side research reports, finance and investment books, and trader forums and blogs. Authors of trading ideas often describe their models in great detail and show their backtest results. However, before trading such strategies, one needs to thoroughly test whether these ideas make intuitive sense, are free of data errors and survivorship bias, are sufficiently profitable to cover all transaction costs, and remain profitable across a variety of economic environments and for a sufficiently broad universe of stocks. But developing a successful investment strategy requires more than just implementing other people's ideas. Successful strategies need an edge of their own, which may come from original thinking, improvements on well-known ideas, or a technological advantage.

Quantitative signals are most often classified as either technical or fundamental, depending on the nature of the data used to generate them. Technical strategies try to exploit opportunities in price and volume patterns. Fundamental strategies use company-reported accounting numbers to make investment decisions. Another way to classify signals is by the way they are traded. After identifying repeating patterns in the data, quantitative managers will bet either that the patterns will reverse themselves or that they will continue. The former is often called mean reversion, and models that employ this approach are sometimes called “mean reverters” or “convergence” strategies. Strategies that take the view that a pattern will continue in the same direction are often called “momentum strategies.” These are “trend following” in some sense. Mean reversion and momentum are found in both technical and fundamental data, and this distinction is perhaps a more general way to classify quantitative signals.

Mean Reverter or Convergence Models.

From a historical perspective, mean reversion is possibly the earliest strategy that quantitative equity managers used for trading. In the 1980s, a group of quantitative traders at Morgan Stanley started to use a version of a mean reversion strategy called “pairs trading” to exploit temporary market inefficiencies. They noticed that large block trades would often significantly move the price of a stock, while the price of another stock in the same industry group barely changed. For example, Coca-Cola and Pepsi stocks often move in tandem. If a large buy order on Coca-Cola hits the market, its stock price will increase while Pepsi's price will most likely stay about the same. This temporarily elevates the typical spread between the two stock prices. A mean reversion strategy could benefit by buying Pepsi and selling Coca-Cola simultaneously and waiting for the prices of the two to converge to their more normal state. Since this enlarged spread is caused by a temporary liquidity imbalance in the market, it usually reverts quickly, and a profit is earned in the process. Other examples of pairs trading include dual-listed stocks that are traded in more than one locale. The most famous example of pairs trading, however, is the spread between Royal Dutch and Shell. Prior to 2005, Royal Dutch Petroleum and Shell Transport and Trading were two companies that jointly owned the Royal Dutch/Shell entity. The two companies shared the cash flows generated by their jointly owned entity at a contractually fixed ratio, but the two companies were traded separately on two different exchanges, London and Amsterdam. Therefore, when the prices of these stocks differed due to different liquidity conditions at the two exchanges, pairs trading algorithms would buy the cheaper one and sell the more expensive one to earn a profit when the market corrected.
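
A minimal sketch of the mechanics, assuming daily prices for a pair of related stocks (the 60-day window and the plus/minus two standard deviation entry threshold are illustrative choices, not parameters from the text):

import numpy as np
import pandas as pd

def pairs_zscore(price_a: pd.Series, price_b: pd.Series, window: int = 60) -> pd.Series:
    """Rolling z-score of the log-price spread between two related stocks."""
    spread = np.log(price_a) - np.log(price_b)
    return (spread - spread.rolling(window).mean()) / spread.rolling(window).std()

def pairs_positions(zscore: pd.Series, entry: float = 2.0) -> pd.Series:
    """+1 = buy A / sell B, -1 = sell A / buy B, 0 = stay flat.

    A large positive z-score means stock A looks temporarily rich relative
    to stock B, so the strategy sells A and buys B, expecting convergence.
    """
    signal = np.where(zscore > entry, -1, np.where(zscore < -entry, 1, 0))
    return pd.Series(signal, index=zscore.index)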

Mean reversion is also observed in fundamental data. For example, Fama and French (1992, 1998) found that value stocks—which have high book-to-price, earnings-to-price, and cash flow-to-price ratios—often outperform growth stocks (which have much lower ratios). They found that, over time, there was a tendency for these ratios to revert to the mean ratio for the group as a whole. The mean ratio is the average of the same ratio (such as book-to-price) for a group of companies in the same industry. Thus, one would tend to buy those stocks with high ratios and sell those stocks with low ratios. When prices adjust and the spreads narrow, a profit should be earned. There is no guarantee that mean reversion, if it occurs at all, will happen in any specific period of time, so patience is often necessary.

A key characteristic of mean reversion strategies is that they do not aim to price the stock in absolute terms. Instead, they try to identify stocks’ attractiveness relative to each other, and then form portfolios that go long on the most attractive stocks and short on the least attractive ones. The goal is to capture the relative inefficiency between the long and short positions. For this reason, these strategies are often viewed as subsets of a broader type of strategy called relative value arbitrage. A long–short portfolio often has a negligible beta and therefore minimal exposure to the market. Indeed, some managers carefully weight the components of these long-short portfolios in such a way that the overall beta is exactly zero. Such portfolios are called “beta neutral” or “market neutral.” As you would expect, the returns from market neutral strategies can be completely uncorrelated with market returns.
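
One simple way to achieve the beta-neutral weighting described above is to project the raw long-short weights onto the space orthogonal to the stocks' betas. The sketch below, with invented example numbers, is one such construction, not the method of any particular manager:

import numpy as np

def beta_neutral_weights(weights, betas):
    """Remove the market-beta component of a long-short weight vector.

    Projects the raw weights onto the space orthogonal to the stocks' betas,
    so the resulting portfolio beta (weights dot betas) is exactly zero.
    """
    weights = np.asarray(weights, dtype=float)
    betas = np.asarray(betas, dtype=float)
    return weights - (weights @ betas) / (betas @ betas) * betas

# Hypothetical example: two longs and two shorts with invented betas.
w = beta_neutral_weights([0.5, 0.5, -0.5, -0.5], [1.2, 0.9, 1.0, 0.8])
print(w, w @ np.array([1.2, 0.9, 1.0, 0.8]))  # the second value is ~0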

The advantage of relative value arbitrage is that it avoids the difficult task of determining the true values of stocks. Theoretical equity pricing models, such as dividend discount models, require predictions of a company's future earnings, payout ratios, interest rates, and so forth. All these statistics are time-varying and thus are very difficult to estimate. Further, small estimation errors often lead to significant changes in the final values obtained. Relative value arbitrage attempts to resolve this difficulty by comparing the prices of securities that have similar characteristics. By the law of one price, regardless of whether the market overestimates or underestimates the general level of stock prices, the large spreads between similar stocks are likely to diminish as prices converge.

Why does mean reversion exist in the equity market? Over short time spans, as illustrated in the earlier block trading example, price distortions could be caused by the fact that buyers and sellers come to the market place at random times, thus causing temporary supply and demand imbalances. Market forces gradually correct these aberrations. Over longer time spans, distortions may occur because one company gains a technological edge over its industry competitors. Given time, competitive pressures force the other companies to innovate as well. This can easily explain, for example, why value companies tend to catch up over time to growth companies. Even if growth companies continue to do well, they may acquire value companies, and thus the value spread will still be reduced.

In recent years, behavioral economists have suggested that price distortions may be caused by human traders’ overreaction to events. In many ways this theory is at odds with the efficient market theory, which maintains that, collectively, human beings always respond appropriately to events. In the decades after it was introduced in the 1970s, the efficient market theory became a cornerstone of academic thought on the subject of market pricing. The theory is predicated on the assumption that people behave rationally at all times—if not individually, then collectively. But more and more evidence has been unearthed over the past twenty years to indicate that while rational behavior may be the norm, there are, at times, significant deviations from rational behavior as a direct consequence of our biological evolution and our psychological imprinting. These departures from the assumption of rationality can explain a number of behavioral phenomena that lead to market distortions. The well-documented “herd instinct” is probably the simplest of these behavioral traits that is at odds with rational behavior.

But, even without human behavioral errors, “noise” alone might be sufficient for mean reversion to exist in the equity market. Mean reversion strategies provide liquidity to the marketplace by selling stocks at times when many people want to buy, and by buying stocks at times when many people want to sell. In the short term they can often buy at the bid and sell at the ask, thus capturing the visible bid-ask spread. Over longer horizons they capture the invisible spreads with which markets reward liquidity providers.

Momentum or Trend-Following Models.

In contrast to mean reverter models, momentum models try to detect signals that indicate a price trend in a stock. They then take a position on the side of the trend: long for up-trending stocks and short for down-trending stocks. One rationale behind the approach is that some people have access to information before others. As the information gradually becomes known to more people, the new recipients of the information push the price further. For example, if a stock's price is rising faster than its peers’ prices, it is likely being driven up by traders armed with new bullish information. By positioning themselves on the side of the trend, momentum traders will win as long as the trend remains intact. Critical to such strategies is the ability to recognize when the trend has ended in order to exit the position before the profit dissipates.

Such trending activities are common in human behavior in areas other than finance. In many situations, following other people is not a bad strategy. For example, it is difficult for one person, on his own, to spot a grizzly bear in Yellowstone National Park because the bears are rare and well camouflaged. However, if he is willing to go where he sees other people gather, the chance that he will find a grizzly bear greatly increases.

In the book, The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, the author points out that “under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them” (Surowiecki 2004, XIII). The right circumstances are (1) diversity of opinion; (2) independence of members from one another; (3) decentralization; and (4) a good method for aggregating opinions. In other words, if randomly selected individuals with independent judgments and diverse information sources choose the same action, such actions should be respected and perhaps be followed. If these conditions are not met, it will be the blind leading the blind, and the ditch is but a little way on.

Momentum is widely observed in both technical data and fundamental data. For example, even though price reversion is often observed over very short and very long time spans, in the intermediate term prices often exhibit momentum. Jegadeesh and Titman (1993, 2001) were among the first to document price momentum in the intermediate time frame. They find that momentum strategies that buy stocks with high returns over the previous 3 to 12 months and sell stocks with low returns over the same period perform well over the following 12 months. On the fundamental front, the well-documented post-earnings-announcement drift is an example of the momentum phenomenon. Research has shown that stocks that announce earnings that beat expectations continue to outperform stocks that miss expectations. Expected earnings also exhibit momentum when a leading analyst revises a company's earnings forecast up or down and other analysts gradually follow suit.
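
A bare-bones version of the price-momentum signal studied by Jegadeesh and Titman might look like the sketch below. The 252-trading-day lookback (roughly 12 months) is one illustrative choice within their 3-to-12-month range, and a complete strategy would also specify holding periods and position sizes:

import pandas as pd

def momentum_ranks(prices: pd.DataFrame, lookback_days: int = 252) -> pd.DataFrame:
    """Cross-sectional rank of trailing returns (1 = weakest, N = strongest).

    `prices` holds daily closing prices, one column per stock. A real model
    would also define the holding period and the long/short cutoffs.
    """
    trailing_return = prices / prices.shift(lookback_days) - 1.0
    return trailing_return.rank(axis=1)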

Momentum models try to capture the persistence of local trends while reversal models seek to identify the inversion of local trends, such as price reversals. The two coexist in both technical and fundamental data, for different stocks and over different time horizons. Together, momentum and reversal models are the most widely used modeling techniques. They also perform differently in different market conditions. Mean reversion works well under normal market conditions, when “what goes up will come down.” Momentum strategies work best when markets experience large up or down trends. To some degree, mean reversion is similar to valuation-based strategies, while momentum relates more to human psychology. A successful quantitative strategy needs both flavors in order to survive all market conditions.

Forecast Risk

Risk is the second key statistic suggested by modern portfolio theory. In Markowitz's mean-variance framework, the risk associated with an equity portfolio is measured as the variance of the portfolio's return. This, in turn, is derived from the variances of the individual stocks’ returns and the covariances of the returns among the different stocks included in the portfolio. (Note that the same results can be obtained using correlations rather than covariances.) The variance of a return gauges the range and likelihood of possible values that the return can assume. A small variance indicates a narrow potential range and therefore lower risk. A large variance indicates a broader potential range and therefore greater risk. Covariance measures the co-movements of returns among stocks. Asset return covariance matrices are key inputs to portfolio optimization algorithms used for asset allocation and active portfolio management.
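
In symbols (introduced here for clarity; the chapter states this relationship in words), the variance of a portfolio with weights \(w_i\) is

\[
\sigma_{p}^{2}
= \sum_{i=1}^{N}\sum_{j=1}^{N} w_i\, w_j\, \sigma_{ij}
= w^{\top}\Sigma\, w,
\]

where \(\sigma_{ii}\) is the variance of stock \(i\)'s return, \(\sigma_{ij}\) is the covariance between the returns of stocks \(i\) and \(j\), and \(\Sigma\) is the full covariance matrix.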

In practice, quantitative managers rarely estimate the full covariance matrix directly because the number of individual elements is too large to be estimated precisely. Factor models have become pervasive in risk modeling because they offer a parsimonious way to estimate risk without a large and unreliable security covariance matrix. A factor model decomposes an asset's return into factors common to all assets and an asset-specific factor. The common factors are interpreted as the systematic risk components, and the factor model quantifies an asset's sensitivities to these risk factors.
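
In the standard textbook notation (not taken from the chapter itself), a linear factor model and the covariance matrix it implies are

\[
r_i = \alpha_i + \sum_{k=1}^{K}\beta_{ik} f_k + \varepsilon_i,
\qquad
\Sigma = B\,F\,B^{\top} + D,
\]

where \(B\) holds the factor exposures \(\beta_{ik}\), \(F\) is the small \(K \times K\) factor covariance matrix, and \(D\) is the diagonal matrix of asset-specific variances. Estimating \(K(K+1)/2\) factor covariances plus \(N\) specific variances is far more tractable than estimating all \(N(N+1)/2\) elements of \(\Sigma\) directly.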

The first, and still the most famous, factor model is the capital asset pricing model (CAPM), which was developed by financial researchers extending Markowitz's mean-variance portfolio theory. One of its creators, William Sharpe, was a student of Markowitz. They later shared the 1990 Nobel Prize in Economic Sciences for their contributions.1

CAPM answers the question left open by modern portfolio theory: How does one measure an asset's expected return and risk? The model demonstrates that, in equilibrium under the assumptions of modern portfolio theory, the expected excess return of an asset equals its sensitivity to market risk times the market's expected excess return. Excess returns are defined as returns in excess of the risk-free return. This sensitivity to the market is called a stock's beta, and it cannot be eliminated through diversification. The risk associated with a single asset is then the sum of its non-diversifiable market risk and its specific risk. Further, the covariance between two assets is the product of their betas times the variance of the market return. Thus, CAPM was the first single-factor model capable of measuring both return and risk.
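
Written out (standard notation, introduced here), the statements in this paragraph are

\[
E[R_i] - R_f = \beta_i\,\bigl(E[R_m] - R_f\bigr),
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)},
\]
\[
\operatorname{Var}(R_i) = \beta_i^{2}\sigma_m^{2} + \sigma_{\varepsilon_i}^{2},
\qquad
\operatorname{Cov}(R_i, R_j) = \beta_i\,\beta_j\,\sigma_m^{2},
\]

where \(R_f\) is the risk-free return, \(R_m\) the market return, \(\sigma_m^{2}\) the variance of the market return, and \(\sigma_{\varepsilon_i}^{2}\) the stock's specific variance.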

Today's risk models may all be viewed as extensions of the CAPM model. However, instead of using market return as the single explanatory factor, most practical risk models use multiple risk factors and measure an asset's exposures to those risks. For instance, it is logical to assume that the risk associated with a stock would also be influenced by the risk of the sector it operates in, its leverage, and its sensitivity to interest rates, and so on. Depending on the source of the risk factors, multifactor risk models are of three main types: (1) macroeconomic risk models; (2) fundamental risk models; and (3) statistical risk models.

Macroeconomic risk models use observable economic time series, such as interest rates and inflation, as measures of pervasive or common risk factors contributing to asset returns. CAPM is a special case of such a model. Another famous macroeconomic model was developed by Chen, Roll, and Ross (1986). They found that factors such as surprises in inflation, surprises in GNP, surprises in investor confidence as measured by the corporate bond risk premium, and shifts in the yield curve work well in explaining stock risks. In contrast to macroeconomic models, fundamental factor models use observable firm- or asset-specific attributes such as firm size, earnings yield, and industry classification to determine common factors in asset returns. An example of commercially available models of this type is BARRA.2 Statistical risk models treat the common factors as unobservable or latent factors, and are estimated using statistical methods such as factor analysis and principal components analysis.
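
As a concrete, hedged sketch of the statistical approach, the fragment below builds a covariance matrix from the top principal components of historical returns plus a diagonal of specific variances. The choice of five factors and the use of plain principal components analysis are illustrative assumptions, not a description of any commercial model:

import numpy as np

def pca_risk_model(returns: np.ndarray, n_factors: int = 5):
    """Covariance estimate from the top principal components of past returns.

    `returns` is a T x N matrix of stock returns. The top components serve as
    latent risk factors; the rebuilt covariance is the systematic part plus a
    diagonal of specific variances.
    """
    demeaned = returns - returns.mean(axis=0)
    sample_cov = np.cov(demeaned, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(sample_cov)        # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:n_factors]          # indices of top factors
    loadings = eigvecs[:, top] * np.sqrt(eigvals[top])   # N x K factor loadings
    systematic = loadings @ loadings.T
    specific = np.clip(np.diag(sample_cov) - np.diag(systematic), 0.0, None)
    return systematic + np.diag(specific), loadings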

Fundamental and macroeconomic risk models have the advantage that their risk factors are easy to understand and are less subject to excessive data mining and spurious price patterns. Their disadvantages include potentially correlated risk factors and slow reaction to changing market conditions. For example, volatility and company leverage are often used as explanatory variables in fundamental risk models. Financial theory shows that companies with high leverage have high volatility; therefore, these two risk factors are correlated with each other. They are not completely redundant, however, because a stock's volatility also depends on the nature of the company's business and on firm-specific news. Unfortunately, using correlated risk factors to estimate risk exposures leads to high estimation errors.

Statistical risk models have the advantage that they are easy to implement and have good statistical properties, such as uncorrelated risk factors. If done carefully, they are more adaptive to market conditions and thus more likely to capture risk factors that fundamental models may miss. On the downside, they operate like a black box, and it is hard to interpret the practical meaning of the risk factors. They are also more susceptible to excessive data mining and to data errors.

Forecast Transaction Costs

Transactions incur both explicit and implicit costs. Explicit costs include commissions and infrastructure charges. Implicit costs are often called market impact (or slippage), which is the price concession traders must pay to liquidity providers who accommodate their trades—particularly when those trades are large.

Explicit costs are easy to measure and tend to be relatively small, because quantitative managers often use only their brokers’ infrastructure to reach the market; once brokers have built that infrastructure, the marginal cost of facilitating additional trades is minimal. Market impact is often measured as “implementation shortfall,” which is the difference between the price that triggers the trading signal and the average execution price of the entire order. This is often described as measuring trading costs relative to an arrival price benchmark. Forecasting market impact is more difficult because researchers observe prices only for completed trades. They cannot determine what a stock's price would have been without these trades. In other words, they cannot step in the same river twice. Further, market impact depends on the way orders are executed. The faster the orders are executed, the larger the market impact will tend to be, and vice versa. In practice, market impact costs are much larger than explicit costs. According to Investment Technology Group (ITG), in 2009 the average commission-related explicit cost for U.S. stocks was 9 basis points (bps), while the average market impact cost was 48 bps, more than five times the explicit cost.
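
For a single order, implementation shortfall as described here is commonly expressed (notation introduced for illustration) as

\[
\mathrm{IS}\;(\text{bps}) = s \cdot \frac{\bar{P}_{\mathrm{exec}} - P_{\mathrm{arrival}}}{P_{\mathrm{arrival}}} \times 10^{4},
\]

where \(P_{\mathrm{arrival}}\) is the price prevailing when the signal triggered the order, \(\bar{P}_{\mathrm{exec}}\) is the average execution price, and \(s = +1\) for buys and \(-1\) for sells, so that a positive number always represents a cost.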

Transaction cost estimation is an often overlooked, but very important, subject in quantitative trading. This is especially true for institutional managers, who manage much larger portfolios than individual investors. To see how important transaction costs are, Coppejans and Madhavan (2007) show that, assuming moderate transaction costs of 40 basis points and 200 percent annual turnover, a typical fund's information ratio3 is halved when transaction costs are taken into account. The authors also show that a strategy's information ratio is partially determined by the correlation between predicted and realized costs, which underscores the importance of transaction cost modeling to a strategy's realized performance.

Most transaction cost models split market impact into two components: temporary impact and permanent impact. Temporary impact is caused by orders that take liquidity out of the order book but carry no fundamental news that alters the market's long-term view. In this case, a buy order will temporarily increase a stock's price, and a sell order will temporarily decrease its price, but the disturbance is short-lived, and the market will revert to its original state quickly. Permanent impact, on the other hand, occurs when an order's private information is leaked to the market via the act of trading, and thus changes the market's long-term view of the stock. It is closely related to academic research on strategic trader models, which study how informed traders hide behind the flow of “noise” traders, and how market makers infer the informational content of trades from order flow. By definition, permanent impact is nontransient, and it affects subsequent executions and valuations.

A typical transaction cost model uses inputs such as relative order size, market capitalization, stock volatility, and spreads to estimate transaction costs. Many quantitative managers estimate transaction costs using the experience of their own trades. This approach is ideal because different trading styles have different market impact patterns. For example, mean-reversion strategies provide liquidity and thus have less market impact than momentum strategies, which take liquidity. Strategies that trade large cap stocks have lower trading costs than those that trade small cap stocks, because large cap stocks are less volatile and their average daily volumes are higher. For these reasons, it is preferable to use the trader's own trades when developing proprietary transaction cost models.
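
A toy pre-trade estimate along these lines might combine half the quoted spread with a participation-based impact term. The square-root functional form and the coefficient below are common modeling assumptions made here for illustration, not the chapter's model:

def estimated_cost_bps(order_shares, adv_shares, daily_vol_bps, spread_bps,
                       impact_coeff=0.3):
    """Illustrative pre-trade cost estimate in basis points.

    Half the quoted spread plus an impact term that grows with volatility and
    with the square root of the order's share of average daily volume (ADV).
    The functional form and the coefficient are illustrative assumptions.
    """
    participation = order_shares / adv_shares
    return 0.5 * spread_bps + impact_coeff * daily_vol_bps * participation ** 0.5

# Hypothetical order: 5 percent of ADV, 150 bps daily volatility, 10 bps spread.
print(estimated_cost_bps(50_000, 1_000_000, 150, 10))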

Implementation

Implementation is the process of translating key statistics into investment profits. It includes portfolio construction and trade execution. Portfolio construction takes the key statistics, namely expected returns, covariance estimates, and transaction cost estimates, along with the current portfolio, as inputs, and generates a target portfolio driven by the investor's objective function. Trade execution is the process of moving the current portfolio to the target portfolio. Both portfolio construction and trade execution involve a careful balancing of risk and return. The trade-off between risk and return is the central feature of both academic and practitioner finance. Investment managers need to measure risks, model the relationship between risk and return, and decide which risks to take and how much of them to take.

In practice, portfolio construction can range anywhere from simple and straightforward to mathematically complicated and computationally intensive. An example of a simple portfolio construction methodology is stratification. In this approach, a few mutually exclusive and collectively exhaustive risk factors, such as size and industry group, are identified. Then stocks with similar risk profiles are grouped together. For example, large cap energy stocks will be in one group, and small cap retailers will be in another. Within each group, stocks are sorted by their signals. Stocks ranked the highest in each group are bought long and stocks ranked the lowest in each group are sold short. Despite its simplicity, stratification allows a strategy to concentrate on capturing the mispricings indicated by its signals while eliminating exposure to outside risks, such as sector risk and capitalization bias, by taking both long and short positions within the same risk bucket. The disadvantages of stratification are also readily apparent. For example, it does not allow managers to explicitly control for trading costs, and it ignores the magnitude of the signals.
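
A minimal sketch of stratification, assuming a table with hypothetical "size_group", "industry", and "signal" columns (the column names and the top/bottom-two rule are invented for illustration):

import pandas as pd

def stratified_positions(data: pd.DataFrame, top_n: int = 2) -> pd.Series:
    """Within each (size_group, industry) bucket, long the highest-signal
    stocks and short the lowest-signal stocks; returns +1, -1, or 0."""
    positions = pd.Series(0, index=data.index)
    for _, bucket in data.groupby(["size_group", "industry"]):
        if len(bucket) < 2 * top_n:            # skip buckets too small to pair
            continue
        ranked = bucket["signal"].sort_values()
        positions[ranked.index[:top_n]] = -1   # lowest-ranked: short
        positions[ranked.index[-top_n:]] = 1   # highest-ranked: long
    return positions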

A more general way for quantitative managers to construct a portfolio is portfolio optimization, the classic framework pioneered by Markowitz. Portfolio optimization uses computer algorithms to find a set of optimal weights that maximize a portfolio's expected future return, after transaction costs, for a target risk level. Unlike stratification, portfolio optimization takes into account all the available information, such as the magnitude of the signals and the risk estimates, and it is convenient to incorporate trading cost controls and other investment constraints into the optimization.
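
A bare-bones version of such an optimization, using a generic risk-aversion penalty and a linear transaction-cost term, is sketched below. The coefficients, the fully invested constraint, and the use of scipy's general-purpose solver are illustrative assumptions, not a production setup:

import numpy as np
from scipy.optimize import minimize

def optimize_portfolio(mu, sigma, w_current, risk_aversion=10.0, cost_per_unit=0.001):
    """Maximize expected return minus a risk penalty and a linear cost on the
    trade away from the current portfolio, with weights summing to one."""
    mu, sigma, w_current = (np.asarray(x, dtype=float) for x in (mu, sigma, w_current))

    def objective(w):
        expected_return = w @ mu
        risk_penalty = risk_aversion * (w @ sigma @ w)
        trade_cost = cost_per_unit * np.abs(w - w_current).sum()
        return -(expected_return - risk_penalty - trade_cost)  # minimize the negative

    fully_invested = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    result = minimize(objective, w_current, constraints=[fully_invested])
    return result.x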

In general, portfolio optimization works better than simpler methods partly because it uses more information. However, more information is a double-edged sword. If not used appropriately, it may hurt the performance of an optimizer. For example, researchers have long found that the original mean-variance optimizer is very sensitive to estimation errors in returns and risks, which are almost unavoidable in practice. As a result, unconstrained mean-variance optimized portfolios are often dominated by the equal-weighted alternative.

There are several ways to mitigate these problems. The simplest ad hoc solution is to incorporate constraints. Constraints limit the maximum weight assigned to any single stock and force the optimizer to spread weights across more stocks. The second method is portfolio resampling, which uses Monte Carlo simulation to resample the data and create a mean-variance optimized portfolio for each sample. The final weights are the average weights across all simulations, which are usually more stable than those of the plain vanilla mean-variance optimizer that uses only one realization of the data history. The third approach is to use Bayesian theory. The Black-Litterman (1992) model is the best known in this category. It starts with the market capitalization equilibrium portfolio as a prior, then uses Bayesian techniques to adjust the portfolio to reflect the investor's signals in proportion to their informational content. Since the market capitalization equilibrium portfolio is the benchmark portfolio and acts as a center of gravity, the resulting portfolio weights are more robust to estimation errors.

Transaction costs link the two implementation components together. To reduce transaction costs, quantitative managers should reduce unnecessary turnover in the portfolio construction stage and trade smarter in the trade execution stage. In the portfolio construction stage, turnover can be either specified in the objective function or imposed as a constraint. Most commercial optimizers can handle an overall turnover limit, and some can handle sector-specific turnover limits.

Trade execution is, itself, essentially an optimization problem. In principle, traders want to trade as quickly as possible after they acquire new information so that they can profit from it before anyone else has it. However, they cannot trade too quickly as orders executed over a short period of time will have a greater market impact cost. Orders executed in multiple smaller lots but over a longer period of time may have a lesser market impact, and therefore smaller expected cost, but are more risky, since the asset's price can vary greatly over longer periods of time and the trader's information may become stale. Therefore, to trade a list of stocks efficiently, investors must strike the right balance between trading costs and execution risk. The tradeoff here is quite similar to the risk/return tradeoff in modern portfolio theory.

Two popular benchmarks for trade execution are the previous day's closing price and the current trade date's opening price. Actual trade prices are compared against the benchmark prices, and the difference is the implementation shortfall, which captures the variable part of transaction costs. Practical execution algorithms follow the mean-variance approach to minimize the expected value of implementation shortfall for a given variance of implementation shortfall.
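
Minimizing expected shortfall for a given shortfall variance can be written equivalently in Lagrangian form (notation introduced here), with \(\lambda\) reflecting the trader's urgency:

\[
\min_{\text{execution schedule}}\;
\mathbb{E}[\mathrm{IS}] + \lambda\,\operatorname{Var}[\mathrm{IS}].
\]

A patient trader uses a small \(\lambda\) and accepts more price risk in exchange for lower expected impact; an urgent or risk-averse trader does the opposite.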

Among passive investors, another popular benchmark is the “volume weighted average price,” commonly called the VWAP. In this execution approach, traders split their orders over the target trading horizon in proportion to market volume over the same horizon, aiming to achieve the average execution price. The VWAP strategy is easy to implement and requires little mathematical sophistication. However, its treatment of market impact is very crude, and it ignores the importance of opportunity costs. Further, it is only suitable for orders that are relatively small compared with total trading volume over the trading period. For larger orders, the trades themselves will distort the benchmark.
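
A minimal sketch of the scheduling step, assuming a forecast of the intraday volume profile is already available (the profile and bin counts below are invented for illustration):

import numpy as np

def vwap_schedule(total_shares: int, expected_volume_profile) -> np.ndarray:
    """Split a parent order across intraday bins in proportion to expected volume.

    Rounding can leave a small remainder, which is added to the last bin.
    """
    profile = np.asarray(expected_volume_profile, dtype=float)
    slices = np.floor(total_shares * profile / profile.sum()).astype(int)
    slices[-1] += total_shares - slices.sum()
    return slices

# Hypothetical U-shaped volume forecast over six half-hour bins.
print(vwap_schedule(10_000, [3, 2, 1, 1, 2, 4]))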

Due to the large number of stocks and the frequency with which quantitative managers trade, most of their orders are executed electronically. Besides traditional exchanges, quantitative traders increasingly use alternative trading venues such as electronic communication networks (ECNs) through some form of direct market access (DMA) provided by their brokers. An ECN is a computer system that facilitates the trading of financial products outside of stock exchanges. Since the Securities and Exchange Commission first authorized their creation in 1998, ECNs have become increasingly popular because of their liquidity and automated direct matching of buyers and sellers. ECNs provide traders more anonymity and more control over their order flow. They tend to be better for traders who are not in a hurry, since ECNs typically pay rebates to liquidity providers and charge fees to liquidity takers in order to fund those rebates.

Most of the published research on trade execution is in the area of market microstructure, which studies the detailed mechanisms of how markets work and how prices are formed. Interest in microstructure and trading is not new, but the market crash in October 1987 spurred vast new interest in this area. Recent literature is characterized by more theoretical rigor and extensive empirical investigation using new databases. It remains an active field of financial research.

OUTLOOK

Building a successful trading strategy is not an easy task. Numerous studies, for example, Malkiel (1990), have shown that the majority of professional money managers have been unable to beat the market on a risk-adjusted basis. Does that imply that the market is always efficient and that there is no point in looking for trading opportunities? Perhaps not. Grossman (1976) and Grossman and Stiglitz (1980) convincingly demonstrate that perfectly efficient markets are an impossibility: if markets were perfectly efficient, the return to gathering information would be zero, in which case there would be little reason to trade and, consequently, markets would have no reason to exist. A more practical version of an efficient market, as suggested by Lo and MacKinlay (1999), is one that is efficient on average and over time but offers occasional excess profit opportunities. However, it is not possible to earn such profits consistently without some type of competitive advantage.

In his article “What Does It Take to Win the Trading Game?” Jack Treynor (1981) argues that there are two ways traders can make profits in the stock market: Either the trader uses superior information, or the trader applies superior reasoning to existing information. These are the general guidelines to find good trading opportunities for both quantitative and more traditional managers.

For quantitative managers, superior information comes either from the possession of new data sources or from quicker access to useful data than others have. Historically, before standardized fundamental data became commercially available, traders who acquired such data through their own research had a competitive advantage over those who did not. Quicker access to data has been one key to market success throughout stock market history. In 1815, for example, the London banker Nathan Rothschild made a huge profit in the stock market because he received advance news of the outcome of the battle of Waterloo. Today many quantitative funds, especially high frequency trading funds, invest heavily in technology in order to acquire information and trade on it fractions of a second before other market participants.

Superior analysis is the other key ingredient in successful quantitative investment. In “How I Helped to Make Fischer Black Wealthier,” Jay Ritter (1996), a professor and former futures trader, described how Fischer Black profited at his expense by correctly pricing Value Line futures contracts. In that trade, both parties were well versed in financial theory, but Black conducted the superior analysis by noticing that the Value Line Index was a geometric average rather than an arithmetic average, and thus should be priced differently from the textbook model.

Superior analysis can be conducted throughout the process of quantitative investment as discussed above, and this is what many quantitative researchers strive to achieve. For example, in an effort to forecast expected returns, some quantitative researchers try to apply chaos theory and neural network models to handle the nonlinear patterns in data. In risk modeling, many develop new methods to estimate covariance matrices using tick-by-tick data. In portfolio and trade optimization, the original Markowitz normality assumptions are often relaxed, and more robust optimization techniques are applied.

The quantitative trading business, with quantitative models as its products, is just like any other business. In the long run, no company can survive with just one magic product that works under all market conditions at all times. Likewise, quantitative funds need to keep improving existing models, inventing new models, and enhancing their technology in order to keep pace with the market and with their competitors. Any quantitative fund can be rendered obsolete by a failure to keep up with the competition or by bad management, and, of course, these things are no less true of companies in other industries. But the quantitative trading business will likely endure because there will always be those who innovate.

NOTES

1. See Sharpe (1964).

2. The BARRA Integrated Model is a multiasset class model for forecasting the asset and portfolio level risk of global equities, bonds, and currencies. The model is now owned by a subsidiary of Morgan Stanley.

3. The information ratio is one of the measures of risk-adjusted return. It is defined as the ratio of the portfolio's active return (i.e., alpha) to the portfolio's tracking error, where tracking error is the standard deviation of the active return.

REFERENCES

Black, F., and R. Litterman. 1992. “Global Portfolio Optimization.” Financial Analysts Journal 48 (September/October): 28–43.

Chen, Nai-Fu, Richard Roll, and Stephen Ross. 1986. “Economic Forces and the Stock Market.” Journal of Business 59:3, 383–403.

Coppejans, M., and A. Madhavan. 2007. “The Value of Transaction Cost Forecasts: Another Source of Alpha.” Journal of Investment Management 5:165–78.

Fama, E., and K. French. 1992. “The Cross-Section of Expected Stock Returns.” Journal of Finance 47:2, 427–465.

Fama, E., and K. French. 1998. “Value versus Growth: The International Evidence.” Journal of Finance 53:6, 1975–1991.

Grossman, S. 1976. “On the Efficiency of Competitive Stock Markets where Trades Have Diverse Information.” Journal of Finance 31:1, 573–585.

Grossman, S., and J. Stiglitz. 1980. “On the Impossibility of Informationally Efficient Markets.” American Economic Review 70:393–408.

Investment Technology Group. 2010. “Global Trading Cost Review.” www.itg.com/news_events/papers/ITGGlobalTradingCostReview_2009Q4.pdf.

Jegadeesh, N., and S. Titman. 1993. “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency.” Journal of Finance 48:1, 65–91.

Jegadeesh, N., and S. Titman. 2001. “Profitability of Momentum Strategies: An Evaluation of Alternative Explanations.” Journal of Finance 56:2, 699–720.

Lo, A. and C. MacKinlay. 1999. A Non-Random Walk Down Wall Street. Princeton, NJ: Princeton University Press.

Markowitz, H. M. 1952. “Portfolio Selection.” Journal of Finance 7:1, 77–91.

Malkiel, B. 1990. A Random Walk Down Wall Street. New York: W. W. Norton & Company.

Patterson, Scott. 2010. The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It. New York: Crown Publishing.

Ritter, J. R. 1996. “How I Helped to Make Fischer Black Wealthier.” Financial Management 25:4, 104–107.

Sharpe, W. F. 1964. “Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk.” Journal of Finance 19:3, 425–442.

Surowiecki, James. 2004. The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. New York: Doubleday.

Treynor, J. L. 1981. “What Does It Take to Win the Trading Game?” Financial Analysts Journal 37:1, 55–60.

ABOUT THE AUTHOR

Kun Gao recently joined Tudor Investment Corporation (“Tudor”) as a Research Portfolio Manager in the firm's quantitative trading group, where he researches and develops portfolio management systems. Tudor is part of the Tudor Group, a group of affiliated companies, headquartered in Greenwich, Connecticut, that trade in the fixed income, equity, currency, and commodity markets. Prior to joining Tudor, Kun was a portfolio manager at WorldQuant LLC, a quantitative hedge fund based in Old Greenwich, Connecticut. He also gained extensive trading and research experience in quantitative equity strategies at Morgan Stanley and Caxton. He received a Ph.D. in statistics from Yale University.
