Quantitative Equity Portfolio Management

ANDREW ALFORD, PhD

Managing Director, Quantitative Investment Strategies, Goldman Sachs Asset Management

ROBERT JONES, CFA

Chairman, Arwen Advisors and Chairman and CIO, System Two Advisors

TERENCE LIM, PhD, CFA

CEO, Arwen Advisors


Abstract: Equity portfolio management has evolved considerably since the 1950s. Portfolio theories and asset pricing models, in conjunction with new data sources and powerful computers, have revolutionized the way investors select stocks and create portfolios. Consequently, what was once mostly an art is increasingly becoming a science: Loose rules of thumb are being replaced by rigorous research and complex implementation. While greatly expanding the frontiers of finance, these advances have not necessarily made it any easier for portfolio managers to outperform the market. The two approaches to equity portfolio management are the traditional approach and the quantitative approach. Although their advocates often contrast them sharply, the two approaches share many traits: Both apply economic reasoning to identify a small set of key drivers of equity values, use observable data to quantify these drivers, rely on expert judgment to map the drivers into the final stock-selection decision, and evaluate their performance over time. Where the two approaches differ is in how they perform these tasks.

Equity portfolio management has evolved considerably since Benjamin Graham and David Dodd published their classic text on security analysis in 1934 (Graham and Dodd, 1934). For one, the types of stocks available for investment have shifted dramatically, from companies with mostly physical assets (such as railroads and utilities) to companies with mostly intangible assets (such as technology stocks and pharmaceuticals). Moreover, theories such as modern portfolio theory and the capital asset pricing model, in conjunction with new data sources and powerful computers, have revolutionized the way investors select stocks and create portfolios. Consequently, what was once mostly an art is increasingly becoming a science: Loose rules of thumb are being replaced by rigorous research and complex implementation.

Of course, these new advances, while greatly expanding the frontiers of finance, have not necessarily made it any easier for portfolio managers to beat the market. In fact, the increasing sophistication of the average investor has probably made it more difficult to find—and exploit—pricing errors. Several studies show that a majority of professional money managers have been unable to beat the market (see, for example, Malkiel, 1995). There are no sure bets, and mispricings, when they occur, are rarely both large and long lasting. Successful managers must therefore constantly work to improve their existing strategies and to develop new ones. Understanding fully the equity management process is essential to accomplishing this challenging task.

These new advances, unfortunately, have also allowed some market participants to stray from a sound investment approach. It is now easier than ever for portfolio managers to use biased, unfamiliar, or incorrect data in a flawed strategy, one developed from untested conjecture or haphazard trial and error. Investors, too, must be careful not to let the abundance of data and high-tech techniques distract them when allocating assets and selecting managers. In particular, investors should not allow popular but narrow rankings of short-term performance to obscure important differences in portfolio managers’ style exposure or investment process. To avoid these pitfalls, it helps to have a solid grasp of the constantly advancing science of equity investing.

This entry provides an overview of equity portfolio management aimed at current and potential investors, analysts, investment consultants, and portfolio managers. We begin with a discussion of the two major approaches to equity portfolio management: the traditional approach and the quantitative approach. The remaining sections of the entry are organized around four major steps in the investment process: (1) forecasting the unknown quantities needed to manage equity portfolios—returns, risks, and transaction costs; (2) constructing portfolios that maximize expected risk-adjusted return net of transaction costs; (3) trading stocks efficiently; and (4) evaluating results and updating the process.

These four steps should be closely integrated: The return, risk, and transaction cost forecasts, the approach used to construct portfolios, the way stocks are traded, and performance evaluation should all be consistent with one another. A process that produces highly variable, fast-moving return forecasts, for example, should be matched with short-term risk forecasts, relatively high transaction costs, frequent rebalancing, aggressive trading, and short-horizon performance evaluation. In contrast, stable, slower-moving return forecasts can be combined with longer term risk forecasts, lower expected transaction costs, less frequent rebalancing, more patient trading, and longer-term evaluation. Mixing and matching incompatible approaches to each part of the investment process can greatly reduce a manager’s ability to reap the full rewards of an investment strategy.

A well-structured investment process should also be supported by sound economic logic, diverse information sources, and careful empirical analysis that together produce reliable forecasts and effective implementation. And, of course, a successful investment process should be easy to explain; marketing professionals, consultants, and investors all need to understand a manager’s process before they will invest in it.

TRADITIONAL AND QUANTITATIVE APPROACHES TO EQUITY PORTFOLIO MANAGEMENT

At one level, there are as many ways to manage portfolios as there are portfolio managers. After all, developing a unique and innovative investment process is one of the ways managers distinguish themselves from their peers. Nonetheless, at a more general level, there are two basic approaches used by most managers: The traditional approach and the quantitative approach. Although these two approaches are often sharply contrasted by their proponents, they actually share many traits. Both apply economic reasoning to identify a small set of key drivers of equity values; both use observable data to help measure these key drivers; both use expert judgment to develop ways to map these key drivers into the final stock-selection decision; and both evaluate their performance over time. What differs most between traditional and quantitative managers is how they perform these steps.

Traditional managers conduct stock-specific analysis to develop a subjective assessment of each stock’s unique attractiveness. Traditional managers talk with senior management, closely study financial statements and other corporate disclosures, conduct detailed, stock-specific competitive analysis, and usually build spreadsheet models of a company’s financial statements that provide an explicit link between various forecasts of financial metrics and stock prices. The traditional approach involves detailed analysis of a company and is often well equipped to cope with data errors or structural changes at a company (e.g., restructurings or acquisitions). However, because the traditional approach relies heavily on the judgment of analysts, it is subject to potentially severe subjective biases such as selective perception, hindsight bias, stereotyping, and overconfidence that can reduce forecast quality. (For a discussion of the systematic errors in judgment and probability assessment that people frequently make, see Kahneman, Slovic, and Tversky, 1982.) Moreover, the traditional approach is costly to apply, which makes it impracticable for a large investment universe comprising many small stocks. The high cost and subjective nature also make it difficult to evaluate, because it is hard to create the history necessary for testing. Testing an investment process is important because it helps to distinguish factors that are reflected in stock prices from those that are not. Only factors that are not yet impounded in stock prices can be used to identify profitable trading opportunities. Failure to distinguish between these two types of factors can lead to the familiar “good company, bad stock” problem in which even a great company can be a bad investment if the price paid for the stock is too high.

Quantitative managers use statistical models to map a parsimonious set of measurable factors into objective forecasts of each stock’s return, risk, and cost of trading. The quantitative approach formalizes the relation between the key factors and forecasts, which makes the approach transparent and largely free of subjective biases. Quantitative analysis can also be highly cost effective. Although the fixed costs of building a robust quantitative model are high, the marginal costs of applying the model, or extending it to a broader investment universe, are low. Consequently, quantitative portfolio managers can choose from a large universe of stocks, including many small and otherwise neglected stocks that have attractive fundamentals. Finally, because the quantitative approach is model-based, it can be tested historically on a wide cross-section of stocks over diverse economic environments. While quantitative analysis can suffer from specification errors and overfitting, analysts can mitigate these errors by following a well-structured and disciplined research process.

Table 1 Major Advantages of the Traditional and Quantitative Approaches to Equity Portfolio Management

Traditional approach
Depth: Although they have views on fewer companies, traditional managers tend to have more in-depth knowledge of the companies they cover. Unlike a computerized model, they should know when data are misleading or unrepresentative.
Regime shifts: Traditional managers may be better equipped to handle regime shifts and recognize situations where past relationships might not be expected to continue (e.g., where back-tests may be unreliable).
Signal identification: Based on their greater in-depth knowledge, traditional managers can better understand the unique data sources and factors that are important for stocks in different countries or industries.
Qualitative factors: Many important factors that may affect an investment decision are not available in any database and are hard to evaluate quantitatively. Examples might include management and their vision for the company; the value of patents, brands, and other intangible assets; product quality; or the impact of new technology.

Quantitative approach
Universe: Because a computerized model can quickly evaluate thousands of securities and can update those evaluations daily, it can uncover more opportunities. Further, by spreading their risk across many small bets, quantitative managers can add value with only slightly favorable odds.
Discipline: While individuals often base decisions on only the most salient or distinctive factors, a computerized model will simultaneously evaluate all specified factors before reaching a conclusion.
Verification: Before using any signal to evaluate stocks, quantitative managers will normally backtest its historical efficacy and robustness. This provides a framework for weighting the various signals.
Risk management: By its nature, the quantitative approach builds in the notion of statistical risk and can do a better job of controlling unintended risks in the portfolio.
Lower fees: The economies of scale inherent in a quantitative process usually allow quantitative managers to charge lower fees.

On the negative side, quantitative models can be misleading when there are bad data or significant structural changes at a company (that is, “garbage in, garbage out”). For this reason, most quantitative managers like to spread their bets across many names so that the success of any one position will not make or break the strategy. Traditional managers, conversely, prefer to take fewer, larger bets given their detailed hands-on knowledge of the company and the high cost of analysis.

A summary of the major advantages of each approach to equity portfolio management is presented in Table 1. (Dawes, Faust, and Meehl [1989] provide an excellent comparison of clinical (traditional) and actuarial (quantitative) decision analysis.) Our focus in the rest of this entry is the process of quantitative equity portfolio management.

FORECASTING STOCK RETURNS, RISKS, AND TRANSACTION COSTS

Developing good forecasts is the first and perhaps most critical step in the investment process. Without good forecasts, the difficult task of forming superior portfolios becomes nearly impossible. In this section we discuss how to use a quantitative approach to generate forecasts of stock returns, risks, and transaction costs. These forecasts are then used in the portfolio construction step described in the next section.

It should be noted that some portfolio managers do not develop explicit forecasts of returns, risks, and transaction costs. Instead, they map a variety of individual stock characteristics directly into portfolio holdings. However, there are limitations with this abbreviated approach. Because the returns and risks corresponding to the various characteristics are not clearly identified, it is difficult to ensure the weights placed on the characteristics are appropriate. Further, measuring risk at the portfolio level is awkward without reliable estimates of the risks of each stock, especially the correlations between stocks. Similarly, controlling turnover is hard when returns and transaction costs are not expressed in consistent units. And, of course, it is difficult to explain a process that occurs in one magical step.

Forecasting Returns

The process of building a quantitative return-forecasting model can be divided into four closely linked steps: (1) identifying a set of potential return forecasting variables, or signals; (2) testing the effectiveness of each signal, by itself and together with other signals; (3) determining the appropriate weight for each signal in the model; and (4) blending the model’s views with market equilibrium to arrive at reasonable forecasts for expected returns.

Identifying a list of potential signals might seem like an overwhelming task; the candidate pool can seem almost endless. To narrow the list, it is important to start with fundamental relationships and sound economics. Reports published by Wall Street analysts and books about financial statement analysis are both good sources for ideas. Another valuable resource is academic research in finance and accounting. Academics have the incentive and expertise to identify and carefully analyze new and innovative information sources. Academics have studied a large number of stock price anomalies, and Table 2 lists several that have been adopted by investment managers. (For evidence on the performance of several well-known anomalies, see Fama and French [2008].)

Table 2 Selected Stock Price Anomalies Used in Quantitative Models

Growth/Value: Value stocks (high B/P, E/P, CF/P) outperform growth stocks (low B/P, E/P, CF/P).
Post-earnings-announcement drift: Stocks that announce earnings that beat expectations outperform stocks that miss expectations.
Short-term price reversal: One-month losers outperform one-month winners.
Intermediate-term price momentum: Six-month to one-year winners outperform losers.
Earnings quality: Stocks with cash earnings outperform stocks with non-cash earnings.
Stock repurchases: Companies that repurchase shares outperform companies that issue shares.
Analyst earnings estimates and stock recommendations: Changes in analyst stock recommendations and earnings estimates predict subsequent stock returns.

For portfolio managers intent on building a successful investment strategy, it is not enough to simply take the best ideas identified by others and add them to the return-forecasting model. Instead, each potential signal must be thoroughly tested to ensure it works in the context of the manager’s strategy across many stocks and during a variety of economic environments. The real challenge is winnowing the list of potential signals to a parsimonious set of reliable forecasting variables. When selecting a set of signals, it is a good idea to include a variety of variables to capture distinct investment themes, including valuation, momentum, and earnings quality. By diversifying over information sources and variables, there is a good chance that if one signal fails to add value another will be there to carry the load.

When evaluating a signal, it is important to make sure the underlying data used to compute the signal are available and largely error free. Checking selected observations by hand and screening for outliers or other influential observations is a useful way to identify data problems. It is also sometimes necessary to transform a signal—for instance, by subtracting the industry mean or taking the natural logarithm—to improve the “shape” of the distribution. To evaluate a signal properly, both univariate and multivariate analysis is important. Univariate analysis provides evidence on the signal’s predictive ability when the signal is used alone, whereas multivariate analysis provides evidence on the signal’s incremental predictive ability above and beyond other variables considered. For both univariate and multivariate analysis, it is wise to examine the returns to a variety of portfolios formed on the basis of the signal. Sorting stocks into quintiles or deciles is popular, as is regression analysis, where the coefficients represent the return to a portfolio with unit exposure to the signal. These portfolios can be equal weighted, cap weighted, or even risk weighted depending on the model’s ultimate purpose. Finally, the return forecasting model should be tested using a realistic simulation that controls the target level of risk, takes account of transaction costs, and imposes appropriate constraints (e.g., the nonnegativity constraint for long-only portfolios). In our experience, many promising return-forecasting signals fail to add value in realistic back-tests—either because they involve excessive trading; work only for small, illiquid stocks; or contain information that is already captured by other components of the model.
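To make the univariate tests above concrete, the sketch below shows one common way to evaluate a signal: sorting stocks into quintiles each period and comparing average forward returns across buckets. It is a minimal illustration only; the pandas layout and column names (date, ticker, signal, fwd_return) are assumptions, not a description of any particular manager's research system.

```python
import pandas as pd

def quintile_backtest(df: pd.DataFrame) -> pd.DataFrame:
    """Univariate signal test: sort stocks into quintiles each period
    and compare forward returns across quintiles.

    Assumes columns: 'date', 'ticker', 'signal', 'fwd_return'
    (one-period-ahead return), with one row per stock per period.
    """
    df = df.dropna(subset=["signal", "fwd_return"]).copy()

    # Rank stocks into quintiles within each date (1 = lowest signal value).
    df["quintile"] = (
        df.groupby("date")["signal"]
          .transform(lambda s: pd.qcut(s, 5, labels=False, duplicates="drop") + 1)
    )

    # Equal-weighted average forward return of each quintile, per date.
    bucket_returns = (
        df.groupby(["date", "quintile"])["fwd_return"].mean().unstack("quintile")
    )

    # Long the top quintile, short the bottom quintile.
    bucket_returns["Q5_minus_Q1"] = bucket_returns[5] - bucket_returns[1]
    return bucket_returns

# Hypothetical usage:
# panel = pd.read_parquet("signal_panel.parquet")
# summary = quintile_backtest(panel)
# print(summary.mean())
```

A multivariate test would repeat the exercise after controlling for the other signals in the model, for example by sorting on the residual of the signal regressed on the existing factors.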

The third step in building a return forecasting model is determining each signal’s weight. When computing expected returns, more weight should be put on signals that, over time, have been more stable; generated higher and more consistent returns; and provided superior diversification benefits. Maintaining exposures to signals that change slowly requires less trading, and hence lower costs, than is the case for signals that change rapidly. Other things being equal, a stable signal (such as the ratio of book-to-market equity) should get more weight than a less stable signal (such as one-month price reversal). High, consistent returns are essential to a profitable, low-risk investment strategy; hence, signals that generate high returns with little risk should get more weight than signals that produce lower returns with higher risk. Finally, signals with more diversified payoffs should get more weight because they can hedge overall performance when other signals in the model perform poorly.
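The weighting considerations above can be illustrated with a deliberately simple scheme: weight each signal in proportion to its historical information ratio computed on cost-adjusted returns, so that signals with higher, more consistent net returns receive more weight and fast-moving, high-turnover signals are penalized. The sketch below is hypothetical (monthly data, a user-supplied per-period cost estimate) and omits the correlation and diversification effects a full weighting exercise would include.

```python
import numpy as np
import pandas as pd

def signal_weights(signal_returns: pd.DataFrame, cost_per_period: pd.Series) -> pd.Series:
    """Illustrative scheme: weight each signal in proportion to its
    historical information ratio computed on cost-adjusted returns.

    signal_returns:  one column of periodic long-short returns per signal.
    cost_per_period: estimated trading cost per period for each signal,
                     higher for fast-moving signals such as short-term reversal.
    """
    net = signal_returns.sub(cost_per_period, axis=1)   # net-of-cost returns
    ir = np.sqrt(12) * net.mean() / net.std()           # annualized IR (monthly data)
    ir = ir.clip(lower=0.0)                             # drop signals with negative net IR
    return ir / ir.sum()                                # normalize weights to sum to one
```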

The last step in forecasting returns is to make sure the forecasts are reasonable and internally consistent by comparing them with equilibrium views. Return forecasts that ignore equilibrium expectations can create problems in the portfolio construction step. Seemingly reasonable return forecasts can cause an optimizer to maximize errors rather than expected returns, producing extreme, unbalanced portfolios. The problem is caused by return forecasts that are inconsistent with the assumed correlations across stocks. If two stocks (or subportfolios) are highly correlated, then the equilibrium expectation is that their returns should be similar; otherwise, the optimizer will treat the pair of stocks as a (near) arbitrage opportunity by going extremely long the high-return stock and extremely short the low-return stock. However, with hundreds of stocks, it is not always obvious whether certain stocks, or combinations of stocks, are highly correlated and therefore ought to have similar return forecasts. The Black-Litterman model was specifically designed to alleviate this problem. It blends a model’s raw return forecasts with equilibrium expected returns—which are the returns that would make the benchmark optimal for a given risk model—to produce internally consistent return forecasts that reflect the manager’s (or model’s) views yet are consistent with the risk model. (For a discussion of how to use the Black-Litterman model to incorporate equilibrium views into a return-forecasting model, see Litterman [2003].)
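The sketch below illustrates the blending step in its simplest form, under strong assumptions: equilibrium returns are reverse-optimized from the risk model and benchmark weights, the model expresses one absolute view per stock, and the view uncertainty is a common constant. The parameter values (risk aversion, tau, view variance) are purely illustrative, not the calibration used in Litterman (2003).

```python
import numpy as np

def implied_equilibrium_returns(Sigma: np.ndarray, w_bench: np.ndarray,
                                risk_aversion: float = 2.5) -> np.ndarray:
    """Reverse-optimize: the expected returns that make the benchmark optimal
    for the given risk model (pi = lambda * Sigma * w_bench)."""
    return risk_aversion * Sigma @ w_bench

def black_litterman(Sigma: np.ndarray, pi: np.ndarray, views: np.ndarray,
                    tau: float = 0.05, view_var: float = 0.02**2) -> np.ndarray:
    """Blend equilibrium returns (pi) with the model's raw forecasts (views),
    assuming one absolute view per stock with common uncertainty view_var.
    All parameter values are illustrative."""
    n = len(pi)
    prior_cov = tau * Sigma                     # uncertainty around equilibrium returns
    Omega = view_var * np.eye(n)                # uncertainty around the model's views
    A = np.linalg.inv(np.linalg.inv(prior_cov) + np.linalg.inv(Omega))
    mu = A @ (np.linalg.inv(prior_cov) @ pi + np.linalg.inv(Omega) @ views)
    return mu                                   # blended expected returns
```

In this formulation, the blended forecast for each stock is pulled toward equilibrium in proportion to how uncertain the raw view is relative to the equilibrium prior, which is what keeps highly correlated stocks from being assigned wildly different expected returns.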

Forecasting Risks

In a portfolio context, the risk of a single stock is a function of the variance of its returns, as well as the covariances between its returns and the returns of other stocks in the portfolio. The variance-covariance matrix of stock returns, or risk model, is used to measure the risk of a portfolio. For equity portfolio management, investors rarely estimate the full variance-covariance matrix directly because the number of individual elements is too large, and for a well-behaved (that is, nonsingular) matrix, the number of observations used to estimate the matrix must significantly exceed the number of stocks in the matrix. To see this, suppose that there are N stocks. Then the variance-covariance matrix has N(N + 1)/2 elements, consisting of N variances and N(N − 1)/2 covariances. For an S&P 500 portfolio, for instance, there are 500 × (500 + 1)/2 = 125,250 unknown parameters to estimate, 500 variances and 124,750 covariances. For this reason, most equity portfolio managers use a factor risk model in which individual variances and covariances are expressed as a function of a small set of stock characteristics—such as industry membership, size, and leverage. This greatly reduces the number of unknown risk parameters that the manager needs to estimate.
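A minimal sketch of the parameter compression a factor model provides is shown below: the full covariance matrix is assembled from factor exposures, a factor covariance matrix, and stock-specific variances. The 20-factor count is a hypothetical choice used only to illustrate the reduction in the number of parameters.

```python
import numpy as np

def factor_covariance(B: np.ndarray, F: np.ndarray, spec_var: np.ndarray) -> np.ndarray:
    """Stock covariance matrix implied by a linear factor risk model:
        Sigma = B F B' + D
    B:        N x K matrix of factor exposures (industries, size, leverage, ...)
    F:        K x K factor return covariance matrix
    spec_var: length-N vector of stock-specific (idiosyncratic) variances
    """
    return B @ F @ B.T + np.diag(spec_var)

# Parameter count comparison for N = 500 stocks and K = 20 factors (hypothetical):
N, K = 500, 20
full_model = N * (N + 1) // 2                    # 125,250 free parameters
factor_model = N * K + K * (K + 1) // 2 + N      # 10,710 parameters
```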

When developing an equity factor risk model, it is a good idea to include all of the variables used to forecast returns among the (potentially larger) set of variables used to forecast risks. This way, the risk model “sees” all of the potential risks in an investment strategy, both those managers are willing to accept and those they would like to avoid. Further, a mismatch between the variables in the return and risk models can produce less efficient portfolios in the optimizer. For instance, suppose a return model comprises two factors, each with 50% weight: the book-to-price ratio (B/P) and return on equity (ROE). Suppose the risk model, on the other hand, has only one factor: B/P. When forming a portfolio, the optimizer will manage risk only for the factors in the risk model—that is, B/P but not ROE. This inconsistency between the return and risk models can lead to portfolios with extreme positions and higher-than-expected risk. The portfolio will not reflect the original 50-50 weights on the two return factors because the optimizer will dampen the exposure to B/P, but not to ROE. In addition, the risk model’s estimate of tracking error will be too low because it will not capture any risk from the portfolio’s exposure to ROE. The most effective way to avoid these two problems is to make sure all of the factors in the return model are also included in the risk model (although the converse does not need to be true—that is, there can be risk factors without expected returns).

A final issue to consider when developing or selecting a risk model is the frequency of data used in the estimation process. Many popular risk models use monthly returns, whereas some portfolio managers have developed proprietary risk models that use daily returns. Clearly, when estimating variances and covariances, the more observations, the better. High-frequency data produce more observations and hence more precise and reliable estimates. Further, by giving more weight to recent observations, estimates can be more responsive to changing economic conditions. As a result, risk models that use high-frequency returns should provide more accurate risk estimates. (For a detailed discussion of factor risk models, see Chapter 20 of Litterman [2003]).
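One simple way to give more weight to recent observations, as described above, is an exponentially weighted covariance estimate computed from daily returns. The sketch below is illustrative only; the 90-day half-life is an assumption, not a recommended setting.

```python
import numpy as np

def ewma_covariance(returns: np.ndarray, half_life: float = 90.0) -> np.ndarray:
    """Exponentially weighted covariance of (daily) return series.

    returns:   T x K matrix of daily factor (or stock) returns, oldest row first.
    half_life: number of days over which an observation's weight halves
               (90 days is an illustrative choice).
    """
    T = returns.shape[0]
    decay = 0.5 ** (1.0 / half_life)
    weights = decay ** np.arange(T - 1, -1, -1)   # most recent observation gets weight 1
    weights /= weights.sum()
    mean = weights @ returns                      # weighted mean return per series
    demeaned = returns - mean
    return (demeaned * weights[:, None]).T @ demeaned
```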

Forecasting Transaction Costs

Although often overlooked, accurate trade-cost estimates are critical to the equity portfolio management process. After all, what really matters is not the gross return a portfolio might receive, but rather the actual return a portfolio does receive after deducting all relevant costs, including transaction costs. Ignoring transaction costs when forming portfolios can lead to poor performance because implementation costs can reduce, or even eliminate, the advantages achieved through superior stock selection. Conversely, taking account of transaction costs can help produce portfolios with gross returns that exceed the costs of trading.

Accurate trading-cost forecasts are also important after portfolio formation, when monitoring the realized costs of trading. A good transaction-cost model can provide a benchmark for what realized costs “should be,” and hence whether actual execution costs are reasonable. Detailed trade-cost monitoring can help traders and brokers achieve best execution by driving improvements in trading methods—such as more patient trading, or the selective use of alternative trading mechanisms.

Transaction costs have two components: (1) explicit costs, such as commissions and fees; and (2) implicit costs, or market impact. Commissions and fees tend to be relatively small, and the cost per share does not depend on the number of shares traded. In contrast, market impact costs can be substantial. They reflect the costs of consuming liquidity from the market, costs that increase on a per-share basis with the total number of shares traded.

Market impact costs arise because suppliers of liquidity incur risk. One component of these costs is inventory risk. The liquidity supplier has a risk/return trade-off, and will demand a price concession to compensate for this inventory risk. The larger the trade size and the more illiquid or volatile the stock, the larger are inventory risk and market impact costs. Another consideration is adverse selection risk. Liquidity suppliers are willing to provide a better price to uninformed than informed traders, but since there is no reliable way to distinguish between these two types of traders, the market maker sets an average price, with expected gains from trading with uninformed traders compensating for losses incurred from trading with informed traders. Market impact costs tend to be higher for low-price and small-cap stocks for which greater adverse selection risk and informational asymmetry tend to be more severe.

Forecasting price impact is difficult. Because researchers only observe prices for completed trades, they cannot determine what a stock’s price would have been without these trades. It is therefore impossible to know for sure how much prices moved as a result of the trade. Price impact costs, then, are statistical estimates that are more accurate for larger data samples.

One approach to estimating trade costs is to directly examine the complete record of market prices, tick by tick (see, for example, Breen, Hodrick, and Korajczyk [2002]). These data are noisy due to discrete prices, non-synchronous reporting of trades and quotes, and input errors. Also, the record does not show orders placed, just those that eventually got executed (which may have been split up from the original, larger order). Research by Lee and Radhakrishna (2000) suggests empirical analysis should be done using aggregated samples of trades rather than individual trades at the tick-by-tick level.

Another approach is for portfolio managers to estimate a proprietary transaction cost model using their own trades and, if available, those of comparable managers. If generating a sufficient sample is feasible, this approach is ideal because the resulting model matches the stock characteristics, investment philosophy, and trading strategy of the individual portfolio manager. The large academic literature on measuring transaction costs provides one source of guidance; models built from actual trading records provide a complementary source of information on market impact costs. (For empirical evidence on how transaction costs can vary across trade characteristics and how to predict transaction costs, see Chapter 23 of Litterman [2003].)
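As a hypothetical illustration of fitting such a proprietary model, the sketch below regresses realized implementation shortfall on a stock's volatility and the trade's participation in average daily volume. The square-root-of-participation functional form is a common specification in the trading-cost literature, used here only as an example; the column names and coefficients are assumptions, not a description of any specific manager's cost model.

```python
import numpy as np
import pandas as pd

def fit_impact_model(trades: pd.DataFrame) -> np.ndarray:
    """Fit a simple market-impact model on historical executions:

        cost_bps = a + b * volatility * sqrt(shares / adv) + noise

    where cost_bps is implementation shortfall versus the pre-trade price,
    volatility is daily return volatility, and shares / adv is the trade's
    participation in average daily volume.
    Assumed columns: 'cost_bps', 'volatility', 'shares', 'adv'.
    """
    x = trades["volatility"] * np.sqrt(trades["shares"] / trades["adv"])
    X = np.column_stack([np.ones(len(x)), x])
    coeffs, *_ = np.linalg.lstsq(X, trades["cost_bps"].to_numpy(), rcond=None)
    return coeffs                                  # [fixed cost in bps, impact coefficient]

def predict_cost_bps(coeffs, volatility, shares, adv):
    """Forecast per-trade cost (in basis points) for a contemplated order."""
    return coeffs[0] + coeffs[1] * volatility * np.sqrt(shares / adv)
```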

CONSTRUCTING PORTFOLIOS

In this section we discuss how to construct portfolios based on the forecasts described in the last section. In particular, we compare ad hoc, rule-based approaches to portfolio optimization. The first step in portfolio construction, however, is to specify the investment goals. While having good forecasts (as described in the previous section) is obviously important, the investor’s goals define the portfolio management problem. These goals are usually specified by three major parameters: the benchmark, the risk/return target, and specific restrictions such as the maximum holdings in any single name, industry, or sector.

The benchmark represents the starting point for any active portfolio; it is the client’s neutral position—a low-cost alternative to active management in that asset class. For example, investors interested in holding large-cap U.S. stocks might select the S&P 500 or Russell 1000 as their benchmark, while investors interested in holding small-cap stocks might choose the Russell 2000 or the S&P 600. Investors interested in a portfolio of non-U.S. stocks could pick the FTSE 350 (United Kingdom), TOPIX (Japan), or MSCI EAFE (developed markets outside North America) indexes. There are a large number of published benchmarks available, or an investor might develop a customized benchmark to represent the neutral position. In all cases, however, the benchmark should be a reasonably low-cost, investable alternative to active management.

Although some investors are content to merely match the returns on their benchmarks, most allocate at least some of their assets to active managers, with the amount of active risk they are willing to accept set through risk budgeting. In equity portfolio management, active management means overweighting attractive stocks and underweighting unattractive stocks relative to their weights in the benchmark. (The difference between a stock’s weight in the portfolio and its weight in the benchmark is called its active weight, where a positive active weight corresponds to an overweight position and a negative active weight corresponds to an underweight position.) Of course, there is always a chance that these active weighting decisions will cause the portfolio to underperform the benchmark, but one of the basic dictums of modern finance is that to earn higher returns, investors must accept higher risk—which is true of active returns as well as total returns.

A portfolio’s tracking error measures its risk relative to a benchmark. Tracking error equals the time-series standard deviation of a portfolio’s active return (which is the difference between the portfolio’s return and that of the benchmark). A portfolio’s information ratio equals its average active return divided by its tracking error. As a measure of return per unit of risk, the information ratio provides a convenient way to compare strategies with different active risk levels.
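The two definitions above translate directly into a short calculation, sketched below for a history of periodic portfolio and benchmark returns (monthly frequency assumed for the annualization).

```python
import numpy as np

def tracking_error_and_ir(port_returns: np.ndarray, bench_returns: np.ndarray,
                          periods_per_year: int = 12):
    """Annualized tracking error and information ratio from periodic returns."""
    active = port_returns - bench_returns                 # active return per period
    te = active.std(ddof=1) * np.sqrt(periods_per_year)   # tracking error
    ir = active.mean() * periods_per_year / te            # information ratio
    return te, ir
```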

An efficient portfolio is one with the highest expected return for a target level of risk—that is, it has the highest information ratio possible given the risk budget. In the absence of constraints, an efficient portfolio is one in which each stock’s marginal contribution to expected return is proportional to its marginal contribution to risk. That is, there are no unintended risks, and all risks are compensated with additional expected returns. How can a portfolio manager construct such an efficient portfolio? Below we compare two approaches: (1) a rule-based system; and (2) portfolio optimization.

Building an efficient portfolio is a complex problem. To help simplify this complicated task, many portfolio managers use ad hoc, rule-based methods that partially control exposures to a small number of risk factors. For example, one common approach—called stratified sampling—ranks stocks within buckets formed on the basis of a few key risk factors, such as sector and size. The manager then invests more heavily in the highest-ranked stocks within each bucket, while keeping the portfolio’s total weight in each bucket close to that of the benchmark. The resulting portfolio is close to neutral with respect to the identified risk factors (that is, sector and size) while overweighting attractive stocks and underweighting unattractive stocks.

Although stratified sampling may seem sensible, it is not very efficient. Numerous unintended risks can creep into the portfolio, such as an overweight in high-beta stocks, growth stocks, or stocks in certain subsectors. Nor does it allow the manager to explicitly consider trading costs or investment objectives in the portfolio construction problem. Portfolio optimization provides a much better method for balancing expected returns against different sources of risk, trade costs, and investor constraints. An optimizer uses computer algorithms to find the set of weights (or holdings) that maximize the portfolio’s expected return (net of trade costs) for a given level of risk. It minimizes uncompensated sources of risk, including sector and style biases. Fortunately, despite the complex math, optimizers require only the various forecasts we’ve already described and developed in the prior section.
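The sketch below shows, in stylized form, the kind of problem such an optimizer solves: maximize expected active return net of an (assumed linear) trading cost, penalized for active risk, subject to simple long-only and position-size constraints. It uses the cvxpy package purely for convenience; any quadratic-programming tool would do, and all parameter values are illustrative rather than recommendations.

```python
import cvxpy as cp
import numpy as np

def optimize_active_weights(alpha, Sigma, w_bench, w_current, cost_bps,
                            risk_aversion=10.0, max_active=0.02):
    """Solve for active weights that maximize alpha net of trading costs,
    penalized for active risk.  Inputs (all illustrative):
      alpha:      expected active returns (length N)
      Sigma:      stock covariance matrix from the factor risk model (N x N)
      w_bench:    benchmark weights; w_current: current portfolio weights
      cost_bps:   linear per-unit trading cost estimate for each stock
      max_active: cap on each stock's active weight
    """
    n = len(alpha)
    w = cp.Variable(n)                                    # active weights
    trades = w + w_bench - w_current                      # shares to buy or sell
    trade_cost = cp.sum(cp.multiply(cost_bps, cp.abs(trades)))
    objective = cp.Maximize(alpha @ w - trade_cost
                            - risk_aversion * cp.quad_form(w, Sigma))
    constraints = [
        cp.sum(w) == 0,            # fully invested relative to the benchmark
        w >= -w_bench,             # long-only: holdings cannot go negative
        cp.abs(w) <= max_active,   # per-name active weight limit
    ]
    cp.Problem(objective, constraints).solve()
    return w.value
```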

Chapter 23 of Litterman (2003) demonstrates the benefits of optimization, comparing two portfolios: one constructed using stratified sampling and the other constructed using an optimizer. The optimized portfolio is designed to have the same predicted tracking error as the rule-based portfolio. The results show that (1) the optimized portfolio is more efficient in terms of its expected alpha and information ratio for the same level of risk, (2) risk is spread more broadly for the optimized portfolio compared to the rule-based portfolio, (3) more of the risk budget in the optimized portfolio is due to the factors that are expected to generate positive excess returns, and (4) the forecast beta for the optimized portfolio is closer to 1.0, as unintended sources of risk (such as market timing) are minimized.

Another benefit of optimizers is that they can efficiently account for transaction costs, constraints, selected restrictions, and other account guidelines, making it much easier to create customized client portfolios. Of course, when using an optimizer to construct efficient portfolios, reliable inputs are essential. Data errors that add noise to the return, risk, and transaction cost forecasts can lead to portfolios in which these forecast errors are maximized. Instead of picking stocks with the highest actual expected returns, or the lowest actual risks or transaction costs, the optimizer takes the biggest positions in the stocks with the largest errors, namely, the stocks with the greatest overestimates of expected returns or the greatest underestimates of risks or transaction costs. A robust investment process will screen major data sources for outliers that can severely corrupt one’s forecasts. Further, as described in the previous section, return forecasts should be adjusted for equilibrium views using the Black-Litterman model to produce final return forecasts that are more consistent with risk estimates, and with each other. Finally, portfolio managers should impose sensible, but simple, constraints on the optimizer to help guard against the effects of noisy inputs. These constraints could include maximum active weights on individual stocks, industries, or sectors, as well as limitations on the portfolio’s active exposure to factors such as size or market beta.

TRADING

Trading is the process of executing the orders derived in the portfolio construction step. To trade a list of stocks efficiently, investors must balance opportunity costs and execution price risk against market impact costs. Trading each stock quickly minimizes the lost alpha and price uncertainty caused by delay but incurs the greatest market impact, whereas trading more patiently over a longer period reduces market impact but incurs larger opportunity costs and short-term execution price risk. Striking the right balance is one of the keys to successful trade execution.

The concept of “striking a balance” suggests optimization. Investors can use a trade optimizer to balance the gains from patient trading (e.g., lower market-impact cost) against the risks (e.g., greater deviation between the execution price and the decision price; potentially higher short-term tracking error). Such an optimizer will tend to suggest aggressive trading for names that are liquid and/or have a large effect on portfolio risk, while suggesting patient trading for illiquid names that have less impact on risk. A trade optimizer can also easily handle most real-world trading constraints, such as the need to balance cash in each of many accounts across the trading period (which may last several days).
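To illustrate the impact-versus-risk trade-off a trade optimizer balances, the sketch below uses an Almgren-Chriss-style execution schedule; this particular functional form is an assumption for illustration, not necessarily the model an actual trade optimizer uses. Higher volatility or risk aversion front-loads the trade, while higher impact costs (more illiquid names) spread it out.

```python
import numpy as np

def trade_schedule(total_shares: float, n_periods: int, sigma: float,
                   eta: float, risk_aversion: float) -> np.ndarray:
    """Almgren-Chriss-style schedule (continuous-time approximation): shares
    remaining after each period when balancing temporary market impact
    (cost parameter eta) against execution price risk (volatility sigma).

        kappa = sqrt(risk_aversion * sigma^2 / eta)
        x(t)  = X * sinh(kappa * (T - t)) / sinh(kappa * T)

    Larger kappa means a more front-loaded (aggressive) trade.
    All inputs are illustrative.
    """
    T = float(n_periods)
    kappa = np.sqrt(risk_aversion * sigma**2 / eta)
    t = np.arange(n_periods + 1)
    return total_shares * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

# Hypothetical comparison: a liquid name (low eta) trades out faster than an illiquid one.
# remaining_liquid   = trade_schedule(100_000, 10, sigma=0.02, eta=1e-7, risk_aversion=1e-6)
# remaining_illiquid = trade_schedule(100_000, 10, sigma=0.02, eta=1e-5, risk_aversion=1e-6)
```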

A trade optimizer can also easily accommodate the time horizon of a manager’s views. That is, if a manager is buying a stock primarily for long-term valuation reasons, and the excess return is expected to accrue gradually over time, then the optimizer will likely suggest a patient trading strategy (all else being equal). Conversely, if the manager is buying a stock in expectation of a positive earnings surprise tomorrow, the optimizer is likely to suggest an aggressive trading strategy (again, all else being equal). The trade optimizer can also be programmed to consider short-term return regularities, such as the tendency of stocks with dramatic price moves on one day to continue those moves on the next day before reversing the following day (see Heston, Korajczyk, and Sadka, 2010). Although these types of regularities may be too small to cover trading costs, and should not be used to initiate trades, they can be used to help minimize trading costs after an investor has independently decided to trade (see Engle and Ferstenberg, 2007).

To induce traders to follow the desired strategy (that is, that suggested by the trade optimizer), the portfolio manager needs to give the trader an appropriate benchmark, which provides guidance about how aggressively or patiently to trade. Two widely used benchmarks for aggressive trades are the closing price on the previous day and the opening price on the trade date. Because the values of these two benchmarks are measured prior to any trading, a patient strategy that delays trading heightens execution price risk by increasing the possibility of deviating significantly from the benchmark. Another popular execution benchmark is the volume-weighted average price (VWAP) for the stock over the desired trading period, which could be a few minutes or hours for an aggressive trade, or one or more days for a patient trade. However, the VWAP benchmark should only be used for trades that are not too large relative to total volume over the period; otherwise, the trader may be able to influence the benchmark against which he or she is evaluated.
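The sketch below illustrates how realized fills might be compared against the three benchmarks just described (prior close, open, and interval VWAP). The data layout is hypothetical, and positive numbers are read as a cost relative to the benchmark.

```python
import numpy as np
import pandas as pd

def execution_slippage(fills: pd.DataFrame, prev_close: float, open_price: float,
                       market_prices: pd.Series, market_volumes: pd.Series,
                       side: int = 1) -> dict:
    """Slippage (in basis points) of an order's fills versus three benchmarks.

    fills: DataFrame with 'price' and 'shares'; side = +1 for a buy, -1 for a sell.
    market_prices / market_volumes: market-wide trades over the trading interval,
    used to compute the interval VWAP.
    """
    avg_fill = np.average(fills["price"], weights=fills["shares"])
    vwap = np.average(market_prices, weights=market_volumes)

    def bps(benchmark):                     # positive = worse than the benchmark
        return side * (avg_fill - benchmark) / benchmark * 1e4

    return {"vs_prev_close": bps(prev_close),
            "vs_open": bps(open_price),
            "vs_vwap": bps(vwap)}
```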

Buy-side traders can increasingly make use of algorithmic trading, or computer algorithms that directly access market exchanges, to automatically make certain trading decisions such as the timing, price, quantity, type, and routing of orders. These algorithms may dynamically monitor market conditions across time and trading venues, and reduce market impact by breaking large orders into smaller pieces, employing either limit orders or marketable limit orders, or selecting trading venues to submit orders, while closely tracking trading benchmarks. Algorithmic trading provides buy-side traders more anonymity and greater control over their order flow, but tends to work better for more liquid or patient trades.

Principal package trading is another way to lower transaction costs relative to traditional agency methods (see Kavajecz and Keim, 2005). Principal trades may be crossed with the principal’s existing inventory positions, or allow the portfolio manager to benefit from the longer trading horizon and superior trading ability of certain intermediaries.

EVALUATING RESULTS AND UPDATING THE PROCESS

Once an investment process is up and running, it needs to be constantly reassessed and, if necessary, refined. The first step is to compare actual results to expectations; if realizations differ enough from expectations, process refinements may be necessary. Thus, managers need systems to monitor realized performance, risk, and trading costs and compare them to prior expectations.

A good performance monitoring system should be able to determine not only the degree of over- or under-performance, but also the sources of these excess returns. For example, a good performance attribution system might break excess returns down into those due to market timing (having a different beta than the benchmark), industry tilts, style differences, and stock selection. Such systems are available from a variety of third-party vendors. An even better system would allow the manager to further disaggregate returns to see the effects of each of the proprietary signals used to forecast returns, as well as the effects of constraints and other portfolio requirements. And, of course, any system will be more accurate if it can account for daily trading and changes in portfolio exposures.
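A bare-bones version of such an attribution is sketched below: each factor's contribution is its active exposure times its realized factor return, and whatever remains is attributed to stock selection (together with trading and constraint effects). The factor names and single-period setup are assumptions for illustration; commercial attribution systems work at much finer granularity.

```python
import pandas as pd

def attribute_active_return(active_exposures: pd.Series, factor_returns: pd.Series,
                            active_return: float) -> pd.Series:
    """One-period, factor-based performance attribution.

    active_exposures: portfolio-minus-benchmark exposure to each factor
                      (e.g., 'market_beta', industry tilts, 'value', 'momentum').
    factor_returns:   realized return of each factor over the period.
    active_return:    portfolio return minus benchmark return for the period.
    """
    contributions = active_exposures * factor_returns        # per-factor contribution
    contributions["stock_selection_residual"] = active_return - contributions.sum()
    return contributions
```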

Investors should also compare realized risks to expectations. For example, Goldman Sachs has developed the concept of the green, yellow, and red zones to compare realized and targeted levels of risk (see Chapter 17 in Litterman, 2003). Essentially, if realized risk is within a reasonable band around the target (that is, the green zone), then one can assume the risk management techniques are working as intended and no action is required. If realized risk is further from the target (the yellow zone), the situation may require closer examination, and if realized risk is far from the target (the red zone), some action is usually called for.

Finally, it is important to monitor trading costs. Are they above or below the costs assumed when making trading decisions? Are they above or below competitors’ costs? Are they too high in an absolute sense? If so, managers may need to improve their trade cost estimates, trading process, or both. There are many services that can report realized trade costs, but most are available with a significant lag, and are inflexible with respect to how they measure and report these costs. With in-house systems, however, managers can compare a variety of trade cost estimation techniques and get the feedback in a timely enough fashion to act on the results.

The critical question, of course, is what to do with the results of these monitoring systems: When do variations from expectations warrant refinements to the process? This will depend on the size of the variations and their persistence. For example, a manager probably would not throw out a stock-selection signal after one bad month—no matter how bad—but might want to reconsider after many years of poor performance, taking into consideration the economic environment and any external factors that might explain the results. It is also important to compare the underperformance to historical simulations. Have similar periods occurred in the past, and if so, were they followed by improvements? In this case, the underperformance is part of the normal risk in that signal and no changes may be called for. If not, there may have been a structural change that might invalidate the signal going forward—for example, if the signal has become overly popular, it may no longer be a source of mispricing.

Similarly, the portfolio manager needs to consider the source of any differences between expectations and realizations. For example, was underperformance due to faulty signals, portfolio constraints, unintended risk, or random noise? The answer will determine the proper response. If constraints are to blame, they may be lifted—but only if doing so would not violate any investment guidelines or incur excessive risk. Alternatively, if the signals are to blame, the manager must decide if the deviations from expectations are temporary or more enduring. If it is just random noise, no action is necessary. Similarly, any differences between realized and expected risk could be due to poor risk estimates or poor portfolio construction, with the answer determining the response. Finally, excessive trading costs (versus expectations) could reflect poor trading or poor trade cost estimates, again with different implications for action.

In summary, ongoing performance, risk, and trade cost monitoring is an integral part of the equity portfolio management process and should get equal billing with forecasting, portfolio construction, and trading. Monitoring serves as both quality control and a source of new ideas and process improvements. The more sophisticated the monitoring systems, the more useful they are to the process. And although the implications of monitoring involve subtle judgments and careful analysis, better data can lead to better solutions.

KEY POINTS

  • Two popular ways to manage equity portfolios are the traditional, or qualitative, approach and the quantitative approach.
  • The equity investment process comprises four primary steps: (1) forecasting returns, risks, and transaction costs; (2) constructing portfolios that maximize expected risk-adjusted return net of transaction costs; (3) trading stocks efficiently; and (4) evaluating results and updating the process.
  • There are four closely linked steps to building a quantitative equity return-forecasting model: (1) identifying a set of potential return forecasting variables, or signals; (2) testing the effectiveness of each signal, by itself and together with other signals; (3) determining the appropriate weight for each signal in the model; and (4) blending the model’s views with market equilibrium to arrive at reasonable forecasts for expected returns.
  • Most quantitative equity portfolio managers use a factor risk model in which individual variances and covariances are expressed as a function of a small set of stock characteristics such as industry membership, size, and leverage.
  • Transaction costs consist of explicit costs, such as commissions and fees, and implicit costs, or market impact. The per-share cost of commissions and fees does not depend on the number of shares traded, whereas market impact costs increase on a per-share basis with the total number of shares traded.
  • Tracking error measures a portfolio’s risk relative to a benchmark. Tracking error equals the time-series standard deviation of a portfolio’s active return, the difference between the portfolio’s return and that of the benchmark.
  • Information ratio is a measure of return per unit of risk, a portfolio’s average active return divided by its tracking error.
  • Two widely used ways to construct an efficient portfolio are stratified sampling, which is a rule-based system, and portfolio optimization.
  • To trade a list of stocks efficiently, investors must balance opportunity costs and execution price risk against market impact costs. Trading each stock quickly minimizes lost alpha and price uncertainty due to delay, but impatient trading incurs maximum market impact. Trading more patiently over a longer period reduces market impact but incurs larger opportunity costs and short-term execution price risk.
  • Once an investment process is operational, it should be constantly reassessed and, if necessary, refined. Thus, managers need systems to monitor realized performance, risk, and trading costs and compare them to prior expectations.
  • A good performance monitoring system should be able to determine the degree of over- or underperformance as well as the sources of these excess returns, such as market timing, industry tilts, style differences, and stock selection.

REFERENCES

Breen, W., Hodrick, L. S., and Korajczyk, R. A. (2002). Predicting equity liquidity. Management Science 48, 4: 470–483.

Dawes, R. M., Faust, D., and Meehl, P. E. (1989). Clinical versus actuarial judgment. Science 243 (March 31): 1668–1674.

Engle, R. F., and Ferstenberg, R. (2007). Execution risk. The Journal of Portfolio Management, 33–44.

Fama, E. F., and French, K. R. (2008). Dissecting anomalies. Journal of Finance 63: 1653–1678.

Graham, B., and Dodd, D. (1934). Security Analysis, 1st edition. New York: McGraw-Hill.

Heston, S. L., Korajczyk, R. A., and Sadka, R. (2010). Intraday patterns in the cross-section of stock returns. Journal of Finance 65: 1369–1407.

Kahneman, D., Slovic, P., and Tversky, A. (1982). Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Kavajecz, K. A., and Keim, D. B. (2005). Packaging liquidity: Blind auctions and transaction cost efficiencies. Journal of Financial and Quantitative Analysis 40: 465–492.

Lee, C., and Radhakrishna, B. (2000). Inferring investor behavior: Evidence from TORQ data. Journal of Financial Markets 3: 83–111.

Litterman, R. (2003). Modern Investment Management: An Equilibrium Approach. Hoboken, NJ: John Wiley & Sons.

Malkiel, B.G. (1995). Returns from investing in equity mutual funds, 1971 to 1991. Journal of Finance 50: 549–572.
