9
Equity Market Neutral
An investment process based on a quantitative model is not a black box, but an investment process based on subjective assessments and gut feeling is!
Most long/short equity managers select stocks separately for the long and the short sides of their portfolio. They pay little attention to the relation between their long and their short positions, or more generally, to their portfolio construction process. Consequently, their funds often have a net long or a net short exposure, depending on the set of available opportunities and the manager’s outlook for the near term direction of the overall market. In either case, their portfolio performance becomes dependent upon directional market movements. Alfred W. Jones’ fund, for instance, had a tilt towards long positions – his shorts were of a generally smaller magnitude than his longs.
The goal of equity market neutral managers is precisely to avoid any net market exposure in their portfolio. Selling and buying are no longer sequential independent activities; they become related and in some cases even concurrent. In addition, long and short positions are regularly balanced to remain market neutral at all times, so that all of the portfolio’s return is derived purely from stock selection and no longer from market conditions. This explains why many investors perceive equity market neutral as the quintessential hedge fund strategy. Indeed, when correctly implemented, it offers the promise of true absolute returns (the alpha) without having to bear the market sensitivity (the beta). But beware! “Market neutral” has become a catch-all marketing term which embeds several different investment approaches with varying degrees of risk and neutrality.

9.1 DEFINITIONS OF MARKET NEUTRALITY

Let us first explain what we mean by “market neutral”. As an illustration, consider a plain vanilla long/short equity portfolio with $10 million of initial capital. Say this capital is invested as follows: $9 million in long stock positions and $6 million in short stock positions. The $6 million raised from the short sales is held as collateral and earns interest at the risk-free rate. What should we do to make this portfolio market neutral?

9.1.1 Dollar neutrality

At first glance, our portfolio has a positive net long market exposure of $3 million ($9 million long minus $6 million short). To be dollar neutral, we need equal dollar investments in the long and the short positions, for instance $9 million long and $9 million short. We therefore need to increase the size of the short position by $3 million. Going forward, we will also need to rebalance our long and short positions on a regular basis to maintain dollar neutrality. Indeed, if we were right in our stock selection, the long position will appreciate while the short position will shrink in size, pushing the portfolio towards a net long bias.
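For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of this dollar-neutrality check, simply restating the chapter’s $9 million/$6 million example; the function name is illustrative.

```python
# A minimal sketch of a dollar-neutrality check, using the chapter's
# hypothetical $9 million long / $6 million short portfolio.

def dollar_neutral_adjustment(long_value: float, short_value: float) -> float:
    """Dollar amount to add to the short side (a negative result
    means the short side should be reduced) to match the long side."""
    return long_value - short_value

long_mv, short_mv = 9_000_000, 6_000_000
extra_short = dollar_neutral_adjustment(long_mv, short_mv)
print(f"Increase short exposure by ${extra_short:,.0f}")  # $3,000,000
```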
Figure 9.1 Splitting the risk of a stock or a stock portfolio into a market risk component and a specific risk component
Dollar neutrality is extremely appealing because of its simplicity. It has the great benefit of being directly verifiable, as the initial value of the investments is observable, at least to the hedge fund manager. But is it sufficient to make a portfolio market neutral? The answer requires closer examination of some of the unobservable risk characteristics of the long and short parts of our portfolio.

9.1.2 Beta neutrality

A commonly used risk-based definition of market neutrality relies on beta: a portfolio is said to be market neutral if it generates returns that are uncorrelated with the returns on some market index. Since beta is calculated from the correlation coefficient, a zero correlation implies a zero beta.
To create such a beta neutral fund, it is necessary to go back to the basics of Modern Portfolio Theory (MPT). According to MPT, the volatility of a stock (or a portfolio of stocks) can be decomposed into a market risk component and a specific risk component. The market risk component depends on the volatility of equity markets as well as on the market risk exposure, which is measured by the beta coefficient.102 The specific risk component is independent of the market and normally gets diversified away at the portfolio level (Figure 9.1).
The beta of a portfolio is a weighted average of the betas of its component stocks. Consequently, being dollar neutral does not necessarily guarantee that the portfolio will be insensitive to the market return, i.e. will have a beta equal to zero. It all depends on the beta of the long and the short positions. For instance, if the beta of the long position is 1.4 and the beta of the short position is 0.7, an equal dollar allocation between the two will have a net beta of 0.35 = (50%) × (1.4) − (50%) × (0.7). This positive beta implies that the market risk of our dollar neutral portfolio is not nil and that its correlation to equity markets is actually positive.
To make our portfolio really beta neutral, we need to size the long and short positions adequately. In our example, given the ratio of the two betas (1.4 versus 0.7), we would need to double the size of the short position relative to the long position. That is, for every dollar in the long position, we would need two dollars in the short position. In this case, the beta will be exactly zero, which means that the systematic risk of the portfolio has been neutralized. Going forward, if our long position appreciates in value and our short position decreases in value, we will still need to adjust the size of our positions on a regular basis to avoid a drift towards a positive beta.
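As an illustration of this sizing rule, the following sketch (in Python, with illustrative names) computes the hedge ratio from the chapter’s betas of 1.4 and 0.7 and verifies that the resulting net beta is zero.

```python
# A minimal sketch of beta-neutral sizing using the chapter's example
# (long beta 1.4, short beta 0.7).

def short_per_long_dollar(beta_long: float, beta_short: float) -> float:
    """Dollars of short exposure per dollar of long exposure that
    bring the portfolio beta to zero."""
    return beta_long / beta_short

beta_long, beta_short = 1.4, 0.7
ratio = short_per_long_dollar(beta_long, beta_short)  # 2.0
net_beta = 1.0 * beta_long - ratio * beta_short       # 1.4 - 2.0 * 0.7 = 0.0
print(f"Short ${ratio:.2f} per $1 long -> net beta = {net_beta:.2f}")
```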
At this stage, the reader may wonder why a hedge fund manager might want to have a beta neutral portfolio. The answer is simple: to take risks only where he has skills. Many hedge fund managers prefer to focus on stock selection where they think they have a competitive advantage, rather than on forecasting the returns of the market or of some of its sub-sectors. Consequently, they prefer to run a portfolio of carefully selected stocks but with no net beta exposure, as this makes them completely independent from the behaviour of equity markets (Box 9.1).103
Box 9.1 An extension of beta neutrality: mean neutrality and risk neutrality
The notion of beta neutrality, or equivalently, correlation neutrality, needs to be taken with extreme caution. Several hedge fund strategies exhibit returns that are closely linked to some market index, but in a non-linear way. In such a case, the traditional linear correlation coefficient – and therefore the beta – will indicate an absence of linear correlation. Investors might conclude that the fund is market neutral or equivalently, that it is independent of the market, while the reality is that the two are closely linked but non-linearly.
As an illustration, consider a fund that always delivers the square of the market return, measured in percentage points. That is, if the market performance is 5%, the fund returns 25% (5 squared); if the market performance is −3%, the fund gains 9%. Such a fund would obviously have a positive correlation with the market when market returns are positive and a negative correlation with the market when market returns are negative. However, the “average” correlation will be zero – and this does not mean market neutrality (Figure 9.2).
Figure 9.2 Linear correlation cannot measure non-linear relationships
A solution to deal with such non-linear relationships is to extend the definition of neutrality to consider any function of the market returns in the analysis. That is, a hedge fund may be said to be market neutral if it generates returns that are uncorrelated with any function – linear or not – of the returns on some given market index.104 This is often referred to as “mean neutrality”, because it implies that the expected return of the hedge fund is unpredictable given the return of the market. This can easily be tested using non-parametric regressions or Taylor series approximations.
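As a rough sketch of such a test – assuming statsmodels is available and using simulated data that mimic the “square” fund of the example above – one can regress the fund’s returns on the market return and its square and inspect the coefficients; a significant loading on the squared term reveals the hidden non-linear dependence despite a near-zero beta.

```python
# A hedged sketch of a second-order (Taylor-series style) neutrality test.
# Data are simulated for illustration; `fund` and `market` would normally
# be observed monthly return series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.04, 120)            # simulated market returns
fund = market**2 + rng.normal(0, 0.001, 120)   # the "square" fund of Figure 9.2

X = sm.add_constant(np.column_stack([market, market**2]))
res = sm.OLS(fund, X).fit()
print(res.params)   # coefficient on market ~ 0 (zero beta)...
print(res.pvalues)  # ...but the squared term is highly significant
```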
For the sake of completeness, we should also mention that some hedge fund investors are also seeking some risk neutrality. That is, they want to avoid having the risk of their hedge funds increasing at the same time as the risk of a market index. The term “risk” can be defined in terms of variance, but also in terms of downside risk, value at risk, or even returns in extreme market conditions.

9.1.3 Sector neutrality

Although a portfolio with a zero beta is theoretically market neutral, all practitioners know that it is still exposed to the risk of losing money – for instance, if the long positions are in a sector that suddenly plunges and the short positions are in another sector that goes up. In addition to sector bets, value and growth biases or capitalization exposures may also lead a portfolio to underperform despite strong returns on the broad market. In 1998, for instance, the extreme difference between the performance of growth and value stocks hurt beta-neutral managers with a value tilt, even though their total long exposure exactly matched their total short exposure.
To avoid that risk, it is necessary to go one step further and balance the long and short positions in the same sector or industry. This preserves the beta neutrality at the aggregate level, but also adds sector neutrality. Similarly, practitioners may also consider the market capitalization of the stocks in their portfolio to ensure that it is capitalization neutral, or the value/growth attributes of their longs and their shorts to ensure that it contains no biases.105

9.1.4 Factor neutrality

Factor neutrality is, in a sense, the ultimate and most quantitative step of equity market neutral strategies. Where practitioners had the intuition to use sector or capitalization exposures to attempt to strengthen the neutrality of their portfolios, quantitative portfolio managers use sophisticated factor models to determine the precise sources of risk in their portfolios, to quantify their exposures to these sources, and eventually to neutralize them. To a first approximation, factor models can be seen as formal statements about how security returns are generated. The basic premise of a factor model is that similar stocks display similar returns because they are influenced by common factors. Factor models identify these common factors and determine each stock’s return sensitivity to them. They also provide estimates of the variances, covariances, and correlations among common factors, which are very useful to quantify the overall risk of a portfolio and split it by source.
To create a factor neutral portfolio, it is necessary to identify beforehand a series of factors that influence the returns of individual stocks. The simplest model is obviously the market model, where only one factor, the market, is common to all stocks and explains their correlation. However, empirical observation and academic research suggest that there are other factors beyond the market influence. Some of these common fluctuations are explained by fundamental characteristics of the portfolio: stocks in the same industry tend to move together, value stocks tend to move together, growth stocks tend to move together, small caps tend to move together, and so on. Some of these common fluctuations are also explained by more general economic factors such as oil prices, the level of interest rates, inflation, etc. Since market risk is obviously not responsible for these common behaviours, specific risk must be the place to investigate.
Figure 9.3 Breakdown of an equity portfolio risk
Multi-factor models simply decompose the “old” specific risk of Figure 9.1 into additional sources of risk, namely common factor risks and residual specific risk (Figure 9.3). Common factor risks represent forces that are not linked to market risk, but still have a common influence on subgroups of stocks. Examples of such forces include their sector (biotechnology, energy, etc.), but also some macro factor risks (oil prices, the level of interest rates, etc.) as well as some micro factor risks such as the market capitalization of the company, its price to book (P/B) ratio, its price to earnings (P/E) ratio, etc. Residual specific risk then captures a refined source of risk derived from forces that uniquely influence an individual company.
A factor model allows quantitative portfolio managers to statistically construct a portfolio having the highest expected excess return while being neutral to a selected series of underlying factors. How does this work? As an illustration, let us consider the Barra Integrated model for the US stock market. This is a commercial factor model with 55 sector factors (each firm may participate in up to 6 sectors) and 13 common risk factors (variability in markets, success, size, trading activity, growth, earnings/price, book/price, earnings variation, financial leverage, foreign income, labour intensity, dividend yield, and low capitalization). Each month, Barra supplies the evolution of these 68 factors as well as the sensitivities (the betas) of all the US stocks to each of these factors.
The betas of a given portfolio to the respective factors are easily obtained by a weighted average of the component stocks’ betas. If some of the portfolio betas are not equal to zero, then the portfolio is not neutral to the corresponding factors. For a long/short equity portfolio to be truly market neutral, the manager must therefore extend his risk controls beyond market risk to include all the common factor sources of risk (Table 9.1).
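A minimal sketch of this aggregation step is shown below; the factor names, betas and weights are purely illustrative (not actual Barra data), and signed weights encode short positions.

```python
# A minimal sketch of aggregating stock-level factor betas into
# portfolio-level factor exposures. All numbers are illustrative.
import numpy as np

factors = ["market", "size", "momentum", "growth"]
stock_betas = np.array([       # rows = stocks, columns = factors
    [1.2,  0.5, -0.1, 0.3],    # long stock A
    [0.9, -0.2,  0.4, 0.1],    # long stock B
    [1.1,  0.4,  0.2, 0.5],    # short stock C
])
weights = np.array([0.5, 0.5, -1.0])   # signed weights (negative = short)

exposures = weights @ stock_betas      # weighted average of stock betas
for name, expo in zip(factors, exposures):
    flag = "" if abs(expo) < 0.05 else "  <- not neutral to this factor"
    print(f"{name:>8}: {expo:+.2f}{flag}")
```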
Of course, there is always a limit to market neutrality. As more risk factors are hedged away, the opportunity set to add value is reduced. Ultimately, if all risk factors are perfectly hedged, the portfolio becomes risk free and should theoretically yield the risk-free rate, minus transaction costs. Market neutrality is therefore a trade-off between eliminating some undesirable risk sources and reducing the set of return generating opportunities. For skilled quantitative managers, market neutral is a comfortable space to operate in, because it allows them to avoid taking risk in areas where they do not have skills while maintaining risk exposure where they have a competitive advantage.
Table 9.1 Example of some of the risk exposures (betas) of a long/short equity fund. Note that each action to make the portfolio market neutral with respect to one factor will influence the exposure to the other factors
Risk factor | Exposure | Commentary
Size | 0.25 | The portfolio has a large cap bias. To make it market neutral, the manager should sell short some large caps.
Momentum | −0.14 | The portfolio has a bias towards shares that have recently performed relatively poorly. To eliminate it, the manager should sell short some past losers, or buy some past winners.
Market | 0.11 | The portfolio has a small residual market risk. To make it market neutral, the manager should sell some index futures.
Growth | 0.02 | The portfolio has a very small bias towards growth stocks. To eliminate it, the manager should sell short some growth stocks, or buy some value stocks.

9.1.5 A double alpha strategy

Market neutral strategies are often termed “double alpha strategies”, because they aim to achieve a zero beta exposure to a set of specified risks while harvesting two alphas, or active returns – one from the long position and one from the short position (Figure 9.4). Additional returns are accrued from interest earned on the non-invested cash balance that is maintained for fund liquidity purposes plus an interest rebate earned on the cash proceeds from the short sales that are held as collateral. The final result is often suggested as a substitute for fixed income allocations, or even viewed as an enhanced cash equivalent within an investor’s asset allocation plan. It will act as such as long as the sum of the two alphas remains positive.
Figure 9.4 The double alpha strategy

9.2 EXAMPLES OF EQUITY MARKET NEUTRAL STRATEGIES AND TRADES

9.2.1 Pairs trading

Pairs trading is probably the most primitive form of equity market neutral strategy. Its origins can be traced back to the early 1920s, when the legendary trader Jesse Livermore made a fortune in what he called “sister stocks”. His investment rules were simple and can be summarized as follows: (i) find stocks whose prices should normally move together; (ii) take a long/short position when their prices diverge sufficiently; and (iii) hold the position until the two stock prices have converged, or a stop loss level has been hit.106 Today, the heirs of Jesse Livermore still follow closely in his footsteps. Their view is that two securities with similar characteristics that tend to move together, and whose relative prices form an equilibrium, can only deviate temporarily from that equilibrium. Therefore, whenever their spread becomes large enough from a historical/statistical perspective to generate the expectation that it will revert to its long-term average level, they can profit by establishing a long/short position. In a sense, theirs is a mean-reversion strategy, making a call on the relationship between two securities. Note that although pairs trading does not explicitly require market neutrality, it is often constrained to be at least dollar neutral by the hedge funds that implement it – either each pair is dollar neutral, or there is a systematic hedge overlay at the portfolio level, i.e. selling or buying index futures to neutralize the residual market exposure of the long/short portfolio.
The success of pairs trading depends heavily on the approach chosen to identify potential profitable trading pairs, i.e. model and forecast the time series of the spread between two related stocks. There is a variety of approaches, and the choice of one of them often depends on the background of the fund manager. For instance, the first equity market neutral funds were run by managers with a pure stock-picking background. Not surprisingly, they chose to approach stocks using a fundamental valuation perspective. For instance, they analysed each company in a given sector against all its competitors, and established a long/short position by purchasing the most undervalued company and selling short the most overvalued one. The process was then repeated across sectors, and each position was held until the spread between the associated companies had sufficiently reverted, or a stop loss level had been reached. More recently, numerous statisticians have entered the equity market neutral space. Since their competitive advantage is in time series analysis rather than in fundamental valuation, they often use purely statistical models to identify pairs whose two components deviate sufficiently. Using statistics and being systematic in the application of a model allows them to cover a large investment universe without being exposed to incorrect discretionary judgements, but it also implies that the strategy no longer has the flexibility of incorporating prior economic or financial knowledge in representing the relationship between the two time series.
Most of these models use some sort of distance function to measure the co-movements within pairs of securities. The simplest distance between two stocks is the tracking variance, which is calculated as the sum of squared differences between the two normalized price series.107 The position in a pair is initiated when the distance reaches a certain threshold, as determined during a formation period. For instance, this threshold distance could be two historical standard deviations away from its mean, as estimated during the formation period, or be specified as a certain percentile of the empirical distribution. The pair is closed when the distance reaches another threshold, either with a gain (the mean reversion occurred) or with a loss (a stop loss level was hit).
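A compressed sketch of such a rule is given below, assuming two pandas price series; the threshold logic mirrors the description above (entry at two formation-period standard deviations from the mean spread), while stop losses and exits are left out for brevity.

```python
# A hedged sketch of a distance-based pairs signal. `prices_a` and
# `prices_b` are assumed daily price series aligned on the same dates.
import pandas as pd

def pair_signals(prices_a: pd.Series, prices_b: pd.Series,
                 formation: int = 252, entry_sigmas: float = 2.0) -> pd.Series:
    # Normalized price series: cumulative gross return from the first day
    norm_a = prices_a / prices_a.iloc[0]
    norm_b = prices_b / prices_b.iloc[0]
    spread = norm_a - norm_b

    # Entry threshold estimated over the formation period only
    mu = spread.iloc[:formation].mean()
    sigma = spread.iloc[:formation].std()

    trading = spread.iloc[formation:]
    # +1: long A / short B (A looks cheap), -1: the reverse, 0: flat
    signal = pd.Series(0, index=trading.index)
    signal[trading < mu - entry_sigmas * sigma] = 1
    signal[trading > mu + entry_sigmas * sigma] = -1
    return signal
```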
Figure 9.5 An example of pair trading. The upper graph shows the normalized price series of the two stocks, and the bottom graph shows the profit and loss as well as the exposure in the two stocks
As an illustration, consider the example of Figure 9.5. The upper graph shows the normalized price series of two related stocks. A normalized price series starts at 1000 and increases or decreases by the stock’s gross return compounded daily. Most of the time, the two normalized price series tend to move together. However, on several occasions the normalized prices of the two stocks differ from each other by more than the trigger value (two standard deviations of historical price divergence). On each of these occasions, a position is opened: the most expensive stock is sold and the least expensive is purchased. The bottom graph shows when, and for how long, a position remains open. It also shows the cumulative return of this pairs-trading strategy. Note that there are flat (no profit) periods when the pair’s position is not open, but this is usually not a concern at the portfolio level because other pairs will be open during these periods.
Of course, more complex distance functions can also be used. Let us mention the co-integration approach, which models a long-run equilibrium relationship between the two stocks,108 or the stochastic spread approach, in which the evolution of the spread between two stocks is explicitly modelled as a continuous time stochastic process exhibiting some form of mean reversion.109 This latter approach is extremely convenient for forecasting purposes as well as for calculating information such as the expected holding period and the expected return of each pair. Alternatively, some pairs traders also like to use the orthogonal regression approach (Box 9.2) to measure the distance between two stocks.
Box 9.2 Orthogonal regression
Linear regression models try to find the line of best fit through the historical returns of two stocks (R1, R2). The usual regression model assumes a causal relationship from R1 (independent variable) to R2 (dependent variable), and finds the line of best fit by minimizing the deviations of the R2 values, i.e. the vertical distances. However, this is not the best way to model a stock pair relationship. When regressing one stock price on another, a more realistic assumption is that the two variables are interdependent, without a known causal direction.
Figure 9.6 Illustration of orthogonal regression
Orthogonal regression (Figure 9.6) treats the two stocks equally. It finds the line that minimizes orthogonal (perpendicular to the line) distances, rather than vertical distances. Technically, it minimizes the sum of the squared R1 and R2 deviations, rather than just one variable’s deviation.
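For the technically minded, a minimal sketch of orthogonal regression (also known as total least squares) follows; it recovers the best-fit line as the leading principal component of the data, one standard way of implementing the idea in this box.

```python
# A minimal sketch of orthogonal (total least squares) regression:
# the best-fit line is the direction of largest variance of the
# (R1, R2) pairs, i.e. the leading eigenvector of their 2x2
# covariance matrix.
import numpy as np

def orthogonal_fit(r1: np.ndarray, r2: np.ndarray) -> tuple[float, float]:
    """Return (slope, intercept) of the line minimizing perpendicular
    distances to the points (r1, r2)."""
    cov = np.cov(r1, r2)                    # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    vx, vy = eigvecs[:, -1]                 # eigenvector of largest eigenvalue
    slope = vy / vx                         # assumes the line is not vertical
    intercept = r2.mean() - slope * r1.mean()
    return slope, intercept
```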
Data snooping is obviously an important issue when forming such pairs-trading rules. This is why the entry and exit rules for any pair should be based on sensible assumptions, and not just be the result of any back-tests or simulations. Remember that an in-sample optimal trading rule may not remain optimal out-of-sample. Moreover, one of the main risks involved with pairs trading based only on statistical analysis is that a fundamental change in the relationship between the two stocks can get masked and the trader can enter positions when the prices are not expected to revert to historical means. This can happen when, for example, there is a fundamental change in the strategy of one of the companies as a result of which the price level changes permanently.
Surprisingly, the profitability of pairs trading now seems to be well established – see Box 9.3. This stands in complete contradiction with the weak form of market efficiency, as a relatively simple rule based purely on the behaviour of historical prices and their expected mean reversion seems sufficient to make money. More puzzling is that this profitability does not arise simply from mean reversion – otherwise a systematic contrarian strategy, e.g. buying past losers and selling short past winners, would also be highly profitable, which is not the case, at least not over some of the periods considered. So far, the most convincing explanation is qualitative: pairs-trading profits would indirectly be related to some sort of “systematic dormant factor” due to the agency costs of professional arbitrage, i.e. the compensation for keeping prices in line. However, the level of that compensation still seems high.
Box 9.3 Is pairs trading profitable?
Surprisingly, although pairs trading has been widely implemented by traders and hedge funds, there is very little academic research that realistically tests its implementation. One exception is Gatev et al. (1999), who offer a comprehensive analysis based on the long-term systematic application of a simple distance measure (the tracking variance) which is often used in practice. The three authors begin by defining a one-year observation period, during which they observe normalized stock prices. Each normalized price begins the period at 1 and increases or decreases each day by its compounded daily return. At the end of the one-year observation period, they calculate the distance between the daily normalized time series for every pair of stocks. In a market with 500 listed stocks, this entails calculating 124 750 (= 500 × 499/2) distances. They then rank the pairs by distance and retain the 20 pairs with the lowest distance.
The trading period immediately follows the observation period and lasts for six months. The prices of the 20 pairs of stocks are again initially normalized to 1. Then, the authors wait until some normalized prices diverge sufficiently, i.e. when the distance between a pair of stocks is larger than two historical standard deviations of historical price divergence – “historical” in this case means measured over the formation period. This triggers a signal to open a position for the pair, i.e. sell the higher priced stock and buy the lower priced stock. The position is held open until the next crossing of the prices, or until the trading period ends – being left with an open position is a risk that finite-horizon arbitrageurs face. Since each pair is effectively a self-financing portfolio, an equal dollar amount is initially allocated to each stock, and the position is marked-to-market on a daily basis.
Over the 1962-1997 period, Gatev et al.’s strategy generated an average annualized excess return of more than 12 percent, which exceeds by far any conservative estimate of transaction costs. Andrade et al. (2005) repeated their test out of sample using Taiwan data from 1994 to 2002, and also obtained statistically significant performance.

9.2.2 Statistical arbitrage

Statistical arbitrage can be seen as an extension of the pairs-trading approach to relative pricing. The underlying premise in relative pricing is that groups of stocks having similar characteristics should be priced on average in the same way. However, due to non-rational, historical or behavioural factors, some discrepancies may be temporarily observed. Rather than looking for a few pairs of securities that diverge from their historical relationship, statistical arbitrageurs slice and dice the whole universe of stocks according to several criteria and look for systematic divergences between groups. Their portfolio will typically consist of a large number of long and short positions chosen simultaneously; for instance, they will buy the 20 percent most undervalued stocks and sell short the 20 percent most overvalued according to some criteria, with the aim of capturing the average mispricing between groups.
The criteria selected to slice and dice the universe are the most critical elements of the strategy. In essence, arbitrageurs are trying to use factors that explain historical equity price movements well and also have some predictive power. The challenge is to avoid factors with little explanatory power, or factors that have only a temporary impact, and to rely only on intuitive and significant factors whose empirical performance can easily be documented. Examples of such factors are valuation indicators, growth estimates, leverage, dividend yield, earnings revisions, momentum, etc. Once a factor is selected, the arbitrageur scores the universe of stocks according to it and goes long the top scorers and short the lowest scorers. The resulting portfolio is factor neutral by construction,110 and its performance depends on the factor’s future ability to separate top from bottom performers. Most of the time, this ability is linked to specific market reactions that can be classified as short-term, medium-term and long-term momentum and reversal patterns. In a momentum pattern, past winners/losers are expected to be future winners/losers, while in a reversal pattern, past losers/winners are expected to be future winners/losers.
Such patterns are well known in empirical finance. For instance, over the short run (3 to 12 months), markets seem to favour momentum.111 That is, stocks that have performed relatively poorly in the past continue to lag, and stocks that have performed relatively well in the past continue to perform well. This apparent inefficiency can partly be justified by momentum in earnings announcements, but it also stems from investor overconfidence and other well-documented behavioural finance biases. In any case, it can easily be exploited by a statistical arbitrage strategy. For instance, an arbitrageur could take the S&P 500 companies, sort them according to their past three-month performance and create 50 groups of 10 companies. The first group contains the stocks that have realized the highest return (referred to as “winners”), while the last group contains those that have realized the lowest return (referred to as “losers”). The arbitrage portfolio goes long the first group and short the last group, as in the sketch below. If momentum persists, the arbitrage portfolio will be profitable.
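The sketch below illustrates that sorting step under stated assumptions: `past_3m_returns` is a hypothetical pandas Series of trailing returns indexed by ticker, and the resulting long/short weights are equal-weighted within the winner and loser groups.

```python
# A hedged sketch of a momentum sort: long the top group of past
# winners, short the bottom group of past losers.
import pandas as pd

def momentum_portfolio(past_3m_returns: pd.Series,
                       n_groups: int = 50) -> pd.Series:
    group_size = len(past_3m_returns) // n_groups   # 10 stocks for a 500-stock universe
    ranked = past_3m_returns.sort_values(ascending=False)
    winners = ranked.index[:group_size]             # group 1: long
    losers = ranked.index[-group_size:]             # group 50: short
    weights = pd.Series(0.0, index=past_3m_returns.index)
    weights[winners] = 1.0 / group_size
    weights[losers] = -1.0 / group_size
    return weights    # dollar neutral by construction
```

The same skeleton serves the contrarian and value versus growth trades described next: only the ranking variable (long-horizon past returns, P/E ratio, dividend yield) and the sign of the positions change.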
Mean reversion or contrarian trading is, in a sense, the opposite of momentum trading. It is based on the empirical evidence that price reversals tend to take place two or three years after the formation of a momentum portfolio. Some researchers have argued that mean reversion is in fact the long-term consequence of the price momentum effect – investors overreact in the short term, but realize later that they were wrong and prices will therefore adjust. If this is true, then an interesting arbitrage consists in going long past losers and short past winners, where losers and winners are measured over a longer time horizon (say three years).
Another popular trade of statistical arbitrageurs is the value versus growth bias. Growth companies may temporarily outperform value companies, but over the long run (two to five years), value companies display higher average returns.112 A statistical arbitrageur who expects this pattern to persist and has a sufficient time horizon could attempt to profit by going long value stocks and short growth stocks. For instance, he could take the S&P 500 companies, sort them according to their price/earnings (P/E) ratio or their dividend yield and create 50 groups of 10 companies. The first group will contain the stocks with the highest value attributes, while the last group will contain the stocks with the highest growth attributes. Our arbitrageur could then go long the first group and short the last group. If value continues to outperform growth over the long run, his portfolio will be profitable.
Of course, these strategies seem relatively simple, but the devil is in the details. The generic idea might be straightforward to understand, but the implementation is not. In particular, each of these trades relies on selection rules that must be carefully calibrated to market data in order to identify the optimal length of the observation period, the optimal number of groups to create and the most efficient way to structure and rebalance the portfolio. Most statistical arbitrageurs spend a lot of time fine-tuning and back-testing their selection rules. Note that they do not have to limit themselves to a single rule. As long as their time horizons differ, momentum and contrarian strategies can coexist in a portfolio, very much like commodity trading advisers using several trading rules. Indeed, momentum trading works well in trending markets (a pro-cyclical strategy) while contrarian trading comes into action when prices revert to more sustainable levels (an anti-cyclical strategy). Mixing them may actually smooth out the performance of the portfolio. For instance, Figure 9.7 shows the back-test of a strategy that aims to systematically exploit over- and under-reactions in the market by arbitraging short-run momentum and a medium-run reversal in the S&P 500 stocks. The strategy performed remarkably well from 1986 to 2002.
The next question, of course, is whether it will continue to perform as well in the future.

9.2.3 Very-high-frequency trading

With the increased availability of real time market information and computing power, automated trading has attracted the interest of a growing number of equity market neutral hedge funds in recent years. Automated trading greatly facilitates the arbitrage of multiple markets and timeframes. For instance, our momentum strategy could easily be applied to different markets simultaneously, without running up against human limitations, e.g. clicking the mouse fast enough and managing thousands of trades. It could also capture very short-term opportunities, i.e. momentum that could last a few minutes or even a few seconds. For example, analysing the percentage of trades in the last 15 seconds that have been conducted at the bid and offer and comparing that with current market depth can offer a useful indication of short-term market direction.
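A toy version of that bid/offer indicator might look as follows; the trade records are an assumed list of (timestamp, price, bid, ask) tuples, and the trade classification is deliberately simplistic.

```python
# A hedged sketch of a short-term order-flow signal: the imbalance
# between trades executed at the offer (buyer-initiated) and at the
# bid (seller-initiated) over the last 15 seconds.
def trade_imbalance(trades, now, window=15.0):
    recent = [t for t in trades if now - t[0] <= window]
    if not recent:
        return 0.0
    at_ask = sum(1 for ts, price, bid, ask in recent if price >= ask)
    at_bid = sum(1 for ts, price, bid, ask in recent if price <= bid)
    # +1 = all buyer-initiated (upward pressure), -1 = all seller-initiated
    return (at_ask - at_bid) / len(recent)
```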
However, being successful in very high frequency trading requires four elements: brainpower (to design the trading rules or the learning algorithms), high-frequency historical data (to test the trading rules), computing power (to apply the selected trading rules in real time) and best execution (to limit as much as possible trading costs and slippage). In our opinion, very few firms have been successful at combining these four elements. One of them, of course, is Renaissance Technologies – see Box 9.4.
Figure 9.7 The AlphaSwiss Montreal Index describes the out-of-sample back-test of the Momentum/Reversal-Alpha (MONTREAL) model. The MONTREAL strategy is a quantitative market-neutral US equity strategy developed on the basis of behavioural finance models in 2001 by AlphaSwiss Asset Management, Switzerland. The above track record assumes transaction costs of 0.10%, a borrowing rate of 0.80%, a 0.26% p.a. administration fee, a 1.50% p.a. management fee and a 20% performance fee with a high water mark
Box 9.4 James Simons and Renaissance Technologies
Renaissance Technologies is one of the few firms that have succeeded in providing great returns over several years by using only mathematical and statistical models for the design and execution of their investment programme. Renaissance Technologies was founded in 1982 by James H. Simons to focus on the use of mathematical methods. Simons had a long and impressive scientific career, with a PhD in mathematics from the University of California at Berkeley and several years of research in the fields of geometry and topology. He received the American Mathematical Society Veblen Prize in Geometry in 1975 for work that involved a recasting of the subject of area-minimizing multidimensional surfaces – a consequence was the settling of two classical questions, the Bernstein Conjecture and the Plateau Problem. Simons also discovered certain measurements, now called the Chern–Simons invariants, which are widely used, particularly in theoretical physics. He then became a cryptanalyst at the Institute for Defense Analyses in Princeton, taught mathematics at the Massachusetts Institute of Technology and Harvard University, and later chaired the Mathematics Department at the State University of New York at Stony Brook.
In 1989, Renaissance launched three computer-based funds called the Medallion Fund, the Nova Fund and the Equimetrics Fund. Medallion initially specialized in currencies, futures and commodities, and later on expanded to equities and options. In 1993, Medallion managed $280 million and closed its doors to non-Renaissance employees. In 1997, the Nova Fund was merged into Medallion, followed by Equimetrics in 2002.
Figure 9.8 Evolution of $100 invested in the Medallion Fund compared to the S&P 500
The track record of Medallion is simply phenomenal (Figure 9.8). Despite the highest management and performance fees in the industry, the fund has returned more than 30% per annum after fees. Capital has been returned to non-employee investors on a regular basis to maintain the fund size at $5 billion. In December 2005, the fund finally returned the last external investors’ money and has run only its own capital since.
The operational setup of Medallion is as impressive as its performance. For its technical and trading operations, Renaissance Technologies has a 115 000 square foot campus-style building on a company-owned property of 50 acres close to Stony Brook University, as well as backup in Manhattan. The research environment includes a cluster of 1000 processors and five large servers, supported by 150 terabytes of disk space, while the trading environment includes a cluster of 48 processors and 55 Sun machines directly connected to exchanges and brokers. The fund’s 39 researchers all have PhD degrees in mathematics or hard sciences – if he wanted to, Simons could launch his own space programme. But they are exclusively focused on short-term prediction, cost modelling, risk modelling, optimization and simulation.
In the fall of 2003, James Simons and his team started working on a new fund, this time focused on slower-frequency trading and equities, with a longer bias than Medallion. Renaissance Institutional Equity Fund was launched on 1 August 2005, with a target size modestly announced at ... $100 billion. Its $20 million minimum investment commitment gears it to institutions. As for returns, the fund had a slow start, gaining only 5% in 2005.

9.2.4 Other strategies

Several other hedge fund strategies are intended to be market neutral to some extent. Let us mention merger arbitrage, which consists of trading pairs of securities related by an expected merger or takeover offer, or convertible arbitrage, which trades a convertible bond and its associated stock. We will review these strategies in their respective chapters.

9.3 HISTORICAL PERFORMANCE

The historical performance of equity market neutral hedge funds has been impressive, particularly in risk-adjusted terms. Over the January 1994 to December 2005 period, equity market neutral hedge funds – as measured by the CS/Tremont Equity Market Neutral Index – delivered an average return of 9.92% p.a., with a volatility of 2.96%. By contrast, over the same period, the S&P 500 delivered an average return of 8.6% p.a., with a volatility of 16.0%, and the CS/Tremont Hedge Fund Index delivered an average return of 10.7% p.a., with a volatility of 8.1% (see Figure 9.9 and Table 9.2).
Figure 9.9 Evolution of the CS/Tremont Equity Market Neutral Index, 1994-2005
Table 9.2 Performance comparison of the CS/Tremont Equity Market Neutral Index, the S&P 500 and the Citigroup World Government Bond Index, 1994-2005
Figure 9.10 Return distribution of the CS/Tremont Equity Market Neutral Index, 1994-2005
Table 9.3 Monthly returns of the CS/Tremont Equity Market Neutral Index, 1994-2005
Figure 9.11 Drawdown diagram of the CS/Tremont Equity Market Neutral Index compared to the S&P 500, 1994-2005
Figure 9.12 Comparison of the 12-month rolling performances of the CS/Tremont Equity Market Neutral Index with the S&P 500, 1994-2005
The track record of equity market neutral hedge funds is remarkably consistent over the years, although returns have been slightly declining since 2001. As a result, the excess skewness and kurtosis are very small, and the return distribution can be approximated by a normal distribution (see Figure 9.10 and Table 9.3).
The maximum drawdown of the strategy is also extremely small (−3.55%) and does not seem related to equity market drawdowns. Lastly, the 12-month rolling returns evidence the relative attractiveness of the track record (see Figures 9.11 and 9.12).