CHAPTER 8
Rogue Computer

Faulty software led Knight Capital Group to record thousands of erroneous trades with the NYSE, leading to huge losses and imminent bankruptcy.1 The Facebook IPO was going to be the IPO to end all IPOs. On the day itself, however, it all went wrong due to software glitches.2 The error's ripple effects lasted for days, and some market participants experienced significant losses. Another problematic IPO in 2012, though less high profile, was the BATS (Better Alternative Trading System) public offering, which had to be canceled due to software glitches. These events threw a spotlight on the critical role technology plays in executing trades and on the operational risk it can introduce.

History of Technology in Public Markets and Increasing Risk

It is doubtful that any CEO of a major brokerage house at the beginning of 2012 would have listed technology failures in sales and trading and IPOs as leading candidates for public relations disasters and major financial losses. Yet in the space of a few short months, the BATS IPO, the Facebook IPO, and Knight Capital all grabbed headlines and losses for the wrong reasons. Given the rapid growth of technology in all aspects of the market, however, such events should not have been a big surprise. While attention is still focused on them, it is worthwhile to review recent developments and what can be done to address these issues.

The use of technology has been a feature of public stock exchanges since they were first established in Europe over 400 years ago. Such exchanges were enabled by a combination of demand, regulation, and technology. In the twentieth century, telephones and ticker tapes led the way to rapid volume growth.

Modern computer technology was introduced in the 1970s, and its associated efficiencies hastened the onset of much greater trading volumes. All of these developments took place within a market that was dominated by several major players, such as the New York Stock Exchange and the London Stock Exchange, which innovated to stay competitive.

The introduction of screen‐based automated quotation systems took place in the mid‐1980s. In London, this was known as the Big Bang.3 This was quickly followed by the stock market crash in 1987, known as Black Monday.4 This was an early harbinger of the relationship between technical advance, trading volume growth, and more rapid and larger swings in the prices quoted on the trading floor or on the trading screen. Many critics expressed the view that the crash was directly caused by a feedback technology loop that had the effect of compounding price changes. There was some evidence for that view, but since then trading technology has only leapt further ahead, enabling the proliferation of a vast array of new players and new dynamics into the market.

First, electronic markets were one of the key new power groups in the trading marketplace; they reduced the hegemony of the traditional exchanges and the longstanding nature of their conventions, middlemen, practices, and cost structures.5 Second, the retail stock trader became a much‐noted feature of the market in the wake of these technology developments. The low cost of placing trades and the easy availability of market information made stock trading a much more attractive proposition to ordinary retail investors. Third, hedge funds, with their endless appetite for trading and penchant for secrecy, drove much of the build‐out of dark pools (so called because they conceal the orders coming into the market) as well as of superfast trade execution engines.6 Hedge funds introduced high‐frequency trading strategies, and the related algorithmic trading patterns7 brought back the specter of volume and price dysfunction.

Lastly, technology IPOs, especially the wildly popular ones released on a diversity of new platforms, introduced an element of surprise and excitement into the normally conservative proceedings of the market.

With all this innovation, technology became a major source of risk and potential disruption by the early 2000s. The Flash Crash in May 2010 was a wake‐up call to regulators and market participants alike.8 A rogue algorithmic trade appeared to send US markets sharply down in the space of 20 minutes before they rebounded in a similar period. The SEC introduced a number of measures designed to bolster market stability and investor confidence, including a break‐the‐glass utility to halt trading in cases of precipitous falls in stock value and an improved audit trail of the market actions preceding such events.

However, like the introduction of safety belts and air bags in cars, these innovations likely only encourage participants to drive faster. Furthermore, writing rules for every eventuality and identifying every flawed practice in every participant is not possible. The Knight Capital and Facebook cases are only the most well‐known and significant in a series of high‐frequency market‐making, algorithmic trade, and IPO malfunctions that have occurred on different platforms in each region of the globe (see Table 8‐1). The challenges and problematic scenarios for regulators and market participants are clearly broader than simply the popular bogeyman—high‐speed trading.

Table 8‐1 Losses from IT Failures

Firm             Year    Loss (in US$ billions)
Knight Capital   2012    ∼$0.4
NASDAQ           2012    ∼$0.6
BATS             2012    Unknown
Flash Crash      2010    Unknown

Flash Crash 2010

How quickly one executes trades has become a major source of competitive advantage. In equity markets, for many players, it is not what you know but how quickly you can act on it. Nanoseconds matter. By locating the servers that route trades ever closer to the exchange and by building faster computers, market makers can grow their market share. Algorithms are deployed that enable market participants to execute trades using preset parameters based on volume levels, price changes, volatility indicators, and so on. Many of these algorithms are similar, so that, given certain prevailing market conditions, firms will start to execute in the same direction at the same time. Most of the time, the impact of such behavior is limited, since prices go up and down, for the most part, in a continuous way: down like a waterfall and up like an airplane. What happens, however, when price discontinuity is introduced into the market, when a plane plunges from 20,000 feet to a hundred feet? The consequences are generally unknowable, because the number of algorithmic traders is huge and knowing how they will react to a certain set of highly unlikely market conditions is impossible.
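The preset-parameter logic described above can be sketched in a few lines. Everything here, the parameter names, thresholds, and actions, is a hypothetical illustration rather than any firm's actual trading logic; the point is that when many firms run similar rules, the same market conditions push them all in the same direction at the same moment.

```python
# Hypothetical sketch of a preset-parameter execution trigger.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TriggerParams:
    max_price_move: float   # fractional price change that triggers action
    min_volume: int         # volume level that must be exceeded
    max_volatility: float   # volatility ceiling before the algo stands down

def decide(price_change: float, volume: int, volatility: float,
           p: TriggerParams) -> str:
    """Return an action from preset parameters. Because many firms run
    similar rules, the same inputs fire the same action across firms."""
    if volatility > p.max_volatility:
        return "stand_down"          # conditions too wild to trade
    if volume > p.min_volume and price_change <= -p.max_price_move:
        return "sell"                # momentum-style reaction to a drop
    if volume > p.min_volume and price_change >= p.max_price_move:
        return "buy"
    return "hold"

params = TriggerParams(max_price_move=0.02, min_volume=10_000,
                       max_volatility=0.5)
print(decide(-0.03, 12_000, 0.2, params))  # a 3% drop on heavy volume
```

Run across thousands of participants with near-identical parameters, a single sharp drop turns into a synchronized wave of selling, which is exactly the feedback loop the text describes.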

In May 2010, when a computerized algorithm burst into a selling frenzy, the consequences could probably have been a lot worse. The rogue algorithm set off a tremendous wave of selling activity, and the prices of certain securities underwent, in apparent reaction, drops of very significant proportions. Indeed, certain securities, Accenture's stock for example, fell to nearly zero value in the space of seconds. Such a drop in value was not related to changes in Accenture's profits or its perceived value. No doubt, Accenture consultants were no more or less busy on this day than any other. In fact, the losses were caused by an automated chain reaction resulting from the unusual levels of activity in apparently unrelated trades.

What the episode showed, as in the market crash that followed the bursting of the housing bubble, is that today all markets are interconnected. But maybe it was ever thus, and the innovations brought by technology merely speed up (to nanoseconds) the falling of the dominoes before a course correction takes place. In this instance, at least, market losses were minimal, due in no small part to the sensible and prompt actions taken by market regulators. Based on certain criteria, buyers and sellers were restored to the positions they held before the crash.

However, people were spooked: investors, regulators, banks, and Washington. The concern was that this was yet another example of a risk, like insider trading, like price manipulation, like rogue trading, that left market participants at the mercy of forces beyond their control. If no rational explanation could be found, the fear of recurrence could have a chilling effect on the market. The Senate ordered an investigation, as did the CFTC, SEC, and the Federal Reserve. What was the cause, and how could it be fixed?

The Senate committee that led the investigation concluded that the direct cause of the crash was an algorithm of a futures firm that executed massive sales of index futures.9 Regardless of the cause, circuit breakers (break‐the‐glass utilities) were implemented to halt market activity when prices move beyond set thresholds within a short period. Every brokerage house was forced to comply.
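A price-based circuit breaker of this kind can be sketched simply. The 10 percent threshold and five-minute window below are illustrative assumptions, not the SEC's actual parameters; the mechanism, halting when the price moves too far within a look-back window, is the general idea.

```python
# Minimal sketch of a price-based circuit breaker; thresholds and
# window length are illustrative, not any regulator's actual values.
from collections import deque

class CircuitBreaker:
    def __init__(self, max_move: float, window_seconds: float):
        self.max_move = max_move          # e.g. 0.10 for a 10% move
        self.window = window_seconds      # look-back window in seconds
        self.prices = deque()             # (timestamp, price) pairs

    def record(self, price: float, now: float) -> bool:
        """Record a price observation; return True if trading should halt."""
        self.prices.append((now, price))
        # discard observations older than the look-back window
        while self.prices and now - self.prices[0][0] > self.window:
            self.prices.popleft()
        oldest = self.prices[0][1]
        move = abs(price - oldest) / oldest
        return move > self.max_move

cb = CircuitBreaker(max_move=0.10, window_seconds=300)
cb.record(100.0, now=0.0)            # within bounds, no halt
halted = cb.record(85.0, now=60.0)   # 15% drop in a minute: halt
```

The key design point is that the breaker looks at the move over a window rather than tick to tick, so a gradual decline trades through while a discontinuous plunge trips the halt.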

Other important concerns have emerged since then—namely, that the speed of certain players was such that they were able to gain an unfair market advantage over other market participants.10 The advantage was measured in pennies, but pennies add up when applied to trades worth millions of dollars. Given both these issues, the SEC conducted examinations of the high‐speed trading operations of the large investment banks and broker‐dealers. The goal was as much fact‐finding as anything else. There had been so much innovation in the market that the SEC was no longer able to monitor market developments. Examiners went in with many questions. The banks and the employees answering them were confused: What were the examinations really designed to achieve, and what was the focus? Be that as it may, without requiring any significant changes from the front office, IT departments received the brunt of examiners' requests: more rigor when changing the software code that controlled trading activities, improved security, and greater regulatory scrutiny over trading algorithms.

So then the same thing couldn't happen again. Right? No, that would be wrong.

Knight Capital 2012

In 2012, Knight Capital made an apparent error in releasing software updates to the live market environment. As a result, the firm was taken on a buying spree of securities it had no wish to own and had never even considered buying. Eventually, the positions added up to a financial obligation beyond Knight Capital's means. The errant program had, evidently, locked the firm into price points above the market price. Unlike in the 2010 Flash Crash, exiting the positions in this instance led to real losses and to the firm's failure and subsequent sale.

The release of new code into the marketplace is a daily occurrence. While the coding of new software enabling a firm to take advantage of small market movements in new and innovative ways is relatively easy, managing the safe release of such software into the live market environment is hard. This was something, at least, that Knight Capital had failed to do. As it turned out, Knight's very survival depended on its ability to do so. It is doubtful, however, that Knight Capital was fully aware of its top risk or the fact that there was zero tolerance for such a failure.

Facebook and BATS

IPOs also gave rise to such problems in two cases: first, with the Facebook IPO and, second, with the BATS IPO.

The case of Facebook was well reported at the time, of course, and is well known. NASDAQ failed to execute the order flow for the newly issued Facebook stock in a timely way. Confirmations, required to tell brokers how many shares they had bought, were not sent out. In one instance at UBS, it was reported that brokers, since they did not receive confirmations, placed the order multiple times, leading to a position many times larger than intended and to losses reported in the press to be as high as $400 million. A colleague described the behavior to me as like a child who keeps pressing a button that doesn't work; it was driven by investors frantic to purchase the stock, and the failure of the market to work properly only exacerbated it. Many other firms and investors were affected by the failure of the technology to work the way it was supposed to. Of course, the stock price dropped after the failure, and it is likely that the technology failures contributed to this, too.
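One standard safeguard against exactly this repeat-order behavior is idempotent order submission keyed on a client order ID. The sketch below is hypothetical, the gateway and its interface are inventions for illustration, but it shows how resending an order that never received a confirmation can be made harmless rather than position-doubling.

```python
# Illustrative duplicate-order protection via client order IDs.
# The gateway and its API are hypothetical, not any exchange's actual one.
import uuid

class OrderGateway:
    def __init__(self):
        self.sent = {}   # client_order_id -> (symbol, qty)

    def submit(self, client_order_id: str, symbol: str, qty: int) -> str:
        """Idempotent submit: resending the same client order ID does not
        create a second position, even if no confirmation ever arrived."""
        if client_order_id in self.sent:
            return "duplicate_ignored"
        self.sent[client_order_id] = (symbol, qty)
        return "accepted"

gw = OrderGateway()
oid = str(uuid.uuid4())
gw.submit(oid, "FB", 1000)   # accepted on first submission
gw.submit(oid, "FB", 1000)   # retry after a missing confirmation: ignored
```

With this discipline, the broker's retry loop, the button pressed again and again, collapses into a single position rather than one many times larger than intended.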

The failure was again diagnosed as stemming from changes introduced into software that were not effectively tested before prime time. The BATS IPO in 2012 was a similar failure. In this case, BATS decided to launch its IPO on its own market platform.11 The failure of the software to handle the IPO was not the best advertisement for the company. Fortunately, the company was able to cancel the IPO with no impact other than to BATS's own reputation.

How Problems Occur

As a manufacturing process, the development of software is still relatively immature. It is also dynamic and generally part of a larger whole, which has a complex set of links and dependencies. These dependencies need to be carefully documented so that an integration with a new version can ensure that the integrity of those links and dependencies is maintained. When one understands that any single one of the links and interlocking parts may also be changed at any time, one can see how carefully the process of changing, communicating, and testing code needs to be managed.

Careful code release into a live market environment should only happen after the completion of thorough testing to ensure the behavior of software in the live environment has been fully vetted. Such risk management is not necessarily practiced by all players releasing new software, nor is it necessarily possible to replicate the live, day‐to‐day trading environment exactly for testing purposes. Furthermore, players may be tempted to rush through final tests to meet delivery deadlines. In some smaller, more recent market entrants, release management protocols may not be fully developed.
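The release discipline this section calls for can be pictured as a staged gate: code reaches the live trading environment only if every check passes, and every deployment carries an off switch. All names and checks below are illustrative assumptions, a sketch of the practice rather than any firm's actual release pipeline.

```python
# Illustrative release gate: deployment halts at the first failed stage,
# and every successful deploy is armed with a kill switch.
# All stage names and the build dictionary are hypothetical.

def release(build: dict, checks) -> str:
    """Run staged checks in order; stop at the first failure."""
    for name, check in checks:
        if not check(build):
            return f"blocked: {name}"    # never reaches the live market
    build["kill_switch_armed"] = True    # always deploy with an off switch
    return "released"

checks = [
    ("unit_tests", lambda b: b.get("tests_pass", False)),
    ("simulated_market_run", lambda b: b.get("sim_ok", False)),
    ("risk_sign_off", lambda b: b.get("signed_off", False)),
]

good = {"tests_pass": True, "sim_ok": True, "signed_off": True}
bad = {"tests_pass": True, "sim_ok": False}
print(release(good, checks))  # released
print(release(bad, checks))   # blocked: simulated_market_run
```

The simulated-market stage is the one most often shortchanged under deadline pressure, and, as the Knight Capital episode suggests, it is also the one a firm's survival can depend on.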

Key Controls

As bad as the events we have discussed here may have seemed, the market may not yet have seen the worst‐case scenario. To address this issue, good IT governance and safe IT release management practices are required from all players releasing software into the marketplace. This will, however, not be sufficient to prevent the reoccurrence of events such as we have recently seen.

Just as with fire drills, people need to be told immediately when a dangerous situation is occurring and what to do when it is. Regulators, markets, investment banks, and other smaller players need to come together to discuss the scenarios that have occurred and those that could occur in the future, and to develop a “break‐the‐glass” set of scenarios, plans, escalation procedures, and on/off switches for when they do. That would be a good start. Players also need to assess their strategic objectives and understand the risks for which they have zero tolerance. If Knight Capital had done such risk assessment work, it perhaps would have put more focus on the sort of controls that underpinned its very existence.

Notes
