Foreword

In many ways, a handbook that helps to contextualize the banking business as a portfolio of risk management activities could not be more welcome or timely. It should be no surprise that the globalization of the financial system has dramatically expanded the scope of risks a bank naturally accumulates in its day-to-day operations. Accordingly, the task of valuing and administering these aggregate risks continues to grow more difficult. Whilst computer technology has at least provided the processing capacity to meet this challenge, the banking industry is continually pressed to develop the analytical theory and hedging tools necessary to cope with risk management’s increasing complexity. The advent and rapid growth of markets such as asset securitization and credit derivatives, for instance, evidence such progress. Hence, any practical study of banking would be incomplete without a proper perspective on the fundamental liquidity, capital, interest rate, and credit risk management techniques in practice today.

With that said, the recent financial crisis has raised many questions around the merits of the so-called ‘advances’ made in valuation and risk theory over the past several decades. I would argue that ‘financial engineering’, as it has so dubiously been labelled, has taken a disproportionate share of the blame as the catalyst for the crisis. As above, these new tools and approaches were born out of necessity and are a natural consequence of the increasingly complex risk landscape of the financial system. So, although convenient, it would be an oversimplification to suggest that the tools of finance, by their own construction, caused the problems outright. More appropriately, one should reflect on how these financial tools were employed and why their natural limitations were insufficiently appreciated. In retrospect, the latter is perhaps the main reason management at the most affected financial institutions did not see the level of risk they were exposed to until it was too late. Put differently, as measures of risk exposure filtered up through these organizations, the degree of sensitivity embedded in those measures was lost in translation. In effect, the worst hit financial institutions experienced a failure of risk management. Risk management, in this sense, refers not only to understanding the value of a financial asset at any given point in time but, more importantly, to understanding the sensitivity of that value to rapidly changing market conditions.

Stepping back, at the core of any true science is a set of basic principles and relationships accepted by all researchers as a foundation to build upon. As a general matter, the science of finance is really no different. For most financial assets (bonds, equities, real estate), the analytical theory used to assess their value at any given moment is fairly well developed and supported by broad consensus. In fact, many of those basic theories are captured in this book. If these theories are so generally accepted, one might ask: why do any two parties ever differ in the values they would assign to the same asset? More concretely, in light of the near-death experience of the financial system, why had the values assigned to financial assets by buyers and sellers (i.e., the ‘bid and offer’) become virtually irreconcilable? In short, valuation disparities tend to be driven more by divergent views on model inputs than by opposing analytical approaches. This should not be too surprising; the financial industry purposefully looks to establish tools that are standardized and widely shared. Pragmatically, standardization with open architecture enhances transparency in an asset class, promoting greater liquidity and increased business volumes.

Irrespective of one’s trust or distrust in current financial theory, it is hard to dispute that the use of poorly founded input assumptions will render even the best model ineffective. As is the case with all scientific models used to predict and quantify an outcome, the quality of the output depends on the reliability of the input. In this regard, extracting dependable inference from historical source data is not a challenge unique to the science of finance. Interpreting historical source data to validate forward assumptions is complicated by sampling inaccuracy, insufficient time series, and reliance on ‘constant conditions’. ‘Constant conditions’, in this sense, refers to the belief that every factor that has influenced historical source data will remain the same in the future, as will its impact. Unlike theoretical science, though, financial markets do not exist in a controlled environment or vacuum. On the contrary, financial markets are highly dynamic and interconnected. Thus, historical data are not necessarily predictive of actual future outcomes, either in terms of the average expectation or the variance around that expectation. This brings one back to the concept of financial risk management, which looks to measure the impact that changing conditions have on the base assumptions behind valuation tools (i.e., the 2nd-, 3rd-, and nth-order effects that changing market environments have on valuation outcomes). Not surprisingly, framing asset valuation sensitivity to capture the impact of changing conditions has only become more challenging in a globalizing financial world. Globalization is, by its very nature, a ‘non-constant condition’ that makes forward assumptions drawn from historical experience less reliable.

With that background, the structural limitations inherent in making forward predictions with models were exemplified early in the crisis, most notably with mortgage-backed securities. For many years, the cumulative borrower default rates assumed in valuing those securities relied heavily, among other things, on backward-looking observation. With the benefit of hindsight, these assumptions considerably underestimated the default rates that have actually been realized. In other words, the true value of the cash flows from these securities was overestimated and the price paid for them at inception was too high. As worsening default rates materialized, investors reran their models using the more likely expectation, only to learn that in many cases their investments were worth considerably less. The realization that so many could be so wrong created a crisis of confidence that ultimately threw the entire financial system into disarray. It prompted originators, investors, rating agencies and regulators to question the accuracy of all the assumptions and models being used to deterministically set prices. This one event placed the entire financial industry into the limelight and onto its back foot.
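
To make that repricing concrete, consider the minimal sketch below. It is purely illustrative and entirely hypothetical: the pool_value function, its parameters and the numbers chosen are my own simplifying assumptions, not a description of any model actually used for these securities. The point is only the one made above: hold the analytics fixed, raise the assumed default rate, and the modelled value of the same promised cash flows falls sharply.

```python
# A deliberately simplified, hypothetical sketch (not any firm's actual model):
# value a stylized pool of mortgage cash flows under an assumed annual default
# rate, then revalue it under a harsher assumption and compare.

def pool_value(annual_coupon, years, default_rate, recovery, discount_rate, notional=100.0):
    """Present value of a stylized pool: surviving balances pay coupons and the
    final principal; balances that default return only a recovery fraction in
    the year of default."""
    value = 0.0
    surviving = 1.0  # fraction of the pool still performing
    for t in range(1, years + 1):
        defaulted = surviving * default_rate
        surviving -= defaulted
        cash = surviving * annual_coupon * notional + defaulted * recovery * notional
        if t == years:
            cash += surviving * notional  # return of remaining principal at maturity
        value += cash / (1 + discount_rate) ** t
    return value

# Price with a benign, historically derived default assumption...
benign = pool_value(annual_coupon=0.06, years=10, default_rate=0.01,
                    recovery=0.60, discount_rate=0.05)
# ...and reprice once worsening defaults are acknowledged.
stressed = pool_value(annual_coupon=0.06, years=10, default_rate=0.08,
                      recovery=0.60, discount_rate=0.05)

print(f"benign assumption:   {benign:.1f}")
print(f"stressed assumption: {stressed:.1f}")
```

Nothing in the arithmetic changes between the two calls; only the input assumption does, yet the modelled value is markedly lower.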

So, why were the default rates now being observed so far outside the realm of imaginable possibility compared with those actually used to populate the valuation models? Was it really the science of finance (i.e., ‘financial engineering’) in itself that caused the oversight? Well, as above, the overreliance on historical data to predict future outcomes played the most obvious role. In our mortgage-backed securities example, what many failed to appreciate was that lender underwriting standards had changed significantly. As a consequence, the historical source data used to support model inputs were no longer relevant. Said differently, the ‘constant condition’ between past and future went erroneously unquestioned. Interestingly, though, for those that invested in these mortgage-backed securities, the consensus view on value between seller and buyer was very similar pre-crisis. This was evidenced empirically by the very tight ‘bid–offer’ prices on those securities before the market meltdown. The ‘bid–offer’ can be viewed as a proxy for how closely buyers and sellers agree on the inputs being used in their valuation models. But we know with hindsight that certain market participants avoided taking exposure to this asset class. Others went as far as to take the deliberate view that consensus values were so overstated that they acted on it by selling those securities short. These contrarians, despite being limited to the same tools and the same historical data as everyone else, were ultimately correct in their divergent views. Were they simply smarter?

More realistically, they were not smarter. I would argue that they were simply better at understanding the limitations and risks of relying on models and historical source data. Perhaps more importantly, they looked to factors outside the usual numeric sources of information and applied judgment to stress their model assumptions beyond the historical experience. These ‘risk managers’ were more enlightened only to the extent that they recognized that the ‘constant condition’ was no longer valid. Basic questions around exponential origination growth, deteriorating demographics, the increased use of teaser rates, high loan-to-value ratios and documentation standards (all widely discussed in the public domain) should have prompted everyone involved to ask whether the model inputs, and hence the model outputs, made sense. In the end, no model can replace basic judgment, intuition or common sense. Perhaps that is the common-sense lesson.

Although it is rare for finance books to be philosophical, hopefully some of the viewpoints above will help to put perspective around the powers and pitfalls of financial science. As with all sciences, the theoretical frameworks, the analytical models, and the instruments themselves (i.e., the ‘collective financial technology’) are only tools; their predictions and promises are not absolutes. While this ‘collective financial technology’ allows for a better understanding of valuation basics, its utility is greatest when considered as part of the broad suite of tools available to support decisions around ‘financial risk management’. Upon reflection, perhaps the better nomenclature is ‘financial risk judgment’. At least under this description, it marries science with human intuition. Human intuition in this respect refers to our unique ability to question what we observe or what we are told; for example, are our data sources and assumptions reasonable? Furthermore, it refers to our ability to individually assess and weigh an expected financial result against the consequences of an unexpected bad result: can we live with the downside scenario if our assumptions prove invalid? The long bull market leading up to the financial crisis clouded human judgment around downside risk for many. Senior management at many financial institutions became obsessed with absolute revenues rather than risk-adjusted revenues. Consequently, many of the newer financial tools and innovations were used for speculation rather than for the purposes for which they were developed, such as risk immunization. Unfortunately, this behavioural trait is endemic to the human condition and recurs throughout history. Technology does not make decisions; humans do.

Like all technology, the tools of finance can be used appropriately or inappropriately. However, this is not to be confused with the notion that they are either ‘good’ or ‘bad’, as some would like to argue. Such oversimplifications are wrongly used to justify or discredit the utility of other scientific fields (e.g., nuclear energy, genetics and social planning). Thankfully, the world we live in is not that simple or binary. With that said, there is perhaps one absolute that most would agree upon: an unwritten duty that transcends all practitioners of science to use their tools and knowledge conscientiously. As new and old students of finance study this text, it is incumbent upon you to appreciate the strengths and weaknesses of the discipline and to transact responsibly within its boundaries.

Oldrich Masek

Managing Director, JPMorgan
