Preface

The Changing Landscape of Information Quality

Since the publication of Entity Resolution and Information Quality (Morgan Kaufmann, 2011), much has happened in the field of information and data quality. One of the most important developments is that organizations are beginning to understand that the data they hold are among their most important assets and should be managed accordingly. As many of us know, this message is by no means new; it is only now being widely heeded. Leading experts in information and data quality such as Rich Wang, Yang Lee, Tom Redman, Larry English, Danette McGilvray, David Loshin, Laura Sebastian-Coleman, Rajesh Jugulum, Sunil Soares, Arkady Maydanchik, and many others have been advocating this principle for many years.
Evidence of this new understanding can be found in the dramatic surge in adoption of data governance (DG) programs by organizations of all types and sizes. Conferences, workshops, and webinars on the topic are overflowing with attendees. The primary reason is that DG gives organizations an answer to the question, “If information is really an important organizational asset, then how can it be managed at the enterprise level?” One of the primary benefits of a DG program is that it provides a framework for implementing a central point of communication and control over all of an organization’s data and information.
As DG has grown and matured, its essential components have become more clearly defined. These components generally include central repositories for data definitions, business rules, metadata, data-related issue tracking, regulations and compliance, and data quality rules. Two other key components of DG are master data management (MDM) and reference data management (RDM). Consequently, the increasing adoption of DG programs has brought a commensurate increase in focus on the importance of MDM.
Certainly this is not the first book on MDM. Several excellent books have preceded it, including Master Data Management and Data Governance by Alex Berson and Larry Dubov (2011), Master Data Management in Practice by Dalton Cervo and Mark Allen (2011), Master Data Management by David Loshin (2009), Enterprise Master Data Management by Allen Dreibelbis, Eberhard Hechler, Ivan Milman, Martin Oberhofer, Paul van Run, and Dan Wolfson (2008), and Customer Data Integration by Jill Dyché and Evan Levy (2006). However, MDM is an extensive and evolving topic, and no single book can explore every aspect of it at every level.

Motivation for This Book

Many things motivated us to contribute yet another book, but the primary reason is this: based on our experience in both academia and industry, we believe that many of the problems organizations experience with MDM implementation and operation are rooted in a failure to understand and address certain critical aspects of entity identity information management (EIIM). EIIM is an extension of entity resolution (ER) whose goal is to achieve and maintain the highest level of accuracy in the MDM system. The two key words are “achieving” and “maintaining.”
Having a goal and defined requirements is the starting point for every information and data quality methodology, from MIT TDQM (Total Data Quality Management) to Six Sigma DMAIC (Define, Measure, Analyze, Improve, and Control). Unfortunately, when it comes to MDM, many organizations have not defined any goals. Consequently, these organizations have no way to know whether their goals have been achieved, and they leave many questions unanswered. What is our accuracy? Now that a proposed program or procedure change has been implemented, is the system performing better or worse than before? Few MDM administrators can provide accurate estimates of even the most basic metrics, such as false positive and false negative rates, or the overall accuracy of their system. In this book we emphasize the importance of objective and systematic measurement and provide practical guidance on how these measurements can be made.
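Although the details come later in the book, a minimal sketch shows the basic idea behind these metrics: compare the record pairs linked by the system against the pairs linked by a labeled truth set. The Python below is our own illustration; the function names, record layout, and sample data are hypothetical, not taken from any particular MDM product.

    from itertools import combinations

    def pairwise_error_rates(records, true_id, predicted_id):
        # Count agreement between system links and truth-set links
        # over every pair of records.
        tp = fp = fn = tn = 0
        for a, b in combinations(records, 2):
            same_truth = true_id[a] == true_id[b]
            same_pred = predicted_id[a] == predicted_id[b]
            if same_pred and same_truth:
                tp += 1   # true positive: correctly linked
            elif same_pred:
                fp += 1   # false positive: linked, but distinct entities
            elif same_truth:
                fn += 1   # false negative: same entity, left unlinked
            else:
                tn += 1   # true negative: correctly kept apart
        return {
            "false_positive_rate": fp / (fp + tn),
            "false_negative_rate": fn / (fn + tp),
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
        }

    # Four records; the truth set says A and B refer to the same entity,
    # but the system split them and merged B with C instead.
    records = ["A", "B", "C", "D"]
    truth = {"A": 1, "B": 1, "C": 2, "D": 3}
    system = {"A": 1, "B": 2, "C": 2, "D": 3}
    print(pairwise_error_rates(records, truth, system))
    # -> false_positive_rate 0.2, false_negative_rate 1.0, accuracy ~0.67

Even this toy example makes the point: without a truth set and a systematic pairwise comparison, none of these numbers can be known.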
To help organizations better address maintaining high levels of accuracy through EIIM, the majority of the material in this book is devoted to explaining the CSRUD five-phase entity information life cycle model. CSRUD is an acronym for capture; store and share; resolve and retrieve; update; and dispose. We believe that following this model can help any organization improve MDM accuracy and performance.
Finally, no modern-day IT book is complete without talking about Big Data. Seemingly rising up overnight, Big Data has captured everyone’s attention, not just in IT but among the general public. Just as DG seems to be getting up a good head of steam, it now has to contend with the Big Data phenomenon. The immediate question is whether Big Data simply fits into the current DG model, or whether the model needs to be revised to account for Big Data.
Regardless of one’s opinion on this topic, one thing is clear: Big Data is bad news for MDM. The reason is a simple mathematical fact: MDM relies on entity resolution, entity resolution relies primarily on pair-wise record matching, and the number of pairs of records to match increases as the square of the number of records. For this reason, ordinary data (millions of records) is already a challenge for MDM, and Big Data (billions of records) seems almost insurmountable. Fortunately, Big Data is not just a matter of more data; it is also ushering in a new paradigm for managing and processing large amounts of data, bringing with it new tools and techniques. Perhaps the most important of these is distributed processing. However, it is easier to talk about Big Data than to do something about it. We wanted to avoid that trap, so we have included some practical strategies and designs for using distributed processing to solve these problems.
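A quick back-of-the-envelope calculation, our own illustration of the n(n-1)/2 pair count, makes the arithmetic concrete:

    # n records yield n*(n-1)/2 candidate pairs for pair-wise matching.
    for n in (1_000_000, 1_000_000_000):
        pairs = n * (n - 1) // 2
        print(f"{n:,} records -> {pairs:,} pairs")
    # 1,000,000 records     -> about 5 x 10^11 pairs
    # 1,000,000,000 records -> about 5 x 10^17 pairs

Scaling the input by a factor of one thousand scales the matching work by a factor of one million, which is why brute-force pairwise matching cannot simply be carried over to Big Data.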

Audience

It is our hope that both IT professionals and business professionals interested in MDM and Big Data issues will find this book helpful. Most of the material focuses on issues of design and architecture, making it a resource for anyone evaluating an installed system, comparing proposed third-party systems, or contemplating building a system in-house. We also believe that it is written at a level appropriate for a university textbook.

Organization of the Material

Chapters 1 and 2 provide the background and context for the book. Chapter 1 gives a definition and overview of MDM, including the business case, dimensions, and challenges facing MDM, and begins the discussion of Big Data and its impact on MDM. Chapter 2 defines and explains the two primary technologies that support MDM: ER and EIIM. In addition, Chapter 2 introduces the CSRUD life cycle for entity identity information, setting the stage for the next four chapters.
Chapters 3, 4, 5, and 6 are devoted to an in-depth discussion of the CSRUD life cycle model. Chapter 3 takes a close look at the Capture Phase of CSRUD. As part of that discussion, it covers truth set building, benchmarking, and problem sets as tools for assessing entity resolution and MDM outcomes. In addition, it discusses some of the pros and cons of the two most commonly used data matching techniques: deterministic matching and probabilistic matching.
Chapter 4 explains the Store and Share Phase of CSRUD. This chapter introduces the concept of the entity identity structure (EIS), the building block of the identity knowledge base (IKB). In addition to different styles of EIS design, it also covers the different types of MDM architectures.
Chapter 5 covers two closely related CSRUD phases, the Update Phase and the Dispose Phase. The Update Phase discussion covers both automated and manual update processes and the critical roles played by clerical review indicators, correction assertions, and confirmation assertions. Chapter 5 also presents an example of an identity visualization system that assists MDM data stewards with the review and assertion process.
Chapter 6 covers the Resolve and Retrieve Phase of CSRUD. It also discusses design considerations for accessing identity information and presents a simple model for a retrieved-identifier confidence score.
Chapter 7 introduces two of the most important theoretical models for ER: the Fellegi-Sunter theory of record linkage and the Stanford Entity Resolution Framework (SERF) model. Chapter 7 appears at this point because some of the concepts introduced in the SERF model are used in Chapter 8, “The Nuts and Bolts of ER.” The chapter concludes with a discussion of how EIIM relates to each of these models.
Chapter 8 describes a deeper level of design considerations for ER and EIIM systems. It discusses in detail the three levels of matching in an EIIM system: attribute-level, reference-level, and cluster-level matching.
Chapter 9 covers the technique of blocking as a way to increase the performance of ER and MDM systems. It focuses on match key blocking, the definition of match-key-to-match-rule alignment, and the precision and recall of match keys. Preresolution blocking and transitive closure of match keys are discussed as a prelude to Chapter 10.
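As a preview of the blocking idea, the sketch below groups records by a match key and generates candidate pairs only within each block. The key definition and field names are hypothetical, chosen only to illustrate the technique, not taken from Chapter 9.

    from collections import defaultdict
    from itertools import combinations

    def match_key(record):
        # Hypothetical match key: first three letters of the last name
        # plus the year of birth.
        return (record["last"][:3].upper(), record["birth_year"])

    def candidate_pairs(records):
        # Group records into blocks by match key, then compare only
        # within each block instead of across all records.
        blocks = defaultdict(list)
        for r in records:
            blocks[match_key(r)].append(r)
        for block in blocks.values():
            yield from combinations(block, 2)

    records = [
        {"id": 1, "last": "Smith", "birth_year": 1970},
        {"id": 2, "last": "Smyth", "birth_year": 1970},  # variant spelling
        {"id": 3, "last": "Smith", "birth_year": 1970},
    ]
    for a, b in candidate_pairs(records):
        print(a["id"], b["id"])   # only records 1 and 3 are compared

Note the trade-off: the key buys speed by skipping most pairs, but the “Smyth” variant falls into a different block and is never compared, which is precisely the match key precision and recall tension Chapter 9 analyzes.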
Chapter 10 discusses the problems of implementing the CSRUD life cycle for Big Data. It gives examples of how the Hadoop Map/Reduce framework can be used to address many of these problems in a distributed computing environment.
Chapter 11 covers the new ISO 8000-110 data quality standard for master data. This standard is not well understood outside a few industry verticals, but it has potential implications for all industries. The chapter covers the basic requirements of the standard, how organizations can become ISO 8000 compliant, and, perhaps more importantly, why organizations would want to be compliant.
Finally, to keep the ER discussions in Chapters 3 and 8 focused, Appendix A goes into more detail on some of the more common data comparison algorithms.
This book also includes a website with exercises, tips, and free downloads of demonstrations that use a trial version of the HiPER EIM system for hands-on learning. The website includes control scripts and synthetic input data illustrating how the system handles various aspects of the CSRUD life cycle, such as identity capture, identity update, and assertions. You can access the website at http://www.BlackOakAnalytics.com/develop/HiPER/trial.