GLOSSARY

The Glossary contains terms that are formally defined and consistently used throughout the current book. Where possible, the terms align with traditional terms used in data resource management. However, in situations where the traditional terms are confusing, contradictory, redundant, overlapping, and so on, those terms have been specifically defined and consistently used in the current book. In addition, terms unique to the Common Data Architecture have been defined. In situations where the terms are a direct quote, the reference is given.

The Glossary builds on the Glossary provided in Data Resource Simplexity to create an all-inclusive Glossary. The terms from Data Resource Simplexity have (Brackett 2011) after them and are unchanged. The terms that appeared in Data Resource Simplexity and have been modified in the current book have (Brackett 2011, 2012) after them. Terms new to Data Resource Integration have (Brackett 2012) after them. Terms in bold italics are defined terms, and terms in italics only are references to defined terms.

You may use these terms and definitions in your material as long as you give due credit to the source. The intent is to provide a common and consistent terminology for data resource management and resolve the lexical challenge. These terms and definitions have been offered to DAMA for inclusion in their Dictionary of Data Management.

Acceptable means capable or worthy of being accepted. (Brackett 2012)

Acceptable data availability is the situation where data are readily available to meet the business information demand while those data are properly protected and secured. (Brackett 2011)

Acceptable data characteristic variation is any data characteristic variation that is not preferred, but is acceptable to use for an interim period until appropriate changes can be made to databases or application programs. (Brackett 2012)

Acceptable data culture variability is the acceptable level of variability in management of the data resource. (Brackett 2012)

Acceptable data reference set variation is any data reference set variation that is not preferred, but is acceptable to use for an interim period until appropriate changes can be made to databases or application programs. (Brackett 2012)  

Acceptable data resource variability is the acceptable level of variability for an organization’s data resource. (Brackett 2012)

Acceptable variability is the situation where a normal range of variability is acceptable. Variability exists in all aspects of a business and a normal level of variability must be accepted to perform business successfully. (Brackett 2012)

Accuracy is freedom from mistakes or errors, conformity to truth or to a standard, exactness, the degree of conformity of a measure to a standard or true value. (Brackett 2011)

Accurate data definition principle states that a comprehensive data definition must accurately represent the business. The data definition could be meaningful, and it could be thorough, but it may not be accurate. (Brackett 2011, 2012)

Active data contributors are data attributes that still exist and can change, and are used to create active derived data. (Brackett 2011)

Active derived data are derived data based on active data contributors. (Brackett 2011)

Actual data redundancy is the existence of the same business fact in multiple data files that contain non-redundant data occurrences. It’s the redundancy of a business fact based on the data characteristic name and a determination of the redundancy in data occurrences. (Brackett 2012)

Actual data resource scope is the portion of the data resource that is actually formally managed. (Brackett 2011)

Adequate means sufficient for a specific requirement; sufficient or satisfactory; lawfully and legally sufficient. (Brackett 2011)

Adequate data accessibility principle states that access to the data resource must be sufficient to allow people to perform their business activities, and for citizens and customers to obtain the data they need regarding services and products. (Brackett 2011)

Adequate data protection principle states that the data resource must be protected from unauthorized access, alteration, or destruction. (Brackett 2011)

Adequate data recovery principle states that the data resource must have reasonable protection against reasonable failures, and must be recoverable as quickly as possible when the data are altered or destroyed by human or natural disasters. (Brackett 2011)

Adequate data responsibility is the situation where the responsibility, as defined, meets the need for properly managing a comparate data resource. The responsibility is formal, consistent, coordinated, and suitable for a shared data environment. (Brackett 2011)

Aggregation data derivation is where two or more values of the same data attribute in different data occurrences contribute to the derived data. (Brackett 2011)

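As a minimal sketch of this concept, the following Python fragment derives a single value from the same data attribute across several data occurrences; the Customer Order data subject, the data attribute names, and the data values are hypothetical.

    # Hypothetical data occurrences for a Customer Order data subject.
    # Each occurrence carries the same data attribute, order_total_amount.
    order_occurrences = [
        {"order_number": "A-100", "order_total_amount": 125.00},
        {"order_number": "A-101", "order_total_amount": 310.50},
        {"order_number": "A-102", "order_total_amount": 89.95},
    ]

    # Aggregation data derivation: values of the same data attribute in
    # different data occurrences contribute to the derived data value.
    customer_annual_order_amount = sum(
        occurrence["order_total_amount"] for occurrence in order_occurrences
    )
    print(customer_annual_order_amount)  # 525.45
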
Alias data name is any data name, other than the primary data name, for a fact or group of related facts in the data resource. (Brackett 2011)

All-inclusive data inventory principle states that all existing data, or references to data, will be inventoried and cross-referenced to a common data architecture so they can be thoroughly understood in a common context. No existing data or references to data, such as data files, reports, screens, documents, dictionaries, data flows, and so on, will be exempt from the data inventory and data cross-reference processes, although priorities may be designated. (Brackett 2012)

Alternate foreign key is a foreign key that matches an alternate primary key in a parent data subject. (Brackett 2012)

Alternate primary key is a primary key that is valid and acceptable, but is not the preferred primary key. (Brackett 2011)

Analysis is separation of the whole into its parts; an examination of a complex, its elements, and their relations; the separation of the ingredients of a substance; a statement of the constituents of a mixture. (Brackett 2011)

Analytical data normalization is the process of re-normalizing the operational logical schema to an analytical logical schema for the purpose of analytical processing. (Brackett 2011)

Analytical tier represents the data in true data warehouses. The data are used to verify or disprove known or suspected trends and patterns. Mathematically the analytical tier is in the aggregation space. (Brackett 2011)

Anthropic principle is the law of human existence. Our existence in the universe depends on numerous constants and parameters whose values fall within a very narrow range. If a single variable was slightly different, we would not exist. (Brackett 2011)

Apparent data redundancy is the apparent existence of the same business fact in multiple data files, regardless of whether those data files contain redundant data occurrences. It’s the redundancy of a business fact based on the data characteristic name. (Brackett 2012)

Application alignment principle states that purchased applications must be selected that align with the business and prevent or minimize warping the business into the application. (Brackett 2011)

Application data transformation is the process of transforming disparate data applications from reading and storing disparate data to reading and storing comparate data. The entire application may not be changed, but the data read and data store routines of the application can be changed from disparate data to comparate data. (Brackett 2012)

Appropriate means especially suitable or compatible; fitting. (Brackett 2011)

Appropriate data recognition is the situation where the organization recognizes that data are a critical resource of the organization, the data resource is disparate, and an initiative to develop a comparate data resource is needed. The recognition is organization-wide and the data resource is managed with the same intensity as the financial resource, the human resource, and real property. (Brackett 2011)

Appropriate data use principle states that an organization must constantly review the use of data to ensure the use is appropriate and ethical. (Brackett 2011)

Appropriate detail principle states that a proper data structure must contain all the detail needed for all audiences, but only provide the detail desired by a specific audience. (Brackett 2011)

Agility is the quality or state of being agile; marked by ready ability to move with quick easy grace; mentally quick and resourceful. (Brackett 2012)

Arc is a term from set theory. See Edge. (Brackett 2011)

Architected data are any data that are formally understood and managed within a common data architecture, including both disparate and comparate data. (Brackett 2011)

Architecture (general) is the art, science, or profession of designing and building structures. It’s the structure or structures as a whole, such as the frame, heating, plumbing, wiring, and so on, in a building. It’s the style of structures and method of design and construction, such as Roman or Colonial architecture. It’s the design or system perceived by people, such as the architecture of the Solar system. (Brackett 2011)

Architecture (data) is the art, science, or profession of designing and building a data resource. It’s the structure of the data resource as a whole. It’s the style or type of design and construction of the data resource. It’s a system, conceived by people, that represents the business world. (Brackett 2011)

Attribute is an inherent characteristic, an accidental quality, an object closely associated with or belonging to a specific person, place, or office; a word ascribing a quality. (Brackett 2011)

Availability heuristic states that the better you can imagine a dangerous event, the likelier you are to be afraid of that event. (Brackett 2012)

Base data type is a specific type or form of data within a data megatype, based on format. (Brackett 2011)

Borgesian nightmare is a labyrinth that is impossible to navigate, which causes people to have nightmares. (Brackett 2011)

Broker is one who acts as an intermediary; an agent who makes arrangements. (Brackett 2012)

Brute-force-physical approach goes directly to the task of developing the physical database. It skips all the formal analysis and modeling activities, and often skips involvement of the business professionals and domain experts. People taking such an approach consider that developing the physical database is the real task at hand. (Brackett 2011)

Business activity data are any data documenting the business activities. (Brackett 2011)

Business data architecture is the architecture of the business schema—the data as used by the business. It represents the data in the three business schemas. (Brackett 2011)

Business data domain specifies the data values allowed with respect to the business, and the conditions under which those data values are allowed. It represents what is reasonable for the business and results in the highest quality data. (Brackett 2011)

Business data optionality is a specific statement about the presence of a data value, including the conditions under which it will be present. (Brackett 2011)

Business driven data resource is a data resource where the design, development, and maintenance are driven by business needs, as defined by the business information demand. The data resource is about the business, by the business, and for the business. (Brackett 2011, 2012)

Business event is a happening in the real world, such as a sale, purchase, fire, flood, accident, and so on. (Brackett 2011)

Business event group is a subset of business events based on specific selection criteria. (Brackett 2012)

Business event happening is the actual happening of a business event, such as a specific sale, a purchase, a fire, a flood, an accident, and so on. (Brackett 2011)

Business feature is a trait or characteristic of a business object or business event, such as a customer’s name, a city’s population, a fire date, and so on. (Brackett 2011)

Business inclusion principle states that business professionals must be directly involved in the development of a comparate data resource. The understanding and knowledge that business professionals have about the business must be included to ensure development of a comparate data resource that supports the current and future business information demand. (Brackett 2011, 2012)

Business information demand is an organization’s continuously increasing, constantly changing need for current, accurate, integrated information, often on short notice or very short notice, to support its business activities. It is a very dynamic demand for information to support the business that constantly changes. (Brackett 2011)

Business intelligence is a set of concepts, methods, and processes to improve business decision making using any information from multiple sources that could affect the business, and applying experiences and assumptions to deliver accurate perspectives of business dynamics. (Brackett 2011)

Business Intelligence Value Chain is a sequence of events where value is added from the data resource, through each step, to the support of business goals. The data resource is the foundation that supports the development of information. Information supports the knowledge worker in a knowledge environment. The knowledge worker provides business intelligence to an intelligent, learning organization. Business intelligence supports the business strategies, which support the business goals of the organization. (Brackett 2011, 2012)

Business key is a primary key consisting of a fact or facts whose values have meaning to the business. A business key is sometimes referred to as an intelligent key; however, that term is not used because a primary key cannot possess intelligence. (Brackett 2011, 2012)

Business object is a person, place, thing, or concept in the real world, such as a customer, river, city, account, and so on. (Brackett 2011)

Business object existence is the actual existence of a business object, such as a specific person, river, vehicle, account, and so on. (Brackett 2012)

Business object group is a subset of business objects based on specific selection criteria. (Brackett 2012)

Business orientation principle states that the data resource must be oriented toward business objects and events that are of interest to the organization and are either tracked or managed by the organization. Those business objects and events become data subjects in a subject-oriented comparate data resource. (Brackett 2011)

Business schema represents the structure of data as used by the business. (Brackett 2011)

Business term glossary is a list of terms and abbreviations used in the business, and a definition of each of those terms. (Brackett 2011)

Candidate data integrity rule is a data integrity rule that was documented during the data inventory and brought over to a common data architecture. (Brackett 2012)

Candidate foreign key is a foreign key that has been documented during the data inventory and placed in a common data architecture, but has not been reviewed and given a specific designation. (Brackett 2012)

Candidate primary key is a primary key that has been identified and considered as a primary key, but has not been verified. (Brackett 2011)

Canon is an accepted principle or role; a body of principles, rules, standards, or norms. (Brackett 2011)

Canonical is conforming to a general rule or acceptable procedure; reduced to the simplest and cleanest scheme possible. (Brackett 2011)

Canonical synthesis is the concept that if everyone followed the canons (rules) for developing a data model, then those independent data models could be readily plugged together, just like a picture puzzle, to provide a single, comprehensive, organization-wide data architecture. (Brackett 2011)

Centralized control principle states that centralized control of a comparate data resource within a common data architecture evolves from the assignment of data stewards and the development of reasonable data management procedures. (Brackett 2011)

Change documentation principle states that all changes to the data resource that occur over time must be identified and documented, no matter how slight or major those changes may be. (Brackett 2012)

Clarity is the quality or state of being clear, easily understood, free from doubt, free from obscurity or ambiguity, and capable of being readily understood and used. Clarity means clear and understandable. (Brackett 2011)

Class word is a word that has a consistent meaning wherever it is used in a data attribute name. (Brackett 2011)

Coded data codes is the situation where single property data codes are combined into a multiple property data code. (Brackett 2012)

Coded data value is any data value that has been encoded or shortened in some manner. (Brackett 2011)

Cognitive dissonance is the disharmony that is created when an individual’s personal reality does not fit with the actual reality of a situation. (Brackett 2011)

Cohesive is sticking together tightly, a union between similar parts. (Brackett 2012)

Cohesive data culture is a data culture composed of business processes that are integrated to effectively and efficiently manage an organization’s data resource. The business processes are seamless, consistent, and work together in a coordinated manner to develop and maintain a comparate data resource. (Brackett 2012)

Cohesive data culture state is the desired state where the fragmented data culture has been substantially and permanently transformed to a cohesive data culture. It’s a persistent integration according to the preferred data culture prescription. A single set of processes has been established across the organization. It’s the ideal, mature state for management of the organization’s data resource. (Brackett 2012)

Collection frequency states how often data are collected. (Brackett 2011)

Combined data are a concatenation of individual facts. (Brackett 2012)

Combined data characteristic is the combination of two or more closely related elemental data characteristics into a group that is managed as a single unit. Note the qualification for related facts. (Brackett 2011, 2012)

Common Data Architecture (capitalized) is a single, formal, comprehensive, organization-wide, data architecture that provides a common context within which all data are understood, documented, integrated, and managed. It transcends all data at the organization’s disposal, includes primitive and derived data; elemental and combined data; fundamental and specific data; structured and super-structured data; automated and non-automated (manual) data; current and historical data; data within and without the organization; high level and low level data; and disparate and comparate data. It includes data in purchased software, custom-built application databases, programs, screens, reports, and documents. It includes all data used by traditional information systems, expert systems, executive information systems, geographic information systems, data warehouses, object oriented systems, and so on. It includes centralized and decentralized data regardless of where they reside, who uses them, or how they are used. (Brackett 2011)

Common data architecture (not capitalized) represents the actual common data architecture built by an organization for their data resource, based on the concepts, principles, and techniques of the Common Data Architecture. The common data architecture contains all of the data used by the organization. (Brackett 2011, 2012)

Common data architecture adjustment principle states that a common data architecture should be periodically reviewed and adjusted during data cross-referencing to ensure that it adequately represents the organization’s perception of the business world. (Brackett 2012)

Common data architecture reference principle states that the thorough understanding and resolution of a disparate data resource, and the development of a comparate data resource, are done within the construct of a Common Data Architecture. The Common Data Architecture is the common construct for understanding and resolving a disparate data resource and developing a comparate data resource that fully supports the business information demand. (Brackett 2012)

Common data architecture variation is a language variation in a common data architecture. The same common data architecture exists in a different language. (Brackett 2012)

Common Data Culture is a single, formal, comprehensive, organization-wide data culture that provides a common context within which the organization’s data culture is understood, documented, and integrated. It includes all components in the Data Culture Segment of the Data Resource Management Framework for a reasonable data orientation, acceptable data availability, adequate data responsibility, expanded data vision, and appropriate data recognition. (Brackett 2012)

Common data culture (lower case) is the actual data culture built by an organization for the proper management of their data resource. It’s based on the concepts, principles, and techniques of the Common Data Culture. It provides the overarching construct for a common view of the organization’s data culture. All variations in the data culture are understood within the context of a common data culture. The preferred data culture is defined within the context of a common data culture. Data culture integration is done within the context of a common data culture. (Brackett 2012)

Common-to-common cross-reference is a data cross-reference between an interim common data architecture that is treated as a data product, and the final common data architecture. (Brackett 2012)

Common-to-common data translations are data translations between the preferred and non-preferred data designations within a common data architecture, and applied as needed to physical data translation. (Brackett 2012)

Common-to-physical data translations are data translations between a common data architecture and the disparate data documented as data products. (Brackett 2012)

Common word is a word that has consistent meaning whenever it is used in a data name. (Brackett 2011)

Communication theory states that information is the opposite of entropy, where entropy is disorderliness or noise. A message contains information that must be relevant and timely to the recipient. If the message does not contain relevant and timely information, it is simply noise (non-information). (Brackett 2011, 2012)

Comparate is the opposite of disparate and means fundamentally similar in kind. (Brackett 2011)

Comparate data are data that are alike in kind, quality, and character, and are without defect. They are concordant, homogeneous, nearly flawless, nearly perfect, high-quality data that are easily understood and can be readily integrated. (Brackett 2011)

Comparate data application is any application that reads and stores comparate data. (Brackett 2012)

Comparate data cycle is a self-perpetuating cycle where the use of comparate data is continually reinforced because people understand and trust the data. It is the flip side of the disparate data cycle. When people come to the data resource, they can usually find the data they need, can trust those data, and can readily access those data. The result is a shared data resource. Similarly, when people can’t find the data they need, they formally add their data to the data resource, and the enhanced data resource is readily available to anyone looking for data to meet their business need. (Brackett 2011, 2012)

Complete data documentation principle states that data documentation must cover the entire scope of the data resource, and must include both the technical and the semantic aspects of the data resource. (Brackett 2011)

Comparate data resource is a data resource composed of comparate data that adequately support the current and future business information demand. The data are easily identified and understood, readily accessed and shared, and utilized to their fullest potential. A comparate data resource is an integrated, subject oriented, business driven data resource that is the official record of reference for the organization’s business. (Brackett 2011)

Comparate data resource state is the desired state where disparate data have been substantially and permanently transformed to comparate data and the disparate data are substantially gone from the organization’s data resource. It’s a persistent data transformation where the data are subject oriented according to the organization’s perception of the business world and are integrated within the common data architecture. The disparate data cycle is broken and the natural drift of the data resource is toward comparate data. (Brackett 2012)

Comparate data resource vision is the disparate data resource thoroughly understood and integrated into a comparate data resource, supported by a Data Resource Guide, to fully support the current and future business information demand. (Brackett 2012)

Complete historical data instance contains a complete set of data items in the data occurrence, whether or not the data values changed. (Brackett 2012)

Complete occurrence data record is a data record that contains all of the data items for a data occurrence. (Brackett 2012)

Complete set of data codes contains all of the data properties for a single data subject. (Brackett 2012)

Complete subject data file is a data file that contains all of the data items representing all of the data characteristics for a single data subject or for multiple data subjects. (Brackett 2012)

Complex means composed of two or more parts; having a bound form; hard to separate, analyze, or solve; a whole made up of complicated or interrelated parts; a composite made up of distinct parts; intricate as having many complexly interrelating parts or elements. (Brackett 2011, 2012)

Complex fact data attribute contains any combination of multiple values, multiple facts, and variable facts, and might be formatted in several different ways. (Brackett 2011)

Complex primary key contains multiple data attributes from both the home data entity and a foreign data entity. (Brackett 2011)

Complex structured data are any data that are composed of two or more intricate, complicated, and interrelated parts that cannot be easily interpreted by structured query languages and tools. The complex structure needs to be broken down into the individual component structures to be more easily analyzed. Complex structured data include text, voice, video, images, spatial data, and so on. (Brackett 2012)

Complexity is to become complex or the state of being complex. (Brackett 2011)

Compound primary key contains multiple home data attributes in their home data entity. (Brackett 2011)

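To contrast a compound primary key with a complex primary key, the following sketch uses hypothetical data entities and data attribute names; it is illustrative only, not a prescribed structure.

    # Compound primary key: multiple home data attributes, all from the
    # home data entity (a hypothetical Course Section data entity).
    course_section_primary_key = ("course_number", "section_number")

    # Complex primary key: data attributes from the home data entity and
    # from foreign data entities (a hypothetical Student Registration entity).
    student_registration_primary_key = (
        "student_identifier",   # from the foreign Student data entity
        "course_number",        # from the foreign Course Section data entity
        "registration_date",    # home data attribute
    )
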
Comprehensive means covering completely or broadly. (Brackett 2011)

Comprehensive data definition is a data definition that provides a complete, meaningful, easily read, readily understood definition that thoroughly explains the content and meaning of the data with respect to the business. It helps people thoroughly understand the data and use the data resource efficiently and effectively to meet the current and future business information demand. (Brackett 2011, 2012)

Concept is something conceived in the mind, a thought, or notion; an abstract or generic idea generalized from particular instances; a generic or generalized ideal from specific instances. A concept can be basic, applying to data resource management in general, or it can be specific, applying to one aspect of data resource management. (Brackett 2011)

Conceptual schema was defined as the common link between the internal schema and the external schema. From the database perspective, it was a common translation between the two schemas. (Brackett 2011)

Concordant means agreeing; in a state of agreement; a harmonious combination. (Brackett 2012)

Concordant data resource management is the situation where the overall management of an organization’s data resource, including the data resource itself and the data culture, is in agreement and harmony. (Brackett 2012)

Conditional data source rule is a data source rule that specifies multiple locations as the preferred data source and the conditions for selecting one of those locations. (Brackett 2012)

Conditional data sourcing is the process of selecting preferred data from a variety of different locations based on which location has the most current and most accurate data. (Brackett 2012)

Conditional data structure rule is a data integrity rule that specifies the conditional data cardinality for a data relation between two data entities when conditions or exceptions apply. It specifies both the conditions and exceptions with respect to the business, not with respect to the database management system. (Brackett 2011)

Conditional data value rule is a data integrity rule that specifies the domain of allowable values for a data attribute when conditions or exceptions apply. It specifies both the conditions for optionality and the condition for a relationship between data values in other data attributes. It specifies the rule with respect to the business, not with respect to the database management system. (Brackett 2011)

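A minimal sketch of a conditional data value rule, expressed as executable logic; the Employee Status and Employee Retirement Date data attributes, and the conditions shown, are hypothetical.

    from datetime import date

    def retirement_date_rule(employee: dict) -> bool:
        # Conditional data value rule (illustrative):
        # Employee Retirement Date must be null while Employee Status is 'Active',
        # and must be a past or current date when Employee Status is 'Retired'.
        status = employee.get("employee_status")
        retirement_date = employee.get("employee_retirement_date")
        if status == "Active":
            return retirement_date is None
        if status == "Retired":
            return retirement_date is not None and retirement_date <= date.today()
        return True  # the rule places no constraint on other status values

    print(retirement_date_rule(
        {"employee_status": "Retired",
         "employee_retirement_date": date(2020, 6, 30)}))  # True
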
Connotative meaning is the idea or notion suggested by the data definition, that a person interprets in addition to what is explicitly stated. (Brackett 2011)

Consistent characteristic data item is a data item that always contains an elemental or combined data characteristic. (Brackett 2012)

Continuous enhancement principle states that documentation of disparate data should be continuously enhanced as additional insight is gained. Documenting and understanding disparate data is not a one-time process—it’s an ongoing process through all phases of data resource integration. Any time that additional insight is gained about disparate data, that insight must be documented. (Brackett 2012)

Contrarian thinking is not following the herd and thinking outside the box. Current wisdom is not simply accepted without question. Current practices are always scrutinized for better ways. The questions Why? or Why not? are frequently asked. Wanting to know what others are doing, and why, is persistent. Multiple voices are encouraged to speak on issues. Risk taking and innovations are valued, and leveraged for maximum benefit. Thinking gray is common, without group think or crowd mentality. Synergy and teamwork are encouraged. (Brackett 2012)

Cooperative development principle states that the stakeholders of the data resource must be involved in developing the vision for a comparate data resource. (Brackett 2011)

Critical business principle states that a comparate data resource must be developed by beginning with the critical areas of the business. The general approach is to identify critical business areas where the data resource needs to provide strong support, and may not be providing that support. (Brackett 2011)

Critical mass principle states that when the understanding of disparate data appears insurmountable, a critical mass of information is reached and collapses into a meaningful understanding of the disparate data. (Brackett 2012)

Cross-system reporting is the collection of operational data from various, often disparate sources, and merging those for reporting or operational decision making. Many data integration approaches are simply cross-system reporting, not true integration of the data resource. (Brackett 2011)

Crowd psychology is the situation where people individually are objective, but when they get together in a crowd regarding a critical issue, that objectivity is lost. (Brackett 2011)

Cultural variability is the normal differences due to culture, geography, politics, and so on, such as different names, addresses, monetary units, and so on. The data resource must reflect these cultural differences. (Brackett 2012)

Culture is the act of developing the intellectual and moral faculties; expert care and training; enlightenment and excellence of taste acquired by intellectual and aesthetic training; acquaintance with and taste in fine arts, humanities, and broad aspects of science; the integrated pattern of human knowledge, belief, and behavior that depends upon man’s capacity for learning and transmitting knowledge to succeeding generations; the customary beliefs, social forms, and material traits of a racial, religious, or social group. (Brackett 2012)

Current budget principle states that any first initiative to improve data resource quality should begin within the current budget. (Brackett 2011)

Current data definition principle states that a comprehensive data definition must be kept current with the business. (Brackett 2011)

Current data documentation principle states that the data resource data must be kept current with the business. They must represent the current state of the data resource for both business and data management professionals. (Brackett 2011)

Current data instance is the most recent data instance that represents the current values of the data items in the data occurrence. (Brackett 2012)

Data are the individual facts that are out of context, have no meaning, and are difficult to understand. They are often referred to as raw data, such as 123.45. Data have historically been defined as plural. (Brackett 2011)

Data accuracy is a measure of how well the data values represent the business world at a point in time or for a period of time. Data accuracy includes the method used to identify objects in the business world and the method of collecting data about those objects. It describes how an object was identified and the means by which the data were collected. (Brackett 2011)

Data accuracy assurance is a proactive process of ensuring that data represent the business world as closely as the organization desires, to meet the business information demand. (Brackett 2011)

Data accuracy control is a reactive process of determining how well data already captured represent the business world. It determines the data accuracy after the data are acquired. (Brackett 2011)

Data anomaly is any data value that does not follow a pattern that matches a reasonable expectation of the business. It could be a correct data value, or it could be an error. If it’s a correct data value, it could be acceptable or unacceptable for the business. (Brackett 2012)

Data architectnology is the technology for producing comparate data within a common data architecture. It’s the formal technology for building a common data architecture within an organization and managing data within that architecture. It consists of specific concepts, principles, and techniques for developing a comparate data resource. It’s very formal and detailed, yet results are very elegant and simple. (Brackett 2011, 2012)

Data architecture (1) is the method of design and construction of an integrated data resource that is business driven, based on real-world subjects as perceived by the organization, and implemented into appropriate operating environments. It consists of components that provide a consistent foundation across organizational boundaries to provide easily identifiable, readily available, high-quality data to support the current and future business information demand. (Brackett 2011)

Data architecture (2) is the component of the Data Resource Management Framework that contains all the activities, and the products of those activities, related to the identification, naming, definition, structuring, integrity, accuracy, effectiveness, and documentation of the data resource. (Brackett 2011)

Data architecture quality is how well the data architecture components contribute to overall data management quality. (Brackett 2012)

Data attribute is the variation of an individual fact that describes or characterizes a data entity. It represents a data characteristic variation in a logical data model. (Brackett 2011, 2012)

Data attribute denormalization is the technique of implementing data attributes for optimum performance without compromising the normalized data structure. (Brackett 2011)

Data attribute history is when only the data attribute whose data value changed is retained. (Brackett 2011)

Data attribute normalization, commonly referred to as fact normalization, is the technique for ensuring that each data attribute represents one business fact or a set of closely related business facts. (Brackett 2011)

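A minimal sketch of fact normalization, assuming a hypothetical data item that carries several unrelated facts; splitting it yields one data attribute per business fact.

    # Before normalization: one data item holds several unrelated facts.
    unnormalized = {"well_location_depth_status": "T12N R3E S14 | 1250 | Active"}

    # After fact normalization: one data attribute per business fact.
    location, depth, status = (
        part.strip() for part in unnormalized["well_location_depth_status"].split("|")
    )
    normalized = {
        "well_legal_location": location,
        "well_depth_feet": int(depth),
        "well_status_name": status,
    }
    print(normalized)
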
Data attribute partitioning places data attributes in different data sites. (Brackett 2011)

Data attribute retention rule is a data integrity rule that specifies the retention for individual data attribute values. (Brackett 2011)

Data attribute structure is a list showing the data attributes contained within each data entity and the roles played by those data attributes. It shows the primary keys, foreign keys to parent data entities, and all the data attributes contained in a data entity. (Brackett 2011)

Data availability is the process of ensuring that the data are available to meet the business information demand, while properly protecting and securing those data. (Brackett 2011)

Data awareness is the knowledge about all of the data that are available to the organization and where those data are located. (Brackett 2012)

Data bridge is an application that moves data from one disparate data file to another disparate data file to keep the two data files in synch. The primary purpose is to maintain redundant data in a disparate data resource. (Brackett 2012)

Data broker is an application that acts as an intermediary between disparate data and comparate data in databases or applications. It performs formal data transformations in both directions between disparate data and comparate data. (Brackett 2012)

Data brokering is the process of using data brokers to perform formal data transformation. (Brackett 2012)

Data cardinality is a specification of the number of data occurrences that are allowed or required in each data subject or data entity involved in a data relation, or the number of data records that are allowed or required for each data file involved in the data relation. (Brackett 2011)

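A minimal sketch of a data cardinality specification for a hypothetical data relation between a Customer data subject and a Customer Order data subject; the bounds are written as (minimum, maximum), with None standing for 'many'.

    # Each Customer Order occurrence requires exactly one Customer occurrence;
    # each Customer occurrence is allowed zero to many Customer Order occurrences.
    customer_order_cardinality = {
        "customers_per_order": (1, 1),     # required
        "orders_per_customer": (0, None),  # allowed; None means 'many'
    }
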
Data category is a data entity that represents a can-also-be situation. Each data category contains data attributes that characterize that particular data category, as well as the parent data entity. Separate data categories may be defined that are peers of each other and further define the parent data entity. (Brackett 2011)

Data characteristic is an individual fact that describes or characterizes a data subject. It represents a business feature and contains a single fact, or related facts, about a data subject. (Brackett 2011)

Data characteristic source list is a list of all of the data sources for each data characteristic. The data characteristics are listed for each data subject, and the data sources are listed for each data characteristic. (Brackett 2012)

Data characteristic structure is a list showing the data characteristics contained within each data subject and the roles played by those data characteristics. It shows the primary keys, foreign keys to parent data subjects, and all the data characteristics contained in a data subject. (Brackett 2011)

Data characteristic substitution indicates that any data characteristic variation can be used for a data characteristic, such as (Date) can mean any form of a date. (Brackett 2011)

Data characteristic translation rule is a data translation rule that translates data values between non-preferred and preferred variations of a data characteristic. (Brackett 2012)

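A minimal sketch of a data characteristic translation rule, assuming a hypothetical Person Name characteristic with a non-preferred 'Last, First' variation and a preferred 'First Last' variation.

    def translate_person_name(non_preferred: str) -> str:
        # Translate Person Name from the non-preferred variation 'Last, First'
        # to the preferred variation 'First Last'.
        last, first = (part.strip() for part in non_preferred.split(",", 1))
        return f"{first} {last}"

    print(translate_person_name("Doe, Jane"))  # 'Jane Doe'
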
Data characteristic variation is a variation in the content or format of a data characteristic. It represents a variant of a data characteristic, such as different units of measurement, different monetary units, different sequences as in a person’s name, and so on. (Brackett 2011, 2012)

Data characteristic variation list is a list of all of the data characteristic variations within a data characteristic. (Brackett 2012)

Data code is any data item whose data value has been encoded or shortened in some manner. (Brackett 2012)

Data code set is a complete group of data codes that represent all of the data properties for a single data subject. (Brackett 2012)

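A minimal sketch of a data code set, assuming a hypothetical Employee Status data subject; each coded data value is paired with the data property it represents.

    employee_status_code_set = {
        "A": "Active",
        "L": "On Leave",
        "R": "Retired",
        "T": "Terminated",
    }

    # Decoding a coded data value back to its data property.
    print(employee_status_code_set["L"])  # 'On Leave'
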
Data code variability is the variability in the coded data values, names, definitions, and domain of codes in a set of data codes. It’s a measure of how many variations exist for a particular set of data codes across data files. (Brackett 2012)

Data completeness is a measure of how well the scope of the data resource meets the scope of the business information demand. It ensures that all the data necessary to meet the current and future business information demand are available in the organization’s data resource. (Brackett 2011)

Data completeness assurance is the proactive process of analyzing the business information demand and ensuring that the data needed are available when needed. (Brackett 2011)

Data completeness control is the reactive process of determining what data are available and how completely those data support the business information demand. It’s an inventory process to determine the data available and how often those data are being used. (Brackett 2011)

Data consolidation is the process of merging existing data from different sources into one location. The data may be restructured slightly, but nothing is done to thoroughly understand the data or to resolve data disparity. (Brackett 2012)

Data conversion is the process of changing the same physical data schema from one database management system to another database management system. The data values are not altered in any way. They are simply moved from one database management system to another. (Brackett 2012)

Data conversion rule is a data integrity rule that defines the conversion of a data value from one unit to another unit. It represents the conversion of the values of a single fact to different units, and is not considered to be a data derivation rule. (Brackett 2011)

Data converter is an application that changes the data between heterogeneous databases. It does not transform the data in any way. It only changes the physical form of the data from one database environment to another database environment. (Brackett 2012)

Data cross-reference is a logical mapping between disparate data names and common data names. It’s a link between components of the inventoried disparate data and components in a common data architecture. (Brackett 2012)

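A minimal sketch of a data cross-reference, assuming hypothetical disparate (physical) data names on the left and hypothetical common data names on the right; the naming convention shown is illustrative only.

    data_cross_reference = {
        "CUST_MSTR.CM_NM":        "Customer. Name, Complete",
        "ORD_HIST.OH_DT":         "Customer Order. Date, Placed",
        "billing.cust_addr_line": "Customer. Address, Street",
    }

    # Looking up the common data name for an inventoried disparate data item.
    print(data_cross_reference["ORD_HIST.OH_DT"])  # 'Customer Order. Date, Placed'
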
Data cross-reference concept is that the inventoried disparate data are cross-referenced to a common data architecture to further increase the understanding of those disparate data within a common context. An initial understanding of disparate data was gained during the data inventory process. That initial understanding is increased through a cross-referencing of the inventoried disparate data to a common data architecture. (Brackett 2012)

Data cross-reference objective is to thoroughly understand the content, meaning, structure, and integrity of all data at the organization’s disposal within the context of a common data architecture so that a comparate data resource can be developed that fully supports the current and future business information demand. The objective is to take the initial understanding of disparate data that was documented at an elemental level during the data inventory and increase that understanding within the context of a common data architecture at the organization level. (Brackett 2012)

Data cross-walk is the physical movement of data from one data file to another data file without any formal data transformation or the application of data integrity rules. The analogy is like using a cross-walk at an intersection where people cross, but are not altered in the process. The term is not used with data resource transformation because it implies an easy task of moving the disparate data to a comparate data resource without any transformation. (Brackett 2012)

Data culture (1) is the function of managing the data resource as a critical resource of the organization equivalent to managing the financial resource, the human resource, and real property. It consists of directing and controlling the development, administering policies and procedures, influencing the actions and conduct of anyone maintaining or using the data resource, and exerting a guiding influence over the data resource to support the current and future business information demand. (Brackett 2011)

Data culture (2) is the component of the Data Resource Management Framework that contains all the activities, and the products of those activities, related to orientation, availability, responsibility, vision, and recognition of the data resource. (Brackett 2011)

Data culture insights are any insights necessary for thoroughly understanding the organization’s existing fragmented data culture and developing a cohesive data culture for properly managing data as a critical resource of the organization. (Brackett 2012)

Data culture integration is the thorough understanding of the existing fragmented data culture within a common data culture, the designation of a preferred data culture, and the transition toward that preferred data culture. It’s the act or process of integrating and coordinating the organization’s data management function and processes into a cohesive data culture. (Brackett 2012)

Data culture integration concept is to resolve the fragmented data culture and create a cohesive data culture for the management of a critical data resource. A thorough understanding of the current fragmented data culture leads to its resolution and the creation of a cohesive data culture. (Brackett 2012)

Data culture quality is how well the data culture components contribute to the overall data management quality. (Brackett 2011)

Data culture survey is the act of surveying the current data management practices in an organization and documenting the results of that survey. (Brackett 2012)

Data culture survey concept is that the existing fragmented data culture in an organization is leading to the creation of increasing quantities of disparate data that are impacting business activities. (Brackett 2012)

Data culture survey objective is to survey and document all of the fragmented data management practices that are explicitly and implicitly being performed by people within and without the organization. (Brackett 2012)  

Data culture transformation is the formal process of transforming a fragmented data culture to a cohesive data culture, within the context of a common data culture, according to the preferred data culture. It’s a subset of overall data culture transition that includes transforming the data orientation, data availability, data responsibility, data vision, and data recognition. (Brackett 2012)

Data culture transformation concept is that all data culture transformation will be done within the context of a common data culture using the preferred data culture. The best existing data culture practices are combined with new data culture practices to provide a cohesive data culture. (Brackett 2012)

Data culture transformation objective is to transform the existing fragmented data culture to a cohesive data culture to support management of data as a critical resource of the organization. The objective is more than just documenting the existing fragmented data culture. It’s a precise, detailed process that creates a cohesive data culture. (Brackett 2012)

Data culture transition is the transition of an organization’s data culture from a fragmented data culture state, through a formal data culture state, to a cohesive data culture state. It’s a pathway that is followed from a fragmented data culture to a cohesive data culture. It’s unique to each organization depending on their existing data culture and desired data culture. (Brackett 2012)

Data culture variability is a state where all aspects of data management are inconsistent, characterized by variations, and are not true to the concepts and principles for managing data as a critical resource. The management procedures are highly variable and that variability is pervasive throughout the organization. (Brackett 2012)

Data culture variability principle states that every organization has a level of variability that must be accepted and clarified, and that any variability above that acceptable level must be resolved. (Brackett 2012)

Data currentness is a measure of how well the data values remain current with the business. (Brackett 2011)

Data currentness assurance is the proactive process of analyzing the business information demand and ensuring that the data collected meet the currentness requirements of the business. (Brackett 2011)

Data currentness control is the reactive process of determining the data currentness and how well that currentness supports the business information demand. It’s a review process that documents the currentness of the existing data. (Brackett 2011)

Data de-coherence is an interference in the coherent understanding of the true meaning of data with respect to the business. It is due to the variability in the meaning, structure, and integrity of the data. The variability is large in a disparate data resource, leading to a large data de-coherence. (Brackett 2012)

Data definition inheritance principle states that specific data definitions can inherit fundamental data definitions or other specific data definitions to minimize the size and increase the consistency of specific data definitions. (Brackett 2011)

Data definition variability is the situation where data definitions are vague and have a wide range of variability that contributes little to understanding the data resource. (Brackett 2012)

Data deluge is the situation where massive quantities of data are being captured and stored at an alarming rate. These data are being captured by traditional means, by scanning, by imaging, by remote sensing, by machine generation, and by derivation. Those data are being stored on personal computers, networks, departmental computers, and mainframe computers. The quantity of data in many organizations is increasing exponentially. (Brackett 2011, 2012)

Data denormalization is the process that adjusts the normalized data structure for optimum performance in a specific operating environment, without compromising the normalized data structure. (Brackett 2011)

Data de-optimization is the technique that transforms the logical data structure into the deployment data structure for the data sites where the databases will be implemented. It deals with the specific data that will be maintained in different data sites. (Brackett 2011)

Data deployment rule specifies how the data are deployed from the primary data site to secondary data sites, and how those deployed data are kept in synch with the primary data site. (Brackett 2012)

Data depot is a place for storing data for formal data transformation. It’s a staging area or work area for transforming data independent of the data source or data target. (Brackett 2012)

Data derivation – See Derive data.

Data derivation rule is a data integrity rule that specifies the contributors to a derived data value, the algorithm for deriving the data value, and the conditions for deriving a data value. (Brackett 2011)

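A minimal sketch of a data derivation rule captured as data, assuming hypothetical contributors, algorithm, and condition; the names are illustrative only.

    def derive_order_total(line_amounts, discount_percent):
        # Algorithm: sum the contributing line amounts, then apply the discount.
        return sum(line_amounts) * (1 - discount_percent / 100)

    order_total_derivation_rule = {
        "derived_data": "Customer Order Total Amount",
        "contributors": ["Order Line Extended Amount", "Customer Order Discount Percent"],
        "algorithm":    derive_order_total,
        "condition":    "Derive only when the order status is 'Closed'.",
    }

    print(order_total_derivation_rule["algorithm"]([100.0, 50.0], 10))  # 135.0
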
Data dilemma is the situation where the ability to meet the business information demand is being compromised by the continued development of large quantities of disparate data. (Brackett 2011)

Data dimensions are the surrounding data entities that qualify the data focus. (Brackett 2011)

Data discovery is the process of identifying all the data that are at the organization’s disposal, and learning the content and meaning of those data. It’s the process of finding all the data, understanding those data, and using those data to meet the business information demand. (Brackett 2011)

Data documentation design principle states that all data resource data must be formally designed the same as business data. Data resource data are part of the data resource, the same as business data, and need to be designed the same as business data. (Brackett 2011)

Data documentation variability is the variability that exists with the documentation about a disparate data resource. Ideally, all components of the organization’s data resource are formally documented and readily available. (Brackett 2012)

Data domain is a set of allowable values for a data attribute. (Brackett 2011)

Data domain profiling analyzes the existing domain of data values for data items in a database. The existing data values, their frequency of distribution, variability, missing values, existence of multiple values, possibility of redundancy, and so on, are analyzed and documented. The analysis can identify the variability in data values, both within a data file and across data files. (Brackett 2012)

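A minimal sketch of data domain profiling for one data item, using hypothetical data values; the profile captures the frequency distribution, missing values, and distinct values that reveal variability and possible anomalies.

    from collections import Counter

    gender_code_values = ["M", "F", "F", None, "U", "M", "F", "", "female"]

    present = [v for v in gender_code_values if v not in (None, "")]
    profile = {
        "value_frequency": Counter(present),
        "missing_values":  len(gender_code_values) - len(present),
        "distinct_values": len(set(present)),
    }
    print(profile)  # 'female' stands out as a variation to be investigated
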
Data editing – See Edit data.

Data engineering is the discipline that designs, builds, and maintains the organization’s data resource and makes the data available to information engineering. It’s a formal process for developing a comparate data resource. Data engineering is also responsible for maintaining the disparate data resource and for transforming that disparate data resource to a comparate data resource. (Brackett 2011, 2012)

Data entity is a person, place, thing, event, or concept about which an organization collects and manages data. It represents a data subject in the logical data model. The name of a data entity is singular, since it represents a collection of data occurrences. (Brackett 2011)

Data entity fragmentation is the situation where data entities are created when data attributes are removed, but those data entities are not merged when they represent the same data entity. (Brackett 2011)

Data entity hierarchy is a hierarchical structure of data entities with branched one-to-one data relations between the parent data entity and the subordinate data entities. It represents a mutually exclusive, or can-only-be, situation between the subordinate data entity and the parent data entity. (Brackett 2011)

Data entity normalization, commonly referred to as just data normalization, deals with the normalization of data attributes within and between data entities. (Brackett 2011)

Data entity optimization, commonly referred to as data optimization, is the technique of making sure that data attributes removed from a data entity as a result of data normalization are optimized into the appropriate data entity. (Brackett 2011)

Data entity partitioning places data entities in different data sites. If a data entity appears in the data deployment schema, then that data entity is maintained at the data site. (Brackett 2011)

Data entity-relation diagram shows the arrangement and relationships between data entities. It contains only data entities and the data relations between those data entities. It does not contain any of the data attributes in those data entities, nor does it contain any roles played by the data attributes. (Brackett 2011)

Data error is a data value that provides incorrect or false knowledge about the business, or about business objects and events that are important to the business. (Brackett 2011)

Data extract is the formal process of identifying and extracting the preferred disparate data and loading those data into a data depot for data transformation. (Brackett 2012)

Data file is a physical file of data that exists in a database management system, such as a computer file, or outside a database management system, such as a manual file. It is referred to as a table in a relational database. A data file generally represents a data entity, subject to adjustments made during formal data denormalization. (Brackett 2011)

Data file-relation diagram shows the arrangement and relationships between data files. It contains only the data files and the data relations between those data files. It does not contain any of the data items in those data files. (Brackett 2011, 2012)

Data file variability is the variability that exists within and across data files in a disparate data resource. Data file variability can exist at the data file level, the data record level, and the data instance level. (Brackett 2012)

Data focus is the central data entity that is being analyzed by the data warehouse. (Brackett 2011)

Data governance is a term that is not used because it represents a hype-cycle. See Data resource management, Data culture. (Brackett 2011)

Data heritage is documentation of the source of the data and their original meaning at the time of data capture. It’s the content and meaning of the data at the time of their origination and as they move from their origin to their current data location. It describes the original content and meaning of the data when initially captured. (Brackett 2011, 2012)

Data hierarchy aggregation identifies the level of aggregation of a hierarchy, such as the product hierarchy in a data warehouse. (Brackett 2011)

Data in context are individual facts that have meaning and can be readily understood. They are raw facts wrapped with meaning. (Brackett 2011)

Data-information-knowledge cycle is the cycle from data, to data in context, to specific or general information, to knowledge, and back to data when stored. (Brackett 2012)

Data inheritance is the process of using fundamental data to support consistent definitions of specific data. (Brackett 2011)

Data instance is a specific set of data values for the characteristics in a data occurrence that are valid at a point in time or for a period of time. Many data instances can exist for each data occurrence, particularly when historical data are maintained. One data instance is the current instance and the others are historical instances. (Brackett 2011)

Data instant is the point in time or the timeframe the data represent in the business world. (Brackett 2011)

Data integration is the merging of data from multiple, often disparate, sources, usually based on some record of reference, to provide a single output, such as an interim database or report. It does not resolve any existing data disparity, and may further increase data disparity. It is seldom done within the context of a common data architecture. (Brackett 2011, 2012)

Data integration key is a set of data characteristics that could identify possible redundant physical data occurrences in a disparate data resource. It’s not a primary key because it does not uniquely identify each data occurrence. It’s not a foreign key because no corresponding primary key exists. (Brackett 2012)

Data integration key index is a table showing the values of a data integration key for each data occurrence, in all data files, within a data subject. (Brackett 2012)
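
As an illustration only (not part of the formal definition), the following minimal Python sketch shows how a data integration key index might be built across several data files for one data subject. The field names, file names, and records are hypothetical assumptions.

    # Hypothetical illustration: build a data integration key index for one
    # data subject (e.g., Employee) across several disparate data files.
    # The key (last_name, birth_date) does not uniquely identify occurrences,
    # so matching key values only flag *possible* redundant data occurrences.
    from collections import defaultdict

    data_files = {
        "hr_master":    [{"last_name": "Smith", "birth_date": "1970-03-01", "id": 17}],
        "payroll":      [{"last_name": "Smith", "birth_date": "1970-03-01", "emp_no": "A-17"}],
        "badge_system": [{"last_name": "Jones", "birth_date": "1982-11-09", "badge": 904}],
    }

    integration_key = ("last_name", "birth_date")
    key_index = defaultdict(list)

    for file_name, records in data_files.items():
        for record in records:
            key_value = tuple(record[field] for field in integration_key)
            key_index[key_value].append(file_name)

    # Key values that appear in more than one data file are candidates for
    # redundant physical data occurrences and still need human verification.
    possible_redundancy = {k: v for k, v in key_index.items() if len(v) > 1}
    print(possible_redundancy)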

Data integration quality is a measure of how well the data resource integration process is performed, based on how well the resulting comparate data resource supports the current and future business information demand. (Brackett 2012)

Data integrity is a measure of how well the data are maintained in the data resource after they are captured or created. It indicates the degree to which the data are unimpaired and complete according to a precise set of rules. (Brackett 2011)

Data integrity failure principle states that a violation action and a notification action must be taken on any data that fail precise data integrity rules. The violation and notification actions to be taken must be specified and followed. (Brackett 2012)

Data integrity notification action specifies the action to be taken for notifying someone that data have failed the data integrity rules and a violation action was taken. The action may alert someone who is responsible for taking action, or place an appropriate entry in an error log that will be reviewed by someone at a later date. The notification action includes the implementation of an algorithm to correct the data. (Brackett 2011, 2012)

Data integrity rule definition principle states that each data integrity rule must be comprehensively defined, just like data entities and data attributes are comprehensively defined. The definition must explain the purpose of the data integrity rule and the action that is taken. (Brackett 2011)

Data integrity rule edit principle states that precise data integrity rules must be denormalized as the proper data structure is denormalized and be implemented as data edits. Data integrity rules are the logical specification and must match the logical data structure, while data edits are the physical specification and must match the physical data structure. (Brackett 2011, 2012)

Data integrity rule lockout principle states that the precise data integrity rules must be reviewed to ensure that the rules do not result in a lockout, where data are prevented from entering the data resource. (Brackett 2011)

Data integrity rule management principle states that the management of data integrity rules must be proactive to make optimum use of resources and minimize impacts to the business. (Brackett 2011)

Data integrity rule name principle states that every data integrity rule must be formally and uniquely named according to the data naming taxonomy and supporting vocabulary. (Brackett 2011)

Data integrity rule normalization principle states that data integrity rules are normalized to the data resource component which they represent or on which they take action. (Brackett 2011)

Data integrity rule notation principle states that each data integrity rule must be specified in a notation that is acceptable and understandable to business and data management professionals, must be based on mathematical and logic notation where practical, and must use symbols readily available on a standard keyboard. (Brackett 2011)

Data integrity rule type principle states that seven different types of data integrity rules must be identified and defined. (Brackett 2011)

Data integrity rules specify the criteria that need to be met to ensure that the data resource contains the highest quality necessary to support the current and future business information demand. (Brackett 2011)

Data integrity variability is the variability that exists with data edits in a disparate data resource. Ideally, data integrity rules are defined during logical data modeling and are transformed to data edits during physical data modeling. (Brackett 2012)

Data integrity violation action specifies the action to be taken with the data when the data violate a data integrity rule. That action may be to override the error with meaningful data, to suspend the data pending further correction, to apply a default data value, to accept the data, or to delete the data. Overriding the error could include implementing an algorithm to correct the data. (Brackett 2011, 2012)
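
A minimal, hypothetical sketch of how a data integrity rule with a violation action and a notification action might be implemented. The rule, the allowed domain, the default value, and the error log are illustrative assumptions, not a specification from the book.

    # Hypothetical illustration of a data integrity rule with a violation
    # action (apply a default data value) and a notification action (write
    # an entry to an error log for later review).
    error_log = []

    def apply_employee_type_rule(record):
        allowed = {"FT", "PT", "TEMP"}          # assumed unconditional data domain
        if record.get("employee_type") not in allowed:
            bad_value = record.get("employee_type")
            record["employee_type"] = "FT"      # violation action: default data value
            error_log.append(                   # notification action: error log entry
                f"employee_type value {bad_value!r} failed the data value rule"
            )
        return record

    print(apply_employee_type_rule({"employee_id": 42, "employee_type": "XX"}))
    print(error_log)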

Data inventory is the process of identifying and documenting all of the data at an organization’s disposal so those data can be readily understood and used to develop and maintain a comparate data resource that supports the business information demand. It begins the process of understanding disparate data and developing a comparate data resource within a common data architecture. (Brackett 2012)

Data inventory concept is that all data at the organization’s disposal will be completely and comprehensively inventoried, and documented in one location that is readily available to anyone in the organization, so that the organization at large understands the content, meaning, and quality of those data. (Brackett 2012)

Data inventory objective is to identify, inventory, and document all data that currently exist in the organization’s data resource or are readily available to the organization so that those data can be readily understood and used to support the current and future business information demand. It raises the awareness of the data that exist and solves the first problem with disparate data. (Brackett 2012)

Data inventory process identifies the existing data, collects the existing documentation, and enhances that documentation with additional insights. (Brackett 2012)

Data item is an individual field in a data record and is referred to as a column in a relational database. A data item represents a data attribute, subject to adjustments made during formal data denormalization. (Brackett 2011)

Data item content is the physical variation in the data values contained in a data item. (Brackett 2012)

Data item format is the physical format of the data value contained in the data item. (Brackett 2012)

Data item length is the physical length of the data value contained in the data item. (Brackett 2012)

Data item structure is a list showing the data items contained in each data file and the roles played by those data items. It shows the primary keys, foreign keys to data files, and all the data items contained in a data file. (Brackett 2011)

Data item variability is the variability in the format or content of data items representing the same business fact. It’s a measure of how many different formats or contents exist for a particular data item across data files, and on screens, reports, and forms. (Brackett 2012)

Data key is any data attribute or set of data attributes used to identify a data occurrence within a data entity. (Brackett 2011)

Data key denormalization is the process of implementing data keys in the physical database without compromising the logical data structure. (Brackett 2011)

Data lineage is a description of the pathway from the data source to their current location and the alterations made to the data along that pathway. It is a process to track the descent of data values from their origins to their current data sites. It includes determining where the data values originated, where they were stored, and how they were altered or modified. It’s a history of how the content and meaning of the data were altered from their origin to their present location. (Brackett 2011, 2012)

Data load is the formal process of loading the target database after the data transformation has been completed. The transformed data are edited according to the preferred data integrity rules, loaded into the target database, and reviewed to ensure the load was successful before the data are released for use. (Brackett 2012)

Data loading – See Load data.

Data management optionality principle states that organizations have the opportunity to manage data as a critical resource of the organization. Each organization can choose whether or not to take that opportunity and develop a comparate data resource. (Brackett 2011)

Data management quality is how well the data management components contribute to overall data resource quality. (Brackett 2011)

Data megatype is a broad grouping of data based on their structure and physical management. (Brackett 2011)

Data migration is the periodic movement of data from one database or platform to another, depending on the physical environment and the needs of the organization. The migration seldom includes a thorough understanding of the data and is usually done outside of any context. The term migration is acceptable because periodic movements can be made depending on the conditions. (Brackett 2012)

Data mining is the analysis of evaluational data to find unknown and unsuspected trends and patterns, using techniques such as artificial intelligence and fuzzy logic. (Brackett 2011)

Data model includes formal data names, comprehensive data definitions, proper data structures, and precise data integrity rules. A complete data model must include all four of these components. (Brackett 2011)

Data model concept is the development of a data model, for a specific audience, representing a particular business activity, using appropriate data modeling techniques, based on data contained in the Data Resource Guide. The data model is an expression of knowledge about the data resource that is presented in an appropriate form for a specific audience. (Brackett 2011)

Data name is a label for a fact or a set of related facts contained in the data resource, appearing on a data model, or displayed on screens, reports, or documents. (Brackett 2011)

Data name abbreviation is the shortening of a primary data name to meet some length restriction. (Brackett 2011)

Data name abbreviation algorithm is a formal procedure for abbreviating the primary data name using an established set of data name word abbreviations. (Brackett 2011)

Data name abbreviation scheme is a combination of a set of data name word abbreviations and a data name abbreviation algorithm. (Brackett 2011)
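
To illustrate the idea only, the following sketch combines a set of data name word abbreviations with a simple abbreviation algorithm applied when a primary data name exceeds a length restriction. The abbreviations and the length limit are invented assumptions.

    # Hypothetical illustration of a data name abbreviation scheme: a set of
    # data name word abbreviations plus an algorithm that applies them only
    # when the primary data name exceeds a length restriction.
    word_abbreviations = {          # assumed abbreviations, one per root word
        "Employee": "Empl",
        "Identifier": "Id",
        "Description": "Desc",
    }

    def abbreviate(primary_name: str, max_length: int = 18) -> str:
        # apply the word abbreviations only if the name exceeds the limit
        if len(primary_name) <= max_length:
            return primary_name
        words = [word_abbreviations.get(w, w) for w in primary_name.split()]
        return " ".join(words)

    print(abbreviate("Employee Name"))                  # short enough, unchanged
    print(abbreviate("Employee Position Description"))  # abbreviated to fit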

Data name - definition synchronization principle states that a comprehensive data definition and a formal data name must be kept in synch with each other. Formal data names help guide development of comprehensive data definitions, and comprehensive data definitions help verify formal data names. Synchronization is a two-way, value-added approach ensuring that formal data names match comprehensive data definitions. (Brackett 2011, 2012)

Data name homonym is different business facts with the same data name. (Brackett 2011, 2012)

Data name synonym is the same business fact with different data names. (Brackett 2011)

Data name variability is the situation where data names are informal and have a wide range of variability that contributes little to understanding the data resource. (Brackett 2012)

Data name vocabulary is the collection of all twelve sets of common words representing the twelve components of the data naming taxonomy. (Brackett 2011)

Data name word abbreviation is the formal abbreviation for each word used in a data name. The abbreviation must be unique for the root word and for all manifestations of the root word, and it must not create another word. (Brackett 2011)

Data naming taxonomy provides a primary name for all existing and new data, and all components of the data resource. It provides a way to uniquely identify all components of the data resource as well as all of the disparate data. It meets all of the data naming criteria and complies with the three components of semiotic theory. (Brackett 2011, 2012)

Data normalization is the process that brings data into a normal form that minimizes redundancies and keeps anomalies from entering the data resource. It provides a subject-oriented data resource based on business objects and events. (Brackett 2011)
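
As a simplified, hypothetical illustration of the idea (not the full technique), the sketch below splits records that mix two data subjects into subject-oriented structures, removing the repeated facts. The subjects, fields, and values are assumptions made for the example.

    # Hypothetical illustration of data normalization: records mixing Employee
    # and Department facts are split into two subject-oriented structures,
    # with the redundant Department facts stored only once.
    denormalized = [
        {"emp_id": 1, "emp_name": "Smith", "dept_code": "FIN", "dept_name": "Finance"},
        {"emp_id": 2, "emp_name": "Jones", "dept_code": "FIN", "dept_name": "Finance"},
    ]

    employees = [
        {"emp_id": r["emp_id"], "emp_name": r["emp_name"], "dept_code": r["dept_code"]}
        for r in denormalized
    ]
    departments = {r["dept_code"]: {"dept_name": r["dept_name"]} for r in denormalized}

    print(employees)      # Employee data subject, one occurrence per employee
    print(departments)    # Department data subject, stored once per department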

Data occurrence is a logical record that represents the existence of a business object or the happening of a business event in the business world, such as an employee, a vehicle, and so on. It represents a business object existence or a business event happening. (Brackett 2011, 2012)

Data occurrence denormalization is the process of splitting the data occurrences in a data entity into two or more data files for processing efficiency or for database limitations. (Brackett 2011)

Data occurrence group is a subset of data occurrences within a specific data subject that meet specific selection criteria. A data occurrence group represents a business object group or a business event group. (Brackett 2011, 2012)

Data occurrence history is when the entire data occurrence is retained when one or more data values in that data occurrence change. (Brackett 2011)

Data occurrence partitioning places data occurrences in different data sites. (Brackett 2011)

Data occurrence redundancy is the existence of multiple data occurrences for the same existence of a business object or happening of a business event. (Brackett 2012)

Data occurrence role is a role that could be played by a specific data occurrence, such as a maintenance vendor or a lease vendor. (Brackett 2011)

Data optimization – See Data entity optimization.

Data optionality indicates whether a data value is required or is optional. Most of these labels are not specific. (Brackett 2011)

Data orientation is the orientation of data resource management in response to business information needs which allows the business to operate effectively and efficiently in the business world. (Brackett 2012)

Data origin is the location where a data value originated, whether those data were collected, created, measured, generated, derived, or aggregated. (Brackett 2012)

Data overload is a deluge of data or data in context coming at a recipient that is not relevant and timely. It’s a deluge of non-information that is not wanted by the recipient. (Brackett 2011, 2012)

Data ownership is not used because people don’t own the data. See Data steward. (Brackett 2011)

Data perspective is the subject area represented by the data entity-relation diagram, and includes a data focus and data dimensions. (Brackett 2011)

Data precision is how precisely a measurement was made and how many significant digits are in the measurement. (Brackett 2011)

Data product is a major independent set of documentation of any type that contains the names, definitions, structure, integrity, and so on, of disparate data. It’s anything about the data resource, electronic or manual, that is a product of some development effort. A data product can be an information system, a database, a data dictionary, a major project, a major data model,  or anything else that provides insight into the existing disparate data. It is the highest level in the data product model. (Brackett 2012)

Data product code is any coded data value that exists in a data product unit or data product unit variation. It represents a specific property of the subject of interest. (Brackett 2012)

Data product code cross-reference is a cross-reference between a data product code or variation and a data reference set variation. Each data product code or variation is cross-referenced to a data reference set variation to which it belongs. (Brackett 2012)

Data product code variation is a recursion of a data product code to document multiple variations contained in a data product code. However, only one level of recursion is allowed. The data product code variation is not intended to document a hierarchy of data product codes. (Brackett 2012)

Data product concept is that the existing data resource, any documentation about the existing data resource, and any insights people have about the existing data resource are a product of some development effort. It’s those products that need to be identified and documented to fully understand the existing disparate data. (Brackett 2012)

Data product model is a subset of data resource data architecture pertaining to documentation of an organization’s disparate data resource. The input for the documentation comes from the data inventory process. (Brackett 2012)

Data product set is a major grouping of data within a data product. It may represent a data file, a data record, a data record type, a screen, a report, a form, a data entity, an application program, and so on. (Brackett 2012)

Data product set cross-reference is a cross-reference between a data product set or variation and a data subject variation solely for the purpose of designating data selections, subsets of data, and data roles, or for designating the manifestations of a data focus. (Brackett 2012)

Data product set variation is a recursion of a data product set to document multiple variations contained in a data product set. However, only one level of recursion is allowed. The data product set variation is not intended to document a hierarchy of data product sets. A data product set variation could be a data record type, a data entity type, changes over time, or any other breakdown of a data product set. (Brackett 2012)

Data product unit is any unit of data within a data product set, such as a data attribute in a data model, a data item in a data record, a data field on a screen or report, a data item in a program, and so on. (Brackett 2012)

Data product unit cross-reference is a cross-reference between a data product unit or variation and a corresponding data characteristic variation. Each data product unit or variation is cross-referenced to a data characteristic variation. (Brackett 2012)

Data product unit cross-reference list is a list of the data product units or variations and the corresponding data characteristic variation. (Brackett 2012)

Data product unit variation is a recursion of a data product unit to document multiple variations contained in a data product unit. However, only one level of recursion is allowed. The data product unit variation is not intended to document a hierarchy of data product units. (Brackett 2012)

Data profiling, in the context of data resource integration, is the process of analyzing the data values in databases to determine possible data meaning, data structure, and data integrity rules in preparation for data resource integration. These determinations must be verified before they can be accepted as fact and used for data resource integration. (Brackett 2012)
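
A minimal sketch, assuming a small in-memory table, of the kind of analysis data profiling performs. The column names and the inferences drawn are illustrative only, and any inferred meaning would still have to be verified before being accepted as fact.

    # Hypothetical illustration of data profiling: examine the values in each
    # data item to suggest a possible domain and format. The suggestions are
    # candidates only and must be verified before use.
    rows = [
        {"status": "A", "hire_date": "2001-04-15"},
        {"status": "I", "hire_date": "1998-11-02"},
        {"status": "A", "hire_date": "2015-07-30"},
    ]

    def profile(rows):
        report = {}
        for col in rows[0].keys():
            values = [r[col] for r in rows]
            report[col] = {
                "distinct_values": sorted(set(values)),
                "max_length": max(len(v) for v in values),
            }
        return report

    print(profile(rows))   # e.g., status looks like a small coded domain {A, I}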

Data property is a single feature, trait, or quality within a grouping or classification of features, traits, or qualities belonging to a data characteristic. (Brackett 2012)

Data provenance is provenance applied to the organization’s data resource. (Brackett 2011)

Data provenance principle states that the source of data, how the data were captured, the meaning of the data when they were first captured, where the data were stored, the path of those data to the current location, how the data were moved along that path, and how those data were altered along that path must be documented to ensure the authenticity of those data and their appropriateness for supporting the business. (Brackett 2011)

Data quality is a subset of data resource quality dealing with data values. (Brackett 2011)

Data quality assurance is the proactive process of ensuring that data adequately support the business information demand. It determines the data accuracy, data completeness, and data currentness required by the business information demand and ensures that the data meet that demand. (Brackett 2011)

Data quality control is the reactive process of determining how well the data support the business information demand. It determines the existing data accuracy, data completeness, and data currentness and evaluates how well each supports the business information demand. (Brackett 2011)

Data recasting – See Recast data.

Data recasting rule is a data rule that specifies the adjustment of data values to a specific time period, such as adjusting financial data values to a specific time period for a comparison of trends independent of monetary inflation. (Brackett 2012)

Data recognition is the situation where management of the data resource is recognized as professional and directly supporting the business activities of the organization. (Brackett 2012)

Data reconstruction – See Reconstruct data.

Data reconstruction rule is a data rule that specifies the reconstruction of historical data into full historical data instances in preparation for data transformation. The data reconstruction rule shows the conditions for data reconstruction and the data reconstruction that is performed. (Brackett 2012)

Data record is a physical grouping of data items that are stored in or retrieved from a data file. It is referred to as a row or tuple in a relational database. A data record represents a data instance. (Brackett 2011)

Data record group is a subset of data records based on specific selection criteria. A data record group represents a data occurrence group in a data file. (Brackett 2012)

Data rederivation rule is a data integrity rule that specifies when any rederivation is done after the initial derivation. A derived data value may be rederived when the conditions change or the contributors change, which often occurs in a dynamic business environment. The derivation algorithm and the contributors are usually the same, but timing of the rederivation needs to be specified. (Brackett 2011, 2012)

Data redundancy is the unknown and unmanaged duplication of business facts in a disparate data resource. It’s the same facts, for the same data occurrence, for the same time period. It’s the situation where a single business fact is stored in more than one location, and the locations may not be in synch. It’s the unnecessary duplication of data that is a major contributor to data disparity. (Brackett 2011, 2012)

Data redundancy factor is the number of sources for a single business fact in an organization’s data resource. (Brackett 2011)

Data reference item is a single set of coded data values, data names, and data definitions representing a single data property in a data reference set variation. (Brackett 2012)

Data reference item list is a listing of all of the data reference items in a data reference set variation, including the data reference item codes, data reference item names, and data reference item definitions. (Brackett 2012)

Data reference item matrix is a matrix of all of the data reference items, for all of the data reference set variations, for a single data subject, including the coded data values, data reference item names, and data reference item definitions. (Brackett 2012)

Data reference item translation rule is a data translation rule that translates coded data values and names between data reference items in preferred and non-preferred data reference set variations within a data subject. (Brackett 2012)
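
A hypothetical sketch of a data reference item translation rule between a non-preferred and a preferred data reference set variation. The coded values and their meanings are invented for illustration.

    # Hypothetical illustration: translate coded data values from a
    # non-preferred data reference set variation to the preferred variation
    # within the same data subject (e.g., Employee Type).
    to_preferred = {            # non-preferred code -> preferred code
        "1": "FT",              # "1" meant full time in the old coding
        "2": "PT",              # "2" meant part time in the old coding
        "9": "TEMP",            # "9" meant temporary in the old coding
    }

    def translate(code: str) -> str:
        try:
            return to_preferred[code]
        except KeyError:
            raise ValueError(f"code {code!r} has no preferred translation")

    print([translate(c) for c in ["1", "9", "2"]])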

Data reference set is a specific set of data codes for a general topic, such as a set of management level codes in an organization. (Brackett 2011)

Data reference set variation is a variation of a data reference set that has a difference in the domain of data reference items, their coded data values, their names, or substantial difference in the data definitions. Any difference, however slight, constitutes a different data reference set variation. (Brackett 2012)

Data refining is no longer used. See Data resource transition. (Brackett 2012)

Data relation is an association between data occurrences in different data subjects or data entities, or within a data subject or data entity, or between data records in different data files or within a data file. It provides the connections between data subjects for building the proper data structure and between data files for navigating in the database. (Brackett 2011, 2012)

Data relation variability is the variability that exists with the data relations, the names and cardinalities for those data relations, primary keys, and foreign keys. Ideally, data relations with their names and cardinalities, primary keys, and foreign keys are formally designed. However, that is far from the norm in a disparate data resource. (Brackett 2012)

Data replication is the consistent copying of data from one primary data site to one or more secondary data sites. The copied data are kept in synch with the primary data on a regular basis. (Brackett 2011)

Data resource is a collection of data (facts), within a specific scope, that are of importance to the organization. It is one of the four critical resources in an organization, equivalent to the financial resource, the human resource, and real property. The term is singular, such as the organization data resource, the student data resource, or the environmental data resource. (Brackett 2011, 2012)

Data resource agility principle states that an organization’s data resource must be agile enough to change in a manner that supports the business change needed to remain successful in a dynamic business world. The data resource must change so that it provides one version of truth about the business world where the organization operates. (Brackett 2012)

Data resource clarity is the state of being clear and understandable. The data resource must be free from doubt, obscurity, and ambiguity. (Brackett 2011)

Data resource comparity principle states that if the data resource management rules are followed, a comparate data resource will be developed. The rules create the right conditions for development of a comparate data resource. If the rules are not followed, a disparate data resource will be developed. (Brackett 2011, 2012)

Data resource data are any data necessary for thoroughly understanding, formally managing, and fully utilizing the data resource to support the business information demand. (Brackett 2011)

Data resource data aspect principle states that data documentation must include both the technical aspect and the semantic aspect of the data resource. Both are needed for all audiences to fully understand, manage, and utilize the organization’s data resource. (Brackett 2011)

Data resource data model is a complete data model of the data resource data contained in the Data Resource Guide. (Brackett 2011)

Data resource direction is the course of data resource development toward a particular goal or objective. (Brackett 2011)

Data resource discovery principle states that data resource integration is a discovery process where any insights about the data resource are captured, understood, and documented. The process is performed by people, who may be supported by automated tools. (Brackett 2012)

Data resource drift is the natural, steady drift of a data resource towards disparity if its development is not properly managed and controlled. The natural drift is toward a disparate, low quality, complex data resource. The longer the drift is allowed to continue, the more difficult it will be to achieve a comparate data resource. The natural drift is continuing unchecked in most public and private sector organizations today, and will continue until organizations consciously alter that natural drift. (Brackett 2011, 2012)

Data resource elegance is the state of being beautiful, graceful, and dignified. It’s high grade, and has desirable characteristics and qualities. (Brackett 2011)

Data resource excellence is the quality or state of a data resource being excellent, having outstanding or valuable data quality, being superior in supporting the business information demand. (Brackett 2011)

Data Resource Guide provides a complete, comprehensive, integrated index to the organization’s data resource. It provides a thorough understanding of the data resource, and is readily available to everyone in the organization so they can use the data resource to meet their business needs. It provides one version of truth about the data resource. (Brackett 2011, 2012)

Data Resource Guide principle states that the data resource data must be placed in a comprehensive Data Resource Guide which serves as the primary repository for all data resource data. It contains data resource data about disparate data, comparate data, and the transformation of disparate data to comparate data. The Data Resource Guide contains the single version of truth about the data resource. (Brackett 2011)

Data resource hazard is the existence of disparate data. A greater volume and a greater degree of disparity make the hazard greater. (Brackett 2011)

Data resource horizon is the distance into the future that an organization is interested in planning for its data resource development. (Brackett 2011)

Data resource iatrogenesis principle states that the disparate data resource was caused by or resulted from the actions of the data management professionals and/or business professionals in an effort to create data to meet the business information demand. Unlike medicine, the actions may have been intentional or unintentional. (Brackett 2011, 2012)

Data resource information is any set of data resource data in context, with relevance to one or more people at a point in time or for a period of time. (Brackett 2011)

Data resource information demand is the organization’s continuously increasing, constantly changing need for current, accurate, integrated information about the data resource that is necessary for formally managing the data resource. (Brackett 2011)

Data resource integration is the thorough understanding of existing disparate data within a common data architecture, the designation of preferred data, and the development of a comparate data resource based on those preferred data. It is the act or process to form, coordinate, or blend disparate data into a comparate data resource. It resolves the existing data disparity. (Brackett 2011, 2012)

Data resource integration concept is to resolve the disparate data and produce a comparate data resource that meets the current and future business information demand. The awareness of the data resource and a thorough understanding of that data resource lead to the resolution of the disparate data. (Brackett 2012)

Data resource management is the formal management of the entire data resource at an organization’s disposal, as a critical resource of the organization equivalent to the human resource, financial resource, and real property, based on established concepts, principles, and techniques, leading to a comparate data resource, that supports the current and future business information demand. (Brackett 2011)

Data Resource Management Framework is a framework that represents the discipline for complete management of a comparate data resource. It represents the cooperative management of an organization-wide data resource that supports the current and future business information demand. (Brackett 2011)

Data resource management integration is the overall integration of the management of an organization’s data resource, including integration of the data resource itself and integration of the data culture. It is the process of moving from discordant data resource management to concordant data resource management. (Brackett 2012)

Data resource management transition is the transition from a state of discordant data resource management to a state of concordant data resource management. It includes both data resource transition and data culture transition. The transition has a direction and purpose, and permanence to the extent that a return is not made to discordant data resource management. (Brackett 2012)

Data resource perfection is the state of a data resource being perfect, being free of defective data, having an unsurpassable degree of accuracy to support the business information demand. (Brackett 2011)

Data resource precautionary principle states that if an action or policy has a suspected risk of causing harm to the data resource, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those who advocate taking the action. (Brackett 2011)

Data resource probability neglect is overestimating the odds of not meeting the current business information demand and underestimating the odds of not meeting the future business information demand. (Brackett 2012)

Data resource quality is a measure of how well the data resource supports the current and future business information demand. Ideally, the data resource should fully support all the current and future business information demands of the organization to be considered a high quality data resource. (Brackett 2011)

Data resource quality tolerance is the degree of acceptable variation from perfection that is allowed in the data resource. It’s the acceptable level of quality that is adequate for supporting the business information demand. (Brackett 2011)

Data resource risk is the chance that use of the disparate data will adversely impact the business. (Brackett 2011)

Data resource scope is the total data resource available to an organization. (Brackett 2011)

Data resource simplicity is the state of being simple, uncomplicated, and maintainable. It’s free from pretense and subtlety. (Brackett 2011)

Data resource transformation is the formal process of transforming a disparate data resource to a comparate data resource within the context of a common data architecture according to the preferred data architecture designations. It’s a subset of overall data resource transition that is based on the preferred physical data architecture. It’s a metamorphosis of the physical disparate data to form physical comparate data. (Brackett 2012)

Data resource transformation concept states that all data transformation, whether disparate data to comparate data or comparate data to disparate data, will be done within the context of a common data architecture, using the preferred data architecture designations, according to formal data transformation rules. The best existing disparate data are extracted and transformed to comparate data to create a single, high quality version of truth about the business. (Brackett 2012)

Data resource transformation objective is to transform the best of the existing disparate data to a high quality comparate data resource so it can support the current and future business information demand. The objective is more than just connecting a few databases and merging the data, building bridges between databases, or sending electronic messages over a network. It’s a precise, detailed, and very rigorous process that creates a high quality comparate data resource. (Brackett 2012)

Data resource transition is the transition of an organization’s data resource from a disparate data resource state, through an interim data resource state and a virtual data resource state, to a comparate data resource state. It’s the pathway that is followed from a disparate data resource to a comparate data resource. It’s unique to each organization depending on their current situation and future needs. (Brackett 2012)

Data resource value is the worth and importance of the data resource. Its value is in its usefulness and its reusability. (Brackett 2011)

Data resource variability principle states that every data resource has a level of variability that must be accepted and clarified, and that any variability above that acceptable level must be resolved. (Brackett 2012)

Data responsibility is the assignment of appropriate responsibility for development and maintenance of the data resource to specific individuals. (Brackett 2011)

Data restructuring – See Restructure data.

Data retention rule is a data integrity rule that specifies how long data values are retained and what is done with those data values when their usefulness is over. It specifies the criteria for preventing the loss of critical data through updates or deletion, such as when the operational usefulness is over, but the evaluational usefulness is not over. (Brackett 2011)

Data reviewing – See Review data.

Data rule is a subset of business rules that deals with the data column of the Zachman Framework. They specify the criteria for maintaining the quality of the data resource. (Brackett 2011, 2012)

Data rule domain specifies the data domain in the form of a rule. (Brackett 2011)

Data rule version principle states that data rule versions are designated by the version notation in the data naming taxonomy. (Brackett 2011)

Data scanning in the context of data resource integration is the process of electronically or manually scanning databases or application programs to identify the data stored by databases, or the data used or produced by applications. Data scanning can capture technical insight into the data, but cannot capture semantic insight into the data. (Brackett 2012)

Data selection rule is a data integrity rule that specifies the selection of data occurrences based on selection criteria. (Brackett 2011)

Data sharing concept states that shared data are transmitted over the data sharing medium as preferred data. Any organization, whether source or target, that does not have or use data in the preferred form is responsible for translating the data. (Brackett 2011, 2012)

Data sharing cycle is an ongoing cycle where people understand the data and get involved in sharing data, improving data quality, and promoting data resource integration. (Brackett 2011)

Data site is any location where data are stored, such as a database, a server, a filing cabinet, and so on. (Brackett 2011)

Data source rule specifies the preferred source from which a particular business fact is obtained and the conditions that determine the preferred source. (Brackett 2012)

Data steward is a person who watches over the data and is responsible for the welfare of the data resource and its support of the business information demand, particularly when the risks are high. (Brackett 2011)

Data stewardship principle states that data stewards will be assigned at all levels of an organization with appropriate responsibilities for developing and maintaining a comparate data resource. (Brackett 2011)

Data structure is a representation of the arrangement, relationships, and contents of data subjects, data entities, and data files in the organization’s data resource. (Brackett 2011)

Data structure components principle states that a proper data structure must integrate data entity-relation diagrams, data relations, semantic statements, data cardinalities, and data attribute structures. All of these components must be developed to have a complete proper data structure. (Brackett 2011)

Data structure integration principle states that each component of proper data structures must be stored once and only once within the organization’s data resource, and then integrated as necessary when data structures are presented to specific audiences. (Brackett 2012)

Data structure rule is a data integrity rule that specifies the data cardinality for a data relation between two data entities that applies under all conditions. No exceptions are allowed to a data structure rule. (Brackett 2011)

Data structure uniformity principle states that all proper data structures in an organization must have a uniform format. (Brackett 2011)

Data structure variability is the variability that exists in the improper structure of data in a disparate data resource. Data structure variability can occur with data files, data records, data items, data codes, and data relations, and usually occurs with all five. (Brackett 2012)

Data subject is a person, place, thing, concept, or event that is of interest to the organization and about which data are captured and maintained in the organization’s data resource. Data subjects are defined from business objects and business events, making the data resource subject oriented toward the business. (Brackett 2011)

Data subject-relation diagram represents data subjects and the relations between those data subjects. (Brackett 2011)

Data subject thesaurus is a list of synonyms and related business terms that help people find data subjects that support their business information needs. It’s a list of business terms and alias data entity names that point to the formal data subject name. (Brackett 2011)

Data subject variation is a variation of a data subject to support data selections, subsets of data, and data roles, and to support evaluational data subjects. (Brackett 2012)

Data suitability is how suitable the data are for a specific purpose. The suitability varies with the use of data. The same data may be suitable for one use and unsuitable for another use. (Brackett 2011)

Data tracking is the process of tracking data from the data origin to their current location. It documents any alterations or modifications to the data, the addition of new data, and the creation of derived or aggregated data. It’s a process to help understand and manage the movement of data within and between organizations. (Brackett 2011, 2012)

Data transform is the formal process of transforming disparate data into comparate data, in the data depot, using formal data transformation rules. (Brackett 2012)

Data transformation is the process of transforming disparate data to comparate data, or comparate data to disparate data, within the context of a common data architecture. (Brackett 2012)
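
Read together with the data extract and data load entries above, the following hypothetical sketch shows the overall shape of the extract, transform, and load steps. The source records, the transformation rules applied, and the target structure are all assumptions made for illustration, not the book's prescribed procedure.

    # Hypothetical end-to-end illustration: extract preferred disparate data,
    # transform them in a data depot using simple rules, then load the target.
    source = [{"EMP_NM": "SMITH, JOHN", "HIRE_DT": "04/15/2001"}]

    def extract(records):
        # data extract: pull the preferred disparate data into the data depot
        return list(records)

    def transform(depot):
        # data transform: apply assumed data transformation rules
        # (restructure the name and recast the date to ISO format)
        out = []
        for r in depot:
            last, first = [p.strip() for p in r["EMP_NM"].split(",")]
            month, day, year = r["HIRE_DT"].split("/")
            out.append({"employee_name": f"{first} {last}",
                        "employee_hire_date": f"{year}-{month}-{day}"})
        return out

    def load(target, records):
        # data load: load the transformed data into the target database
        target.extend(records)
        return target

    print(load([], transform(extract(source))))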

Data transformation rule is a data rule that specifies how the data will be transformed within the context of a common data architecture based on the existing disparate data and the preferred physical data architecture. (Brackett 2012)

Data translation – See Translate data.

Data translation principle states that data translation rules are prepared between preferred data designations and non-preferred data designations to assist in the transformation between disparate data to comparate data. (Brackett 2012)

Data translation rule is a data rule that defines the translation of a data value from one unit to another unit. It represents the translation of the values of a single fact to different units, and is not considered to be a data derivation rule. (Brackett 2012)

Data translation scheme was the former name for data translation rule and is no longer used so that all translations could be stated as rules. (Brackett 2012)

Data type hierarchy provides the construct for understanding and managing all data that are currently defined or may be defined in the future. It consists of a hierarchy for data megatypes, base data types, and distinct data types. (Brackett 2011)

Data value is any data value, such as a date, a name, a code, or a description. (Brackett 2011)

Data value domain specifies the data domain as a set of allowable values. (Brackett 2011)

Data value rule is a data integrity rule that specifies the unconditional data domain for a data attribute that applies under all conditions. It specifies the rule with respect to the business, not with respect to the database management system. No exceptions are allowed to a data value rule. (Brackett 2011, 2012)

Data variability is the variation in format and content of a redundant fact stored in a disparate data resource. (Brackett 2011)

Data variability factor is the number of variations in format or content for a single business fact. (Brackett 2011)

Data variation is the variation in the data meaning, data structure, data integrity, data domain, data content and format, and so on. (Brackett 2012)

Data version identifies the specific version of data, such as a date or time frame. Two to four words are usually sufficient to uniquely designate a data version. (Brackett 2011)

Data view schema represents the structure of data as normalized from the business schema. (Brackett 2011)

Data vision is the power of imagining, seeing, or conceiving the development and maintenance of a comparate data resource that meets the current and future business information demand. (Brackett 2012)

Data volatility is a measure of how quickly data in the business world change. (Brackett 2011)

Data volume breadth is how many data entities and data attributes are in the data resource and data models, and how many data files and data items are in the databases. It depends on the number of business facts and how those business facts are grouped into data entities and stored in data files. (Brackett 2012)

Data volume depth is how many data occurrences exist for the data entities and how many data records are stored in the data files. (Brackett 2012)

Data warehousing is the storage of evaluational data for the analysis of trends and patterns in the business. (Brackett 2011)

Database conversion is the process of changing a database management system from one operating environment to another operating environment. The data are not altered in any way. The database management system is simply moved from one operating platform to another. (Brackett 2012)

Database data domain specifies the values allowed in a data attribute with respect to the database management system. (Brackett 2011)

Database data optionality is a general statement about the requirements of a data value with respect to the database management system. The possibilities are usually Required or Optional because that’s all database management systems can handle. (Brackett 2011)

Database merge is the process of merging separate compatible databases together into one single database. The data are not altered in any way. Data records are simply merged into one database. (Brackett 2012)

Datum has historically been defined as the singular form of data related to one fact. (Brackett 2011)

Default data value is a data value that is automatically entered when no other data values are available. (Brackett 2011)

Definition is a statement conveying a fundamental character or the meaning of a word, phrase, or term. It is a clear, distinct, detailed statement of the precise meaning or significance of something. (Brackett 2011)

Deniability is the ability to deny, a valuable but often deceptive ability to deny. (Brackett 2011)

Denotative meaning is the direct, explicit meaning provided by a data definition. (Brackett 2011)

Denotative meaning principle states that a comprehensive data definition must have a strong denotative meaning that limits any individual connotative meanings. (Brackett 2011)

Deployment data architecture is the architecture of the deployment data as they are deployed over a network. It represents the data in the deployment schema. (Brackett 2011)

Deployment schema represents the structure of the logical schema as de-optimized and distributed over several physical databases. (Brackett 2011)

Depot is a place for storing goods; a store or cache; a place for storing and forwarding supplies; a building for railroad or bus passengers or freight. (Brackett 2012)

Derive data is the formal process of deriving target data from source data according to formal data derivation rules. It applies to deriving individual data values, to summarizing operational data, and to aggregating evaluational data to the lowest level of detail desired. (Brackett 2012)

Derived data are data that are obtained from other data, not by the measurement or observation of an object or event. (Brackett 2012)

Derived data – See Fourth normal form.

Descriptive is to describe; referring to, consulting, or grounded in matters of observation or experience; expressing the quality, kind, or condition of what is denoted by a modified term. It is finding out what currently exists and describing it. (Brackett 2012)

Detail data steward is a person who is knowledgeable about the data by reason of having been intimately involved with the data. That person is usually a knowledge worker who has been directly involved with the data for a considerable length of time. (Brackett 2011)

Deterministic is the quality or state of being determined; every event, act, and decision is the consequence of some previous event, act, and decision. (Brackett 2012)

Diagram segmentation principle states that a data entity-relation diagram must be segmented in a manner that is readily understandable by the intended audience. (Brackett 2011)

Dimensional data modeling is used for modeling evaluational data in the analytical tier using analytical data normalization. (Brackett 2011)

Dimensional data structure shows the detail necessary for implementing data entities in a data warehouse. (Brackett 2011)

Discordance is the state of disagreement, a lack of agreement among persons and groups, dissension. It’s tension or strife resulting from a lack of agreement. (Brackett 2011)

Discordant is being at variance; disagreeing; quarrelsome; relating to disagreement or clashing. (Brackett 2012)

Discordant data management is the situation where disagreement exists in the organization about how the data resource should be managed and whether an initiative should be started to formally manage the data resource. (Brackett 2011)  No longer used. See Discordant data resource management.

Discordant data resource management is the situation where the overall management of an organization’s data resource, including the data resource itself and the data culture, has a high variance and disagreement. (Brackett 2012)

Disparate means fundamentally distinct or different in kind; entirely dissimilar. (Brackett 2012)

Disparate data are data that are essentially not alike, or are distinctly different in kind, quality, or character. They are unequal and cannot be readily integrated to meet the business information demand. They are low quality, defective, discordant, ambiguous, heterogeneous data. (Brackett 2011)

Disparate data application is any application that reads and stores disparate data. (Brackett 2012)

Disparate data codes is the situation where data codes can represent single, multiple, or partial data properties; where data codes can represent single or multiple data subjects; where sets of data codes can represent single or multiple data subjects; and where sets of data codes can be complete or partial. (Brackett 2012)

Disparate data cycle is a self-perpetuating cycle where disparate data continue to be produced at an ever-increasing rate because people do not know about existing data or do not want to use existing data. People come to the data resource, but can’t find the data they need, don’t trust the data, or can’t access the data. These people create their own data, which perpetuates the disparate data cycle. The next people that come to the data resource find the same situation, and the cycle keeps going. (Brackett 2011, 2012)

Disparate data definition is any vague definition about the data in the existing data resource. (Brackett 2012)

Disparate data file is a data file that did not go through formal data normalization and data denormalization, and does not represent a single, complete data subject, or related data subjects resulting from formal data denormalization. Disparate data files often represent multiple data subjects, partial data subjects, or a combination of multiple and partial data subjects. (Brackett 2012)

Disparate data instances is the situation where the retention of historical data instances across disparate data files and disparate data records can easily result in large quantities of disparate data. (Brackett 2012)

Disparate data integrity rule is any data integrity rule that exists in the data resource. (Brackett 2012)

Disparate data item is a data item that contains other than an elemental or combined data characteristic. Disparate data items may contain multiple data characteristics, partial data characteristics, or complex data characteristics. (Brackett 2012)

Disparate data name is any informal data name in the disparate data resource. (Brackett 2012)

Disparate data record is a data record that did not go through formal data normalization and denormalization, and does not represent a single data occurrence, or multiple data occurrences resulting from formal data denormalization. (Brackett 2012)

Disparate data resource is a data resource that is substantially composed of disparate data that are dis-integrated and not subject oriented. It is in a state of disarray, where the low quality does not, and cannot, adequately support an organization’s business information demand. (Brackett 2011)

Disparate data resource state is the current state of a disparate data resource in an organization and is outside the context of a common data architecture. The data exhibit the four characteristics of disparate data: unknown existence, unknown meaning, high redundancy, and high variability. The disparate data cycle is in full swing and the natural drift of the data resource is toward disparity. It’s the least desirable state of the data resource and is the initial state for the data resource transition process. (Brackett 2012)

Disparate data resource variability is a state where all aspects of a disparate data resource are inconsistent, characterized by data variations, and are not true to the concepts and principles of a comparate data resource. The data are highly variable in their names, definitions, structure, integrity, and documentation. The variability is pervasive throughout the disparate data resource. (Brackett 2012)

Disparate data shock is the sudden realization that a data dilemma exists in an organization and that it is severely impacting an organization’s ability to be responsive to changes in the business environment. It’s the panic that an organization has about the poor state of its data resource. It’s the realization that disparate data are not adequately supporting the current and future business information demand. It’s the panic that sets in about the low quality of the data resource, that the quality is deteriorating, and very little is being done to improve the situation. (Brackett 2011, 2012)

Disparate data spiral is the spiraling increase in data disparity from existing technologies into new technologies. Both the volume of disparate data and the complexity of that disparity are increasing. (Brackett 2011)

Disparate data structure is any improper data structure that exists in the data resource. (Brackett 2012)

Disparate data understanding principle states that all disparate data variability, including data names, definitions, structure, integrity, and existing documentation, will be understood and formally documented at a detailed level within the context of a common data architecture. (Brackett 2012)

Disparate foreign key is any foreign key defined in a disparate data resource that does not meet the formal criteria for a true foreign key. (Brackett 2012)

Disparate information is any information that is disparate with respect to the recipient. It could result from information acquired from different sources that are organized differently, or it could result from information created from disparate data that provides conflicting information. (Brackett 2012)

Disparate primary key is any primary key defined in a disparate data resource that does not meet the formal criteria for a true primary key. The specific situations are described below. (Brackett 2012)

Distinct data type is a unit style within a base data type, based on variations in content or format. (Brackett 2011)

Documentation known to exist principle states that the data resource data must be known to exist so data management and business professionals can take advantage of those data. (Brackett 2011)

Dormant means inactive or a suppression of activity, but having the capability of becoming active again. (Brackett 2011)

Dormant data are data that exist in the data resource but are never used or are seldom used. Data can be dormant because they are hidden, out of date and don’t represent the real world, or useless for any business activity. (Brackett 2011)

Dynamic data conversion is where the data conversion is based on changing conversion criteria, such as monetary units with varying exchange rates. (Brackett 2011)
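
As an illustration only, the following minimal Python sketch shows dynamic data conversion, assuming a hypothetical table of exchange rates keyed by effective date; because the conversion criteria change over time, the rate is looked up for each transaction rather than fixed in the conversion logic.

    # Minimal sketch of dynamic data conversion (hypothetical rates and dates).
    # The conversion criteria (the exchange rate) vary over time, so the rate
    # is selected by the effective date of the transaction.
    EXCHANGE_RATES = {            # assumed sample data: EUR per USD by date
        "2012-01-01": 0.77,
        "2012-06-01": 0.80,
    }

    def convert_usd_to_eur(amount_usd, effective_date):
        """Convert using the rate in effect on the given date."""
        applicable = max(d for d in EXCHANGE_RATES if d <= effective_date)
        return round(amount_usd * EXCHANGE_RATES[applicable], 2)

    print(convert_usd_to_eur(100.00, "2012-03-15"))   # uses the 2012-01-01 rate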

Dysfunction is a behavior caused by uncertainty and lack of understanding. (Brackett 2011)

Dysfunctional organization exhibits dysfunctional behavior; it is not a learning organization. A lack of knowledge about the business environment limits understanding and results in uncertainty, which perpetuates a dysfunctional organization. (Brackett 2011)

Edit data is the formal process of applying the preferred data edits to the transformed data to ensure the quality of the data before they are loaded into the target database. (Brackett 2012)

Effective data cross-referencing principle states that thoroughly understanding existing disparate data is only effective when those data are inventoried and documented at a detailed level and are cross-referenced to a common data architecture. (Brackett 2012)

Elegance is refined grace, dignified propriety, tasteful richness of design or ornamentation, dignified gracefulness or restrained beauty of style, high grade or quality. Elegance is beautiful and graceful. (Brackett 2011)

Electronic database data includes data located in databases and database management systems. They can be searched and analyzed relatively easily.

Electronic non-database data includes data in word processing documents, spreadsheets, electronic presentations, e-mails, and so on. These data can be searched and analyzed electronically, but with some difficulty. (Brackett 2011)

Elemental data are individual facts that cannot be subdivided and retain any meaning. (Brackett 2012)

Elemental data characteristic is a single elemental fact that cannot be further divided and retain its meaning, such as a month number or a day number within a month. (Brackett 2011, Brackett 2012)

Enhanced disparate data definition is a disparate data definition that is enhanced in some way based on insight gained from another source, such as a person’s memory. (Brackett 2012)

Enterprise:  See Organization.

Enterprise architecture is an initiative to comprehensively describe the architectures in an organization. It describes the terminology, composition, and relationships of each architecture, the relationships between architectures, and the relationships with external organizations. It includes business goals, business processes, hardware, software, data, and information systems. (Brackett 2011)

Entity is a being, existence; independent, separate, or self-contained existence; the existence of a thing compared with its attributes; something that has separate and distinct existence and objective or conceptual reality. (Brackett 2011)

Entity in mathematics is a single existent, such as an employee John J. Smith. (Brackett 2011)

Entity in the data resource is really an entity set in mathematics. The term is often made plural, such as Employees, to hide the fact that it has a different meaning from mathematics. (Brackett 2011)

Entity-relation diagram, often referred to as an E-R diagram or a data structure diagram, is a diagram that shows data entities and the data relations between those data entities. (Brackett 2011)

Entity set in mathematics is a group of like entities, such as Employee. (Brackett 2011)

Entropy is the state or degree of disorderliness. It is a loss of order, which is increasing disorderliness. Entropy increases over time, meaning that things become more disorderly over time. (Brackett 2011)

E-R diagram:  See Entity-relation diagram.

Evaluational data are subject oriented, integrated, time variant, non-volatile collections of data in support of management’s decision making process. They are used to evaluate the business and usually contain summary data with some capability to drill down to detail data. (Brackett 2011)

Excellence is the quality or state of being excellent, having outstanding or valuable quality, being superior, distinguishable by superiority, first class, very good of its kind. (Brackett 2011)

Existing disparate data definition is a disparate data definition that currently exists in a data dictionary, database management software, or some other form of documentation. (Brackett 2012)

Expanded means to increase the extent, number, volume, or scope of something; to enlarge; to express fully or in detail; to write out in full. (Brackett 2011)

Expanded data vision is an intelligent foresight about the data resource that includes the scope of the data resource, the development direction, and the planning horizon. It’s the situation where the scope of the data resource includes the entire data resource, the development direction is aligned with the business and technology, and the planning horizon is realistic. (Brackett 2011, 2012)

Expect anything principle states that when seeking to understand and resolve disparate data, anything should be expected. One should expect any situation, even if it seems irrational. (Brackett 2012)

Explicit data culture variability is the variability that can be readily seen, or identified in documented procedures and data management actions, pertaining to data orientation, data availability, data responsibility, data vision, and data recognition. (Brackett 2012)

Explicit data error is a data error that is readily visible and known. Explicit data errors are routinely identified and made apparent through data edits. (Brackett 2011)

Explicit data integrity rule principle states that any implicit data integrity rule shown on a proper data structure must be shown explicitly in a precise data integrity rule. All data integrity rules must be stated explicitly so they can be enforced. (Brackett 2011)

Explicit data transformation rules are stated as a formal data rule using specific notations. (Brackett 2012)

Explicit disparate data integrity rule is a disparate data integrity rule that is explicitly stated in the data documentation or in a data model. (Brackett 2012)

Explicit disparate data name is a disparate data name that exists in the data resource, such as a data file name. (Brackett 2012)

Explicit disparate data resource variability is the variability that can be readily seen or identified in the data names, definitions, structure, integrity, and documentation of a disparate data resource. (Brackett 2012)

Explicit disparate data structure is a disparate data structure that is explicitly defined in the documentation or in a data model. (Brackett 2012)

Explicit knowledge, also known as formal knowledge, is knowledge that has been codified and stored in various media, such as books, magazines, tapes, presentations, and so on, and is held for mankind, such as in a reference library or on the web. It is readily transferable to other media and capable of being disseminated. (Brackett 2011, Brackett 2012)

External data tracking is data tracking in an environment where the organization does not have control of the data. It usually deals with data tracking between organizations, where changes to the data may not be known. (Brackett 2011, 2012)

External schema is the structure of the data used by programs. (Brackett 2011)

Extract source data is the formal process of extracting the source data from the preferred data source based on the specifications, performing any database conversions necessary between the data source and the data depot, and placing the source data into a data depot for data transformation. (Brackett 2012)

Fact normalization:  See Data attribute normalization.

Farsighted horizon is the situation where an organization’s data resource horizon is very long term. The vision is too far over the horizon to be of interest to most people. (Brackett 2011, 2012)

Federated database is a set of databases that are documented and then interconnected to operate as one database, even when those databases are on different platforms. A person desiring data goes to the federation and gets the data they need without knowing where those data reside. (Brackett 2011)

Fifth normal form, commonly known as inter-entity dependencies, is a technique to find dependencies between entities and document those dependencies as additional data entities. (Brackett 2011)

Final common data architecture is a common data architecture that includes all data in the organization’s data resource and is used to designate a preferred data architecture. (Brackett 2012)

First dimension of data variability is the variability in data names, definitions, structure, integrity, and documentation that exists at any point in time with the operational data in a disparate data resource. (Brackett 2012)

First level of data redundancy is created when disparate data files and disparate data records contain redundant data. The data redundancy can be quite large, particularly in organizations that have been in business for many years and have a large data resource. (Brackett 2012)

First normal form, commonly known as repeating groups, is a technique to find repeating groups and move them to a separate data entity. (Brackett 2011)
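
As an illustration only, the following minimal Python sketch shows first normal form with hypothetical employee data: the repeating group of phone numbers is moved out of the employee record into a separate structure keyed by the employee identifier.

    # Minimal sketch of first normal form: a repeating group of phone numbers
    # is moved out of the Employee record into a separate structure keyed by
    # the Employee identifier (hypothetical names and values).
    employee_unnormalized = {
        "employee_id": 101,
        "name": "John J. Smith",
        "phones": ["555-0100", "555-0101"],   # repeating group
    }

    employee = {"employee_id": 101, "name": "John J. Smith"}
    employee_phone = [                         # one record per phone number
        {"employee_id": 101, "phone": "555-0100"},
        {"employee_id": 101, "phone": "555-0101"},
    ]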

Five-Tier Five-Schema concept represents all the schema involved in data resource management within the context of a common data architecture. The five tiers are strategic logical, tactical logical, operational, analytical, and predictive. The five schema in the operational, analytical, and predictive tiers are business schema, data view schema, logical schema, deployment schema, and physical schema. (Brackett 2011, 2012)

Five-Tier Five-Schema orientation principle states that development of a comparate data resource within a common data architecture must be done according to the Five-Tier Five-Schema Concept. (Brackett 2011)

Fixed format data item is a data item whose data value is always in the same format. (Brackett 2012)

Fixed length data item is a data item whose length is fixed. (Brackett 2012)

Foreign data attribute is any data attribute that does not have the same data entity name as the data entity in which it appears. (Brackett 2011)

Foreign data entity is a data entity that is foreign to a data attribute and that is not characterized by that data attribute. (Brackett 2011)

Foreign key in logical data models is the primary key of a data occurrence in a parent data entity that is placed in each data occurrence of a subordinate data entity to identify the parent data occurrence in that parent data entity. In data files, a foreign key is the primary key of a data record in a parent data file that is placed in each data record of a subordinate data file to identify the parent data record in that parent data file. (Brackett 2011, 2012)
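
As an illustration only, the following minimal Python sketch shows a foreign key with hypothetical department and employee data: the primary key of the parent data record is carried in each subordinate data record so the parent data record can be identified.

    # Minimal sketch of a foreign key (hypothetical data): the primary key of a
    # parent Department record is carried in each subordinate Employee record
    # to identify the parent data record.
    departments = {"D10": {"department_name": "Finance"}}
    employees = [
        {"employee_id": 101, "name": "John J. Smith", "department_id": "D10"},  # foreign key
    ]

    for emp in employees:
        parent = departments[emp["department_id"]]   # resolve the parent via the foreign key
        print(emp["name"], "works in", parent["department_name"])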

Foreign key list is a list of the foreign keys for a data subject that exists in the disparate data. Only the data characteristic is listed for each foreign key, not the data characteristic variation. (Brackett 2012)

Formal means having an outward form or structure, being in accord with accepted conventions, consistent and methodical, or being done in a regular form. (Brackett 2011)

Formal data culture integration is any data culture integration done within the context of a common data culture. (Brackett 2012)

Formal data culture state is a necessary state where the data culture is readily understood within the context of a common data culture. The variability of the fragmented data culture is understood and documented, the preferred data culture is designated, and the data culture integration is prescribed. No changes to the data culture have yet been made, pending review and approval by the organization. (Brackett 2012)

Formal data name readily and uniquely identifies a fact or group of related facts in the data resource, based on the business, and using formal data naming criteria. (Brackett 2011)

Formal data name abbreviation is the formal shortening of a primary data name to meet a length restriction according to formal data name word abbreviations and a formal data name abbreviation algorithm. (Brackett 2011)
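
As an illustration only, the following minimal Python sketch shows one way a data name abbreviation algorithm could work; the word abbreviation list and the right-to-left abbreviation order are assumptions for illustration, not the algorithm prescribed here.

    # Minimal sketch of a data name abbreviation algorithm (the word list and
    # the right-to-left abbreviation order are assumptions for illustration).
    WORD_ABBREVIATIONS = {"Employee": "Empl", "Number": "Nbr", "Telephone": "Tel"}

    def abbreviate(primary_name, max_length):
        words = primary_name.split()
        # Abbreviate words from right to left until the name fits the limit.
        for i in range(len(words) - 1, -1, -1):
            if len(" ".join(words)) <= max_length:
                break
            words[i] = WORD_ABBREVIATIONS.get(words[i], words[i])
        return " ".join(words)

    print(abbreviate("Employee Telephone Number", 18))   # -> "Employee Tel Nbr"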

Formal data resource integration is any data resource integration done within the context of a common data architecture. (Brackett 2012)

Formal data resource state is a necessary state where the disparate data are readily understood within the context of a common data architecture. It’s the first step in the data resource transition process where disparate data are put in context using a common data architecture. It’s not a separate data resource since the data are only understood within a formal context. (Brackett 2012)

Formal data transformation is done within the context of a common data architecture and follows all of the concepts, principles, and techniques for formal data resource integration. (Brackett 2012)

Formal data transformation principle states that formal data transformation will be used to create a comparate data resource. Informal data transformation will not be considered or used. (Brackett 2012)

Formal design techniques principle states that proper data structures must be developed according to formal, recognized data design techniques. (Brackett 2011)

Formal knowledge:  See Explicit knowledge.

Forward data transformation is the formal transformation of disparate data to comparate data. Data are extracted from the preferred data source, transformed, and loaded into the data target. (Brackett 2012)

Forward data translation rule is a data value translation rule from a non-preferred data designation to a preferred data designation. (Brackett 2012)
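
As an illustration only, the following minimal Python sketch shows forward data translation rules with hypothetical gender codes: non-preferred source values are translated to the preferred data designation.

    # Minimal sketch of forward data translation rules (hypothetical codes):
    # non-preferred data values from a source are translated to the preferred
    # data designation used for data sharing.
    FORWARD_TRANSLATION = {   # non-preferred value -> preferred value
        "M": "Male",
        "1": "Male",
        "F": "Female",
        "2": "Female",
    }

    def translate_forward(source_value):
        return FORWARD_TRANSLATION.get(source_value, source_value)

    print(translate_forward("1"))   # -> "Male"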

Fourth normal form, commonly known as derived data, is a technique to identify data attributes that are derived and remove them from the data entity. (Brackett 2011)

Fragmented is broken apart, detached, or incomplete; consisting of separate pieces. (Brackett 2012)

Fragmented data culture is a data culture that is broken apart into separate pieces that are unrelated, incomplete, and inconsistent. It is similar to a disparate data resource, and leads to the creation of a disparate data resource. A fragmented data culture cannot effectively or efficiently manage an organization’s data resource. (Brackett 2012)

Fragmented data culture state is the situation where every organizational unit, and possibly every person, is managing data in their own way, with their own orientation, vision, processes, and software tools. The data culture is highly variable and exhibits all of the characteristics of a fragmented data culture. The management is informal and seldom documented, and the fragmentation is not known. It is the least desirable state and is the initial state for data culture integration. (Brackett 2012)

Functional dependency profiling analyzes the data values for possible data relations between sets of data. If the same domain of data values is identified in different data files, a presumption can be made that those two data files might be related through a primary key – foreign key relationship. (Brackett 2012)
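
As an illustration only, the following minimal Python sketch shows functional dependency profiling with hypothetical column values: when the values in one data file fall within the value domain of a column in another data file, the pair is flagged as a candidate primary key – foreign key relationship.

    # Minimal sketch of functional dependency profiling (hypothetical files and
    # columns): if the values of one column fall within the value domain of a
    # column in another file, the pair is flagged as a candidate
    # primary key - foreign key relationship.
    def candidate_key_relationship(child_values, parent_values, threshold=0.95):
        child, parent = set(child_values), set(parent_values)
        if not child:
            return False
        overlap = len(child & parent) / len(child)
        return overlap >= threshold

    department_ids = ["D10", "D20", "D30"]            # values in a possible parent file
    employee_dept  = ["D10", "D10", "D20", "D30"]     # values in a possible child file
    print(candidate_key_relationship(employee_dept, department_ids))   # True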

Fundamental data are data that are not stored in databases and are not used in applications, but support the definition of specific data. (Brackett 2011)

Fundamental data definitions are the comprehensive data definitions for fundamental data. (Brackett 2011)

Fundamental data definition inheritance is the process of comprehensively defining fundamental data and allowing specific data definitions to inherit those fundamental data definitions. It’s a technique that implements the data inheritance principle. (Brackett 2011)

Fundamental data integrity rule is a data integrity rule that can be developed for and used by many specific data attributes. The data integrity rule is defined once and is applied to many different situations. (Brackett 2011)

Fundamental data translation rule is a basic data translation rule that can be applied to many specific data translations. The data translation rule is specified once and can be inherited for many specific data translations. (Brackett 2012)

General data cardinality is a data cardinality specified by the data relation or by a semantic statement. (Brackett 2011)

General information is a set of data in context that could be relevant to one or more people at a point in time or for a period of time. (Brackett 2012)

General primary key is a primary key that uniquely identifies every data occurrence in a data entity. (Brackett 2011)

Generation data derivation is where the data derivation algorithm generates the derived data values without the input of any other data attributes. (Brackett 2011)

Generic data structure principle states that universal data models and generic data architectures can be used to guide an understanding of the organization’s data, but should not be used in lieu of thoroughly understanding the organization’s business. (Brackett 2011)

Graph theory is a branch of discrete mathematics that deals with the study of graphs as mathematical structures used to model relations between objects from a certain collection. A graph consists of a collection of vertices (or nodes), and a collection of edges that connect pairs of vertices. The edges may be directed from one vertex to another, or undirected, meaning there is no distinction between the two vertices. (Brackett 2012)
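
As an illustration only, the following minimal Python sketch shows a directed graph represented as an adjacency list, with hypothetical vertices.

    # Minimal sketch of a directed graph as an adjacency list (hypothetical
    # vertices): each vertex maps to the vertices its outgoing edges point to.
    graph = {
        "A": ["B", "C"],   # edges A->B, A->C
        "B": ["C"],        # edge  B->C
        "C": [],
    }

    edges = [(v, w) for v, neighbors in graph.items() for w in neighbors]
    print(edges)   # [('A', 'B'), ('A', 'C'), ('B', 'C')]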

Group think is the situation where a group of people under stress tend to find a solution, but have lost their objectivity. (Brackett 2011)

Hazard is a possible source of danger or a circumstance that creates a dangerous situation. (Brackett 2011)

Heritage is property that descends to an heir, something transmitted by or acquired from a predecessor, or something possessed as a result of one’s natural situation or birth. Heritage usually applies to biological or cultural descendants, but can be applied to data. (Brackett 2012)

Hidden data code hierarchy is the situation where a single set of data codes represents a hierarchy of data codes. (Brackett 2012)

Hidden data resource is the large quantities of data that are maintained by the organization, but are largely unknown, unavailable, and unused because people are not aware that those data exist, or do not understand the data well enough for appropriate use. The data just sit in databases, on hard drives, in filing cabinets and desk drawers, and in archive boxes just waiting to be useful if only their existence and meaning were known and understood. (Brackett 2011)

Hidden information is the information that could be available from the hidden data if those hidden data were known to exist. (Brackett 2011)

Hidden knowledge is the knowledge that could be gained through the understanding of the hidden information. (Brackett 2011)

Historical data instance is any data instance, other than the current data instance, that represents previous data values of the data items in the data occurrence. (Brackett 2012)

Home data attribute is any data attribute that has the same data entity name as the data entity in which it appears. For example, Employee Name is a home data attribute within the Employee data entity. (Brackett 2011)

Home data entity is the data entity which is the home to a data attribute and which is characterized by a data attribute. (Brackett 2011)

Homeostasis is the property of an open or closed system that regulates its internal environment and tends to remain in a stable, constant condition. (Brackett 2011)

Horizon is the distance into the future which a person is interested in for planning. (Brackett 2011)

Horizontal partitioning is denormalizing data occurrences into two or more data files when the number of data records exceeds the capability of the database. (Brackett 2011)
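
As an illustration only, the following minimal Python sketch shows horizontal partitioning with an assumed record limit: the data records for one data subject are split across multiple data files that share the same structure.

    # Minimal sketch of horizontal partitioning (hypothetical limit): data
    # records for one data subject are split across multiple data files when
    # the record count exceeds what a single file can hold.
    MAX_RECORDS_PER_FILE = 2   # assumed database limitation for illustration
    records = [{"employee_id": i} for i in range(1, 6)]

    partitions = [records[i:i + MAX_RECORDS_PER_FILE]
                  for i in range(0, len(records), MAX_RECORDS_PER_FILE)]
    print(len(partitions))   # 3 data files holding the same data structure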

Human data profiling identifies the pattern of actions different people exhibit when entering or editing data. Patterns about how people collect data, enter data, and edit data can be helpful for understanding disparate data. The patterns can also be useful for identifying data integrity rules that are not documented anywhere. (Brackett 2012)

Hype-cycle is a major initiative that is promoted in an attempt to properly manage an organization’s data resource, but often ends up making the data resource more disparate and impacting the business. (Brackett 2011)

Iatrogenesis refers to the inadvertent adverse effects or complications caused by or resulting from medical treatment or advice. The term originated in medicine and is generally referred to as harm caused by the healer. The medical profession strives to do no harm, hence iatrogenesis is a result of inadvertent actions. (Brackett 2011)

Identify source data is the formal process of determining the source data that will be needed to prepare the target data. The source data are specified as a physical data structure of the disparate data as documented during data inventorying. (Brackett 2012)

Identify target data is the formal process of determining the desired target data. The desired target data are specified as the preferred physical data structure and will be used throughout the data transformation process. (Brackett 2012)

Imagination is the power of uncertainty, the ability to spark intrigue to keep the imagination going, a suspense about what’s next. It’s a way to spark innovation and engage people in an activity. (Brackett 2011)

Implicit data culture variability is the variability that is not readily visible, or identified in documented procedures and data management actions. (Brackett 2012)

Implicit data error is a data error that is hidden and is only known through discovery during business processing, rather than through data edits. (Brackett 2011)

Implicit data integrity rule is a data integrity rule that is implied in a proper data structure. (Brackett 2011)

Implicit data transformation rules are stated in the form of a table or matrix for data value translations. (Brackett 2012)

Implicit disparate data integrity rule is a disparate data integrity rule that is not explicitly stated in the documentation or in a data model, but exists in database management systems or applications. (Brackett 2012)

Implicit disparate data name is a disparate data name that is implied through a definition, contents, or use of the data. (Brackett 2012)

Implicit disparate data resource variability is the variability that is not readily seen or identified in the data names, definitions, structure, integrity, and documentation of a disparate data resource. Implicit disparate data resource variability is either implied by existing documentation or exists in people’s minds. (Brackett 2012)

Implicit disparate data structure is a disparate data structure that is not explicitly defined and is implied through the use of foreign keys. (Brackett 2012)

Implicit knowledge:  See Tacit knowledge.

Imprecise means not precise, not clearly expressed, indefinite, inaccurate, incorrect, or not conforming to a proper form. (Brackett 2011)

Imprecise data integrity rules are data integrity rules that do not provide adequate criteria to ensure high quality data. (Brackett 2011)

Improper means not suited to the circumstances or needs. (Brackett 2011)

Improper data structure is a data structure that does not provide an adequate representation of the data supporting the business for the intended audience. (Brackett 2011)

Inadequate means insufficient, or not adequate to fulfill a need or meet a requirement. (Brackett 2011)

Inadequate data responsibility is the situation where the responsibility, as defined, does not fulfill the need for properly managing a comparate data resource. The responsibility is casual, lax, inconsistent, uncoordinated, and not suitable for the current environment of a shared data resource. (Brackett 2011)

Inappropriate means not appropriate. (Brackett 2011)

Inappropriate data recognition is the situation where the organization at large does not recognize data as a critical resource of the organization, the fact that the data resource is disparate, or the need to develop a comparate data resource. (Brackett 2011)

Incorrect data definitions are data definitions that are incorrect or inaccurate with respect to the business. The definitions are not in synch with the data name, the data structure, the data integrity rules, or the business. (Brackett 2011)

Incorrect data name is any data name that does not correctly represent the contents of the data component. Incorrect data names are just flat wrong. (Brackett 2011, 2012)

Incrementally cost effective principle states that any data management initiative to resolve disparate data and create a comparate data resource should begin small, produce meaningful results, and continue to grow to a fully recognized initiative. (Brackett 2011)

Informal means casual, not in accord with prescribed form, unofficial, or inappropriate for the intended use. (Brackett 2011)

Informal data culture integration is any data culture integration done outside the context of a common  data culture. It usually does not resolve variability in the data culture and seldom leads to development and maintenance of a comparate data resource. (Brackett 2012)

Informal data name is any data name that is casual and inappropriate for the intended purpose of readily and uniquely identifying each fact, or set of related facts, in an organization’s data resource. It has no formality, structure, nomenclature, or taxonomy. (Brackett 2012)

Informal data name abbreviation is any abbreviated data name that has no formality to the abbreviation. (Brackett 2011)

Informal data resource integration is any data resource integration done outside the context of a common data architecture. It usually does not result in a comparate data resource or any substantial resolution to the disparate data. (Brackett 2012)

Informal data transformation is done outside the context of a common data architecture, seldom follows formal concepts, principles, and techniques, and seldom resolves data disparity. (Brackett 2012)

Information is a set of data in context, with relevance to one or more people at a point in time or for a period of time. Information is more than data in context—it must have relevance and a time frame. Information has historically been defined as singular. (Brackett 2011, 2012)

Information architecture is a term that is not used because it’s difficult to place relevance to one or more people and a time frame into an architecture. (Brackett 2011)

Information assimilation overload occurs when information is coming too fast for a person to assimilate. (Brackett 2011)

Information engineering is the discipline for identifying information needs and developing information systems to meet those needs. It’s a manufacturing process that uses data from the data resource as the raw material to construct and transmit information. (Brackett 2011)

Information engineering objective is to get the right data, to the right people, in the right place, at the right time, in the right form, at the right cost, so they can make the right decisions, and take the right actions. The operative term is the right data. (Brackett 2011)

Information excellence is the state of fully meeting the business information demand. (Brackett 2011)

Information frustration is a situation where needed information exists, but the information is so fragmented that the time to locate and relate the information causes frustration. (Brackett 2011)

Information integration is the integration of information, using the formal definition of information, from multiple sources into an understandable set of information for a specific use. It’s the process of taking disparate information and developing comparate information for some business activity. (Brackett 2012)

Information management is coordinating the need for information across the organization to ensure adequate support for the current and future business information demand. It should not be confused with data resource management. (Brackett 2012)

Information paranoia is the fear of not knowing everything that is relevant or could be relevant at some point in time. It’s a situation where a person is obsessed with gaining information for information’s sake. (Brackett 2011)

Information perfection is the state of information being perfect, free from defect, and having an unsurpassable degree of accuracy in meeting the business information demand. (Brackett 2011)

Information quality is how well the business information demand is met. It includes both the data used to produce the information and the information engineering process. (Brackett 2011)

Information sharing is the sharing of information between people and organizations according to the definition of information. (Brackett 2012)

Information system data are any data documenting the information system. (Brackett 2011)

Information technology infrastructure provides the resources necessary for an organization to meet its current and future business information demand. (Brackett 2011)

Infrastructure is the underlying foundation or framework for a system or an organization.

Integrate means to form or blend into a whole; to unite with something else; to incorporate into a larger unit; to bring into common organization. (Brackett 2011)

Integrated data culture is a data culture where all of the data management functions and processes in an organization are integrated within a common context, and are oriented toward developing and maintaining a comparate data resource. Data culture variability has been resolved and data resource management is performed consistently across the organization. (Brackett 2012)

Integrated data resource is a data resource where all data are integrated within a common context and are appropriately deployed for maximum use supporting the current and future business information demand. Data awareness and data understanding are increased. Data variability is at a minimum and data redundancy is reduced to a known and manageable level. Data integrity is known and at the desired level. The data are as current as the organization needs to conduct its business. (Brackett 2011, 2012)

Integration is the act or process of integrating. (Brackett 2012)

Integrity is the state of being unimpaired, the condition of being whole or complete, or the steadfast adherence to strict rules. (Brackett 2011)

Intelligence is the ability to learn or understand or to deal with new or trying situations; the skilled use of reason; the ability to apply knowledge to manipulate one’s environment or to think abstractly. (Brackett 2011)

Intelligent key is a term that is not used because data keys cannot possess intelligence. See Business key. (Brackett 2011)

Inter-attribute dependencies:  See Third normal form.

Inter-entity dependencies:  See Fifth normal form.

Inter-entity derived data attribute is one that is derived from data attributes in another subordinate data entity. (Brackett 2011)

Interim common data architecture is a common data architecture that is developed for cross-referencing one major segment of the data resource for a very large organization. (Brackett 2012)

Interim common data architecture principle states that interim common data architectures may be developed in very large organizations where it is not possible to achieve a final common data architecture in one step because of the size of the task. Data products are cross-referenced to interim common data architectures, and those interim data architectures are cross-referenced to a final common data architecture. (Brackett 2012)

Internal data tracking is data tracking in an environment where the organization has control of the data. It usually deals with data tracking within an organization, where changes to the data may be known. (Brackett 2011, 2012)

Internal schema is the structure of the data in the database. (Brackett 2011)

Intra-entity derived data attribute is one that is derived from other data attributes within that same data entity. (Brackett 2011)

Inventory is an itemized list of assets; a catalog of the property of an individual or estate; a list of goods on hand; a survey of natural resources; a list of traits, preferences, attitudes, interests, or abilities; the quantity of goods or materials on hand. It is also the act or process of taking an inventory. (Brackett 2012)

Knowledge is cognizance, cognition, the fact or condition of knowing something with familiarity gained through experience or association. It’s the acquaintance with or the understanding of something, the fact or condition of being aware of something, of apprehending truth or fact. Knowledge is information that has been retained with an understanding about the significance of that information. Knowledge includes something gained by experience, study, familiarity, association, awareness, or comprehension. (Brackett 2011)

Knowledge base principle states that the existing, often hidden, base of knowledge about the data resource must be tapped to ensure a complete and thorough understanding of the data. (Brackett 2011)

Knowledge management is the management of an environment where people generate tacit knowledge, render it into explicit knowledge, and feed it back to the organization. The cycle forms a base for more tacit knowledge, which keeps the cycle going in an intelligent learning organization. It’s an emerging set of policies, organizational structures, procedures, applications, and technology aimed toward increased innovation and improved decisions. It’s an integrated approach to identifying, sharing, and evaluating an organization’s information. It’s a culture for learning where people are encouraged to share information and best practices to solve business problems. (Brackett 2011, 2012)

Law of increasing entropy states that a system reaches a state of maximum entropy—an equilibrium. The law is inevitable and irreversible for a closed system. In an open system, entropy continues to increase and an equilibrium is not reached. (Brackett 2011)

Learning means to gain knowledge or understanding, to come to realize, to be informed of something, to acquire knowledge, skill, or behavior, to discover. (Brackett 2011)

Learning organization is an organization where critical knowledge is leveraged to understand the business environment and meet business initiatives. It’s an organization where the employees are well-informed, well-trained knowledge workers empowered to take action. (Brackett 2011)

Lessons learned principle states that every initiative has some failures and some successes, and the lessons learned can be included in the next initiative. (Brackett 2011)

Limited data documentation is any documentation about the data resource that is sparse, incomplete, out of date, incorrect, inaccessible, unknown, poorly presented, poorly understood, and so on. (Brackett 2011)

Limited foreign key is a foreign key that matches a limited primary key in a parent data subject. (Brackett 2012)

Limited primary key is a primary key that is available for all data occurrences, but has a limited range of uniqueness for data occurrences. (Brackett 2011, 2012)

Lineage is the direct descent from an ancestor or common progenitor, or the descendants of a common ancestor regarded as the founder of the line. Lineage is commonly used for biological or cultural descendants, but can be applied to data. (Brackett 2012)

Load data is the formal process of loading the data from the data depot into the target database. Any database conversion that is necessary is done during the data load process. (Brackett 2012)
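
As an illustration only, the following minimal Python sketch shows the extract, transform, edit, and load steps in sequence, assuming in-memory lists stand in for the data depot and the target database; all names, codes, and values are hypothetical.

    # Minimal end-to-end sketch of the extract, transform, edit, and load steps,
    # assuming in-memory lists stand in for the data depot and the target
    # database (all names, codes, and values are hypothetical).
    source_records = [{"gndr": "1", "nm": "John J. Smith"}]
    translation = {"1": "Male", "2": "Female"}                     # forward data translation rule

    depot = list(source_records)                                   # extract source data into the data depot
    transformed = [{"gender": translation.get(r["gndr"], r["gndr"]),
                    "employee_name": r["nm"]} for r in depot]      # transform to the preferred structure
    edited = [r for r in transformed
              if r["gender"] in ("Male", "Female")]                # edit data with the preferred data edits
    target_database = []
    target_database.extend(edited)                                 # load data into the target database
    print(target_database)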

Logical data architecture is the architecture of the logical data represented by the logical schema. It represents the data in the logical schema from the strategic tier down to the predictive tier. (Brackett 2011)

Logical data relation is an association between data occurrences in different data subjects or data entities, or within a data subject or data entity. It is defined during data normalization and has a name or short phrase describing the data relation. (Brackett 2012)

Logical schema represents the structure of the logical data, independent of the physical operating environment, that are optimized from the data view schema. (Brackett 2011)

Lost productivity cycle is the situation where disparate data grows, more time is spent resolving the impacts, and less time is spent on value-added business activities. More time is spent resolving problems and less time is spent on preventing problems. (Brackett 2011)

Malthusian Principle deals with the power of populations to overwhelm their means of subsistence, causing misery, suffering, and eventually leading to extinction of that population if no corrective action is taken. Populations tend to grow geometrically, and their means of subsistence tend to grow arithmetically. At some point in time, the population growth exceeds the means of subsistence. (Brackett 2011)

Many-to-many data relation occurs when a data occurrence in one data entity is related to more than one subordinate data occurrence in the second data entity, and each data occurrence in that second data entity is related to more than one data occurrence in the first data entity. A many-to-many data relation is shown by a dashed line with an arrowhead on each end. (Brackett 2011)

Many-to-many recursive data relation is where a data occurrence in a data entity is related to more than one data occurrence in that same data entity, and each of those data occurrences is related to more than one other data occurrence. (Brackett 2011)

Many-to-one data reference item translation rule translates the coded data value and/or the name from many different data reference items in the source to one data reference item in the target. (Brackett 2012)

Massively disparate data is the existence of large quantities of disparate data within a large organization, or across many organizations involved in similar business activities. (Brackett 2011)

Master data management is a term that is not used because it represents a hype-cycle. See Data resource management. (Brackett 2011)

Mathematical data domain specifies the data values that are mathematically possible. Usually, it’s a maximum range allowed and is applied to all values in that data attribute. (Brackett 2011)

Meaningful data definition principle states that a comprehensive data definition must define the real content and meaning of the data with respect to the business. It is not based on the use of the data, how or where the data are used, how they were captured or processed, the privacy or security issues, or where they were stored. (Brackett 2011)

Meaningless data definitions are data definitions that are meaningless to the business. The English and grammar may be acceptable, but the explanation of the content and meaning of the data with respect to the business is useless. (Brackett 2011)

Meaningless data name is any data name that has no formal meaning with respect to the business. (Brackett 2011)

Merge means to blend or combine together, to become combined or united. (Brackett 2011)

Meta-data is a term that is no longer used because its meaning has become so confused that it is meaningless. See Data resource data, Para-data. (Brackett 2011)

Migration is a movement to change location periodically, especially by moving seasonally from one region or country to another. It’s wandering without a long term purpose, or wandering with only current objectives in mind, like nomadic wandering or bird migration. It’s a lack of a permanent settlement, especially resulting from seasonal or periodic movement. (Brackett)

Multiple characteristic data item is a data item that contains more than one data characteristic. (Brackett 2012)

Multiple contributor data derivation is where many data attributes from the same data entity or from different data entities contribute to the derived data. (Brackett 2011)

Multiple data characteristic is two or more single or combined data characteristics that are not closely related and should not be stored together or managed as a single unit. The data characteristics may be from the same data subject or from different data subjects. (Brackett 2012)

Multiple fact data attribute is where multiple facts appear in the same data attribute. (Brackett 2011)

Multiple fact data field is any data field that contains multiple, unrelated business facts. (Brackett 2011)

Multiple file data subject is a data subject that exists in multiple data files. The situation is common in a disparate data resource. (Brackett 2012)

Multiple occurrence data record is a data record that represents multiple data occurrences in a single data record. A multiple occurrence data record may contain subordinate data occurrences or parallel data occurrences. (Brackett 2012)

Multiple preferred data designations is the situation where multiple data characteristic variations or multiple data reference set variations are designated as preferred due to culture, geography, or politics. (Brackett 2012)

Multiple property data code is a data code that represents two or more data properties of the same data subject. (Brackett 2012)

Multiple subject data code is a data code that represents two or more different data subjects. (Brackett 2012)

Multiple subject data file is a data file that contains all of the data items, or a subset of the data items, representing the data characteristics for multiple data subjects. (Brackett 2012)

Multiple subject set of data codes is a set of data codes that represent more than one data subject. (Brackett 2012)

Multiple value data attribute is where multiple values of a fact appear in the same data attribute. (Brackett 2011)

Nearsighted planning horizon is the situation where an organization’s data resource horizon is very short term. Data resource development is focused on short term objectives to the detriment of long term goals. (Brackett 2011, 2012)

New technology syndrome is a repeating cycle of events that occurs with new technology. New technology appears as a new way of doing things. People play with the new technology, in a physical sense, to see what it can do or is capable of doing, like a child plays with a new toy. (Brackett 2011, 2012)

No blame – no whitewash principle states that the disparate data situation exists, that laying blame for that situation only polarizes and alienates people, and whitewashing the situation only allows it to continue. (Brackett 2011)

Non-architected data are any data that are not formally managed within a common data architecture. (Brackett 2011)

Non-business key is a primary key consisting of a fact or facts whose values have no meaning to the business. (Brackett 2012)

Non-correcting principle states that the data resource cannot correct itself when it encounters complexity. (Brackett 2011)

Non-electronic data includes all data located in filing cabinets, archive boxes, desk drawers, and so on. These data cannot be searched or analyzed electronically without some form of data entry. (Brackett 2011)

Non-existent data definitions have never been developed, or were developed at one time and have since been misplaced or lost. Whatever the reason, there exists considerable data in the data resource that have no data definition. (Brackett 2011, 2012)

Non-information is a set of data in context that is not relevant or timely to the recipient. It is neither specific information nor general information. (Brackett 2011, 2012)

Non-integrating principle states that the data resource cannot integrate itself when it encounters disparity. (Brackett 2011)

Non-preferred data characteristic variation is a data characteristic variation within a data characteristic that has not been designated as preferred. A non-preferred data characteristic variation may be either acceptable or obsolete. (Brackett 2012)

Non-preferred data designation is a data variation that has not been accepted as preferred. (Brackett 2012)

Non-preferred data name abbreviation algorithm is any data name abbreviation algorithm that is not the official data name abbreviation algorithm for the organization. (Brackett 2011)

Non-preferred data name abbreviation scheme is any data name abbreviation scheme that does not contain the preferred data name word abbreviations and the preferred data name abbreviation algorithm. (Brackett 2011)

Non-preferred data name word abbreviations are any sets of data name word abbreviations that are not the official set of data name word abbreviations for the organization. (Brackett 2011)

Non-preferred data reference set variation is a data reference set variation within a data subject that has not been designated as preferred. (Brackett 2012)

Non-preferred data translation rule is a data translation rule between different non-preferred data designations. (Brackett 2012)

Non-redundant data documentation principle states that the data resource data must represent a single version of truth about the data resource. (Brackett 2011)

Non-unique data name is any data name, whether abbreviated or unabbreviated, that is not unique across the organization or across multiple organizations engaged in the same business activities. (Brackett 2011)

Objective data resource quality is based on facts or metrics without any distortion by personal experience. It’s a technical quality based on reality and is not impacted by perception or experience. It tends to remain constant as long as the data resource is unchanged. (Brackett 2011)

Obsolete data characteristic variation is any data characteristic variation that is obsolete and can no longer be used. (Brackett 2012)

Obsolete data reference set variation is any data reference set variation that is obsolete and can no longer be used. (Brackett 2012)

Obsolete foreign key is a foreign key that matches an obsolete primary key in a parent data subject. (Brackett 2012)

Obsolete primary key is a primary key that has no further use and should not be used. (Brackett 2011)

Occurrence in the data resource is really an entity in mathematics. (Brackett 2011)

Occurrence group or group of occurrences in the data resource is really a set of entities in mathematics. (Brackett 2011)

One-to-many data reference item translation rule translates one coded data value and/or name from the source to many data reference items in the target. These translations are difficult and require additional input to make the split from one source value to many target values. (Brackett 2012)
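
As an illustration only, the following minimal Python sketch shows a one-to-many data reference item translation with hypothetical codes: because one source value maps to several target items, an additional qualifier is needed to choose the correct target.

    # Minimal sketch of a one-to-many data reference item translation (all
    # codes are hypothetical): one source value maps to several target items,
    # so an additional qualifier is needed to pick the correct target.
    ONE_TO_MANY = {"OTHER": ["OTHER-DOMESTIC", "OTHER-FOREIGN"]}

    def translate(source_value, qualifier):
        candidates = ONE_TO_MANY.get(source_value, [source_value])
        matches = [c for c in candidates if qualifier.upper() in c]
        return matches[0] if matches else candidates[0]

    print(translate("OTHER", "foreign"))   # -> "OTHER-FOREIGN"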

One-to-many data relation occurs when a parent data occurrence in one data entity is related to more than one subordinate data occurrence in a second data entity, and each subordinate data occurrence in the second data entity is related to the parent data occurrence in the first data entity. A one-to-many data relation is shown by a dashed line with an arrow on one end pointing to the data entity with many occurrences. (Brackett 2011)

One-to-many recursive data relation is where a parent data occurrence in a data entity is related to more than one subordinate data occurrence in the same data entity, and each of those subordinate data occurrences is related to the parent data occurrence. (Brackett 2011)

One-to-one data reference item translation rule translates the coded data value and/or the name from one data reference item in the source to one data reference item in the target. (Brackett 2012)

One-to-one data relation occurs when a data occurrence in one data entity is related to only one data occurrence in a second data entity, and that data occurrence in the second data entity is related to the same data occurrence in the first data entity. A one-to-one data relation is shown by a dashed line with no arrowheads. (Brackett 2011)

One-to-one recursive data relation is where a data occurrence in a data entity is related to one other data occurrence, and that other data occurrence is related to the first data occurrence. It is shown by a dashed line with no arrowhead leaving and returning to the same data entity. (Brackett 2011)

Operational data are subject oriented, integrated, time current, volatile collections of data in support of day to day operations and operational decision making. (Brackett 2011)

Operational data modeling is used for the operational data using operational data normalization. (Brackett 2011)

Operational data normalization is the process of normalizing the operational data according to the formal rules of data normalization. (Brackett 2011)

Operational data store is a term often used to represent the collection of operational data. (Brackett 2011)

Operational processing is the day-to-day transactional processing using operational data to support business operations and operational decisions.

Operational tier represents data used for day-to-day operations of the business and operational business decisions. The data are usually detailed with some summary data, and may be on any platform or in any software product. (Brackett 2011)

Opportunistic principle states that every opportunity should be taken to promote the initiative in the organization, regardless of the size of the opportunity. (Brackett 2011)

Opt for detail principle states that when in doubt about the level of detail to document during the data inventory, always opt for greater detail. Experience has shown that more detail is needed to fully understand and integrate the data resource. (Brackett 2012)

Organization represents any administrative and functional structure for conducting some form of business, such as a public sector organization, quasi-public sector organization, private sector organization, association, society, foundation, and so on, however large or small, whether for profit or not for profit, and for however long it has been operating. (Brackett 2012)

Organization agility principle states that an organization must be agile to remain successful in its business endeavors. Agility depends on how the organization perceives the business world and how well it adjusts to changes in that business world. It depends on how well the organization understands the business world, how quickly the organization perceives changes in that business world, and how quickly it can respond to those changes. (Brackett 2012)

Organization perception principle states that the comparate data resource developed to support an organization’s business must be based on the organization’s perception of the business world. If a comparate data resource is to support an organization’s business activities, that comparate data resource must be based primarily on the organization’s perception of the business world and how the organization chooses to operate in that business world. (Brackett 2012)

Organization umwelt principle states that each organization has a particular perception of the business world in which it operates based on previous experiences that are unique to that organization. Those experiences affect the organization’s behavior in the business world, and determine how the organization adapts to a changing business world and operates in that business world. (Brackett 2012)

Organizational knowledge is information that is of significance to the organization, is combined with experience and understanding, and is retained by the organization. It is information in context with respect to understanding what is relevant and significant to a business issue or business topic—what is meaningful to the business. It’s analysis, reflection, and synthesis about what information means to the business and how it can be used. It’s a rational interpretation of information that leads to business intelligence. (Brackett 2011, 2012)

Orientation means the act or process of orienting or being oriented; the state of being oriented; the general or lasting direction of thought, inclination, or interest; the change of position in response to external stimulus. (Brackett 2012)

Outdated data definitions are data definitions that are not current with the business. (Brackett 2011)

Overly optimistic horizon is the situation where the data resource horizon is the best, and can be easily and quickly achieved. The vision may be valid and realistic, but the horizon is too  optimistic. (Brackett 2011, 2012)

Para-data are any data that are ancillary to or support core business data. Para-data are a perception by the observer based on their role in the business world. (Brackett 2011)

Parallel data occurrence is a data occurrence from the same data subject represented by the data file. (Brackett 2012)

Paralysis-by-analysis is a process of ongoing analysis and modeling to make sure everything is complete and correct. Data analysts and data modelers are well known for analyzing a situation  and working the problem forever before moving ahead. They often want to build more into the data resource than the organization really wants or needs. The worst, and most prevalent, complaint about data resource management is its tendency to paralyze the development process by exacerbating the analysis process. (Brackett 2011, 2012)

Partial characteristic data item is a data item that contains part of a data characteristic. Other parts of the data characteristic are contained in one or more other data items. (Brackett 2012)

Partial historical data instance contains a subset of data items in the data occurrence, usually the data items whose data values changed and appropriate identifiers. (Brackett 2012)

Partial key dependencies:  See Second normal form.

Partial occurrence data record is a data record that contains only part of the data items for a data occurrence. The complete data occurrence is split across multiple data records, usually due to some length limitation. (Brackett 2012)

Partial set of data codes contains a subset of the data properties for a single data subject. (Brackett 2012)

Partial subject data file is a data file that contains a subset of the data items representing the data characteristics for a single data subject or for multiple data subjects. (Brackett 2012)

Partially architected data is the situation where some data are managed within a common data architecture and some data are not managed within a common data architecture. (Brackett 2011)

Passive data contributors are data attributes that no longer exist or whose value will never change. (Brackett 2011)

Passive derived data are derived data based on passive data contributors. (Brackett 2011)

Pattern of failure is a sequence of events which lead toward a disparate data resource and its failure to fully support the current and future business information demand. (Brackett 2011)

Pattern of success is a sequence of events which lead toward a comparate data resource and full support for the current and future business information demand. (Brackett 2011)

Perceived data resource scope is the portion of the data resource that is perceived to be formally managed. (Brackett 2011)

Perfection is the quality or state of being perfect, freedom from fault or defect, an unsurpassable degree of accuracy or excellence. Perfection is the ultimate state of excellence. (Brackett 2011)

Physical data architecture is the architecture of the data in the physical databases. It represents the data in the physical schema. Moving the data to a different physical database means going back to the deployment schema and denormalizing the data for the new database. (Brackett 2011)

Physical data relation is an association between data records in different data files or within a data file. It is typically defined during formal data denormalization and has no name. (Brackett 2012)

Physical key is a preferred or alternate primary key that may or may not be meaningful to the business, but is useful for physical navigation in the database. (Brackett 2011, 2012)

Physical schema represents the structure of data in physical databases as denormalized from the deployment schema. (Brackett 2011)

Physical-to-physical data translations are data translations between the disparate data documented as data products and the comparate data resource. (Brackett 2012)

Platform resource data are any data documenting the platform resource. (Brackett 2011)

Plausible means reasonable, superficially fair, valuable but often having deceptive attraction or allure, superficially pleasing or persuasive. (Brackett 2011)

Plausible deniability is the ability of an organization to deny the fact that their data resource is disparate and live with the illusion of high quality data. (Brackett 2011)

Pragmatics deals with the relation between signs and symbols, and their users. Specifically it deals with their usefulness. (Brackett 2011)

Precautionary principle states that if an action or policy has a suspected risk of causing harm to the public or the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those who advocate taking the action. (Brackett 2011)

Precise means clearly expressed, definite, accurate, correct, and conforming to proper form. (Brackett 2012)

Precise data integrity rule is a data integrity rule that precisely specifies the criteria for high quality data values and reduces or eliminates data errors. (Brackett 2011)

Precision is the quality or state of being precise, exactness, the degree of refinement with which a measurement is stated. (Brackett 2011)

Predictive data normalization is the process of re-normalizing the analytical logical schema to predictive logical schema for the purpose of predictive processing. (Brackett 2011)

Predictive tier represents true data mining, which is the search for unknown and unsuspected trends and patterns. Mathematically, it is in the variation and influence space. (Brackett 2011)

Preferred means to put before; to promote or advance to a rank or position; to like better or best; to give priority; to put or set forward for consideration. (Brackett 2012)

Preferred data are data that have the preferred names, definitions, structure, integrity rules, format, and content acceptable for data sharing. (Brackett 2011, 2012)

Preferred data architecture is a subset of the common data architecture that contains preferred data. It’s the desired data architecture that provides a pattern for designing a comparate data resource and for transforming a disparate data resource to a comparate data resource. (Brackett 2011, 2012)

Preferred data architecture concept is that the redundancy and variability of disparate data will be resolved through the designation of a preferred data architecture and the transformation of disparate data to comparate data according to that preferred data architecture. The data redundancy and variability may not be eliminated, but will be reduced to a known and manageable level. (Brackett 2012)

Preferred data architecture objective is to designate the preferred representation of all data at the organization’s disposal so those data can be readily understood and shared within and without the organization. The objective is to take a common data architecture that was enhanced to cover the data cross-references and designate preferred components that will become a pattern or template for designing and building a comparate data resource and transforming disparate data to comparate data. (Brackett 2012)

Preferred data characteristic variation is a data characteristic variation within a data characteristic that has been designated as the one preferred for data sharing and development of a comparate data resource. (Brackett 2012)

Preferred data culture is a subset of a common data culture that contains the preferred practices for managing data as a critical resource. It’s the desired data culture that provides the pattern for building a cohesive data culture and transforming the fragmented data culture to that cohesive data culture. It’s how the organization chooses to manage their data as a critical resource. (Brackett 2012)

Preferred data culture concept is that the variability of the existing fragmented data culture will be resolved through the designation of a preferred data culture and the transformation of the fragmented data culture to a cohesive data culture. The variability may not be eliminated, but will be reduced to a known and manageable level. (Brackett 2012)

Preferred data culture objective is to designate the preferred practices for managing data as a critical resource, so that those practices are readily understood and consistently performed throughout the organization. (Brackett 2012)

Preferred data definition is a comprehensive and denotative data definition developed from all of the insights documented during data inventory and cross-referencing that fully explains the data with respect to the business. (Brackett 2012)

Preferred data designation is a data variation that has been accepted by the consensus of knowledgeable people as being preferred for data sharing and development of a comparate data resource. (Brackett 2012)

Preferred data designation principle states that all preferred designations that comprise the preferred data architecture will be made within a common data architecture, after data cross-referencing has been completed, according to the organization’s perception of the business world, by knowledgeable detail data stewards. (Brackett 2012)

Preferred data designation process is the process of designating and finalizing preferred data names, data definitions, data integrity rules, primary and foreign keys, data characteristic variations, data reference set variations, and data sources. Data translation rules between data characteristic variations and data reference items are based on the preferred data designations. (Brackett 2012)

Preferred data integrity rule is a data integrity rule that has either been confirmed or created to ensure the integrity of a common data architecture. (Brackett 2012)

Preferred data name abbreviation algorithm is the data name abbreviation algorithm that is the official data name abbreviation algorithm for the organization. (Brackett 2011)

Preferred data name abbreviation scheme uses the preferred data name word abbreviation set and the preferred data name abbreviation algorithm. (Brackett 2011)

Preferred data name word abbreviations is a set of data name word abbreviations that is the official set of data name word abbreviations for the organization. (Brackett 2011)
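
As a minimal illustration of a data name abbreviation scheme, the Python sketch below applies a hypothetical set of data name word abbreviations with a simple data name abbreviation algorithm; the abbreviations, data names, and length limit are assumptions, not taken from the book.

    # Minimal sketch, assuming a hypothetical word abbreviation set and a
    # simple length-limited abbreviation algorithm.
    WORD_ABBREVIATIONS = {"employee": "empl", "identifier": "id", "number": "nbr"}

    def abbreviate_data_name(primary_data_name, max_length=18):
        words = primary_data_name.lower().split()
        abbreviated = [WORD_ABBREVIATIONS.get(word, word) for word in words]
        return "_".join(abbreviated)[:max_length]

    print(abbreviate_data_name("Employee Identifier Number"))   # empl_id_nbr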

Preferred data source is the data product unit or variation within a data product set or variation representing a data file that will be the source for a business fact. It’s the location where an individual business fact can be obtained that is the most current and most accurate. It’s the location for the highest quality data that is sometimes referred to as the best-of-breed data. (Brackett 2012)

Preferred data template is a subset of the preferred logical data architecture for a specific subject area that promotes data sharing within or between organizations, and helps organizations develop applications and databases using preferred data. (Brackett 2012)

Preferred data translation rule is a data translation rule between a preferred data designation and a non-preferred data designation. (Brackett 2012)

Preferred foreign key is a foreign key that matches the preferred primary key in a parent data subject. (Brackett 2012)

Preferred foreign key principle states that each subordinate data subject in a common data architecture will have one and only one preferred foreign key designated that uniquely identifies the parent data occurrence in a parent data subject. (Brackett 2012)

Preferred logical data architecture is the common, desired, to-be logical data architecture for the organization. It’s a subset of a common data architecture developed from a thorough understanding gained through data inventorying and cross-referencing. (Brackett 2012)

Preferred logical data name is the data name developed according to the data naming taxonomy and approved by the business as the preferred name for the data. The preferred logical data names are the data names developed for an initial common data architecture and for enhancements to that common data architecture. (Brackett 2012)

Preferred physical data architecture is the common, desired, to-be, physical data architecture for the organization. It’s developed from a formal denormalization of the preferred logical data architecture. (Brackett 2012)

Preferred physical data name is the data name developed from the preferred logical data name during formal data denormalization according to a set of data name word abbreviations and a formal data name abbreviation algorithm. (Brackett 2012)

Preferred primary key is a primary key that has been designated as preferred for use in a comparate data resource. (Brackett 2011, 2012)

Preferred primary key principle states that each data subject in a common data architecture will have one and only one preferred primary key designated that uniquely identifies all data occurrences within that data subject in the organization’s common data architecture. (Brackett 2012)

Preferred data reference set variation is a data reference set variation within a data subject that has been designated as preferred for data sharing and development of a comparate data resource. (Brackett 2012)

Prescriptive is serving to prescribe; acquired by, founded on, or determined by prescription or long-standing custom. It’s describing how to get from an existing situation to a desired situation. (Brackett 2012)

Presumed data culture variability principle states that an existing fragmented data culture is highly variable and should be considered as the norm in most public and private sector organizations. Seldom is any organization free from some degree of data culture variability. (Brackett 2012)

Presumed data resource variability principle states that disparate data are highly variable in their names, definitions, structure, integrity, and documentation. Data resource variability should be considered as the norm in most public and private sector organizations. (Brackett 2012)

Primary data name is the formal data name that is the fully spelled out, real world, unabbreviated, un-truncated, business name of the data that has no special characters or length limitations. (Brackett 2011)

Primary data name abbreviation principle states that data name word abbreviations, data name abbreviation algorithms, and data name abbreviation schemes be developed to consistently provide formal data name abbreviations. (Brackett 2011)

Primary data name principle states that each business fact, or set of closely related business facts, in the data resource must have one and only one primary data name. All other data names become aliases of the primary data name. (Brackett 2011)

Primary key is a set of one or more data attributes whose values uniquely identify each data occurrence in a data entity in a logical data model. In a database, a primary key is a set of one or more data items whose values uniquely identify each data record in a data file. (Brackett 2011, 2012)  
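
As a minimal illustration, the Python sketch below checks whether a candidate primary key uniquely identifies each data record in a data file; the data items and values are hypothetical.

    # Minimal sketch: verify that a candidate primary key (one or more
    # data items) has a unique value combination for every data record.
    def is_unique_key(records, key_items):
        seen = set()
        for record in records:
            key_value = tuple(record[item] for item in key_items)
            if key_value in seen:
                return False            # duplicate key value found
            seen.add(key_value)
        return True

    employees = [
        {"employee_number": 101, "name": "Pat"},
        {"employee_number": 102, "name": "Lee"},
    ]
    print(is_unique_key(employees, ["employee_number"]))   # True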

Primary key composition indicates the number and nature of the data attributes forming the primary key. (Brackett 2011)

Primary key list is a list of the primary keys for a data subject that exists in the disparate data. Only the data characteristic is listed for each primary key, not the data characteristic variation. (Brackett 2012)

Primary key matrix is a matrix of the primary keys that shows all of the disparate primary keys for a data subject and across related data subjects. (Brackett 2012)

Primary key range of uniqueness is the range of data occurrences for which the primary key provides a unique identification. The primary keys in disparate data may have different ranges of uniqueness that must be identified before a preferred primary key can be designated. (Brackett 2012)

Primary key scope indicates the range of data occurrences covered by the primary key. (Brackett 2011)

Primary key status indicates the status of the primary key. (Brackett 2011)

Primary key type indicates whether the primary key is meaningful or meaningless to the business.

Primary productivity loss is the loss related to understanding and using the data.

Primitive data are data that are obtained by measurement or observation of an object or event in the business world. (Brackett 2012)

Principle is a comprehensive and fundamental law, doctrine, or assumption; a rule of conduct. A principle can be basic, applying to data resource management in general, or it can be specific, applying to one aspect of data resource management. (Brackett 2011, 2012)

Principle of delayed change states that nothing will change to prevent a situation from getting worse until it’s too late. When the situation is finally discovered, such as a disparate data resource, it becomes a monumental task to resolve the problem. (Brackett 2011)

Principle of gradual change states that the disparate data resource evolved slowly and almost unnoticed until it was too late to correct. (Brackett 2011)

Principle of independent architectures states that each primary component of the information technology architecture has its own architecture independent of the other architectures. (Brackett 2011)

Principle of intended consequences states that any intervention in a complex system, such as a data resource, should be guaranteed to have the intended result. If that guarantee cannot be made, then the intervention should not be taken. (Brackett 2011, 2012)

Principle of unintended consequences states that any intervention in a complex system may or may not have the intended result, but will inevitably create unintended and often undesirable outcomes. (Brackett 2011)

Privacy and confidentiality principle states that the data resource must be protected from any disclosure that violates a person’s or organization’s right to privacy and confidentiality. (Brackett 2011)

Proactive data resource quality is the process of establishing the desired quality criteria and ensuring that the data resource meets those criteria from this point forward. It’s oriented toward preventing defects from entering the data resource. (Brackett 2011)

Probabilistic is of, referring to, based on, or affected by probability, randomness, or chance. (Brackett 2012)

Probability neglect is overestimating the odds of things we most dread and underestimating the odds of things we least dread. Probability neglect for the data resource is happening in most public and private sector organizations. (Brackett 2012)

Product-to-common cross-reference is a data cross-reference between data products and a common data architecture. (Brackett 2012)

Product-to-product cross-reference is a data cross-reference between data products without the benefit of a common data architecture. Product-to-product cross-references are between sets of disparate data, usually databases, bridges, or feeds between information systems. (Brackett 2012)

Proof positive principle states that when you go to executives for approval with proof of positive results, you are more likely to gain their support than if you ask for support based on a promise to deliver. (Brackett 2011)

Proper means marked by suitability, rightness, or appropriateness; very good, excellent; strictly accurate, correct; complete. (Brackett 2011)

Proper balance principle states that a proper balance needs to be maintained between allowing enough access for people to perform their business activities and limiting access to protect the data from unauthorized alteration or deletion. (Brackett 2011)

Proper data structure is a data structure that provides a suitable representation of the business, and the data supporting the business, that is relevant to the intended audience. (Brackett 2011)

Proper sequence principle states that proper design proceeds from development of logical data structures that represent the business and how the data support the business, to the development of physical data structures for implementing databases. (Brackett 2011)

Prospective is likely to come about; likely to be or become; expected to happen; looking to the future. It is looking ahead at what’s needed. (Brackett 2012)

Provenance comes from the French provenir, meaning to come from. It represents the origin or source of something, the history of ownership, or the current location of an object. The term is used mostly for art work, but is now used in a wide range of fields, including science and computing. (Brackett 2011, 2012)

Psychological denial is the situation where people are inherently aware of a dangerous situation, but choose not to recognize that situation or deny that the situation exists.

Quality is a peculiar and essential character, the degree of excellence, being superior in kind. Quality is defined through four virtues -- clarity, elegance, simplicity, and value. (Brackett 2011)

Raw data:  See Data.

Readily available data documentation principle states that all data resource data must be readily available to all audiences. Both technical and semantic data must be available. (Brackett 2011)

Realistic planning horizons principle states that realistic planning horizons must be challenging, yet achievable, and must be developed to cover all audiences in the organization. The horizons must stretch the imagination slightly, but not unrealistically. They must be understandable and achievable, but not too close or too distant. (Brackett 2011, 2012)

Reasonable means agreeable to reason, not extreme or excessive, having the faculty of reason, and possessing sound judgment. (Brackett 2011)

Reasonable data orientation is an orientation toward the business and support of the current and future business information demand. It depends on the architectural concepts, principles, and techniques, but more importantly depends on the culture of the organization. (Brackett 2011, 2012)

Reasonable development direction principle states that the direction of data resource development must focus primarily on the business direction and secondarily on the database technology direction. (Brackett 2011)

Reasonable management procedures principle states that reasonable procedures for development and maintenance of a comparate data resource must be established. (Brackett 2011)

Recast data is the formal process of adjusting data values for historical continuity. It aligns data values for a common historical perspective using data recast rules. (Brackett 2012)

Recognition means the action of recognizing; the state of being recognized; acknowledgement; special notice and attention. (Brackett 2012)

Reconstruct data is the formal process of rebuilding complete historical data that are not stored as full data records. (Brackett 2012)
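
As a minimal illustration, the Python sketch below rebuilds complete historical records from partial historical data instances that hold only the changed data items; the data items and dates are hypothetical.

    # Minimal sketch: apply partial historical data instances, in order,
    # to a base data occurrence to reconstruct the full record at each
    # point in time.
    base_occurrence = {"employee_id": 1, "name": "Pat", "city": "Olympia"}
    partial_instances = [
        {"effective_date": "2010-01-01", "city": "Seattle"},
        {"effective_date": "2011-06-15", "name": "Pat Smith"},
    ]

    history = []
    current = dict(base_occurrence)
    for instance in partial_instances:
        changed_items = {k: v for k, v in instance.items() if k != "effective_date"}
        current.update(changed_items)       # apply only the changed data items
        history.append({"effective_date": instance["effective_date"], **current})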

Recursive data relation is a data relation between two data occurrences within the same data entity. (Brackett 2011)

Redundant means exceeding what is necessary or normal; superfluous; characterized by or containing an excess; characterized by similarity or repetition; profuse; or lavish. (Brackett 2012)

Redundant data are inconsistently maintained on different data sites, by different methods, and are seldom kept in synch. (Brackett 2011)

Redundant data items is the situation where a data item representing the same data characteristic exists in different data files or different data records, whether that data item has the same data name or a different data name. (Brackett 2012)

Redundant historical data instances is the situation where redundant physical data occurrences may have corresponding physical historical data instances. (Brackett 2012)

Redundant physical data occurrences is the situation where the same logical data occurrence exists multiple times in different data files in a disparate data resource. (Brackett 2012)

Relational theory was developed by Dr. Edgar F. (Ted) Codd to describe how data are designed and managed. The theory represents data and their interrelations through a set of rules for structuring and manipulating data, while maintaining their integrity. It is based on mathematical principles and is the basis for the design and use of relational database management systems. (Brackett 2012)

Repeating groups:  See First normal form.

Replication is a copy or reproduction; the action or process of replicating or reproducing; or creating a replica. (Brackett 2012)

Resistance to change principle states that everyone has some resistance to change. The resistance exists because most people are unsure of new approaches, particularly with new techniques. The uncertainty causes anxiety and apprehension about the outcome. (Brackett 2011)

Resolution is the degree of granularity of the data, indicating how small an object can be represented with the current scale and precision. (Brackett 2011)

Resource is a source of supply or support; an available means; a natural source of wealth or revenue; a source of information or expertise; something to which one has recourse in difficulty; a possibility of relief or recovery; or an ability to meet and handle a situation. (Brackett 2012)

Responsibility is the quality or state of being responsible; moral, legal, or mental accountability; reliability and trustworthiness; something for which one is responsible. (Brackett 2011)

Restricted means to confine within bounds; subjected to some restriction; not general; available to particular groups and excluding others; not intended for general circulation or use. (Brackett 2011)

Restricted data vision is the situation where the scope of the data resource is limited, the development direction is unreasonable, or the planning horizon is unrealistic. (Brackett 2011)

Restructure data is the formal process of changing the structure of the disparate source data to the structure of the comparate target data. It takes physical disparate data structures that have existed in the past and changes them to a preferred physical data structure. (Brackett 2012)

Retroactive data resource quality is the process of understanding the existing quality of the data resource and improving the quality to the extent that is reasonably possible. It’s oriented toward correcting the existing low quality data resource by removing defects. (Brackett 2011)

Retrospective is the act or process of surveying the past; based on memory; affecting things past; looking back, contemplating, or directing to the past. It is looking at what has happened in the past to reach what currently exists. (Brackett 2012)

Reverse data transformation is the formal transformation of comparate data to disparate data. It’s necessary to maintain disparate data that supports disparate applications until they can be converted to comparate data. (Brackett 2012)

Reverse data translation rule is a data translation rule from a preferred data designation to a non-preferred data designation. (Brackett 2012)

Review data is the formal process of reviewing the data that have been transformed and loaded into the target database to ensure they are appropriate for production use. (Brackett 2012)

Risk is the possibility of suffering harm or loss from some event; a chance that something will happen. (Brackett 2011)

Robust means having or exhibiting strength or vigorous health; firm in purpose or outlook; strongly formed or constructed; sturdy. (Brackett 2011)

Robust data documentation is documentation about the data resource that is complete, current, understandable, non-redundant, readily available, and known to exist. (Brackett 2011)

Rotational data structure shows the detail necessary for implementing data mining. It is developed from a renormalization of the logical dimensional data structure, but is independent of the physical operating environment. (Brackett 2011)

Rule is an authoritative, prescribed direction for conduct, or a usual, customary, or generalized course of action or behavior; a statement that describes what is true in most or all cases; a standard method or procedure for solving problems. (Brackett 2011)

Scale is the ratio of a real world distance to a map distance. (Brackett 2011)

Schema is simply a data structure. (Brackett 2011)

Scope pertains to the range of a person’s perceptions, the breadth or opportunity to function, or the area covered by a given activity. (Brackett 2012)

Second dimension of data variability is the variability in data names, definitions, structure, integrity, and documentation that occurs over time with the operational data in a disparate data resource. (Brackett 2012)

Second level of data redundancy is created when disparate data instances contain redundant data. The data redundancy greatly magnifies the data redundancy created in the first level of data redundancy, leading to massive quantities of redundant data. (Brackett 2012)

Second normal form, commonly known as partial key dependencies, is a technique to find data attributes that are dependent on only part of the primary key, and move them to a data entity where they are dependent on the complete primary key. (Brackett 2011)
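
As a minimal illustration of resolving a partial key dependency, the Python sketch below moves a data attribute that depends on only part of a composite primary key into its own data entity; the entities, keys, and values are hypothetical.

    # Minimal sketch: product_name depends only on product_id, which is
    # part of the composite key (order_id, product_id), so it is moved
    # to a separate Product entity.
    order_lines = [
        {"order_id": 1, "product_id": "A", "product_name": "Widget", "quantity": 2},
        {"order_id": 2, "product_id": "A", "product_name": "Widget", "quantity": 5},
    ]

    products = {}
    normalized_lines = []
    for line in order_lines:
        products[line["product_id"]] = {"product_name": line["product_name"]}
        normalized_lines.append({"order_id": line["order_id"],
                                 "product_id": line["product_id"],
                                 "quantity": line["quantity"]})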

Secondary productivity loss is the loss from unnecessary business activities, such as legal appeals, suits, returned merchandise, protests, vandalism, and other actions against the organization that take resources to resolve. (Brackett 2011)

Seduction relates to being creative, engaging people in the task at hand, captivating people’s attention, and activating their imagination. It’s how you draw people into the process of creating a comparate data resource that meets the business information demand. (Brackett 2011)

Self-contained historical data is the situation where historical data instances are retained in the same data file along with the current data instance. (Brackett 2012)

Self-defeating fallacy states that no matter how much you believe that something can happen, if it is not possible, it will not happen. (Brackett 2011)

Self-fulfilling prophecy states that if you really believe in something that can happen, and it is possible, it will happen. It’s the flip side of the self-defeating fallacy. (Brackett 2011)

Semantic data resource data are the data that help business professionals understand the content and meaning of the data and use them to support business activities. (Brackett 2011)

Semantic heterogeneity is a general lack of understanding about the data that makes it very difficult to fully utilize those data to support the business information demand. (Brackett 2011)

Semantic homogeneity is a formal understanding about the data that makes it easy to fully utilize those data to support the business information demand. (Brackett 2011)

Semantic information has context and meaning. It is relevant and timely. It is also arranged according to certain rules. (Brackett 2011)

Semantic statement is a textual statement of the relationship between data entities. (Brackett 2011)

Semantics deals with the relation between signs and symbols, and what they represent. Specifically, it deals with their meaning. (Brackett 2011)

Semiotic theory deals with the relation between signs and symbols, and their interpretation. It consists of syntax, semantics, and pragmatics. (Brackett 2011)

Semiotics is a general theory of signs and symbols and their use in expression and communication. (Brackett 2011)

Separate historical data is the situation where historical data instances are retained in a separate data file. (Brackett 2012)

Set of data codes is a subset of data codes representing only part of the data properties for a complete data code set, or a mixture of properties from different data code sets. (Brackett 2012)

Set of entities in mathematics is a subgroup of an entity set, such as Retirement Eligible Employees. (Brackett 2011)

Set theory is a branch of mathematics or of symbolic logic that deals with the nature and relations of sets. (Brackett 2011)

Short data definitions are data definitions that are short, truncated phrases, or incomplete sentences that provide little meaning. (Brackett 2011)

Silver bullet is an attempt to achieve some gain without any pain. The result of seeking a silver bullet is usually considerable pain with minimal gain, and maybe considerable loss. (Brackett 2011)

Silver bullet syndrome is the on-going syndrome that organizations go through searching for quick fixes to the data problems. (Brackett 2011)

Simple primary key contains one home data attribute in its home data entity, such as Employee Social Security Number in the Employee data entity. (Brackett 2011)

Simplicity is the state of being simple or uncompounded, having a lack of subtlety or penetration, freedom from pretense or guile, directness of expression, and being maintainable. Simplicity is plain and uncomplicated.

Simplicity principle states that everything should be as simple as possible, but not simpler (attributed to Albert Einstein). (Brackett 2011)

Single architecture orientation principle states that the entire data resource of an organization must be developed and managed within a single, organization-wide, common data architecture. (Brackett 2011)

Single characteristic data item is a data item that contains only one elemental or combined data characteristic. (Brackett 2012)

Single contributor data derivation is where one data attribute is the contributor to an algorithm that generates the derived data. (Brackett 2011)

Single data architecture principle states that the entire data resource of an organization must be developed and managed within a single, organization-wide, common data architecture. (Brackett 2012)

Single file data subject is a complete data subject that is contained in a single data file. (Brackett 2012)

Single occurrence data record is a data record that represents a single data occurrence. (Brackett 2012)

Single property data code is a data code that represents one specific data property of a single data subject. (Brackett 2012)

Single subject data code is a data code that represents a single data subject, such as data codes for gender, management level, and hair color. (Brackett 2012)

Single subject data file is a data file that contains all of the data items, or a subset of the data items, representing the data characteristics for a single data subject. (Brackett 2012)

Single subject set of data codes is a set of data codes that represent one data subject. Single subject sets of data codes are relatively common in a disparate data resource. (Brackett 2012)

Source data extraction – See Extract source data.

Source data identification – See Identify source data.

Specific data are data that are stored in databases and are used in applications. (Brackett 2011)

Specific data cardinality is the data cardinality specified by a notation at the end of a data relation and is more specific than the general data cardinality. (Brackett 2011, 2012)

Specific data definitions are the comprehensive data definitions for specific data. (Brackett 2011)

Specific data definition inheritance is the process of specific data definitions inheriting other specific data definitions. It’s a technique that implements the data inheritance principle. (Brackett 2011)

Specific data integrity rule is a data integrity rule that is developed and applied to the data. (Brackett 2011)

Specific data translation rule is a data translation rule that applies directly to the data translations. It may inherit a fundamental data translation rule, or it may specify a unique data translation rule. (Brackett 2012)

Specific information is a set of data in context that is relevant to a person at a point in time or for a period of time. (Brackett 2012)

Specific primary key is a primary key that is not available for all data occurrences. (Brackett 2011)

Static data conversion is where the data conversion is always done by the same conversion criteria, such as changing miles to kilometers. (Brackett 2011)
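
As a minimal illustration, the Python sketch below performs a static data conversion whose criteria never change, using the miles-to-kilometers example; the conversion factor is the standard definition, not taken from the book.

    # Minimal sketch: a static conversion always uses the same criteria.
    MILES_TO_KILOMETERS = 1.609344      # exact definition of the mile in kilometers

    def miles_to_km(miles):
        return miles * MILES_TO_KILOMETERS

    print(miles_to_km(10))              # 16.09344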

Steward came from the old English term sty ward; a person who was the ward of the sty. These people watched over the stock and were responsible for the welfare of the stock, particularly at night when the risks to the welfare of the stock were high. (Brackett 2011)

Strategic data steward is a person who has legal and financial responsibility for a major segment of the data resource. That person has decision-making authority for setting directions, establishing policy, and committing resources for that segment of the data resource. (Brackett 2011)

Strategic schema represents the structure of data as perceived by executives. It is relatively general in nature and includes only major data subjects and a few relations. (Brackett 2011)

Strong data resource comparity principle states that rules were designed for the development of a comparate data resource. (Brackett 2011)

Strong anthropic principle states that the constants and parameters were designed for our existence. (Brackett 2011)

Strong application data transformation is the complete transformation of an application to read and store comparate data as well as operate with comparate data. (Brackett 2012)

Structurally stable – business flexible principle states that a proper data structure must remain structurally stable across changing technology and changing business needs, yet adequately represent the current and future business as it changes. (Brackett 2011)

Structured means something arranged in a definite pattern of organization; manner of construction; the arrangement of particles or parts in a substance or body, arrangement or interrelation of parts as dominated by the general character of the whole; the aggregate of elements of an entity in their relationships to each other, the composition of conscious experience with its elements and their combination. (Brackett 2011)

Structured data are data that are structured according to traditional database management systems with tables, rows, and columns that are readily accessible with a structured query language. Structured data are considered tabular data. (Brackett 2012)

Structureless data name is any data name that has no formal structure to the words composing the data name. (Brackett 2011)

Subject-oriented data resource is a data resource that is built from data subjects that represent business objects and events in the business world that are of interest to the organization. The basic structure of a comparate data resource is based on data subjects and the relations between those data subjects. All characteristics of a data subject are stored with that data subject. (Brackett 2011, 2012)

Subjective data resource quality is a perception of the quality of the data resource based on an individual’s experience. It’s a cultural oriented quality based on an individual’s reality and varies from person to person, and from time to time. It’s like beauty—it’s in the eyes of the beholder. (Brackett 2011)

Subordinate data occurrence is a data occurrence from a data subject that is subordinate to the data subject represented by the data file. (Brackett 2012)

Subtraction is economy, doing less, and conserving. It’s about finding what to eliminate and how to eliminate what’s unnecessary. (Brackett 2011)

Success motivation cycle is a cycle where success encourages people to continue their effort, which leads to more success. Success begets success. (Brackett 2011)

Suck-and-squirt approach is the process of finding the single record of reference, or system of reference, for operational data, sucking the operational data out of that reference, performing superficial cleansing, and squirting the data into the data warehouse. (Brackett 2011)

Super means over and above, higher in quantity, quality, or degree; exceeding a norm, in excessive degree or intensity, surpassing all or most others of its kind; situated or placed above, on, or at the top of, situated on the dorsal side; having the ingredient present in a large or unusually large proportion; constituting a more inclusive category than that specified; superior in status, title, or position. (Brackett 2011)

Super-structured data are any data that are structured in a manner more intricate than tabular data and, therefore, cannot be interpreted by structured query languages and tools. (Brackett 2011)

Surrogate key is a physical key contained within the database that is not visible to the business, and is seldom identified on any logical data structures. It is solely for database management purposes. (Brackett 2011)

Survey is the act or instance of surveying; something that is surveyed; the examination of a condition, situation, or value; appraise, inspect, scrutinize. (Brackett 2012)

Sustainability is a repeatable and lasting process. It is symmetry, seduction, and subtraction applied over and over. It’s the ability to maintain something at a creative level indefinitely. (Brackett 2011)

Symmetry includes structure, order, and esthetics. It is something that is pleasing to people. Symmetry does not mean symmetrical, but is more about the dynamic properties of ordering, organizing, and operating than about the static proportions of objects. (Brackett 2011)

Syntactic information is raw data. It is arranged according to certain rules. Syntactic information alone is meaningless—it’s just raw data. (Brackett 2011)

Syntax deals with the relation between signs and symbols, and their interpretation. Specifically it deals with the rules of syntax for using signs and symbols. (Brackett 2011)

Synthesis is to put together; the combination of parts or elements to form a whole; the production of a substance by the union of elements, or groups to form a whole. (Brackett 2011)

Tacit knowledge, also known as implicit knowledge, is the knowledge that a person retains in their mind. It’s relatively hard to transfer to others and to disseminate widely. (Brackett 2011)

Tactical data steward is a person who acts as liaison between the strategic data stewards and the detail data stewards to ensure that all business and data concerns are addressed. (Brackett 2011)

Tactical schema represents the structure of data as perceived by managers. It is more specific, but is not a fully detailed operational schema. (Brackett 2011)

Target data identification – See Identify target data.

Tarnished silver bullet is the result of attempting to find a silver bullet—considerable pain with minimal gain, and maybe considerable loss. (Brackett 2011)

Taxonomy is the science of classification, a system for arranging things into natural, related groups based on common features. (Brackett 2011)

Teamwork synergy principle states that the appropriate business and data management professionals must be involved at the appropriate time in any project to ensure that development or enhancement of a comparate data resource supports the business information demand. (Brackett 2011)

Technical data resource data are the data that technicians need to build, manage, and maintain databases and make the data available to the business. (Brackett 2011)

Technically correct – culturally acceptable principle states that a proper data structure must be both technically correct in representing the data and culturally acceptable for the intended audience. A proper data structure must integrate all of the technical detail about the data resource and present it in a manner that is acceptable to the recipients. (Brackett 2011, 2012)

Technique is a body of technical methods; a method of accomplishing a desired aim. Technique as used here represents how to accomplish a principle; the principle is the what and the technique is the how. (Brackett 2011)

Temporal variability is the normal change in the data resource due to changes in the business over time. Organizations add or drop lines of business, reorient their focus, establish new initiatives, and so on. The data resource must reflect these changes. (Brackett 2012)

Tertiary productivity loss is the loss of customers and sales in the private sector and the avoidance of regulations in the public sector. (Brackett 2011)

Theory is a plausible or scientifically acceptable general principle or body of principles offered to explain phenomena; a body of theorems presenting a concise systematic view of a subject. (Brackett 2011)

Thesaurus is a list of synonyms and related terms that help people find a specific term that meets their needs. (Brackett 2011)

Think globally – act locally principle provides a broad orientation for developing a comparate data resource. People need to think globally about the comparate data resource, but act locally to ensure that data resource contains their data and those data are readily available. (Brackett 2011, 2012)

Third dimension of data variability is the variability in data names, definitions, structure, integrity, and documentation that occurs with evaluational data in a disparate data resource. (Brackett 2012)

Third normal form, commonly known as inter-attribute dependencies, is a technique to find data attributes in a data entity that are dependent on another data attribute in that same data entity and move them to another data entity. (Brackett 2011)
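
As a minimal illustration of resolving an inter-attribute dependency, the Python sketch below moves a data attribute that depends on another non-key data attribute into its own data entity; the entities, keys, and values are hypothetical.

    # Minimal sketch: department_name depends on department_id rather
    # than on the primary key (employee_id), so it is moved to a
    # separate Department entity.
    employees = [
        {"employee_id": 1, "department_id": "D1", "department_name": "Sales"},
        {"employee_id": 2, "department_id": "D1", "department_name": "Sales"},
    ]

    departments = {}
    normalized_employees = []
    for e in employees:
        departments[e["department_id"]] = {"department_name": e["department_name"]}
        normalized_employees.append({"employee_id": e["employee_id"],
                                     "department_id": e["department_id"]})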

Thorough data definition principle states that a comprehensive data definition must be thorough to be fully meaningful to the business. To be thorough, a data definition must not have any length limitation. The data definition must be long enough to fully explain the data in business terms. (Brackett 2011)

Thorough understanding principle states that a thorough understanding of the data with respect to the business resolves uncertainty and puts the brakes on data disparity. It’s the understanding of data with respect to the business that’s important. (Brackett 2012)

Transforming data – See Data transform.

Transition is the passage from one state, stage, or place to another; a movement, development, or evolution from one form, stage, or style to another. It is moving in a consistent direction toward a desired goal. It implies a permanence of the passage or evolution without a return to the former state. (Brackett 2012)

Translate data is the formal process of translating the extracted data values to the preferred data values, if they are not already in the preferred format or content, using appropriate data translation rules. (Brackett 2012)
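
As a minimal illustration, the Python sketch below translates extracted data values to preferred data values using a data translation rule expressed as a simple mapping; the codes and preferred values are hypothetical.

    # Minimal sketch: translate non-preferred gender codes to a
    # hypothetical preferred set of data values.
    GENDER_TRANSLATION = {"M": "Male", "F": "Female", "1": "Male", "2": "Female"}

    def translate_gender(extracted_value):
        try:
            return GENDER_TRANSLATION[extracted_value]
        except KeyError:
            raise ValueError(f"No data translation rule for {extracted_value!r}")

    print(translate_gender("1"))        # Male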

Ultimate data resource quality is a data resource that is stable across changing business and changing technology so it continues to support the current and future business information demand. (Brackett 2011)

Umwelt is a German word meaning the environment or the world around. It’s the world as perceived by an organism based on its cognitive and sensory powers. It’s the environmental factors collectively that are capable of affecting an organism’s behavior. It’s a self-centered world where organisms can have different umwelten, even though they share the same environment. It’s an organism’s perception of the current surroundings and previous experiences which are unique to that organism. It’s the world as experienced by a particular organism. (Brackett 2012)

Unacceptable means not acceptable, not pleasing, or unwelcome. (Brackett 2012)

Unacceptable data availability is the situation where the data are not readily available to meet the business information demand or are not properly protected or secured. (Brackett 2011)

Unacceptable data culture variability is any unacceptable level of variability in management of the data resource. (Brackett 2012)

Unacceptable data resource variability is any temporal or cultural variability in the data resource that is beyond the acceptable level. Any data resource variability that is unacceptable and impacts the business must be resolved. (Brackett 2012)

Unacceptable variability is the situation where the variability exceeds the normal range and becomes unacceptable. Most organizations seek to resolve the unacceptable variability. (Brackett 2012)

Unavailable data definitions are data definitions that are not readily available. The best data definitions may have been written, but if they are not readily available, it’s the same as being non-existent. (Brackett 2011)

Uncertainty resolution principle states that when people thoroughly understand the situation, most of the uncertainty about that situation is resolved. (Brackett 2011)

Unconditional data source rule is a data source rule that specifies only one location as the preferred data source. (Brackett 2012)

Understandable data documentation principle states that the data resource data must be understandable to all audiences. The appropriate data resource data must be selected and presented to the intended audience in a manner appropriate for that audience. (Brackett 2011)

Understanding principle states that a thorough understanding of the data with respect to the business resolves uncertainty and puts the brakes on data disparity. (Brackett 2011)

Unnecessary justification principle states that an extensive justification is not needed to begin an initiative for developing a comparate data resource. An extensive justification is not needed to improve data resource quality. (Brackett 2011, 2012)

Unrealistic planning horizon is the situation where the data resource horizon is too nearsighted, too farsighted, or overly optimistic. (Brackett 2011)

Unreasonable means not acting according to reason, not conforming to reason, or exceeding the bounds of reason or moderation. (Brackett 2011)

Unreasonable data orientation is an unreasonable attitude about developing the data resource that is physically oriented, short term, and narrowly focused. (Brackett 2011)

Unrelated data definitions are data definitions that are unrelated to the content and meaning of the data with respect to the business. The data definition may be useful in another context, but it is not useful for understanding the data with respect to the business.

Unstructured means not structured, having few formal requirements, or not having a patterned organization; without structure, having no structure, or structureless. (Brackett 2011, 2012)

Unstructured data are data that are not structured, have few formal requirements, or do not have a patterned organization. See Complex structured data. (Brackett 2011)

Vague means not clearly expressed; stated in indefinite terms; not having a precise meaning; not clearly grasped, defined, or understood. (Brackett 2011)

Vague data definition is any data definition that does not thoroughly explain in simple, understandable terms, the real content and meaning of the data with respect to the business. (Brackett 2011)

Value is the monetary worth of something, its relative utility or importance, its usefulness and reusability, or its degree of excellence. It’s having desirable or esteemed characteristics or qualities. (Brackett 2011)

Variability is the quality, state, or degree of being variable or changeable; apt or liable to vary or change; changeable; inconsistent; characterized by variations; having much diversity; or not true to type. (Brackett 2012)

Variable characteristic data item is a data item that could contain several different data characteristics, but only one of those data characteristics appears in any data record. (Brackett 2012)

Variable fact data attribute is where different facts may appear in the same data attribute depending on the situation. (Brackett 2011)

Variable format data item is a data item whose data value could be in one of a variety of different formats. (Brackett 2012)

Variable sequence data item is the situation where the data items can be in any sequence in a data record. The specific data item is identified by a keyword or mnemonic, followed by the data value. (Brackett 2012)

Variation is the act or process of varying; the state or fact of being varied; the extent to which a thing varies; or an instance of varying. (Brackett 2012)

Vertical partitioning is denormalizing data occurrences into two or more data files when the length of the data record exceeds the capability of the database.
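
As a minimal illustration, the Python sketch below splits one long data occurrence across two data records that share the primary key, as in vertical partitioning; the data items are hypothetical.

    # Minimal sketch: the core and overflow records both carry the
    # primary key (employee_id) so the occurrence can be reassembled.
    occurrence = {"employee_id": 1, "name": "Pat",
                  "resume_text": "(long text)", "photo_notes": "(long text)"}

    core_record = {k: occurrence[k] for k in ("employee_id", "name")}
    overflow_record = {k: occurrence[k]
                       for k in ("employee_id", "resume_text", "photo_notes")}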

Vertices from set theory. See Nodes.

Vested interest principle states that the audience with a vested interest in managing data as a critical resource of the organization should be targeted for supporting any quality improvement initiative. (Brackett 2011)

Virtual data resource state is an interim state between the formal data resource and a comparate data resource where real-time data transformation is performed to produce interim comparate data. The data are transformed in real time, according to formal data transformation rules, in either direction between disparate data and comparate data. Disparate data may be transformed to comparate data to support new applications or databases. Comparate data may be transformed to disparate data to support disparate applications or databases. (Brackett 2012)

Virtue is a beneficial quality or power of something, a commendable quality or trait, a merit. (Brackett 2011)

Vision is the act or power of imagination, a mode of seeing or conceiving, discernment or foresight. (Brackett 2011)

Weak anthropic principle states that the constants and parameters just happen to be right for our existence. (Brackett 2011)

Weak application data transformation is the use of routines to read and store comparate data while the application still operates with the disparate data. (Brackett 2012)

Weak data resource comparity principle states that a comparate data resource will be developed if people just happen to do it right, which is unlikely to happen. (Brackett 2011)

Wider scope principle states that data resource management must ultimately include all data at the organization’s disposal. It includes non-critical data, super-structured data, historical data, and non-automated data. (Brackett 2011)

Willingness to change principle states that most people are willing to change if they understand the need to change. Most people want to change the current disparate data situation. They see the need for change and would willingly participate in the change. (Brackett 2011)
