Chapter 13

Literature Mining and Ontology Mapping Applied to Big Data

Vida Abedi, Mohammed Yeasin,  and Ramin Zand

Abstract

Discovering the network of associations and relationships among diseases, genes, and risk factors is critical in clinical and translational research. The goal of this study was to design a system that would enable strategic reading/filtering and reduce information overload, generate new hypotheses, bridge the knowledge gap, and develop “smart apps.” We present the implementation of a text analytic system, Adaptive Robust and Integrative Analysis for Finding Novel Associations (ARIANA). The system is context-specific, modular, and scalable and able to capture direct and indirect associations among 2,545 biomedical concepts. An easy-to-use Web interface was developed to query, interact, and visualize the results. Empirical studies showed that the system was able to find novel associations and generate new hypotheses. For instance, the system captured the association between the drug hexamethonium and pulmonary fibrosis, which in 2001 caused the tragic death of a healthy volunteer. The software is available with a properly executed end-user licensing agreement at http://www.ARIANAmed.org.

Keywords

Adaptive; Associations; Biomedical; Interface; Novel; Semantic

Introduction

The effective mining of literature can provide a range of services, such as hypothesis generation or finding semantic-sensitive networks of associations from Big Data such as PubMed, which has more than 24 million citations of biomedical literature (http://www.pubmed.org) and grows by thousands of citations per day. It may also help in understanding the potential confluence among different entities or concepts of interest. A well-designed and fully integrated text analytic tool can bridge the gap between the generation and consumption of Big Data and increase its usefulness in terms of usability and scalability. A plethora of state-of-the-art applications has been reported in the contemporary literature and succinctly reviewed in a recent survey by Lu (2011). A total of 28 tools targeted to the specific needs of a scientific community were reviewed to compare functionality and performance. Common underlying themes of the tools were to:
1. Improve the relevance of search results
2. Provide better quality of service
3. Enhance user experience with PubMed
Although these applications were developed to minimize information overload, the questions of scalability and of finding networks of semantic associations to gather actionable knowledge remain largely open.
Traditional literature mining frameworks rely on keyword-based approaches and are not suitable for capturing meaningful associations to reduce information overload or generate new hypotheses, let alone find networks of semantic relations. Existing techniques lack the ability to present biological data effectively in an easy-to-use form (Altman et al., 2008) to further knowledge discovery (KD) by integrating heterogeneous data. To reduce information overload effectively and complement traditional means of knowledge dissemination, it is imperative to develop robust and scalable KD tools that are versatile enough to meet the needs of a diverse community. The utility of such a system would be greatly enhanced with the added capability of finding semantically similar concepts related to various risk factors, side effects, symptoms, and diseases. There are a number of challenges to developing such a robust yet versatile tool. A key problem is to create a fully integrated and functional system that is specific to a targeted audience, yet flexible enough to be creatively employed by a diverse range of users. First, to be effective, it is necessary to map the range of concepts, using a set of criteria, to a dictionary that is specific to the community. Second, it is important to ensure that the KD process is scalable with the growing size of data and dynamic terminology, and is effective in capturing the semantic relationships and network of concepts.
In essence, because biology and medicine are rich in terminology, KD has to overcome specific challenges. For instance, in pathology reports and medical records, 12,000 medical abbreviations have been identified (Berman, 2004). In addition, this large vocabulary is dynamic, and new terms emerge rapidly. Furthermore, the same object may have several names, or distinct objects can be identified with the same name; in the former case the names are synonyms, whereas in the latter case they are homonyms. Consequently, literature mining of biological and medical text becomes a challenging task, and the terms that suffer the most are gene and protein names (Hirschman et al., 2002; Wilbur et al., 1999). Therefore, to design and implement a more accurate system, it is important to understand and tackle these challenges at their root level. However, even more challenging is implementing the information extraction step, also known as deep parsing.
Deep parsing is built on formal mathematical models, attempting to describe how text is generated in the human mind (i.e., formal grammar). Deterministic or probabilistic context-free grammars are probably the most popular formal grammars (Wilbur et al., 1999). Grammar-based information extraction techniques are computationally expensive because they require the evaluation of alternative ways to generate the same sentence. Grammar-based information could therefore be more precise but at the cost of reduced processing speed (Rzhetsky et al., 2008). An alternative to grammar-based approaches is semantic methods such as latent semantic analysis (LSA) (Landauer and Dumais, 1997). The LSA-based methods use a bag-of-word model to capture statistical co-occurrences. These techniques are computationally efficient and are suitable for finding direct and indirect associations among entities.

Background

Latent semantic analysis is a well-known technique that has been applied to many areas in data science. In the LSA framework (Landauer and Dumais, 1997), a term-document matrix (commonly weighted as a TF–IDF matrix) is used to represent a collection of text (a corpus). LSA extracts the statistical relations among entities based on their second-order co-occurrences in the corpus. Arguably, LSA captures some semantic relations among various concepts based on their distance in the eigenspace (Berry and Browne, 1999). The most common measure used to rank the vectors is the cosine similarity measure (Berry and Browne, 1999). The three main steps of LSA are summarized from Landauer and Dumais (1997) for the sake of clarity:
1. Term-Document Matrix: Text documents are represented using a bag-of-words model. This representation creates a term-document matrix in which the rows are the words (dictionary), the columns are the documents, and the individual cell contains the frequency of the term appearance in the particular document. Term frequency (TF) and inverse document frequency (IDF) are used to create the TF–IDF matrix.
2. Singular Value Decomposition (SVD): SVD or sparse SVD (an approximation of SVD) is performed on the TF–IDF matrix, and the k largest singular values and their corresponding singular vectors are retained. The resulting k-dimensional representation (encoding matrix) captures the relationship among words based on first- and second-order statistical co-occurrences.
3. Information Retrieval: Information related to a query can be retrieved by first translating the query into the LSA space. A ranking measure such as cosine is used to compute the similarity between the data representation and the query.
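The three steps above can be sketched in a few lines of Python; the toy corpus, the vocabulary, and the choice of k = 2 are illustrative assumptions rather than ARIANA's actual configuration:

```python
import numpy as np

# Toy corpus: each "document" is the pooled text for one concept.
docs = [
    "stroke vascular risk hypertension",       # concept 0
    "hypertension blood pressure risk",        # concept 1
    "gene protein expression pathway",         # concept 2
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Step 1: term-document matrix with TF-IDF weighting.
tf = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        tf[idx[w], j] += 1
idf = np.log(len(docs) / (tf > 0).sum(axis=1))
tfidf = tf * idf[:, None]

# Step 2: SVD; keep the k largest singular values/vectors.
k = 2
U, S, Vt = np.linalg.svd(tfidf, full_matrices=False)
Uk = U[:, :k]
doc_vecs = Vt[:k, :].T * S[:k]           # documents in the k-dim LSA space

# Step 3: fold a query into the LSA space and rank documents by cosine.
def rank(query):
    v = np.zeros(len(vocab))
    for w in query.split():
        if w in idx:
            v[idx[w]] += idf[idx[w]]
    qv = v @ Uk                          # project query onto the k-dim space
    sims = doc_vecs @ qv / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(qv) + 1e-12)
    return np.argsort(-sims)             # best-matching documents first

print(rank("stroke risk"))               # the gene document ranks last
```

In the full system, each "document" is the pooled text for one of the 2,545 concepts, and a sparse SVD replaces the dense decomposition shown here.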

Parameter Optimized Latent Semantic Analysis

Although LSA has been applied to many areas in bioinformatics, LSA models have typically been built on ad hoc principles. A systematic study was performed on the parameters affecting the performance of LSA to develop a parameter optimized latent semantic analysis (POLSA) (Yeasin et al., 2009). The parameters examined were corpus content, text preprocessing, sparseness of data vectors, feature selection, influence of the first eigenvector, and rank of the encoding matrix. The resulting optimized parameter settings should be adopted whenever possible.

Improving the Semantic Meaning of the POLSA Framework

Methods such as LSA have been successful in finding direct and indirect associations among various entities. However, these methods still use bag-of-words concepts; therefore, they do not take into account the order of words, and hence the meaning of such words is often lost. Using multi-gram words would alleviate some of the problems of the bag-of-words model. In a multi-gram dictionary (MGD) the words “vascular accident” (which is a synonym of “stroke”) would be differentiated from “accident,” which could also mean “car accident” in a different context. However, it is challenging to generate such a dictionary. If all combinatorial words in the English dictionary are chosen, the size of such a dictionary would be considerably large—even if one considers only up to three-gram words. A larger dictionary implies also increased sparsity in the TF–IDF matrix. A possible solution is to construct the dictionary based on combinations of words that are biologically relevant for the case of biological text mining. Identification of biologically relevant word combinations can be derived from biological ontologies such as gene ontology (www.geneontology.org) or Medical Subject Headings (MeSH) (http://www.ncbi.nlm.nih.gov/mesh). Using an MGD could in principle improve the accuracy of vector-based frameworks such as the LSA that rely only on bag-of-words models. Use of an MGD provides also a means of extracting associations based on higher-order co-occurrences.
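The effect of an MGD on tokenization can be illustrated with a short sketch; the tiny dictionary and the greedy longest-match rule are assumptions for illustration, not the system's exact matching strategy:

```python
# Hypothetical mini-MGD; real entries would come from MeSH or GO node names.
MGD = {"vascular accident", "blood pressure", "stroke", "accident", "blood"}

def tokenize(text, mgd, max_n=3):
    """Greedy longest-match tokenization, so multi-gram dictionary
    entries such as "vascular accident" survive as single tokens."""
    words = text.lower().split()
    tokens, i = [], 0
    while i < len(words):
        for n in range(min(max_n, len(words) - i), 0, -1):
            cand = " ".join(words[i:i + n])
            if cand in mgd:
                tokens.append(cand)
                i += n
                break
        else:
            i += 1  # word (and its n-grams) not in the dictionary; skip it
    return tokens

print(tokenize("The vascular accident caused a stroke", MGD))
# ['vascular accident', 'stroke']
```

Note that "vascular accident" is kept as a single token, whereas "a car accident" would match only the mono-gram "accident".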

Web Services

It would be ineffective to build an integrated system unless researchers could interact with it and obtain valuable information directly. Hence, for a system to be used by experts, it is imperative to have a robust and practical application tool. There are five key advantages to using a Web service (WS) framework compared with a Web-based application (Papazoglou, 2008):
1. WS can act as client or server and can respond to a request from an automated application without human intervention. This feature provides a great level of flexibility and adaptability.
2. Web services are modular and self-descriptive: The required inputs and the expected output are well defined in advance.
3. Web services are manageable in a more standardized way. Even when a WS is hosted at a remote location, accessible only through the network, and written in an unfamiliar language, it is still possible to monitor and manage it using external application management and workflow systems.
4. A WS can be used by other applications when similar tasks need to be executed. This is important because more and more tools are being developed and will soon be integrated to provide improved services.
5. Finally, WS are the next generation of Web-based technology and applications. They provide a new and improved way for applications to communicate and integrate with one another (Papazoglou, 2008). The implications of this transition are profound, especially with the growing body of data and available tools.

ARIANA: Adaptive Robust and Integrative Analysis for Finding Novel Associations

Adaptive Robust and Integrative Analysis for Finding Novel Associations (ARIANA) is an efficient and scalable KD tool providing a range of WS in the general area of text analytics in biomedicine. The core of ARIANA is built by integrating semantic-sensitive analysis of text data through ontology mapping (OM), which is critical for preserving specificity of the application and ensuring the creation of a representative database from an ocean of data for a robust model. In particular, the Medical Subject Headings ontology was used to create a dynamic data-driven (DDD) dictionary specific to the domain of application, as well as a representative database for the system. The semantic relationships among the entities or concepts are captured through a POLSA. The KD and the association of concepts were captured using a relevance model (RM). The input to ARIANA can be one or multiple keywords selected from the custom-designed dictionary, and the output is a set of associated entities for each query.
The DDD concepts were introduced starting from the domain-specific “dictionary creation” to the “database selection” and to the “threshold selection” for KD using the RM. The key idea is to make the system adaptive to the growing amounts of data and to the creative needs of diverse users. Key features distinguishing this work from closely related works are (but are not limited to):
1. Flexibility in the level of abstraction based on the user’s insight and need
2. Broad range of literature selected in creating the KD module
3. Domain specificity through a mapping ontology to create DDD and an application-specific dictionary and its integration with POLSA
4. Presentation of results in an easy-to-understand form through RM
5. Implementation of DDD concepts and modular design throughout the process
6. Extraction of hidden knowledge and promotion of data reuse by designing a system with a modular visualization engine; for instance, ARIANA allows users to expand certain nodes, collapse a sub-network, or stretch some components of the network for better visual clarity
In essence, ARIANA attempts to bridge the gap between creation and dissemination of knowledge by building a framework that ensures adaptiveness, scalability, robustness, context specificity, modularity, and DDD constructs (see Figure 13.1). Case studies were performed to evaluate the efficacy of the computed results.

Conceptual Framework of ARIANA

ARIANA is built on the backbone of the hypothesis generation framework (Abedi et al., 2012). It implements a system that is modular, robust, adaptive, context-specific, scalable, and DDD. The current system uses 50 years of literature from the PubMed (http://www.pubmed.org/) database. It can find associations linking disease and non-disease traits—also referred to as concepts or MeSH headings. In addition, it can identify direct and indirect associations between traits. From the user’s perspective, ARIANA is a WS that can uncover knowledge from literature. Empirical studies suggest that the system can capture novel associations and provide innovative services. These results may have a broader impact on gathering actionable knowledge and generating hypotheses. ARIANA is a customizable technology that can fit many specialized fields when appropriate measures are considered. For instance, text-mining methods have been applied to the following fields:
Figure 13.1 Bridging the gap between Big Data and knowledge.
1. Link and content analysis of extremist groups on the Web (Reid, 2005)
2. Public health rumors from linguistic signals on the Web (Collier et al., 2008)
3. Medical intelligence for monitoring disease epidemics (Steinberger et al., 2008)
4. Opinion mining and sentiment analysis (Nasukawa and Yi, 2003; Pang and Lee, 2008)
The expanded system, significantly evolved and fine-tuned over the past years, integrates semantic-sensitive analysis of text data through OM with database search technology to ensure the specificity required to create a robust model in finding relevant information from Big Data. There are five components as building blocks:
1. OM and MGD creation
2. Data Stratification and POLSA
3. RM
4. Reverse ontology mapping
5. Interface and visualization (I&V)
In biomedical applications, an important addition to the system was the integration of the Online Mendelian Inheritance in Man (OMIM) database (http://www.ncbi.nlm.nih.gov/omim), a human-curated catalog of genes and genetic diseases, and MeSH, a hierarchical database of Medical Subject Headings, to provide gene–trait associations. Figure 13.2 summarizes these modules and their main functionalities. In the following section we elaborate on each of the modules and their main objectives.
Figure 13.2 Main modules of the ARIANA system.

Ontology Mapping

The main objective of the OM module is to create a model that is modular, scalable, and domain specific. These characteristics will reduce noise and enhance the overall quality of the system. Because the data is broad and voluminous, attention must be paid to reduce the different sources of noise and bias in the system. By employing a domain-specific ontology to create the model, biases in the data can be minimized. The two functions of OM are to create a concise dictionary, preferably multi-gram, and to facilitate extraction of key concept words in the field. A domain-specific concise dictionary will be used in the statistical LSA and its quality will translate directly to the system’s performance. Furthermore, the selection of key concept words is important to reduce systemic bias that is integral to all statistical text analytic methods. Figure 13.3 summarizes the steps in OM customized for biomedical applications.
Systemic bias is mainly characterized by imbalanced data. As in many applications, there are large numbers of examples for some cases, yet there are few examples for other situations. For instance, there are a significantly higher number of people without migraine compared with the proportion of the population with migraine. The systemic bias is more pronounced for cases in which there are only few examples, such as rare conditions. In addition to systemic bias, subject-level bias can introduce noise into the system. For instance, if concept words were selected by an individual, the model would be biased toward the personal preference of that individual. Automatic selection of concept words based on a domain-specific ontology would greatly reduce this bias in the system.

Data Stratification and POLSA

A context-specific ontology (such as MeSH) is the main input to the POLSA module. Concept words and the MGD, obtained from the OM module, are both input to the POLSA framework. Text data, which can be in form of short texts or text fragments, is downloaded and stratified based on concept words. For instance, all of the text data extracted for the word “stroke” are collectively organized in the database as one document. Concept words have to be specific enough to extract specific text; however, they have to be general enough to secure enough related text to minimize problems resulting from data imbalance and systemic bias. The POLSA framework will produce a ranked list of concepts along with the respective similarity measure for any given user’s query, which will be fed to the RM. Figure 13.4 summarizes the steps in the POLSA module.
Figure 13.3 Ontology mapping for biomedical applications.

Relevance Model

The RM is a logical extension of the disease model reported in our previous work (Abedi et al., 2012). It is an intuitive, simple, and easy-to-use statistical analysis of rank values to compute the strongly related, related, and not related concepts (or risk factors) with respect to a user query. Figure 13.5 illustrates the core concept of a disease model hypothesis. The implicit assumption in this model is that if associated factors of a disease are well known, a large body of literature will be available to corroborate the existence of such associations. On the other hand, if associated factors of a disease are not well documented, the factors are weakly associated with the disease, with few factors displaying a high level of association. In general, we expect the distributions to be uneven, and the largest distribution to correspond to the set of risk factors that are not known to be associated with the disease. The disease model can be applied in many fields and facilitate grouping of associated entities into three or more bins.
Figure 13.4 Flowchart outlining steps involved in the POLSA module.
In essence, if one accepts this assumption, the associated factors follow a tri-modal distribution, and it is intuitive to measure the level of association of different factors with respect to a given disease. Use of a disease model (via a tri-modal distribution) allows better identification of three sets of factors: unknown associations, potential associations, and established associations.
Figure 13.5 Disease model based on literature evidence (the horizontal axis represents the similarity measure between the query—in this case, “disease x”—and concept words (risk factors); the vertical axis represents the number of concept words with a given similarity measure).
Estimating the parameters of the tri-modal distribution can be computationally expensive for real-time services. Instead, fuzzy c-means clustering can be applied to the similarity scores to group them into three bins. This DDD process provides robustness and scalability to the system and can group concepts without requiring a fixed threshold. Furthermore, because the distribution of relevance scores is a function of the user’s query, the cutoff values separating highly, possibly, and weakly associated headings are determined dynamically.
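The grouping step can be sketched as follows; the similarity scores, the quantile-based initialization, and the fuzzifier m = 2 are illustrative assumptions, not the system's exact settings:

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100):
    """Minimal one-dimensional fuzzy c-means: returns the cluster
    centers and the membership matrix (len(x) x c)."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))  # spread initial centers
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))                    # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers, u

# Hypothetical similarity scores returned by POLSA for one query.
scores = np.array([0.02, 0.05, 0.04, 0.03, 0.31, 0.35, 0.72, 0.80])
centers, u = fuzzy_cmeans(scores)

# Map each score to 0 (weakly), 1 (possibly), or 2 (highly associated),
# ordering the clusters by their center value.
labels = np.argsort(centers).argsort()[u.argmax(axis=1)]
print(labels)
```

No fixed cutoff appears anywhere: the three bins are induced by the score distribution of each query, which is the point of the DDD thresholding.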

Reverse Ontology Mapping

Because the visualization of semantically related concepts is an important component of KD, a modular framework is designed to map the highly associated concepts for a given query to the context-specific ontology. This will provide the basis for a flexible and user-centric visualization module.

Visualization and Interface

A flexible visualization engine is implemented to facilitate a user’s interaction with the system. The key idea is to use the hierarchical structure of the context-specific ontology to present results to the user in a way that would enhance the user’s experience and interests. The network representation makes it easier to interact with the results and generate new hypotheses. For example, the interface of ARIANA is designed to give the user the option to expand or collapse a node of interest and capture knowledge at various levels of abstraction.
To present the results graphically, JavaScript Object Notation (JSON) objects are created for each user query to build the network of associations. Figure 13.6 summarizes the steps of the process. To render the JSON objects in graphical form, the D3 library (http://d3js.org/) is used to implement the collapsibility and expandability of each node. The main advantages of using this representation are:
Figure 13.6 Steps in generating JSON files for network visualization.
1. Compliance with World Wide Web Consortium (W3C) standards, the main international standards body for the World Wide Web
2. Use of the widely implemented Scalable Vector Graphics (SVG) format
3. Adherence to the HTML5, JavaScript, and Cascading Style Sheets standards
4. Fine-grained control over the final visual product
Event handlers, such as those implementing the collapsibility and expandability features, were critical in this project. Finally, JSON objects were created and displayed to represent the network of associations for every query in the system.
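Constructing the nested JSON consumed by D3's collapsible-tree layouts can be sketched as follows; the query, the hit list, and the field names are hypothetical:

```python
import json

# Hypothetical query result: (concept, similarity score, MeSH path).
hits = [
    ("Ebolavirus",   0.91, ["Viruses", "RNA Viruses", "Filoviridae"]),
    ("Marburgvirus", 0.67, ["Viruses", "RNA Viruses", "Filoviridae"]),
    ("Pulmonary Fibrosis", 0.55, ["Diseases", "Respiratory Tract Diseases"]),
]

def build_tree(query, hits):
    """Nest the hits under their ontology paths, producing the
    {"name": ..., "children": [...]} shape D3 tree layouts expect."""
    root = {"name": query, "children": []}
    for concept, score, path in hits:
        node = root
        for part in path:                 # walk (or extend) the ontology path
            child = next((c for c in node["children"]
                          if c["name"] == part), None)
            if child is None:
                child = {"name": part, "children": []}
                node["children"].append(child)
            node = child
        node["children"].append({"name": concept, "score": score})
    return root

tree = build_tree("hemorrhagic fever", hits)
print(json.dumps(tree, indent=2))
```

Leaves carry no "children" key, so a D3 hierarchy treats them as terminal nodes, while every intermediate ontology node can be collapsed or expanded.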

Implementation of ARIANA for Biomedical Applications

ARIANA is an efficient and scalable KD tool providing a range of services in the general areas of text analytics. Here, we showcase how ARIANA can be used as a tool in mining biomedical literature—although the system can also be customized for other uses relevant to national security, such as link, opinion, or sentiment analysis. We will refer to the system as applied in biomedicine as ARIANA+. The core of ARIANA+ is built by integrating semantic-sensitive analysis of text data through OM, which is critical to preserve the specificity of the application and ensure the creation of a representative database from millions of publications for a robust model. In particular, the MeSH ontology was used to create a DDD dictionary specific to the domain of application, as well as a representative database for the system. Semantic relationships among the entities or concepts are captured through a POLSA. The KD and the association of concepts were captured using an RM. The input to ARIANA+ can be one or multiple keywords selected from the MeSH and the output is a set of associated entities for each query. The system can be used to identify hidden associations among biomedical entities, facilitate hypothesis generation, and accelerate KD. ARIANA+ can aid in identifying key players in national and international emergencies such as pandemics (e.g., swine flu or Ebola). In essence, the system can help authorities in critical decision-making situations by providing a robust source of knowledge. In particular, we will show how ARIANA+ was able to bring forward critical missed associations in clinical trials.
In the following section we will elaborate technical details of ARIANA+ for each component and highlight key features that provide adaptiveness, robustness, scalability, specificity, and modularity to the system.

OM and MGD Creation

Based on the domain knowledge, a very large database was stratified using concepts and entities with a broad coverage. The selection of concept words (referred to as heading selection) was fully automated to reduce bias and noise while improving the scalability and robustness of the model. The current version of the system (ARIANA+) is mainly based on our work (Abedi et al., 2014) and has a modular design that is reconfigurable. An alpha prototype was developed and reported in a pilot study (Abedi et al., 2012).
The OM provides a systematic way to fine-tune and refine the different features of the system. One of the key functions of the OM is to filter redundant dictionary terms to refine the encoding matrix. This refinement process helped create a full-rank encoding matrix from the TF–IDF matrix, which is extremely sparse in nature. The MGD was also optimized accordingly to overcome the limitations of LSA-based techniques. The two key paths in the OM are to create a revised MGD that is concise and domain specific and to select a heading list with broad-based coverage. In ARIANA+, node information from the MeSH ontology was extracted to stratify the database (see the section on heading selection below) and to create the model.
The key input to the OM is the MeSH ontology. Medical Subject Headings provide a hierarchical structure of terms. For instance, “Ebolavirus” has two paths in the MeSH hierarchy:
1. Viruses [B04] > RNA Viruses [B04.820] > Mononegavirales [B04.820.455] > Filoviridae [B04.820.455.300] > Ebolavirus [B04.820.455.300.200]
2. Viruses [B04] > Vertebrate Viruses [B04.909] > RNA Viruses [B04.909.777] > Mononegavirales [B04.909.777.455] > Filoviridae [B04.909.777.455.300] > Ebolavirus [B04.909.777.455.300.200].
Therefore, extracting the associations among elements requires evaluating the exact level of specificity and key relations with respect to other elements in the field. It also requires use of a common language to avoid misinterpreting and misrepresenting information.
The hierarchical structure of MeSH is used to extract node identifiers. For instance, “Ebolavirus” or “RNA Viruses” are node identifiers. Based on this information, first an MGD is constructed (see the section on MGD construction below). Parallel to that, a series of nodes from all MeSH nodes is selected (referred to as headings) to create the model through a systematic process (see Automatic Heading Selection below). The selected headings are used in the POLSA module to extract and organize the literature data by creating an encoding matrix. The encoding matrix is evaluated for sparsity and refined accordingly. This refinement process will produce a more concise dictionary of terms by filtering irrelevant words—words that add no new information to the model.
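As a sketch, the hierarchy behind a node can be recovered from MeSH tree numbers by truncating them one segment at a time; the small mapping below is a hypothetical excerpt of the ontology:

```python
# Hypothetical excerpt of the MeSH tree-number -> term mapping.
MESH = {
    "B04": "Viruses",
    "B04.820": "RNA Viruses",
    "B04.820.455": "Mononegavirales",
    "B04.820.455.300": "Filoviridae",
    "B04.820.455.300.200": "Ebolavirus",
}

def ancestors(tree_number):
    """All ancestor tree numbers of a MeSH node, nearest first."""
    parts = tree_number.split(".")
    return [".".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

print([MESH[t] for t in ancestors("B04.820.455.300.200")])
# ['Filoviridae', 'Mononegavirales', 'RNA Viruses', 'Viruses']
```

Because a term such as “Ebolavirus” can carry several tree numbers, each number yields its own ancestor chain, which is how the two paths listed above arise.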

Creation of the MGD

The MeSH ontology was used to create a concise domain-specific dictionary. Creation of a meaningful dictionary is important in developing a data-driven model to find novel associations through higher-order co-occurrences. A context-specific MGD preserves some level of semantics based on the order of words, which is lost in statistical models based on bag-of-words. To create the context-specific MGD, first MeSH node identifiers are extracted and then, using a Perl script, the text file containing the node identifiers is parsed to construct the multi-grams. Duplicates, stop words, words starting with a stop word or number, and all words of two or fewer characters were removed in the filtering stage. The size of the dictionary after the first pass was 39,107 words. Gene symbols from the OMIM database were added to this dictionary. An iterative process was employed to fine-tune the dictionary. The refinement involved iterative removal of null rows (filled with zeros) from the encoding matrix. The final size of the dictionary after this process was 17,074 words, containing mono-, bi-, and tri-grams.
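The filtering stage can be sketched in Python (in place of the Perl script); the stop-word list and input identifiers are illustrative, and the rules are simplified relative to the full pipeline:

```python
import re

STOP = {"the", "of", "and", "in", "a", "an", "for", "with"}  # illustrative subset

def build_mgd(node_identifiers, max_n=3):
    """Split node identifiers into n-grams (n <= 3), dropping
    duplicates, n-grams starting with a stop word or a number,
    and single words of two or fewer characters."""
    mgd = set()
    for name in node_identifiers:
        words = re.findall(r"[a-z0-9']+", name.lower())
        for n in range(1, max_n + 1):
            for i in range(len(words) - n + 1):
                gram = words[i:i + n]
                if gram[0] in STOP or gram[0][0].isdigit():
                    continue  # starts with a stop word or a number
                if n == 1 and len(gram[0]) <= 2:
                    continue  # single words of <= 2 characters
                mgd.add(" ".join(gram))
    return mgd

mgd = build_mgd(["Cerebrovascular Accident", "Accident, Traffic",
                 "Diseases of the Eye"])
print(sorted(mgd))
```

Using a set removes duplicates for free; in the full pipeline the surviving entries are then pruned further by the null-row refinement of the encoding matrix.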
The automated process to generate dictionary words and concept words, here referred to as the heading list, provides robustness and scalability to the system. It also adds a layer of modularity and facilitates integration of the system with other ongoing efforts in the field. Having the same language as the community is important in sustainability and future development.

Data Stratification and POLSA

Data stratification

ARIANA+ includes literature data for 2,545 automatically selected headings (see Automatic Heading Selection below). These headings are the main input to the POLSA module. In addition, the MGD was enriched with all of the gene symbols from the OMIM database and represents the second input. Using the selected headings, titles and abstracts of publications from the past 50 years were downloaded from PubMed and stored in a MySQL database on a server. The database construction was simple yet efficient. There is an advantage to using a database to store the data: because each abstract can be linked to many headings, storing the data in a database means each abstract is downloaded and stored only once, which saves a significant amount of storage space.
Three tables are used to construct the database for the MeSH-based concepts: (1) the Factor table, (2) the FactorPMID table, and (3) the PMIDContent table. The Factor table contains basic information regarding the 2,545 headings, such as Name, ID, and “Most recent article (year)”; the latter is used to update entries in the database more efficiently. FactorPMID contains the information needed to link each factor to PubMed abstracts using PMIDs (the unique identifiers of PubMed abstracts). PMIDContent contains all of the information about each abstract, such as PMID, Title, Abstract, Year, and MeSH tags. In fact, every article in PubMed is tagged with one or more MeSH terms to facilitate searches.
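A minimal sketch of this three-table layout, using SQLite in place of MySQL, might look as follows; the column names are illustrative assumptions:

```python
import sqlite3

# Sketch of the three-table layout described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Factor (
    FactorID   INTEGER PRIMARY KEY,
    Name       TEXT NOT NULL,
    LatestYear INTEGER              -- "most recent article (year)"
);
CREATE TABLE FactorPMID (
    FactorID   INTEGER REFERENCES Factor(FactorID),
    PMID       INTEGER REFERENCES PMIDContent(PMID)
);
CREATE TABLE PMIDContent (
    PMID       INTEGER PRIMARY KEY, -- each abstract stored exactly once
    Title      TEXT,
    Abstract   TEXT,
    Year       INTEGER,
    MeshTags   TEXT
);
""")

# An abstract tagged with two headings is stored once and linked twice.
conn.execute("INSERT INTO PMIDContent VALUES (1, 't', 'a', 2001, 'Stroke')")
conn.executemany("INSERT INTO FactorPMID VALUES (?, ?)", [(10, 1), (11, 1)])
n_links = conn.execute(
    "SELECT COUNT(*) FROM FactorPMID WHERE PMID = 1").fetchone()[0]
print(n_links)  # 2
```

The FactorPMID link table is what makes the heading-to-abstract relationship cheap: a new heading only adds link rows, never a second copy of the abstract.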
The number of items in the corpus was the same as the number of elements in the heading list (2,545 headings). Each of the 2,545 items was parsed to create a TF–IDF matrix using the words in the refined and representative dictionary. The preprocessing step was customized to suit the structure of the dictionary, because it contained multi-gram words. For instance, (1) stemming was not necessary because multi-gram words were not stemmed; (2) stop word removal was also not necessary because the multi-gram words had stop words within them in some cases. In addition, use of the POLSA framework provided scalability to the system.

Automatic heading selection

Automatic heading selection for ARIANA+ was achieved through a statistical filtering process. The key selection criterion was to use a subset of MeSH headings that provides relatively broad coverage. It was critical to choose representative data while creating a balanced dataset from the unstructured abstracts. Eight categories from the MeSH tree were selected based on the application constraints and domain knowledge: Diseases (C); Chemicals and Drugs (D); Psychiatry and Psychology (F); Phenomena and Processes (G); Anthropology, Education, Sociology, and Social Phenomena (I); Technology, Industry, and Agriculture (J); Named Groups (M); and Health Care (N). These categories were subject to filtering, and about 2.5–17% of their descendant nodes were selected in the final list. Three features were used in the filtering process: (1) the number of abstracts for each heading, (2) the number of descendant nodes associated with each heading, and (3) the ratio of the number of abstracts between child and parent nodes (also referred to as fold change). Finally, 2,545 headings from a total of 38,618 were selected to populate the database.
Heading selection rules were progressive and were fine-tuned with heuristics appropriate to each category. Table 13.1 summarizes the rules applied to the eight distinct categories. These rules were adjusted for each category to include concepts from a wide range of fields while keeping a higher number of headings from the disease class. The disease class included the MeSH headings from the C category, and the non-disease class contained headings from the remaining seven categories. Furthermore, inclusion criteria were continuously adjusted to reduce skewness in the dataset. For instance, some categories were very large (Chemicals and Drugs had over 20,000 subheadings), whereas others were small (Named Groups had 190 subheadings). Therefore, the selection criteria were progressively adjusted to reduce bias in the dataset. A total of 475 out of 20,015 subheadings were selected from the Chemicals and Drugs category (only 2% coverage), whereas 13 out of 190 (7% coverage) were selected from the Named Groups category. Progressive heading selection rules were important in making the system robust and context specific.

Table 13.1

Progressive Filtering Rules Applied to the Eight MeSH Categories

| Medical Subject Heading Category | Number of Selected Headings | Abstracts/Heading | Descendant Nodes/Heading | Fold Change |
|---|---|---|---|---|
| C: Diseases | 1,828 | 1,000–50,000 | 1–100 | <10 |
| D: Chemicals and Drugs | 475 | 5,000–10,000 | 1–100 | <5 |
| F: Psychiatry and Psychology | 128 | 1,000–30,000 | 1–10 | <10 |
| G: Phenomena and Processes | 242 | 1,000–20,000 | 2–50 | <10 |
| I: Anthropology, Education, Sociology, and Social Phenomena | 31 | 1,000–10,000 | 1–10 | <10 |
| J: Technology, Industry, and Agriculture | 66 | 1,000–10,000 | 1–10 | <10 |
| M: Named Groups | 13 | 1,000–20,000 | 1–5 | <5 |
| N: Health Care | 63 | 5,000–10,000 | 1–10 | <10 |


The main constraint in this model was to select more than 50% of headings from the Diseases category. In essence, key objectives of the project were to determine disease networks, identify associated risk factors for a disease, and highlight traits that were directly or indirectly associated with a disease, to aid in our understanding of disease mechanisms. The three features (number of abstracts for each heading, number of descendant nodes associated with each heading, and fold change) were used to create the heuristic that would measure the specificity (as an estimated measure of level of abstraction) of the headings and facilitate the selection. A total of 1,828 headings were therefore selected, representing 17% coverage and accounting for 64% of the total number of headings in the ARIANA+ database.
The Chemicals and Drugs category was one of the largest MeSH categories, with 20,015 headings. Selection criteria for this category were therefore stringent. One of the main objectives was to select headings that would represent a maximum of 50% of the non-disease group. A total of 475 headings were selected, representing 47% of the headings in the non-disease group. The Psychiatry and Psychology category had only 1,050 headings. Selection criteria were adjusted to keep roughly 10% of the best representative headings from this category. These headings had a wide range; among other measures, the number of abstracts per heading in this category ranged from one to 859,564. The filtering process attempted to select the most homogeneous headings to minimize systemic bias and noise. The G category (Phenomena and Processes) was relatively large, with 3,164 headings. A total of 242 headings were selected from this category to represent 24% of the non-disease class in the database.
Other categories such as I (Anthropology, Education, Sociology, and Social Phenomena) and J (Technology, Industry, and Agriculture) had similar characteristics, with 559 and 558 headings, respectively. Category I had an average of 7,374 and category J an average of 7,290 abstracts per heading. Similarly, category I had an average of 1.7 child nodes whereas category J had an average of 1.6 child nodes. Finally, categories I and J had average fold changes of 114 and 99 per heading, respectively. The selection rules were adjusted in a similar manner, with the ultimate goal of selecting about 100 nodes to populate roughly 10% of the non-disease category. By applying the filtering process, the average number of abstracts was reduced to 5,520 in category I and 4,787 in category J. Similarly, the average number of child nodes after filtering was 2.2 in category I and 1.6 in category J; finally, the average fold change per heading was reduced to 3.5 in category I and 3.1 in category J. These numbers demonstrate that a progressive filtering process can be beneficial, and fold change in this case had more discriminative power. In fact, the average number of abstracts was reduced by only 25% and 34% after filtering for the I and J categories, respectively, whereas average fold change was reduced by 97% for both categories.
The M category (Named Groups) was small, with only 190 headings. The selection process filtered this category in a way to include only a small subset of headings in the non-disease class. Although this category had a limited number of headings, variation in terms of the specificity of topics was large. After filtering, 13 headings were selected to be in the non-disease class. The inclusion of a small representative sample from this category can be important, because these were potentially interesting headings for epidemiological studies, such as: “Hispanic Americans,” “Twins,” and “Emergency Responders.” The Health Care category had 2,207 headings with a large range of specificity. This filtering process created a small subset of headings from this category (for a total of 63, or 6% of the non-disease group). This selection process ensured the inclusion of headings with moderate specificity, therefore reducing systemic bias in the dataset.
Once the headings were selected, the duplicates were removed. In MeSH, some nodes are duplicated because their parent node is different. However, the documents retrieved for both duplicated nodes were identical; hence, duplicates were removed without causing inconsistency. A total of 301 headings were duplicated from the following categories: 218 (or 12%) from the C category, 39 (8%) from the D category, 7 (5%) from the F category, 32 (13%) from the G category, 2 (3%) from the J category, and 3 (or 4%) from the N category. This final step in heading selection reduced the list from 2,846 to 2,545.
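The progressive filtering step can be sketched as a rule table plus a selection predicate. The thresholds below are taken from Table 13.1 (three categories shown), whereas the candidate headings are invented for illustration.

```python
# Thresholds from Table 13.1 for three categories; candidates are toy data.
RULES = {  # category: (abstracts range, descendant-node range, max fold change)
    "C": ((1_000, 50_000), (1, 100), 10),
    "D": ((5_000, 10_000), (1, 100), 5),
    "M": ((1_000, 20_000), (1, 5), 5),
}

def keep(category, n_abstracts, n_descendants, fold_change):
    # A heading is retained only if all three features fall within the
    # category's progressive selection rules.
    (a_lo, a_hi), (d_lo, d_hi), fc_max = RULES[category]
    return (a_lo <= n_abstracts <= a_hi
            and d_lo <= n_descendants <= d_hi
            and fold_change < fc_max)

# candidate: (category, abstracts, descendant nodes, child/parent fold change)
candidates = [("C", 12_000, 4, 3.2), ("C", 900, 2, 1.1), ("D", 7_500, 10, 2.0)]
selected = [c for c in candidates if keep(*c)]
```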

Parameter Optimized Latent Semantic Analysis

Using the 2,545 headings, the TF–IDF matrix was employed to generate the encoding matrix. Dimensionality was reduced to cover 95% of the total energy; in particular, it was reduced from 2,545 to 1,400 dimensions to create the encoding matrix. Using the encoding matrix, the query was translated into the eigenspace to rank the headings based on the cosine similarity measure. This process was applied iteratively to fine-tune the dictionary, which was used to generate a final encoding matrix. The iterative fine-tuning provides robustness and the DDD property to the system.
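A minimal LSA-style sketch of this step (not the exact POLSA implementation): SVD on a small random stand-in for the TF–IDF matrix, retaining enough dimensions to cover 95% of the total energy, then ranking headings by cosine similarity to a query in the reduced space.

```python
import numpy as np

# Toy headings-by-terms matrix standing in for the real TF-IDF matrix.
rng = np.random.default_rng(0)
A = rng.random((50, 200))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95)) + 1  # smallest k covering 95% energy

encode = Vt[:k].T        # term space -> k-dimensional eigenspace
docs = A @ encode        # headings projected into the eigenspace

# A noisy copy of heading 7 plays the role of a user query.
query = A[7] + 0.01 * rng.random(200)
q = query @ encode
scores = (docs @ q) / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
ranked = np.argsort(scores)[::-1]  # headings sorted by cosine similarity
```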

Fine-tuning the encoding

After the analysis of the initial encoding matrix obtained by POLSA, it was observed that many of the entries (rows) were zero. Removing these rows made the dictionary more concise and relevant to the data. It also helped create a full-rank encoding matrix that improved the robustness of the system by capturing meaningful semantic associations.
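Dropping the all-zero rows can be sketched in a few lines; the matrix below is a toy stand-in for the real encoding matrix.

```python
import numpy as np

# Toy encoding matrix: row 1 corresponds to a dictionary term that never
# occurs in the corpus, so it contributes nothing and is removed.
E = np.array([[0.2, 0.0, 0.5],
              [0.0, 0.0, 0.0],   # unused dictionary term
              [0.1, 0.4, 0.0]])
nonzero = ~np.all(E == 0, axis=1)
E_refined = E[nonzero]           # concise, full-rank encoding
```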

Relevance Model

The list of associated headings that are ranked with respect to a user query is used as input to the RM. The top-ranked headings are strongly associated with the query, and the headings ranked at the bottom do not have significant evidence to support their association with the query. The headings between the two extremes are those that might or might not be associated with the query, because there is some supportive evidence for their association. These weak associations are important in the KD process and call for further investigation by domain experts. In essence, the RM is an intuitive and easy-to-use statistical strategy to rank and group the similarity scores of associated headings. The goal is to group related headings into three bins: strongly related, possibly related, and unrelated.
The underlying assumption is that if concepts (also referred to as headings) are highly associated, a large body of literature is available to corroborate the existence of their association. Similarly, if the connection between two headings or biological entities is not well documented, they are only weakly associated. Furthermore, because the distribution of relevance scores is a function of user queries, the cutoff values to separate highly, possibly, and weakly associated headings must be determined dynamically. This requires a simplified yet effective model to ensure scalability; therefore, it was assumed that the distribution of the ranked list can be viewed as a Gaussian mixture model and the partition can be computed using the DDD threshold estimation. In particular, the distribution of relevance scores of the headings for a given query was approximated as a tri-modal Gaussian distribution. The separation of the three distributions allowed implementation of the DDD cutoff system. A curve-fitting approach can be used to estimate the parameters of the tri-modal distribution and determine two cutoff values to separate the three groups. However, the estimation process can be computationally expensive. A more practical approach is to use fuzzy c-means clustering to group the scores. The latter is more robust and scalable and can provide a finely tuned means to evaluate the results on demand. Furthermore, this DDD cutoff value determination can also be integrated in other information retrieval (IR) systems.
Fuzzy c-means clustering is applied to group associated headings using the MATLAB built-in function. The scores are first grouped into two clusters; based on the membership values of these two clusters, Algorithm 1 is used to assign each heading to one of the three groups in the RM. The cosine cutoff values estimated through this process are DDD; hence, the cutoffs are subject to change as the dataset expands. The input is a limit, defined by an expert, that separates the known and unknown headings from those placed into the possible heading group (i.e., the gray zone). A conservative limit threshold of 0.9 was chosen to analyze the results (the value of j in Algorithm 1).
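A minimal stand-in for this grouping step (our own fuzzy c-means in place of the MATLAB built-in, applied to toy cosine scores; the exact rule of Algorithm 1 is not reproduced, only an illustrative assignment using the 0.9 expert limit):

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=100):
    """Minimal fuzzy c-means on a 1-D array of scores."""
    rng = np.random.default_rng(1)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # fuzzily weighted centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1))          # standard FCM membership update
        u = inv / inv.sum(axis=0)
    return centers, u

scores = np.array([0.95, 0.90, 0.88, 0.55, 0.30, 0.28, 0.25, 0.05, 0.04])
centers, u = fcm(scores)
hi = int(np.argmax(centers))   # index of the high-score cluster
limit = 0.9                    # expert-defined limit (j in Algorithm 1)

groups = []
for i in range(len(scores)):
    if u[hi, i] >= limit:
        groups.append("highly related")
    elif u[hi, i] <= 1 - limit:
        groups.append("unrelated")
    else:
        groups.append("possibly related")    # the gray zone
```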
[Algorithm 1: assignment of each heading to the highly related, possibly related, or unrelated group]
The RM was applied to the top 750 headings (30%), assuming the number of highly and possibly associated headings is less than 20%, or 500 headings. In cases in which the number of associated headings would be higher, the initial query must be revised because it likely represents a generic term such as “disease” or “medicine.” As also indicated in our empirical study, finding a novel association with no citation in the database required analyzing the top 10% of the headings. In that specific case (see the Case Studies section below), five associated headings leading to a new hypothesis were among the top 10% of the ranked list. The RM is one of the key modules that ensure the DDD property of the system while providing a robust and scalable framework.

Reverse Ontology Mapping and I&V

Reverse ontology mapping maps the semantically associated concepts back to the MeSH tree to create the network of associations for a given query. The network representation is easy to interact with and understand. It also enables flexible visualization, in which users can expand or collapse nodes to interact with the captured knowledge at various levels of abstraction.
To present the results graphically, JSON objects are created for a user’s query to build the network of associations. At the first stage, MeSH terms are used to create a Hash Table (HT). For instance, “dementia” is a MeSH heading identified in MeSH as F03.087.400, meaning that “dementia” is a third-level node in the F category of the MeSH tree. In the HT, “dementia” has a key that corresponds to its tree number and a value that is its identifier (i.e., “dementia”). Similarly, headings highly and possibly associated with a user’s query are identified. For each associated heading, a path is created from each matching value in the HT. For instance, if a heading associated with a user query includes “dementia,” a path is created for that heading; in this case, the path for “dementia” is “Root > Mental Disorders > Delirium, Dementia, Amnestic, Cognitive Disorders > Dementia.” After the paths are generated, JSON files are constructed with the path for each associated heading. The JSON files are then pruned to remove duplicated terms. The final JSON files are used to create the networks using the D3 library.
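The hash-table lookup and path construction can be sketched as follows. The tree numbers match the “dementia” example in the text, and the nested `{"name": ..., "children": [...]}` JSON shape is an assumption about what the D3 tree layout consumes.

```python
import json

# Hash table keyed by MeSH tree numbers (the 'dementia' example from MeSH).
tree_names = {
    "F03": "Mental Disorders",
    "F03.087": "Delirium, Dementia, Amnestic, Cognitive Disorders",
    "F03.087.400": "Dementia",
}

def path_for(tree_number):
    # Resolve each dotted prefix of the tree number into a node name.
    parts = tree_number.split(".")
    nodes = ["Root"]
    for i in range(len(parts)):
        nodes.append(tree_names[".".join(parts[: i + 1])])
    return " > ".join(nodes)

# Nest the path into an assumed D3-friendly JSON object.
tree = {"name": "Root", "children": []}
cursor = tree
for name in path_for("F03.087.400").split(" > ")[1:]:
    child = {"name": name, "children": []}
    cursor["children"].append(child)
    cursor = child
doc = json.dumps(tree)
```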
Users can interact with the tool and explore different queries. For instance, in the case of “caffeine,” the tree will be crowded and can be partially expanded to allow users to explore topics of interest. Associated concepts related to “caffeine” are diverse, ranging from “leisure activities” such as “relaxation” and “skin disease” such as “pigmentation disorder” to “acyclic acids” such as “maleimides.” Consider the example in which the query is “iron metabolism”: there are four associated headings, “Iron Overload,” “Growth Disorders,” “Pigmentation Disorders,” and “Myelodysplastic Syndromes.” All of these detected associations have supporting evidence in PubMed. In essence, ARIANA+ provides a global view based on a reliable source of information. Furthermore, exploration of weakly related entities could bring forward new emerging research trends and potential new hypotheses. To explore weakly associated concepts, users can download the ranked list of associated headings and their relative cosine scores. In addition, users can perform multiple search queries simultaneously and extract common associated headings instantaneously.
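The multi-query option reduces to a set intersection over associated headings; the lists below are toy examples drawn from the AD/TB case study later in the chapter.

```python
# Toy associated-heading sets for two queries (illustrative, not actual
# ARIANA+ output); the shared heading is what a multi-query search surfaces.
assoc = {
    "Alzheimer Disease": {"Tauopathies", "Amyloidosis", "Agnosia"},
    "Tuberculosis": {"Amyloidosis", "Granuloma"},
}
common = assoc["Alzheimer Disease"] & assoc["Tuberculosis"]
```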
Reverse ontology mapping and I&V are important components of the system; they provide adaptive specificity and modularity. ARIANA+ could be further expanded and customized in specialized fields using the described framework with minor modifications.
In the following section, case studies are presented to illustrate the potentials of this tool.

Case Studies

To develop the ARIANA+ system, two pilot studies were performed to identify and address challenges and design a robust and scalable system. In the initial study (Abedi et al., 2012), 96 concepts were considered and 20 years of literature were analyzed. In the second stage of the system, 276 concepts were considered and the past 50 years of literature were analyzed (Abedi et al., 2014). Even in those smaller-scale studies, interesting observations were made. However, the most important KD case occurred when we expanded the system to incorporate 2,545 concepts from MeSH and fine-tuned the system at different levels. For instance, we identified the association between the drug hexamethonium and pulmonary inflammation and fibrosis, which in 2001 caused the tragic death of a healthy volunteer who was enrolled in an asthma study. The system was also able to identify a link between Alzheimer disease (AD) and tuberculosis (TB), two distant conditions.

Case Study I: KD: Lethal Drug Interaction

In 2001, an asthma research team at the Johns Hopkins University used the drug hexamethonium on a young healthy volunteer, which ended in the death of the woman as a result of pulmonary inflammation and fibrosis. Hexamethonium was a drug used mainly to treat chronic hypertension and was proposed as a potential drug to treat asthma; however, the non-specificity of its action led to its use being discontinued (Nishida et al., 2012; Toda, 1995). During the course of the asthma study, a healthy volunteer, Ellen Roche, died only a few days after inhaling this drug. She was diagnosed with pulmonary inflammation and fibrosis based on chest imaging and an autopsy report after her death. The autopsy report stated the following facts: “The microscopic examination of the lungs later revealed extensive, diffuse loss of alveolar space with marked fibrosis and fibrin thrombi involving all lobes. There was also evidence of alveolar cell hyperplasia as well as chronic inflammation compatible with an organizing stage of diffuse alveolar damage. There was no evidence of bacteria, fungal organisms, or viral inclusions on routine or special stains” (Internal Investigative Committee Membership, 2001). The principal investigator made a good-faith effort to research the drug’s (hexamethonium’s) adverse effects, mainly by focusing on a limited number of resources, including the PubMed database, and the ethics panel subsequently approved the safety of the drug. This tragedy highlights the importance of a literature search in designing experiments and enrolling healthy individuals in control groups.
The volunteer was a young healthy person with no lung or kidney problems. One day after enrolling in the study she developed a dry cough and dyspnea, and 2 days later she developed flu-like symptoms. Her forced expiratory volume in the first second was reduced. On May 9, 2001, she became febrile and was admitted to the Johns Hopkins Bayview Medical Center. A chest X-ray revealed streaky densities in the right perihilar region. Arterial oxygen saturation fell to 84% after she walked a short distance. She was in critical condition, and 3 days later she was transferred to the intensive care unit, where she was intubated and ventilated. She experienced bilateral pneumothoraces and presented a clinical picture of adult respiratory distress syndrome. She died on June 2, 2001. However, this accident could have been prevented if the researcher had known of a case report published in 1955 (Robillard et al., 1955) or extracted the association using ARIANA+.
Interestingly, a literature (PubMed) search of “Hexamethonium” and “pulmonary fibrosis” returns (as verified in August 2014) four hits, none of which has an abstract available online. One of the publications is in Russian, published in 1967 (Malaia et al., 1967). The other three were published between 30 and 60 years ago (Brettner et al., 1970; Cockersole and Park, 1956; Stableforth, 1979). Searching the individual terms returned 21,167 records for “pulmonary fibrosis” and 7,102 for “Hexamethonium.” However, to date there is limited direct evidence of the toxicity of this drug in PubMed. The PDF of the case report published in 1955 can be found in PubMed today; however, many data mining tools, including ARIANA+, do not take into account PDFs of very old articles. Nonetheless, ARIANA+ was able to capture this association.
The analysis revealed five clear indications among the top 10% of the ranked headings, providing strong evidence for such an association. ARIANA+ was able to extract this information from 50 years of literature, even though the 1955 case report (Robillard et al., 1955) was not in the database. Of the 2,545 concepts in the system, ARIANA+ ranked “Scleroderma, Systemic” 13th, “Neoplasms, Fibrous Tissue” 16th, “Pneumonia” 38th, “Neoplasms, Connective and Soft Tissue > Neoplasms, Connective Tissue > Neoplasms, Fibrous” 174th, and finally “Pulmonary Fibrosis” 257th. ARIANA+ captured this association and could have helped prevent the volunteer’s death.

Case Study II: Data Repurposing: AD Study

The identification of networks of semantically related entities with a single or double query can uncover hidden knowledge and facilitate data reuse among other things. AD is a debilitating disease of the nervous system. It mostly affects the older population. ARIANA+ captured some of the obvious associations, such as “Tauopathies,” “Proteostasis Deficiencies,” “Amyloidosis,” “Cerebral Arterial Diseases,” “Multiple System Atrophy,” and “Agnosia.” It also identified some of the less obvious associations, such as “Tissue Inhibitor of Metalloproteinases” (Ridnour et al., 2012; Wollmer et al., 2002). Using “TB” as a second query, a common entity was recognized to be linked to both AD and TB. “Proteostasis Deficiencies > Amyloidosis” is highly related (cosine score of 0.5651) to TB and moderately related (cosine score of 0.0734) to AD. Further investigation by experts revealed that AD and TB could be indirectly related through matrix metalloproteinase (MMP) gene family members.
MMPs are zinc-binding endopeptidases that degrade various components of the extracellular matrix (Brinckerhoff and Matrisian, 2002; Davidson, 1990). They are believed to be implicated in TB through a matrix-degrading phenotype (Elkington et al., 2011). Various studies in human cells and animal models, as well as gene profiling studies, support the association of MMPs with TB and the involvement of TB-driven lung matrix destruction (Berry et al., 2010; Mehra et al., 2010; Russell et al., 2010; Thuong et al., 2008; van der Sar et al., 2009). MMPs are also implicated in AD (Yong et al., 1998). In fact, MMP proteins can break down the amyloid proteins (Yan et al., 2006) that are present in the brains of AD patients; in AD, therefore, this MMP activity is advantageous.
In summary, there is literature evidence for the link between MMP genes and AD, in which MMP genes are beneficial; and similarly between MMP genes and TB, in which MMP genes have a negative effect. However, the connection between AD and TB through MMP genes is extracted by a global analysis of the literature, facilitated by visual inspection of the network of semantically related entities.

Discussion

ARIANA is a system targeting a large scientific community: medical researchers, epidemiologists, biomedical scientific groups, high-level decision makers in crisis management, and junior researchers with focused interests. The tool can be used as a guide to broaden one’s horizon by identifying seemingly unrelated entities. ARIANA+ provides relations between query word(s) and 2,545 headings using 50 years of literature data from PubMed. The design is efficient, modular, robust, context specific, dynamic, and scalable. The framework can be expanded to incorporate a much larger set of headings from MeSH or any other domain-specific ontology. In addition, a DDD system is implemented to group ranked headings into three groups for every query. The DDD system can be applied in other systems to improve the quality of information retrieval. As a consequence of incorporating a context-specific multi-gram dictionary (MGD), the sparsity of the data model is lower and the size of the dictionary is significantly smaller than if all combinations of English words were taken into consideration.
The features and functionalities of the system are compared and contrasted with state-of-the-art systems. In a survey in which 28 applications were reviewed (Lu, 2011), five used clustering to group search results into topics and another five used different techniques to summarize results and present a semantic overview of the retrieved documents. The tools that are based on clustering are fundamentally different from ARIANA+, whereas the rest of the tools have some similarities in their scopes and designs.
One of the systems, Anne O’Tate (Smalheiser et al., 2008) uses post-processing to group the results of literature searches into predefined categories such as MeSH topics, author names, and year of publication. Although this tool can be helpful in presenting results to the user, it does not provide the additional steps to extract semantic relationships.
McSyBi (Yamamoto and Takagi, 2007) clusters results to provide an overview of the search and to show relationships among retrieved documents. It reportedly uses LSA, with limited implementation details; furthermore, only the top 10,000 publications are analyzed, whereas ARIANA+ analyzes over eight million publications. XplorMed (Perez-Iratxeta et al., 2001) allows users to further explore subjects and keywords of interest but puts a significant limit (no more than 500) on the number of abstracts it analyzes. MedEvi (Kim et al., 2008) provides 10 concept variables as semantic queries. MEDIE (Ohta et al., 2006) provides utilities for semantic searches based on deep parsing and returns text fragments to the user; this is conceptually different from ARIANA+. EBIMED (Rebholz-Schuhmann et al., 2007) extracts proteins, gene ontology terms, drugs, and species, and identifies relationships among these concepts based on co-occurrence analysis.
Among all reviewed tools (Lu, 2011), EBIMED is the most comparable to ARIANA+; yet, that system focuses only on proteins, gene ontology annotations, drugs, and species as concepts. ARIANA+ differs from EBIMED in a number of ways. First, ARIANA+ provides systematic data stratification based on domain knowledge and application constraints. Second, it uses OM to create a robust dictionary, which in turn produces a better model and also helps in finding crisp associations among concepts. Third, it computes associations based on higher-order co-occurrence analysis and introduces an RM to present the results in an easy-to-use and understandable manner. In addition, because MeSH provides a hierarchical structure, ARIANA+ could be expanded to include a larger number of headings.
In summary, the ARIANA system with only 276 MeSH headings (Abedi et al., 2014) was able to extract interesting knowledge, such as the association between sexually transmitted diseases and migraine: an association that was published after we downloaded the abstracts from PubMed (Kirkland et al., 2012). The expanded and fine-tuned ARIANA+ (with 2,545 headings) was able to extract even more valuable information, leading toward actionable knowledge. Among the refinement steps, the heading selection process created a balanced, representative dataset across all selected categories, in which noise and systemic bias were minimized. This fine-grained filtering process provided stratified data, and specificity measures were used to create a robust model. In addition to a strong data model, the interactive visualization and interface module gives users control to view only the associations that are relevant to them, by collapsing irrelevant topics. The visualization module was based on the use of a hierarchical structure to represent the terms. In this case, using MeSH has the added advantage of providing modularity and scalability to the framework. In addition, the interactive system empowers users with more search options, such as multi-query search. The latter will translate to wider usage and exploration of the tool by inter- and multidisciplinary teams.
Finally, the path from Big Data to actionable knowledge is multidimensional and nonlinear. However, the investigation of cause–effect relationships in translational research could be a step toward bridging that gap. This study shows that a custom-designed literature mining tool can be successful in the discovery of semantically related networks of associations. In an empirical study, it was shown that ARIANA+ can capture the hidden association of “Pulmonary Fibrosis” and “Hexamethonium,” even though such an association is still not evident in a PubMed search.
With the current version of the system, once an association is found, the user’s expertise will guide the search direction. The user can take that information and search an array of databases and tools such as PubMed, OMIM, and the Phenotype Genotype Integrator (PheGenI), all maintained by the National Center for Biotechnology Information; GeneMANIA (Zuberi et al., 2013); Gene Ontology (Ashburner et al., 2000); STRING (Franceschini et al., 2013); and so forth, to further refine the hypothesis. Interestingly, the current version of PheGenI accepts MeSH as input and can extract associated genetic information from the NHGRI genome-wide association study catalog and other databases such as Gene, the database of Genotypes and Phenotypes (dbGaP), OMIM, GTEx, and dbSNP. The comprehensive system uses dbGaP, which was developed to facilitate research in clinical and epidemiological fields. The modularity provided by ARIANA+ will facilitate further integrative analysis. In the case of an indirect association, more in-depth analysis of available data will be needed to understand the mechanism linking two or more traits. In essence, no one tool or technique can best extract knowledge from an ocean of information; however, literature data can provide a starting point, and additional sources can aid in the quest for actionable knowledge.

Conclusions

Strategic reading, searching, and filtering have been the norm in gaining perspectives from the ocean of data in the field of biomedicine and beyond. Intriguingly, information overload has contributed to widening the knowledge gap; therefore, more data do not translate to more knowledge directly. It is widely acknowledged that efficient mining of biological literature could provide a variety of services (Rzhetsky et al., 2008) such as hypothesis generation (Abedi et al., 2012) and semantic-sensitive KD. The same is true for national security applications such as link and content analysis of extremist groups on the Web (Reid, 2005), public health rumors from linguistic signals on the Web (Collier et al., 2008), or opinion mining and sentiment analysis (Nasukawa and Yi, 2003; Pang and Lee, 2008).
Traditionally, literature mining tools focus on text summarization and clustering techniques (Lu, 2011) with the goal of reducing data overload and with the ability to read and synthesize more information in a shorter time. It was argued that a text analytic tool capable of extracting networks of semantically related associations may help bridge knowledge gaps by using humans’ unique visual capacity and information-seeking behavior. For instance, in a study, 16,169 articles were chosen to create a visual representation of main concepts, creating a visual map of verbal information (Landauer et al., 2004). In that analysis, “verbal presentation offers more precise information […], whereas the visual presentation offers a more flexible style of exploration that better shows multiple, fuzzy, and intermixed and complexly patterned relations among the documents.” In addition, literature mining tools that can capture semantic relationships could in principle connect disjoint entities among different research fields.
ARIANA+ can uncover networks of semantic associations and provide WS to generate hypotheses. In addition, because of its modular design, it can be integrated with additional tools and be designed to provide complementary information to refine the hypotheses. In essence, ARIANA+ will enable exploration of literature to find answers to questions that we did not know how to ask. We are still working to enhance the visualization module to enrich the user’s overall interactive experience. Ongoing effort is focusing on integrating ARIANA+ with other tools that provide complementary information to generate actionable knowledge.
Finally, in critical situations such as crisis management and epidemic monitoring, time becomes the most crucial parameter; understanding and extracting meaningful associations and exploring various hypotheses simultaneously can save human lives and expedite the process of rescue. In essence, in time-critical circumstances rapid response is needed to preserve national security at many levels (also see Chapters 4 and 5). Moreover, in today’s fast-paced and globally interconnected world, public health experts, who historically were trained in medical or epidemiological fields, now come from diverse backgrounds including anthropology, economics, sociology, and engineering (Garcia et al., 2014). Therefore, policy decision makers from diverse backgrounds have to navigate large and complex literature evidence of varying quality and relevance to make important decisions quickly (Cockcroft et al., 2014). A tool such as ARIANA that extracts knowledge and summarizes the literature has great value in providing evidence-based decision support to government agencies and decision makers. The presented framework can be a great tool in KD, hypothesis generation, and data repurposing. In addition, identification of potential hypotheses for a fast response to pandemics can be of great importance. Furthermore, a customized system can be implemented to analyze different types of short text data in English or any other language, such as e-mail or other types of communications, reports, or Web-based information. The system can be configured to address other areas of concern for national security, such as law and order or combating terrorism.

Acknowledgment

This work was supported by the Electrical and Computer Engineering Department and Bioinformatics Program at the University of Memphis; by the University of Tennessee Health Science Center; and by NSF Grant NSF-IIS-0746790. The authors thank Faruk Ahmed, Shahinur Alam, Hossein Taghizad, and Karthika Ramani Muthukuri for programming support and implementation of the Web tool for the ARIANA+ system.

References

Abedi V, Yeasin M, Zand R. ARIANA: adaptive robust and integrative analysis for finding novel associations. In: The 2014 International Conference on Advances in Big Data Analytics. Las Vegas, NV. 2014.

Abedi V, Zand R, Yeasin M, Faisal F.E. An automated framework for hypotheses generation using literature. BioData Mining. 2012;5:13. doi: 10.1186/1756-0381-5-13.

Altman R.B, Bergman C.M, Blake J, Blaschke C, Cohen A, Gannon F, Grivell L, Hahn U, Hersh W, Hirschman L, Jensen L.J, Krallinger M, Mons B, O’Donoghue S.I, Peitsch M.C, Rebholz-Schuhmann D, Shatkay H, Valencia A. Text mining for biology–the way forward: opinions from leading scientists. Genome Biology. 2008;9(Suppl. 2):S7. doi: 10.1186/gb-2008-9-s2-s7.

Ashburner M, Ball C.A, Blake J.A, Botstein D, Butler H, Cherry J.M, Davis A.P, Dolinski K, Dwight S.S, Eppig J.T, Harris M.A, Hill D.P, Issel-Tarver L, Kasarskis A, Lewis S, Matese J.C, Richardson J.E, Ringwald M, Rubin G.M, Sherlock G. Gene ontology: tool for the unification of biology. The gene ontology consortium. Nature Genetics. 2000;25:25–29. doi: 10.1038/75556.

Berman J.J. Pathology abbreviated: a long review of short terms. Archives of Pathology & Laboratory Medicine. 2004;128:347–352. doi: 10.1043/1543-2165(2004)128<347:PAALRO>2.0.CO;2.

Berry M.P.R, Graham C.M, McNab F.W, Xu Z, Bloch S.A.A, Oni T, Wilkinson K.A, Banchereau R, Skinner J, Wilkinson R.J, Quinn C, Blankenship D, Dhawan R, Cush J.J, Mejias A, Ramilo O, Kon O.M, Pascual V, Banchereau J, Chaussabel D, O’Garra A. An interferon-inducible neutrophil-driven blood transcriptional signature in human tuberculosis. Nature. 2010;466:973–977.

Berry M.W, Browne M. Understanding Search Engines: Mathematical Modeling and Text Retrieval. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics; 1999.

Brettner A, Heitzman E.R, Woodin W.G. Pulmonary complications of drug therapy. Radiology. 1970;96:31–38. doi: 10.1148/96.1.31.

Brinckerhoff C.E, Matrisian L.M. Matrix metalloproteinases: a tail of a frog that became a prince. Nature Reviews Molecular Cell Biology. 2002;3:207–214. doi: 10.1038/nrm763.

Cockcroft A, Masisi M, Thabane L, Andersson N. Science communication: legislators learning to interpret evidence for policy. Science. 2014;345:1244–1245. doi: 10.1126/science.1256911.

Cockersole F.J, Park W.W. Hexamethonium lung; report of a case associated with pregnancy. Journal of Obstetrics and Gynaecology of the British Empire. 1956;63:728–734.

Collier N, Doan S, Kawazoe A, Goodwin R.M, Conway M, Tateno Y, Ngo Q.-H, Dien D, Kawtrakul A, Takeuchi K, Shigematsu M, Taniguchi K. BioCaster: detecting public health rumors with a Web-based text mining system. Bioinformatics. 2008;24:2940–2941. doi: 10.1093/bioinformatics/btn534.

Davidson J.M. Biochemistry and turnover of lung interstitium. European Respiratory Journal. 1990;3:1048–1063.

Elkington P.T, Ugarte-Gil C.A, Friedland J.S. Matrix metalloproteinases in tuberculosis. European Respiratory Journal. 2011;38:456–464. doi: 10.1183/09031936.00015411.

Franceschini A, Szklarczyk D, Frankild S, Kuhn M, Simonovic M, Roth A, Lin J, Minguez P, Bork P, von Mering C, Jensen L.J. STRING v9.1: protein-protein interaction networks, with increased coverage and integration. Nucleic Acids Research. 2013;41:D808–D815. doi: 10.1093/nar/gks1094.

Garcia P, Armstrong R, Zaman M.H. Models of education in medicine, public health, and engineering. Science. 2014;345:1281–1283. doi: 10.1126/science.1258782.

Hirschman L, Morgan A.A, Yeh A.S. Rutabaga by any other name: extracting biological names. Journal of Biomedical Informatics. 2002;35:247–259.

Internal Investigative Committee Membership. Report of Internal Investigation into the Death of a Volunteer Research Subject [Online]. 2001. Available from: http://www.hopkinsmedicine.org/press/2001/july/report_of_internal_investigation.htm.

Kim J.-J, Pezik P, Rebholz-Schuhmann D. MedEvi: retrieving textual evidence of relations between biomedical concepts from Medline. Bioinformatics. 2008;24:1410–1412. doi: 10.1093/bioinformatics/btn117.

Kirkland K.E, Kirkland K, Many W.J, Smitherman T.A. Headache among patients with HIV disease: prevalence, characteristics, and associations. Headache. 2012;52:455–466. doi: 10.1111/j.1526-4610.2011.02025.x.

Landauer T.K, Dumais S.T. A solution to Plato’s problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review. 1997;104:211–240. doi: 10.1037/0033-295X.104.2.211.

Landauer T.K, Laham D, Derr M. From paragraph to graph: latent semantic analysis for information visualization. Proceedings of the National Academy of Sciences. 2004;101:5214–5219. doi: 10.1073/pnas.0400341101.

Lu Z. PubMed and beyond: a survey of web tools for searching biomedical literature. Database (Oxford). 2011:baq036. doi: 10.1093/database/baq036.

Malaia L.T, Shalimov A.A, Dushanin S.A, Liashenko M.M, Zverev V.V. Catheterization of veins and selective angiopulmonography in comparison with several indices of the functional state of the external respiratory apparatus and blood circulation during chronic lung diseases. Kardiologiia. 1967;7:112–119.

Mehra S, Pahar B, Dutta N.K, Conerly C.N, Philippi-Falkenstein K, Alvarez X, Kaushal D. Transcriptional reprogramming in nonhuman primate (Rhesus Macaque) tuberculosis granulomas. PLoS One. 2010;5 doi: 10.1371/journal.pone.0012266.

Nasukawa T, Yi J. Sentiment analysis. In: Proceedings of the International Conference on Knowledge Capture - K-cap ‘03. New York, USA: ACM Press; 2003:70. doi: 10.1145/945645.945658.

National Center for Biotechnology Information, n.d. Phenotype-genotype Integrator [Online]. Available from: http://www.ncbi.nlm.nih.gov/gap/phegeni.

Nishida Y, Tandai-Hiruma M, Kemuriyama T, Hagisawa K. Long-term blood pressure control: is there a set-point in the brain? Journal of Physiological Sciences. 2012;62:147–161. doi: 10.1007/s12576-012-0192-0.

Ohta T, Masuda K, Hara T, Tsujii J, Tsuruoka Y, Takeuchi J, Kim J.-D, Miyao Y, Yakushiji A, Yoshida K, Tateisi Y, Ninomiya T. An intelligent search engine and GUI-based efficient MEDLINE search tool based on deep syntactic parsing. In: Proceedings of the COLING/ACL on Interactive Presentation Sessions. Morristown, NJ, USA: Association for Computational Linguistics; 2006:17–20. doi: 10.3115/1225403.1225408.

Pang B, Lee L. Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval. 2008;2:1–135. doi: 10.1561/1500000011.

Papazoglou M. Web Services: Principles and Technology. Pearson Prentice Hall; 2008.

Perez-Iratxeta C, Bork P, Andrade M.A. XplorMed: a tool for exploring MEDLINE abstracts. Trends in Biochemical Sciences. 2001;26:573–575.

Rebholz-Schuhmann D, Kirsch H, Arregui M, Gaudan S, Riethoven M, Stoehr P. EBIMed–text crunching to gather facts for proteins from Medline. Bioinformatics. 2007;23:e237–e244. doi: 10.1093/bioinformatics/btl302.

Reid E. US domestic extremist groups on the web: link and content analysis. IEEE Intelligent Systems. 2005;20:44–51. doi: 10.1109/MIS.2005.96.

Ridnour L.A, Dhanapal S, Hoos M, Wilson J, Lee J, Cheng R.Y.S, Brueggemann E.E, Hines H.B, Wilcock D.M, Vitek M.P, Wink D.A, Colton C.A. Nitric oxide-mediated regulation of β-amyloid clearance via alterations of MMP-9/TIMP-1. Journal of Neurochemistry. 2012;123:736–749. doi: 10.1111/jnc.12028.

Robillard R, Riopelle J.L, Adamkiewicz L, Tremblay G, Genest J. Pulmonary complications during treatment with hexamethonium. Canadian Medical Association Journal. 1955;72:448–451.

Russell D.G, VanderVen B.C, Lee W, Abramovitch R.B, Kim M, Homolka S, Niemann S, Rohde K.H. Mycobacterium tuberculosis wears what it eats. Cell Host Microbe. 2010;8:68–76. doi: 10.1016/j.chom.2010.06.002.

Rzhetsky A, Seringhaus M, Gerstein M. Seeking a new biology through text mining. Cell. 2008;134:9–13. doi: 10.1016/j.cell.2008.06.029.

Smalheiser N.R, Zhou W, Torvik V.I. Anne O’Tate: a tool to support user-driven summarization, drill-down and browsing of PubMed search results. Journal of Biomedical Discovery and Collaboration. 2008;3:2. doi: 10.1186/1747-5333-3-2.

Stableforth D.E. Chronic lung disease. Pulmonary fibrosis. British Journal of Hospital Medicine. 1979;22(128):132–135.

Steinberger R, Fuart F, Van der Goot E, Best C. Text mining from the Web for medical intelligence. 2008:295–310. doi: 10.3233/978-1-58603-898-4-295.

Thuong N.T.T, Dunstan S.J, Chau T.T.H, Thorsson V, Simmons C.P, Quyen N.T.H, Thwaites G.E, Lan N.T.N, Hibberd M, Teo Y.Y, Seielstad M, Aderem A, Farrar J.J, Hawn T.R. Identification of tuberculosis susceptibility genes with human macrophage gene expression profiles. PLoS Pathogens. 2008;4 doi: 10.1371/journal.ppat.1000229.

Toda N. Regulation of blood pressure by nitroxidergic nerve. Journal of Diabetes and Its Complications. 1995;9:200–202.

Van der Sar A.M, Spaink H.P, Zakrzewska A, Bitter W, Meijer A.H. Specificity of the zebrafish host transcriptome response to acute and chronic mycobacterial infection and the role of innate and adaptive immune components. Molecular Immunology. 2009;46:2317–2332. doi: 10.1016/j.molimm.2009.03.024.

Wilbur W.J, Hazard G.F, Divita G, Mork J.G, Aronson A.R, Browne A.C. Analysis of biomedical text for chemical names: a comparison of three methods. Proceedings of AMIA Symposium. 1999:176–180.

Wollmer M.A, Papassotiropoulos A, Streffer J.R, Grimaldi L.M.E, Kapaki E, Salani G, Paraskevas G.P, Maddalena A, de Quervain D, Bieber C, Umbricht D, Lemke U, Bosshardt S, Degonda N, Henke K, Hegi T, Jung H.H, Pasch T, Hock C, Nitsch R.M. Genetic polymorphisms and cerebrospinal fluid levels of tissue inhibitor of metalloproteinases 1 in sporadic Alzheimer’s disease. Psychiatric Genetics. 2002;12:155–160.

Yamamoto Y, Takagi T. Biomedical knowledge navigation by literature clustering. Journal of Biomedical Informatics. 2007;40:114–130. doi: 10.1016/j.jbi.2006.07.004.

Yan P, Hu X, Song H, Yin K, Bateman R.J, Cirrito J.R, Xiao Q, Hsu F.F, Turk J.W, Xu J, Hsu C.Y, Holtzman D.M, Lee J.-M. Matrix metalloproteinase-9 degrades amyloid-beta fibrils in vitro and compact plaques in situ. Journal of Biological Chemistry. 2006;281:24566–24574. doi: 10.1074/jbc.M602440200.

Yeasin M, Malempati H, Homayouni R, Sorower M. A systematic study on latent semantic analysis model parameters for mining biomedical literature. BMC Bioinformatics. 2009;10:A6. doi: 10.1186/1471-2105-10-S7-A6.

Yong V.W, Krekoski C.A, Forsyth P.A, Bell R, Edwards D.R. Matrix metalloproteinases and diseases of the CNS. Trends in Neuroscience. 1998;21:75–80. doi: 10.1016/S0166-2236(97)01169-7.

Zuberi K, Franz M, Rodriguez H, Montojo J, Lopes C.T, Bader G.D, Morris Q. GeneMANIA prediction server 2013 update. Nucleic Acids Research. 2013;41:W115–W122. doi: 10.1093/nar/gkt533.
