Chapter 14

Big Data Concerns in Autonomous AI Systems

James A. Crowder and John N. Carbone

Abstract

Current and future space, air, and ground systems are growing in complexity and capability, creating serious challenges for operators who monitor, maintain, and use systems in an ever-growing network of assets. Growing interest in autonomous systems with cognitive skills to monitor, analyze, diagnose, and predict behaviors in real time makes this problem even more challenging. Systems today continue to struggle to obtain actionable knowledge from an ever-increasing and inherently duplicative store of non-context-specific multidisciplinary information content. In addition, increased automation is the norm, and truly autonomous systems are the growing future for atomic/subatomic exploration and for challenging environments unfriendly to the physical human condition. Simultaneously, the size, speed, and complexity of systems continue to increase rapidly to improve timely generation of actionable knowledge. However, development of valuable, readily consumable knowledge density and context quality continues to improve more slowly and incrementally. New concepts, mechanisms, and implementations are required to facilitate the development and competency of complex systems capable of autonomous operation and self-healing, and thus of critical management of their knowledge economy and higher-fidelity self-awareness of their real-time internal and external operational environments. Presented here are new concepts and notional architectures to solve the problem of how to take the fuzziness of information content and drive it toward context-specific topical knowledge development. We believe this is necessary to facilitate real-time cognition-based information discovery, decomposition, reduction, normalization, encoding, memory recall (recombinant knowledge construction), and, most important, enhanced/improved decision making for autonomous artificially intelligent systems.

Keywords

Artificial intelligence; Autonomous decision making; Autonomous robotics; Newtonian mechanics; Quantum mechanics; Real-time autonomous systems

Introduction

To be truly autonomous, an artificially intelligent system (AIS) must be provided with real-time cognition-based information discovery, decomposition, reduction, normalization, encoding, and memory recall (i.e., knowledge construction) to improve understanding and context-based decision making for autonomous robotic systems. Cognitive systems must be able to integrate information into their current cognitive conceptual ontology (Crowder et al., 2012) to be able to “think” about, correlate, and integrate the information into the overall AIS memories. When describing how science integrates with information theory, Brillouin (2004) defined knowledge succinctly as resulting from a certain amount of thinking and as distinct from information, which had no value, was the “result of choice,” and was the raw material consisting of a mere collection of data. In addition, Brillouin concluded that 100 random sentences from a newspaper, a line of Shakespeare, or even a theorem of Einstein have exactly the same information value. Therefore, information content has no value until it has been thought about and thus turned into knowledge within a given context.
Decision making is of great concern because of the handling of ambiguity and the ramifications of erroneous inferences. There are often serious consequences when actions are taken based on incorrect recommendations (Crowder, 1996) and misunderstandings of context, which can influence decision making before the inaccurate inferences can be detected or corrected. Underlying the data fusion domain is the challenge of creating actionable knowledge from information content harnessed from an environment of vast, exponentially growing structured and unstructured sources of rich, complex, interrelated cross-domain data. This is a major challenge for autonomous artificially intelligent (AI) systems that must deal with ambiguity without the advantage of operator-based assistance.
Dourish (2004a) stated that the scientific community has debated definitions of context and its uses for many years. He discussed two notions of context—technical, for conceptualizing human–action relationships between the action and the system, and social science—and reported that “ideas need to be understood in the intellectual frames that give them meaning.” Hence, he described features of the environment, where activity takes place (Dourish, 2004b). Alternatively, Torralba (2003) derived context-based object recognition from real-world scenes and said that one form of performing the task was to define the “context” of an object in a scene in terms of other previously recognized objects. The author concluded that a strong relationship exists between the environment and the objects found within and that increased evidence exists of early human perception of contextual information.
Dey (2001) presented a context toolkit architecture that supported the building of more optimal context-aware applications because, he argued, context was a poorly used resource of information in computing environments, and information must be used to characterize the collection of states—or, as he called it, the “situation abstraction” of a person, place, or object relevant to the interaction between a user and the application. Similarly, when describing a conceptual framework for context-aware systems, Coutaz et al. (2005) concluded that context informs recognition and mapping by providing a structured, unified view of the world in which a system operates. The authors provided a framework with an ontological foundation, an architectural foundation, and an approach to adaptation that supposedly scale alongside the richness of the environment. The authors further concluded that context was critical in understanding and developing information systems. Winograd (2001) noted that intention could be determined only through inferences based on context. Hong and Landay (2001) described context as knowing the answers to the “W” questions (e.g., Where are the movie theaters?). Similarly, Howard and Qusibaty (2004) described context for decision making using the interrogatory 5WH model (who, what, when, where, why, and how). Finally, Ejigu et al. (2008) presented a collaborative context-aware service platform based on a developed hybrid context management model. The goal was to sense context during execution along with internal states and user interactions by using context as a function of collecting, organizing, storing, presenting, and representing hierarchies, relations, axioms, and metadata.
These discussions outline the need for an AIS cognitive framework that can analyze and process knowledge and context (Crowder and Carbone, 2012) and represent context in a knowledge management framework composed of processes, collection, preprocessing, integration, modeling, and representation, thus enabling the transition from data, information, and knowledge to new knowledge. Described in this chapter is a cognition-based processing framework and memory management encoding and storage methodology for capturing contextual knowledge, thus providing decision-making support in the form of a knowledge thread repository that depicts the relationships corresponding to specific context instances.

Artificially Intelligent System Memory Management

Sensory Memories

The sensory memory within the AIS memory system consists of memory registers in which raw, unprocessed information is ingested via AIS environmental sensors and buffered to begin initial processing. The AIS sensory memory system has a large capacity to accommodate large quantities of possibly disparate and diverse information from a variety of sources (Crowder, 2010b). Although it has large capacity, it has short duration. The information buffered in this sensory memory must be sorted, categorized, and turned into information fragments, metadata, contextual threads, and attributes (including emotional attributes), and then sent on to working memory (short-term memory (STM)) for initial cognitive processing. This cognitive processing is known as recombinant knowledge assimilation (RNA), in which raw information content is discovered from the information domain and is decomposed, reduced, compared, contrasted, and associated into new relationship threads within a temporary working knowledge domain, and subsequently normalized into a pedigree within the knowledge domain for future use (Crowder and Carbone, 2011b). Hence, based on the information gathered in initial sensory memory processing, cognitive perceptrons, manifested as intelligence information software agents (ISAs), are spawned in swarms sized relative to the ingested data to create initial “thoughts” about the data. Subsequently, hypotheses are generated by the ISAs. The thought-process information and ISA sensory information are then sent to a working memory region that alerts the artificial cognition processes within the AIS to begin processing (Crowder and Friess, 2012). Figure 14.1 illustrates the sensory memory lower ontology.
Figure 14.1 Sensory memory lower ontology.
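The RNA flow just described (discover, decompose, reduce, and normalize raw content into information fragments) can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions: the fragment structure, the stopword-based reduction, and the lead-token normalization are placeholders standing in for the actual AIS mechanisms.

```python
from dataclasses import dataclass, field

@dataclass
class InformationFragment:
    """A unit of decomposed sensory input (illustrative structure, not the AIS encoding)."""
    topic: str
    payload: str
    attributes: dict = field(default_factory=dict)

def rna_ingest(raw_items, stopwords=frozenset({"the", "a", "an"})):
    """Sketch of the RNA flow: decompose each raw item into tokens,
    reduce out low-value tokens, normalize a topical key, and emit
    information fragments for the temporary working knowledge domain."""
    fragments = []
    for raw in raw_items:
        # Decompose and reduce: tokenize, then drop stopwords
        tokens = [t for t in raw.lower().split() if t not in stopwords]
        if not tokens:
            continue  # nothing of value survived reduction
        topic = tokens[0]  # crude normalization: lead token as topical key
        fragments.append(InformationFragment(
            topic=topic,
            payload=" ".join(tokens),
            attributes={"source_len": len(raw)}))
    return fragments
```

A real sensory memory would also attach contextual threads and emotional attributes at this stage; here the `attributes` dictionary merely marks where that metadata would go.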

Short-term Artificial Memories

Short-term or working memory within the AIS is where new information is transitionally stored in a temporary knowledge domain (Crowder and Carbone, 2011a) while it is processed into new knowledge. This follows the paradigm that information content has no value until it is thought about (Brillouin, 2004). Short-term memory is where most reasoning within the AIS happens. Short-term memory provides a major functionality called rehearsals, which allows the AIS to continually refresh or rehearse STMs while they are being processed and reasoned about, so that memories do not degrade until they can be sent on to long-term memory (LTM) and acted upon by the artificial consciousness processes within the AIS’s cognitive framework (Crowder and Carbone, 2011a).
Short-term memory is much smaller in relative space compared with LTM. Short-term memory should not necessarily be perceived as a physical location, as in the human brain, but rather as the rapid and continuous processing of information content relative to a specific AIS directive or current undertaking. One must remember that STM, which includes all external and internal sensory inputs, will trigger a rehearsal if the AIS discovers a relationship to a previously stored piece of information content in either STM or LTM. Figure 14.2 illustrates the STM lower ontology for the AIS.

Long-term Artificial Memories

In the simplest sense, LTM is the permanent knowledge domain where we assimilate our memories (Crowder and Carbone, 2011a). If the information we take in through our senses does not make it to LTM, we cannot and do not remember it. Information that is processed in the STM makes it to LTM through the process of rehearsal, processing, and encoding, and then by creating associations with other memories. In the brain, memories are not stored in files or in a database. In fact, memories are not stored whole at all, but instead are stored as information fragments. The process of recall, or remembering, constructs memories from these information fragments that, depending on the type of information, are stored in various regions of the brain.
Figure 14.2 Artificially intelligent system short-term memory (STM) lower ontology.
To create our AIS in a way that mimics human reasoning, we follow the process of storing information fragments and their respective encoding in different ways, depending on the type and context of the information. Each simple discrete fragment of objective knowledge includes an n-dimensional set of quantum mechanics–based mathematical relationships to other fragments/objects bundled in the form of eigenvector-optimized knowledge relativity threads (KRT) (Crowder and Carbone, 2011a). These KRT bundles include closeness and relative importance value, among others. This importance is tightly coupled to the AIS emotional storage as a function of desire or need, as described in Figure 14.3, in which the LTM lower ontology is illustrated. There are three main types of LTM (Crowder, 2010a): explicit or declarative memories, implicit memories, and emotional memories.
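As a toy illustration of KRT bundles, each thread below carries only two of the n dimensions named above (closeness and relative importance), and the eigenvector optimization is replaced by a simple product-weight ranking. All names and the scoring rule are our own illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KRTLink:
    """One knowledge relativity thread between two fragments.
    The two weights are a reduced stand-in for the n-dimensional
    relationship set described in the text."""
    src: str
    dst: str
    closeness: float   # 0..1 contextual proximity
    importance: float  # 0..1 relative value, coupled to emotional weighting

def strongest_associations(links, fragment_id, k=2):
    """Rank the threads leaving a fragment by combined weight,
    a placeholder for the eigenvector-optimized selection."""
    related = [l for l in links if l.src == fragment_id]
    related.sort(key=lambda l: l.closeness * l.importance, reverse=True)
    return related[:k]
```

In the full architecture, recall would traverse such bundles to reassemble a memory from its fragments rather than simply returning the top-weighted neighbors.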

Artificial Memory Processing and Encoding

Short-term Artificial Memory Processing

In the human brain, STM corresponds to the area of memory associated with active consciousness and is where most cognitive processing takes place. It is also temporary storage and requires rehearsal to keep it fresh until it is compiled into LTM. In the AIS, the memory system does not decay over time; however, the notion of memory refresh or rehearsal is still a valid concept because artificial cognitive processes work on this information. Here, rehearsal means keeping track of versions of STM as it is processed and evaluated by artificial cognition algorithms, which is why it appears to feed back onto itself (the rehearsal loop). This is illustrated in Figure 14.4, the AIS STM attention loop. Three distinct processes are handled within the STM that determine where information is transferred after cognitive processing (Crowder, 2010a). This processing is shown in Figure 14.5.
Figure 14.3 Artificial long-term memory (LTM) lower ontology.
Artificial STM processing steps are:
Information fragment selection: This process involves filtering incoming information from the AIS artificial preconscious buffers into separable information fragments and then determining which information fragments are relevant to be further processed, stored, and acted upon by the cognitive processes of the AIS as a whole. Once information fragments are created from incoming sensory information, they are analyzed and encoded with initial topical information as well as metadata attributes that allow the cognitive processes to organize and integrate incoming information fragments into the AIS’s overall LTM system. Information fragment encoding creates a small information fragment cognitive map that will be used for organization and integration functions.
Information fragment organization: These processes within the artificial cognition framework create additional attributes within the information fragment cognitive map that allow it to be organized for integration into the overall AIS LTM framework. These attributes have to do with how the information will be represented in LTM and determine how these memory fragments will be used to construct new memories or recall memories later as needed by the AIS. This step uses knowledge relativity thread (KRT) representation to capture the context of the information fragment and each of its qualitative relationships to other fragments and/or bundles of fragments already created.
Figure 14.4 Short-term artificial memory attention loop.
Information fragment integration: Once information fragments within the STM have been KRT encoded, they are compared, associated, and attached to larger topical cognitive maps that represent relevant subjects or topics within the AIS’s LTM system. Once these information fragment cognitive maps have been integrated, processed, and reasoned about, including emotional triggers or emotional memory information, they are sent on to the LTM system as well as the AIS artificial prefrontal cortex to determine whether actions are required.
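The three steps above (selection, organization, integration) can be sketched as a single pass over incoming items. The relevance score, the attribute names, and the dictionary-based topical maps are illustrative assumptions, not the AIS data structures.

```python
def stm_pipeline(sensory_items, relevance_threshold=0.5):
    """Sketch of the three STM processing steps.
    Each item is a dict with at least 'topic' and 'relevance' keys."""
    # 1. Selection: keep only fragments deemed relevant for further processing
    selected = [it for it in sensory_items if it["relevance"] >= relevance_threshold]
    # 2. Organization: attach attributes used later for LTM representation
    for it in selected:
        it["cognitive_map"] = {"topic": it["topic"], "encoding": "krt"}
    # 3. Integration: attach fragments to larger topical cognitive maps
    topical_maps = {}
    for it in selected:
        topical_maps.setdefault(it["topic"], []).append(it)
    return topical_maps
```

In the full system, integration would also consult emotional triggers and forward the result to the artificial prefrontal cortex; here the returned topical maps simply stand in for that handoff.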
One of the major functions within the STM attention loop is the spatiotemporal burst detector. Within these processes, binary information fragments (BIFs) are ordered in terms of their spatial and temporal characteristics. Spatial and temporal transition states are measured in terms of mean, mode, median, velocity, and acceleration, and correlations are computed between their spatial and temporal characteristics and measurements. Rather than just looking at frequencies of occurrence within information, we also look for rapid increases in temporal or spatial characteristics that may trigger an inference or emotional response from the cognitive processes.
Figure 14.5 Artificially intelligent systems information fragment encoding.
An AIS does not process information content differently based on how rapidly content is ingested; rather, an AIS must be able to recognize instances when information content seems out of place within the context of a situation (e.g., a single speeding car within a crowd of hundreds of other cars). An AIS must not only optimize its processing on the supply side of the knowledge economy; it must also recognize, infer, and avoid distractions that would divert the demand side of its knowledge economy from its operations and directives. State transition bursts are ranked according to their weighting (velocity and acceleration) together with the associated temporal and/or spatial characteristics and any triggers that might have resulted from this burst processing (LaBar and Cabeza, 2006). This burst detection and processing may help identify relevant topics, concepts, or inferences that need further processing by the artificial prefrontal cortex and/or cognitive consciousness processes (Crowder and Friess, 2012).
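A minimal burst detector over a series of observation counts can be written with first and second differences: the first difference plays the role of velocity, the second that of acceleration, and indices whose acceleration exceeds a threshold are flagged. The threshold value and the count-series input are illustrative assumptions; a real detector would operate on correlated spatial and temporal state measurements.

```python
def detect_bursts(counts, accel_threshold=3):
    """Sketch of spatiotemporal burst detection.
    Returns (velocity, acceleration, burst_indices), where burst
    indices refer to positions in the original counts series."""
    # First differences: rate of change ("velocity")
    velocity = [b - a for a, b in zip(counts, counts[1:])]
    # Second differences: rate of change of the rate ("acceleration")
    accel = [b - a for a, b in zip(velocity, velocity[1:])]
    # A burst is a sudden increase, not merely a high frequency
    bursts = [i + 2 for i, a in enumerate(accel) if a >= accel_threshold]
    return velocity, accel, bursts
```

Note that a steady high count never fires: only the sudden jump does, which matches the text's distinction between frequency of occurrence and rapid increase.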
Once processing within the STM system has been completed and all memories are encoded, mapped to topical associations, and captured with their contexts, their KRT-bundled representations are created and sent on to the cognitive processing engine. Memories deemed relevant to remember are then integrated into the LTM system.

Long-term Artificial Memory Processing

The overall AIS high-level memory architecture is shown in Figure 14.6. One thing to note is the connection between emotional memories and both explicit and implicit memories. Emotional memory carries both explicit and implicit characteristics.
Explicit or declarative memory is used to store conscious memories or conscious thoughts. Explicit memory carries information fragments that are used to create what most people would think of when they envision a memory. Explicit memory stores things such as objects and events, i.e., things that are experienced in the person’s environment. Information fragments stored in explicit memory are normally stored in association with other information fragments that relate in some fashion. The more meaningful the association, the stronger the memory and the easier it is to reconstruct or recall the memory at will (Yang and Raine, 2009). In our AIS, explicit memory is divided into different regions, depending on the type or source of information. Regions are divided because different types of information fragments within the AIS memories are encoded and represented differently, each with its own characteristics, which makes it easier to construct or recall memories when the AIS later needs them. In the AIS LTM, we use fuzzy, self-organizing, contextual topical maps to associate currently processed information fragments from the STM with memories stored in the LTM (Crowder and Carbone, 2011a).
Figure 14.6 High-level artificial memory architecture.
Long-term memory information fragments are not stored in databases or as files, but are encoded and stored as a triple helix of continuously recombinant binary neural fiber threads that represent:
• The BIF object along with the BIF binary attribute objects
• The BIF RNA binary relativity objects
• The binary security encryption threads
Built into the RNA binary relativity objects are binary memory reconstruction objects, based on the type and source of BIF, that allow memories to be constructed for recall purposes.
There are several types of binary memory reconstruction objects:
• Spectral eigenvectors that allow memory reconstruction using implicit and biographical LTM BIFs
• Polynomial eigenvectors that allow memory reconstruction using episodic LTM BIFs
• Socio-synthetic autonomic nervous system arousal state vectors that allow memory reconstruction using emotional LTM BIFs
• Temporal confluence and spatial resonance coefficients that allow memory reconstruction using spatiotemporal episodic LTM BIFs
• Knowledge relativity and contextual gravitation coefficients that allow memory reconstruction using semantic LTM BIFs
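The mapping from BIF type to reconstruction object family in the list above can be captured directly as a dispatch table. The string keys and value names are shorthand labels of our own; they merely mirror the five families enumerated in the text.

```python
# Dispatch table mirroring the five reconstruction-object families above.
# Keys are LTM BIF types; values label the reconstruction object family.
RECONSTRUCTION_OBJECTS = {
    "implicit":       "spectral_eigenvectors",
    "episodic":       "polynomial_eigenvectors",
    "emotional":      "arousal_state_vectors",
    "spatiotemporal": "confluence_resonance_coefficients",
    "semantic":       "relativity_gravitation_coefficients",
}

def reconstruction_object_for(bif_type):
    """Select the reconstruction object family for a BIF type.
    Unknown types return None rather than raising, so callers can
    decide how to handle unclassified fragments."""
    return RECONSTRUCTION_OBJECTS.get(bif_type)
```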

Implicit Biographical Memory Recall/Reconstruction Using Spectral Decomposition Mapping

We create a nonuniform expanding fractal decomposition of the image to be remembered. We use the right and left eigenvectors of the Pollicott–Ruelle resonances to determine the separable pictorial information fragment (PIF) objects. The resulting singular fractal functions form fractal spectral representations of the PIFs. These binary fractal representations are stored as the binary information fragments for the image. The reconstruction uses these PIFs to create a piecewise linear image memory reconstruction, although the individual PIFs can also be used in other memory and cognitive processes, such as pattern matching and/or pattern discovery. The proposed high-level architecture for the AIS cognition and memory system is illustrated in Figure 14.7.
Figure 14.7 Artificially intelligent system high-level cognitive architecture.
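To make the decompose-then-reconstruct idea concrete without reproducing the Pollicott–Ruelle resonance analysis, the sketch below substitutes plain quadtree splitting: an image (a square list of lists) is split into quadrant PIFs, which are later reassembled piecewise. This is an analogy only; the actual decomposition is nonuniform and spectral, not a uniform quadtree.

```python
def decompose_pifs(image, depth=1):
    """Toy analogue of the fractal decomposition: recursively split a
    square image into quadrant PIFs. Quadtree splitting stands in for
    the Pollicott-Ruelle resonance analysis to keep the sketch
    self-contained."""
    n = len(image)
    if depth == 0 or n == 1:
        return [image]
    h = n // 2
    quads = [
        [row[:h] for row in image[:h]], [row[h:] for row in image[:h]],
        [row[:h] for row in image[h:]], [row[h:] for row in image[h:]],
    ]
    pifs = []
    for q in quads:
        pifs.extend(decompose_pifs(q, depth - 1))
    return pifs

def reconstruct(pifs):
    """Piecewise reassembly of a depth-1 decomposition (four PIFs),
    mirroring the piecewise linear memory reconstruction in the text."""
    tl, tr, bl, br = pifs
    top = [a + b for a, b in zip(tl, tr)]
    bottom = [a + b for a, b in zip(bl, br)]
    return top + bottom
```

The same PIFs could feed pattern matching (e.g., comparing a quadrant against stored fragments) without ever reassembling the whole image, which is the point the text makes about reuse in other cognitive processes.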

Constructivist Learning

A major issue in Big Data is the need to learn continually as more information and knowledge are gained and the volume of processed data increases. This leads us to look at constructivist learning as a construct for Big Data processing. In the constructivist view, learning is a constructive process in which the learner builds an internal representation of knowledge, a personal interpretation of experience. This representation is continually open to modification, its structure and linkages forming the ground to which other knowledge structures are attached. Learning is an active process in which meaning is accomplished on the basis of experience. This view of knowledge does not reject the existence of the real world and agrees that reality places constraints on what is possible, contending that all we know of the real world are human interpretations of experience. Conceptual growth comes from the sharing of various perspectives and the simultaneous changing of our internal representations in response to those perspectives, as well as through cumulative experience (Bednar et al., 1998).
When considering Big Data in light of an AIS, we have to ask ourselves, “What is reality?” Here we take our cue from humans. Each person has experiences of an event, and each person will see reality differently and uniquely. There is also world reality, which may be based on fact or on perceptions of fact. In fact, we construct our view of the world, of reality, from our memories and our experiences. For further thought, let us then consider construct psychology. According to the Internet Encyclopedia of Personal Construct Psychology, constructivist philosophers are interested more in people’s construction of the world than in evaluating the extent to which such constructions are true in representing a presumable external reality. It makes sense to look at this in the form of legitimacies. What is true is factually legitimate, and people’s construction of external reality is another form of legitimacy. Later, we can consider the locus of control in relation to internal and external legitimacies or realities.
An AIS is not human and does not have human perceptions. Artificially cognitive systems may have their own perceptions and realities, and it is important that such cognitive systems and memories have the ability to construct correct views of the world around them if we are to rely on them. Thus, a mentor will be necessary. That mentor will need to understand the artificial cognitive system, the AIS, and be able to understand the AIS in a human way, a human reality. After all, is this not what makes the AIS autonomous?
Constructivist psychology is a meta-theory that integrates different schools of thought. According to Bednar (Bednar et al., 1998):

Hans Vaihinger (1852–1933) asserted that people develop “workable fictions.” This is his philosophy of “As if” such as mathematical infinity or God. Alfred Korzybski’s (1879–1950) system of semantics focused on the role of the speaker in assigning meaning to events. Thus constructivists thought that human beings operated on the basis of symbolic or linguistic constructs that help navigate the world without contacting it in any simple or direct way. Postmodern thinkers assert that constructions are viable to the extent that they help us live our lives meaningfully and find validation in shared understandings of others. We live in a world constituted by multiple social realities, no one of which can claim to be “objectively” true across persons, cultures, or historical epochs. Instead, the constructions on the basis of which we live are at best provisional ways of organizing our “selves” and our activities, which could under other circumstances be constituted quite differently.

According to Adlerian Therapy as a Relational Constructivist Approach, the Adlerian perspective affirms the emphasis on the importance of humans as active agents creatively involved in the construction of their own psychology. Here, the position is that “although humans exist in a socio-cultural world of persons, a distinguishing characteristic of personhood is the possession of an individual agentic consciousness.” The article goes on to say, “If there is no self-reflexive individual and situatedness is indeed inescapable, then it is a spurious notion to think we can engage in what can be called the ‘emancipatory potential of discourse analysis,’ that is, inquiry which causes us to reflect critically and creatively on our own forms of life.” Adlerian therapy also accounts for both the socially embedded nature of human knowledge and the personal agency of creative and self-reflective individuals within relationships.
According to Personal Construct Psychology, Constructivism, and Postmodern Thought (Luis Botella at http://www.massey.ac.nz/-alock/virtual/Construc.htm), there are three main areas to consider: psychological knowledge, psychological practice, and psychological research. First, we consider psychological knowledge. In his article, Mahoney (2003) said: “knowledge cannot be disentangled from the process of knowing, and all human knowing is based in value-generated processes” (p. 451). Next we consider psychological research. In postmodern terms, research is not viewed as a mapping of some objective reality, but as an interactive co-construction of the subject investigated (Kvale, 1992). This conversational and interpretive view of psychological research requires a multi-method approach, fostering the use of hermeneutic, phenomenological, and narrative methodologies.
For the Big Data concerns of an AIS in terms of constructivist learning, the AI cognitive learning process is a building (or construction) process in which the AI’s cognitive system builds an internal representation of knowledge based on its experiences and personal interpretation (fuzzy inferences) of experience. The knowledge representations and KRTs within the cognitive system’s memories are continually open to modification, and the structure and linkages formed within the AI’s STM, LTM, and emotional memories, along with the contextual KRTs, form the basis on which knowledge structures are created and attached to BIFs. Learning becomes an active process in which meaning is accomplished through experience, combining structural knowledge (knowledge provided at the outset) with constructivist knowledge to provide the AIS’s view of the real world around it. Conceptual growth within the autonomous AIS would come from collaboration among all ISAs within the system, sharing their experiences and inferences, the total of which creates changing interpretations of their environment through their collective, cumulative experiences.
Therefore, one result of the constructivist learning process within the AIS is to gradually change the locus of control from external (the system needing external input to make sense or infer about its environment) to internal (the system has a cumulative constructive knowledge base of information, knowledge, context, and inferences to handle a given situation internally, meaning it is able to make relevant and meaningful decisions and inferences about a situation without outside knowledge or involvement).
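The locus-of-control shift can be illustrated as a running confidence tally: each successfully handled experience raises cumulative confidence, and once a threshold is crossed the system's decisions become internal rather than externally assisted. The integer confidence scale, the fixed gain, and the threshold are all illustrative assumptions.

```python
def locus_after_experience(initial_confidence, experiences, gain=10, threshold=80):
    """Sketch of the locus-of-control shift. Confidence is tracked on
    a 0-100 integer scale; each entry in `experiences` is True for a
    successfully handled situation, False otherwise. Returns the locus
    ('internal' or 'external') after each experience."""
    confidence = initial_confidence
    history = []
    for ok in experiences:
        confidence = min(100, confidence + gain) if ok else max(0, confidence - gain)
        history.append("internal" if confidence >= threshold else "external")
    return history
```

The shift is reversible in this sketch: repeated failures drop confidence back below the threshold, returning the locus to external, which matches the idea that the system should seek outside knowledge when its constructed knowledge base proves insufficient.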
It might be possible to pose specific goals for the AIS to cause it to construct knowledge about a subject or situation incrementally as data are added, to aid in its learning process as the system evolves. It may be possible to provide a real-world context for the AIS, giving it the cognitive knowledge to understand whether its locus of control should be internal or external, and when it can make that shift in its understanding.

Adaptation of Constructivist Learning Concepts for Big Data in an AIS

• Learning to strengthen knowledge (gain a better understanding of things, topics, etc. that have been learned)
Role of learning management system: Administering learning goals and constraints
Role of learning algorithms: Measures of effectiveness against goals and constraints
- Uses hypothesis testing from hypotheses generated by knowledge acquisition learning system
Function of learning in this role: Increase in stimulus–response feedback for this strengthened knowledge within the cognitive conceptual ontology
Focus: Addition of behaviors/information to current memories; addition of contextual threads to current memories; addition of emotional memory triggers; addition of procedural memories
• Learning to acquire knowledge (understanding new information, new topics, etc. that have not been previously experienced or learned)
Role of learning management system: Present new information/concepts to be learned from sensor information correlated with current conceptual ontology
Role of learning algorithms: Receive and process information to form new concept(s) that must be included in conceptual ontology (Occam learning algorithms)
Function of learning in this role: Create new concepts, find fundamental concept that can be learned about this new information, and generate hypotheses about concept for knowledge strengthening learning system to use when new information is available.
Focus: Creation of procedural memories; creation of initial information fragments
• Learning to construct knowledge (create a knowledge representation in our memories; create meaningful connections between knowledge)
Role of learning management system: Cognitive guidance and modeling; deconstruct information into manageable information fragments, correlation (integration) into current memory fragment structure; encoding of memory fragments, based on RNA threads and information encoding schemas
Role of learning algorithms: Reasoning and analysis of data to determine stimulus/response to goals and constraints; making sense of information and constructing knowledge representations
Functions of learning in this role: Create meaningful information fragment representations and contextual threads that allow assimilation into LTMs; memory organization and integration
Focus: Constructivist learning (active learning) using a variety of cognitive processes (reasoner and analyst agents) during the learning process; construction of emotional contexts
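The three learning modes above (strengthening, acquisition, and construction) can be sketched as a single dispatcher over a simple memory store. The memory layout, scoring, and mode labels are our own illustrative assumptions, not the AIS learning algorithms.

```python
def learn(memory, topic, observation):
    """Sketch of the three constructivist learning modes.
    `memory` maps topic -> {'strength', 'observations', 'links'}.
    Returns which mode was exercised."""
    if topic in memory:
        # Strengthening: reinforce stimulus-response for known knowledge
        memory[topic]["strength"] += 1
        memory[topic]["observations"].append(observation)
        return "strengthen"
    # Construction: link the new topic to topics sharing this observation
    related = [t for t in memory if observation in memory[t]["observations"]]
    memory[topic] = {"strength": 1, "observations": [observation], "links": related}
    # Acquisition: a wholly new concept with no meaningful connections yet
    return "construct" if related else "acquire"
```

Seen this way, construction is acquisition plus connection: the new concept is retained either way, but only when it can be tied to existing knowledge do meaningful KRT-style links form.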

Practical Solutions for Secure Knowledge Development in Big Data Environments

As expressed previously, constructing qualitative knowledge is a function of meaningful information content management and the ability of the system to develop high-fidelity weighted n-dimensional value within the Big Data storage environments of systems. The practical reality of system solutions is that they must be (S)ecure; they must manage the natural (M)alleability of information content; and they must be able to (S)ynthesize that content, understand its patterns, and store the contextual pedigree (H)euristics (e.g., state, time, form) across locally or geographically distributed nodes. Hence, SMSHy Information Content Management requires practical solutions.
Practical system security for Big Data is made adaptable through discrete obfuscation (DO), which enables data to secure itself by separating information content from knowledge context. A Big Data system is made malleable by implementing a framework that optimizes the knowledge structures representing an ever-changing situational understanding; these structures are organized to allow synthesis, the rapid capture of new knowledge, context, and relationships. Finally, a practical Big Data system must have scalable rules defining the ingest, analysis, and storage functions required to retain system pedigree, so that it might heal itself: a capability most systems do not have.
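The content/context separation at the heart of discrete obfuscation can be sketched as follows. The sharding scheme, the opaque identifiers, and all names are our illustrative assumptions, not the authors' design; the idea shown is only that shards alone are meaningless without the separately held context map.

```python
import hashlib
import random

def discrete_obfuscation(document: str, nodes: int = 3):
    """Separate information content from knowledge context: fragments go to
    shards under opaque ids; the ordered context map (held elsewhere) is the
    only record of how the fragments relate."""
    words = document.split()
    shards = {n: [] for n in range(nodes)}
    context_map = []  # ordered (node, opaque_id) pairs: the 'context'
    for i, word in enumerate(words):
        node = random.randrange(nodes)
        opaque_id = hashlib.sha256(f"{i}:{word}".encode()).hexdigest()[:12]
        shards[node].append((opaque_id, word))
        context_map.append((node, opaque_id))
    return shards, context_map

def reassemble(shards, context_map):
    """Only a holder of both the shards and the context map recovers meaning."""
    lookup = {oid: w for contents in shards.values() for oid, w in contents}
    return " ".join(lookup[oid] for _, oid in context_map)

shards, ctx = discrete_obfuscation("separate knowledge from its context")
print(reassemble(shards, ctx))  # original sentence restored only with the map
```

An attacker who compromises one storage node sees an unordered bag of fragments with opaque ids; without the context map, the contextual bonds are cut.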

Practical Big Data Security Solutions

Practical Big Data security begins with a simple understanding of knowledge and context. Understanding through observation is a natural human trait that has been studied for decades, and the process of learning is relatively simple in nature: we observe and discover, decompose, and reduce information to something we believe we can understand. We understand by comparing, contrasting, associating, and normalizing the content we ingest, and then we store it as a memory, as described above. That stored memory might be a procedure, such as how to open a door or pour a glass of water, or something more complex, such as how to drive a car. A system built for practical applications must perform similar tasks (see Chapters 1 and 10).
The information environment a system sees is defined in terms of information artifacts and knowledge. An information artifact is any information perceived or observed but not yet understood. Knowledge components are relationships created between two or more pieces of information that have crossed a relative-importance threshold and become established as something worth remembering within the mind of the stakeholder. The information has become important enough, or has matured enough, for a stakeholder or system to acknowledge the need to retain it, along with the associated characteristics of the relationship. Knowledge relativity threads (KRTs) (Crowder and Carbone, 2011a), as discussed earlier, can be applied to any domain in which the n-dimensional weighted relationship creation of knowledge is of interest (see Figure 14.8). The multistep process is similar to how humans assemble knowledge over time, for example when using a search engine, constantly refining what we learn.
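A minimal sketch of the threshold idea: a relationship between two artifacts accumulates weighted dimensions and is promoted to an established knowledge component only once its aggregate importance crosses the retention threshold. The class shape, dimension names, and averaging rule are our illustrative assumptions, not the authors' formulation.

```python
class KnowledgeRelativityThread:
    """A weighted n-dimensional relationship between two information artifacts."""

    def __init__(self, a, b, threshold=0.5):
        self.artifacts = (a, b)
        self.weights = {}          # dimension name -> weight in [0, 1]
        self.threshold = threshold

    def reinforce(self, dimension, weight):
        """Strengthen (never weaken) the relationship along one dimension."""
        self.weights[dimension] = max(self.weights.get(dimension, 0.0), weight)

    @property
    def importance(self):
        """Aggregate relative importance across all weighted dimensions."""
        return sum(self.weights.values()) / max(len(self.weights), 1)

    def is_knowledge(self):
        """Established as a knowledge component once importance crosses the threshold."""
        return self.importance >= self.threshold

krt = KnowledgeRelativityThread("engine vibration", "bearing wear")
krt.reinforce("temporal", 0.4)
print(krt.is_knowledge())   # False: not yet important enough to retain
krt.reinforce("causal", 0.9)
print(krt.is_knowledge())   # True: (0.4 + 0.9) / 2 = 0.65 crosses 0.5
```

The same object can keep accumulating dimensions over time, which mirrors the search-engine analogy of progressively refined learning.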
Thus, if KRTs can give us context, cutting threads would remove context and not allow a person or a system to understand. Hence, decomposition supports security because it is the act of slicing the contextual bonds of a relationship between two information artifacts, or what we denote as DO. For example, a document can be sliced into paragraphs, paragraphs can be sliced into sentences, sentences can be sliced into words, and words can be sliced into characters. A digital picture can be sliced into objects within the picture, the objects can be sliced into pixels, and the pixels can be sliced into numerical values.
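The document example above can be written down directly. This is only the textual half of the example (the picture-to-pixels half would follow the same pattern), with naive splitting rules standing in for real parsing:

```python
def slice_document(doc: str):
    """Cut contextual bonds level by level, as in the DO example:
    document -> paragraphs -> sentences -> words -> characters."""
    paragraphs = [p for p in doc.split("\n\n") if p.strip()]
    sentences = [s.strip() for p in paragraphs for s in p.split(".") if s.strip()]
    words = [w for s in sentences for w in s.split()]
    characters = [c for w in words for c in w]
    return paragraphs, sentences, words, characters

doc = "Threads give context. Cutting them removes it.\n\nSecurity follows."
p, s, w, c = slice_document(doc)
print(len(p), len(s), len(w))  # 2 paragraphs, 3 sentences, 9 words
```

At each level the fragments remain individually readable, but the relationships binding them into a document are what get severed; storing levels apart is what denies an observer the context.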
We must also understand the concept of knowledge component contribution: each knowledge component is a function of its subcomponents, each knowledge component has independent value, and each subcomponent contributes to the overall value of its parent's context; hence, understanding equals the amount of knowledge and context acquired. Knowledge and context are generally mission/activity focused and are created and aggregated as, for example, folders, files, pictures, or databases. Hence, the more an attacker sees, knows, or learns about you, your mission, or your system, the more vulnerable you become. A secure system therefore separates knowledge from context: the more separation or anonymity is created, the less understanding an attacker gains and the less damage an attacker can achieve. Finally, a system can be made more secure by recognizing that understanding content can be just a matter of time. Time is not necessarily your enemy; it can be your best friend, because information content and learned knowledge often have an expiration date. Therefore, assess your system and its proposed uses and always inject time into the equation to determine whether content is valuable enough to be retained and how long it needs to be secured. Copyrights and patents can expire; so can your data.
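Both ideas in this paragraph, weighted subcomponent contribution and time-bounded protection, reduce to short formulas. The weighting scheme and the helper names below are our own illustrative assumptions:

```python
import time

def component_value(subvalues, weights):
    """Each subcomponent contributes its weighted share to the overall
    contextual value of its parent knowledge component."""
    return sum(v * w for v, w in zip(subvalues, weights)) / sum(weights)

def still_worth_securing(value, created, lifetime_s, now=None):
    """Inject time into the assessment: content past its expiration
    date no longer justifies the cost of securing it."""
    now = time.time() if now is None else now
    return value > 0.0 and (now - created) < lifetime_s

# A parent component with three subcomponents, the first weighted double.
v = component_value([0.9, 0.4, 0.7], [2.0, 1.0, 1.0])
print(round(v, 3))  # 0.725
print(still_worth_securing(v, created=0, lifetime_s=3600, now=7200))  # False: expired
```

In practice the lifetime would come from the assessment the text recommends (mission duration, patent term, news cycle), not a fixed constant.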
Figure 14.8 Knowledge relativity threads.

Optimization of Sociopolitical-Economic Systems and Sentiment Analysis

Sociopolitical and economic systems are characterized by many interconnecting parts. Such non-technical systems are often difficult to understand because of natural ambiguities, many unclear dependencies, and an inability to agree on the actual problems and effective solutions. Hence, much understanding remains superficial, when what is needed to generate compelling solutions is real analysis that minimizes what is open to interpretation.
Economics and its related cycles are generally well-known phenomena. As new technologies periodically drive the marketplace, various time-dependent combinatorial complexities are at work (Suh, 2005). However, analysis of current sentiment is usually after the fact, essentially counting how much has been purchased within a given period and using that information as a predictor for the following year; hence, it is not an exact science. To achieve a credible level of predictability, we require a higher-fidelity understanding of the complexities, dependencies, and sentiment involved. Knowledge relativity threads can represent these complexities in many dimensions beneath the surface of such systems. n-Dimensional capture and collection of large content is also facilitated by parallel coordinates (Inselberg and Dimsdale, 1991), which can rapidly present n-dimensional relationships to human understanding in two or three dimensions. Remember that the presentation of n-dimensional relationships traditionally breaks down quickly at dimension 3 or 4. Figure 14.8 presents a time segment of the complexities of knowledge context creation for a concept in biology known as phenotypic arrays. The two-dimensional shapes depicted show the growth over time of the weighted relationships captured throughout the learning process, in which shape size, line length/closeness, and location all give context to learned perceptions of the biology article in question.
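The parallel-coordinates mapping referenced above is simple to state: each dimension gets its own vertical axis, and each n-dimensional point becomes a polyline across those axes. The function below is a generic sketch of that mapping (Inselberg and Dimsdale, 1991), not code from the chapter's system:

```python
def to_parallel_coordinates(points, dim_ranges):
    """Map n-dimensional points to 2-D polylines: one vertical axis per
    dimension at x = 0, 1, ..., n-1; each point becomes the polyline of
    its normalized values across those axes."""
    polylines = []
    for point in points:
        line = []
        for x, (value, (lo, hi)) in enumerate(zip(point, dim_ranges)):
            y = (value - lo) / (hi - lo)  # normalize each axis onto [0, 1]
            line.append((x, y))
        polylines.append(line)
    return polylines

# Two 4-dimensional relationship-weight vectors become two readable polylines.
points = [(0.2, 5.0, 30.0, 1.0), (0.8, 2.0, 10.0, 0.0)]
ranges = [(0, 1), (0, 10), (0, 100), (0, 1)]
for line in to_parallel_coordinates(points, ranges):
    print(line)
```

Because the x positions are just axis indices, the dimensionality of the input never changes the dimensionality of the display, which is exactly why the technique sidesteps the breakdown at dimension 3 or 4.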
If one applies RNA processes depicted in Figure 14.8 (e.g., discovery, decomposition) to sentiment analysis, the output derived is a weighted contribution of elements, a kind of volumetric representing the corpus of what has been learned as a pictorial analogy to chemistry, a molecule of knowledge. Sentiment analysis is a growing field of analytics in the Big Data world, but it has been part of economic growth measures for many years (see Chapters 2 and 9).
A subcategory and higher specialization of sentiment analytics is the analysis of facial expressions to determine human a priori and real-time sentiment relative to a given situation. Imagine combining multiple data points such as voice, breathing, heart rate, and perspiration with facial recognition; the result could be much higher-resolution prediction. A Transportation Security Administration representative at an airport could benefit greatly from understanding the real-time disposition of passengers. How do we model these dependencies to achieve a compelling level of predictability and weed out false detections? Using RNA, the growth of any knowledge molecule contains the full corpus of all perceptions and their weighted importance over time; hence, from a given time t to t + n, relationships are added, modified, and deleted. Figure 14.9 depicts a sample RNA graphic showing the kinds of information an evolving sentiment analysis KRT could hold as a human or a system sifts through facial contours or external environmental content to reach a knowledge density conclusion about the emotional state of a given individual relative to Sentiment 1. The system should continuously evolve through various parallel hypotheses across many possible sentiments, forming a more or less dense context and, ultimately, an understanding of its environment.
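The parallel-hypothesis evolution from t to t + n can be sketched as a running update over weighted evidence. The update rule, signal names, and learning rate are our illustrative assumptions; a real system would fuse voice, breathing, heart rate, perspiration, and facial data with far more care:

```python
def update_hypotheses(hypotheses, observation, learning_rate=0.3):
    """Evolve parallel sentiment hypotheses as weighted signals arrive:
    each observation (sentiment -> evidence strength in [0, 1]) nudges
    that hypothesis's context density toward the new evidence."""
    for sentiment, evidence in observation.items():
        prior = hypotheses.get(sentiment, 0.0)
        hypotheses[sentiment] = prior + learning_rate * (evidence - prior)
    return hypotheses

# t .. t+n: relationships are added and modified as new signals arrive.
h = {"calm": 0.5, "agitated": 0.5}
for obs in [{"agitated": 0.9},                 # elevated heart rate
            {"agitated": 0.8, "calm": 0.2},    # facial contours, perspiration
            {"agitated": 0.9}]:                # rapid breathing
    update_hypotheses(h, obs)
best = max(h, key=h.get)
print(best)  # the densest context among the parallel hypotheses
```

Hypotheses that keep receiving supporting evidence grow denser while unsupported ones decay toward their evidence, which is the weeding-out of false detections the text calls for.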
Figure 14.9 Sentiment analysis using knowledge relativity threads.

Conclusions

We believe the framework presented in this chapter provides an AI architecture and methodology that will allow autonomous operations. The ISA architecture, combined with the cognitive structures described here, has the potential to radically change and enhance autonomous systems in the future. More work is needed to refine the agent technologies and learning sets, but we believe the approach holds much promise.
We described memory processing and encoding methodologies that provide AISs with memory architectures and with processing, storage, and retrieval constructs similar to human memories. We believe these are necessary for artificial cognitive structures that can truly learn, reason, think, and communicate as humans do. There is much work to do; our current research will provide the software processing infrastructure for the ISAs needed to create the underlying cognitive processing this artificial neural memory system requires, overlaid onto a Big Data infrastructure that implements security with significantly higher levels of fidelity to match growing asymmetric threats.

References

Bednar A, Cunningham D, Duffy T, Perry J. Theory into practice: how do we link? In: Duffy T.M, Jonassen D.H, eds. Constructivism and the Technology of Instruction: A Conversation. Hillsdale, NJ: Lawrence Erlbaum Associates; 1998:17–35.

Botella L. Personal Construct Psychology, Constructivism, and Postmodern Thought. Available at: http://www.massey.ac.nz/~alock/virtual/Construc.htm.

Brillouin L. Science and Information Theory. Dover; 2004.

Crowder J.A. X33/RLV Autonomous Reusable Launch System Architecture NASA Report 96-RLF-1.4.5.5-005. Littleton, CO: Lockheed Martin; 1996.

Crowder J.A. The continuously recombinant genetic, neural fiber network. In: Proceedings of the AIAA Infotech@Aerospace-2010, Atlanta, GA. 2010.

Crowder J.A. Flexible object architectures for hybrid neural processing systems. In: Proceedings of the 11th International Conference on Artificial Intelligence, Las Vegas, NV. 2010.

Crowder J.A, Carbone J. Recombinant knowledge relativity threads for contextual knowledge storage. In: Proceedings of the 12th International Conference on Artificial Intelligence, Las Vegas, NV. 2011a.

Crowder J.A, Carbone J. Transdisciplinary synthesis and cognition frameworks. In: Proceedings of the Society for Design and Process Science Conference 2011, Jeju Island, South Korea. 2011b.

Crowder J, Carbone J. Reasoning Frameworks for Autonomous Systems. In: Proceedings of the AIAA Infotech@Aerospace 2012 Conference, Garden Grove, CA. 2012.

Crowder J, Friess S. Artificial psychology: the psychology of AI. In: Proceedings of the 3rd International Multi-conference on Complexity, Informatics, and Cybernetics, Orlando, FL. 2012.

Crowder J, Raskin V, Taylor J. Autonomous creation and detection of procedural memory scripts. In: Proceedings of the 13th Annual International Conference on Artificial Intelligence, Las Vegas, NV. 2012.

Coutaz J, Crowley J, Dobson S, Garlan D. Context is key. Communications of the ACM. 2005;48:53.

Dourish P. Where the Action Is: The Foundations of Embodied Interaction. The MIT Press; 2004.

Dourish P. What we talk about when we talk about context. Personal and Ubiquitous Computing. 2004;8:19–30.

Dey A. Understanding and using context. Personal and Ubiquitous Computing. 2001;5:4–7.

Ejigu D, Scuturici M, Brunie L. Hybrid approach to collaborative context-aware service platform for pervasive computing. Journal of Computers. 2008;3:40.

Hong J, Landay J. An infrastructure approach to context-aware computing. Human–Computer Interaction. 2001;16:287–303.

Howard N, Qusaibaty A. Network-centric information policy. In: Proceedings of the Second International Conference on Informatics and Systems. 2004.

Inselberg A, Dimsdale B. “Parallel Coordinates.” Human-Machine Interactive Systems. US: Springer; 1991 199–233.

Kvale S. Psychology and Postmodernism. Thousand Oaks, CA: Sage Publications; 1992.

LaBar K.S, Cabeza R. Cognitive neuroscience of emotional memory. Nat. Rev. Neurosci. 2006;7:54–64.

Mahoney M. Constructive Psychotherapy: A Practical Guide. New York, NY: The Guilford Press; 2003.

Suh N.P. Complexity Theory and Applications. Oxford University Press; 2005.

Torralba A. Contextual priming for object detection. International Journal of Computer Vision. 2003;53:169–191.

Winograd T. Architectures for context. Human–Computer Interaction. 2001;16:401–419.

Yang Y, Raine A. Prefrontal structural and functional brain imaging findings in antisocial, violent, and psychopathic individuals: a meta-analysis. Psychiatry Res. November 2009;174(2):81–88. doi:10.1016/j.pscychresns.2009.03.012. PMID 19833485.


1 Spatial in this context can refer to geographic locations (either two- or three-dimensional), cyber-locations, or other characteristics that may be considered spatial references or characteristics.
