20

 

 

Perspectives on the Human Side of Data Fusion: Prospects for Improved Effectiveness Using Advanced Human–Computer Interfaces

 

David L. Hall, Cristin M. Hall, and Sonya A. H. McMullen

CONTENTS

20.1   Introduction

20.2   Enabling Human–Computer Interface Technologies

20.2.1   Three-Dimensional Visualization Techniques/Environments

20.2.2   Sonification

20.2.3   Haptic Interfaces

20.3   The Way Ahead: Recommendations for New Research Directions

20.3.1   Innovative Human–Computer Interface Designs

20.3.2   Cross-Fertilization

20.3.3   Mining the Abnormal

20.3.4   Quantitative Experiments

20.3.5   Multisensory Experiments

References

 

 

20.1   Introduction

Even before the 9/11 attacks, the U.S. military and intelligence communities had expended enormous resources to develop new types of sensors and surveillance methods. Advanced sensors range from nanoscale, smart sensors in distributed sensor networks (Smart Dust)1 to national-level sensors collecting image and signal data. These investments have enabled an ever-increasing ability to collect data in a vast “vacuum cleaner” approach. The concept is illustrated in Figure 20.1.

It is tempting to view the fusion process as primarily a data- or energy-driven process; that is, energy is collected by sensors (either actively or passively) and transformed into signals, images, and scalar or vector quantities. These quantities, in turn, are translated into state vectors, labels, and interpretive hypotheses by automated estimation and reasoning processes. Ultimately, one or more humans observe the results to develop an understanding of an event or situation. Indeed, this is a common (implicit) view of the information fusion process. In Figure 20.1, the “level” (e.g., level 0, level 1, etc.) refers to the levels of functions identified in the Joint Directors of Laboratories (JDL) data fusion process model.2*


FIGURE 20.1
Transformation of energy into knowledge.

Extensive research in the data fusion community has been conducted to develop techniques for level 1 fusion (the data or sensing side of the process). The bulk of the literature on multisensor data fusion focuses on the automation of target tracking and automatic target recognition.3* Although such research is needed, current problems involve complexities such as identifying and tracking individuals and groups of people, monitoring global activities, and recognizing events that may be precursors to terrorist activities. The requisite data for such analysis include sensor data (signals, images, and vector quantities), textual information (from web sites and human reports), and models. This process of analysis is very human intensive, requiring teams of analysts to search for data, interpret the results, develop alternative hypotheses, and assess the consequences of those hypotheses.

Our perception is that, by and large, researchers have started at “the wrong end” of the data fusion process. That is, researchers have started at the input side and sought to address methods for processing sensor data to automatically develop a situation database and display (e.g., target tracks and icons representing target identification). We argue that research on the user side of the data fusion process has been relatively neglected. It should be noted that the level 5 fusion concept was introduced in 2000 by Hall et al.,4 in part, to call attention to the need to focus on the human side of the fusion process. Level 5 processing recognizes that the data fusion system needs to interact actively with a user to guide the focus of attention, assist via cognitive aids, and improve the human-in-the-loop processing of multisensor data.

Analysts face a situation in which they are immersed in a sea of data (drowning in data) but thirst for knowledge about the meaning of the data. Although a number of researchers have begun to address this problem,5–8 there is a continued need for cognitive aids to support the process of analysis. We have previously suggested a combination of new directions to improve the human side of fusion, including three-dimensional (3D) visualization, intelligent advisory agents, and interactive gaming techniques.9 In this chapter, we explore recent advances in human–computer interaction technologies that provide a basis for significant improvements on the human side of data fusion, and we make recommendations for research directions.

We suggest that the current situation is analogous to a pilot attempting to fly an aircraft by issuing quantitative commands directly to the aircraft control structures (e.g., move the left aileron down 3.7°, advance the engine throttles by 23%). It would be literally impossible to fly an aircraft under such conditions. Instead, pilots fly modern aircraft “by wire”: computer interfaces map the pilot’s physical motions into commands that control physical effectors such as the flaps. By analogy, we still require that human users interact with databases and search engines by creating Boolean queries with specific terms (e.g., find textual information that contains the terms Fallujah, weapon systems, and terrorist). It is, therefore, not surprising that interaction with huge databases frustrates analysts, and that searches involve extensive work that focuses the analyst’s attention on the data interaction rather than on the inference process. New methods are required to improve the interaction of humans with data fusion systems and to increase the effectiveness of the “human-in-the-loop” in a fusion process.

 

 

20.2   Enabling Human–Computer Interface Technologies

The rapid evolution of human–computer interface (HCI) technologies provides opportunities for new concepts of how analysts/users could interact with a data fusion system.10 Key technological advances include 3D immersive visualization environments, sonification, and haptic interfaces. A brief overview of these technologies and their application to data fusion is provided in Sections 20.2.1 through 20.2.3.

20.2.1   Three-Dimensional Visualization Techniques/Environments

Perhaps the most spectacular visual display concept is that of the full-immersion display, or virtual reality. PC Magazine’s online encyclopedia11 defines virtual reality as “an artificial reality that projects you [the user] into a 3D space generated by a computer” (p. 1). PC Magazine further states, “virtual reality (VR) can be used to create an illusion of reality or imagined reality and is used both for amusement as well as serious training” (p. 1). In 1991, Thomas DeFanti and Dan Sandin conceived the CAVE virtual reality system.12 The prototype CAVE was developed and tested at the Electronic Visualization Laboratory at the University of Illinois at Chicago the following year. The CAVE is essentially a small room,

approximately ten feet wide on each side, with three walls and a floor. Projectors, situated behind the walls, projected computer-generated imagery onto the walls and floor. Two perspective-corrected images were drawn for each frame, one for the right eye and one for the left. Special glasses were worn that ensured that each eye saw only the image drawn for it. This created a stereoscopic effect in which the depth information encoded in the virtual scene was restored and conveyed to the eyes of those using it.12 (paragraph 2)

Figure 20.2 shows the three-sided CAVE.


FIGURE 20.2
CAVE at the Electronic Visualization Laboratory (http://cave.ncsa.uiuc.edu/about.html, 2000–2004).

Leetaru reports that,

The CAVE works by reproducing many of the visual cues that your brain uses to decipher the world around you. Information such as the differing perspectives presented by your eyes, depth occlusion, and parallax (to name a few) are all combined into the single composite image that you are conscious of while the rest is encoded by your brain to provide you with depth clues. The CAVE must reproduce all of this complex information in real-time, as you move about in the CAVE.12 (paragraph 5)

The user(s) immersed in the CAVE can visualize complex images and data in three dimensions, similar in nature to the “Holodeck” of Star Trek: The Next Generation.13 Unlike the Holodeck, however, the CAVE requires the user to wear special shutter glasses that present a separate image to each eye. The user also controls the visualization using a device (a wand, glove, or full-body suit) that is tracked by the computer system through electromagnetic sensors, allowing presentation of the proper viewpoint of the visualization scene. A key advantage of the CAVE technology over head-mounted 3D displays is the ability of multiple observers to see the 3D display and each other simultaneously. This provides an opportunity for multiple analysts to collaborate on an assessment of data, directly comparing notes and literally pointing out observations to each other.

The CAVE is only one of a number of virtual reality systems available. For example, the Infinity Wall provides a one-wall display that gives a 3D visual illusion, whereas the ImmersaDesk, developed by the Electronic Visualization Laboratory, allows a user sitting in front of a terminal to see a 3D illusion.14 The Elumen Company has developed a desktop system that provides a full-color, immersive display with 360° projection and a 180° field of view. The hemispherical screen is positioned vertically so as to fill the field of view of the user, creating a sense of immersion. The observer loses the normal depth cues, such as edges, and perceives 3D objects beyond the surface of the screen.

There are clear applications of 3D visualization techniques to multisensor data fusion. The most obvious example is simply creating 3D situation displays of various kinds. Researchers at Penn State University7,9,10 have developed 3D displays that use the third dimension (height above the floor, on which a situation map is displayed) to represent time of day or time of year. In this example (see Figure 20.3), multisource reports related to the same geographic area are shown in a column above the situation map. Thus, data concerning a specific geolocation appear in a column above the map, whereas data occurring at the same time of day or time of year appear in a plane parallel to the situation map. In these displays, data falling within a sphere or ellipsoid lie in a constrained geo-spatial-temporal volume of interest; a minimal code sketch of this mapping appears after Figure 20.3. Other experiments we have conducted use height above the floor (or map) to represent different levels of abstraction or processing; for example, different layers of processing/pattern recognition are shown in different hierarchical layers.


FIGURE 20.3
Example of three-dimensional display using height to represent time.
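
The time-as-height mapping described above can be illustrated with a short sketch. The following Python fragment is a minimal illustration only, not the Penn State implementation; the report fields and sample values are assumptions made for the example. Latitude and longitude place each report on the map plane, and hour of day becomes height above that plane, so reports about the same geolocation stack into a vertical column.

    # Minimal sketch of the time-as-height display idea. The report
    # tuples (longitude, latitude, hour of day, source) are assumed,
    # illustrative values, not real data.
    import matplotlib.pyplot as plt

    reports = [
        (44.35, 33.35, 2.0, "SIGINT"),
        (44.36, 33.34, 2.5, "HUMINT"),
        (44.35, 33.36, 14.0, "IMINT"),
    ]

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    for lon, lat, hour, source in reports:
        ax.scatter(lon, lat, hour, label=source)
        # A vertical stem produces the geolocation "column" described above.
        ax.plot([lon, lon], [lat, lat], [0.0, hour], linestyle=":")

    ax.set_xlabel("Longitude")
    ax.set_ylabel("Latitude")
    ax.set_zlabel("Hour of day")
    ax.legend()
    plt.show()

Reports occurring at the same time of day would then appear in a horizontal plane, and a geo-spatial-temporal volume of interest becomes an ellipsoid in this coordinate system.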

Other related research at Penn State University includes using 3D representations of data to allow a user to “surf through” large data sets. An example is shown in Figure 20.4. In this figure, provided by Wade Shumaker,15 thousands of data points can be observed simultaneously by stacking the data in a translucent data cube and rendering key data (e.g., reports that meet specified criteria) in different colors and different degrees of transparency, as sketched below.
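
The translucent data cube idea can be conveyed with another minimal sketch, again an assumption-laden illustration rather than Shumaker’s system: all points are drawn nearly transparent, and reports that meet a filter criterion are redrawn opaque and colored so that they stand out within the cube.

    # Minimal sketch of a translucent data cube: 5000 synthetic reports,
    # with an assumed filter criterion highlighting the "key" reports.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    x, y, z = rng.uniform(0.0, 1.0, (3, 5000))
    hits = (x > 0.6) & (z < 0.3)          # assumed criterion of interest

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    # Nonmatching reports form a faint, translucent cloud ...
    ax.scatter(x[~hits], y[~hits], z[~hits], s=2, alpha=0.03, color="gray")
    # ... while matching reports are rendered opaque and colored.
    ax.scatter(x[hits], y[hits], z[hits], s=6, alpha=0.9, color="red")
    plt.show()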

Finally, Seif El-Nasr and coworkers9 have explored techniques to map a strategy-game interface into the analyst domain and to measure its utility in enhancing analyst productivity, speed, and the quality of the hypotheses generated. Several previously explored visualization techniques for analysts have assumed a detached point of view based on a geographic information system (GIS),16–19 similar to the visualization methods used in strategy games. These interfaces have evolved to adopt a minimal, context-sensitive design for visualizing dynamic and static information and events. The research focuses on developing mappings between analysts’ data and the types of displays or visualizations currently used by strategy games. The goal is to use gaming techniques as well as cinematic techniques to depict analyst-related data visually in a way that allows analysts to quickly grasp the situation and the event.


FIGURE 20.4
Example of three-dimensional display of large data set. (From Shumaker, W., Private communication to D.L. Hall, September, 2007.)

20.2.2   Sonification

Another avenue for interaction with computers involves the concept of sonification: converting data into sound20 for an auditory display. A number of software packages exist to support such transformations. These “audio displays” can be used to improve the ability of users to perform pattern recognition in complex data. For example, Ballora et al.21 have developed a method using digital music software to transform the sequence of intervals between successive human heartbeats into an electroacoustic sound track. They demonstrated the ability of nonspecialists (with limited training) to readily distinguish among the following cardiac conditions: (1) healthy heart behavior, (2) congestive heart failure, (3) atrial fibrillation, and (4) obstructive sleep apnea.

In the defense community, sonar operators have long used a combination of sound (listening to transformed acoustic signatures) and visual displays of acoustic spectral patterns to detect, classify, and characterize underwater phenomena such as submarines and whale sounds.22 In recent work at Penn State University, we have conducted experiments using sonification to characterize uncertainty. A Geiger counter analogy has been used in a 3D visualization environment to represent the uncertain location of a target using 3D sound. Geiger counter sounds allow a user to understand that the location of a target lies within an uncertainty region rather than at a single point. This tends to mitigate an effect of visual displays in which an error ellipsoid is read as a “bull’s eye”; namely, the uncertainty region tends to be ignored and the target location is assumed to be at the center of the error ellipsoid rather than somewhere within the error region.

Finally, we have also experimented with sonification to provide an aural representation of correlation. In a 3D visualization environment, our application allows users to retrieve multisource data reports about a situation, event, or activity. After the user specifies a set of data presumed to be correlated, we compute a general measure of correlation (MOC) on the basis of common report elements such as location, time, and identity characteristics.23 The MOC is translated into a harmonic sound, with the degree of harmony based on the level of correlation: highly correlated reports produce a pleasing, harmonious sound, whereas uncorrelated reports produce a discordant sound. Thus, a user can quickly determine whether his or her hypothesis regarding the relationship of the reports is well founded. A minimal illustration of this mapping follows.
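
The following fragment is a simplified stand-in, not the MOC sonification itself: it assumes a correlation score in [0, 1] and detunes a second tone against a reference tone, so that high correlation yields a consonant perfect fifth (a 3:2 frequency ratio) and low correlation yields a rough, beating interval.

    # Minimal sonification sketch: map an assumed correlation score in
    # [0, 1] to a two-tone interval and write it to a WAV file.
    import wave
    import numpy as np

    def sonify_correlation(moc, filename="moc.wav", seconds=2.0, rate=44100):
        moc = min(max(moc, 0.0), 1.0)
        t = np.linspace(0.0, seconds, int(rate * seconds), endpoint=False)
        base = 220.0                       # reference tone (A3)
        # moc = 1.0 -> ratio 1.5 (perfect fifth, consonant);
        # moc = 0.0 -> ratio ~1.06 (semitone-like clash, discordant).
        ratio = 1.06 + 0.44 * moc
        signal = (np.sin(2 * np.pi * base * t)
                  + np.sin(2 * np.pi * base * ratio * t))
        samples = (0.4 * signal / np.max(np.abs(signal)) * 32767).astype(np.int16)
        with wave.open(filename, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)              # 16-bit samples
            f.setframerate(rate)
            f.writeframes(samples.tobytes())

    sonify_correlation(0.95)                     # harmonious: reports agree
    sonify_correlation(0.10, "moc_low.wav")      # discordant: reports conflict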

20.2.3   Haptic Interfaces

Haptic interfaces are devices that use controlled force feedback to give a user the sensation of actually touching an object rendered by a computer. New research is being conducted on the mechanics, ergonomics, and utility of haptic interfaces (see http://www.hapticssymposium.org/). Devices range from single “pen-like” devices to gloves and other configurations. Using a pen configuration, for example, a user can observe a figure on a computer screen, obtain a sense of touch of the object, and manipulate the object by carving, shaving, or gouging its surface. The apparent density and surface hardness of the figure can be readily changed via a slider scale. There has been extensive research on haptic interfaces for medical applications such as training surgeons. An example of the use of such an interface for data fusion would be to allow a user to “touch” the surface of an error ellipsoid to obtain a sense of the second-order uncertainty (viz., the uncertainty of the uncertainty). A soft or “squishy” error surface would indicate that the uncertainty is not well known, whereas a relatively hard surface would indicate a well-known error boundary; a simple sketch of such a mapping follows.
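
The function below is hypothetical: it assumes a scalar confidence in [0, 1] describing how well the error covariance itself is known, and maps it to a surface stiffness that a haptic renderer would apply to the ellipsoid. The stiffness range is an arbitrary illustrative choice, not a real device API.

    # Hypothetical mapping from second-order uncertainty to haptic
    # stiffness: poorly known covariance -> soft ("squishy") surface,
    # well-known covariance -> hard surface. Units and range assumed.
    def stiffness_from_confidence(covariance_confidence, soft=50.0, hard=1000.0):
        """covariance_confidence in [0, 1]; returns stiffness in N/m."""
        c = min(max(covariance_confidence, 0.0), 1.0)
        return soft + c * (hard - soft)    # linear interpolation

    # A track whose error ellipsoid rests on only a few sensor updates
    # would feel soft to the touch:
    stiffness = stiffness_from_confidence(0.2)     # 240.0 N/m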

 

 

20.3   The Way Ahead: Recommendations for New Research Directions

To make significant progress to improve the effectiveness of human-centered information fusion, we recommend several approaches (to be conducted in parallel).

20.3.1   Innovative Human–Computer Interface Designs

Innovative designs should be sought to support data fusion and understanding. Although this appears obvious, there is a tendency to simply reformat the same situation displays using the “display technology du jour.” What is needed is true innovation, analogous to Richard Feynman’s invention of Feynman diagrams to represent the interaction of fundamental particles in physics (see http://scienceworld.wolfram.com/physics/FeynmanDiagram.html). Feynman’s diagrams transformed the way physicists understood particle interactions and created a new analysis approach. Ironically, Feynman24 reported that he originally developed the diagrams to help himself understand new developments in particle physics. We need the equivalent of Feynman diagrams for situation awareness and threat assessment. Similarly, Tufte25 has made a career of showing how creative ways of displaying information provide new insights, offering multiple examples of how new graphic representations lead to new analysis results. It is unclear how to elicit or motivate such innovative designs; possible approaches include conducting contests, analogous to creative writing or art contests, to solicit radically new ways of envisioning data.

20.3.2   Cross-Fertilization

Another potential area for improved design is cross-fertilization: applying data analysis and visualization methods from one domain to another. Could a standard tool such as the Hertzsprung-Russell diagram (see, for example, http://aspire.cosmic-ray.org/labs/star_life/hr_diagram.html), used routinely in astronomy to analyze stellar structure, be applied to aspects of data fusion? Could techniques used to understand gene phenomena in biology be useful for situation assessment? A systematic attempt at cross-fertilization might be instructive and a source of new visualization and analysis techniques. An overview of the numerous types of graphic displays is provided by the “periodic table of visualization” (see http://www.visual-literacy.org/periodic_table/periodic_table.html).

20.3.3   Mining the Abnormal

Inferences about human cognition and perception have often been drawn from observations of abnormal or pathological functioning. The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR, 2000)26 defines many mental health disorders that include changes in perception and thought process. Psychotic disorders such as schizophrenia include reports of delusions, hallucinations, and heightened sensitivity to stimuli. These disorders are also referred to as thought disorders in clinical psychology because of the tendency for delusions (ideas that are believed but have no basis in reality) to affect behavior. Hallucinations, or perceptions that occur without stimulation, have been reported most commonly for hearing and sight, although olfactory (smell) and tactile (touch) hallucinations have also been reported. According to the DSM-IV, there are no laboratory findings that can diagnose schizophrenia; however, anatomic differences have been observed between those with the disorder and those without.

Although psychotic disorders are most closely associated with perceptual and thought processes, other disorders have significant associated sequelae. Anxiety disorders such as posttraumatic stress disorder (PTSD) include symptoms of heightened arousal and overreaction to certain stimuli and have been associated with problems of concentration and attention. Depressive mood disorders have also been shown to involve problems with concentration.

Another point of insight might be an examination of those with autism spectrum disorders (ASD). Formerly characterized as “idiot savants” or “autistic savants,” some individuals with a higher-functioning form of autism known as Asperger’s disorder have social interaction problems, difficulty interpreting social cues, and associated repetitive and stereotyped behaviors. These behaviors are thought to be self-stimulating and include rocking, hand-flapping, preoccupation with parts of objects, and inflexible adherence to routines. Some of these individuals may also have highly developed skills such as decoding (reading without necessarily comprehending), mathematics, and music. This is not a typical presentation of ASD; however, studying these behaviors and associated strengths through neurological and medical inquiry may help us understand normal behavior. Similarly, case studies of stroke, brain injury, and seizure disorders have provided invaluable insight into brain functioning, language processing, personality, and impulse control. These kinds of studies may help us more fully understand the capabilities and limitations of human reasoning and behavior. Ramachandran27 has provided examples of how the study of abnormal behavior and cognitive anomalies can lead to insights about perception in ordinary cognition.

20.3.4   Quantitative Experiments

The fields of cognitive psychology and even consumer marketing research have examined important limitations of human decision making. Level 5 data fusion poses a general problem that calls for empirical research bridging the gap between cognitive science and human–computer interfaces. The following heuristics and biases are a small sample of the weaknesses in human judgment that need to be systematically studied to optimize data visualization (see, for example, Ref. 28):

  1. Humans judge the probability of an uncertain event according to the representativeness heuristic, that is, based on how closely the event resembles the population from which it is drawn, rather than by considering actual base rates and the true likelihood of occurrence (e.g., airplane crashes are judged more likely than they actually are). A worked numerical example follows this list.

  2. Humans also judge based on how easily relevant instances of a phenomenon can be recalled (the availability heuristic) and have particular trouble when what is recalled confirms a belief about the self and the world. For example, the vast media coverage of airplane accidents and minimal coverage of automobile accidents contribute to fear of flying even though flying is statistically much safer; media coverage simply gives people more dramatic instances of airplane crashes than of automobile accidents to recall.

  3. Even the order and context in which options are presented influence the selection of an option. People tend to engage in risk-averse behavior when options are framed in terms of potential gains and risk-seeking behavior when the same options are framed in terms of potential losses.

  4. People tend to have an inflated view of their actual skills, knowledge, and reasoning abilities. Most individuals see themselves as “above average” in many traits, including sense of humor, driving ability, and other skills, even though not everyone who reports this way can be above average.

  5. Humans have a tendency to engage in confirmation-biased thinking, in which analysts tend to accept a conclusion if it seems true (based on the context in which it is presented) even when the supporting logic is actually flawed.29
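
To make the base-rate effect in item 1 concrete, consider a short worked example with assumed numbers: suppose an automated detector flags precursor events with 99% sensitivity and a 5% false-alarm rate, but true precursors occur in only 1 of every 10,000 reports. Bayes’ rule shows that a flagged report is still very unlikely to be a true precursor.

    # Worked base-rate example (all numbers assumed for illustration).
    prior = 1e-4          # P(precursor): 1 in 10,000 reports
    sensitivity = 0.99    # P(flag | precursor)
    false_alarm = 0.05    # P(flag | no precursor)

    p_flag = sensitivity * prior + false_alarm * (1 - prior)
    posterior = sensitivity * prior / p_flag
    print(f"P(precursor | flag) = {posterior:.4f}")   # ~0.0020, not 0.99

An analyst relying on the representativeness of the flag alone would grossly overestimate the threat; a display that surfaces base rates alongside alerts could help counteract this bias.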

One challenge in visualization and in understanding human–computer interaction as it applies to data fusion is the tendency to develop prototype displays or tools and then use simple expressions of user interest to determine their utility. Hall et al.29 have argued for a systematic program of experiments that can quantitatively assess the utility and effect of displays and cognitive tools to support data fusion. An example of such experiments is the work of McNeese et al., who have developed a “living laboratory” approach (http://minds.ist.psu.edu/) to evaluate cognitive aids and collaboration tools. McNeese has developed a tool called NeoCITIES that simulates an ongoing sequence of events or activities. The simulator provides an environment in which individuals or teams are presented with an evolving situation (with incoming data reports) and are required to make decisions regarding the situation and response. Thus, tools such as intelligent agents, collaboration tools, and visualization aids can be introduced, and their effect on the decision-making efficacy of the team can be observed.

20.3.5   Multisensory Experiments

Finally, we suggest that experiments be conducted in which the multisensory capabilities of the human user/analyst are deliberately exploited to improve data understanding. That is, experiments should be conducted in which a combination of 3D visualization, sonification, and haptic interfaces is used to improve data understanding and analysis. In addition, research should seek to simultaneously exploit human natural language ability along with sensory-based pattern recognition (viz., by automated generation of semantic metadata to augment traditional image and signal data). The techniques developed by Wang and coworkers30,31 are especially relevant for information fusion at the semantic level.32 Thus, multisource data and information could be interpreted using multisensory capabilities and multiple brain functions (language processing and pattern recognition). This approach should reduce the “impedance mismatch” between a human and the data, and assist in conserving the ultimate limited resource: human attention.

 

 

References

1. http://www-bsac.eecs.berkeley.edu/archive/users/warneke-brett/SmartDust/.

2. Steinberg, A. and C. L. Bowman, Revisions to the JDL data fusion model. In Handbook of Multisensor Data Fusion, D. Hall and J. Llinas (Eds.), CRC Press, Boca Raton, FL, 2001.

3. Hall, D. and A. Steinberg, Dirty secrets in multisensor data fusion. In Handbook of Multisensor Data Fusion, D. Hall and J. Llinas (Eds.), CRC Press, Boca Raton, FL, 2001.

4. Hall, M. J., S. A. Hall, and T. Tate, Removing the HCI bottleneck: How the human-computer interface (HCI) affects the performance of data fusion systems. In Handbook of Multisensor Data Fusion, D. Hall and J. Llinas (Eds.), CRC Press, Boca Raton, FL, 2001.

5. Yen, J., X. Fan, S. Sun, M. McNeese and D. Hall, Supporting antiterrorist analysis teams using agents with shared RPD Process. In Proceedings of the IEEE International Conference on Computational Intelligence for Homeland Security and Personal Safety, Venice, Italy, July 21–22, 2004.

6. Connors, E. S., P. Craven, M. McNeese, T. Jefferson, Jr., P. Bains, and D. L. Hall, An application of the AKADAM approach to intelligence analyst work. In Proceedings of the 48th Annual Meeting of the Human Factors and Ergonomics Society, Human Factors and Ergonomics Society, New Orleans, LA, 2004.

7. McNeese, M. D. and D. L. Hall, User-centric, multi-INT fusion for homeland defense. In Proceedings of the 47th Annual Meeting of the Human Factors and Ergonomics Society, Human Factors and Ergonomics Society, Santa Monica, CA, pp. 523–527, Oct 13–17, 2003.

8. Patterson, E. S., D. D. Woods, and D. Tinapple, Using cognitive task analysis (CTA) to seed design concepts for intelligence analysis under data overload. In Proceedings of the Human Factors and Ergonomics Society 43rd Annual Meeting, Human Factors and Ergonomics Society, Minneapolis, MN, 2001.

9. Hall, D., M. Seif El-Nasr, and J. Yen, A three pronged approach for improved data understanding: 3-D visualization, use of gaming techniques, and intelligent advisory agents. In Proceedings of the NATO N/X Conference on Visualization, Copenhagen, Denmark, October 17–20, 2006.

10. Hall, D. L., Increasing operator capabilities through advances in visualization. In 3rd Annual Sensors Fusion Conference: Improving the Automation, Integration, Analysis and Distribution of Information to the Warfighter, Washington, DC, Nov. 29–Dec. 1, 2006.

11. PC Magazine. (n.d.). Definition of: Virtual reality, Retrieved on 24 August 2007 from http://www.pcmag.com/encyclopedia_term/0,2542,t=virtual+reality&i=53945,00.asp.

12. Leetaru, K., The CAVE at NCSA: About the CAVE, Retrieved on August 4, 2007 from http://cave.ncsa.uiuc.edu/about.html, 2000–2004.

13. Roddenberry, G. (Creator) Star Trek: The Next Generation, Paramount Pictures, 1987.

14. Hall, S. A., An investigation of cognitive factors that impact the efficacy of human interaction with computer-based complex systems. A Human Factors Research Project Submitted to the Extended Campus in Partial Fulfillment of the Requirements of the Degree of Master of Science in Aeronautical Science, Embry-Riddle Aeronautical University, Extended Campus, 2001.

15. Shumaker, W., Private communication to D. L. Hall, September, 2007.

16. Risch, J. S., D. S. Rex, S. T. Dowson, T. B. Walters, R. A. May and B. D. Moon, The STARLIGHT information visualization system. In Proceedings of IEEE Conference on Information Visualization, 42, 1997.

17. Rex, B., Starlight Approach to Text Spatialization for Visualization, Cartography Special Group Sponsored Workshop, 2002.

18. Chen, H., H. Atabakhsh, C. Tseng, D. Marshall, S. Kaza, S. Eggers, H. Gowda, A. Shah, T. Peterson and C. Violette, Visualization in law enforcement. CHI 2005 Extended Abstracts on Human Factors in Computing Systems, Portland, OR, 2005.

19. Knowledge Computing Corporation, COPLINK White Paper, Tucson, AZ, 2005.

20. Madhyastha, T. M. and D. A. Reed, Data sonification: Do you see what I hear? IEEE Software, 12(2): 45–56, 1995.

21. Ballora, M., B. Pennycook, P. Ch. Ivanov, A. L. Goldberger, and L. Glass, Detection of obstructive sleep apnea through auditory display of heart rate variability. In Proceedings of Computers in Cardiology 2000, IEEE Engineering in Medicine and Biology Society, 2000.

22. Baker, L., Sub hunters: Detecting the enemy beneath the sea, Pakistan Daily Times, August 30, 2007.

23. Hall, D. and S. McMullen, Mathematical Techniques in Multisensor Data Fusion, ARTECH House, Boston, MA, 2004.

24. Feynman, R., Surely You’re Joking, Mr. Feynman!: Adventures of a Curious Character, W. W. Norton, New York, NY, 1997.

25. Tufte, E., The Visual Display of Quantitative Information, 2nd edn., Graphics Press, Cheshire, CT, 2001.

26. Diagnostic and Statistical Manual of Mental Disorders, 4th edn., Text Revision (DSM-IV-TR), American Psychiatric Association, Washington, DC, 2000.

27. Ramachandran, V. S., A Brief Tour of Human Consciousness: From Impostor Poodles to Purple Numbers, PI Press, Essex, UK, 2004.

28. Heuer, R., The Psychology of Intelligence Analysis, Norinka Books, CIA Center for the Study of Intelligence, Washington, DC, 1999.

29. Hall, C., S. A. H. McMullen, and D. L. Hall, Cognitive engineering research methodology: A proposed study of visualization analysis techniques. In Proceedings of the NATO Workshop on Visualizing Network Information, Copenhagen, Denmark, October 17–20, 2006.

30. Li, J. and J. Z. Wang, Automatic linguistic indexing of pictures by a statistical modeling approach, IEEE Trans. Pattern Anal. Machine Intell., 25(9): 1075–1088, 2003.

31. Wang, J. Z., J. Li, and G. Wiederhold, SIMPLIcity: Semantics-sensitive integrated matching for picture libraries, IEEE Trans. Pattern Anal. Machine Intell., 23(9): 947–963, 2001.

32. Hall, D. L., Beyond level “N” fusion: Performing fusion at the information level. In Proceedings of the MSS National Symposium on Sensor and Data Fusion, San Diego, CA, August 13–15, 2002.

* Also see Chapter 3 of this handbook.

* And Chapter 1 of this handbook.
