Chapter 18

Making Sense of the Noise

An ABC Approach to Big Data and Security

John N.A. Brown

Abstract

Processing Big Data for security is not a twenty-first-century problem. By the time our early ancestors had reached the size of cats some 40 million years ago, they had already developed the tools they needed to process the vast amounts of security-related data that fell on them in a steady stream of sounds, smells, tastes, sights, and feelings. The key was a reflex loop called a “corollary discharge cycle”: the routine with which all living creatures perceive, process, and adapt before perceiving again. At its simplest, this is how the proprioceptors in our joints allow us to stand or to control the placement of our hands and fingers. More complex versions are at the root of how we learn to run or read or interact socially. We reflexively filter out most of the data we perceive, automatically processing only what fits into anticipated patterns labeled important or unimportant. The data that cannot be processed reflexively are “bumped up” to a higher level of pre-attentive processing, where they are compared with known patterns of alarm so that they can either be diverted to the center of our conscious attention or flagged as false positives and returned to the periphery. In this chapter we will take a look at this most basic means by which we filter select elements from the noise around us and, more important, the means by which we recombine these elements into the meaningful signals that allow us to feel secure in our understanding of the world around us. It is our contention that this complex but natural process may be a guideline for the simplification of current processes for manipulating Big Data for security purposes.

Keywords

Anthropology-based computing; Attentive processing; Corollary discharge cycle; Feedback loops; Human–computer interaction; Pre-attentive processing; Reflexive processing

Omnis enim ex infirmitate feritas est. (For all cruelty springs from weakness.)

Seneca the Younger, from De Vita Beata: cap. 3, line 4

How Humans Naturally Deal with Big Data

Processing Big Data for security is not a twenty-first-century problem. In 1991, Mark Weiser proposed that in the near future, properly designed computers would help reduce information overload, and suggested that the solution lay in our interaction with nature: “There is more information available at our fingertips during a walk in the woods than in any computer system” (Weiser, 1991). By the time our early ancestors had reached the size of cats some 40 million years ago, they had already developed the tools they needed to process the vast amounts of security-related data that fell on them in a steady stream of sounds, smells, tastes, sights, and feelings. They responded in a series of iterative feedback loops: sensing, learning, and responding over and over again, adapting to their changing perceptions. This is the routine with which all living creatures perceive, process, and adapt before perceiving again. The key was a feedback loop called a “corollary discharge cycle”: a sort of “ghost image” of our actual performance that gives us a mental model against which to compare the changing environment. It is an internal report that says, “I am doing this at the moment, so any differences I detect are feedback.”
At its simplest, this system provides us with self-calibration, an understanding of whether we are doing what we think we are doing. More practically, this is how the proprioceptors in our joints allow us to stand or to control the placement of our hands and fingers. More complex versions are at the root of how we learn to run, read, or interact socially.
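For readers who prefer code to prose, the cycle can be caricatured in a few lines of Python. This is a minimal sketch under invented assumptions (a one-dimensional limb, Gaussian sensor noise, a hand-picked gain), not a neuroscientific model; the point is only that the “ghost image” is a prediction of our own action, and that any difference between prediction and sensation is treated as feedback.

import random

def corollary_discharge_step(position, command, gain=0.5):
    """One pass of a simplified corollary discharge cycle."""
    predicted = position + command                          # the "ghost image": what we expect to sense
    sensed = position + command + random.gauss(0.0, 0.05)   # what the proprioceptors actually report
    error = sensed - predicted                              # any difference is feedback, by definition
    return sensed, command - gain * error                   # adapt the next command to the feedback

position, command = 0.0, 0.1
for _ in range(5):
    position, command = corollary_discharge_step(position, command)
print(round(position, 3))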
Most models of human–system interaction do not account for this constant barrage of multisensory feedback loops, but they are at the root of how we, as animals, determine and maintain our security, whether personal, tribal, civil, or national. That is, security requires a constant cycle of information gathering and response at an organizational level, even though the available data are often too large to be fully analyzed. Instead of trying to decode and interpret all of the Big Data in real time, we apply a series of filters at each level of the organization. The Theory of Anthropology-Based Computing (ABC) and the related Model of Interaction (Brown, 2013, 2014a) allow us to examine this cycle more accurately and apply it to individual and organizational models for coping with Big Data.
Humans reflexively filter out most of the data we perceive, automatically processing only what fits into anticipated patterns labeled important or unimportant. This is how we separate the “signal” from the “noise” when carrying on a conversation in a crowded room. However, not all data are immediately recognized as either one or the other. These outliers require further processing. The data that cannot be processed reflexively are “bumped up” to a higher level of pre-attentive processing, where they are compared with known patterns of alarm so that they can either be diverted to the center of our conscious attention or flagged as false positives and returned to the periphery. In this chapter we will take a look at the organizational tactics that could benefit from an improved understanding of this most basic means by which we filter select elements from the noise around us and, more important, the means by which we recombine these elements into the meaningful signals that allow us to feel secure in our understanding of the world around us (Brown, 2015). Let us examine this process more closely and see how this multi-tiered processing strategy may be a guideline for organizations that are trying to interpret Big Data for security purposes at the national level.

The Three Stages of Data Processing Explained

Since before the beginning of recorded history, humans and their ancestors have reacted to security intelligence based on multiple feedback loops (Brown, 2013). Information is perceived, processed, and filtered for appropriate reaction (or non-reaction). The model of Human–Computer Interaction (HCI) based on this theory generalizes the processes into three stages, as illustrated in Figure 18.1.
This is in contrast to previous models of HCI, which illustrated human perception, processing, and reaction as happening in a single cycle. The single-cycle models fail to account for the natural human ability to interact with peripheral information, dealing with some stimuli either reflexively or pre-attentively while simultaneously dealing with separate stimuli in a cognitive or attentive manner. When driving a car, we respond reflexively to tactile and visual stimuli to keep the steering wheel where we want it. At the same time, we pre-attentively recognize patterns such as the relative speeds of other cars and the banked surface of a curve in the road. Also at the same time, we may be attentively engaged in a conversation, giving or getting directions, or listening to the news on the radio. The same idea can be adapted to illustrate the same three-stage process as a general model of human interaction, as shown in Figure 18.2.
Figure 18.1 Brown’s ABC model of HCI, showing the three generalized levels of human sensory perception, processing, and response.
Let us now consider the three stages.

Stage 1: Reflexive

Perceive as much input as possible and instantly extract both the input that clearly is a sign of immediate danger and the input that clearly is not, to be dealt with at once, without conscious thought or consideration. People often talk about “fight or flight,” but it should really be “fight, flight, forget, or faint.” In fact, the “forget” response, in which we reflexively decide to ignore the information, must be the most common; otherwise we would be fighting, running, or collapsing all of the time. Once we have filtered out the signals we can respond to immediately with one of those four choices, the remaining signals must be passed along for processing at a deeper level.

Stage 2: Pre-attentive

The previous stage has left us with a body of input that requires further examination. Here, the first layer of pattern recognition is based on our protocol for dealing with known true and false positives, as well as known true and false negatives. This is not as fast as a reflexive response, but it is still almost immediate. This is where we must decide, without conscious thought, whether the situation we are facing is one for which we have previously prepared. If so, we must be able to quickly recognize which pattern it fits and initiate a response. If not, we must pass the information deeper into the process of decision making.
Figure 18.2 A general model of the three stages of human perception, processing, and reaction.

Stage 3: Attentive

Input that has defied immediate reflexive responses and almost immediate pattern recognition will now require deliberate and detailed, conscious analysis. This is carried out at the deepest and slowest level, where we consciously ask ourselves, “What are my options, and which one should I choose?” At this point, although it is best to continue carrying out Stage 1 and 2 responses, it is inappropriate for their actions to undercut Stage 3 operations. For this reason, sometimes the need to respond appropriately requires that we temporarily restrict or even suspend operations at the other stages while the larger picture is being re-evaluated. Whereas the other stages have upper limits for time taken, the attentive stage does not. The process takes time. This is why strategists make their plans ahead of time: to avoid having to think deeply when time is critical.
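To make the division of labor concrete, here is a hedged Python sketch of the three stages as a dispatcher. The stimulus sets and protocol table (DANGER, SAFE, KNOWN_PATTERNS) are invented placeholders, not taken from any fielded system; the structure, not the content, is the point.

from collections import deque

DANGER = {"fire_alarm", "incoming_round"}       # reflexive triggers (hypothetical)
SAFE = {"birdsong", "own_footsteps"}            # reflexively ignorable (hypothetical)
KNOWN_PATTERNS = {"vehicle_friendly": "hold",   # stage 2: prepared protocol responses
                  "vehicle_enemy": "engage"}

attentive_queue = deque()                       # stage 3: slow, conscious analysis

def process(stimulus):
    # Stage 1: reflexive -- fight, flight, forget, or faint; no deliberation.
    if stimulus in DANGER:
        return "react_now"
    if stimulus in SAFE:
        return "forget"
    # Stage 2: pre-attentive -- almost immediate matching against prepared patterns.
    if stimulus in KNOWN_PATTERNS:
        return KNOWN_PATTERNS[stimulus]
    # Stage 3: attentive -- no upper time limit, so only park it for conscious analysis.
    attentive_queue.append(stimulus)
    return "deliberate_later"

for s in ["birdsong", "fire_alarm", "vehicle_friendly", "strange_package"]:
    print(s, "->", process(s))

Note that the dispatcher never blocks: everything it cannot resolve immediately is queued rather than analyzed on the spot, which is exactly why strategists make their plans ahead of time.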
As mentioned above, a form of self-evaluation must take place at each of these three stages, feeding an accurate picture of one’s performance into the feedback loop. This is called calibration, and it is vital to successful decision making and execution. Without proper calibration, our actions drift from appropriate to inappropriate as our perception of our performance and its context deviates from what is really happening around us. Calibration failures occur all of the time in normal day-to-day human interactions. We have all seen incompetent people who think they are doing a great job, sincerely believing that their competence is obvious to everyone around them, even though no one would agree. When children throw a temper tantrum, it is a signal that they perceive the situation they are in to be extremely important, and that they are resorting to emergency measures to try to force the people around them to recognize the extremely important emergency that no one else has noticed. In fact, in the right situation the same unflinching and determined behavior would be heroic.
We can accept temper tantrums in children, because they are still learning how to calibrate their behavior to suit the world around them. When an adult throws a temper tantrum, it is a lot harder to accept, but it is caused by the same sort of calibration failure. When managers are screaming at their subordinates, it is a sign that their perception of reality (calibration) has deviated from the actual situation to such a degree that they now believe screaming is truly necessary. In other words, they perceive an emergency where there is none. When drivers try to eat, text, have a deep, emotional conversation, or reset their dashboard navigation system while driving along a crowded street, they are also making a calibration error. In this case, they are failing to see the dangers around them, truly believing instead that they can afford to divert their attention.
We propose that calibration errors are at the root of many failed responses to security-critical situations, and we suggest that our three-tiered model could be used to help individuals and organizations develop a better understanding of the forces affecting their decisions during an engagement. This understanding would improve individual and organizational behavior calibration and so help agents make the best possible choices at each decision-making stage, from high-level planning to field operations. In the next section, we will propose a model illustrating how that could be done.

The Public Order Policing Model and the Common Operational Picture

In the August 2012 issue of the FBI Law Enforcement Bulletin, Masterson proposed a new paradigm for crowd management (Masterson, 2012). In establishing the basis of his proposal, Masterson referred to the Public Order Policing Model and illustrated it as a pyramid of four levels. At the base is the “Science-Based, Event-Tested, Theoretical Understanding of Crowds,” certainly a good foundation. Next is “Police Policy, Knowledge, and Philosophy,” and the original illustration shows that this would include “effective contemporary crowd control methods used by American, Canadian, and British agencies” such as the Madison Method (1975), the Cardiff Approach (2001), and the Vancouver Model (2010). These policies would inform the next level of the pyramid, “Police Training,” which would shape the “Police Response” that sits at the top of the pyramid.
Organizations with more than two levels of security have been using pyramids to illustrate their command structure for a long time, but these structures predate the modern age of ubiquitous and dependable communications. One-way communication was the best available model one hundred years ago, but the inability to feed back along the chain of command led to disasters such as the continued suicidal charge on Gallipoli. By the later twentieth century, both technology and policy had improved and middle management in many organizations could selectively pass information up the chain. Still, the organizational filters for this feedback often worked at cross-purposes, confounding, for instance, the important feedback with the trivial and restricting them equally. This reflects the conditions that may have led to repeated decisions not to pass on concerns about toric joints (O-rings) like the one that failed on the morning of January 28, 1986, causing the space shuttle Challenger to explode.
Technology is no longer a limiting factor for upward and downward communication in modern organizations, except, perhaps, in one unpredicted manner. Institutional communication has become so effortless that it is now used for the most trivial matters. Digital messaging services such as SMS, e-mail, and chat applications are designed to catch one’s attention, and they work too well, providing a constant source of distraction and triggering attentive responses regardless of the time or place. Because these applications are designed to break through our natural reflexive and pre-attentive filters, the result is a constant hazard of deep distraction that has been directly linked to countless fatalities, ranging from individual pedestrians to dozens of passengers onboard buses and trains (Brown et al., 2014b).
Because we cannot filter these alerts successfully, trivial messages crowd out the important ones, and it is easy either to be overwhelmed by dealing with the trivial or to ignore too much and miss the important. These are both types of calibration error, and they are well documented in studies that strongly suggest that universal access to e-mail has reduced on-task communication and increased working hours while reducing productivity (Burgess et al., 2005). In fact, overwhelming numbers of digital messages can be considered a smaller-scale model of the overall problem of trying to deal with Big Data. We cannot take the time to parse every e-mail to see which ones are important. Different e-mail services have established proprietary filtering systems to remove spam and help individuals categorize the contents of their inbox for future examination. But how well do such systems truly work? How often have you, the reader, missed important messages or wasted time dealing with trivial ones?
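Applied to messaging, our three tiers might look like the following sketch. The blocked senders and urgent keywords are hypothetical examples; a real filter would be learned and recalibrated over time, but the shape is the same: discard reflexively, interrupt only on known alarm patterns, and leave everything else for attentive processing at a time of the reader’s choosing.

def triage_message(sender, subject, *,
                   blocked=frozenset({"spam@example.com"}),
                   urgent_keywords=("fire", "outage", "breach")):
    """Three-tier triage of one message. All lists here are invented examples."""
    # Reflexive tier: discard the obvious noise without further processing.
    if sender in blocked:
        return "discard"
    # Pre-attentive tier: match against prepared alarm patterns.
    if any(word in subject.lower() for word in urgent_keywords):
        return "interrupt"          # the only case that should break attention
    # Attentive tier: everything else waits until we choose to read it.
    return "inbox_for_later"

print(triage_message("spam@example.com", "win now"))
print(triage_message("ops@example.com", "Network outage in progress"))
print(triage_message("colleague@example.com", "lunch?"))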
As referenced in Chapter 7 (Military and Big Data Revolution), the core problem is one that the military calls the common operational picture (COP). As they say, this “is a snapshot that is valued only because it is up-to-date.” Essentially, it supports the idea that everyone working toward a goal should have a common understanding of the situation. They go on to discuss “networked warfare” as an attempt to use modern communication technology to enhance the gathering and dissemination of data, and propose technological means by which to implement improvements to that concept. We agree that modern technology should make such communication easier, but let us return to the idea of instant messaging to see one key problem: the fact that, in general, communication software is being designed to do the wrong thing.
Figure 18.3 Feedback loop as feedback to self, peers, and command.
Figure 18.4 Feedback loops support a common operational picture.
Humans are good at classifying information into categories, recognizing known patterns, and filtering input in the manner described by our model. Hardware and software that have been designed to catch our attention break through those filters. This is important when the message being sent is a fire alarm or a call to battle stations. Obviously, receiving an uninvited e-mail from a stranger should demand less of our attention. The idea that designers, programmers, and engineers must learn to design according to the filtering systems of the humans they serve has been presented elsewhere (Brown, 2012). The question here is how we can use technology organizationally to complement that natural filtering system’s attempts to cope with Big Data. We propose that the answer is to improve the COP by introducing a conceptual improvement to organizational calibration.
Figure 18.3 shows one individual feedback loop and the ghost image behind it that provides calibration to the individual, as discussed above. This comparison of actual performance with expected performance could be a powerful tool in managing responses in a setting in which massive amounts of incoming data could otherwise be overwhelming to individuals, and so lead to calibration errors and actions that deviate from intent.
Figure 18.4 shows three conjoined feedback loops of the sort discussed earlier. Each has its own ghost image behind it, providing an image of one’s actions for internal calibration. The three loops are arranged in a structure that suggests they are sharing their perceived data and building a COP. Each individual is accepting the perceptions of proximate colleagues and processing them with his or her own. Because the perceptions include each other’s actions, there is a group calibration effect for each individual. Free flow of those data and other observations enhances each individual’s operational picture, supporting a COP within a group of peers.
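A small sketch may clarify how proximate peers could build a COP out of shared perceptions. Everything here is invented for illustration (the agent names, the single shared action); the point is that each agent’s picture is assembled from peers’ reports, and that calibration means checking whether peers see you doing what you intend to be doing.

class Agent:
    """One feedback loop: an agent that acts, reports, and assembles a picture."""
    def __init__(self, name):
        self.name = name
        self.intended = None   # the "ghost image" of what this agent means to do
        self.picture = {}      # operational picture: who is observed doing what

    def act(self, action):
        self.intended = action
        return {"agent": self.name, "action": action}   # report to proximate peers

    def receive(self, report):
        self.picture[report["agent"]] = report["action"]

    def is_calibrated(self):
        # Group calibration: peers' view of me should match my intent.
        return self.picture.get(self.name, self.intended) == self.intended

agents = [Agent("a"), Agent("b"), Agent("c")]
reports = [agent.act("hold_position") for agent in agents]
for agent in agents:
    for report in reports:
        agent.receive(report)   # shared perceptions include one's own observed action

# A COP exists when every agent holds the same picture.
print(all(agent.picture == agents[0].picture for agent in agents))  # True
print(all(agent.is_calibrated() for agent in agents))               # True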
Figure 18.5 Two-tiered feedback chain supporting a COP among peers and superiors.
Figure 18.5 provides an illustration of the same kind of feedback procedure for a multilayered operation in which individual operatives “on the ground” report their perceptions to each other, including perceptions of one another’s actions. Although each operative shares perceptions with his or her proximate peers, it is the actions that are reported up into the command structure. This gives command an operational picture of performance that can be matched against intent. In our illustration, all three of the operatives are reporting to a single superior. It may be assumed that the peers of this superior are also receiving data from their own operatives. The image does not show the usual chain of command, which would still be in place.
The two-tiered illustration in Figure 18.5 can be imagined as a supplement to or replacement for the public order policing model, incorporating a mechanism to describe the feedback that would enhance the COP during an operation, using multiple perspectives to more accurately process overwhelming amounts of data and improve individual and group calibration at every stage of the decision-making process.

Applications to Big Data and Security

In this section, illustrative examples of the three stages being administered properly and improperly will be drawn from historical and recent security events. In each case, the different problems will be clearly related to a failure in maintaining a clear COP, and a possible mitigation strategy will be offered based on the idea of our anthropology-based, multilevel behavior calibration system.

Level 1: Reflexive Response

In a true emergency in which no second can be wasted, it is important to have a reflexive trigger so that response time is not impeded by the decision-making process. Examples of this include an automated sprinkler in response to fire detection or an automated alarm that signals as soon as a door is opened. The water and the alarm can be shut down after the fact if, upon consideration, the reflex action was too extreme. Unfortunately, in some situations it is impossible to undo an action that has been taken reflexively. This is why reflexes must be carefully trained and the stimuli that trigger them must be carefully distinguished from the stimuli that prevent their being triggered.
Consider the example of the police shooting of a knife-wielding teenager on a streetcar in Toronto in July 2013 (Bahadi et al. v Forcillo et al., 2013). The situation was under control, in that the driver and other passengers had been allowed to leave the streetcar, which was then surrounded by between 10 and 24 officers, and the teenager with the knife was talking with some of them through the open doors. Some of the officers present later explained that they were waiting for a Taser to arrive. In plain view of many civilians and their cell phone cameras, a single police officer warned the teen not to take another step. As he was finishing the sentence, the officer fired three shots, bringing down the teen. Five seconds later, the officer lowered his aim and fired six more shots, emptying the rest of his clip. Thirty seconds after the last shot was fired, the Taser arrived and was deployed on the teen’s prone body. According to records, the police had been trying to de-escalate the situation. It is certainly possible that the officer in question felt that his life was in danger, a feeling that would justify his use of force. It is hard to justify the second round of shots and the use of the Taser. It is harder still to justify the violent intervention of one officer while other, more senior officers were using a slower-paced, nonviolent strategy.
These actions are hard to justify but could be easy to explain (Damjanovic et al., 2014). Clearly the officer felt threatened and adjusted his responses to suit an active threat, even though none of the other officers there had felt it necessary to do so. An appropriate self-calibration system could have prevented the officer from issuing the challenge or engaging the teen at all. After that escalating utterance, the officer could have been advised to disengage and let others employ the delaying tactics that were part of the plan in place. At the least, the officer could have been prevented from firing into the prone body, and the Taser would not have been deployed once the teen had already been shot.

Level 2: Pre-attentive Response

Post-reflex reconsideration of the response happens at a higher level of processing in the mind, and should happen at a higher level of processing in an organization as well—but not much higher. This “pre-attentive” response can be triggered when patterns are easily matched. Consider the detection of a vehicle during combat (which is, or should be, automatic), and the determination of whether to fire on it as it passes (which requires a higher-level judgment based on recognizing the vehicle as ally, neutral, or enemy). One must quickly determine whether to fire or hold fire.
If it is not possible to be certain of the facts in the time allowed, one must refer to an overriding protocol. Are we in a situation in which we must minimize our risk of firing on a non-enemy, even at the risk of allowing an enemy vehicle to pass? On the other hand, are we in a situation where stopping all potential enemies is more important than the possibility of firing on the wrong vehicle? This protocol must be ingrained before the mission so that the individual facing the situation does not need to waste time either remembering or debating the proper course of action.
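The point about ingraining the protocol ahead of time can be shown as a lookup table. The postures and categories below are invented for illustration; real rules of engagement are far richer, but the design goal is the same: in the field, the decision is a match against prepared patterns, not a debate.

# The protocol is fixed before the mission; in the field it is a lookup, not a debate.
# Postures and categories here are invented for illustration only.
RULES = {
    ("restrictive", "ally"): "hold_fire",
    ("restrictive", "unknown"): "hold_fire",   # minimize risk of firing on a non-enemy
    ("restrictive", "enemy"): "fire",
    ("permissive", "ally"): "hold_fire",
    ("permissive", "unknown"): "fire",         # stopping all potential enemies outweighs error
    ("permissive", "enemy"): "fire",
}

def decide(posture, classification):
    return RULES[(posture, classification)]

print(decide("restrictive", "unknown"))   # hold_fire
print(decide("permissive", "unknown"))    # fire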
One example of protocol-based decisions in a military operation is the sacking of Béziers in 1209. Faced with the difficult task of distinguishing the defeated Cathars from the Catholics who were to be liberated, the crusaders killed everyone within the city regardless of age, gender, or social rank. This is when the papal legate Arnaud Amalric is reported to have said, Caedite eos! Novit enim Dominus qui sunt eius, an expression that, roughly translated, is still used today.1 The legate himself reported to the pope that the final attack and wholesale slaughter were carried out, without official order or sanction, by “persons of lower rank” while the officers were still engaged in negotiations.
If it is true that the slaughter was carried out in the absence of orders, because of a misunderstanding of protocol for interacting with unidentified noncombatants, it could have been prevented by a system that reminded each man of the limits and responsibilities of his role, and also improved feedback between ranks. With modern communication technology as it stands in the developed nations of the twenty-first century, our problem is no longer that we cannot communicate easily; it is that our protocols sometimes prevent it.
If circumstances are outside the parameters of the active protocol, the course of action will require more than pattern matching, and will have to be decided with a higher level of processing.

Level 3: Attentive Response and the Focused, Intellectual Management of Data

The final type of situation is the one that allows us time to think. Now, the high-level agents can try to fit incoming data into more complex patterns and consider probabilities. One such example is the manhunt for the Boston Marathon bombers (Helman and Russell, 2014).
Crowdsourcing, automated facial recognition, cell phone Global Positioning System (GPS) tracking, and Interpol resources were accessed and processed as part of the massive operation. During that process, a huge number of teams and individuals carried out more immediate security activities, policing the community and searching with more traditional methods. The most traditional method, the first step in the first stage of our three-stage process, is to gather as much information as possible, filtering it as well as possible to separate the obviously important and obviously unimportant from information that needs further processing. During that fantastic open information-gathering process, police secured a neighborhood and conducted a yard-to-yard search. Once the search had been completed and the related curfew had been lifted, a 66-year-old man living in the neighborhood went out into his yard to fix the tarp covering his boat. He set his ladder, climbed up, and found bloodstains. Looking into the boat, he saw a man in a hooded sweatshirt curled up in a fetal position. He hurried back to the house, told his wife what he had found, and called 911. Here is the key to the successful operation: investigations at these two different levels did not interfere with each other. The forces in charge did not refuse to investigate the new information because their investigation had already cleared that neighborhood. Nor did the local forces refuse to investigate because of the large number of false leads they had already received. But two components of the investigation went wrong, and they could have led to a terrible failure.
First, the police had just searched the area and had not looked in the boat. It is possible that, having been missed in the search, the suspect could have escaped again. Second, once security forces responded to the report and surrounded the boat, between 200 and 300 shots were fired, causing damage to nearby houses and other property and certainly jeopardizing the lives and well-being of residents.
So, what happened? As in the other stories told here, the performance of agents on the ground reflects their human nature. There are many reasons why one might fail to fully clear a search area. We are not infallible machines. This is why pilots and surgeons are required to follow checklist procedures. It does not reflect a lack of skill or expertise on their part, but rather the fundamental human propensity to lose track of small details. Similarly, whoever fired first (we know it could not have been the suspect, because he was unarmed), it is clear that the first shots triggered a massive case of contagious fire. Approaching a concealed suspect at the end of a prolonged urban and suburban manhunt for armed terrorists provides almost textbook conditions for contagious fire.
Peer-to-proximate-peer observation and feedback would reinforce the performance of mundane tasks such as detailed searches and would allow instant recognition of who fired first, and so limit or eliminate contagious fire situations.

Application to Big Data and National Security

Now let us examine a final illustrative example: an increasingly common Big Data–related process in which personnel on the ground have been replaced by a drone. As in the previous examples, the core problem is maintaining a clear COP across all levels of the operation, but in this case it is because the data are being gathered and processed in a manner contrary to the natural human means for which we are advocating. Once again, we will suggest a possible mitigation strategy based on our model of interaction and our behavior-calibrating COP system.
In our previous examples, we dealt with situations in which data were inappropriately perceived or processed, leading to inappropriate responses. Not all readers might agree with our proposal that these are actually Big Data issues. To provide an illustrative example set in the domain currently addressed as Big Data, we will return to Chapter 7 and borrow an excellent example for examination under our own particular light.
The authors of Chapter 7 provide some details about the General Atomics MQ-9 Reaper, “a surveillance and combat drone built by General Atomics for the US Air Force, the US Navy, (Italy’s) Aeronautica Militare, (Britain’s) Royal Air Force, and the French Air Force.” This drone, the authors say, “generates the equivalent of 20 Tbytes” per mission despite the fact that “95% of the videos are never viewed.”
Whereas our co-contributors to this book propose a technological solution to the problem of dividing those data for parsing, we propose a different approach. According to our model, initial data should not be submitted to attentional, cognitive examination. Instead, we propose that software be developed to instantly filter out the vast majority of these data. This filtering would not involve careful evaluation, but only the most simplistic, reflexive, immediate filtering that we can manage. If too much is filtered out in this way, the system should be recalibrated so that its performance improves over time. However, the goal should always be to filter out almost all of the data that are initially perceived. This seems counterintuitive to people who see data as valuable, but we must remember that most real-world data are not valuable; in fact, they are detrimental.
Our colleagues express a common belief when they write, “The increasing number of events provided by sensors and other means describing individuals and their environment improves the digital version of the world.” We disagree and argue that this makes the digital world worse. “More data captured” is not the same thing as “more useful data captured.” Instead of improving our ability to capture “signal,” the increasing number of sensors in the world has improved our ability to capture “noise.” In learning to live in the world, we have evolved the ability to strategically ignore more data than we process. Our automated systems must be designed to do the same thing.
So, let us return to the drone and to our three-stage model. Ideally, the first stage should use an inaccurate but high-speed visual recognition system to automatically sort the massive flow of data into three pools:
1. Clearly and obviously unimportant
2. Clearly and obviously important
3. Requires further processing
Under normal circumstances, the first pool should include somewhere around 90% of the rough data. These can then be discarded, which solves the problem of massive storage demands. Can we afford to throw them away? Is it really better to store them in an unusable state, as is currently being done? If we cannot afford to process them accurately, why keep them? There will be more data in the next picosecond, and they will be filtered as well. Perhaps the following analogy will help to clarify the difference between current practices and the method proposed above.
Consider the prospector panning for gold in a river that runs off a mountain. Traditionally, the prospector catches a mix of water and silt in his pan and then gently rocks the slurry and lets the water run slowly over the edge. The lighter materials are carried away with the escaping water. The right technique leaves the heavier materials in the bottom of the pan, including gold dust.
But what if the gold in the river ranged in size from hidden dust to big, obvious nuggets? The prospector who wants to gather as much gold as possible would continue to use his pan, but he might also set a sieve across the river, something the right size to catch the big nuggets and let everything else run through. He can focus his attention on finding the dust that requires careful, skilled extraction, while letting his automated system catch the things that are easier to find. Of course, he will have to adapt his sieve over time as he learns which shapes and patterns best filter what he wants from the stream. This is what we are proposing: automated filters that can be refined over time, and the freedom to concentrate on extracting the wealth of information that can be gathered only with human skill and judgment. An important element of this model is that most of the water rushes through the filters and either through or around the pan. This is accepted in our model because we will already be gathering more gold than we can process, at least until time, experience, and technological advances enable us to refine our filters. More than that, though, it is acceptable because the alternative is to try to catch every drop of the river and store it in buckets, with the intent that somehow, someone in the future will have the time and skill to process their new, immediate data and to go back and process yours as well.
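A hedged Python sketch of the proposed pipeline follows. The cheap_score function is a stand-in for the “inaccurate but high-speed” recognizer, and the thresholds are invented; none of this reflects an existing drone-processing system.

import random

def cheap_score(frame):
    """Stand-in for an inaccurate but high-speed recognizer; invented for illustration."""
    return random.random()

def filter_stream(frames, lo=0.90, hi=0.99):
    """Split raw frames into the three pools; pool 1 is discarded, not stored."""
    important, needs_processing, discarded = [], [], 0
    for frame in frames:
        score = cheap_score(frame)
        if score < lo:                  # pool 1: clearly and obviously unimportant (~90%)
            discarded += 1
        elif score >= hi:               # pool 2: clearly and obviously important
            important.append(frame)
        else:                           # pool 3: bumped up for deeper processing
            needs_processing.append(frame)
    return important, needs_processing, discarded

urgent, pending, dropped = filter_stream(range(100_000))
print(f"{len(urgent)} urgent, {len(pending)} for review, {dropped} discarded")

Like the prospector’s sieve, the lo and hi thresholds are the part of the system that would be recalibrated as the organization learns which patterns matter, which is exactly the feedback loop described above.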
These concepts are illustrated in Figure 18.6, where Figure 18.6(a) shows one in an unending series of buckets full of an unfiltered and unknown mix of important, urgent, and totally unimportant data; Figure 18.6(b) shows two filters and a pan working in sequence. The first filter is “reflexive” and catches only the truly obvious; the second filter is “pre-attentive” and catches only known, easily recognized patterns; and the pan is where human specialists focus their cognitive attention.
Figure 18.6 Should we catch all of the data we can, even though we cannot process it all, or should we filter it and process what we can?

A Final Caveat from the FBI Bulletin

Let us finish by returning to the beginning. In the same August 2012 issue of the FBI Law Enforcement Bulletin, another article touches on another very basic, very human process. In a report entitled “Focus on Ethics: The Power of Police Civility,” Borrello (2012) provides real-world examples of the positive changes in police–community interaction induced by human-centered behavior. This attitude is fundamental to the ABC Theory and must be fundamental to any attempt to implement the methods proposed here.
When the three-stage process is used, individual and organizational ethics must be the default fallback for every agent supported by the corollary discharge cycle–style models of instantaneous self-calibration. Although agents in the field should never be put in a position to interpret policy or make high-level decisions, their trained and untrained reflexive and pre-attentive responses must always be biased toward the health and well-being of the individuals who make up the community they protect and serve.

References

Borrello A. Focus on Ethics: The Power of Police Civility. FBI Law Enforcement Bulletin; 2012 (Online). Available at: http://www.fbi.gov/stats-services/publications/law-enforcement-bulletin/august-2012/focus-on-ethics.

Brown J.N.A. Expert talk for time machine session: designing calm technology “… as refreshing as taking a walk in the woods”. In: 2012 IEEE International Conference on Multimedia and Expo (ICME), Melbourne, July 9–13, 2012. 2012:423.

Brown J.N.A. It’s as Easy as ABC. Advances in Computational Intelligence. 2013:1–16.

Brown J.N.A. Once more, with feeling: using haptics to preserve tactile memories. International Journal of Human-Computer Interaction. 2015;31(1):65–71.

Brown J.N.A, Bayerl P.S, Fercher A, Leitner G, Mallofré A.C, Hitz M. A measure of calm. In: Bakker S, Hausen D, Selker T, van den Hoven E, Butz A, Eggen B, eds. Peripheral Interaction: Shaping the Research and Design Space. Workshop at CHI 2014, Toronto, Canada. 2014.

Brown J.N.A, Leitner G, Hitz M, Mallofré A.C. A model of calm HCI. In: Peripheral Interaction: Shaping the Research and Design Space. Workshop at CHI 2014, Toronto, Canada. 2014. ISSN: 1862-5207.

Burgess A, Jackson T, Edwards J. Email training significantly reduces email defects. International Journal of Information Management. 2005;25(1):71–83.

Damjanovic L, Pinkham A.E, Clarke P, Phillips J. Enhanced threat detection in experienced riot police officers: cognitive evidence from the face-in-the-crowd effect. Quarterly Journal of Experimental Psychology. 2014;67(5):1004–1018.

Helman S, Russell J. Long Mile Home: Boston under Attack, the City’s Courageous Recovery, and the Epic Hunt for Justice. New York: Penguin; 2014.

Masterson M. Crowd Management: Adopting a New Paradigm. FBI Law Enforcement Bulletin; 2012 (Online). Available at: http://www.fbi.gov/stats-services/publications/law-enforcement-bulletin/august-2012/crowd-management.

Ontario Superior Court of Justice, 2013. Sahar Bahadi, et al. v Police Constable James Forcillo, et al. Statement of Claim No. CV-13-490686. (Online). Available at: http://www.cp24.com/polopoly_fs/1.1927687!/httpFile/file.pdf.

Weiser M. The computer for the twenty-first century. Scientific American. 1991;265(3):94–104.


1 Kill them all! God will know which ones are His.
