Chapter 2

Protection: Exposing the Blind Spots

People protect what they love.

—Jacques-Yves Cousteau, explorer

When you first meet her, this 44-year-old mother of three seems like a normal person. But something quite abnormal lies beneath the surface. She laughs in the face of what would cause many to recoil in terror. She pets snakes as though they were playful kittens. She watches horror movies with delight. And, by her own diary admissions, she engages in behavior that some would find unusual if not outright dangerous. One night, while she was walking through a park, a man held a knife to her throat. A church choir could be heard practicing nearby. She calmly told the knife-wielding druggie, “If you’re going to kill me, you’re gonna have to go through my God’s angels first.” Apparently, even the drug-addled can sense abnormality, as this odd response was enough to send the perpetrator running in the other direction. The next night, she went about her routine and walked through the same dark park, business as usual.1

This is not the product of a fiction author’s imaginative musings. She is a real 44-year-old mother of three, known to the scientific community simply as “SM.” The cause of her abnormality lies in a pair of almond-shaped structures located in what is referred to as the emotional brain, inward from the temples. Though relatively small, the amygdala can propel the entire body into immediate action thanks to its ability to detect danger within milliseconds. SM has a rare congenital disorder that began to destroy her amygdala in early childhood, rendering this part of her brain completely dysfunctional by the time she was a preteen.2

The amygdala is just one illustration of the brain’s complexity, with patterns and workflows designed to integrate full sensory perception and activate the body with the appropriate response. The brain has three major components, each with a different purpose:

  • Brain stem—The brain stem handles basic survival skills in all animals and is commonly known as the reptilian, or in more casual vernacular, “lizard” brain.
  • Emotional brain—In mammals, the emotional or limbic brain is wrapped around the brain stem.
  • Prefrontal brain—In humans, the higher prefrontal brain is developed to handle logical thought, language, and problem-solving.3

Multiple mappings exist among these three functional areas of the brain to govern our thoughts, emotions, and bodily responses. But the heavily traveled pathways running from the emotional brain to the brain stem are the most interesting for analyzing the mind–body connection engineered for our most primal protection.

Continuing with the case of SM, consider how one blind spot in the brain’s complex inner workings can create such unusual and reckless behavior. The amygdala is the brain’s opportunity and danger detector. Within fractions of a second, it reads emotions from others and activates our own for fight, flight, or approach behavior. It is so sophisticated that it can detect the slightest nuance in the dozens of facial muscles whose only purpose is to signal a person’s emotional state. It can do the same simply by assessing a person’s tone of voice. It has been called the gateway to trust, determining whether someone is safe or dangerous. It can even spot cues of subtle inconsistency in someone’s words and behavior to determine whether the person is trustworthy or deceptive. Although the strength of this capability differs in each of us, some people have reportedly been able to separate truth from deception with nearly 100 percent accuracy.4

That’s just one structure in one component of the brain. You don’t have to imagine the countless other mappings in the brain’s advanced design that read cues from the physical environment and engender the right response from the body; scores of scientific experiments prove that these connections exist. Consider how these mappings developed over the generations of humankind since our first existence, an existence in the physical world. But because many of these connections rely on the brain’s ability to map sensory perceptions of our physical environment, how do they fare in a virtual world that can limit, if not deliberately confuse, those perceptions? How can the mind–body connection prevail in an artificial world to protect the real human being on the other end of the device?

The Blind Spots

Our primal protection responses are programmed into our DNA. Yet when just one misfire occurs, the results can be profound, as in the case of SM, and as evidenced by the host of psychological disorders the scientific community has identified to date. When the threat can be seen, our body is primed to respond. But what if the threat is invisible? Or, even more confounding, what if it exists in some machine, or bot, in the virtual sky?

Science fiction novelists have conjured worlds dominated by machines for decades. In a virtual world sustained by machinery, their imaginings are not so far-fetched, and the threat detected by Bob, one of our ethnography subjects, is all too real:

I think the information is ending up in supercomputers and it’s like any other society—if the information falls into the wrong hands, that’s when it becomes a problem. So, that’s probably a fear that a lot of Americans have when they think of the supercomputers keeping track of everything.

Although some may consider Bob’s rant paranoid, the fear factor in a virtual world is shared by other respondents. Even those who don’t share Bob’s concern about “supercomputers” in the sky still question the wisdom of putting their trust in those with the most power to violate it. Consider Todd, another of our ethnography respondents and one much younger than Bob:

The thing I’m thinking is, if we have all these people that know how to go in and make all these programs, then we also have all these people that could just go in and access our accounts. I would imagine that if they can write a program or build a program or anything like that for us to use, that they can just do the same kind of thing and just bypass the password and go in and look at your stuff.

And the problem extends beyond the misuse of a typical program. Others are perplexed about how to trust nameless, faceless individuals able to camouflage the nuances of facial expression, tone of voice, mannerisms, and actual identity behind the veil of an anonymous keyboard. Consider Julie, a 30-something soccer mom, as she envisions a new type of service, one that could render truthful information about the anonymous person on the other end of the connection:

You know when it’s a fake driver’s license, right? So, you can’t fake this certain person is this age—you know—this information on them.… All that information needs to come out. And it would be their identity—their real identity.

Whether the threat comes from a bot, programmer, hacker, or casual acquaintance, sensory deprivation in a virtual world creates new blind spots that confound our automatic abilities to protect ourselves and discern deception. The opportunity is not for technology to fully inoculate the innocent against the predatory. After all, technology cannot substitute for the sophisticated judgment capabilities resident in the higher-order emotional and prefrontal brain structures mentioned earlier. However, just as technology has allowed for the creation of a world where others can mask their identities, whether for harmless or malicious intent, it can also serve to remedy the blind spots that naturally occur as a result. Much of this prescription is possible simply by augmenting the landscape with tools that allow users to manage the dangers online. We’re not speaking simply of the antivirus and firewall solutions that have existed for some time. These valuable elements serve an important purpose: to protect the device and the information contained therein from dangerous attack, an essential shield for any terminal connected to a broad network. The capabilities we refer to are conspicuous by their absence in a world that connects not only machines but also the human beings behind them, each with a real capacity to be harmed.

The Extent of Violation

Not all dangers are created equal. In the physical world, a threat can range from a relatively harmless affront to a dangerous assault. The virtual world is no different, although its threats are more complex to discern. Bob’s concern above, of supercomputers tracking our every move, may seem extreme to some, but the digital footprint we consciously or unconsciously leave behind with every click, call, or channel change adds up to a wealth of information. Used positively, the tracking of our habits and behaviors can help shape our virtual and physical worlds, pinpointing the content, people, or offers practically guaranteed to meet our unique needs, especially as more information is revealed and collected. Used negatively, these same habits and behaviors become weapons against us in the hands of annoying solicitors or more insidious predators.

In the case of advertising, this nuisance factor is a problem best solved not by technology but by company policy. As consumers, we are bombarded with thousands of impressions every day by solicitors attempting to woo our hearts so that we are compelled to open our wallets. This market has allowed us to enjoy many services we love for “free,” including television, radio, and the majority of Internet sites, simply in exchange for the ability to deliver an advertiser’s message. We expect this. After all, how could most media subsist without the revenues derived from advertisers? The alternative would mean higher prices for many of the services we enjoy, an outcome most would likely find less attractive.

However, when our own behaviors are tracked without our knowledge, buried within the obfuscated privacy policies far too common in the virtual world, the offense is more obvious. Worse yet, because consumers may not realize that their desire for privacy conflicts with their online behaviors, or may have no recourse to align the two, the need for clear and conspicuous policies becomes more pronounced. Although privacy was a hot topic among many ethnography subjects, we also watched these same respondents blindly accept onerous policy statements to get to a desired site or piece of content. Is this a case in which consumers should take responsibility for not thoroughly reading a policy before impulsively continuing about their online pursuits? Perhaps. But we submit that many of these onerous policies are clearly written by lawyers for lawyers (a necessary evil born of a litigious society). As such, they unintentionally create an additional blind spot: many respondents blindly click past the technical legalese, given its overwhelming presence in these “authorizations.” And even when consumers attempt to remedy this blind spot by changing their privacy settings, 20 percent of those in our study admit difficulty in doing so. Perhaps this helps explain why nearly 60 percent of respondents prefer simpler, as opposed to more comprehensive, privacy settings.

For these consumers, it is not technology they crave, but respect. Several companies have made significant profits providing consumers with valuable “free” services by monetizing users’ clicks to serve up targeted advertising. In fact, this has been a key differentiator of Internet advertising, and it has made the Internet such an attractive medium for advertisers and users alike (a topic we explore in the next chapter). However, when such information is proffered to the highest third-party bidder without the user’s explicit, opt-in consent, the blind spot once again enters the picture to diminish the consumer’s defense mechanisms. And in this case the blind spot does more than simply disarm consumers; it also tarnishes the image of the companies attempting to serve them.

As we discuss throughout this book, consumers are complex. They have become accustomed to “free” services made possible through advertising, yet their expectations about how their information is collected and used give them pause. In a networked-community age, the “price” they pay for these services is information that identifies them as users, such as the unique address assigned to their computers, IPTV set-top boxes, or mobile devices. In our study, more than 40 percent identify with the statement that, to get the most out of online services, one must provide a certain amount of information. Among the most heavily network-engaged consumers in the study, the figure jumps to more than 50 percent. Furthermore, nearly 90 percent of those in our study agree that all technology players—service providers, social networking sites, and search companies—should be governed by the same laws and regulations regarding the collecting, analyzing, and sharing of online data. Yet despite these findings, our data also show that consumers will and do hold companies to measurably different standards when it comes to privacy permissions. In fact, the consumer’s perception of a company significantly affects the degree to which he allows his digital footprint to be monetized, marking an interesting collision where company and consumer identities intersect.

As such, one point of competitive differentiation in a networked-community age will be the degree of respect afforded to the consumer. For some companies, that respect will be measured in practical terms: fair, conspicuous, and understandable policies, paired with mechanisms that allow users to easily control and adapt their privacy settings at any time. In our quantitative study, consumers gravitate toward options that place their privacy within their direct control. In fact, among the most favorable company policy positions for engendering consumer trust is a provider’s requirement for opt-in consent before sharing any information about the user with interested third parties; among the most damaging inhibitors to trust is sharing the same information with affiliates unless the consumer advises otherwise (the opt-out approach). Here again we have the interesting contradiction that is the U.S. consumer. Most services rely on the opt-out approach today, and several well-known companies have established viable business models with it as their foundation. Despite this, other companies looking to do the same may discover a market less tolerant of their right to play. An opt-in approach appeals to consumers who are becoming increasingly aware that their information is up for grabs, and who are more comfortable deciding when and to whom this veritable gold mine is offered. The potential reward to companies that understand this respect boundary is significant, according to our study: More than 80 percent of consumers say that they are very or somewhat comfortable sharing information if they have control over when personal preferences, location, or availability are revealed to others.

In contrast to the annoying nuisances created by unwelcome spammers hiding behind company policies, there are far more serious threats lurking online. Although technology is never a substitute for common sense, it can support consumers in a virtual world where primal defense mechanisms are ill suited. In these extremes, it is not a case of respecting consumers’ privacy (a relatively tame endeavor, by comparison) but of protecting them and their loved ones from those seeking to do them irreparable harm. As we discuss in a later chapter, parents in particular lack the means to protect their children, who are often far more adept at navigating a virtual world. Some parents, like our ethnography respondent Susan, resort to ineffective, prehistoric Band-Aids to compensate. Susan is the mother of a teenage son immersed in the world of online console gaming. While attempting to balance her son’s desire for fun and social interaction with her role as a parent to protect him, she takes matters into her own hands (literally) when he engages with others using his gaming headset:

Susan: I’ll just come down and put my hand out and he kinda knows. He’s got to take it [the headset] off and let me listen. So I kinda sneak in and listen to make sure.

Interviewer: And what are you checking for when you listen?

Susan: Mostly older adult people.

Although one would be misguided to judge Susan’s sincere attempts to protect her child from harmful strangers, there is no doubt that her approach leaves much to be desired. Susan is using sensory defense mechanisms acquired in the real world, such as listening for tone of voice or profane content (another concern she expressed during the interview), to assess danger in a virtual environment. In fact, nearly 40 percent of parents in our study admit difficulty in protecting their children from inappropriate content and offensive language online. Unfortunately, physical mechanisms like Susan’s are of meager effectiveness against a pervasive virtual world waiting 24 hours a day, seven days a week to attract curious teenagers. What if technology could reveal the danger in a more sustainable and effective way and alert Susan to possible threats to her child, similar to the way she relies on her brain’s mappings to do the same in the physical world?

This is not an impossible endeavor. Since the dawn of live broadcasting, we have used slight transmission delays to allow the manual removal, or muting, of offensive content spewed over the public airwaves. That broadcast-era technology seems dated compared with today’s highly evolved, intelligent IP-based networks, which are light-years beyond yesterday’s manual intervention techniques. Look no further than IPTV-based services, such as AT&T’s U-verse, for an example. Using one connection to the home, AT&T is able to deliver voice, data, and video traffic. The network adjusts to users’ behavior in the household to offer the desired service at the right performance. When the network detects that more bandwidth is needed for a particular service, such as when the consumer changes channels to a bandwidth-hungry HDTV show, it automatically gives the HD stream a higher priority and allocates network resources accordingly. In technology circles, this quality of service (QoS, in industry parlance) is typically seen as an enabler for dynamically delivering higher bandwidth speeds and reprioritizing voice, video, or data traffic. It certainly does that, but what if we turned QoS on its head to address Susan’s concern for her child?
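Before turning QoS on its head, it may help to picture the prioritization just described. The following is a deliberately simplified sketch; the service classes, priority numbers, and flow structure are illustrative assumptions, not details of AT&T’s or any provider’s actual network:

```python
# A simplified, hypothetical sketch of QoS prioritization: each household
# traffic flow is tagged with a service type, and the scheduler serves
# higher-priority classes first. All names and numbers are illustrative.
from dataclasses import dataclass

# Lower number = higher priority, loosely echoing DSCP-style class ordering.
PRIORITY = {"voice": 0, "hd_video": 1, "sd_video": 2, "data": 3}

@dataclass
class Flow:
    service: str         # e.g., "hd_video" after a channel change to an HD show
    bandwidth_kbps: int  # bandwidth the flow is requesting

def schedule(flows):
    """Order flows so that higher-priority services are allocated first."""
    return sorted(flows, key=lambda f: PRIORITY.get(f.service, len(PRIORITY)))

if __name__ == "__main__":
    household = [
        Flow("data", 2000),      # web browsing
        Flow("hd_video", 8000),  # the bandwidth-hungry HDTV stream
        Flow("voice", 64),       # a phone call in progress
    ]
    for f in schedule(household):
        print(f"{f.service}: {f.bandwidth_kbps} kbps")
```

In a real network, this classification happens packet by packet at line speed; the sketch merely makes the ordering logic visible.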

What if QoS could be used to slightly delay voice traffic coming through an audio headset connected to a gaming console? What if those voice packets could be systematically recognized and compared against a database of offensive words to determine appropriateness, and the questionable content “muted,” much as we have slightly delayed live broadcast transmissions for years? The network is smart enough to impose no delay on the actual game play, ensuring that the teenage gamer on the other end of the console is not penalized with additional latency where it matters most, such as in twitch-sensitive games like first-person shooters.
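To make the idea concrete, here is a minimal sketch under stated assumptions: voice packets arrive with a transcript already attached (in reality, a speech recognizer would supply this), and a short buffer provides the “slight delay” window, analogous to a live-broadcast delay. The packet format, delay length, and blocklist are all hypothetical:

```python
# A hedged sketch of the proposed muting idea: buffer voice packets briefly,
# check each packet's transcript against a blocklist, and replace flagged
# audio with silence before releasing it downstream.
from collections import deque
from dataclasses import dataclass

BLOCKLIST = {"badword"}   # placeholder for a database of offensive words
DELAY = 3                 # delay window, measured in packets

@dataclass
class VoicePacket:
    audio: bytes
    transcript: str

def _release(pkt):
    """Replace audio with silence when the transcript contains a flagged word."""
    if set(pkt.transcript.lower().split()) & BLOCKLIST:
        return VoicePacket(audio=b"\x00" * len(pkt.audio), transcript="[muted]")
    return pkt

def mute_stream(packets):
    """Yield packets after a short delay, silencing any flagged ones."""
    buffer = deque()
    for pkt in packets:
        buffer.append(pkt)
        if len(buffer) > DELAY:
            yield _release(buffer.popleft())
    while buffer:  # flush remaining packets at stream end
        yield _release(buffer.popleft())

if __name__ == "__main__":
    stream = [VoicePacket(b"\x01" * 4, t)
              for t in ["nice shot", "cover me", "badword", "go left", "push up"]]
    for out in mute_stream(stream):
        print(out.transcript)
```

Note that only the voice channel passes through this buffer; game-state traffic would bypass the path entirely, preserving the low latency that fast-paced play demands.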

Skeptics will offer many reasons why this is not possible or desirable. Some will argue that even a slight voice delay compromises gamers’ ability to interact in real time and organize their efforts, a critical social element that makes gaming such a communal activity. True enough, but why not let parents weigh the benefits and consequences of unencumbered game play versus a more controlled environment for their children? Others will say that regulators will intervene. After all, how is it fair that a particular game be subject to packet reprioritization, and a different user experience as a result, simply because it draws an audience given to more colorful verbal expression? As a counterpoint, video games have carried parental ratings for some time, precisely because parents want to protect their children from this content. If others in the game environment use profanity or other offensive language that distorts a game’s PG rating into something more distasteful, shouldn’t the same rules of parental notification apply? Finally, some will take issue with any attempt to censor free speech in the first place. They may say that parenting is the parent’s job and no one else’s, least of all a technology player’s. But we would turn their attention back to Susan, a parent attempting to do the right thing for her child but unable to defend him fully in a virtual world. Is using a technological approach to remedy a situation enabled by technology in the first place really that objectionable?

This isn’t an issue to be solved in the pages of a book. Yet it illuminates the possibility of using technology as an additional defense mechanism in a world where our physical danger-detection wiring can be blinded. As measured by willingness to pay, such a service is not only necessary but desired by parents in our study.

No White Knight

The civilized world is accustomed to law and order. Break the law and suffer the consequences. Systems exist to impose justice on miscreants, and reputable law enforcement officials—the proverbial white knights—serve and protect the public. All of this operates in a world where our primal defense mechanisms are also in play to protect us from danger, or at least alert us to it. By comparison, such a system is virtually nonexistent in the virtual world, where threats can come from any direction and where our physical senses and danger mechanisms are already compromised. The virtual world has no police jurisdictions and no universal system of governance. Bob again expresses the frustration shared by many:

You know all of the hackers that are going on. It’s like a game for these people to see what they can steal and destroy. That’s kind of real annoying that every time you’ve got to hear about all of this identity theft and all of the—just a lot, a lot of fraud—and they [companies] say, “Oh we can’t trace it because it’s in another country.”

To whom does the consumer turn when he finds himself the victim of a cybercrime or a less threatening online nuisance? The Wild Wild West of the World Wide Web still exists, and consumers believe that, when it comes to protection, the buck stops with them. Nearly two-thirds of consumers in our study agree that it is impossible for the government or any one company to police the Internet. Instead, they see keeping themselves and their families safe from online predators as an accountability they ultimately own. This should be welcome news to technology companies, which already understand that technology alone will never substitute for sound judgment. It should be sobering guidance to well-intentioned regulators that consumers do not expect or welcome government intervention, despite the inherent threats that persist in a virtual world. At the same time, there are practical prescriptions, in the form of conspicuous company policies and new technology solutions, that will not absolve individuals of responsibility for protecting themselves but will offer them additional tools with which to do so. We tested 20 such possible solutions in our quantitative study, and throughout this book we identify which services are most appealing, as measured by respondents’ willingness to pay for them.

The Mind–Body–Technology Connection

Our mind–body connection has been hardwired through the ages but has significant limitations outside the real world. Although the virtual world allows us to be anyone we choose to be, it also exposes us to the nefarious, cloaked in virtual anonymity and seeking to inflict harm. A world enabled by technology can rely on that same technology to mitigate these risks. Just as our amygdala scans the real environment for danger or opportunity and alerts us accordingly, technology can do the same in a virtual world. Such capabilities might manifest themselves in the authentication of online acquaintances to validate that they are who they say they are (an idea inspired by Julie), in enhanced QoS parameters to censor potentially objectionable content from our children (relieving Susan and others like her from resorting to more primitive tactics), or in reporting tools that alert users to the digital footprint they consciously or unconsciously leave in cyberspace (a resource Bob would most likely welcome). In each case, technology has a role to play in correcting the blind spots it helped create in the first place.

Recall SM from the beginning of this chapter: a real person afflicted with a real blind spot in the real world. Her ability to sense danger and respond appropriately is irreparably compromised. Unfortunately, there are millions of SMs in the virtual world, each with their own blind spots, unable to properly discern the difference between a threatening and a non-threatening encounter. Although most consumers agree that the final accountability for triggering the appropriate defense response rests with them, many are simply unable to avert danger on their own. Companies that understand this fundamental need and respond with transparent policies and intuitive solutions will find a significant market ready to respond. With the resulting mind–body–technology connection, one can imagine a virtual world with far fewer SMs obliviously wandering the remote corridors of cyberspace, simply waiting to meet perpetrators of their own.

Shift Short: Hacking the Person

Phishing, the unsavory mechanism hackers use to lure unsuspecting victims into providing sensitive password or account information, has been around for some time. Here’s generally how it works: The target receives an e-mail from a “trusted” organization, say, her bank or another company with whom she does business. Of course, the e-mail is not really from said organization. It just appears that way from the e-mail address and from the link directing the target to a seemingly official website where the sensitive information can be entered. The website is actually a fraud, built by the hacker to appear legitimate but designed to dupe unsuspecting victims, easy catches in fishing parlance, into surrendering their sensitive information.
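One automated defense against this mechanism is to flag a link whose visible text names one domain while its actual target points somewhere else. The following is a hedged illustration only; real mail filters are far more sophisticated, and the helper and its heuristics here are hypothetical:

```python
# Flag a mismatch between the domain a link claims to be and the domain
# its href actually points to, a common tell in phishing e-mails.
from urllib.parse import urlparse

def looks_spoofed(display_text: str, href: str) -> bool:
    """Return True when the link text claims a domain the href does not match."""
    shown = display_text.lower().strip()
    for prefix in ("https://", "http://"):
        if shown.startswith(prefix):
            shown = shown[len(prefix):]
    shown = shown.split("/")[0]                     # keep only the domain part
    target = (urlparse(href).hostname or "").lower()
    if "." in shown and " " not in shown:           # text looks like a domain claim
        return not (target == shown or target.endswith("." + shown))
    return False  # display text isn't a domain claim; nothing to compare

if __name__ == "__main__":
    print(looks_spoofed("www.yourbank.com", "http://phish.example.net/login"))  # True
    print(looks_spoofed("www.yourbank.com", "https://www.yourbank.com/login"))  # False
```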

If the target is going to take the bait, research suggests she will likely do so within the first hour of receiving the phony e-mail. A study by secure-browsing solutions provider Trusteer revealed that 50 percent of people who fall victim to a phishing attack do so within the first 60 minutes. Hence, the industry has focused considerable effort on spotting attacks quickly, so that the fraudulent site can be blocked or disabled. On the positive side, because phishing has become one of the nastier side effects of a virtual world where primal defense mechanisms are compromised, it has also entered the public discourse. Accordingly, the number of phishing attacks reported at the end of 2010 had declined considerably over the preceding months.5

But hackers are not easily deterred, and there is no shortage of creativity in their arsenal. A newer flavor of phishing is more insidious in its approach. Using information assembled through social media and other sites, hackers are evolving from spraying millions of e-mail addresses with the same fraudulent message to targeting specific victims while masquerading as someone much closer to home. You may delete outright an e-mail requesting your Social Security number from your bank or another company. At a minimum, your spider senses may compel you to contact the organization to confirm that the e-mail is legitimate. But what if the same e-mail purportedly came from your mother, through her e-mail address? What if the veiled threat appeared to come from someone who actually knows you—someone you intimately trust?

Known as spear phishing, this is one of the newer threats to hit the cyber scene. “It’s a really nasty tactic because it’s so personalized,” says Bruce Schneier, chief security technology officer of the British company BT Group. “This is hacking the person. It’s not hacking the computer.”6 In June 2011, Google reported discovering and deflecting such an attempt to compromise hundreds of Gmail passwords and monitor the accounts of prominent people, including government officials. The threat is not limited to consumers. Increasingly, enterprises find themselves targets as unsuspecting employees innocently respond to e-mails designed to compromise the firm’s security perimeter.

Bots, faceless predators, hackers. We can now add a new category of threat to the online waters: those who dig deep enough to appear to know us and who cloak themselves in the disguise of a friendly face to con us.
