11
Online Deception

As already demonstrated in this book, in some ways we behave differently online from how we behave in the physical world. In Chapter 2 we examined whether individuals might explore and experience different identities in cyberspace from their identities offline. In Chapter 3, Chapter 4 and Chapter 5 we learnt that the ways in which people relate, develop friendships and form romantic relationships in cyberspace can be somewhat different from the ways in which we form relationships in the physical world. We often self‐disclose more online than we would face to face, and some people develop ‘hyperpersonal’ relationships in cyberspace. Paradoxically, although we might sometimes open up more about ourselves online, we also hold back or censor information. Moreover, some researchers have found that people can be, or at least attempt to be, more deceptive in cyberspace than they are offline. Whitty and Joinson (2009) have named this phenomenon the ‘truth/lies paradox’, writing:

In many ways the Internet is a very different medium from, for example, the telephone and [face to face]. What makes this space unique is how we communicate within it. … [O]ften our communication is ‘hyperhonest’ and paradoxically is often ‘hyperdishonest’. These two contrasting features should be of concern for scholars, web designers and of course the users of the Internet. (pp. 6–7)

This chapter examines the literature on online deception and examines in what ways individuals might be more or less deceptive in cyberspace than they are in the physical world.

11.1 DEFINING DECEPTION

Buller and Burgoon (1996) have defined deception as ‘a message knowingly transmitted by a sender to foster a false belief or conclusion by the receiver’ (p. 205). Bok (1989) defined deception as follows: ‘When we undertake to deceive others intentionally, we communicate messages meant to mislead them, meant to make them believe what we ourselves do not believe. We can do so through gesture, through disguise, by means of action or inaction, even through silence’ (p. 13). She made a distinction between deception and lies, arguing that a lie is a form of deception and is essentially ‘any intentionally deceptive message which is stated. Such statements are most often made verbally or in writing, but can of course also be conveyed via smoke signals, Morse code, sign language, and the like’ (p. 13).

Bok reminds us that deceit is not a trivial matter. Over the centuries, philosophers have debated whether lying is an immoral act and if so whether all lies are immoral or only some types of lies under certain conditions. Bok (1989) has written: ‘Deceit and violence – these are the two forms of deliberate assault on human beings. But deceit controls more subtly, for it works on belief as well as action’ (p. 18). Bok writes about individuals who have been deceived, stating:

Those who learn that they have been lied to in an important matter – say, the identity of their parents, the affection of their spouse, or the integrity of their government – are resentful, disappointed, and suspicious. They feel wronged; they are wary of new overtures. And they look back on their past beliefs and actions in the new light of the discovered lies. (p. 20)

Bok has also pointed out that deception can be disruptive for a society. In her view:

A society, then, whose members were unable to distinguish truthful messages from deceptive ones, would collapse. But even before such a general collapse, individual choice and survival would be imperilled. The search for food and shelter could depend on no expectations from others. A warning that a well was poisoned or a plea for help in an accident would come to be ignored unless independent confirmation could be found. (p. 19)

A thorough investigation of all the individual and social consequences of deceit is beyond the scope of this book. Nonetheless, previous writings do demonstrate that lying is a serious matter and that, if deceit is more prevalent online, it is a behaviour that warrants psychologists’ and social scientists’ attention.

11.2 DECEPTION IN CYBERSPACE

The Internet has provided us with a new platform on which to deceive. Online, individuals can potentially deceive a greater number of people. Moreover, cyberspace offers novel opportunities for deception not possible in face‐to‐face settings (Walther, 1996, 2007). The main reason why researchers believe that deception might be more prevalent online is that online communications and representations of the self are not physically connected to a person. Digital deception has been defined as ‘the intentional control of information in a technologically mediated message to create a false belief in the receiver of the message’ (Hancock, 2007, p. 209). We might not need a separate definition for deception that takes place in cyberspace compared with other media; however, one might be useful given that researchers have found that in some ways it is (or at least feels) easier to get away with certain types of lies online, and that we can deceive in ways that are not possible in the physical world.

Researchers have attempted to categorize the types of deception that take place online. Utz (2005) argued that there are three types of deception: category deception (gender‐switching), attractiveness deception and identity concealment. Although these distinctions are important, perhaps Hancock’s (2007) categorization is more useful. Hancock argued that there are two different types of digital deception: identity‐ and message‐based deception. Identity deception is the creation of a false identity or affiliation. This kind of deception might be easier to perform as well as to get away with in cyberspace than it is in the physical world. Message‐based deception is deception based within the content of a communication between two or more people. Scholarly work has focused on both these forms of deception.

11.2.1 Identity‐based deception

Individuals might disguise a range of identity‐based information about themselves – for example, age, gender, race, sexual preference, health and physical attractiveness. Cyberspace offers more opportunities to manage identities and anonymity (or at least visual anonymity) than the ‘real’ world, and it may foster identity‐based deception.

These days, most spaces online are not exclusively text‐based. Nonetheless, in spaces where users can select an avatar to represent themselves, they still have the opportunity and freedom to manipulate their identity (Galanxhi & Nah, 2007). Moreover, users might not feel the same way about deception carried out through an avatar as they do about deception without one. For example, Galanxhi and Nah (2007) found that, in online text‐only chat spaces, deceivers felt greater anxiety than nondeceivers. However, in an avatar‐supported environment there was no significant difference in anxiety between deceivers and truth tellers. These authors reasoned that:

In the text‐only medium, anonymity can help protect the identity of deceivers, while in the avatar‐supported medium, anonymity can be further increased by ‘wearing a mask’ to confuse or distract the recipient about one’s identity. Hence, the mask (i.e., avatar) can increase the deceiver’s perceived distance from its communication partner, thus lowering the deceiver’s state anxiety level. (p. 778)

Of course, there are many reasons why a person might engage in identity‐based deception: some motives may be malicious, while others might serve to protect the person (e.g., those who believe they hold a stigmatized identity). Bowker and Tuffin (2003) note:

Within computer‐mediated environments, people with disabilities may have much to gain as physical barriers to participation are broken down. Moreover, the online medium’s capacity to conceal physical difference brings forth the opportunity for people with disabilities to access a social space for experiencing alternate subjectivities, which operate outside the stigma often associated with disabled identities.

Notably, not all online spaces promote deception – and, as stressed throughout this book, cyberspace ought not to be understood as a homogenous space. A good example of this is illustrated in Guillory and Hancock’s (2012) study, which examined the effect of LinkedIn on identity deception. These authors focused on LinkedIn because research suggests that it is common for individuals to lie on curricula vitae. George, Marett and Tilly (2004), for example, found that about 90% of their sample lied on job applications. One obvious and compelling motivation for doing so is to appear competent to a potential employer, thereby increasing one’s chances of obtaining employment. However, Guillory and Hancock surmised that individuals might be less likely to lie on LinkedIn (an SNS designed for registered members to share details about their employment history, education and personal profile in order to establish networks of people known to them professionally) given that this is a public profile – as opposed to a traditional resume, which is often confidential. The risk of being caught out in a lie is therefore higher on LinkedIn than on a traditional curriculum vitae.

In the study by Guillory and Hancock (2012), undergraduate students aged between 18 and 22 years were randomly assigned to one of three conditions: writing a traditional offline word‐processed resume, writing a private LinkedIn profile or writing a public LinkedIn profile. They were then asked to create a resume for a consultant position with a good salary and international office locations. Participants were asked to tailor their resumes using their own information with the aim of appearing to be the most qualified candidate, and were told that the best resume would win US$100. They were next told that the true purpose of the study was to detect deception and were asked to spend 15 minutes revealing and describing their deceptions. These were coded into verifiable information (e.g., responsibilities, abilities, skills) and unverifiable information (e.g., interests, hobbies). As predicted, Guillory and Hancock found that more verifiable lies tended to be present in traditional and private LinkedIn resumes, while public LinkedIn resumes contained more unverifiable lies. While Guillory and Hancock’s work yields some interesting results, there are some obvious limitations. Using undergraduate students provides a sample of individuals with only a short employment history to embellish. Moreover, given that the participants were enrolled at university and that the university was known to the researchers, it is unlikely that the participants would lie about their university course and experience. This form of deception is commonly reported by organizations, so it would be of interest to carry out a similar study with different samples to determine whether the same findings are obtained.

11.2.2 Munchausen by Internet

Munchausen by Internet is an interesting example of identity‐based deception. A review of the literature by Pulman and Taylor (2012) discusses the notion that the Internet has increased the frequency of Munchausen syndrome (a psychological condition in which someone lies about being ill or induces symptoms of illness in themselves). Munchausen syndrome tends to be chronic, and those with this syndrome become habitual liars (Doherty & Sheehan, 2010). It was Feldman (2000) who first coined the phrase ‘Munchausen by Internet’ to describe an individual seeking attention by playing out a series of dramatic near‐chronic illnesses and recoveries on the Internet. Pulman and Taylor argue that Munchausen by Internet should be formally recognized by the American Psychiatric Association in the DSM‐5.

One classic example reported by Van Gelder (1991) is the case of ‘Alex’ and ‘Joan’. Alex, who in reality was a middle‐aged American psychiatrist, joined a chat room using the screen name ‘Shrink Inc.’ Given that his handle was gender‐neutral, the group did not realize he was male, and many assumed, mistakenly, that Alex was a woman. Alex realized that by pretending to be a woman he was able to generate vulnerability and intimacy with the women online – far more so than in his profession offline. He reported being excited by this new opportunity to help people, and created a female character with the handle ‘Talkin Lady’, who eventually told the group that her real name was Joan Sue Greene and that she was a neuropsychologist in her late twenties. Joan was said to have been in a car accident with her boyfriend, who had died, and Joan herself was paralyzed and disfigured and had lost her ability to speak. Face‐to‐face meetings were therefore physically and emotionally challenging. The persona of Joan went through a number of changes, from a woman suffering from depression and thoughts of suicide to a confident woman with many friends. Simultaneously, Alex was able to meet and engage in offline affairs with some of the chat room members via an introduction from his online persona, Joan. Alex eventually tried to kill off Joan with a terminal illness. It was at this point that the members discovered the deception – when they rang the hospital to send flowers, only to discover that there was no such patient. When members found out the truth about Joan/Alex, perhaps unsurprisingly, there was considerable outrage in the community. This case study provides a good example of Munchausen by Internet as well as a host of other deceptions. It also gives us insights into the motivations for these behaviours, including the ability to play with identity; being able to play out a caring and empathic role that was possibly unachievable as a male; the ability to garner sympathy and intimacy from group members; and being able to achieve physical intimacy with group members offline.

Pulman and Taylor (2012) believe that social psychology offers a number of theories that can explain why Munchausen by Internet occurs. Some of these theories have been summarized earlier in this book. The ‘disinhibition effect’ (see Chapter 3), for example, suggests that Munchausen by Internet is more likely to occur online given that many spaces are asynchronous, thus providing an opportunity for individuals to be creative with their identity presentation, and given that anonymity can reduce end users’ concern for other people’s opinions. Other researchers argue that some Munchausen by Internet sufferers might be motivated by the simple pleasure they receive from deceiving others online. Whether Munchausen by Internet is formally recognized in DSM‐5 might be a concern to some practitioners – but perhaps of greater interest is understanding its root causes in more detail.

11.2.3 Message‐based deception

Message‐based deception occurs when the identity of the communicants is known and the deception occurs within the content of the communication. This type of deception occurs more frequently than identity‐based deception in face‐to‐face or mediated interactions. Examples might include blaming congested traffic for lateness when really you delayed your travel plans, or claiming to be ill when really you prefer not to take part in an event. As with identity‐based deception, message‐based deception does not always lead to negative outcomes. DePaulo, Wetzel, Sternglanz and Wilson (2003) argue that in many cases deception can lead to improved social cohesion and can protect privacy.

With regard to identity‐ and message‐based deception, deceivers can, of course, simultaneously use both forms. Mass‐marketing fraud is a good example of the use of both forms – where the deceiver takes on a different persona from their own and also uses message‐based deception (e.g., geographic location, the reasons why they are requesting money). Mass‐marketing fraud and other cybercrimes will be considered in Chapter 12 and Chapter 13, respectively.

11.3 DO WE LIE MORE ONLINE?

Do people actually lie more on the Internet? Caspi and Gorsky (2005) found that 73% of individuals believe that deception is widespread online, and some research supports this view. Although there might be more opportunity to lie online, and there is a general perception that people do lie more in cyberspace, it is important to undertake research to determine whether this really is the case. Early research suggested that individuals did lie on the Internet about specific aspects of themselves. Cornwell and Lundgren (2001), for example, found that participants were more likely to lie about their age and physical attributes to their online romantic partners than in face‐to‐face relationships. Whitty and Gavin (2001) found similar results, with participants believing that it was socially acceptable to lie in order to create opportunities for others to communicate with them online. They understood this type of lying to be ‘white lies’ that were not meant to be harmful or malicious. Moreover, the participants believed this type of deception was common online.

In more recent research, Naquin, Kutzberg and Belkin (2010) found that participants were more willing to lie when communicating via email than via pen and paper, and felt more justified in doing so. Zimbler and Feldman (2011) examined deception in 15‐minute conversations via email, instant messenger and face to face and found deception to be more evident in the digital media than face to face. Although Naquin et al.’s and Zimbler and Feldman’s research provides us with some interesting findings, these studies are nonetheless limited, given that they were conducted in the lab rather than in real life. In real life, individuals have a choice of media and communicate both with people known to them and with strangers.

More systematic research has examined the likelihood that someone might tell a lie over a range of media, including face to face and over the telephone. Such research has also drawn distinctions between the various types of media available online. For example, a text‐only space might provide more opportunities for deception compared with a space where individuals are required to upload photographs and videos of themselves. This research is detailed below.

11.3.1 Theories to predict deception

There have been a number of theories developed to predict in which types of media individuals are more likely to lie. Supporters of the ‘social distance theory’ argue that, because lying makes individuals feel uncomfortable, they will choose leaner or less rich media in order to maintain social distance between themselves and the person they are lying to; that is, they will avoid media that contain cues people believe will give away deceit (e.g., voice and body language). Moreover, in less rich media the deceiver has more control over the interaction – any unexpected questions can be thought about rather than having to be responded to immediately. If the social distance theory were supported by empirical research, we would find that people lie most in email, followed by instant messaging, then phone and then face to face. In contrast, the ‘media richness theory’ holds that, because lying is highly equivocal, individuals elect to lie more in rich media, which offer multiple cue systems, immediate feedback, natural language and message personalization. Hence, this theory predicts that individuals will lie most in face‐to‐face situations, followed by phone, instant messaging and then email.

Hancock (2007; Hancock, Curry, Goorha & Woodworth, 2005; Hancock, Thom‐Santelli & Ritchie, 2004) and his colleagues have proposed an alternative theory, which they refer to as the ‘feature‐based model’. They devised this theory after testing the two theories above and finding that neither was supported by their research. In their original study, they examined the lies of 28 students using a diary study that ran for seven days, in which the participants recorded all instances of lies (Hancock et al., 2004). The participants engaged in around six interactions a day and lied on average 1.6 times a day (26% of all social interactions included a lie). The most lies occurred during face‐to‐face interaction (n = 202, 1.03 per day), followed by telephone (n = 66, 0.35 per day), instant messaging (n = 27, 0.18 per day) and email (n = 9, 0.06 per day). The largest proportion of lies within a medium occurred on the telephone; the smallest proportion occurred via email. Interestingly, the more experience people had with email, the more lies they were likely to tell using that medium (a relationship that was not found for instant messaging).
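
To make the bookkeeping behind such figures concrete, the sketch below (in Python) shows how diary entries can be tallied into lies per person per day and the proportion of interactions within each medium that contained a lie. The record format and the example entries are hypothetical and purely illustrative; they do not reproduce Hancock et al.’s data.

```python
# Toy tally of diary entries into the statistics typically reported in diary
# studies of lying: lies per person per day in each medium, and the proportion
# of interactions in each medium that contained a lie. The entry format and
# the example data below are hypothetical, not Hancock et al.'s (2004) data.

from collections import Counter

DAYS = 7
N_PARTICIPANTS = 2   # Hancock et al. followed 28 students; kept tiny here

# each entry: (participant_id, medium, contained_a_lie)
entries = [
    (1, "face to face", True), (1, "phone", True), (1, "email", False),
    (2, "face to face", False), (2, "instant messaging", True),
]

interactions, lies = Counter(), Counter()
for _, medium, lied in entries:
    interactions[medium] += 1
    if lied:
        lies[medium] += 1

for medium in interactions:
    per_day = lies[medium] / (N_PARTICIPANTS * DAYS)   # lies per person per day
    proportion = lies[medium] / interactions[medium]   # share of interactions with a lie
    print(f"{medium}: {per_day:.2f} lies/day, {proportion:.0%} of interactions")
```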

To explain their findings, Hancock et al. (2004) offered the feature‐based theory (set out in Table 11.1 below). This theory sets out three dimensions that need to be considered when we examine deception: whether the medium is synchronous, whether it is recordless and whether it is distributed (i.e., not co‐present). The feature‐based theory proposes that the more synchronous and distributed, and the less recordable, a medium is, the more frequently lying should occur. One lies more in synchronous interactions because the majority of lying is spontaneous, and hence synchronous communication should present more opportunities to lie. In recorded communication, one is aware that the conversation is potentially kept or stored (e.g., in a saved email) and can be referred to in future conversations; hence, one is less likely to lie if one is aware that there is proof of the lie that can be referred to later. In media where participants are not distributed, deception should be constrained to some degree, as some lies can be immediately obvious (e.g., it would be difficult on a nondistributed medium to claim that one is writing a report when really one is playing a computer game).

Table 11.1 The feature‐based model: ranking predictions of likelihood of lying (adapted from Hancock et al., 2004)

                      Face to face   Phone   Instant messaging   Email   SNS   SMS
Media features
  Synchronous              x           x             x             –      –     –
  Recordless               x           x             –             –      –     –
  Distributed              –           x             x             x      x     x
Lying predictions
  Feature based            2           1             2             3      3     3
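
The ranking logic behind Table 11.1 can be sketched in a few lines of Python. The feature matrix mirrors the table; the scoring rule (count the lie‐facilitating features and rank media by that count) is an illustrative assumption rather than Hancock and colleagues’ formal specification.

```python
# Minimal sketch of the feature-based model's ranking logic (Table 11.1).
# Feature matrix follows the table; the count-and-rank scoring rule is an
# illustrative assumption, not Hancock et al.'s formal model.

MEDIA_FEATURES = {
    # medium: (synchronous, recordless, distributed)
    "face to face":      (True,  True,  False),
    "phone":             (True,  True,  True),
    "instant messaging": (True,  False, True),
    "email":             (False, False, True),
    "SNS":               (False, False, True),
    "SMS":               (False, False, True),
}

def predicted_ranks(media_features):
    """Rank media from 1 (most lying predicted) downwards; ties share a rank."""
    scores = {medium: sum(features) for medium, features in media_features.items()}
    distinct_scores = sorted(set(scores.values()), reverse=True)
    return {medium: distinct_scores.index(score) + 1 for medium, score in scores.items()}

for medium, rank in sorted(predicted_ranks(MEDIA_FEATURES).items(), key=lambda kv: kv[1]):
    print(rank, medium)
# Reproduces Table 11.1: phone = 1; face to face and instant messaging = 2;
# email, SNS and SMS = 3.
```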

Research since Hancock et al.’s (2004) study has found that the feature‐based theory does not necessarily hold when considering the target of the lie and the type of lie being told. Whitty and Carville (2008), for example, asked 150 participants to rate on a Likert scale how likely they were to tell different types of lies across different media. They found that individuals were overall more likely to tell self‐serving lies to people not well known to them. An example of a self‐serving lie identified in the study is described as follows:

You are having a [face‐to‐face] conversation with someone that you are ‘close to’ when they invite you to an event. You can think of something else you would rather spend your time doing so you tell them that you can’t make it to the event, even though you can. (p. 1025)

These researchers argue that it is more risky and difficult to get away with telling a self‐serving lie to individuals the liar feels close to – given that people close to us have more information about our day‐to‐day lives. For self‐serving lies, individuals stated they were more likely to tell a lie in email, followed by phone and lastly face to face. This was the case regardless of whether the target was someone close or someone not well known to the liar. Such a finding supports the social distance theory. Whitty and Carville (2008) explain that self‐serving lies are more likely to make the liar feel uncomfortable and apprehensive, and so email is the ideal place to tell such a lie. When considering other‐oriented lies, participants in this study were more likely to believe that they would tell these lies to individuals they felt close to. An example of an other‐oriented lie was:

You receive an email from a person you don’t know well. Within the email they ask you if you think they look attractive. You don’t think that they are attractive but you don’t want to hurt their feelings so you email them back and tell them that they are attractive. (p. 1025)

Other‐oriented lies are typically told to protect the feelings of the target of the lie. Given this, Whitty and Carville argue, one might feel more compelled to lie to a person close to oneself (than to someone not close) to protect their feelings rather than telling the truth, which could cause them upset or distress. Participants in this study believed the type of medium would not influence the likelihood of them telling an other‐oriented lie to someone close to them. Perhaps this is because the purpose of this type of lie is to maintain the integrity of the target, and one ought to be motivated to do this for someone one cares about in any type of medium. Such lies are not told to hurt people but are intended to make others feel better about themselves; the more one cares for another, the more motivated one presumably is to utter such ‘white lies’. In contrast, when it came to telling an other‐oriented lie to individuals not well known to them, the participants claimed they would be least likely to tell the lie in email and most likely to tell the lie face to face. Again, this did not support the feature‐based theory but instead supported the social distance theory. Whitty and Carville (2008) state:

It is argued that people are more likely to talk aggressively in CMC … than face‐to‐face because online there is a lack of social presence and less contextual cues. This is perhaps why this current study found that individuals were more likely to say a hurtful truth than an other‐oriented lie to individuals not well‐known to them in email. The social distance, in this particular case, motivates the person to tell unpleasant truths. (p. 1029)

One of the major criticisms directed at the above study is that it is based on hypothetical scenarios. It is difficult to ascertain from such a study whether people would lie in reality, and so we need to treat the results with caution. In order to feel confident that the results can be replicated, they need to be tested in the real world – in a diary study, perhaps, akin to Hancock and his colleagues’ work.

Whitty et al. (2012) sought to replicate Hancock and colleagues’ earlier work by drawing on a larger sample, examining a greater number of media types and distinguishing between spontaneous and planned lies. In their study, 76 individuals completed a diary study focused on six modes of communication: face to face, telephone, SNSs, instant messaging, email and text messaging (SMS). Based on Hancock and colleagues’ findings, they hypothesized that the most lies would be told via the telephone, followed by face to face and instant messaging, and then SNS, email and SMS (see Table 11.1). Their hypothesis was only partially supported: participants were more likely to lie on the telephone, followed by face to face, with no significant difference found between face to face and instant messaging, as predicted. However, the data did not support the hypothesis that participants would lie more on instant messaging than on the other digital media, except in the case of SMS, where, as predicted, participants lied more on instant messaging. This might be explained as follows: media features are not the only variable we need to consider when predicting deception, and/or other features might need to be considered within the model. The researchers add a further explanation: that instant messaging should perhaps be considered ‘near synchronous’ rather than synchronous.

11.4 DETECTING DECEPTION

People are typically poor at detecting lies – even those trained to detect lies, such as police officers (Vrij, 2008). It should therefore come as no surprise that psychologists have been interested in detecting deception since long before there was an Internet. It is, of course, interesting to detect deception in people’s everyday lives, but it is also useful in helping to discern whether a suspect is guilty or innocent. Cognitive psychologists have considered how to improve the detection of deception in verbal accounts by increasing the cognitive load involved in narrating them. Vrij et al. (2008), for example, found that detection of deception was improved when truth tellers and liars were asked to report their stories in reverse order.

Lies told by serious offenders might be even harder to detect, given that these people are likely to have developed more accomplished strategies to deceive. In fact, research has found that offenders outperform nonoffenders in detecting lies. Moreover, the same researchers found that 94% of nonoffenders believed that lying required more mental effort than telling the truth, while only 60% of criminal offenders reported this opinion (Hartwig, Granhag, Stromwall & Anderson, 2004). Given the difficulties in detecting when a potential offender might be deceiving, there has been much research on how to improve interviewing techniques in order to catch out a liar (e.g., Dando, Bull, Ormerod & Sandham, 2015). One such strategy is to increase the cognitive load when questioning someone in order to determine whether they are a deceiver. Researchers in the UK have developed a security screening method that they refer to as ‘controlled cognitive engagement’ (CCE). This is a method security agents can apply to control an interview so that a passenger provides information that can be tested for veracity (Dando, 2014; Ormerod & Dando, 2015). According to Ormerod and Dando (2015), CCE ‘embodies each of the six techniques shown in laboratory studies to improve deception detection rates: use of evidence; tests of expected knowledge; effective questioning styles; observation of verbal manoeuvring; asymmetric cognitive loading; and changes in verbal behaviour’ (p. 78). This type of interviewing tests for expected knowledge, so that truth tellers experience a friendly, informal conversation while deceivers have their cognitive load increased, given that they have to ensure their responses remain consistent and appear factual. The researchers have found this method to be highly successful in experimental studies testing for deception in aviation security screening.

In nonoffender populations, detecting deception is said to be slightly easier, although still challenging. Deceivers tend to exhibit overcontrolled behaviour, using fewer ‘illustrators’, such as body movements and gestures, to consciously convey information (Granhag & Strömwall, 2002). Nonoffender liars often consciously reduce their rate of speech (Vrij, 2008). Moreover, because deception might elicit feelings of guilt and fear, when individuals lie they often show an increase in stress cues, such as nervous smiles, speech hesitations and disturbances, and fidgeting (DePaulo et al., 2003; Vrij, Edwards & Bull, 2001). Although nonverbal cues might be useful in detecting deception, the problem is that in real life people are not always recorded when they deceive. We have already learnt in this chapter that people often avoid recordable forms of media when they lie. Nonetheless, some lies are told on recordable media, and, when people attempt to deceive using these media, there is a record that could be examined to help detect any potential deception.

The Internet has, therefore, opened up new opportunities to detect deception – especially in criminal cases, where one can potentially collect recorded material for the purposes of collating forensic evidence from around the time when a crime was believed to have taken place. Computer scientists and behavioural scientists have combined knowledge and skills in innovative research to detect deception in online environments. Research by Rashid et al. (2013), for example, can detect criminals who hide behind multiple identities (so‐called digital personas). Their work examines word frequencies as well as key grammatical categories and semantic fields to establish a ‘stylistic language fingerprint’ for an author. Applying their methods, these researchers are able to establish, with a high degree of confidence, the age and gender of the ‘real’ person behind a digital persona, which helps to elucidate whether someone is creating an online persona that varies substantially from who they really are. This work can be applied to the online detection of criminals, such as child sex offenders, romance scam criminals and those involved in the radicalization of youth.
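
As a rough illustration of the general idea behind a stylistic language fingerprint, the toy sketch below represents each text by the relative frequencies of a handful of common function words and compares two profiles with cosine similarity. Rashid et al.’s actual method uses a much richer feature set (key grammatical categories and semantic fields); the word list, example texts and similarity measure here are assumptions made purely for illustration.

```python
# Toy 'stylistic fingerprint': relative frequencies of common function words,
# compared with cosine similarity. Not Rashid et al.'s (2013) method; the word
# list and example texts are illustrative assumptions only.

import math
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "i", "for", "you", "with", "but", "so"]

def fingerprint(text):
    """Relative frequency of each function word in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[word] / total for word in FUNCTION_WORDS]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# A high similarity between two personas' fingerprints would suggest (though
# not prove) that the same author could be behind both.
persona_a = "I was in the city and it was raining, so I stayed in for the day."
persona_b = "It was late and I was tired, but I went out to the shops with a friend."
print(round(cosine_similarity(fingerprint(persona_a), fingerprint(persona_b)), 3))
```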

11.5 CONCLUSIONS

The Internet has opened up new opportunities to lie and has increased the likelihood of identity‐based deception (e.g., changing one’s age, gender or physical description). Counterintuitively, however, researchers have found that people are more likely to lie on the telephone than via other media (e.g., face to face, email). The feature‐based model was developed to account for why individuals might lie more on the telephone but, as is pointed out in this chapter, there is room to develop this model further to take into account the complexity of lies (the type of lie, the person being lied to, etc.).

As discussed in this chapter, not all people who tell lies do so with the intention to cause harm. Nonetheless, there are many reasons why researchers as well as others (e.g., law enforcement, security, industry) might want to learn where people are more likely to lie and how to detect deception. Although the Internet has provided new opportunities to lie, it has, in turn, also opened up new opportunities to detect deception. This chapter elucidates the benefits of working in interdisciplinary research teams to develop more effective methods for the detection of deception. Moreover, when considering detecting deception, researchers point out that methods need to make a distinction between noncriminals and individuals who engage in criminal activities, who have more reasons to lie and more practice at deception. There are specific types of deception that criminals engage in that academics and nonacademics have been especially interested in detecting. We examine some of these in Chapter 12, which examines online scams, phishing and illegal downloads.

DISCUSSION QUESTIONS

  1. Should lying in all circumstances be considered as immoral? If not, when is it okay to lie and why?
  2. Where do you think you might be more likely to get away with a lie? Why do you think that is? Is your line of thinking supported by any of the theoretical models on deception?
  3. Are you more likely to lie to people closer to you or to strangers? Do you use different media to lie to those close to you versus strangers? Why do you think that might be?
  4. Consider Hancock’s feature‐based model. Do you think there are other features that could be considered in this model to predict where people are more likely to deceive?
  5. In addition to what has been summarized in this chapter, what other methods might be employed using digital media to detect deception? How would you go about testing the effectiveness of the methods you have proposed?

SUGGESTED READINGS

  1. Galanxhi, H. & Nah, F. F.‐H. (2007). Deception in cyberspace: A comparison of text‐only vs. avatar‐supported medium. International Journal of Human–Computer Studies, 65(9), 770–783.
  2. Guillory, J. & Hancock, J. T. (2012). The effect of LinkedIn on deception in resumes. Cyberpsychology, Behaviour, and Social Networking, 15(3), 135–140.
  3. Hancock, J. T., Thom‐Santelli, J. & Ritchie, T. (2004). Deception and design: The impact of communication technologies on lying behavior. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 129–134). New York, NY: Association for Computing Machinery.
  4. Ormerod, T. C. & Dando, C. J. (2015). Finding a needle in a haystack: Veracity testing outperforms behaviour observation for aviation security screening. Journal of Experimental Psychology: General, 144, 76–84.
  5. Rashid, A., Baron, A., Rayson, P., May‐Chahal, C., Greenwood, P. & Walkerdine, J. (2013). Who am I? Analysing digital personas in cybercrime investigations. Computer, 46(4), 54–61.
  6. Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities (2nd ed.). Chichester, UK: John Wiley & Sons.