Chapter 13

Psychology and security

Utilizing psychological and communication theories to promote safer cloud security behaviors

Gráinne Kirwan    Institute of Art, Design and Technology, Dublin, Ireland

Abstract

While technological devices and programs form the bulk of the defense mechanisms against malicious attacks and infiltrations, the human element in cloud security must also be factored into any protection strategy. The use of weak passwords and other aspects of poor security hygiene can dramatically reduce the efficacy of technological protective measures. While users may be aware of the best methods of ensuring increased security, they are not always inclined to follow these directions, particularly where they perceive the methods involved as being difficult, inconvenient, or ineffective, or if they feel that they do not have the skills required to implement the measures. This chapter examines how various theories and research in communication and psychology can help management to understand why users may not follow best practice in cloud security, and how they can encourage users to change such behaviors. The chapter describes theories such as Communication Privacy Management Theory, Protection Motivation Theory, and Social Learning Theory, as well as various cognitive biases and distortions, particularly in relation to decision making. For each theory and concept, the applicability to cloud security behaviors is considered, illustrating how such phenomena may manifest in appropriate security behaviors, or lack thereof. The chapter continues with insights from psychology on how behaviors can be changed, drawing from the implications of the communication and psychological theories previously described. Specific approaches for the development of intervention and education strategies are proposed which may encourage staff and individual users to more actively engage in safer cloud security behaviors.

Keywords

Cognitive psychology

Learning theories

Online security

Passwords

Communication Privacy Management

Hyperpersonal Communication

Protection Motivation Theory

1 Introduction

The managing director is confident that he has taken all of the security measures needed to protect his company’s cloud-based data—his IT manager has tested the system, the software and hardware are the best that money can buy, and employees are given excellent training in data protection. On the day that a security breach is uncovered, he is perplexed and horrified, and he begins an investigation into why the breach occurred. Eventually, he discovers that one of his management team has used the same password for their company file access as they do for an online dating account. When the online dating service was hacked some time beforehand, the infiltrators gained access to the password and matched it to the employee. The managing director is confused—the employee training explicitly stated that the password for the company services should be unique, so why did this happen?

While technological devices and programs form the bulk of the defense mechanisms against malicious attacks and infiltrations, the human element in cloud security must also be factored into any protection strategy. The use of weak passwords and other aspects of poor security hygiene can dramatically reduce the efficacy of technological protective measures. While users may be aware of the best methods of ensuring increased security, they are not always inclined to follow these directions, particularly where they perceive the methods involved as being difficult, inconvenient, or ineffective, or if they feel that they do not have the skills required to implement the measures. This chapter examines how various theories and research in communication and psychology can help management to understand why users may not follow best practice in cloud security, and how they can encourage users to change such behaviors. The chapter describes theories such as Communication Privacy Management (CPM) Theory, Protection Motivation Theory, and Social Learning Theory, as well as various cognitive biases and distortions, particularly in relation to decision making. For each theory and concept, the applicability to cloud security behaviors is considered, illustrating how such phenomena may manifest in inappropriate security behaviors. The chapter continues with insights on how behaviors can be changed, based on the theories described. Specific approaches for the development of intervention strategies are proposed which may encourage staff and individual users to more actively engage in safer cloud security behaviors. The chapter concludes with suggestions for further reading regarding several of the theories and phenomena discussed.

2 Communication theories

Communication is an important part of our daily lives, and it is an area that has been extensively researched, both online and offline. Certain aspects of communication theories are applicable to the promotion of online security; those of particular relevance here are CPM Theory and Hyperpersonal Communication.

2.1 CPM Theory

CPM Theory was outlined by Petronio (2002) and has been applied to many aspects of online communication (see, e.g., Child et al., 2012; Jin, 2012; Kisekka et al., 2013). CPM Theory comprises several basic principles, including that individuals or groups believe that they own their private information and have the right to control how and where this information is disseminated. The user(s) also presume that anyone else who holds this private information will follow the rules that they dictate regarding the sharing of that information. Trust is therefore a key element of how we share information.

From a company’s perspective, CPM is important as it partially describes the relationship that the end user has with the organization. The user may be comfortable sharing certain information with the company, such as spending habits and preferences, home address, telephone number, e-mail addresses, and other personal information. Such information has become valuable in modern society, however, and has often been sold or shared with other organizations. In other cases, the information has been shared due to legal requirements to provide data to investigative organizations, or through the breach of security by cybercriminals. While such data losses may not be the fault of the organization, users may still perceive them as a breach of trust and may become hesitant to share information with the organization at all. Should enough such incidents occur, the overall effect may be damaging, as individuals become wary of disclosing their details online.

An awareness of CPM can help organizations to understand the trust that users place in them and the importance of not breaking that trust. Should private information be unintentionally disclosed, the organization needs to take care to rebuild the trust relationship, both for its own benefit and to contribute to the continued future of online interactions as a method of communication and commerce. Essentially, the more we trust an individual or organization, the more likely we are to disclose private information to them—high trust compensates for low privacy (Joinson et al., 2010).

2.2 Hyperpersonal communication

Sometimes it is easier to disclose personal or private information when you do not see the other person face-to-face, or if you think that it is a once-off meeting and you will never meet them again (Rubin, 1974, 1975). Certain aspects of Computer-Mediated Communication (CMC) can also result in higher perceived levels of affection and emotion, a phenomenon which Walther (1996, 2007) referred to as Hyperpersonal Communication. This can result in increased levels of personal disclosures online (Jiang et al., 2011).

Walther (1996) suggested that different elements of online communication influence hyperpersonal CMC—most notably the receiver, sender, feedback, and asynchronous channels of communication. In short, he suggests that it is possible that the lack of face-to-face social cues can result in communicators exaggerating the more subtle cues available in the messages, and this may result in stereotyped and possibly idealized impressions of the other person. This is exaggerated by the potential selective self-presentation that may be conducted by the sender—when communicating online we can choose to edit or omit information that might be obvious in a face-to-face discussion. For example, if a speaker makes a point that we disagree with, we may unintentionally show this disagreement in our facial expression. In a face-to-face meeting, the speaker may notice this, but if our communication is a text-only online interaction then we have the opportunity to hide our reaction. We can tailor the information that we present to the situation, putting our best selves forward, in a similar way that individuals might do when preparing a job application or curriculum vitae.

Similarly, when communicating online, we can develop an overly positive opinion of our communication partner, be they an individual, a group, or an organization. There is therefore a risk that we trust them more than we should, given our limited knowledge of the entity, and that we offer more private information than we otherwise would, at a much earlier stage. Such disclosures may put an individual’s or an organization’s security at risk.

3 Cognitive psychology

Imagine that you are responsible for the security of 600 files. You have been given the choice of two security programs, and you must choose one of them. The first guarantees the protection of 200 of the files, but no more. The second offers to protect all 600 files, but with only a one-in-three chance that none of them will be compromised. Which would you choose?

Later you are asked to make another decision regarding the files. This time you are required to choose between a program that admits that 400 of the files will be left vulnerable and a program that has a two-in-three chance of leaving all of the files compromised. Which would you choose?

Chances are that, should you be required to make these decisions in real life, you would probably choose the first program in the first scenario and the second program in the second scenario (having both scenarios presented here in close succession means that you may have noticed the similarity between the questions, which may affect your answer). In fact, the options are equivalent in expectation: the guaranteed program protects 200 files, while the risky program protects all 600 files one time in three, an average of 200 files, and the second scenario simply restates the same choice in terms of losses. The difference between the choices is based on how they were framed—specifically, whether they were given a positive spin (in the first scenario) or a negative spin (in the second scenario). This framing effect, described by Tversky and Kahneman (1981), is one of many different cognitive biases that we can be susceptible to (see also Tversky and Kahneman, 1974). In practice, when we think about the possible risks of sharing information online, how the risks are presented to us may have an impact on the decisions that we make—when a positive frame is presented we are more likely to avoid risk, whereas when a negative frame is presented we tend to seek risk. Being aware of this bias can help us to make more informed decisions.

However, there are many other biases. For example, confirmation bias (Einhorn and Hogarth, 1978) suggests that once we have a tentative hypothesis, then we are more likely to seek information that will confirm this, possibly avoiding or ignoring information that conflicts with it. So we may be aware that there are risks when we are not careful with our online security, but we are also aware that many users who are not careful have not been victimized. We also know (or think that we know!) that despite behaviors in the past that were not ideal, our data appears to be safe, and we have not experienced any negative consequences as a result. Based on this information, we might feel that we do not need to worry—our tentatively held belief that extensive security procedures are unnecessary is “confirmed,” and we do not alter our behaviors. The employee in the case study at the start of this chapter has probably used the same password for multiple accounts in the past and has not noted any security problems emerging. For this reason, he feels that it is acceptable to use the same password for his company access and online dating account, despite it being against policy.

A third important bias is the availability heuristic (Tversky and Kahneman, 1973). This refers to our tendency to lend more weight to information that we encounter, or think about, most often. For example, immediately following news reports of a crash of a passenger jet, individuals may become more wary of traveling by air, despite the relative safety of air travel compared to many other modes of transportation. Similarly, immediately following news reports of successful infiltrations, or on hearing about a friend or colleague who has recently been victimized, we may be more conscious of the risks, and hence more likely to engage in security-focused behaviors. For a while, we will remember the news report, or our victimized acquaintance, and will be more careful and security conscious. However, over time, the memory of the event will fade and become less “available” to us. As this happens, it is possible that we will become more relaxed about the measures that we are taking, and hence more vulnerable to attack.

A similar concept is salience: how much a particular item in our environment grabs our attention. This can have an important impact on our decision making (see, e.g., Payne, 1980). It is important to note that cues can have more salience because of the availability of the information in our thoughts (we might be more likely to remember a person’s name because they share it with our sibling), and a cue can become more available because of its salience (the more eye-catching a logo is, the more likely we are to remember it later on). So the more that we hear about security issues, the more available the information will be to us, and hence the more likely we might be to engage in better preventative measures.

Optimism bias is the tendency to believe that good things are more likely to happen to us, and negative events less likely, than to the average person (Weinstein, 1980). For example, people tend to believe that they are less likely to become divorced than the average person, while they might put their odds of winning the lottery at higher than they really are. Interestingly, even when informed of the actual odds of a given event occurring, individuals still demonstrate optimism bias, often only incorporating this new information if it indicates that the odds were better than they originally expected (Sharot et al., 2011). Such “unrealistic optimism” can also extend to our online activities (Campbell et al., 2007)—we may tend to believe that our data is more secure than that of others, and that we are less likely to have our information compromised than most individuals. This may result in more risky behaviors online, as we perceive ourselves to be relatively safe. The employee in the case study has possibly heard of cases where passwords had been compromised, but optimism bias prevents him from considering that he is at substantial risk of such an incident himself.

Many of the biases identified in cognitive psychology, including the ones described above, can be a little disquieting. We may like to think of ourselves as careful decision makers, while in actuality we are vulnerable to many biases and errors in our decision making. Kahneman (2011) describes two types of thinking—System 1 and System 2. System 1 is a “fast” system—it is instinctive and emotional, and it makes many of the decisions that occur on a daily basis, from what words to use in a relaxed conversation with a friend, to which of the many identical cereal boxes to pick up in a supermarket. System 2 is the “slow” system that manages the decisions and tasks that require a great deal of effort, such as developing an acceptable use policy for an organization’s technologies. We need both systems in our daily lives—putting too much time and energy into simple and risk-free decisions would prevent us from being appropriately productive (and cause a huge bottleneck in the cereal aisle of the supermarket!). But using System 1 to make decisions that are complicated and important will result in poorly thought-out policies and multiple errors.

It is helpful to think about our online security decisions in terms of Systems 1 and 2. When choosing a service provider, we will possibly use System 2, particularly if trying to weigh up the potential risks and benefits of each. But when choosing a password, it is tempting to use System 1, especially if we are in a hurry or have other things on our minds. The employee in the case study outlined above used System 1 when choosing the same password for both accounts—the password was easily available to him, easy to remember, and appeared to do an adequate job of ensuring his security. Engaging System 2 requires more effort—finding inspiration, creating an appropriate password, and developing a way of remembering that password—but it does result in a more secure system. Using technological aids to turn the task of creating a new password into a System 1 decision should therefore result in greater compliance with policies, and greater security. Password management tools provide such a mechanism: if they can make the System 2 task of password development as instinctive and easy as a System 1 decision, then compliance with policies should become more widespread.

An alternative is to attempt to activate System 2 when it is the more appropriate means of making a decision. Kahneman indicates that one method of doing this is to frown—it reduces overconfidence and increases analytical thinking. Asking users to frown every time they need to generate a new password may not be appropriate or realistic, but there are other mechanisms for encouraging use of System 2. For example, systems which refuse to accept weak passwords can force users into integrating more complexity in the form of numerals, symbols, and mixed-case letters. Similarly, systems which require users to change passwords regularly and which do not permit reuse of previous passwords are also more likely to engage the System 2 decision-making process.
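As an illustration of this last point, the sketch below shows one way a system might refuse weak passwords. It is a minimal example only: the minimum length, the required character classes, and the plain-text list of previous passwords are assumptions made for the sake of illustration, not a recommended policy.

import re

# Illustrative policy values only; an organization would set its own.
MIN_LENGTH = 12
REQUIRED_PATTERNS = {
    "a lowercase letter": r"[a-z]",
    "an uppercase letter": r"[A-Z]",
    "a numeral": r"[0-9]",
    "a symbol": r"[^A-Za-z0-9]",
}

def password_problems(candidate, previous_passwords=()):
    """Return the reasons a candidate password should be rejected (empty list if acceptable)."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters long")
    for description, pattern in REQUIRED_PATTERNS.items():
        if not re.search(pattern, candidate):
            problems.append(f"must contain {description}")
    if candidate in previous_passwords:
        problems.append("must not reuse a previous password")
    return problems

if __name__ == "__main__":
    # The system refuses the weak candidate and explains why, prompting a more
    # deliberate (System 2) choice; the stronger candidate is accepted.
    for candidate in ("sunshine", "C0mplex-Passphrase!"):
        issues = password_problems(candidate, previous_passwords={"sunshine"})
        print(candidate, "->", issues if issues else "accepted")

In a real deployment, previous passwords would be compared as salted hashes rather than held in plain text; the point here is simply that refusing the weak option obliges the user to make the more deliberate choice.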

Overall, decision making and risk are intertwined, and some researchers, such as Chen et al. (2011), have considered how these influence online communication and behaviors. It is helpful to consider the biases that users and employees may be unconsciously experiencing when online, and to draw their attention to these biases where appropriate. Ideally, a system should either divert a user into critically considering what they are about to do if it might compromise security, or ensure that the automated “System 1” response is always the safest one. Unfortunately, the opposite is generally the case: current technologies and interfaces tend to encourage the user to follow System 1 at precisely the moments when the most crucial decisions need to be made. For example, we are excited by the prospect of using a new application at the point when we are asked to confirm that we have read the terms and conditions. At such a time, System 1 is likely to take over and tick the relevant box to provide our consent, without allowing System 2 to review the details. Such responses have resulted in many users being surprised by the stipulations that they have agreed to when using particular applications, and feeling that the developer has deceived them by including unexpected terms and conditions. In truth, the user has almost always agreed to these terms and conditions by ticking a checkbox or clicking a button which says “I Agree”—they simply have not read the actual policy and so are unaware of what they have agreed to.

4 Other relevant theories

In this section, we will consider some other theories that have relevance to online security behaviors. First, we will consider learning theories and the concepts of reward and punishment. Then, Rogers’ (1975, 1983) Protection Motivation Theory will be described.

4.1 Learning theories

Psychology has developed many theories of how we learn, and many of these are based primarily on the learner seeking a reward and trying to avoid a punishment (or, at least, a lack of reward for their efforts). A key example of such a theory is operant conditioning, proposed by B.F. Skinner (see, e.g., Skinner, 1938). In short, we are likely to continue to exhibit behaviors that we are rewarded for and to avoid behaviors that we are punished for. When considering whether or not to engage in a particular behavior, we may weigh the possibility of being rewarded against that of being punished and then make our choice (in criminology, and other disciplines, this is sometimes referred to as Rational Choice Theory). In practice, this has important consequences for the security measures taken by individuals. Setting an appropriately strong password, and using a different password for each account, is both difficult and frustrating. It can result in many failed attempts to gain access to an account, and the inconvenience of resetting passwords following too many failed attempts or when a password is completely forgotten. It is far easier to choose a simple password and use it for every account, thus resulting in unsafe behaviors (and situations similar to that outlined in the case study). This was noted by Tam et al. (2009), who referred to it as the convenience-security trade-off. In short, for as long as setting difficult passwords is an option rather than a requirement, and the user does not perceive any risk in using (and reusing) a simple password, they are unlikely to choose the more difficult, but more secure, option.

However, we do not have to personally experience reward or punishment in order to learn a behavior—we can learn by watching others. If we see someone steal an item and then be arrested and imprisoned, we learn that it is better not to follow their example. On the other hand, if we see a person work hard and be rewarded with a pay rise, promotion, or appropriate praise and recognition, then we may be encouraged to work harder ourselves in order to gain the same rewards. These are examples of Social Learning Theory, or observational learning, most famously researched by Albert Bandura (1965) in an experiment in which children observed adults being rewarded or punished for violence toward a toy called a Bobo doll. With regard to security procedures, this means that people may look to the individuals around them to determine whether or not to engage in safer behaviors. If they see colleagues leave passwords written beside their computers, or upload confidential documents for convenience rather than out of necessity, they may be encouraged to do the same, especially if there are no obvious repercussions for the behavior. Research has already found a link between social learning and other types of negative online behaviors, such as copyright infringement (see, e.g., Morris and Higgins, 2010), and it is possible that social learning plays an important role in the adoption of security measures.

4.2 Protection motivation theory

Nobody wants things to go wrong in their lives, and they will make many efforts to protect the people and things that are important to them. For example, they may have a house alarm installed, or take self-defense classes, or ensure that appropriate medical care is sought if there is a possibility of illness. What they do, and when, is often driven by several factors, and many of these have been outlined by Rogers (1975, 1983) in Protection Motivation Theory. Specifically, these include:

1. Perceived severity of the threatened event (how bad will the consequences be if the event occurs);

2. Perceived probability of the threatened event (how likely is it that the event will occur);

3. Perceived response efficacy of the preventative measures (will the preventative measures actually work for this threat);

4. Perceived self-efficacy in using preventative measures (does the user believe that they can successfully implement the preventative measures);

5. Potential rewards (what benefits does the user expect if avoidance of the threat is successful);

6. Potential costs (what are the sacrifices that the user must make to take the preventative measures).

Many studies have found that perceived self-efficacy is very important in determining what measures individuals will take with regard to online security (see, e.g., Lee et al., 2008; Johnston and Warkentin, 2010; Ng et al., 2009; Ng and Rahim, 2005). As noted by Power and Kirwan (2015), the importance of the self-efficacy factor may pose a problem in light of recent developments regarding international surveillance. As it has emerged that some organizations may be forced by government-run agencies to disclose user information and communications which they have collected, many users may feel that it is impossible to make their data fully secure, regardless of what preventative measures they take. A similar threat is posed by online criminals, who are constantly evolving their techniques to stay ahead of computer security (see Schneier, 2012).

5 Overcoming inhibitions to safer security behaviors

The theories above describe some of the reasons why employees and users may not engage in the safest behaviors when online. They indicate why individuals might choose weak passwords, reuse passwords, share information without fully considering whether the other party is really trustworthy, and overestimate their safety and security online. In some cases, users may be aware of these biases and distortions; in others, they are unaware of the psychological and communication theories that may explain these behaviors.

Based on the theories outlined, we can develop some specific suggestions for how to overcome inhibitors to safer security behaviors.

1. Based on CPM, it may be helpful to present cues to users before they share any information. For example, before a file with customer data is shared with a third party, the employee could be prompted to answer a number of questions to ensure that such sharing is appropriate. These could include whether the potential recipient of the information is trustworthy; what the recipient might be expected to do with the data; whether the recipient needs all of the data that the employee is about to send; and whether, if the employee were the third party, they would reasonably expect the data provided to be shared. Such an approach might be effective, but would not be suitable for routine cases, partly because completing these questions may become very irritating and time-consuming, and partly because, if this approach is used too frequently, employees may develop shortcuts to completing the questions without actually considering the individual case.

2. Hyperpersonal Communication Theory could be utilized in aiding users and employees to think twice about the individuals that they are communicating with before sharing information with them. The user could be prompted to review their communications with an individual before deciding whether or not to share information with them. They could also be prompted to consider if it is appropriate to share personal information with an individual who they have not previously met, and also if it is necessary to share this information with them.

3. Given the known tendency for individuals to be risk-averse when decisions are presented in a positive frame, it could be helpful to utilize this bias when presenting the prompts described above. The language used in the prompts is important, as is the perceived source of the prompts (e.g., if the perceived source of the prompts is a trusted colleague or manager, then they may be adhered to more than if the perceived source is an impersonal computer program).

4. Be aware of confirmation bias—use mandatory checklists to demonstrate the appropriate security requirements, hence reducing the likelihood that individual users can avoid or ignore contradictory evidence. Ensuring that all of the relevant information is clearly visible reduces the possibility that only confirmatory evidence is attended to. Such checklists may also help to overcome optimism bias by including details of the risks involved should any item be neglected. However, as with item 1 above, care must be taken to ensure that the completion of these checklists does not become so routine that they are not thoroughly read and understood each time.

5. Organizations can take advantage of salience and the availability heuristic by making the risks of poor security behaviors more visible. Use case studies and visual cues to remind employees and users about the consequences of poor security, and ensure that these include tangible and vivid examples of the potential repercussions. If the user can easily visualize the potential harms, they are more likely to remember them.

6. Where possible, try to ensure that decisions about online security risks engage System 2, so that users deliberate about their actions appropriately. Do not allow shortcuts to be taken, and ensure that a full risk-benefit analysis is completed before a decision can be confirmed. If situations occur where users will be tempted to use System 1, ensure that such use is “failsafe”—meaning that their automatic response is the one which will result in higher security. For example, when prompted to create a password, the easiest option should be the development of a strong password via an automated generator (a brief sketch of such a generator is given after this list).

7. Bear in mind the research regarding learning when trying to encourage safer behaviors. Provide appropriate rewards for following security guidelines (while also considering that not following the guidelines, or seeking shortcuts, is a reward in itself due to the reduced effort required). The rewards offered for following security guidelines should be sufficiently desirable and visible that they are more tempting to employees than the avoidance of the effort required to obtain them. When a behavior is rewarded it is more likely to be repeated.

8. Similarly, ensure that a security-conscious organizational culture is developed, in which employees learn from watching others being rewarded for such safe behaviors. This could be linked to an existing bonus system, or small, ad hoc incentives such as spot prizes or recognition awards could be utilized. Social Learning Theory and employee motivation can thus be combined to ensure that security is enhanced.

9. Consider the elements of Protection Motivation Theory, and in particular, consider the role of perceived self-efficacy. Give users and employees the training and tools that they require to fully protect themselves and the data that they are responsible for, and provide them with the confidence that they can manage their online security appropriately. Giving individuals ownership of their own security measures may also help, as they feel that they have some control over what happens to the data. This might include the opportunity to make decisions regarding what information they provide and under what circumstances it is shared with others.
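As a concrete illustration of the “failsafe” default mentioned in point 6, the following minimal sketch generates a strong password automatically. The length and character classes are assumptions made for this example; in practice an organization would align them with its own policy, or rely on an existing password management tool instead.

import secrets
import string

def generate_password(length=16):
    # Character classes assumed for this illustration; align these with organizational policy.
    classes = [string.ascii_lowercase, string.ascii_uppercase, string.digits, "!@#$%^&*-_"]
    alphabet = "".join(classes)
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Redraw until the candidate contains at least one character from every class.
        if all(any(ch in cls for ch in candidate) for cls in classes):
            return candidate

if __name__ == "__main__":
    print(generate_password())  # e.g. "q7R$kP2wVn9hT*x_" (output differs on every run)

Offering a generated password of this kind as the default option turns the effortful System 2 task of password creation into a simple System 1 acceptance, without sacrificing the strength of the result.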

It must also be remembered that many users do know what they should do to increase security but fail to implement these strategies, a discrepancy sometimes called the knowing-doing gap (see, e.g., Workman et al., 2008; Cox, 2012). A number of the suggestions above are developed with this gap in mind, attempting to find ways to encourage users to engage in behaviors that they know are appropriate, even though they may prefer to choose an alternative course of action. However, individuals exhibit the knowing-doing gap for many reasons, and not all users will be persuaded by the above strategies. It may be helpful to consider users individually, determining their particular reasons for such behaviors and attempting to develop a personalized strategy where this is appropriate and permissible under company policy.

In the case study at the start of the chapter, the employee had been given training on security and had been advised not to use their company password for other activities, but they failed to comply. There may be many reasons for this behavior, such as lack of ability, low motivation, high stress, or poor decision-making strategies. Preventing such behavior in the future requires determining the primary reasons why the training was not adhered to. It may be necessary to employ multiple techniques across the organization to increase security behaviors, so as to reach and influence the entire workforce. As an example, an employee with low technical skills may not be motivated by an organizational strategy that offers incentives for the use of security behaviors that require advanced skills. However, they may feel relief if the stress of developing and memorizing complex passwords is reduced because their organization provides training in the use of password management tools.

6 Conclusion

By necessity, this chapter has only been able to consider a small subset of the relevant psychological theories, and the interested reader is directed toward the texts and readings listed below, where they will find more detail on the theories outlined here, as well as many further theories that are relevant to online security. It is evident that theories taken from psychology and communication studies have relevance to the development of effective online security procedures, and that specific practical recommendations can be developed from them. It should be noted that the relative effectiveness of many of these recommendations has yet to be tested, and future research should aim to empirically assess their benefits. Nevertheless, they provide direction for users, employees, and managers when considering potential methods of increasing adherence to security protocols.

Suggested further readings

 Child, J.T., Haridakis, P.M., Petronio, S., 2012. Blogging privacy rule orientations, privacy management and content deletion practices: the variability of online privacy management activity at different stages of social media use. Comput. Hum. Behav. 28, 1859-1872.

 This article examines how Communication Privacy Management Theory can be applied to online privacy.

 Cox, J., 2012. Information systems user security: a structured model of the knowing-doing gap. Comput. Hum. Behav. 28, 1849-1858.

 This paper gives an excellent overview of the knowing-doing gap and demonstrates why it might occur. It includes a focus on the role of the organization and of perceived threat.

 Kahneman, D., 2011. Thinking, Fast and Slow. Penguin, London.

 Daniel Kahneman won the Nobel prize in economics in 2002 and has had a long and distinguished career in cognitive psychology. This book describes the System 1 and 2 classifications outlined in this chapter, along with an overview of much of his other work.

 Schneier, B., 2012. Liars and Outliers: Enabling the Trust that Society Needs to Thrive. John Wiley & Sons Inc., Indianapolis, IN.

 In this very readable book, Bruce Schneier describes how traditional trust mechanisms need to adapt in modern society, with some specific references to technological advances.

References

Bandura A. Influence of models’ reinforcement contingencies on the acquisition of imitative behaviours. J. Pers. Soc. Psychol. 1965;1:589–595.

Campbell J, Greenauer N, Macaluso K, End C. Unrealistic optimism in internet events. Comput. Hum. Behav. 2007;23:1273–1284. doi:10.1016/j.chb.2004.12.005.

Chen R, Wang J, Herath T, Raghav Rao H. An investigation of email processing from a risky decision making perspective. Decis. Support. Syst. 2011;52(1):73–81. doi:10.1016/j.dss.2011.05.005.

Child JT, Haridakis PM, Petronio S. Blogging privacy rule orientations, privacy management and content deletion practices: the variability of online privacy management activity at different stages of social media use. Comput. Hum. Behav. 2012;28:1859–1872.

Cox J. Information systems user security: a structured model of the knowing-doing gap. Comput. Hum. Behav. 2012;28:1849–1858.

Einhorn HJ, Hogarth RM. Confidence in judgement: persistence of the illusion of validity. Psychol. Rev. 1978;85:395–416.

Jiang CL, Bazarova NN, Hancock JT. The disclosure-intimacy link in computer-mediated communication: an attributional extension of the hyperpersonal model. Hum. Commun. Res. 2011;37:58–77.

Jin S.-A.A. To disclose or not to disclose, that is the question: a structural equation modelling approach to communication privacy management in e-health. Comput. Hum. Behav. 2012;28:69–77.

Johnston AC, Warkentin M. Fear appeals and information security behaviours: an empirical study. MIS Q. 2010;34(3):549–566.

Joinson AN, Reips U-D, Buchanan T, Paine-Schofield CB. Privacy, trust and self-disclosure online. Hum. Comput. Interact. 2010;25:1–24.

Kahneman D. Thinking, Fast and Slow. London: Penguin; 2011.

Kisekka V, Bagchi-Sen S, Rao HR. Extent of private information disclosure on online social networks: an exploration of Facebook mobile phone users. Comput. Hum. Behav. 2013;29:2722–2729.

Lee D, Larose R, Rifon N. Keeping our network safe: a model of online protection behaviour. Behav. Inform. Technol. 2008;27:445–454.

Morris RG, Higgins GE. Criminological theory in the digital age: the case of social learning theory and digital piracy. J. Criminal Justice. 2010;38:470–480. doi:10.1016/j.jcrimjus.2010.04.016.

Ng BY, Rahim MA. A socio-behavioral study of home computer users’ intention to practice security. In: The Ninth Pacific Asia Conference on Information Systems, Bangkok, Thailand, 7-10 July; 2005.

Ng BY, Kankanhalli A, Xu YC. Studying users’ computer security behaviour: a health belief perspective. Decis. Support. Syst. 2009;46:815–825.

Payne JW. Information processing theory: some concepts and methods applied to decision research. In: Wallsten TS, ed. Cognitive Processes in Choice and Decision Behaviour. Hillsdale, NJ: Erlbaum; 1980.

Petronio S. Boundaries of Privacy: Dialectics of Disclosure. Albany, NY: SUNY Press; 2002.

Power A, Kirwan G. Privacy and security risks online. In: Attrill A, ed. Cyberpsychology. Oxford: Oxford University Press; 2015:233–248.

Rogers RW. A protection motivation theory of fear appeals and attitude change. J. Psychol. 1975;91:93–114.

Rogers RW. Cognitive and physiological processes in fear appeals and attitude change: a revised theory of protection motivation. In: Cacioppo J, Petty R, eds. Social Psychophysiology. New York: Guilford Press; 1983:153–176.

Rubin Z. Lovers and other strangers: the development of intimacy in encounters and relationships. Am. Sci. 1974;62:182–190.

Rubin Z. Disclosing oneself to a stranger: reciprocity and its limits. J. Exp. Soc. Psychol. 1975;11:233–260.

Schneier B. Liars and Outliers: Enabling the Trust that Society Needs to Thrive. Indianapolis, IN: John Wiley & Sons Inc.; 2012.

Sharot T, Korn CW, Dolan RJ. How unrealistic optimism is maintained in the face of reality. Nat. Neurosci. 2011;14:1475–1479. doi:10.1038/nn.2949.

Skinner BF. The Behaviour of Organisms: An Experimental Analysis. Oxford: Appleton Century; 1938.

Tam L, Glassman M, Vandenwauver M. The psychology of password management: a tradeoff between security and convenience. Behav. Inform. Technol. 2009;29:233–244. doi:10.1080/01449290903121386.

Tversky A, Kahneman D. Availability: a heuristic for judging frequency and probability. Cogn. Psychol. 1973;5(1):207–233.

Tversky A, Kahneman D. Judgement under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–1131.

Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211(4481):453–458.

Walther JB. Computer-mediated communication: impersonal, interpersonal and hyperpersonal interaction. Commun. Res. 1996;23:3–43.

Walther JB. Selective self-presentation in computer-mediated communication: hyperpersonal dimensions of technology, language and cognition. Comput. Hum. Behav. 2007;23:2538–2557.

Weinstein ND. Unrealistic optimism about future life events. J. Pers. Soc. Psychol. 1980;39:806–820.

Workman M, Bommer WH, Straub D. Security lapses and the omission of information security measures: a threat control model and empirical test. Comput. Hum. Behav. 2008;24:2799–2816.
