Ulla Bunz and David Montez

7 Computer-mediated communication competence

Abstract: The chapter reviews the historical development of computer-mediated communication competence research, introducing key figures and terminology. Researchers’ variations in both conceptualization and operationalization are discussed and early theoretical models are presented. The chapter continues by reviewing ten related concepts (applied skill; cognitive aspects; communication ability; demographics; emotional and psychological factors; methodology; social context; structural resources; technological context; usage experience) and recent research trends within these broader categories. After discussing the limited theory development work in the area compared to the abundance of methodologically-oriented contributions, the chapter concludes by making recommendations for future work in the area. These include the need for more consistent terminology, as well as for more theory-based research.

Keywords: computer-mediated communication competence, theory building, literacy, experience, Internet skill, computer anxiety, gender, self-efficacy, social context, psychological factors

1 Historical development and key figures

Narrowly defined to include just this term, the history of computer-mediated communication (CMC) competence begins with the work of Brian Spitzberg. Spitzberg established his expertise in the area of interpersonal communication competence and has published widely on this topic (e.g., Spitzberg and Cupach 1984, 1989). He formulated a theoretical discussion of CMC competence in an unpublished paper in 1997. The model was not published until 2001, when Spitzberg, together with Morreale and Barge, published the textbook Human Communication: Motivation, Knowledge, & Skills and included his CMC competence model in the book. Spitzberg’s CMC competence model (see Figure 1) postulates that CMC competence is a combination of individual factors (knowledge, motivation, skills, attentiveness, composure, coordination, expressiveness), CMC factors (richness, openness), message factors (ambiguity, complexity, emotionality), contextual factors (culture, time, relationship, situation, function), and outcomes (appropriateness, effectiveness, efficiency, understanding, satisfaction). The subtext of the textbook figure states, “The more motivated, knowledgeable, and skilled a communicator is in selecting and using CMC for a given type of message in a given medium, the more likely the communicator is to achieve competent outcomes” (Morreale, Spitzberg, and Barge 2007: 406). Many of the CMC competence elements were drawn from Spitzberg’s more general work on communication competence and have been analyzed, individually and in various combinations, by many other scholars, as is discussed in section 2 below.

Fig. 1: Spitzberg’s Model of Computer-Mediated Communication Competence, adapted from Morreale, Spitzberg, and Barge 2007: 406.

Even before the model was formally published, researchers began using and testing it, having become aware of it and its components via personal interactions with Spitzberg at academic conferences. Harper, then a doctoral student at Howard University, used Spitzberg’s original model and scale with his subject pool of African American college students. Harper (2000) found that several components of Spitzberg’s original model did not yield significant results with this sample. Harper removed those components and re-analyzed his data. For example, Harper did not find significant results for Spitzberg’s contextual or message factors (Harper 2000: 166), similarly to Bunz (2003: 79–80), who excluded nearly half of the contextual and the message factors due to low reliability scores. Specifically, Harper suggested deleting a number of elements from the original model. Harper’s (2000: 93) modifications to the CMC model are:

From the variable “CMC competence”, remove motivation, interaction management, altercentrism, and composure, leaving knowledge, skills, expressiveness, efficacy, and general usage.

From the variable “Contextual factors”, remove status ratio, distance, and task ambiguity, leaving relational context, time duress, and media access.

From the variable “Message factors”, remove quantity, leaving equivocality, density, and complexity.

From the variable “Media factors”, remove richness, accessibility, velocity, plasticity, and interactivity, leaving immediacy.

From the variable “Outcomes”, remove no elements, leaving efficiency, productivity, co-orientation, appropriateness, effectiveness, and satisfaction.

Around the same time, Bubas, a Croatian national, successfully applied Spitzberg’s notions to theoretical (2001) and organizational (with Radosevic and Hutinski 2003) contexts. Bunz, who had spent the previous year developing her Computer-Email-Web (CEW) Fluency scale (Bunz 2001), agreed to include several of Spitzberg’s items in her dissertation questionnaire (Bunz 2002). Later, Bunz (2003) used data from three separate studies to reduce the number of items and variables of the CMC competence scale. The adjusted scale contained seven main variables and 41 items, compared to 31 elements in five variables and 105 items in Spitzberg’s original instrument, which was later (Spitzberg 2006) adjusted to 77 items in fifteen variables (motivation, knowledge, efficacy, coordination, attentiveness, expressiveness, composure, selectivity, appropriateness, effectiveness, clarity, satisfaction, attractiveness, efficiency/productivity, and general usage/experience). Bunz’s adjusted CMC competence instrument consists of:

– comfort (containing items from Spitzberg’s motivation and knowledge elements)

– contextual factors

– efficacy

– interaction management (called coordination by Spitzberg)

– media factors

– general usage

– effectiveness
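For readers unfamiliar with how such reliability-driven item reduction proceeds, the sketch below computes Cronbach’s alpha – the internal-consistency statistic behind the “low reliability scores” mentioned above – for a small, hypothetical subscale. The data and the .70 cutoff are illustrative assumptions only; they are not values from Bunz (2003) or Spitzberg (2006).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-point Likert responses for one subscale (rows = respondents).
subscale = [
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
]

print(f"alpha = {cronbach_alpha(subscale):.2f}")
# Subscales whose alpha falls below a chosen cutoff (commonly .70) become
# candidates for revision or removal -- one way an instrument shrinks from
# 105 items to a more reliable core.
```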

That same year Spitzberg first presented his original scale and model to the larger academic community via a presentation at the International Communication Association conference (Spitzberg 2003).

In the meantime, Harper continued to work in the area of computer-mediated communication, focusing on sex differences in technology use (2002). His 2005 study measured the frequency of email use for interpersonal versus organizational purposes and found sex differences. In another study, Harper (2003) used some of the items from Spitzberg’s CMC competence measure, showing that men scored lower on perceptions of four concepts (accessibility, velocity, interactivity, and immediacy).

Both Bubas and Bunz continued their work in the CMC competence area. Bunz used both her CEW fluency scale and portions of the CMC competence scale in a series of studies published over several years (2004, 2005, 2006, 2009, 2012; Bunz, Curry, and Voon 2007; Rice and Bunz 2006). An unpublished paper presented at the National Communication Association conference (Bunz and Lever 2005) included a large-scale review of publications broadly related to CMC competence, a listing of concepts shown in the literature to influence CMC competence (or similar concepts such as digital literacy, fluency, or aptitude), and a listing of measurement instruments in the area.

Bubas and Spitzberg presented a conference paper (2008) that included references to Spitzberg’s (2006) CMC competence publication in the Journal of Computer-Mediated Communication. Bubas and Hutinski published a 2006 study in which they used Spitzberg’s CMC competence model to investigate college students’ motivations for the use of the Internet. Bubas’s more recent research has focused on technology use and applications to education.

More recently, Spitzberg has continued his work on CMC competence as part of his overall study of communication competence. As such, the concept was part of a critical thinking assessment instrument (Spitzberg 2011). The constructs of motivation, knowledge, efficacy, coordination, attentiveness, expressiveness, composure, and adaptability were used. All except motivation and efficacy showed an increase from time one to time two in participants’ self-assessments. As an aggregate, CMC competence measures correlated significantly with several other variables, especially peer evaluations of the original subjects. Sherblom, Withers and Leonard (2013) applied Spitzberg’s (2006) CMC competence measure to collaborative learning in online classrooms and found that CMC knowledge is an important part of CMC competence. In addition, they conclude that instructors can use CMC motivational aspects to help students overcome computer anxiety.

Spitzberg (2014) also included CMC competence in recent work on meme diffusion. He outlined a “multilevel meme diffusion (M3D) model, which seeks to integrate these theories and to stimulate new theory development in the fields of big data and new media” (article abstract), with “these theories” referring to evolutionary theory, information theory, meme theory, frame analysis, general systems theory, social identity theory, communicative competence theory, narrative rationality theory, social network analysis, and diffusion of innovations theory. CMC competence is only one component of this multi-component model, but as such continues Spitzberg’s efforts to integrate theory into CMC competence research. As discussed in sections 3 and 4 below, CMC competence research is not particularly theory-driven, so Spitzberg’s work adds particular value to the field.

Some of this work on CMC competence has been cited occasionally, such as by Litt (2013) in her recent “review of past assessments and a look toward the future” (the article’s subtitle). Litt’s article and the overwhelming majority of past and current work in the area continue to be measurement-driven, with few or no attempts by authors to strengthen a theoretical foundation.

2 Related core concepts and recent trends

Looking beyond the specific term of CMC competence, there is a vast body of literature on related concepts. Terminology used by researchers is not consistent by any means. Consequently, an equally large number of measurement scales and typologies exist. In 2005, Bunz (with Lever) presented an unpublished conference paper in which she reviewed and summarized some of the related work. Specifically, the manuscript provided an overview of influencing concepts as defined and operationalized in previous research, and a list of existing measurement instruments. She identified approximately 350 research articles from a variety of disciplines, predominantly psychology, computer science, education, and communication. Bunz also listed more than 100 measurement instruments that had been created and/or used by the authors of the 350 articles. Since 2005, the number of both related articles and instruments has clearly continued to increase. However, a review of the recent literature shows that the main concepts used to study CMC competence (or its various iterations such as experience, expertise, aptitude, literacy, fluency, etc. combined with computer, Internet, media, technology, etc.) have not changed all that drastically. Sections 2.1–2.10 present the identified concepts, grouped alphabetically within ten clusters, along with some of the sources identified for each concept:

1. applied skill,

2. cognitive aspects,

3. communication ability,

4. demographics,

5. emotional and psychological factors,

6. methodology,

7. social context,

8. structural resources,

9. technological context, and

10. usage experience.

The sources identified in sections 2.1–2.10 are by no means a comprehensive list of all literature in the area, but they do provide a broad array of scholarship stretching across multiple decades and countries.

The following sections will review each of the ten conceptual clusters. Each section provides both older and more recent citations for each area and, where appropriate, discusses findings from the literature.

2.1 Applied skill

The study of applied technology skill is one of the longer-lasting areas of interest. Since at least the 1980s, researchers have inquired into how well (or not) people can actually use computers. Traditionally, “skill” is defined as being able to operate the computer (e.g., Lanier and White 1998), including skills such as programming, keyboarding, or, more recently, web searching (e.g., Hargittai 2002a). As computers used to be less user-friendly than they are today, a certain level of “technological know-how” or computer literacy was required of the user to even be able to operate the technology (e.g., Kay 1989a; 1993b). As mice, graphical user interfaces, and drag-and-drop were invented, using the computer or related technologies such as the web became more intuitive and is now almost taken for granted in developed countries. Nonetheless, applied skills are still fundamental, as they are the prerequisite for anything else that can be done with technology, such as interaction, information retrieval, or even entertainment.

In more recent years, research has focused on applied skills with other technologies, such as the Internet (e.g., Van Deursen 2012), online skill (defined by the authors as one’s ability to access and use a variety of web-based media as both consumer and producer of content; Livingstone and Helsper 2010), or the uploading and sharing of digital material (e.g., Leung and Lee 2012). It should be noted that different researchers have used a variety of terms with much overlap between definitions. Table 1 provides a list of some of these terms and the researchers who have used them. Most of these researchers have created and used their own measurement scales, which makes meta-analysis or other statistical comparison between their results difficult. However, seen as a group and discounting slight variation in terminology and definition, these researchers’ work shows some commonalities.

One of these commonalities is obvious – researchers strongly believe that applied skill is a variable in itself that deserves measurement, and that it is likely to influence other variables. Over the years, this base assumption has not changed even as technology has changed, and it is one of the reasons why researchers continue to develop new scales. For example, Kay’s (1993b) work on people’s ability to program has been outpaced by WYSIWYG (“what you see is what you get”) editing systems, where clicking icons replaces the need for programming knowledge. Similarly, “experience”, defined by Miller, Stanney, and Wooten in 1997 as being able to operate Windows 3.1 software and complete tasks in it, evolved over time to encompass “sharing materials”, defined by Leung and Lee (2012) as being able to publish digital products one has created on the computer to the Internet.

More specific results reflecting how applied skills do (or do not) affect other variables vary. Leung and Lee (2012), for example, found that applied skills did not predict measures of academic performance. Applied skills did, however, increase the likelihood that a person becomes addicted to the Internet, including gaming, according to the authors. Facer, Sutherland, Furlong, and Furlong (2001), who examined computer usage and experience in the home, found that their participants used the computer for predominantly practical and/or social reasons and not for the educational benefits touted by educational policies. These results led the authors to call for a more careful examination of skills within context, rather than just by themselves. Thus, both the social and the technological context of use are related to applied skill, as other research reiterates (e.g., Haythornthwaite 2007).

A third conclusion from examining work on applied skills points to the close connection between applied skills and emotional and/or psychological factors. Overall, there is strong evidence that one’s feelings toward and about technology are highly correlated with one’s applied skill level in using these and other technologies. For example, Schulenberg and Melton (2008) showed that computer aversion (equated to computer anxiety by the authors) correlates negatively with computer understanding and experience. On the other hand, computer confidence correlates positively with computer experience (see King, Bond, and Blandford 2002, who also provide a detailed review of anxiety-related concepts as used in a number of other research studies). The connection between applied skills and emotional and/or psychological factors is also part of the discussion in section 2.5.

Tab. 1: Applied skills terminology in the literature.

Computer efficacy – Bunz 2002; Bunz 2003; Bunz 2005
Computer fluency – Bunz 2002; Bunz 2004
Computer operational skills – Lanier and White 1998
Computer proficiency – Bradlow, Hoch, and Hutchinson 2002
Computer skills – Dickerson and Green 2004; Garland and Noyes 2004
Creating materials – Correa 2010; Eshet-Alkalai 2004; Hargittai and Walejko 2008
Email fluency – Bunz 2002; Bunz 2004
Generic skills – Dickerson and Green 2004
Information seeking skill – Facer et al. 2001; Lanier and White 1998; Van Deursen, Van Dijk, and Peters 2012
Internet skill – Gui and Argentin 2011; Hargittai 2010; Litt 2013; Van Deursen 2012; Van Deursen and Van Dijk 2010; Van Deursen, Van Dijk, and Peters 2012; Zimic 2009
Keyboarding skill – Hemby 1999
Online fluency – Haythornthwaite 2007
Online skill – Hargittai 2002b; Hargittai and Shafer 2006; Hargittai and Hinnant 2008; Livingstone and Helsper 2010
Operational Internet skill – Gui and Argentin 2011; Van Deursen, Van Dijk, and Peters 2012
Programming – National Assessment of Educational Progress 1986
Programming knowledge – Kay 1993b
Sharing materials – Hargittai and Walejko 2008; Leung and Lee 2012
Skills – Correa 2010; Hoffman and Blake 2003; Levinson 1986; Lin 2000; Morreale, Spitzberg, and Barge 2007; Shih 2006; Spitzberg 2006; Woodrow 1992
Software knowledge – Kay 1993b
Technical know-how – Dickerson and Green 2004
Web fluency – Bunz 2002; Bunz 2004
Web-editing fluency – Bunz 2002; Bunz 2004
Web-navigation fluency – Bunz 2002; Bunz 2004
Web use skill – Hargittai 2002a
Windows computer experience – Miller, Stanney, and Wooten 1997

2.2 Cognitive aspects

Cognitive abilities are often gained through education, but a diploma or degree is not required for application of intellectual abilities (e.g., Ramalingam and Wiedenbeck 1998). Accordingly, the “cognitive” category includes abilities usually acquired through schooling, such as mathematical or statistical knowledge (e.g., Miller, Stanney, and Wooten 1997; Ramalingam and Wiedenbeck 1998), or overall academic performance (Gordon et al. 2003). However, cognitive aspects extend beyond formal education to skills such as one’s critical thinking (e.g., Hemby 1999; Yang et al. 2013) or problem solving abilities (e.g., Lowther, Bassoppo-Moyo, and Morrison 1998), one’s cognitive or learning style (e.g., Hemby 1999), mental models (e.g., Nückles and Sturz 2006), or perceptual processes (e.g., Marquie, Jourdan-Bouddaert, and Huet 2002). A number of researchers have investigated knowledge (e.g., Bunz 2002; Gui and Argentin 2011; Hoffman and Blake 2003; Igbaria and Chakrabarti 1990; Lin 2000; Massoud 1991; Morreale, Spitzberg, and Barge 2007; Nückles and Sturz 2006; Potosky 2007; Richter, Naumann, and Horz 2010; Shih 2006; Spitzberg 2006; Woodrow 1992) or computer literacy (e.g., Appel 2012; Chang 2008; Cheng, Plake, and Stevens 1985; Ellsworth and Bowman 1982; Hoffman and Blake 2003; Kay 1990; King, Bond, and Blandford 2002; Lin 2000; Pask and Saunders 2004; Stanley 2003; Unlusoy et al. 2010).

For the most part, cognitive aspects researched in relation to CMC competence do not depend on technology at all. While one cannot readily study the applied skill of “programming” without relating it to the computer, learning style or cognitive ability tend to be defined as independent variables on which other variables, in this case technology-related skills or abilities, depend. Both early and recent research has taken such an approach, and the results have been fairly consistent. For example, as early as 1986, Levinson studied “aptitude” and showed clear differences between computer users with more prior experience and those with less, and that these differences were not necessarily due to some inherent difference in aptitude. Similarly, Torkzadeh and Koufteros (1994) concluded that it is training that improves people’s level of accomplishment, and not necessarily their internal factors. Marquie, Jourdan-Bouddaert, and Huet’s (2002) research on older and younger users showed that the “underconfidence” older people experienced in their computer abilities played an important part in their actual computer knowledge. “Underconfidence” is a perceptual, psychological aspect, not an objective measure of lower cognitive ability. Later studies investigating ability confirmed such results, finding that older users of technology, often stereotyped as less “smart” about technology than younger users, are just as capable of acquiring skills as long as time is not restricted during the learning process (Broady, Chan, and Caputi 2010).

In terms of literacy-type investigations, in addition to the computer literacy literature (see above), researchers have also investigated digital literacy (e.g., Eshet-Alkalai 2004; Eshet-Alkalai and Amichai-Hamburger 2004; Hargittai 2005; Hargittai 2009; Gui and Argentin 2011), information literacy (e.g., Bruce 1999; Eshet-Alkalai 2004; Pask and Saunders 2004), Internet literacy (e.g., Gui and Argentin 2011; Leung and Lee 2012), media literacy (e.g., Chang and Liu 2011; Chang et al. 2011; Livingstone 2004), and even photo-visual literacy (Eshet-Alkalai 2004), plus related concepts such as information retrieval (Hargittai 2002b), performance (e.g., Igbaria and Chakrabarti 1990; Leung and Lee 2012; Torkzadeh and Koufteros 1994), and strategic Internet skill (Van Deursen, Van Dijk, and Peters 2012). Spitzberg and colleagues (2006; Morreale, Spitzberg, and Barge 2007) included “understanding” as a measure of cognitive ability.

Eshet-Alkalai and Amichai-Hamburger (2004) used the concept of “cognitive presence” (as opposed to “social” or “emotional” presence) in their measure of different types of literacy. Their results showed that for complex tasks, older participants actually outperformed younger participants due to enhanced cognitive skill development. So, while younger users had higher “photo-visual literacy”, older users had the highest “information literacy”. Hargittai’s (2010) research further showed that levels of parental education affect young adults’ Internet literacy, adding evidence that cognitive ability is important in studying CMC competence, but extends beyond schooling into environmental aspects.

While these results may not be representative of all literature in the area, they do show a general trend that the higher one’s cognitive “score” or ability, the better one will be at the technology-related variable under investigation. Such cognitive ability can be acquired via training and over time through both formal and informal processes.

2.3 Communication ability

Communicative ability in the technology or online environment is usually investigated as a dependent variable. Even if applied skills and cognitive aspects are present, human communication via technology can still fail or be ineffective due to human (as opposed to technological) “malfunctioning”. For example, some people suffer from communication apprehension (e.g., Lustig and Andersen 1990; Scott and Rockwell 1997; Scott and Timmerman 2004; Susskind 2004), which can greatly influence their interactions with others. A person’s skills at verbal persuasion (e.g., Rajendran, Mitchell, and Rickards 2005; Torkzadeh and Koufteros 1994), networking, interpersonal relations (e.g., Durkin, Conti-Ramsden, and Walker 2010; Koutamanis et al. 2013; Lustig and Andersen 1990; Riordan and Kreuz 2010), or exchange of information (e.g., Sassenberg, Boos, and Klappenroth 2001) are only enabled or facilitated by technology.

Communication ability or competence, often of the written variety in technology contexts (e.g. Scott and Timmerman 2004; Volckaert-Legrier, Bernicot, and Bert-Erboul 2009), is still a separate factor which is sometimes studied without taking into consideration the technological mediation per se. There are few early studies that actually investigated the direct relationship between communication competence and applied technological skill (e.g., Kay 1990), though it can be hypothesized that these are directly related in a technologically mediated environment. In recent years, several studies have shown that communication ability within a computer-mediated environment can be beneficial for people with a variety of learning or interaction disabilities (e.g., Durkin, Conti-Ramsden, and Walker 2010; Glenwright and Agbayewa 2012; Rajendran, Mitchell, and Rickards 2005). Research continues to investigate the absence of nonverbal cues in computer-mediated contexts and how people compensate for this absence (e.g., Hancock, Landrigan, and Silver 2007; Hatem, Kwan, and Miles 2012; Riordan and Kreuz 2010; Svensson and Westelius 2013).

Generally speaking, research on computer-mediated communication examines the interaction between people via a technology, and research has shown that even those without communication apprehension or disabilities at times prefer CMC over face-to-face interactions (e.g., Casale, Tella, and Fioravanti 2013). Among other benefits, CMC allows people to manage impressions of themselves in more desirable ways (e.g., Walther, Deandrea, and Tong 2010). Research on the effectiveness of CMC shows positive, encouraging results in cue-rich environments such as Second Life (Tan, Sutanto, and Phang 2012), indicating that task completion or communication effectiveness can be just as good via CMC as via face-to-face interaction (Tan, Tan, and Teo 2012). Some research has even shown that CMC can be beneficial for foreign language learning (e.g., Rama et al. 2012; Sahin 2009; Yang et al. 2013).

2.4 Demographics

The study of demographic variables is common in the social sciences. Age (e.g. Baloğlu and Çevik 2008; Baloğlu and Çevik 2009; Broady, Chan, and Caputi 2010; Correa 2010; Dyck and Smither 1994; Elias, Smith, and Barney 2012; Gordon et al. 2003; Hargittai and Hinnant 2008; King, Bond, and Blandford 2002; Kubiatko 2013; Levinson 1986; Meelissen and Drent 2008; Morris 1994; Pope-Davis and Twing 1991; Rosen and Weil 1995; Unlusoy et al. 2010; Wu and Tsai 2007) and age-related variables such as childhood (e.g., Holloway and Valentine 2001; Livingstone and Helsper 2007; Tripp 2011), adolescence (e.g., Calvani et al. 2012; Leung and Lee 2012; Livingstone and Helsper 2010; Odendaal et al. 2006; Unlusoy et al. 2010; Tripp 2011), youth (e.g., Facer et al. 2001; Livingstone and Helsper 2007), and aging (Jacko et al. 2004) have been examined in most social science work related to CMC competence.

In addition, the two variables of gender/sex (e.g., Colley and Comber 2003; Colley, Gale, and Harris 1994; Correa 2010; Corston and Colman 1996; DeYoung and Spence 2004; Dong and Zhang 2011; Durndell and Haag 2002; Facer et al. 2001; Gordon et al. 2003; Gui and Argentin 2011; Hargittai and Shafer 2006; Hargittai and Walejko 2008; Hemby 1999; Losh 2003; Lustig and Andersen 1990; Mitra et al. 2000; Okebukola and Woda 1993; Pope-Davis and Twing 1991; Ray, Sormunen, and Harris 1999; Rosen and Weil 1995; Schottenbauer et al. 2004; Shashaani 1994; Todman and Day 2006; Unlusoy et al. 2010; Vekiri and Chronaki 2008; Williams et al. 1993; Woodrow 1992) and socio-economic status (e.g., Gui and Argentin 2011; Hargittai and Hinnant 2008; Hargittai and Walejko 2008; Leung and Lee 2012; Okebukola and Woda 1993; Rosen and Weil 1995; Stanley 2003) are popular demographic correlates to CMC competence variables. Maybe surprisingly, comparatively little research exists on CMC competence and ethnicity (e.g., Correa 2010; Rosen and Weil 1995; Tripp 2011), though CMC competence itself has been examined within many different cultures.

Digital divide literature generally claims a direct connection between these demographics and CMC competence. However, this linear relationship is debatable. Digital divide initiatives aimed at providing more access to technology in lower-income schools soon found that the issue is much more complex than whether one can afford a computer (e.g., Correa 2010; Eastin and LaRose 2000; Hargittai 2002b; Hargittai and Hinnant 2008; Livingstone and Helsper 2007; Stanley 2003; Van Deursen and Van Dijk 2010). Later digital divide studies found that demographic variables are influenced by social factors, such as peer influence, and by emotional or psychological factors, such as believing that the technology will be of use in one’s life (also see section 2.5).

A more recent trend within the study of demographic variables is the focus on the Internet competence of so-called Digital Natives (e.g., Bennett, Maton, and Kervin 2008; Hargittai 2010; Helsper and Eynon 2010; Gui and Argentin 2011; Jones et al. 2010) or the Net generation (e.g., Hargittai 2010; Jones et al. 2010; Zimic 2009), including comparisons to their parents. Overall, studies examining intergenerational differences have been inconclusive regarding a relationship between age and Internet skills (Bennett, Maton, and Kervin 2008; Bullen, Morgan, and Qayyum 2011; Helsper and Eynon 2010). Similarly, studies examining technological use and competence, skill, or aptitude among Digital Natives as an age group found considerable variation among those born in the past 30 years (e.g., Hargittai 2010), not unlike the high- versus low-skill divide that other research has found amongst the elderly (Bunz 2012) or the capital-enhancing “second-level digital divide” Hargittai and Hinnant (2008) found among younger users.

Research on CMC competence and gender/sex also shows varying and even conflicting results. Some studies (e.g., Bunz 2009; Bunz, Curry, and Voon 2007; Correa 2010; Hargittai and Shafer 2006) showed that women have higher computer anxiety but skill comparable to men, while other work showed that women outperform men on certain technology tasks (e.g., Dong and Zhang 2011; Unlusoy et al. 2010). A third group of studies showed that men outperform women in technology use (e.g., Mitra et al. 2000). Cultural aspects may play a role here, as non-US research seems more likely to show women outperforming men, at least in the literature examined for this chapter.

Thus, while demographics may be the cause of many competence-related problems and the digital divide at large, the literature is unanimous neither on whether a direct cause–effect relationship exists, nor on the direction of existing relationships.

2.5 Emotional and psychological factors

A large number of concepts have been used in previous research investigating emotional and psychological factors. These factors can be grouped loosely into three categories – negative factors, positive factors, and factors that are less value-laden or can swing to both the positive and the negative side. Each category will be reviewed here briefly.

To begin with, there is a large body of work on emotional and psychological factors that can have negative effects on related concepts such as CMC competence. As has been the case with other variables, researchers use many different terms to describe similar notions. Fear, phobia, anxiety, or apprehension (e.g., Baloğlu and Çevik 2008; Baloğlu and Çevik 2009; Barbeite and Weiss 2004; Caroll and Kendall 2002; Celik and Yesilyurt 2013; Ceyhan 2006; Ceyhan and Gürcan Namlu 2000; Charlton 2005; Corston and Colman 1996; DeYoung and Spence 2004; Durndell and Haag 2002; Dyck and Smither 1994; Erdogan 2009; Farina et al. 1991; Gaudron and Vignoli 2002; Gordon et al. 2003; Igbaria and Chakrabarti 1990; Igbaria and Parasuraman 1989; King, Bond, and Blandford 2002; Mitra et al. 2000; Morris 1994; Okebukola and Woda 1993; Okebukola, Sumampouw, and Jegede 1992; Pope-Davis and Twing 1991; Richter, Naumann, and Horz 2010; Schottenbauer et al. 2004; Schulenberg and Melton 2008; Scott and Rockwell 1997; Scott and Timmerman 2004; Susskind 2004; Todman and Day 2006; Todman and Drysdale 2004; Todman and Monaghan 1994; Tripp 2011; Venkatesh et al. 2003; Wilfong 2006) can hinder the development of applied skill, as was discussed in section 2.1. Such anxiety can even affect how people react to the mere possibility of future problems, such as anticipated Y2K (“year 2000”) problems (Schottenbauer et al. 2004). Negative correlates such as computer anxiety can be overcome, however. According to Sherblom, Withers, and Leonard (2013), by increasing CMC motivation, teachers can help students counteract anxiety and support their development of CMC competence.

In her comprehensive review of computer anxiety research, Powell (2013) documented how changes in technology and increased access to computers have impacted how anxiety is researched. Powell showed that the scales being used in recent scholarship are a combination of new (e.g. Ceyhan and Gürcan Namlu 2000; Venkatesh et al. 2003) and old (Heinssen, Glass, and Knight 1987; Marcoulides 1989) with some of the more dated scales (Heinssen, Glass, and Knight 1987) being used much more frequently than newer scales.

As mentioned in section 2.4, anxiety and demographics are also often related. Durndell and Haag (2002), for example, found that Romanian females felt higher computer anxiety than Romanian males, leading the authors to conclude that certain Eastern European countries may be starting to show gender differences similar to those in Western European countries (in 2002). Similar results had already been shown by Okebukola and Woda (1993) with Australian high school students almost a decade earlier. As recently as 2009, Baloğlu and Çevik studied Turkish high school students and found that boys and girls differed only on certain sub-concepts of anxiety. Whether this is a result of the passing of time or of differences in culture would make for an interesting cross-cultural study.

Psychological barriers in general (e.g., Correa 2010; Stanley 2003) as well as specific psychological factors such as discomfort with, aversion to, or resistance to technology (e.g., Parasuraman 2000; Schulenberg and Melton 2008; Susskind 2004) will certainly hinder the development of CMC competence. On the other hand, even those with high CMC competence may experience negative emotional or psychological effects, such as addiction (e.g., Leung and Lee 2012; Scott and Timmerman 2004), or alienation from their environments (e.g., Morris 1994).

The second group of emotional and psychological correlates to CMC competence consists of factors that can have positive effects. Foremost here are variables related to positive attitudes such as acceptance (e.g., Davis 1989; Davis, Bagozzi, and Warshaw 1989; Dong and Zhang 2011; Lee, Hsieh, and Chen 2013; Peng et al. 2012; Venkatesh et al. 2003), comfort (e.g., Bunz 2002; Bunz 2003; Bunz 2005; Miller, Stanney and Wooten 1997), confidence (e.g., Cassidy and Eachus 2006; Colley, Gale, and Harris 1994; Kay 1993a; King, Bond, and Blandford 2002; Marquie, Jourdan-Bouddaert, and Huet 2002; Mitra et al. 2000; Pope-Davis and Twing 1991; Ross 1996; Szajna 1994; Wild 1996), liking or interest (e.g., Colley, Gale, and Harris 1994; King, Bond, and Blandford 2002; Mitra et al. 2000; Okebukola, Sumampouw, and Jegede 1992; Pope-Davis and Twing 1991; Szajna 1994), or the perception that one indeed has high levels of CMC competence or one of its sub-constructs (e.g., Beas and Salanova 2006; Bunz, Curry, and Voon 2007; Correa 2010; Hargittai and Shafer 2006; Kay 1993a; Kay 1993b; Lustig and Andersen 1990; Marquie, Jourdan-Bouddaert, and Huet 2002; Richter, Naumann, and Groeben 2000; Schulenberg and Melton 2008; Torkzadeh and Van Dyke 2002; Wu and Tsai 2007), especially self-efficacy (e.g., Barbeite and Weiss 2004; Beas and Salanova 2006; Charlton 2005; Cassidy and Eachus 2006; Davis 1989; Durndell and Haag 2002; Eastin and LaRose 2000; Hasan 2003; Hemby 1999; Livingstone and Helsper 2010; Ramalingam and Wiedenbeck 1998; Shih 2006; Torkzadeh and Koufteros 1994; Torkzadeh and Van Dyke 2002; Vekiri and Chronaki 2008; Wilfong 2006).

Within educational contexts, Vekiri and Chronaki (2008) showed that social support is a positive correlate to using technology and self-efficacy among Greek boys and girls. Liaw and Huang (2013) showed that interactive learning environments relate to a number of positive outcomes, such as self-regulation, perceived usefulness, and perceived self-efficacy.

As Parasuraman (2000: 311) showed, both positive and negative personality traits including optimism, innovativeness, discomfort, and insecurity influence whether a person has “readiness” to use technology. Likewise, a number of emotional and psychological variables can be studied that in themselves may not have a positive or negative value, but instead can vary by person or circumstance. Consequently, a large number of studies have investigated attitudes from a more value-free perspective (e.g. Beas and Salanova 2006; Broady, Chan, and Caputi 2010; Colley and Comber 2003; Corston and Colman 1996; Ellsworth and Bowman 1982; Garland and Noyes 2004; Igbaria and Chakrabarti 1990; Kay 1989b; Kubiatko 2013; Lee 1986; Leng 2011; Massoud 1991; Meelissen and Drent 2008; Palaigeorgiou et al. 2005; Richter, Naumann, and Horz 2010; Schottenbauer et al. 2004; Torkzadeh and Van Dyke 2002). Other studies have investigated motivation (e.g., Bubas 2003; Bunz 2002; Coovert and Goldstein 1980; Correa 2010; Hemby 1999; Kay 1993a; Morreale, Spitzberg, and Barge 2007; Ramalingam and Wiedenbeck 1998; Smarkola 2008; Spitzberg 2006; Tripp 2011), people’s values (e.g., Ray, Sormunen, and Harris 1999; Vekiri and Chronaki 2008), and personality aspects (e.g., Bakke 2010; Ceyhan 2006; DeYoung and Spence 2004; Hemby 1999; Stanley 2003; Todman and Day 2006; Weil, Rosen, and Wugalter 1990; Wu and Tsai 2007).

As discussed in section 2.8 below, lack of motivation can prevent CMC competence development, even when structural resources are present (e.g., Smarkola 2008). On the other hand, increased information commitment predicts the use of more sophisticated search engine strategies (Wu and Tsai 2007). These results show that certain emotional and psychological variables can lean in both positive and negative directions. Similarly, Ray, Sormunen, and Harris (1999) showed that a positive attitude towards computers in the workplace is related to feeling greater comfort with technology use, especially for women. In their Dutch sample, Meelissen and Drent (2008) showed that fifth grade girls’ attitudes towards technology are affected positively by the computer experience of their female teachers. However, some results also showed that people with negative or low attitudes towards computers do not benefit from training as much as those with positive or high attitudes (Torkzadeh and Van Dyke 2002).

Overall, the concepts combined into this main conceptual cluster are subjectively experienced and tend to be emotional or subconscious rather than rational. Rather than describing an external social context (e.g., being at work versus being at home) or technological context (e.g., gaming versus blogging), these concepts describe intrapersonal processes that are difficult to measure. The difficulty stems from their intrapersonal nature: reflecting on one’s state of mind or emotion can alter that state. Response bias may result from the perceived stigma of emotional or psychological responses, or from a person’s wish to manage impressions. In addition, many emotional and psychological reactions are subconscious, meaning an individual may not even be fully aware of them or their extent. Nonetheless, these concepts strongly influence one’s usage experience and competence as an outcome of a technology-use process.

2.6 Methodology

A number of studies were identified that investigated methodology itself and its impact on the results of a CMC competence-type study (e.g. Bradlow, Hoch and Hutchinson 2002; Litt 2013). The evidence implies that definitions and ways of measurement can change the outcome and meaning of results. The concepts reviewed in the other sections (e.g., sections 2.9 and 2.10) are a good example. For instance, a number of studies have employed the terms computer- or Internet experience. Operationalization, however, varies. To some, “experience” is simply defined by the number of years a person has been exposed to a certain technology. To others, “experience” is defined by the number of different tasks one has completed with the same technology. The operationalization is quite different and yet the same term is used, which can result in misleading conclusions or chains of evidence. Certainly, the field of computer-mediated communication – misnamed itself, as it includes the study of technologies other than the desktop computer – needs to strive for a more universal understanding of key concepts in the development of theoretical models.
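To make the stakes of operationalization concrete, the following minimal sketch scores the same two respondents under the two definitions of “experience” just described. The respondents and numbers are hypothetical and are not drawn from any study cited in this chapter.

```python
# Hypothetical respondents: (id, years since first computer use,
# number of distinct tasks the respondent can complete).
respondents = [
    ("A", 15, 3),   # long-time but narrow user
    ("B", 4, 12),   # recent but versatile user
]

def experience_as_years(years, tasks):
    """Operationalization 1: experience = duration of exposure."""
    return years

def experience_as_tasks(years, tasks):
    """Operationalization 2: experience = breadth of completed tasks."""
    return tasks

for rid, years, tasks in respondents:
    print(rid, experience_as_years(years, tasks), experience_as_tasks(years, tasks))
# Respondent A is the "more experienced" user under the first definition,
# respondent B under the second -- the same term supports opposite conclusions.
```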

Regarding methodology itself, a slight emphasis on studying user self-reported behavior can be observed in the last ten years. For example, Chang and colleagues focused on the media literacy of Taiwanese elementary school children using self-assessment methods (e.g., Chang 2008; Chang et al. 2011). While one may question self-assessment as a method with young children, the authors showed that their resulting scale has both reliability and validity and that multi-dimensional approaches are suitable for detecting the interplay between students’ use of computer technology and their levels of competence. The importance of multi-faceted approaches was underlined also by Unlusoy and colleagues’ (2010) work with young teenagers in the Netherlands. Using a faceted approach allowed the authors to pinpoint gender differences more clearly, with girls showing higher competence than boys. Another example of methodology-focused research in this area is Page and Uncles’ (2004) study in which they demonstrated the advantages of including both qualitative and quantitative data in the development of user measurement scales.

Even before these more recent examples of studies investigating methodology, other researchers occasionally examined the influence of methodology itself on competence measurement outcomes. Early on, Davis (1989) pointed to the need for validation of scales rather than just their development. Bradlow, Hoch, and Hutchinson (2002) emphasized the use of parametric test scoring methods and statistical calibration, and Kay (1993a) examined 15 scales published over a ten-year timespan to inform the development of a comprehensive instrument, making use of item categorization. Categorizing items is in itself not unusual and drives much of the work in scale development. However, as Bubas’ (2003) work and some of the other research reviewed in section 1 showed, re-categorizing items or leaving out items changes a researcher’s results. Section 3.2 below returns to the topic of methodology and provides references to authors who published comprehensive reviews of the types of methodology used in the area of competence research.

2.7 Social context

As Spitzberg (2006) highlighted via his “situation” variable, Bunz (2002, 2003, 2005) via inclusion of contextual factors, and others via their research on environment (e.g., Igbaria and Chakrabarti 1990; Lowther, Bassoppo-Moyo, and Morrison 1998; Ray, Sormunen, and Harris 1999), social context is important. Yet, a surprisingly small number of studies were identified that focused on the social environment in which CMC competence is required or acquired, and almost all of this research seems to have been conducted after the year 2000. The work environment (e.g., Bruce 1999; Broady, Chan, and Caputi 2010; Ceyhan 2006; Elias, Smith, and Barney 2012; Felstead, Gallie, and Green 2003; Lee, Hsieh, and Chen 2013; Morris 1994; Scott and Timmerman 2004) was identified as a positive influence on competence. Other studies investigated gaming (e.g., Appel 2012; Leung and Lee 2012; Rama et al. 2012), the health context (e.g., Van Deursen 2012), and the family (e.g. Hargittai 2010; Lee 2013; Odendaal et al. 2006; Tripp 2011).

One specific social context that has received noticeably more attention from researchers than others is the educational environment. While some researchers are likely making use of college student samples out of convenience or because of the direct link between college students’ need for using technology as part of their college experience and their resulting CMC competence (e.g., Broady, Chan, and Caputi 2010; Cassidy and Eachus 2006; Jones et al. 2010; Palaigeorgiou et al. 2005; Richter, Naumann, and Groeben 2000; Richter, Naumann, and Horz 2010), many are studying high school students (e.g., Gui and Argentin 2011; Meelissen and Drent 2008) both in- and outside of the school context (e.g., Holloway and Valentine 2001; Vekiri and Chronaki 2008), as well as elementary school pupils (e.g., Chang and Liu 2011; Chang et al. 2011; Lanier and White 1998; Tondeur, Valcke, and van Braak 2008) and middle school children (e.g., Tripp 2011). Limited research also exists on teachers’ use of or training with technology (e.g., Ceyhan 2006; Leng 2011; Smarkola 2008; Tezci 2011).

Interestingly, many of the more recent studies were conducted outside the United States in countries including China, Taiwan, Singapore, the Netherlands, and Greece. Of course, research on computer access in schools and long-term digital divide effects continues in the United States as well (e.g., Judge, Puckett, and Bell 2006; Wood and Howley 2012). In fact, the interplay between culture and competence has been researched repeatedly (e.g., Marcoulides and Wang 1990; Stanley 2003; Morreale, Spitzberg, and Barge 2007; Spitzberg 2006; Tondeur, Valcke, and van Braak 2008; Tripp 2011). Tondeur, Valcke, and van Braak’s (2008) research in Flanders (a Flemish-speaking region within Belgium as opposed to the French-speaking Wallonia region), for example, showed that cultural characteristics are associated with teachers’ computer use in school. Comparing the United States and China, Marcoulides and Wang (1990) found that variables such as computer anxiety cut across culture, influencing users in both countries similarly.

2.8 Structural resources

Most often examined in the context of the digital divide, the presence or absence of computer technology resources primarily drives issues of access and exposure. As the digital divide literature points out, without proper resources and support (e.g., Lanier and White 1998), a person’s potential for technological aptitude or CMC competence is thwarted at the source. Access (e.g. Correa 2010) is often confused with or likened to experience with technology. It is certainly correct that without availability of technology, it is rather difficult to use it frequently. However, it is also possible that opportunity exists, such as in the form of technology centers (e.g., Stanley 2003), but people are not making use of these resources, partly for emotional and psychological reasons such as lack of motivation, or technology apprehension (also see section 2.5 above).

Other authors who have examined structural resources as a factor in computer-mediated competence include, for example, Wild (1996). He showed that the opportunity to use computers, such as in a school environment, is in itself not sufficient to motivate usage. Wild’s research supports earlier, similar conclusions by Rosen and Weil (1995) and by Marcoulides and Wang (1990), who studied college-aged users. More recent research by Tripp (2011) showed that in some cases access to computer technology may exist, but interpretation of the appropriate use of this resource varies between users. In her work on Hispanic families, she found conflicting perceptions of technology, with younger users using computer technology for entertainment while older family members defined computers as more serious, educational tools. A child’s development of CMC competence may thus be hindered by rules established in the home (also see Facer et al. 2001; Lee 2013).

Overall, the body of work shows that structural resources are a fundamental factor in competence development (especially the absence of resources), but in themselves not usually linked to competence in a linear fashion. Instead, the absence or presence of structural resources is usually intertwined with some of the other conceptual clusters reviewed in earlier sections.

2.9 Technological context

Different studies have investigated varying technologies. For example, it stands to reason that a computer gaming environment (e.g., Facer et al. 2001) is different from information searching (e.g., Wu and Tsai 2007), and that a person’s high aptitude in one does not necessarily translate to the other. The focus on technology skills is really a focus on technological contexts, rather than social contexts. While it seems useful to investigate varying technological contexts including ICT (e.g., Kubiatko 2013; Tezci 2011), such an approach is really technologically deterministic. A model or theory of CMC competence ought to apply to any or all technologies, and not be dependent on technology-specific criteria. At the time of writing, social media platforms (e.g. Wohn et al. 2013) such as Twitter have already lost their “newness” factor, being replaced by the more visually-oriented Vine. Developing new competence scales or measures for each new technology seems less useful than focusing on underlying similarities, as Spitzberg (2006) did in his “CMC factors” of richness and openness (see Figure 1). For researchers interested in CMC competence within a specific technological context, Table 2 can present a starting point for literature research.

Tab. 2: CMC competence within specific technological contexts.

Asynchronous communication – Bakke 2010; Nückles and Sturz 2006; Riordan and Kreuz 2010
Chat room – Tan, Sutanto, and Phang 2012
Communication technology – Freedman 2002
Computer games – Facer et al. 2001; Rama et al. 2012
Computer science – National Assessment of Educational Progress 1986
E-learning – Lee, Hsieh, and Chen 2013; Liaw and Huang 2013
Function – Morreale, Spitzberg, and Barge 2007; Spitzberg 2006
ICT – Holloway and Valentine 2001; Kubiatko 2013; Leng 2011; Odendaal et al. 2006; Tezci 2011
Instant messaging – Koutamanis et al. 2013
Internet safety – Lanier and White 1998
Medium factors – Bunz 2002; Bunz 2003; Bunz 2005; Morreale, Spitzberg, and Barge 2007; Spitzberg 2006
Mobile devices – Bakke 2010
New technologies – Facer et al. 2001
Second Life – Tan, Sutanto, and Phang 2012; Tan, Tan, and Teo 2012
Social media – Appel 2012; Wohn et al. 2013
Social networking sites – Hargittai 2002a; Knobel and Lankshear 2008; Leung and Lee 2012
Type of website – Hargittai 2002a
Web searchers – Hargittai 2002b; Leung and Lee 2012
Web users – Susskind 2004

2.10 Usage experience

Often combined or confused with factors of access, the category of usage experience consists of variables that are person-centered. Frequency of technology use (e.g., Colley and Comber 2003; Shashaani 1994), different tasks performed, or types of technology used all build an individual’s personal experience level with technology over time (e.g., Dyck and Smither 1994; Potosky and Bobko 1998). Other factors influence the speed with which this personal experience level is built. Access is arguably one of those, but psychological factors such as motivation or fear of technology are just as important. Simply equating years of experience with actual experience level is inaccurate, as it does not account for personal variables such as openness to change or the diversity of options tried. Overall, both the conceptualization and the operationalization of usage experience in the literature still vary considerably, depending on each researcher’s individual interpretation.

Some of the more frequently used terms in this category include competence (e.g., Corston and Colman 1996; Lowther, Bassoppo-Moyo, and Morrison 1998; National Assessment of Educational Progress 1986; Shih 2006; Spitzberg 2006; Tompkins and Daly 1992), experience (e.g., Ballance and Ballance 1993; Colley and Comber 2003; Correa 2010; Dyck and Smither 1994; Farina et al. 1991; Hasan 2003; Hemby 1999; Igbaria and Chakrabarti 1990; Jacko et al. 2004; Lee 1986; Marcoulides and Wang 1990; Nückles and Sturz 2006; Okebukola and Woda 1993; Pope-Davis and Twing 1991; Potosky and Bobko 1998; Ramalingam and Wiedenbeck 1998; Richter, Naumann, and Groeben 2000; Rosen and Weil 1995; Smith, Caputi, and Rawstorne 2007; Szajna 1994; Todman and Day 2006; Todman and Drysdale 2004; Todman and Monaghan 1994; Torkzadeh and Koufteros 1994; Wilfong 2006; Williams 1993; Wu and Tsai 2007), and use, usage, or utilization (e.g., Bunz 2002; Bunz 2003; Bunz 2005; Eastin and LaRose 2000; Hargittai 2002a; Hargittai 2010; Hargittai and Hinnant 2008; Hargittai and Hsieh 2012; Okebukola, Sumampouw, and Jegede 1992; Rosen and Weil 1995; Shashaani 1994; Smarkola 2008; Shih 2006; Tondeur, Valcke, and van Braak 2008; Vekiri and Chronaki 2008).

A closer look at what exactly “experience” (or similar terms) means, both on a general and a specific level, is pertinent for future theory-oriented research and for meta-analysis. Without clear definitions and measures of such variables, comparison of results is questionable. Incorrect conceptual patterns may be identified that do not hold up when operationalized. For example, Wilfong (2006) distinguished between computer use and computer experience, defining computer use as frequency and duration of use (p. 1001) and computer experience as specific knowledge (p. 1008). Eastin and LaRose (2000), on the other hand, operationalized both Internet experience and Internet use by timeframe (months since first logging onto the Internet for experience, and hours per day for use; see their section on “Operational Measures”). A third example can be found in Smith, Caputi, and Rawstorne’s (2007) article, in which the authors noted that “computer experience” has been defined in at least 40 different studies (p. 128). They themselves differentiated between objective computer experience (defined as any kind of human–computer interaction across time) and subjective computer experience (defined as the internal processing of human–computer interaction). The operationalization of objective computer experience, as defined by the authors, includes questions on both the frequency and duration of use (pp. 131–132), bringing us full circle to Wilfong’s (2006) definition of computer use. Thus, researchers wishing to study CMC competence (or experience, or use) need to be very clear on both their conceptualization and operationalization of constructs, as well as in comparing their own results to those of others. One cannot assume that use of the same term guarantees the same meaning or measurement. No standard seems to have emerged over the past 30+ years.
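A minimal sketch can make this full-circle overlap concrete. The record below and its field names are hypothetical; the three scorings paraphrase the operationalizations attributed above to Wilfong (2006), Eastin and LaRose (2000), and Smith, Caputi, and Rawstorne (2007).

```python
# Hypothetical respondent record; field names are illustrative only.
respondent = {
    "hours_per_day": 2.0,            # duration of daily use
    "days_per_week": 6,              # frequency of use
    "months_since_first_logon": 18,  # time since first going online
}

# Wilfong (2006): "computer use" = frequency and duration of use.
wilfong_use = (respondent["days_per_week"], respondent["hours_per_day"])

# Eastin and LaRose (2000): "Internet experience" = months since first
# logging on; "Internet use" = hours per day.
eastin_larose_experience = respondent["months_since_first_logon"]
eastin_larose_use = respondent["hours_per_day"]

# Smith, Caputi, and Rawstorne (2007): "objective computer experience" is
# measured with, among other items, frequency and duration questions --
# overlapping Wilfong's "use" despite the different construct label.
smith_objective_experience = (respondent["days_per_week"], respondent["hours_per_day"])

print(wilfong_use == smith_objective_experience)  # True: same measurement, different labels
```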

2.11 Additional recent trends

Over the past decade, an important development in scholarship examining concepts related to computer-mediated communication has been the emergence of research conducted in non-Western contexts. This is an expected effect of the exponential growth of mobile and other computational devices around the globe, including in non-First and even non-Second World countries. Such research generally corresponds in focus with the ten overall categories found throughout the body of literature. Some researchers have performed comparative analyses with non-Western populations, such as in China, Turkey, or Taiwan (e.g., Chang and Liu 2011; Chang et al. 2011; Dong and Zhang 2011). In particular, these researchers have sought to examine aspects of media literacy and gender differences. Their results were provided as examples throughout the previous sections.

3 Theoretical and methodological paradigms

3.1 Lack of theory development

To date, the call in Spitzberg’s (2006) CMC competence article to develop more theory in the area has not been heeded broadly. According to the Web of Knowledge in September 2014, Spitzberg’s article has been cited only nine times (Bowman, Westerman, and Claus 2012; Gimenez 2014; Livingstone and Helsper 2010; Ogata et al. 2012; Pilotte and Evangelou 2012; Rains and Young 2006; Rubin et al. 2011; Sutcliffe et al. 2011; Svensson and Westelius 2013), with none of these articles focusing specifically on theory development or testing. Of course, the Web of Science does not include all existing journals, especially not international journals, but compared to the vast body of literature that exists in the general area, it is clear that much work remains to be done on the development and testing of theoretical CMC competence paradigms.

One study citing Spitzberg’s (2006) article without being included in the Web of Science is Bakke’s (2010) article on mobile communication competence. Here, Bakke extended technology-mediated communication competence beyond the computer or Internet to another contemporary device, the cell phone. The model resulting from his research consists of six factors (comfort with use, mobile preference, asynchronous communication, communication efficacy, appropriateness, and affect) and 24 items. Bakke used Spitzberg’s (2006) CMC competence instrument and adapted its wording to refer to mobile phones instead of computers. His discussion also referenced motivation and context as important influences, and he proposed a mobile communication competence (MCC) model, taking CMC competence theory development in a lateral direction.

Efforts at theory development related to CMC competence without direct reference to Spitzberg’s model or work do exist. For example, Calvani, Fini, Ranieri, and Picci (2012) reviewed literature on digital, media, and IT literacy (p. 798), much of which remains focused on the use of various technologies. Even information literacy, an essential component of digital literacy according to Eshet-Alkalai and Amichai-Hamburger (2004), contains the notion of being able to find (and interpret) data, which usually involves the use of search engines or similar databases. Calvani and colleagues proposed their instant digital competence assessment (iDCA) model as a theoretical foundation, describing it as stemming from “reflections on the relationships between the mind and the medium, as they have historically emerged, and the related socio-cultural connotations” (p. 799). However, in the discussion of their findings, the authors changed their language to “tool” instead of “theoretical model” when referring to the iDCA and provided no conceptual propositions for future theory-testing.

Leu, Kinzer, Coiro, and Cammack (2004) set out to form a theory of new literacies several years before Spitzberg’s (2006) article was published. These authors, too, commented on the lack of an agreed-upon definition and proposed the following, fairly inclusive definition as a conceptual framework:

The new literacies of the Internet and other ICTs include the skills, strategies, and dispositions necessary to successfully use and adapt to the rapidly changing information and communication technologies and contexts that continuously emerge in our world and influence all areas of our personal and professional lives. These new literacies allow us to use the Internet and other ICTs to identify important questions, locate information, critically evaluate the usefulness of that information, synthesize information to answer those questions, and then communicate the answers to others (Leu et al. 2004: 1572).

Noticeable about this definition is the integration of multiple components that have been studied as separate, unique concepts by other researchers, such as applied skills, information literacy, and multiple technological platforms. Furthermore, Leu and colleagues (2004: 1575) identified three main social forces that influence the development of new literacies: global economic competition, the rapid emergence of the Internet (and, presumably, related technologies), and governmental public policy initiatives aimed at increasing literacy. A thorough review of related literature led the authors to stipulate ten principles of what they call the “New Literacy Perspective” (Leu et al. 2004: 1589). These ten principles are:

1.The Internet and other ICTs are central technologies for literacy within a global community in an information age.

2.The Internet and other ICTs require new literacies to fully access their potential.

3.New literacies are deictic.

4.The relationship between literacy and technology is transactional.

5.New literacies are multiple in nature.

6.Critical literacies are central to the new literacies.

7.New forms of strategic knowledge are central to the new literacies.

8.Speed counts in important ways within the new literacies.

9.Learning often is socially constructed within new literacies.

10.Teachers become more important, though their role changes, within new literacy classrooms. (Leu et al. 2004: 1589)

These ideas clearly connect to the ten concept categories discussed above (applied skill; cognitive aspects; communication ability; demographics; emotional and psychological factors; methodology; social context; structural resources; technological context; and usage experience).

Leu and colleagues (2004) presented their New Literacies Perspective from an educational viewpoint in which teachers aim to teach traditional reading and writing literacy in conjunction with emerging literacies. Their focus was thus not the development of CMC competence theory. Nonetheless, the authors’ ideas have many connections with Spitzberg’s original model, in which he emphasized the communicative nature of CMC competence. Unfortunately, neither Spitzberg’s model nor Leu and colleagues’ work seems to have stimulated much theory testing.

3.2Plethora of methodological work

The vast majority of work done in the broad area of CMC competence is quantitative in nature. Spitzberg’s original measurement scale (2003; 2006) used Likert-scale items, and consequently so did the derivations of the scale developed and tested by Bunz (2003) and Harper (2000). Bunz’s (2005) listing of more than 100 measurement scales consisted predominantly of such survey questionnaire-type instruments in which participants self-report. The volume of instruments is large and continues to grow as researchers develop new scales for new and emerging technologies (e.g., Appel 2012).

While not providing a comprehensive review, Litt (2013) attempted to summarize the various types of methodologies employed by researchers in the area of “internet skills”. She, too, pointed to the variety of terminology used, including skill, competence, knowledge, and fluency (Litt 2013: 613). Further, Litt separated the scholarship she reviewed into three main methodological categories: survey/self-report measures, performance/observation measures, and combined/unique assessments (Litt 2013: 615–617).

The self-report measures reviewed by Litt, including Spitzberg’s (2006) CMC competence scale, vary greatly in the number of items included but are almost all set up with Likert-type response options, as is common for self-report measures. Consequently, these types of measurement instruments lend themselves to large samples, which increases reliability. Validity, on the other hand, is less certain due to concerns that actual competence and perceived or desired competence may be conflated in self-reported measures (e.g., Bradlow, Hoch, and Hutchinson 2002). Other long-standing concerns about self-report measures, such as faulty recall, fear of stigma or the desire to please the researcher, low response rates, uncertainty about whether the person completing the survey is the intended respondent, and response fatigue in the case of long questionnaires, are not confined to the area of CMC competence.
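For reference, the reliability of such multi-item Likert scales is conventionally indexed by Cronbach’s alpha; the formula below is the standard textbook definition, not one drawn from any of the studies cited here:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

where \(k\) is the number of items, \(\sigma^{2}_{Y_i}\) the variance of item \(i\), and \(\sigma^{2}_{X}\) the variance of the total scale score. Values above roughly .70 are commonly treated as acceptable in this literature.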

Included in Litt’s performance/observation category are methodological approaches such as ethnography and interviews (e.g., Holloway and Valentine 2001; Tripp 2011). This category also includes experimental and laboratory studies (e.g., Hargittai 2002a), though these appear to be rarer. Litt pointed out that observational or experimental studies often lack generalizability, as they tend to make use of very specific subject pools, but do provide high validity and “robust accounts of human behavior” (Litt 2013: 619).

Litt’s final category, combined/unique assessments, reported on studies that combined self-report with laboratory methods (e.g., Bunz, Curry, and Voon 2007); it remains unclear what qualifies an assessment as “unique” in this scheme. Litt predicted that the combination of quantitative and qualitative measures would increase in the future, basing this prediction on two cited sources from 2008. The articles reviewed for this chapter, some of which overlap with the work reviewed by Litt, do not clearly show such a trend; instead, quantitative self-report measures still dominate. Even Hargittai (Litt’s mentor), together with Hsieh, worked towards creating an index of self-report measures that the authors then used in telephone surveys with more than 5,000 respondents (Hargittai and Hsieh 2012: 100).

One observation resulting from the review of the literature for this chapter is the increasing use of multivariate approaches in recent research. More researchers now integrate several variables into their designs and speculate on the variables’ interrelationships, whereas research two decades ago tended to be conceptualized uni-directionally, looking for direct effects.
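The shift can be illustrated with a minimal sketch (in Python with statsmodels; the variable names and synthetic data are ours and do not reproduce any particular study):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data for illustration only; no actual study is reproduced.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "usage_experience": rng.normal(size=n),
    "communication_ability": rng.normal(size=n),
    "anxiety": rng.normal(size=n),
})
df["cmc_competence"] = (0.5 * df["usage_experience"]
                        + 0.3 * df["communication_ability"]
                        + rng.normal(scale=0.5, size=n))

# Older, uni-directional design: a single direct effect.
direct = smf.ols("cmc_competence ~ usage_experience", data=df).fit()

# More recent multivariate design: several antecedents plus an
# interaction term probing the variables' interrelationships.
multi = smf.ols(
    "cmc_competence ~ usage_experience + communication_ability"
    " + usage_experience:anxiety",
    data=df,
).fit()
print(round(direct.rsquared, 2), round(multi.rsquared, 2))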

4Priorities for future work

Going forward, two recommendations emerge from the review above, one relating to measurement, the other to theory. The first recommendation concerns the need for more consistent use of terminology, as well as the operationalization of such terminology. It seems as if each researcher feels the need to develop his/her own scale based on slightly different interpretations of similar concepts. With such fragmentation, meta-analysis of the body of research is virtually impossible. Similarly, researchers still focus on specific technological platforms, treating each as if it required re-invention of the CMC competence “wheel”. Such focus ignores the accumulation of knowledge gained from prior research and seems ill-directed, unless one’s interest lies solely in usability issues or technical components. A competence-based measurement instrument should be designed for valid and reliable use across platforms.

Spitzberg’s original scale included “CMC factors” that addressed the qualities of the medium itself. It could not have foreseen the development of Twitter, Vimeo, or whichever other current computer-based technology one chooses. However, the “richness”, “user-intuitive design”, or “interactivity” of any tool can be assessed with similar measurement items. Thus, rather than focusing on technology-specific criteria or treating each new technological platform as a new environment, researchers should focus on the underlying processes enabled by the technology. With consistent measures of this kind, research efforts can build on each other more effectively. This recommendation is directly opposed to Litt’s conclusion, which led her to recommend “creating and updating more nuanced measures” (2013: 624) so that one can “account for changes in technology” (2013: 625). Litt’s suggestion describes exactly what has happened in the area of CMC competence for the past twenty or more years, and it has contributed to the field’s conceptual and operational fragmentation. Instead, an instrument focusing on processes and broad characteristics of technologies would endure and be applicable to many platforms and contexts.

The second recommendation evolves directly out of the first and concerns the need for theory-building. As was mentioned repeatedly throughout the previous sections, the research body broadly related to CMC competence lacks efforts at theory-building or theory-testing and is almost entirely measurement-driven. Spitzberg’s original model (1997) was built on his and his colleagues’ work on communication competence (outside of the CMC context). That area, reviewed in other chapters of this handbook, is well established and includes both theory testing and development. The fragmentation of measurement instruments used in the CMC competence-related literature, based on divergent definitions of literacy, competency, fluency, or similar terms, prevents meta-analysis and complicates theory-building, as noted by Leu, Kinzer, Coiro, and Cammack (2004: 1571–1572) a decade ago. Furthermore, few researchers even seem to perceive the need for theory in their work.

Once we better understand, define, and efficiently measure CMC competence, the concept itself can be applied to a variety of contexts. In a technology-mediated context, technological competence is the prerequisite for effective and strategic mediated interactions. Working towards such an understanding, this chapter investigated patterns of relationships among the large number of individual concepts and identified the broader categories defined above. Such patterns help describe the underlying processes that lead to technological competence and will ultimately contribute to the development of theory. As a step towards such theory-building, Figure 2 was constructed to represent patterns both observed in the literature and hypothesized; the figure is explained below. The model contains nine of the ten concept categories reviewed in the body of this chapter. The “methodology” category was excluded because a theoretical model should be cogent and independent of the specific methods used.

A basic foundation for technological or CMC competence is one’s social context, which is often related to, but not to be equated with, demographic categories. This foundation, which describes one’s situation in society, affects at least four conditions. The first of these conditions concerns cognitive aspects: critical thinking, learning ability, and the opportunity to confront complex situations, often but not always related to educational opportunities, hone and affect one’s communication ability (written, verbal, with or without technology) and one’s usage experience in the form of openness to change and new experiences, personal involvement with various technologies, and the like.

The socio-demographic elements also influence three other conditions. These are: 1. the availability of structural resources from the user’s perspective, such as access to technology, which is interrelated with 2. the variety of technological contexts a person is exposed to, which is in turn interrelated with 3. the development of one’s actual, applied skill level. It is hypothesized that the greater the exposure to technological resources, the higher the chance to experience a variety of technological contexts and to develop higher-level applied skills. However, greater applied skill is not necessarily connected to greater usage experience in a linear way, due to emotional and psychological factors. Some people fear technology, a fear that may be heightened by having to navigate a variety of technological contexts but may also abate as greater applied skill is acquired. Other people may not have the opportunity to experience a variety of technological contexts, yet, due to their motivation, their applied skills soar. Thus, emotional and psychological, person-internal factors present a buffer between the actual use of technology and the usage experience one gains. Finally, both usage experience and communicative ability influence the ultimate outcome, one’s level of CMC competence. Future researchers are invited to test and refine this model so that patterns can be observed over time and a more solidified theory of CMC competence can emerge.

Fig. 2: Contributing factors in determining level of computer-mediated or technological competence.
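As a purely illustrative sketch (the node names are the chapter’s concept categories; the edge list is our reading of the hypothesized paths described above, not a formally published specification), the Figure 2 model can be encoded as a directed graph in Python:

# Hypothesized paths of the Figure 2 model, encoded as an adjacency list.
# The edges follow the prose description above; "interrelated" links are
# shown in one direction only for simplicity.
HYPOTHESIZED_PATHS = {
    "social context": ["cognitive aspects", "structural resources",
                       "technological context", "applied skill"],
    "cognitive aspects": ["communication ability", "usage experience"],
    "structural resources": ["technological context"],
    "technological context": ["applied skill"],
    # Emotional/psychological factors buffer the skill -> experience link.
    "applied skill": ["usage experience"],
    "emotional and psychological factors": ["usage experience"],
    "communication ability": ["CMC competence"],
    "usage experience": ["CMC competence"],
}

# Example query: which categories feed directly into the outcome?
direct_antecedents = [node for node, targets in HYPOTHESIZED_PATHS.items()
                      if "CMC competence" in targets]
print(direct_antecedents)  # ['communication ability', 'usage experience']

Representations of this kind would also allow future researchers to state, and then test, each hypothesized path explicitly.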
