Brian H. Spitzberg

22 Assessing the state of assessment: Communication competence

Abstract: The key concepts, dimensions and decisions involved in assessing communication competence are reviewed. Competence assessment depends first upon locating competence in skills or abilities, or in impressions of those skills and abilities. The criteria of competence, such as accuracy, appropriateness and effectiveness, also require inclusion. A taxonomy of communicative competence measurement dimensions is explicated, and its implications for developing and validating competence assessments are explored. The three basic dimensions are locus, directness and generalization. Key issues and challenges are identified by exploring the questions of what, who, when, where, how, and why assessments are being made.

Keywords: ability, competence, criteria, curvilinearity, reliability, validity

“And measure for measure”

Shakespeare’s (1623) play, Measure for Measure, speaks of a fundamental principle of reciprocity, in this case a crime of equal measure for a crime already committed. The idea of attempting to achieve a direct correspondence between one thing and another is at the heart of all measurement. Shakespeare’s inspiration for this notion is often associated with the Bible, Matthew 7:2: “For in the same way you judge others, you will be judged and with the measure you use, it will be measured to you.” As with meting out justice, however, direct correspondence between one thing and another thing representing it is difficult to achieve in practice. When one measure fails to represent the other, it renders a secondary failure, doing its own injustice to the original idea or action. In general, in the social sciences the ultimate value of concepts can only be judged in the marketplace of measurement and operationalization. Communicative competence is no exception.

This chapter examines the nature of measuring and operationalizing communication competence. In the process, it will establish a vocabulary of basic concepts related to such assessment and to the scope of communication competence itself. This boundary-setting will circumscribe the relevant domains of communication competence that require measurement. In the process, the chapter will examine some of the more challenging issues involved in assessing communication competence and identify measurement exemplars in various domains of communicative competence. It draws upon both prior considerations of such issues (e.g., Spitzberg 1987, 1988, 2000, 2003; Spitzberg and Cupach 1989), as well as prior compendia and reviews of measures related to communicative competence (i.e., Backlund and Wakefield 2010; Breen, Donlon, and Whitaker 1977; Byram 1997; Christ 1994; Daly 1994; Hargie 1997a; Larson et al. 1978; Morreale and Backlund 2007; Morreale et al. 1994; Nangle et al. 2010; Rubin and Mead 1984; Rubin, Palmgreen, and Sypher 2004; Rubin et al. 2009; Wrench, Jowi and Goodboy 2010).

1 A synoptic view of communication competence

A synoptic overview of communication competence reveals that it has been contemplated in a scholarly way at least since the time of Aristotle (Spitzberg and Cupach 1984, 2011). Aristotle’s Rhetoric represented an attempt to systematically identify the available means of persuasion in any situation. Aristotle’s interest was in part a response to his mentor Plato, who distrusted the role that rhetoric could play in distorting the search for truth. Aristotle reasoned that a science of rhetoric would be the best protection against, and palliative for, such potential rhetorical abuses.

Post-renaissance scholarship began to expand beyond Aristotle’s focus on just persuasive communication in public venues and his rationalist approaches to investigating the phenomenon. By the 1700s, an elocutionist school of instruction attempted to link psychological faculties and emotions to overt and specific learnable behaviors. These skills would be taught to people seeking to communicate specific emotions in their oratory. By the mid-twentieth century, scientific methods began establishing a foothold in the study of communication. Combined with the statistical and psychometric assumptions developed in areas such as the study of intellectual competence, the study of social competence (Doll 1935; Gilliland and Burke 1926; Hunt 1928; Thorndike 1920) began to emerge as a conceptual and empirical model for measuring social and communicative competence. Behaviorist methodologies emerged to exemplify the existence of basic abilities and their tractability in response to environmental stimuli. Other more qualitative methods emerged in sociology and symbolic interactionism in which naturalistic, ethnographic, and qualitative methods provided models for interpreting individual native competencies and behavioral norms. These various academic tributaries have today carved a landscape providing a methodological panoply for measuring and operationalizing communicative competence.

Communicative competence is commonly considered from one of two broad perspectives: abilities or impressions. From an abilities perspective, competence is the potential to perform certain repeatable, goal-directed sequences of overt behaviors (Hargie 1997a; Spitzberg 2003). Within this perspective, competence is similar to concepts of potentiality, capability, or faculty. In almost all domains of human endeavor, however, the brute fact of a person’s ability to perform an act is seldom as important as the question of how well such an act is performed. The ability to make eye contact probably matters far less in most circumstances than whether or not a person makes eye contact appropriate to the context. Such concerns are less about the brute ability and more about the skill, facility, adeptness, expertise, prowess, talent, proficiency, or mastery of applying the ability in context. These inherently subjective terms reflect value judgments, inferences and impressions of competence – whether or not an ability has been manifested in a way that results in positive judgments of a person’s communicative ability.

There are, for example, disabilities that restrict whether or not a person can see and therefore make eye contact, or speak and therefore orally verbalize questions. Because of the equifinality and multifinality of communication (Spitzberg 2013), people with such disabilities can generally find ways of communicating competently, because they find substitute ways of expressing an idea or accomplishing a particular function or outcome (equifinality), they experience unanticipated yet preferable outcomes using their available familiar communication skills (multifinality), or because those making judgments of competence compensate for the person’s disabilities, either in their own skill adaptation or by adjusting their expectancies and evaluations. Thus, in general, an impression perspective of competence locates the construct in the evaluative judgments made of a person’s communication.

These two perspectives can be integrated in a comprehensive model (Spitzberg 2013), in which the question is how objective skills (i.e., repeatable functional behaviors) are correlated or functionally related to judgments of competence (i.e., quality). There are many dimensions of quality and some may be particular to a given context. For example, a dimension of quality for a salesperson might be that person’s unit or service sales or profits for a given period of time. At a more general level, however, three dimensions of quality are generally considered relevant to communication: accuracy, effectiveness, and appropriateness.

Accuracy refers to the degree to which the symbols and behaviors used in a communicative encounter or set of encounters render fidelity in representing a given text or meaning, or the degree of fidelity with which the meanings or understandings in at least one communicator’s mind are reproduced in the minds of other communicators. Accuracy is related to such concepts as mutual understanding, empathic understanding, co-orientation, basic communication fidelity, uncertainty reduction, information transfer, and clarity. Accuracy is highly relevant to certain contexts involving complex tasks with relatively little margin for error, such as cockpit communication, surgical and medical encounters, military operations, and paired computer programming. In the everyday pragmatics of communication encounters, however, people are often constrained by concerns about impression management and relationship management. Thus, in many communicative encounters, pure accuracy can be substantially subordinate to other interactant objectives. Indeed, much of the universal pragmatic of politeness (Brown and Levinson 1978) and the relational level of communication (Watzlawick, Beavin, and Jackson 1967) involve the strategic use of ambiguity, equivocation, taken-for-granted suppositions, unspoken implications, and nonverbal demeanor, all of which limit the value of accuracy as a universal criterion of competence.

Effectiveness refers to the degree to which relatively preferable outcomes are accomplished through a process of communication. The relative nature of effectiveness is important to consider, given that there may be true conversational dilemmas in which any conceivable form of communication nevertheless yields punishing or dissatisfying outcomes (e.g., the delivery of bad news). In such situations, effectiveness is gauged by the degree to which the least costly communication tactic is taken. What constitutes the substance or content of effectiveness may vary from context to context. For example, in the cockpit and in medical operations, effectiveness might encompass clarity or accuracy as one of its key indicators. As such, effectiveness can encompass accuracy and clarity as subordinate criteria.

Appropriateness refers to the degree to which communication in a given encounter is evaluated as legitimate to, or an adequate fit to, the context in which it is enacted. Appropriateness is often equated with conformity to the rules of a situation, but it can be distinguished in an important sense. The most appropriate behavior in a given context may require behavior that violates the existing rules, which may be found overly restrictive. Highly competent communicators may be able to renegotiate, bend, or otherwise re-define the rules of a given context and encounter.

Given that effectiveness is a cognitive or affective criterion that is tethered to a communicator’s own objectives and intentions, it generally is best referenced by the communicator making such judgments. In contrast, the others who populate a communicative encounter represent those who are in the best position to evaluate whether or not their own sense of propriety has been violated or fulfilled. Thus, appropriateness is a judgment generally best made by others. Despite this obvious notion, relatively few studies and measures of competence explicitly recognize the need to differentiate the locus of perception in these criteria of judgment.

2 A taxonomy of measures of communicative competence

Communicative competence is a human construct – unlike the function of an organ or the use of an opposable thumb, the quality of a person’s communication is not an inherently objective phenomenon. What constitutes quality may vary from situation to situation, community to community, era to era, and relationship to relationship. As such, the measurement of communicative competence must always flow to some substantial degree from a conceptual perspective grounded in the priorities of a given context. Assuming that communicative competence has been theorized, modeled, or conceptualized, issues of measurement can proceed. It is small surprise that with many approaches to conceptualizing competence, there are also many approaches to measuring it.

Fig. 1: The behavioral assessment grid (BAG; adapted from: Cone 1978; Spitzberg 2003; Spitzberg and Cupach 1989).

In order to examine the approaches to measuring competence, a grounding taxonomy is needed. Spitzberg and colleagues (Spitzberg 2003; Spitzberg and Cupach 1989) adapted a behavioral assessment grid by Cone (1978), which provides one way of organizing behaviorally-based measures. It is laid out along three dimensions, which intersect to identify potential forms of measures. The three dimensions are locus, directness and generalization (see Figure 1). Locus refers to whether the primary focus of measurement is on cognition, affect, behavior, or outcomes of some communicative process. Any given measure may cross over these foci. Directness refers to the degree to which the measure focuses on objective, discrete levels of behavior, or depends on reports about such behavior mediated by the interpretations of self or other(s). Generalization refers to the degree to which the items or data of measurement apply, or are expected to apply, consistently across various domains of interest, such as time, context, perceiver, and method. The intersection of these dimensions defines operational spaces, such that measures can be compared based on their unique features along these dimensions. The grid also provides a heuristic function by indicating measurement spaces that may not have been explored by existing measures.
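To make the grid concrete, the three dimensions can be sketched as a small classification scheme. The sketch below is illustrative only: the dimension values are paraphrased from the description above, and the `Measure` class, the example entry, and `unexplored_cells` are hypothetical names, not part of any published instrument.

```python
# Illustrative sketch of the behavioral assessment grid (after Cone 1978;
# Spitzberg 2003). Dimension values are paraphrased; names are hypothetical.
from dataclasses import dataclass
from itertools import product

LOCUS = ("cognition", "affect", "behavior", "outcome")
DIRECTNESS = ("interview", "self-report", "other-report", "role-play",
              "artifact", "naturalistic", "physiological")
GENERALIZATION = ("method", "setting", "time", "item", "observer")

@dataclass(frozen=True)
class Measure:
    name: str
    locus: str           # which domain the measure targets
    directness: str      # how mediated the behavioral data are
    generalization: str  # which domain of consistency is claimed

def unexplored_cells(measures):
    """Heuristic use of the grid: dimension combinations no measure occupies."""
    occupied = {(m.locus, m.directness, m.generalization) for m in measures}
    return [cell for cell in product(LOCUS, DIRECTNESS, GENERALIZATION)
            if cell not in occupied]

# Example: a self-report measure of behavioral skill claiming stability over time
csrs = Measure("Conversational Skills Rating Scale (self form)",
               locus="behavior", directness="self-report", generalization="time")
print(len(unexplored_cells([csrs])))  # 4*7*5 - 1 = 139
```

Even this toy version shows the grid's heuristic function: a single measure occupies one cell, leaving 139 combinations that may or may not have been explored by existing instruments.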

The locus dimension recognizes that some approaches to communicative competence focus on different constituent domains. For example, research on communication apprehension, social anxiety, and shyness all reflect attention toward the affective or motivational domain of communicating competently. Similarly, research on the accuracy of nonverbal sensitivity and expressiveness, mirror neurons, and empathy may focus in part on the inner emotional experience of a communicator. Other measures of communicative competence may focus more on the cognitive planning, interpretive schemata, goal formulation, cognitive complexity, rule knowledge, perspective-taking, or linguistic or role knowledge relevant to an interactional context. Such constructs tend to fit more in the cognitive domain of communicative competence. Given that there is no communication without behavior, measures that focus at least in part on skills and their behavioral constituents represent a common denominator across communicative competence measures. Finally, some measures focus on evaluative judgments of a communicative process, or some measure of outcomes. For example, sales figures or the lack of recorded medical errors might be taken as proxies of the competence of communication. Although it is possible to have a measure of communicative competence that is exclusively behavioral (e.g., the number of times a person makes eye contact in a conversation could be viewed as a measure of competence), it seems reasonable to conclude that no theoretical account of communicative competence could be complete without some constructs representing motivation, knowledge, and outcomes in addition to the behavioral domain.

The directness dimension is relatively open-ended and is not considered a comprehensive representation of all potential methods. For example, the discovery of mirror neurons opened the possibility of using fMRI and other technologies to measure aspects of communicative competence. Such constructs and measurement technologies could not be anticipated a priori and thus reveal the potential for future innovations in measurement methods. To date, most measures of competence can be aligned broadly along a dimension of directness, reflecting the degree to which interpretive processes mediate the behavioral data taken to reflect communicative competence. Interview and projective techniques tend to involve structured theoretical constructs that are overlaid upon a communicator’s self-reports of impressions. Self-reference measures reflect by far the most common approach, influenced heavily by the psychological trait paradigm, in which people self-report their own behavior and abilities, which are taken as proxies of underlying characteristics representing competence. Self-reports are often compared to other-reference measures, whether in studies of empathic accuracy, workplace 360° assessments, or expert or peer judgments of candidate social skills. Case records reflecting certain communicative achievements may be available for inspection and coding, such as medical records demonstrating that certain information was successfully elicited during a health appraisal interview. Such records depend, nevertheless, on reporting of what occurred in a given communicative encounter.

Role-play and simulation approaches involve scenarios in which confederates or other interactants are given a scenario to act as if it were a naturally occurring encounter. Typically such encounters are recorded and later evaluated by third parties who have expertise or a stake in the outcome. Three common applications of such techniques involve interview role-plays, social skills training and assessment role-plays, and simulated or standardized patient assessments in health education contexts. There may be more elaborate approaches, such as table-top exercises for military or emergency response scenarios. Future assessments will likely rely significantly more on interacting in cyberspace with artificially intelligent avatars. Obviously there is still generally a need for self-reference and/or other-reference in addition to such simulation methods.

Artifact and signal-based methods represent approaches that would use some residual products of a communicative encounter to indicate communication competence. Although there are few instances of this approach, methods that investigate voting outcomes, turnover, or health clinic visits after a political, organizational, or persuasive campaign can be taken as measures of communicative competence. Although indirect indicators of the competence of communication itself, these are direct observations of explicit and discrete behaviors and therefore fall on the direct side of the continuum.

Naturalistic approaches to studying competence generally come out of conversation analytic methods, in which naturally-recorded interactions are examined in detail to identify specific instances in which native speakers sequentially co-construct interactional achievements. Identifying the communication competencies of expressing hope, or avoiding sensitive topics, in a medical encounter illustrates an attempt to locate the competence as directly as possible in the interactional behavior itself.

The final type of direct behavioral measurement is physiological. Developments in the study of facial affect expression and recognition, mirror neuron and fMRI signals of empathic activity (Gerdes, Lietz, and Segal 2011) and eye movement desensitization and reprocessing (EMDR; Jeffries and Davis 2013) are exemplary of approaches that may have direct implications for measuring communicative competence. A variety of developments in the study of communicative disorders and their underlying physiological components may also provide direct implications for a more physiologically-grounded theory of communicative competence.

The generalization dimension arrays the ways in which the data of a given measure of competence are expected to demonstrate, or actually demonstrate, consistency across various domains. The three major areas of generalization are: external, internal, and observer. External generalization refers to the extent to which measures of competence in one application correspond to other applications. Specifically, method or format generalization examines whether measuring in one method (e.g., self-report) generalizes to other methods (e.g., role-play performance). Generalization across settings, contexts, or relationships involves correspondence across types of situations or encounters. For example, research using the same measure that finds that a husband and wife communicate incompetently with one another, but competently with friends or strangers (Levenson and Gottman 1978) may be taken as a limitation of the measure, or as an indication that competence is relationally moderated. Generalization across times or episodes concerns the state-versus-trait issue of the degree to which competence is a stable individual characteristic. The setting and time facets of generalization are clearly correlated, given that measuring competence across situations necessarily entails measuring it across time. However, there may be generalized competence within certain contexts (e.g., a married couple may communicate incompetently in a consistent way within their relationship encounters, but may be inconsistently competent in encounters outside of their relationship).

Internal forms of generalization refer to psychometric issues typically addressed through analyses such as internal reliability, Rasch analysis, item response theory analysis, factor analysis, cluster analysis, and multidimensional scaling. Item abstraction refers to the extent to which items at very molecular or granular levels of abstraction correspond to items cast at a more molar or general level. For example, the item “made eye contact” is fairly molecular and “paid attention” is fairly molar, but making eye contact may be one of the important indicators a rater looks for when making the judgment about paying attention.
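As a concrete illustration of the internal-generalization analyses mentioned above, the sketch below computes Cronbach's alpha, the most common index of internal consistency. The function name and the rating matrix are hypothetical, and the data are fabricated for illustration; nothing here reproduces any published scale analysis.

```python
# Sketch of one internal-generalization index: Cronbach's alpha.
# Fabricated data: 5 respondents rated on 4 molecular items (1-5 scale).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

ratings = np.array([[4, 5, 4, 4],
                    [2, 2, 3, 2],
                    [5, 5, 4, 5],
                    [3, 3, 3, 4],
                    [1, 2, 2, 1]])
print(round(cronbach_alpha(ratings), 2))  # → 0.96
```

A high alpha such as this indicates that the molecular items covary strongly enough to be treated as indicators of one underlying dimension; item response theory and factor analysis probe the same internal consistency question in finer-grained ways.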

The last two domains of generalization concern the extent to which a measure of a given interactant’s competence produces similar results across raters or observers and the extent to which a given observer or rater produces similar evaluations across interactants. The latter issue gets at aspects of rater biases and judgment stability, whereas the former gets at aspects of trait and dispositional consistency of performance as perceived across observers.

The intersection of these three dimensions creates a matrix within which existing measures can be categorized and compared and within which new measures might be devised. There may be cells, however, that are likely to remain null as a result of paradigmatic constraints. For example, conversation analysis typically avoids mentalistic constructs as a source of interpretation, so it might be unlikely for there to be a naturalistic measure of cognition expected to generalize across settings or time.

3 Key issues in developing and validating measures of communicative competence

The primary decisions involved in selecting, or developing and validating, a measure of communicative competence revolve around the prototypical heuristic questions of what, who, when, where, and how (Spitzberg 1987): What motivations, knowledge, and behaviors need to be assessed? Who needs to perform the assessment? What time frame of communication (i.e., when) needs to be assessed? Where should the communication be contextualized or situated for assessment? How should the operationalization be composed and configured? Addressing each of these questions tends, in turn, to hinge on the broader question of why: why is this particular measure needed? Each of these decisions has significant implications for the function and validity of a measure, and most ought to derive from theoretical assumptions about the nature of communication and competence.

3.1 What?

The question of “what” refers to the conceptual constituents of communicative competence. What physiological/affective, cognitive, and behavioral components comprise competence? Answering this question helps address another question – what is the scope of assessment? Because the determination of what comprises a “skill” is an inherently interpretive venture and given a potentially infinite scope of contexts to which communication might potentially apply, there is an infinite number of potential communication skills. This quandary is somewhat analogous to Chomsky’s (1965) claim that a finite grammar is logically capable of producing an infinite number of sentences. There may be a finite number of communication modes (e.g., verbal and nonverbal), but the constituents of these modes are capable of an infinite range of sequential variations and potential applications to communicative situations. An interesting example of the potential of coding new skills not previously envisioned is demonstrated in the work by Pentland (2012) and others (Pentland et al. 2005; Curhan and Pentland 2007; Eagle and Pentland 2009) using automated sociometric badge technologies to assess automatically who talks to whom and how often. From such structural indices, four basic social signals were quantified: activity level (the frequency of participation in conversation), engagement (the influence of one person’s turns on other people’s speaking turns), stress (variation in prosodic vocal pitch), and mirroring (degree to which a person engages in short interjection sequences when interacting with others). Such skills are not particularly intuitive and depend significantly on the particular method of behavioral capture for their feasibility and potential validity.
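To illustrate how such structural indices might be quantified, the sketch below implements grossly simplified stand-ins for two of the four signals (activity level and stress). Pentland's actual badge pipeline involves audio feature extraction and is far more elaborate; the function definitions and all data here are fabricated assumptions, not his published algorithms.

```python
# Grossly simplified stand-ins for two sociometric-badge signals.
# Real badge analysis extracts these from raw audio; data here are fabricated.
import numpy as np

def activity_level(turn_durations: list[float], encounter_length: float) -> float:
    """Fraction of the encounter spent speaking (frequency of participation)."""
    return sum(turn_durations) / encounter_length

def stress(pitch_hz: np.ndarray) -> float:
    """Variation in prosodic vocal pitch, here just the coefficient of variation."""
    return pitch_hz.std(ddof=1) / pitch_hz.mean()

turns = [4.0, 7.5, 3.0, 10.5]                     # seconds speaking, per turn
pitch = np.array([180., 210., 195., 240., 175.])  # fabricated F0 samples (Hz)
print(round(activity_level(turns, 60.0), 3))      # 25/60 ≈ 0.417
print(round(stress(pitch), 3))
```

The point of the sketch is the chapter's: these "skills" are defined by the capture method itself, and would be difficult to specify, let alone score, without the automated instrumentation.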

If the potential domain of communicative skills is infinite, then some basis must be formulated for determining what constitutes an adequate corpus of communicative content for a measure. This is far from a straightforward task. For example, whenever a systematic attempt is made to identify all of the relevant communicative competencies or skills in a given domain, the list turns out to be extensive. Such efforts in the area of communicative competence in general (Spitzberg 2011; Spitzberg and Cupach 1989), interpersonal competence (Spitzberg and Cupach 2002), marital communication (Spitzberg and Cupach 2011), and intercultural competence (Spitzberg and Changnon 2009) have each demonstrated well over 100 skills that could be identified in the existing literature. In each of these domains, however, the authors argued that these were not all likely to be different skills, so much as separate labels for similar underlying functional skills. The vast majority of the skills identified in these domains could be interpreted along dimensions of higher-order skills such as attentiveness, composure, coordination, and expressiveness.

Other efforts to capture the content and scope of competence in a given arena reveal similar types of hierarchical organization, in which a wide variety of molecular skills are identified as organized according to a more molar level of dimensionality. Klein, DeRouin, and Salas (2006; see also: Klein 2009) meta-analyzed interpersonal skills and identified five communication skills (active listening, oral communication, written communication, assertive communication, nonverbal communication) and seven relationship-building skills (cooperation/coordination, trust, intercultural sensitivity, service orientation, self-presentation, social influence, conflict/negotiation). In a systematic attempt to identify interpersonal performance in military education contexts (Wisecarver, Carpenter, and Kilcullen 2007; see also: Carpenter and Wisecarver 2004; Carpenter et al. 2005), four higher-order team-building performance competencies were identified, each of which could be defined by more molecular skills: energizing others (influencing others, rewarding), directing others (coordinating, training and developing, managing perceptions, managing others’ relationships, establishing and maintaining control, role modeling, managing personnel), exchanging information (informing, gathering information) and building relationships (courtesy, helping others, networking/maintaining relationships, adapting to the social environment). Each of these more molecular skills could obviously be specified in the form of even more molecular skill composites. These skills seem almost non-overlapping with the teamwork skills identified by the meta-analysis by LePine et al. (2008), who identified: transition processes, mission analysis, goal specification, strategy formulation and planning, monitoring progress toward goals, systems monitoring, team monitoring and backup behavior, coordination, conflict management, motivation and confidence building, and affect management. 
The difference between these approaches illustrates that different skill maps can nevertheless represent the same territory. No single map is the territory, and any communicative territory can be represented usefully by multiple maps.

There are two typical approaches to identifying the requisite territory that needs mapping: inductive and deductive. Inductive (bottom-up) approaches survey the existing research literature and/or survey people in a relatively open-ended manner and extract the relevant behaviors and skills that occur with sufficient regularity as a representative sample of the competence domain. The resulting inductively-generated list is then refined by seeking sensible higher-order items or coding categories into which the list can be fit. For example, in the domain of interpersonal communication, Spitzberg and Hurt (1987; see also: Spitzberg 2007) initially surveyed hundreds of existing social skills measures and dozens of factor-analytic studies of communication competence and social skill measures, and conducted an analysis of open-ended surveys of participant nominations of communication skills. The result was a list of 25 interaction behaviors that became the Conversational Skills Rating Scale (Spitzberg 2007).
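The refinement step of the inductive approach, collapsing many molecular items into a few higher-order dimensions, is typically performed with factor analysis. The sketch below uses a cruder stand-in, the Kaiser eigenvalue-greater-than-one criterion applied to a correlation matrix of fabricated ratings, simply to show the logic of data reduction; nothing in it reproduces the actual Conversational Skills Rating Scale analyses.

```python
# Minimal sketch of inductive data reduction: fabricate ratings on six
# molecular items built from two latent dimensions, then count components
# with eigenvalue > 1 (Kaiser criterion). Real scale development uses
# full factor analysis; this is only the skeleton of the idea.
import numpy as np

rng = np.random.default_rng(0)
# 200 fabricated respondents; items 0-2 share one latent factor
# (say, "attentiveness") and items 3-5 share another (say, "composure").
f1, f2 = rng.normal(size=(200, 1)), rng.normal(size=(200, 1))
items = np.hstack([f1 + 0.5 * rng.normal(size=(200, 3)),
                   f2 + 0.5 * rng.normal(size=(200, 3))])

corr = np.corrcoef(items, rowvar=False)   # 6 x 6 inter-item correlations
eigvals = np.linalg.eigvalsh(corr)
n_factors = int((eigvals > 1.0).sum())
print(n_factors)  # → 2: two higher-order dimensions recovered from six items
```

The six molecular items collapse onto two molar dimensions, mirroring how lists of 100+ skills in the literature reduce to a handful of higher-order skills.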

The deductive (top-down) approach identifies an existing, or formulates a new, conceptual schema or model to guide the assessment content. For example, in the domain of health professional communication, professionals may formulate a consensus statement on the skills that are important (e.g., Bachmann et al. 2013; von Fragstein et al. 2008; Kiessling et al. 2010; Makoul and Schofield 1999). Street and De Haes (2013) conceptualize seven important communication functions in medical encounters and identify the specific communication skills and their respective outcomes that would represent a reasonable curriculum for communication instruction and assessment. A priori theory, models, or expert conceptions of necessary or important competencies can be used as a framework from which to derive the contents of a competence measure.

3.2 Who?

Measures have to be administered and completed by someone, about someone. Research demonstrates that different loci and foci of perception and judgment represent distinct perspectives (Spitzberg and Cupach 1985). For example, self-estimates of cognitive ability correlate .33 with more objective estimates (Freund and Kasten 2012). Research typically finds correlations of only .25 to .50 between an interactant’s self-assessment and others’ perceptions of that interactant (Achenbach, Dumenci, and Rescorla 2002; Achenbach et al. 2005; Blanchard et al. 2009; Carrell and Willmington 1996, 1998; Conway and Huffcutt 1997; Cummings et al. 2010; Fletcher and Kerr 2010; Harris and Schaubroeck 1988; Kenny 1994; Lanning et al. 2011; Leising et al. 2011; Mabe and West 1982; Ready et al. 2000; Renk and Phares 2004; Swami, Waters, and Furnham 2010; Viswesvaran, Ones, and Schmidt 1996). Careful analysis of such research indicates that much of the lack of correlation between distinct third-parties (e.g., between peers and supervisors) is due to measurement error (Viswesvaran, Schmidt, and Ones 2002) and features of the assessment design and context (Ostroff, Atwater, and Feinberg 2004) rather than sheer discrepancy of perspective. Furthermore, factors such as the objectivity and social desirability of the rating dimension systematically influence the correspondence among ratings (Heine and Renshaw 2002; Jawahar and Wilson 1997). Perhaps the most insidious prospect is that biased self-assessment is itself a marker of communicative incompetence. That is, those whose self-perceptions are inflated are the most likely to be communicatively incompetent (Dunning, Heath, and Suls 2004; Kruger and Dunning 1999; Schlösser et al. 2013). It is apparent that any single perspective is subject to biases and that multiple assessment loci are preferable.
Yet, having multiple perspectives does not resolve the question of which perspective(s) to trust as the most valid, or how to aggregate such perspectives.
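The self-other agreement and aggregation problems can be made concrete with a small sketch. All data below are invented for demonstration (they do not reproduce the meta-analytic estimates cited above), and the equal-weight aggregation of peer and trained-rater perspectives is only one of many contestable design choices:

```python
# Hypothetical illustration of self-other agreement in competence ratings.
# Ratings and rater labels are invented; real designs would draw on the
# multi-source literature cited in the text.

def pearson(x, y):
    """Plain Pearson correlation between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Ratings of ten interactants on a 1-5 competence scale, from three loci.
self_ratings  = [4, 5, 3, 4, 5, 2, 4, 3, 5, 4]
peer_ratings  = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3]
rater_ratings = [3, 4, 2, 4, 4, 3, 3, 3, 4, 3]

print(round(pearson(self_ratings, peer_ratings), 2))

# One naive aggregation: average the two external perspectives, weighting
# peers and trained raters equally -- itself an unresolved judgment call.
composite = [(p + r) / 2 for p, r in zip(peer_ratings, rater_ratings)]
```

Even this toy example shows why aggregation does not settle the validity question: the composite simply encodes a weighting decision that must itself be defended.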

3.3 When?

When competence is assessed has less to do with clock time and more to do with the state-trait, or episodic vs. dispositional, issue of assessing communicative competence. If competence in communicating is assumed to be grounded deeply in the interactional context itself, then there is little reason to assess it as a dispositional trait. In contrast, grounding competence assessments in a particular context, episode, or encounter may substantially limit the generalizability of the measurement. Certain high-stakes assessments, such as interviews, medical handoff exchanges, auditions, speed dating, and so forth, simply presume that the encounter is a sufficient indicator of a person’s competence. In most assessment contexts, however, there is an objective of extending the measure to future behavior of the person or persons. This time continuum from past to present to future will have substantive implications for measurement. The initial issue is simply the tense of an item:
– Past: “I have generally had difficulty initiating conversations with strangers.”
– Present (state, episodic): “I experienced difficulty initiating conversation with this stranger in this encounter.”
– Future: “I expect I will have difficulty initiating conversations with strangers in this type of situation.”
– Contextual: “I have difficulty initiating conversations with strangers in large social affairs, such as parties and receptions.”
– Trait: “I have difficulty initiating conversations with strangers.”

This is just the operational aspect of the time continuum. The additional question is whether there is a theoretical or empirical assumption that the past or present should be predictive of the future. Thus, many trait measures assume that the best predictor of future behavior is past behavior and that people’s self-perceptions, or their behavior in a given situation, are representative of their future behavior. Other aspects of the time continuum involve potentially important psychometric issues that receive relatively little attention in the communicative competence measurement literature (Spitzberg 1987). For example, as time goes by, do people’s memories of their own behavior reveal systematic biases favoring certain types of information over others? Do people tend to forget molecular behaviors by mentally coding them into higher-order mental constructs or dimensions? If so, placing time frames on people’s reports of their communication becomes an important methodological parameter. For example, should people’s self-reports of their communication competence be bounded to the last decade, the last year, the last month, or the last week?

3.4 Where?

The question of “where” concerns the role of context. That “communicative competence is contextual” is one of the most axiomatic assumptions in the literature and, ironically, one of the least directly addressed. The first problem is that the concept of context is generally neither formally defined nor directly assessed. Interest in contexts and their theoretical integration into implicit or explicit models of communication competence has certainly been expressed (e.g., Argyle, Furnham, and Graham 1981; Forgas 1983; Heise 1979; Spitzberg and Brunner 1991), but most measures presume the relevance of context in relatively non-systematic ways.

The most typical way in which context is incorporated into communicative competence measurement is by drilling down into a context and formulating the communicative competencies found or expected to be relevant to that context. Thus, there are dozens of measures of general health practitioner communicative competence (e.g., Blanch-Hartigan 2011; Boon and Stewart 1998; Duffy et al. 2004; Hobgood et al. 2002; Klakovich and Dela Cruz 2006; Makoul 2001; Nuovo, Bertakis, and Azari 2006; Schirmer et al. 2005) and measures of health practitioner communicative competence in empathy (e.g., Fields et al. 2011; Hojat et al. 2011), patient-centeredness (e.g., McCormack et al. 2011), active listening (e.g., Fassaert et al. 2007), respect (Beach et al. 2006), relational communication (e.g., Gallagher, Hartung, and Gregory 2001), oral/aural competence (Helitzer et al. 2012), and nonverbal competence (Gallagher et al. 2005). Context may also be operationalized as a function of the medium (e.g., Spitzberg 2006). These just illustrate the diversity in a single broad context – health care. Measures have proliferated in most such contexts. Whereas research often examines the degree of shared variance of measures within a given context, only rarely are measures compared across contexts. In some instances, this is sensible because of the particular competencies implied by the context (e.g., taking a patient’s history), but there is certainly substantial potential that many measures are merely reinventing wheels that do not need reinvention.

3.5 How?

The question of “how” refers to the actual operationalization decisions involved in converting a set of concepts into an actual measurement instrument. It addresses many of the other questions by applying answers to these questions to the actual measurement design. If it is a “report” instrument, the items must be written in a form appropriate to the person doing the reporting. If it is specific to a particular context, then the skills relevant to that context need to be translated into items or coding categories or other measures that validly reflect these skills. For example, consider the following items from a popular measure of communicative competence (Wiemann 1977):

People can go to S with their problems.

S generally knows how others feel.

S generally knows what type of behavior is appropriate in any given situation.

S interrupts others too much.

S is “rewarding” to talk to.

S is an effective conversationalist.

S is easy to talk to.

S is generally relaxed when conversing with a new acquaintance.

S is not afraid to speak with people in authority.

S likes to use his/her voice and body expressively.

S pays attention to the conversation.

S treats people as individuals.

S usually does not make unusual demands on his/her friends.

S won’t argue with someone just to prove he/she is right.

These items tend to be either summed or factor-analyzed. Yet, consider the various ways in which these items imply important yet unarticulated measurement assumptions. First, although the measure has been converted to a self-report locus, it is clearly written primarily as an other-rated locus measure. Yet, some items presume little knowledge beyond the observation of a given episode of interaction (e.g., “easy to talk to”, “pays attention to the conversation”), whereas others presume ample opportunity to observe and know the person being assessed (e.g., “people can go to S with their problems”, “generally knows how others feel”).

Second, the items vary considerably in their abstraction. Any behavior may contribute to being an “effective conversationalist” in any situation, but behaving in a “relaxed” manner specifically when conversing with a new acquaintance is a particular skill in a particular situation with a particular type of person. Similarly, any behavior or skill may contribute to being “rewarding” or “easy” to talk to, but avoiding arguments “just to prove” oneself right is a specific behavior (ignoring for the moment whether performing a skill should be defined by something that is not done as opposed to something that is done – not doing something is necessarily doing something else instead; see Spitzberg, Chapter 10).

Third, notice that evaluative subjective judgments are built into some items (e.g., “too much”, “easy”, “appropriate”, “effective”), whereas other items are relatively descriptive (e.g., “won’t argue”, “relaxed”, “does not make unusual demands”). Some items seem relatively descriptive (e.g., “speak with people in authority”, “use his/her voice and body expressively”) but interject unobservable mental attributions into such judgments (e.g., “not afraid to …,” “likes to use …”).

Fourth, the role of context is obscured by being diffused in a selective manner. For example, whereas some items are explicitly context-independent (e.g., “S is ‘rewarding’ to talk to”, “S is an effective conversationalist”, “S is easy to talk to”), others refer only to fairly particular contexts (e.g., hearing others’ “problems”, “conversing with a new acquaintance”, and speaking “with people in authority”).

Finally, some items seem only distantly related to actual communication behaviors or skills. Some items refer to knowledge (“S generally knows how others feel”) or outcomes (e.g., “S is ‘rewarding’ to talk to”, “S is an effective conversationalist”), and others are relatively vague regarding what behaviors might be involved (e.g., “S treats people as individuals”). Still other items say something about communication but do not specify what about the communication makes it competent (e.g., “rewarding” or “easy to talk to”). Thus, devising a valid measure of communicative competence requires clear and well-conceived answers to the questions of what, who, when, where, and how. Even assuming such questions have been thought through and satisfactorily addressed, the question remains whether the measure is valid.

4 Validity issues

Validity generally refers to whether a measure is measuring what it is intended to measure. The most obvious problem with determining the validity of measures of communicative competence is that the core concept itself is often poorly defined conceptually. Many measures derive from a “list” technique in which authors either borrow skills or constructs from other measures, which themselves have not been well validated, or generate their own list of intuitively relevant skills. Even when a broad list of skills is identified, seldom are the interrelationships of those skills conceptualized in advance. Thus, skills often reveal factor structures that are taken as evidence that those skills are constitutive of competence, when such structures are merely a statistically foregone result of having selected those skills in the first place.

Assuming that early development generates a broadly representative set of items or skills to measure, and that the design aspects reflect sound answers to the questions identified above, the question of validity still presents significant problems. The first is to identify what qualifies as a justifiable criterion of competence. In a particular context, competence may be evaluated by criteria very specific to that context. In compliance-gaining situations (e.g., persuasive health or political campaigns, sales situations, date requests, etc.), the relevant criteria may be tangible outcomes (e.g., engaging in healthier behavior, voting, purchase decisions, date acceptance, etc.). If more subjective criteria are considered justified, then measures for those constructs need to be identified or developed. For example, competence has been related generally to clarity, understanding, accuracy, attractiveness, credibility, trust, satisfaction, appropriateness, and effectiveness. Measures exist for each of these criteria, but just like measures of communicative competence, they vary in the care and validity of their design and development.

Assuming that criterion measures are available, the question still arises of who can best adjudicate the measures. For example, in determining whether a medical student has achieved adequate communicative competence in a standardized patient encounter, should that student’s own instructional faculty, independent practicing physicians, nurses, trained raters, or peers be considered the most appropriate judges? And if separate evaluators reach discrepant views of the student’s competence, is there a reasonable algorithm for reaching a summary judgment?

5 Reliability issues

In regard to reliability, it is not a foregone conclusion that competence measures need to achieve high levels of inter-item consistency. Streiner (2003), for example, emphasizes that there is a difference between scales and indexes. A scale is composed of items or indicators that are theoretically intercorrelated, whereas an index is a composite of items or indicators that may not themselves be related but accumulate to provide a valid indicator of something. The causal connection between items and the construct is reversed. For example, test measures of vocabulary, problem solving, and abstract reasoning may be understood as behavioral manifestations of an underlying or latent theoretical construct of critical thinking. These measured manifestations are psychometrically viewed as caused by the latent unobserved construct of intelligence or critical thinking ability. In this instance, vocabulary, problem solving, and abstract reasoning are expected to correlate with each other as behavioral manifestations of the same underlying construct and are therefore expected to display reasonable inter-item correlations (i.e., high internal consistency). In contrast, in an index, various indicators are viewed as independent causes of an unobserved outcome. Thus, for example, lack of eye contact, nonempathic assertiveness, and closed body orientation might independently cause one person to view a communicator as incompetent, whereas a lack of speaking turns, excessive eye contact, and facial inexpressiveness might cause another person to conclude a communicator is incompetent. In both instances, disparate behaviors are cues of incompetence, so an index might need to include such disparate indicators even if they are not consistently correlated with one another. Summing across these varied indicators may still provide a valid index of a person’s perceived inability to create a competent impression on others.
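Streiner’s scale/index contrast can be illustrated with a small sketch of Cronbach’s alpha, the standard internal-consistency statistic. All ratings here are invented: the “scale” items are constructed to covary (as reflective indicators of one construct should), while the “index” indicators are constructed not to:

```python
# Hypothetical illustration of the scale vs. index distinction via internal
# consistency. Data are invented for demonstration only.

def cronbach_alpha(items):
    """items: list of per-item score lists, one list per item,
    all covering the same respondents in the same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Three intercorrelated "scale" items (e.g., attentiveness indicators).
scale_items = [
    [5, 4, 2, 5, 3, 1, 4, 2],
    [5, 5, 2, 4, 3, 2, 4, 1],
    [4, 4, 1, 5, 2, 1, 5, 2],
]

# Three weakly related "index" indicators (e.g., disparate incompetence cues).
index_items = [
    [5, 1, 4, 2, 5, 1, 3, 2],
    [1, 5, 2, 5, 1, 4, 2, 3],
    [3, 3, 5, 1, 2, 5, 1, 4],
]

print(cronbach_alpha(scale_items))  # high for the reflective scale
print(cronbach_alpha(index_items))  # low (even negative) for the index
```

The point is not that the index is badly built, but that alpha is the wrong yardstick for it: a formative composite can be valid while showing poor inter-item consistency.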

The scale-index distinction matters for additional psychometric reasons beyond reliability. For example, factor analysis depends on the assumption that factors (unobserved latent variables) account for consistent intercorrelations among their constituent items. In regard to behavioral measures, however, it is not obvious that competencies would be reflected in particular composites of intercorrelated behavioral items. For example, on what skill or competence should eye contact load: empathy, turn-taking, nonverbal expressiveness, flirtation, humor, attentiveness, aggression, conflict management, or some other competence? Eye contact can be an important compositional predictor of any or all of these competencies, but so can the ability to ask questions. Yet, in general, there is relatively little reason to expect eye contact to correlate consistently with asking questions across all these competencies. Furthermore, if an item cross-loads, it is often excluded from measures for being noisy. If this reasoning is extended, therefore, factor analysis is not always a valid representation of underlying unobserved variables, nor a valid framework for deriving ideal measures of competence (Spitzberg, Brookshire, and Brunner 1990). There may be many instances in which behaviors and evaluations of behaviors are more validly viewed as indexes rather than scales.

6 Subtleties and controversies in measuring communicative competence

The basics of who, what, where, when, and how are relatively well established as assessment concerns. Yet issues such as whether competence is viewed as a state or trait, or who is in the best position to make judgments of competence of whom, are far from resolved. Among the more subtle and challenging remaining issues in assessing communication competence are the roles of context, ideology, optimality, and curvilinearity.

6.1 Contextuality

There may be an infinite variety of contexts in which communication competence could be assessed, but some are likely to present more salient features than others. Among the most salient features in communication evaluation is culture (Spitzberg 1989; Spitzberg and Brunner 1989; Spitzberg and Changnon 2009). Cultures vary across a wide variety of dimensions and may develop somewhat unique beliefs, values, norms, rules, expectations, and contextual patterns of behavior that are constitutive of communication competence. Eye contact may be universal, but the ways in which eye contact is expected to be used, and the weight such behavior carries on various dimensions of evaluation, may differ from culture to culture (Lustig and Spitzberg 1993). The development and validation of competence assessments ultimately need either to formulate culturally sensitive item content or to validate the cross-cultural psychometric and structural generality of item content.

6.2 Ideology

Most scholars of communication competence avoid issues of ideology, but such issues inevitably underlie assessment decisions. Ideology refers to the degree to which values either implicitly or explicitly inform the development of an assessment (Spitzberg 1993, 1994a). Gender is an ideological feature of assessments in regard to the gendered nature of item selection and proposed conceptual components. For example, adaptability, attentiveness, empathy, perspective-taking, person-centeredness, dialogic orientation, other orientation, listening, nonverbal sensitivity, and collaboration are all communal in approach, loading more highly on feminine values. In contrast, assertiveness, confidence, composure, control, influence, goal-setting, time management, structuration, and opinion expression all load more highly on instrumental dimensions, which are more closely aligned with masculine orientations to the world. This gendered composition can interact with cultural dimensions as well, as some cultures may prefer more feminine communication in certain contexts and more masculine communication in others (Spitzberg and Brunner 1989; Spitzberg 1993, 1994a).

6.3 Optimality

A fundamental issue facing educational institutions and professional societies is whether to view competence as a minimal or an optimal bar of performance. If appropriateness and effectiveness are crossed as dimensions, four possibilities arise: minimizing, sufficing, maximizing, and optimizing. Minimizing describes someone who is incompetent (i.e., inappropriate and ineffective). Sufficing communicators are viewed as having behaved appropriately but not effectively. This is akin to a normative perspective in which communicators have at least to meet the bar of behaving at some normal, appropriate level compared to others. Maximizing approaches operationalize competence by the outcome, even when achieving that effectiveness violated standards of appropriateness. Optimizing approaches seek a balance, and ideally a full achievement, of communication that is perceived as both appropriate and effective. Optimizing communication is particularly difficult to achieve in contexts that pit appropriateness and effectiveness against one another, such as transgressions and conflicts. For example, in a conflict encounter, an actor’s attempt to achieve a particular goal (i.e., to be effective) is perceived as blocked by a co-actor’s behavior, while the co-actor views the actor’s behavior as inappropriate. These four domains illustrate that using a singular criterion of competence in communication limits the possibilities of conceptualizing the full implications of measurement.
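The crossed appropriateness-by-effectiveness grid described above can be sketched as a minimal classification function (the function name is invented for illustration; the four labels follow the text):

```python
# A minimal sketch of the appropriateness-by-effectiveness grid: two crossed
# binary judgments yield the four competence types named in the text.

def competence_quadrant(appropriate: bool, effective: bool) -> str:
    if appropriate and effective:
        return "optimizing"   # both criteria fully met
    if appropriate:
        return "sufficing"    # appropriate but not effective
    if effective:
        return "maximizing"   # effective at the cost of appropriateness
    return "minimizing"       # neither appropriate nor effective

print(competence_quadrant(appropriate=True, effective=False))  # sufficing
```

In practice, of course, both judgments are continuous and perceiver-dependent rather than binary, which is precisely why a singular criterion is limiting.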

6.4 Curvilinearity

A final challenge in developing assessments is that, at the granular level of measuring communication, competence consists of the performance of behaviors that tend to be curvilinearly related to impressions of competence. In general, it is always possible, if not probable, that there can be “too much of a good thing” (Grant and Schwartz 2011). Textbooks and assessments that simply assume that more eye contact, questions, listening, active feedback, and so forth, are more competent are clearly incomplete. Research indicates that many behaviors associated with communication competence, such as eye contact, speaking time, questions, nonverbal immediacy, and interpersonal closeness, are curvilinearly related to impressions of competence (Spitzberg 1993, 1994a, 2013). One assessment solution to such curvilinearity is to separate the behavioral content of items from the evaluation scales applied to those behaviors. Similar to social skills validation paradigms, ratings of the extent to which a person engaged in objective behaviors can be correlated with evaluations of that person’s appropriateness and effectiveness, competence, or quality of performance. By separating the evaluation from the behavior, the role of curvilinearity can be explored explicitly.
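The inverted-U pattern can be demonstrated with a small sketch on invented data: a linear correlation almost entirely misses a relationship that a centered-and-squared term recovers (a real analysis would use polynomial regression; the eye-contact proportions and ratings below are fabricated for illustration):

```python
# Hypothetical illustration of curvilinearity: competence impressions rise
# and then fall as a behavior (here, eye contact) increases. Data invented.

def pearson(x, y):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Proportion of time making eye contact (0-1) and rated competence (1-7).
eye_contact = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
competence  = [1.5, 2.8, 4.0, 5.1, 5.9, 6.3, 6.0, 5.2, 4.1, 2.9, 1.6]

# A linear test nearly misses the relationship ...
linear_r = pearson(eye_contact, competence)

# ... while a centered, squared ("inverted-U") term captures it.
mean_x = sum(eye_contact) / len(eye_contact)
curvature_term = [-(x - mean_x) ** 2 for x in eye_contact]
curvilinear_r = pearson(curvature_term, competence)

print(linear_r, curvilinear_r)
```

This is the statistical rationale for separating behavior ratings from evaluations: only with both recorded separately can the curvilinear term be tested at all.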

7 Measure for measure

Assessing the state of communication competence assessment is a daunting task. Despite hundreds and hundreds of measures of communication competence (Spitzberg 2003; Spitzberg and Cupach 1989, 2011), numerous limitations and uncertain validity plague most assessment approaches. There are no simple solutions to the valid assessment of communication competence. There are no complex solutions, yet. As the value of communication is increasingly recognized in certain high-impact and high-profile contexts (e.g., business, diplomatic, political, health care, etc.), intensive efforts have begun to be pursued with appropriate resources and a programmatic approach. Such intensive efforts will be required for valid assessment to result.

8 Future measures for measure

The lore and legacy of social scientific measurement has long understood the principle of “garbage in, garbage out” (GIGO). What is often overlooked in this aphorism is that the data that constitute the “garbage in” can result from impoverished grounding principles rather than from the relative poverty of the data sources themselves. Many of the problems identified herein result from scholars’ either a) ignorance of the lessons and research of the past, both within and across disciplines (i.e., it is not possible to know what needs to be done until it is known what has been done), or b) inattention to basic a priori conceptual decisions, such as “why this particular skill, and how does this particular skill relate to the other skills that will comprise the measure of interpersonal skill or communication competence?” Addressing these two issues – conducting comprehensive background research on the domain and deciding on the nature of the skills to be assessed – would go far in improving the state of assessment.

A second important improvement in assessment will arise when scholars begin to move beyond a constant process of confusing ability with inference and macro-level with micro-level skills, and of muddying the literature by redundantly proliferating labels for skills. The term “empathy” is a good example of a skill that has largely lost its conceptual value because of the multitude of ways in which it has been defined, conceptualized, and operationalized. It is not clear whether it is cognitive, emotional, behavioral, or some combination, nor what its distinctiveness is vis-à-vis listening, attentiveness, other-orientation, concern, interest, altercentrism, regard, perspective-taking, or decentering. If these issues are not settled prior to measurement development, it is unlikely that GIGO will be avoided. Advancements along these lines will depend on professional associations appointing task forces to facilitate consensual statements on such core competencies, and on solid, comprehensive approaches and research programs to develop and validate corresponding assessments.

References

Achenbach, Thomas M., Levent Dumenci and Leslie A. Rescorla. 2002. Ten-year comparisons of problems and competencies for national samples of youth: Self, parent and teacher reports. Journal of Emotional and Behavioral Disorders 10. 194.

Achenbach, Thomas M., Rebecca A. Krukowski, Levent Dumenci and Masha Y. Ivanova. 2005. Assessment of adult psychopathology: Meta-analyses and implications of cross-informant correlations. Psychological Bulletin 131. 361–382.

Argyle, Michael, Adrian Furnham and Jean Ann Graham. 1981. Social Situations. London: Cambridge University Press.

Aristotle. [2007]. On Rhetoric: A Theory of Civic Discourse (George A. Kennedy, transl., 2nd ed.). New York: Oxford University Press.

Bachmann, Cadja, Henry Abramovitch, Carmen Gabriela Barbu, Afonso Miguel Cavaco, Rosario Dago Elorz, Rainer Haak, Elezabete Loureiro, Anna Ratajska, Jonathan Silverman, Sandra Winterburn and Marcy Rosenbaum. 2013. A European consensus on learning objectives for a core communication curriculum in health care professions. Patient Education and Counseling 93. 18–26.

Backlund, Phil and Gay Wakefield (eds.). 2010. A Communication Assessment Primer. Washington DC: American Psychological Association.

Beach, Mary Catherine, Debra L. Roter, Nae-Yuh Wang, Patrick S. Duggan and Lisa A. Cooper. 2006. Are physicians’ attitudes of respect accurately perceived by patients and associated with more positive communication behaviors? Patient Education and Counseling 62. 347–354.

Blanch-Hartigan, Danielle. 2011. Medical students’ self-assessment of performance: Results from three meta-analyses. Patient Education and Counseling 84. 3–9.

Blanchard, Victoria L., Alan J. Hawkins, Scott A. Baldwin and Elizabeth B. Fawcett. 2009. Investigating the effects of marriage and relationship education on couples’ communication skills: a meta-analytic study. Journal of Family Psychology 23. 203–214.

Boon, Heather and Moira Stewart. 1998. Patient–physician communication assessment instruments: 1986 to 1996 in review. Patient Education and Counseling 35. 161–176.

Breen, Paul, Thomas Donlon and Urban Whitaker. 1977. Teaching and Assessing Interpersonal Competence – A CAEL Handbook. Columbia, NJ: CAEL.

Brown, Penelope and Stephen C. Levinson. 1987. Politeness: Some Universals of Language Usage. Cambridge, UK: Cambridge University Press.

Byram, Michael. 1997. Teaching and Assessing Intercultural Communicative Competence. Clevedon, UK: Multilingual Matters.

Carpenter, Tara D. and Michelle M. Wisecarver. 2004. Identifying and Validating a Model of Interpersonal Performance Dimensions (Technical Report 1144). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Carpenter, Tara D., Michelle M. Wisecarver, Edwin A. Deagle III and Kip G. Mendini. 2005. Special Forces Interpersonal Performance Assessment System (Technical Report 1833). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.

Carrell, Lori J. and S. Clay Willmington. 1996. A comparison of self-report and performance data in assessing speaking and listening competence. Communication Reports 9. 185–191.

Carrell, Lori J. and S. Clay Willmington. 1998. The relationship between self-report measures of communication apprehension and trained observers’ ratings of communication competence. Communication Reports 11. 87–95.

Chomsky, Noam. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.

Christ, William G. (ed.). 1994. Assessing Communication Education: A Handbook for Media, Speech and Theatre Educators. Hillsdale, NJ: Lawrence Erlbaum Associates.

Cone, John D. 1978. The Behavioral Assessment Grid (BAG): A conceptual framework and a taxonomy. Behavior Therapy 9. 882–888.

Conway, James M. and Allen I. Huffcutt. 1997. Psychometric properties of multisource performance ratings: A meta-analysis of subordinate, supervisor, peer and self-ratings. Human Performance 10. 331–360.

Cummings, Jordan A., Adele M. Hayes, John-Phillipe Laurenceau and Lawrence H. Cohen. 2010. Conflict management mediates the relationship between depressive symptoms and daily negative events: Interpersonal competence and daily stress generation. International Journal of Cognitive Therapy 3. 318–331.

Curhan, Jared R. and Alex Pentland. 2007. Thin slices of negotiation: Predicting outcomes from conversational dynamics within the first 5 minutes. Journal of Applied Psychology 92(3). 802–811.

Daly, John A. 1994. Assessing speaking and listening: Preliminary considerations for a national assessment. In: Addison Greenwood (ed.), The National Assessment of College Student Learning: Identification of the Skills to be Taught, Learned and Assessed (Report of the Proceedings of the Second Study Design Workshop, November 1992), 113–180. Washington, DC: U.S. Department of Education.

Doll, Edgar A. 1935. The measurement of social competence. American Association on Mental Deficiency 40. 103–126.

Duffy, F. Daniel, Geoffrey H. Gordon, Gerald Whelan, Kathy Cole-Kelly and Richard Frankel. 2004. Assessing competence in communication and interpersonal skills: The Kalamazoo II report. Academic Medicine 79. 495–507.

Dunning, David, Chip Heath and Jerry M. Suls. 2004. Flawed self-assessment: Implications for health, education and the workplace. Psychological Science in the Public Interest 5. 69–106.

Eagle, Nathan and Alex Sandy Pentland. 2009. Eigenbehaviors: identifying structure in routine. Behavioral Ecology and Sociobiology 63. 1057–1066.

Fassaert, Thijs, Sandra van Dulmen, François Schellevis and Jozien Bensing. 2007. Active listening in medical consultations: Development of the Active Listening Observation Scale (ALOS-global). Patient Education and Counseling 68. 258–264.

Fields, Sylvia K., Pamela Mahan, Paula Tillman, Jeffrey Harris, Kaye Maxwell and Mohammadreza Hojat. 2011. Measuring empathy in healthcare profession students using the Jefferson Scale of physician empathy: Health provider – student version. Journal for Interprofessional Care 25. 287–293.

Fletcher, Garth J. O. and Patrick S. G. Kerr. 2010. Through the eyes of love: Reality and illusion in intimate relationships. Psychological Bulletin 136. 627–658.

Forgas, Joseph P. 1983. Social skills and the perception of interaction episodes. British Journal of Clinical Psychology 22. 195–207.

Freund, Phillip A. and Nadine Kasten. 2012. How smart do you think you are? A meta-analysis on the validity of self-estimates of cognitive ability. Psychological Bulletin 138. 296–321.

Gallagher, Timothy J., Paul J. Hartung and Stanford W. Gregory Jr. 2001. Assessment of a measure of relational communication for doctor–patient interactions. Patient Education and Counseling 45. 211–218.

Gallagher, Timothy J., Paul J. Hartung, Holly Gerzina, Stanford W. Gregory Jr. and Dave Merolla. 2005. Further analysis of a doctor–patient nonverbal communication instrument. Patient Education and Counseling 57. 262–271.

Gerdes, Karen E., Cynthia A. Lietz and Elizabeth A. Segal. 2011. Measuring empathy in the 21st century: Development of an empathy index rooted in social cognitive neuroscience and social justice. Social Work Research 35. 83–93.

Gilliland, A. R. and Ruth S. Burke. 1926. A measurement of sociability. Journal of Applied Psychology 10. 315–326.

Grant, Adam M. and Barry Schwartz. 2011. Too much of a good thing: The challenge and opportunity of the inverted U. Perspectives on Psychological Science 6. 61–76.

Hargie, Owen D. W. 1997a. Communication as skilled performance. In: Owen D. W. Hargie (ed.), The Handbook of Communication Skills (2nd ed.), 7–28. New York: Routledge.

Harris, Michael M. and John Schaubroeck. 1988. A meta-analysis of self-supervisor, self-peer and peer-supervisor ratings. Personnel Psychology 41. 43–62.

Heine, Steven J. and Kristen Renshaw. 2002. Interjudge agreement, self-enhancement and liking: cross-cultural divergences. Personality and Social Psychology Bulletin 28. 578–587.

Heise, David R. 1979. Understanding Events: Affect and the Construction of Social Action. New York: Cambridge University Press.

Helitzer, Deborah, Christine Hollis, Margaret Sanders and Suzanne Roybal. 2012. Addressing the “other” health literacy competencies – knowledge, dispositions and oral/aural communication: Development of TALKDOC, an intervention assessment tool. Journal of Health Communication 17. 160–175.

Hobgood, Cherri D., Ralph J. Riviello, Nicholas Jouriles and Glen Hamilton. 2002. Assessment of communication and interpersonal skills competencies. Academic Emergency Medicine 9. 1257–1269.

Hojat, Mohammadreza, John Spandorfer, Daniel Z. Louis and Joseph S. Gonnella. 2011. Empathic and sympathetic orientations toward patient care: Conceptualization, measurement and psychometrics. Academic Medicine 86. 989–995.

Hunt, Thelma. 1928. The measurement of social intelligence. Journal of Applied Psychology 12. 317–334.

Jawahar, I. M. and Charles R. Williams. 1997. Where all the children are above average: The performance appraisal purpose effect. Personnel Psychology 50. 905–925.

Jeffries, Fiona W. and Paul Davis. 2013. What is the role of eye movements in eye movement desensitization and reprocessing (EMDR) for post-traumatic stress disorder (PTSD)? A review. Behavioural and Cognitive Psychotherapy 41. 290–300.

Kenny, David A. 1994. Interpersonal Perception: A Social Relations Analysis. New York: Guilford.

Kiessling, Claudia, Anja Dieterich, Götz Fabry, Henrike Hölzer, Wolf Langewitz, Isabel Mühlinghaus, Susann Pruskil, Simone Scheffer and Sebastian Schubert. 2010. Communication and social competencies in medical education in German-speaking countries: The Basel Consensus Statement. Results of a Delphi survey. Patient Education and Counseling 81. 259–266.

Klakovich, Marilyn D. and Felicitas A. Dela Cruz. 2006. Validating the interpersonal communication assessment scale. Journal of Professional Nursing 22. 60–67.

Klein, Cameron R. 2009. What do we know about interpersonal skills? A meta-analytic examination of antecedents, outcomes and the efficacy of training. Ph.D. dissertation, University of Central Florida, Orlando, FL.

Klein, Cameron R., Renée E. DeRouin and Eduardo Salas. 2006. Uncovering workplace interpersonal skills: A review, framework and research agenda. In: Gerald P. Hodgkinson and J. Kevin Ford (eds.), International Review of Industrial and Organizational Psychology, vol. 21. 79–126. New York: John Wiley and Sons.

Kruger, Justin and David Dunning. 1999. Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77. 1121–1134.

Lanning, Sharon K., Tegwyn H. Brickhouse, John C. Gunsolley, Sonya L. Ranson and Rita M. Willett. 2011. Communication skills instruction: An analysis of self, peer-group, student instructors and faculty assessment. Patient Education and Counseling 83. 145–151.

Larson, Carl E., Phil Backlund, Mark Redmond and Alton Barbour. 1978. Assessing Functional Communication. Falls Church, VA: Speech Communication Association.

Leising, Daniel, Sabrina Krause, Doreen Köhler, Kai Hinsen and Allan Clifton. 2011. Assessing interpersonal functioning: views from within and without. Journal of Research in Personality 45. 631–641.

LePine, Jefferey A., Ronald F. Piccolo, Christine L. Jackson, John E. Mathieu and Jessica R. Saul. 2008. A meta-analysis of teamwork processes: Tests of a multidimensional model and relationships with team effectiveness criteria. Personnel Psychology 61. 273–307.

Levenson, Robert W. and John M. Gottman. 1978. Toward the assessment of social competence. Journal of Consulting and Clinical Psychology 46. 453–462.

Lustig, Myron W. and Brian H. Spitzberg. 1993. Methodological issues in the study of intercultural communication competence. In: Richard L. Wiseman and Jolene Koester (eds.), Intercultural Communication Competence, 153–167. Newbury Park, CA: Sage.

Mabe, Paul A., III. and Stephen G. West. 1982. Validity of self-evaluation of ability: A review and meta-analysis. Journal of Applied Psychology 67. 280–296.

Makoul, Gregory and Theo Schofield. 1999. Communication teaching and assessment in medical education: An international consensus statement. Patient Education and Counseling 37. 191–195.

Makoul, Gregory. 2001. The SEGUE framework for teaching and assessing communication skills. Patient Education and Counseling 45. 23–34.

McCormack, Lauren A., Katherine Treiman, Douglas Rupert, Pamela Williams-Piehota, Eric Nadler, Neeraj K. Arora, William Lawrence and Richard L. Street Jr. 2011. Measuring patient-centered communication in cancer care: A literature review and the development of a systematic approach. Social Science and Medicine 72. 1085–1095.

Morreale, Sherwyn P. and Phil M. Backlund. 2007. Large Scale Assessment in Oral Communication P-12 and Higher Education (3rd ed.). Washington, DC: National Communication Association.

Morreale, Sherwyn, Megan Brooks, Roy Berko and Carolyn Cooke. 1994. Assessing College Student Competency in Speech Communication (1994 SCA Summer Conference Proceedings and Prepared Remarks). Annandale, VA: Speech Communication Association.

Nangle, Douglas W., Rachel L. Grover, Lauren J. Holleb, Michael Cassano and Jessica Fales. 2010. Defining competence and identifying target skills. In: Douglas W. Nangle, David J. Hansen, Cynthia A. Erdley and Peter J. Norton (eds.), Practitioner’s Guide to Empirically Based Measures of Social Skills, 3–19. New York: Springer.

Nuovo, Jim, Klea D. Bertakis and Rahman Azari. 2006. Assessing resident’s knowledge and communication skills using four different evaluation tools. Medical Education 40. 630–636.

Ostroff, Cheri, Leanne E. Atwater and Barbara Feinberg. 2004. Understanding self-other agreement: A look at rater and rate characteristics, context and outcomes. Personnel Psychology 57. 333–375.

Pentland, Alex, Tanzeem Choudhury, Nathan Eagle and Push Singh. 2005. Human dynamics: Computation for organizations. Pattern Recognition Letters 26. 503–511.

Pentland, Alex “Sandy”. 2012. The new science of building great teams. Harvard Business Review 90(4). 61–70.

Ready, Rebecca E., Lee Anna Clark, David Watson and Kelley Westerhouse. 2000. Self- and peer-reported personality: Agreement, trait ratability and the “self-based heuristic”. Journal of Research in Personality 34. 208–224.

Renk, Kimberly and Vicky Phares. 2004. Cross-informant ratings of social competence in children and adolescents. Clinical Psychology Review 24. 239–254.

Rubin, Don L. and Nancy A. Mead. 1984. Large Scale Assessment of Oral Communication Skills: Kindergarten Through Grade 12. Annandale, VA: Speech Communication Association/ERIC.

Rubin, Rebecca B., Philip Palmgreen and Howard E. Sypher. 2004. Communication Research Measures: A Sourcebook. Mahwah, NJ: Lawrence Erlbaum Associates.

Rubin, Rebecca B., Alan M. Rubin, Elizabeth E. Graham, Elizabeth M. Perse and David R. Seibold. 2009. Communication Research Measures II: A Sourcebook. New York, NY: Routledge.

Schirmer, Julie M., Larry Mauksch, Forrest Lang, M. Kim Marvel, Kathy Zoppi, Ronald M. Epstein, Doug Brock and Michael Pryzbylski. 2005. Assessing communication competence: A review of current tools. Family Medicine 37. 184–192.

Schlösser, Thomas, David Dunning, K. L. Johnson and Justin Kruger. 2013. How unaware are the unskilled? Empirical tests of the “signal extraction” counterexplanation for the Dunning-Kruger effect in self-evaluation of performance. Journal of Economic Psychology 39. 85–100.

Shakespeare, William. Circa 1603–1604 [1993]. The Yale Shakespeare: The Complete Works. Edited by Wilbur L. Cross and Tucker Brooke. New York: Barnes and Noble.

Spitzberg, Brian H. 1987. Issues in the study of communicative competence. In: Brenda Dervin and Melvin J. Voigt (eds.), Progress in Communication Sciences, Vol. 8. 1–46. Norwood, NJ: Ablex.

Spitzberg, Brian H. 1988. Communication competence: Measures of perceived effectiveness. In: Charles H. Tardy (ed.), A Handbook for the Study of Human Communication: Methods and Instruments for Observing, Measuring and Assessing Communication Processes, 67–106. Norwood, NJ: Ablex.

Spitzberg, Brian H. 1989. Issues in the development of a theory of interpersonal competence in the intercultural context. International Journal of Intercultural Relations 13. 241–268.

Spitzberg, Brian H. 1993. The dialectics of (in)competence. Journal of Social and Personal Relationships 10. 137–158.

Spitzberg, Brian H. 1994. Ideological issues in competence assessment. In: Sherwyn Morreale, Megan Brooks, Roy Berko and Carolyn Cooke (eds.), Assessing College Student Competency in Speech Communication (1994 SCA Summer Conference Proceedings), 129–148. Annandale, VA: Speech Communication Association.

Spitzberg, Brian H. 2000. What is good communication? Journal of the Association for Communication Administration 29. 103–119.

Spitzberg, Brian H. 2003. Methods of skill assessment. In: John O. Greene and Brant R. Burleson (eds.), Handbook of Communication and Social Interaction Skills, 93–134. Mahwah, NJ: Erlbaum.

Spitzberg, Brian H. 2006. Toward a theory of computer-mediated communication competence. Journal of Computer-Mediated Communication 11. 629–666. http://jcmc.indiana.edu/vol11/issue2/spitzberg.htm

Spitzberg, Brian H. 2007. CSRS: The Conversational Skills Rating Scale – An Instructional Assessment of Interpersonal Competence (NCA Diagnostic Series, 2nd ed.). Annandale, VA: National Communication Association.

Spitzberg, Brian H. 2011. The Interactive Media Package for Assessment of Communication and Critical Thinking (IMPACCT©): Testing a programmatic online communication competence assessment system. Communication Education 60. 145–173.

Spitzberg, Brian H. 2013. (Re)Introducing communication competence to the health professions (Special issue: Interdisciplinary Perspectives on Medical Error). Journal of Public Health Research 2. 126–135.

Spitzberg, Brian H., Robert G. Brookshire and Claire C. Brunner. 1990. The factorial domain of interpersonal skills. Social Behavior and Personality 18. 137–150.

Spitzberg, Brian H. and Claire C. Brunner. 1989. Sex, instrumentality, expressiveness and interpersonal communication competence. In: Cynthia M. Lont and Sheryl A. Friedley (eds.), Beyond Boundaries: Sex and Gender Diversity in Communication, 121–138. Fairfax, VA: George Mason University Press.

Spitzberg, Brian H. and Claire C. Brunner. 1991. Toward a theoretical integration of context and competence inference research. Western Journal of Speech Communication 56. 28–46.

Spitzberg, Brian H. and Gabrielle Changnon. 2009. Conceptualizing intercultural communication competence. In: Darla K. Deardorff (ed.), The SAGE Handbook of Intercultural Competence, 2–52. Thousand Oaks, CA: Sage.

Spitzberg, Brian H. and William R. Cupach. 1984. Interpersonal Communication Competence. Beverly Hills, CA: Sage.

Spitzberg, Brian H. and William R. Cupach. 1985. Conversational skill and locus of perception. Journal of Psychopathology and Behavioral Assessment 7. 207–220.

Spitzberg, Brian H. and William R. Cupach. 1989. Handbook of Interpersonal Competence Research. New York: Springer-Verlag.

Spitzberg, Brian H. and William R. Cupach. 2002. Interpersonal skills. In: Mark L. Knapp and John Daly (eds.), Handbook of Interpersonal Communication (3rd ed.), 564–611. Newbury Park, CA: Sage.

Spitzberg, Brian H. and William R. Cupach. 2011. Interpersonal skills. In: Mark L. Knapp and John A. Daly (eds.), Handbook of Interpersonal Communication (4th ed.), 481–524. Newbury Park, CA: Sage.

Spitzberg, Brian H. and H. Thomas Hurt. 1987. The measurement of interpersonal skills in instructional contexts. Communication Education 36. 28–45.

Street, Richard L., Jr. and Hanneke C. J. M. De Haes. 2013. Designing a curriculum for communication skills training from a theory and evidence-based perspective. Patient Education and Counseling 93. 27–33.

Streiner, David L. 2003. Being inconsistent about consistency: When coefficient alpha does and doesn’t matter. Journal of Personality Assessment 80. 217–222.

Swami, Viren, Lauren Waters and Adrian Furnham. 2010. Perceptions and meta-perceptions of self and partner physical attractiveness. Personality and Individual Differences 49. 811–814.

Thorndike, Edward. 1920. Intelligence and its uses. Harper’s Magazine 140. 227–235.

Viswesvaran, Chockalingam, Deniz S. Ones and Frank L. Schmidt. 1996. Comparative analysis of the reliability of job performance ratings. Journal of Applied Psychology 81. 557–574.

Viswesvaran, Chockalingam, Frank L. Schmidt and Deniz S. Ones. 2002. The moderating influence of job performance dimensions on convergence of supervisory and peer ratings of job performance: Unconfounding construct-level convergence and rating difficulty. Journal of Applied Psychology 87. 345–354.

von Fragstein, Martin, Jonathan Silverman, Annie Cushing, Sally Quilligan, Helen Salisbury and Connie Wiskin. 2008. UK consensus statement on the content of communication curricula in undergraduate medical education. Medical Education 42. 1100–1107.

Watzlawick, Paul, Janet Beavin Bavelas and Don D. Jackson. 1967. Pragmatics of Human Communication. New York: W. W. Norton.

Wiemann, John M. 1977. Explication and test of a model of communicative competence. Human Communication Research 3. 195–213.

Wisecarver, Michelle M., Tara D. Carpenter and Robert N. Kilcullen. 2007. Capturing interpersonal performance in a latent performance model. Military Psychology 19. 83–101.

Wrench, Jason S., Doreen M. S. Jowi and Alan K. Goodboy. 2010. NCA Directory of Communication Related Mental Measures: A Comprehensive Index of Research Scales, Questionnaires, Indices, Measures and Instruments. Washington, DC: National Communication Association.