Phil Backlund

28 Assessment of College-Level Communication Programs

Abstract: When done well, assessment can be a powerful force for positive change in the educational process. This chapter details the components of effective assessment of communication programs. Beginning with the background of assessment and its role and purpose, the chapter describes the place of assessment in student learning and its goal of improving the educational product delivered to students. Assessment is also at times confused with similar concepts, so the chapter compares and contrasts accreditation, accountability, and program review with assessment. The chapter then covers the tools and essential steps in developing an effective assessment. As communication has unique aspects not found in other disciplines, the chapter addresses these unique dimensions and gives advice on factoring them into the assessment process. Each of the typical assessment tools is described along with the issues of validity, reliability, and bias. Finally, the chapter describes the challenges to effective assessment including faculty resistance. The information in this chapter can help departments develop an assessment program that improves the quality of instruction they deliver to students.

Keywords: assessment, accountability, accreditation, program review, faculty resistance, administration, student outcomes

The assessment movement was born in the middle 1970s and is still a key component of education systems from preschool to the university. Though this fact may dismay some faculty members who resist or merely tolerate assessment efforts, the assessment movement has proved to be a positive force within higher education. The movement grew out of the desire to develop data-based answers to such questions as “Are our students learning what we want them to learn?” and “Can we provide evidence to that effect?” These questions are basic to the teaching-learning process, but prior to the 1970s they had not often been addressed in any systematic manner. In the decades that followed, various governing bodies and accreditation organizations began calling for answers to these questions. For example, in 1990, the National Governors’ Association asserted that performing competent assessment required defining what students need to know, determining whether they know it, and utilizing accurate, comparable, appropriate, and constructive measurements (Chesebro, 1990). Gradually, faculty members in higher education became aware of the significance of assessment and its relationship to accreditation, state legislatures, and other sources of accountability. Campuses received pressure from several directions, challenging (and requiring) them to look carefully at student learning, articulate measurable learning objectives, set high standards for accomplishment, and assess whether students have met those standards (Palomba & Banta, 1999). These pressures have not abated in the intervening years (Fain, 2015).

In this context, the assessment of communication programs and courses was developed and has matured over the past four decades. The National Communication Association (NCA) has actively developed a national assessment agenda since 1970. The Speech Communication Association Task Force on Assessment and Testing was formed in 1978 and charged with gathering, analyzing, and disseminating information about the testing of speech communication skills (Backlund & Morreale, 1994). This task force has evolved into the Communication Assessment Division. The Division’s work includes activities such as defining communication skills and competency, publishing summaries of assessment procedures and instruments, publishing standards for effective oral communication programs, and developing guidelines for program review (Backlund & Morreale, 1994). During this time, assessment in communication developed two strands: assessment of student learning, and assessment of the overall communication program. The former focuses on the degree to which students in communication programs of instruction learn what faculty want them to learn; the latter requires departmental members to examine curriculum, student educational experiences, and the amount of student learning that occurred (Morreale, Brooks, Berko, & Cooke, 1994).

Given this background, this chapter elaborates on the main idea that assessment is and should be an integral component of an effective program of communication education. This chapter will cover the role and purpose of assessment in the educational process, including developing an effective communication assessment program, selecting appropriate tools for assessment, overcoming challenges to effective assessment, and finally, applying assessment results to improve the educational experience of students.

Role and Purpose of Assessment

The Role of Assessment in Education

Given the fact that assessment has been a part of higher education for over 40 years, one can reasonably ask, “What are current assessment trends?” and more importantly, “Has assessment produced positive results?” In response to the first question, Morreale, Backlund, Hay, and Moore (2011), in their article “Assessment of Oral Communication: A Major Review of the Historical Development and Trends in the Movement from 1975 to 2009,” charted the arc of the assessment movement over that time period in United States higher education and in the Communication discipline. This review identified two important trends: First, although assessment publications and convention papers peaked in the 1990s, research and writing about assessment have continued to draw steady interest within our discipline. The authors suggest this is because assessment has shown its value and because governing bodies in higher education (legislatures, accrediting commissions, institution administration, etc.) still want evidence that educational efforts impact student learning. Second, the majority of the articles reviewed describe assessment practices, but few present evidence of its effects. This is partly due to the inability of institutions to run true experiments on assessment and the difficulties involved in developing comparable data between institutions. Institutions are certainly compared at times by legislatures, but the majority of these comparisons are not highly defensible methodologically. Nevertheless, these issues have not discouraged widespread interest in assessment.

More recent sources such as Griffin and Care’s Assessment and Teaching of 21st Century Skills (2015), Astin and Antonio’s Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education (2012), and Cox, Imrie, and Miller’s Student Assessment in Higher Education: A Handbook for Assessing Performance (2013) are noteworthy for two reasons. First, the advice given to higher education practitioners has not appreciably changed in the past decades. Good practices in assessment are still good practices. Second, Astin and Antonio ask the question, “What has all this effort produced?” They report that positive results related to student learning and faculty satisfaction can be shown, but only where faculty and administrators have taken assessment seriously and treated it systematically. However, the evidence is more anecdotal than systematic, an observation that echoes Morreale et al.’s (2011) findings. From this brief review, it can be concluded that assessment has lingering methodological issues but is no passing fad, as it does make good academic sense.

Over the years, a primary force behind the assessment movement has been the desire for more accountability in education. Legislatures, accrediting bodies, state boards of education, and internal review processes all want to know if the education our students are receiving is having the desired effect. While the form of the questions and requirements posed by these groups may vary, they seem to come down to six fundamental questions (Morreale & Backlund, 1998) that help position assessment in the overall educational endeavor:

  1. Who are you, and why do you exist (Mission)?
  2. What do you want to accomplish (Goals and Objectives)?
  3. What procedures will you use to determine if the goals/objectives have been met (Assessment)?
  4. What are the results of your assessment (Analysis)?
  5. What changes will you make to your goals/objectives/outcomes/processes based on these results (Application of Results)?
  6. What evidence do you have that this is a continuous cycle (Continuous Improvement)?

These questions form the basis for virtually all accountability and assessment efforts and provide a useful starting point in developing an effective assessment program. As can be seen from this list, assessment is an integral part of an overall process of educational program definition, development, execution, and review. These processes make good academic sense. Effectively answering these questions provides advantages for students, faculty, and departments for a number of reasons.

First, answering the questions results in a better education for students. When faculty have a clear idea of their institution’s mission (and you may be surprised how many different conceptions of “why we exist” are present on the average college campus), they are better able to act in concert with one another to meet the mission of the school. When schools, departments, and faculty can clearly describe their educational outcomes, everyone has a much better sense of what students are to learn. This leads to more effectively designed educational programs and strategies.

Next, answering the questions results in a better informed faculty. One clear but unintended effect of the assessment movement has been increased conversation and coordination among faculty colleagues. In the past (and this is still true for some colleagues), faculty members taught their classes with little or no regular conversation with faculty in the same department. The assessment movement has spawned a great number of conversations between previously separate individuals. Some of these conversations were uncomfortable as differences between faculty members were uncovered, but at least the differences were brought out in the open for discussion.

Last, answering the questions results in a better institution. All of us in education are involved in a process that is explainable and defensible. Educational institutions that have answered these questions are far better prepared to meet the demands of accrediting bodies, legislatures, and other stakeholder groups.

To summarize, it can be said that assessment reveals how we know what we know. Assessment provides the ability to develop data-based evidence that will show our students, ourselves, and our stakeholders that our educational programs are helping students learn what we want them to learn; and if we conscientiously apply the results that assessment provides, we can continually monitor and improve that endeavor.

Common Terminology Related to Educational Assessment

Assessment is not the only word used in regard to educational analysis and improvement. Accreditation, accountability, and program review are also processes by which an institution, department, or course of study may be analyzed for effectiveness. For example, a primary issue regarding the purpose of assessment rests in the question, “Are we testing the student, or are we testing the institution (or department/unit)?” These are two clearly different purposes for assessment and constitute a fundamental question for any state and/or institution that is contemplating an assessment program as part of its planning.

Accreditation

Regional accreditation is the bedrock of American higher education. No higher education institution can have access to any federal funds, including financial aid, without being regionally accredited. Six regional accrediting commissions that operate under the authority of the U.S. Department of Education govern the accreditation process. Each accrediting commission has its own accreditation standards, yet all include forms of the six questions posed earlier in this chapter. The primary distinction between accreditation and assessment is one of focus. Assessment is directly related to the evaluation of the learning process. Accreditation evaluates the entire institution, including the assessment of educational programs (undergraduate and graduate), but also its mission and goals, finances, physical facilities, administration, library, and all other aspects of the institution. Assessment is usually seen as a subset of the overall accreditation process. Accreditation is an externally driven process – governed by a regional accrediting commission. Assessment is usually an internally developed and managed process. Accreditation standards require that assessment be done on a campus but do not dictate how the institution develops and manages that process.

Accountability

Although the terms assessment and accountability are often used interchangeably, they have important differences. In general, when educators assess their own performance, it is assessment; when others assess their performance, it is accountability. That is, assessment is a set of initiatives to monitor the results of our actions and improve ourselves; accountability is a set of initiatives others take to monitor the results of our actions, and to penalize or reward us based on the outcomes. Ideally, assessment and accountability work in concert, but the reality can be different. Resources can be given or withheld based on accountability results. This reward/threat causes some individuals to shade the results to favor a positive outcome. It also causes some faculty and administrators to distrust the process, sabotage it, or ignore it (sometimes all three!). Assessment procedures that serve only the purpose of improving instruction are rare, but they do exist. In these circumstances, faculty are less reluctant to become involved; but when accountability is a factor, problems with accuracy do arise.

Program Review

Program review is similar to accreditation in that it involves analysis of all aspects of a given program. Most frequently in higher education, the program being reviewed is an individual department (such as a Communication department). There are situations in which the program under review is a specific academic program, such as an undergraduate major in Communication Studies. This might occur in a department that has multiple majors such as a Communication department that includes a Communication Studies major, a Journalism major, and a Public Relations major. Each of the three programs might be reviewed separately, or they might be reviewed as part of the overall program review of the department. The National Communication Association has developed a set of guidelines for program review that can be found on the NCA website (www.natcom.org). These guidelines include the following areas: (1) Missions, Goals, and Learning Outcomes; (2) Administration and Governance; (3) Resources and Personnel; (4) Faculty and Professional Staff; (5) Hiring and Evaluation of Faculty, Promotion and Tenure; (6) Curriculum; and (7) Student Advising (see Backlund et al., 2011).

The previous brief review of the different types of evaluations applied to analyzing the higher educational process may help the reader separate the processes from each other. The four processes are similar to each other, but each has a specific purpose. As faculty and administrators contemplate educational process analysis, they would be well served to select the appropriate process for the given purpose.

Terminology

Assessment, like other specialized areas, has generated its own terminology. Knowing the purpose of analysis can help one select the most appropriate method, and knowing the terms of assessment can help one more effectively plan and execute an assessment program. Following are some of the most important terms related to assessment (adapted from Backlund, Detwiler, Arneson, & Danielson, 2010).

Assessment. The systematic process of determining educational objectives, gathering, using, and analyzing information about student learning outcomes to make decisions about programs, individual student progress, or accountability.

Benchmark. A criterion-referenced, objective performance definition that is used for comparative purposes. A program can use its own data as a baseline benchmark against which to compare future performance. It can also use data from another program as a benchmark.

Direct assessment. Direct measures of student learning require students to display their knowledge and skills as they respond to the instrument itself. Objective tests, essays, presentations, and classroom assignments all meet this criterion.

Evaluation. This term broadly covers all potential investigations, with formative or summative conclusions, about institutional functioning. It may include assessment of learning, but it might also include non-learning-centered investigations (e.g., satisfaction with recreational facilities).

Formative assessment. An assessment used for improvement (individual or program level) rather than for making final decisions or for accountability. Formative assessments are also used to provide feedback to improve teaching, learning, and curricula, and to identify students’ strengths and weaknesses.

Indirect assessment. Indirect methods such as surveys and interviews ask students to reflect on their learning rather than to demonstrate it.

Measurement. The systematic investigation of people’s attributes.

Norm. An interpretation of scores on a measure that focuses on the rank ordering of students rather than on their performance in relation to criteria.

Objectives. Specific knowledge, skills, or attitudes that students are expected to achieve through their college experience; expected or intended student outcomes.

Outcomes. Results of instruction, i.e., the specific knowledge, skills, or developmental attributes that students actually develop through their college experience; assessment results.

Performance-based assessment. Assessment technique involving the gathering of data through systematic observation of a behavior or process, and then evaluating those data against a clearly articulated set of performance criteria that serve as the basis for evaluative judgments.

Rubric. A scoring tool that lists the criteria for a piece of work, or “what counts” (e.g., purpose, organization, and mechanics are often what count in a piece of writing); it also articulates gradations of quality for each criterion, from excellent to poor.

Summative assessment. A sum total or final product measure of achievement at the end of an instructional unit or course of study.

Value-added. The effects educational providers have had on students during their programs of study. The impact of participating in higher education on student learning and development above that which would have occurred through natural maturation, usually measured as longitudinal change or difference between pretest and posttest.

These terms form the core vocabulary of assessment and are necessary to the effective understanding and development of an assessment program.
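
To make the rubric entry above more concrete, the following short Python sketch shows one possible way a scoring rubric could be represented and applied. The criteria, gradation labels, and scoring scheme are hypothetical illustrations, not drawn from any published instrument.

```python
# A minimal sketch of a scoring rubric as a data structure (hypothetical
# criteria and gradations, for illustration only).

RUBRIC = {
    # criterion: ordered gradations from poor (0) to excellent (3)
    "purpose":      ["unclear", "emerging", "clear", "compelling"],
    "organization": ["disorganized", "loose", "logical", "seamless"],
    "mechanics":    ["many errors", "some errors", "few errors", "error-free"],
}

def score_work(ratings: dict[str, int]) -> float:
    """Convert per-criterion gradation indexes (0-3) into a 0-100 score."""
    max_points = 3 * len(RUBRIC)
    earned = sum(ratings[criterion] for criterion in RUBRIC)
    return 100 * earned / max_points

# Example: a rater judges purpose "clear" (2), organization "logical" (2),
# and mechanics "some errors" (1).
print(score_work({"purpose": 2, "organization": 2, "mechanics": 1}))  # ~55.6
```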

Developing an Assessment Program

An effective assessment program is not easy to develop, nor is wide acceptance of the program guaranteed. The major sources of resistance to assessment will be described in a later section, but a significant portion of resistance can be overcome with a valid and methodologically defensible assessment program. This section describes the components of such a program. Ideally, when such a program is present, everyone sees more benefits than costs. Developing such a system is based on four dimensions.

  1. Integration. Assessment needs to be integrated into the overall planning and budget allocation process of the institution. Though this seems like a natural link, some individuals (both faculty and administrators) have viewed assessment as something disconnected from the rest of the university’s processes. The administration needs to make clear the links between strategic planning, budget allocations, curriculum improvement, and the assessment process. If those links are present and well understood, then faculty are more likely to participate; and if these links are evident and systematically followed, the likelihood of faculty seeing little benefit or use for the results will be minimized.
  2. Promotion. Another essential dimension is promoting assessment as a means to foster collegiality and link faculty members together. It is entirely possible for a faculty member to live out a professional career relatively separated from his or her colleagues. One can become involved in research, students, and specific courses, and spend very little time working with or even talking to colleagues in the same department – let alone faculty from other departments. Some faculty prefer it this way; others find their work isolating. While the assessment process will not eliminate feelings of isolation, it can and has served as a vehicle for greater faculty-to-faculty communication. Assessment in the department almost requires faculty to talk about curriculum objectives, to see how courses fit with each other, and to closely examine the overall purposes of their instructional programs. Such collaboration is almost always beneficial.
  3. Valuation. Another essential dimension is that assessment be valued and useful for students as the ultimate end-users. One Department of Communication Studies developed an assessment program that included both entry and exit assessment, portfolio development, and portfolio defense. Students strongly valued this experience. They reported that the assessment process made departmental learning expectations very clear and that they would graduate with a well-defined and useful knowledge/skill base. More importantly, they felt that the department truly cared about their education and was doing its best to ensure that its students were effectively taught. When the department began the assessment process, the faculty had not foreseen this outcome, but they certainly welcomed it. The student response served to reinforce the faculty in their efforts. The student response also led some initially resistant faculty to become more enthusiastic about assessment. Ideally, assessment does focus on student learning with the basic goal of helping the department deliver the best possible education to its students. If this truly is the focus, then students will recognize that and respond accordingly.
  4. Conclusion. Assessment extends the learning process to its natural conclusion. Faculty members routinely assess student learning with various classroom assignments, tests, and papers. An effective in-class assessment process is part of an overall instructional program that includes well-defined instructional objectives, teaching methods/materials/exercises that develop student learning of those objectives, a testing program that assesses student learning related to the objectives, and a feedback process that lets the students know their progress. Thus, it can be said that the teaching-learning process is truncated or incomplete unless it extends through to meaningful assessment at the end.

These four essential dimensions form the basis for an assessment program and provide a way of making assessment more approachable for faculty and administrators (Morreale & Backlund, 1996). Next, as the program is developed, it should consist of five basic components as described below.

  1. Educational Outcomes. The first step is for the department to identify the educational (student learning) outcomes of the department’s degree program(s). Many departments have already done so. Educational outcomes consist of the skills, knowledge/information, values/attitudes/perspectives, and behaviors that graduates are expected to possess as a result of having completed the degree program. Developing this list can be an interesting and illuminating process.
  2. Expected Results. The expected result is the key component in the assessment equation. It is the bridge between the outcomes and the assessment method. An example of an expected result is, “Seventy-five percent of our students will be able to identify correctly the three purposes of a speech introduction.”
  3. Assessment Methods. These are the methods by which the faculty gathers the data about the expected results. There are a wide range of methods including written tests, performance tests, portfolios, capstone courses, final projects, and the like. It is the job of the faculty to select the method that will most closely match their expected results.
  4. Observed results. After the assessment procedure has been administered, the achieved results are compared to the expected results. This usually gives the faculty something to talk about and a basis for modifying the curriculum, determining which classes meet which outcomes, and seeing where holes and overlaps exist in the course work. The general goal is for the observed results to match the expected results.
  5. Application of results. This is the step that completes the loop and ideally informs possible changes to the educational outcomes assessed. If no changes are needed, this step concludes the current assessment cycle.

In sum, one determines the overall learning objectives for a course or a program and develops a curriculum based on that plan. Appropriate evaluation tools are selected to gather information regarding the context and the actual student learning. The next step is to collect and manage the vital information yielded by the instruments. Finally, the results are analyzed to see what (if any) changes need to be made in the educational curriculum.
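
As a concrete illustration of how expected and observed results (components 2 and 4 above) might be compared, here is a brief Python sketch built around the speech-introduction example. The student results, the threshold, and the decision messages are hypothetical.

```python
# Minimal sketch: comparing an observed result to an expected result.
# The outcome, threshold, and student results are hypothetical.

EXPECTED_PROPORTION = 0.75  # "75% of students will correctly identify ..."

def observed_proportion(correct_flags: list[bool]) -> float:
    """Proportion of students who met the outcome on the assessment."""
    return sum(correct_flags) / len(correct_flags)

# One flag per student: True if the student correctly identified all three
# purposes of a speech introduction.
results = [True, True, False, True, True, False, True, True, True, False]

observed = observed_proportion(results)
print(f"Observed: {observed:.0%}, expected: {EXPECTED_PROPORTION:.0%}")
if observed < EXPECTED_PROPORTION:
    print("Expected result not met -- discuss curriculum changes (step 5).")
else:
    print("Expected result met -- document and continue monitoring.")
```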

Communication Assessment

The four dimensions and five components of assessment can and should be applied to the evaluation of communication programs and degrees of study. It is not the purpose of this chapter to detail the concept of communication competence, the essential outcome of most communication training, but this outcome can be evaluated by examining three aspects of communication learning: the communicator’s affect toward communication, cognitive development, and skill development (Morreale, Spitzberg, & Barge, 2000). The affective domain of learning examines students’ attitudes and feelings regarding their cognition and skills in a content area. Cognitive assessment examines their knowledge and understanding of the content under consideration (Neer & Aitken, 1997; Rubin, 1994). Skills assessment focuses on their behavioral development in a content area (Lane, 1998). Cognitive and skills development are related – students need knowledge (theory) to undergird a skilled, appropriate performance. This distinction is important.

Several volumes discuss various approaches to examining communication competence (Christ, 1994; Morreale & Backlund, 1996; Morreale et al., 1994; Morreale et al., 2000). The literature includes information related to the areas noted above as well as competencies in the areas of public speaking, interpersonal communication, listening, intercultural communication, group communication, and organizational communication. Assessing communication learning is unique in at least three ways.

First, in comparing oral communication skill assessment with assessment in other academic areas, Mead (1982) asserted that oral communication must be assessed with performance measures, as it does not make much sense to solely assess students’ knowledge about communication. Many disciplines use methods such as aptitude tests, standardized achievement tests, and objective and subjective paper-and-pencil tests of disciplinary content. These methods do work for communication to some extent when assessing cognitive and affective learning outcomes, but it is also important to know what students can do with communication knowledge. The process for examining oral communication is unique among academic areas. Clear, behavioral terms need to be defined for both nonverbal and verbal aspects of communication (such as the amount of eye contact and desired language patterns needed in a given situation). Criteria of competence that include cultural and situational differences can also be identified, as a communication behavior may be “right” in one situation and not in another. If developed effectively, assessment methods can work in a variety of situations with a variety of raters.

Second, the assessment of communication performance has certain limits that are not as relevant in other subject areas. Communication is an interactive process that occurs in particular types of situations, like interpersonal conversations, small groups, and public speaking. Thus, what is right or wrong communication behavior is generally based on perceptions of the shared situation or context, with more than one right or appropriate answer or action to be taken. As a result, the evaluation of a student’s communication competence may depend on criteria that are culturally, situationally, and contextually specific. These factors make the assessment of communication more difficult than in some other subjects, but not impossible.

Finally, another issue in communication assessment is the ambiguous definitional nature of some communication skill variables. One excellent example is listening. Listening is an important skill, yet researchers have disagreed about virtually every aspect of the listening process – definition, dimensions, methods of assessment, and methods of improving listening ability. Thus, validity is one of the greatest difficulties posed by listening tests. Because there is no agreement about what listening is, researchers cannot be certain that listening tests actually measure listening. This is not to say that tests of listening are useless. They are highly useful, but the key is the match between the objectives and the test. If the listening (and other communication) objectives are clearly defined, then a listening test can be selected that assesses those objectives.

Aligning Assessment with Student Learning Outcomes and Classroom Assignments

As noted above, assessment is a natural extension of the learning process. The classroom learning process generally consists of four components – educational outcomes, development of course content based on those objectives, classroom activities and lectures to teach those outcomes, and testing (assessment) of student acquisition of the objectives. Faculty already assess student learning with various classroom assignments, tests, papers, and the like. Thus an effective in-class assessment process is part of an overall instructional program and ideally links all of the courses in a particular program (major) to each other. The in-class instructional model should mirror the department’s overall instructional efforts. Curriculum objectives can be developed for each major the department offers, faculty can discuss and decide upon the best teaching methodologies to support those objectives, the assessment program can be directly tied to both the objectives and the methods, and the results can be used for overall quality improvement. By following this model, assessment truly becomes an extension of the teaching and learning process. Generally, once faculty see it integrated in this fashion, resistance tends to fade and acceptance improves. If all concerned truly see that the assessment results are used to improve the educational program and the educational experience of students, then an assessment program will serve the purpose for which it was designed.

Tools of Oral Communication Assessment

A number of different procedures can be used to assess oral communication behavior. In fact, the wide range of student learning outcomes in communication assessment can sometimes complicate the selection of assessment tools. Possible outcomes to be tested include knowledge outcomes (general as well as specific), skills outcomes (basic, higher-order cognitive, knowledge building, occupational), attitudes and values outcomes, and behavioral outcomes. All these purposes can be viewed as important, but a single program cannot assess them all (Ewell, 1987). Because assessment usually involves both knowledge about and skill in the particular behavior, the most common communication procedures vary somewhat from the procedures found in most other academic subjects. The most common procedures used in assessing communication are standardized tests, self-report instruments, written tests, interviews, performance-rating scales, and program review procedures. As performance rating scales are the most commonly used in communication, they will be described in greater detail.

Local and Standardized Tests

The recommended type of assessment procedure has changed considerably in recent years. In the early 1980s, the primary goal was to use standardized tests, the advantages of which include the following: (a) a high level of technical development, (b) a contemporary expert view of the field, and (c) ease and readiness of use. Disadvantages include the following: (a) narrow focus of the tests, (b) questionable student motivation, (c) lack of faculty ownership, and perhaps most importantly, (d) not always well matched to the school’s goals and objectives. In spite of the disadvantages, many commercial tests continue to be in use. A variety of tests developed by companies such as ACT and the College Board were and are being used to assess students’ knowledge and skill. These standardized tests allowed for comparison to other institutions and to national norms with the goal of determining what changes need to be made in the curriculum.

The National Communication Association attempted to follow this line of reasoning by supporting the development of assessment instruments designed to be used as standardized tests. The first was Rubin’s Communication Competency Assessment Instrument (Rubin, 1985). Similar instruments such as The Competent Speaker (Morreale, Moore, Taylor, Surges-Tatum, & Hulbert-Johnson, 1993) and the Conversation Skills Rating Scale (Spitzberg, 1995) were developed with the same goal. Though each of these instruments has seen wide use, none has achieved the intent of standardized tests – to allow for general comparisons across institutions.

Locally produced instruments have virtually the opposite advantages and disadvantages of standardized tests. They can be inexpensive, easily adapted to the specific learning outcomes of the department or institution, and applied to a specific student population; moreover, faculty are more likely to be involved in this testing method. However, validity and reliability issues can be significant, and results are not usually generalizable to other contexts. Nevertheless, the trend seems to be to use more locally produced tools, as faculty and administrators want to be able to be very precise about focusing on what is locally taught and how it is assessed. Most Communication departments have developed different types of local instruments to track student progress and knowledge/skill gain.

Self-Report Instruments

Self-report instruments are particularly useful in gathering attitudinal and affective information about how students view communicative behavior, whether it is their own, their professor’s, or perhaps a department’s. Obviously, self-report scales do not assess skills directly, but they can provide much valuable information about a student’s attitude toward various types of communicative situations. Students may not respond truthfully to these scales, and care may need to be taken to ensure reliable responses. Student self-report scales are particularly useful in determining the quality of an instructional program. Though student input is certainly not the only source in program review, students can give important information on factors such as (a) their own increased knowledge and comprehension, (b) their own changed motivation toward the subject matter and toward learning, (c) their perceptions of teaching style indicators, and (d) their perception of the match between course material and testing. These are useful pieces of information in any program (or personal) evaluation. However, students are not in a position to comment on items such as (a) quality of academic content, (b) justification for course content against eventual needs of the student and society, (c) quality of test construction, (d) the professor’s professionalism toward teaching, and (e) evidence of the professor’s out-of-class, teaching-related activities (Backlund, 1993). Used appropriately, student self-report is a valuable tool in any assessment procedure. A key in this type of information gathering is the perception on the part of the students that their opinions matter. Properly used, self-report scales can generate valuable information about student achievement and about program quality.

Oral Interviews and Focus Groups

Oral interviews are not used extensively as they are very time-consuming, but they can be very useful in testing certain types of skills. For example, the state of Massachusetts used a one-on-one interview between teacher and individual students to rate speaking skill. If there are enough well-trained raters, this procedure can work quite well, but the key to success is the reliability of raters. Face-to-face contact between student and rater allows the rater to sample the student’s ability directly. However, the interview is a contrived situation, it causes most students to experience a higher than normal level of anxiety, the procedure is quite time-consuming, and its reliability is hard to establish.

Performance Rating Scales

Performance rating scales form the bulk of communication skill assessment instruments. Performance tests are generally defined as those that require the student to apply previously acquired knowledge and/or skills to complete some performance task. A real-world or simulated exercise is presented, eliciting an original response by the student, which is observed and evaluated by the teacher. Performance tests have four important characteristics (Stiggins, Backlund, & Bridgeford, 1985). First, students are called upon to apply the skills and knowledge they have learned. Second, performance involves completion of a pre-specified task according to criteria in the context of real or simulated assessment exercises. Third, whatever task or product is required by the exercise, it must be observable and measurable. Fourth, the performance must be directly observed and rated by a trained, qualified judge.

There is a continuum of performance rating instruments. On one end are obtrusive procedures that ask a student to stand and speak (or perform in some way) and then rate the student’s performance on specific criteria. Rating forms and the skills evaluated vary, but these instruments are easy to construct and use, can be applied to just about every communication context, can be built to fit most communication educational objectives and criteria, and make feedback to students relatively easy to give. Nevertheless, this method of assessment has some disadvantages: The students being rated almost always feel some anxiety, reliability is hard to achieve, and in many cases only one student can be rated at a time.

Unobtrusive rating scales occupy the other end of the continuum. A rater uses these scales, but students are unaware that they are being rated. Teachers can use such scales in a classroom setting during or after a class session to rate a student’s informal or conversational communicative behavior. This method has several advantages: A large number of students can be rated in a relatively short period of time; the speaking sample rated is natural, not made up or contrived; the method causes little anxiety; and the rating scale can be constructed to fit most speaking educational objectives. However, reliability is hard to establish, and the type of communication behavior desired does not always appear naturally.

Communication educators have been using various forms of performance testing for years, particularly in public speaking ratings. In theory, performance testing gives an excellent opportunity to reduce potential bias and other problems. Unlike traditional paper-and-pencil tests, performance procedures have far greater face validity; that is, the behaviors called for closely approximate the array of communication behaviors under examination. For example, in a performance assessment of speaking skills, a sample of actual behavior is far more likely to predict success in future speaking contexts than is a response to a multiple-choice question. In effect, the logical link between a sample of behavior and the criterion (future performance) seems stronger for the performance test than the logical link between an objective paper-and-pencil test score (i.e., selecting the correct response to test items) and future performance.

In addition, performance-based measures, which can be conducted unobtrusively, present the possibility of reducing evaluation anxiety. These can be very useful strategies in assessing communication skills in the classroom. Given the focus in communication classes on performance of one type or another, performance assessment is an obvious and useful tool in gathering assessment data about student education acquisition.

Direct and Indirect Assessment Methods

As a department’s faculty consider different data-gathering methods, it may be useful to consider both direct and indirect methods of data gathering. Direct methods of assessment include a wide range of options. Pre- and post-testing (courses, majors, student experiences) with both self-report and observational instruments can chart student change in all three areas of learning – cognitive, affective, and psychomotor. Course-embedded evaluation of student learning (classroom assignments) is a traditional and time-honored way of assessing student acquisition of course learning objectives. In addition, capstone courses can be used as a means of end-of-program assessment of student learning when they include tests of student information learning over the course of the major, student performance and behavioral change, and student changes in affective variables such as confidence, apprehension, and efficacy. Internships can be assessed as a means of student application of course learning to a professional environment. Portfolios can be used both as a compilation and a summary of the work students have done over the course of their studies. Juried review of student work usually brings in outside evaluators to judge student work. This method is common in the arts (e.g., music, art, theater), but it can be used with communication performance as well. Standardized national exams and locally developed tests can also be used to directly assess student cognitive knowledge of particular areas of communication information. These can most effectively be used to assess student knowledge acquisition in areas such as communication theory, rhetoric, and other knowledge-based dimensions of communication. The advantages of direct assessment are many – including designing (or adopting) a procedure that focuses specifically on the student learning outcome under study. Disadvantages include the time required, expense, and the potential that the student respondent may not perform accurately due to apprehension, test fatigue, or other factors.
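
Because pre- and post-testing appears in the list of direct methods above, and value-added was defined earlier as the difference between pretest and posttest, the following minimal Python sketch shows one way paired gains might be summarized. All scores are hypothetical.

```python
# Minimal sketch: summarizing pre/post ("value-added") change for a
# direct assessment. All scores are hypothetical.

from statistics import mean

pre_scores  = [62, 70, 55, 80, 68]   # per-student pretest scores
post_scores = [74, 78, 61, 85, 80]   # matching posttest scores

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Mean pretest:  {mean(pre_scores):.1f}")
print(f"Mean posttest: {mean(post_scores):.1f}")
print(f"Mean gain:     {mean(gains):.1f}")   # rough value-added estimate
```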

Indirect methods of assessment include a number of data-gathering techniques. Exit interviews are used widely in academic departments to gather qualitative information about the student experience in an academic program. Alumni surveys can gather former student opinion about their academic program and its impact on their professional lives. Employer survey data can assess the ability of a department’s students to perform the needed job competencies. Graduation rates can be used to determine whether or not a department’s students are graduating at a rate comparable to similar departments. Retention/transfer student analysis can provide information on how well a department does at attracting and keeping its students. External expert evaluation (advisory committees, accreditation evaluators, or program review evaluators) can provide an outside opinion on a department’s effectiveness. Job placement data can provide comparisons to other departments/majors in success rates of students’ job placement. Student attitude/perception surveys can be among the best methods of gathering data on students’ affective changes over the course of the major. In sum, indirect methods of assessment can provide rich data about students and a program. Care must be taken in how the data are gathered and interpreted, but done effectively, these methods can be invaluable in providing feedback.

Challenges to Effective Assessment

Assessment, of course, is not without its challenges. Even a casual review reveals that fewer than half of American higher education institutions have an effective, embraced, and well-used assessment program. To examine the question of why some institutions are successful at assessment and others are not, this section reviews some of the primary challenges to assessment and suggests methods of meeting those challenges.

Faculty Resistance

It may be obvious to say that some faculty have not been enthusiastic adopters of programs for assessing student learning. The reasons are varied and include the fact that few have professional training in education and thus find the subject of objectives, instruments, and curriculum stranding to be a bit like a foreign language. Others find the task of assessment time-consuming and not related to the tasks one must do in order to be considered for tenure and promotion. Some faculty have philosophical resistance to the practice of standardizing student learning in a modern attempt to quantify something as humanly ineffable as the act of education. Still others balk at doing something that someone or some regulatory body is forcing them to do. Here are four faculty beliefs that can impede the development of an assessment program.

  1. Belief in faculty autonomy. Many faculty, especially tenured full professors, believe that their classroom is their own domain. This belief leads these faculty members to resist attempts to integrate their course content with the overall departmental curriculum. There is historical justification for this viewpoint: For most of the history of higher education, faculty ran their own courses in any way they saw fit. The only time anyone called that into question was when a faculty member violated the norms of appropriateness.
  2. Belief that “If it can be assessed, it can’t be important.” This belief is held by some faculty in the arts and humanities. These faculty members believe that assessment might work fine for simple objectives that involve basic facts, comprehension objectives, and other similar learning outcomes, but not for higher-order thinking such as analysis, synthesis, and evaluation. Though higher-order thinking objectives are more difficult to assess, meaningful evaluation is not impossible.
  3. Belief that “If we hold out long enough, this will go away.” Unfortunately, this belief has been reinforced a number of times by the university administration. Through administrative turnover and various pedagogical and procedural “fads” that have appeared and disappeared in higher education, some faculty have ridden out the “storms” of change by just biding their time. When this strategy has worked, the issue has gone away. Some faculty believe assessment will go away, as well.
  4. Belief that “This really has no benefit.” This is probably the most powerful of the beliefs. Some faculty really do not see any value to assessment separate from their own in-class testing procedures. These faculty simply are not convinced that assessment is worth the effort and believe that few benefits will result for them or their students from the time and effort put into the process.

These four beliefs form the core of the points of resistance. Overcoming this resistance is critical to the success of any assessment program. For individuals who believe in assessment, these beliefs do not make a great deal of sense. However, the beliefs are strong and prevalent on some campuses. Strong leadership is needed to emphasize the benefits, minimize the costs, and facilitate the development of an effective assessment program. Regardless of the reason for resistance, faculty need to come to grips with the assessment process and acknowledge that it is both required by a variety of constituent groups and beneficial for students. In addition, a deeper understanding of assessment and its links to the educational process can help faculty see the role of assessment. Grounding the work of assessment in educational principles can provide a basis for the practical activities of writing objectives and carrying out the other tasks required to evaluate student learning effectively. Embedding the practice and craft of teaching in an established orientation to learning can only enrich the process of socializing and educating our students.

Issues of Validity, Reliability, and Bias

Assessment programs need to develop valid and defensible data. Any assessment program is no better than the data it collects and analyzes. There are many factors that can distort assessment results and generate an inaccurate picture of the student’s skill or the program’s effectiveness. There are a few simple considerations that, when taken into account, can greatly increase the chances that the results of communication assessment will be reliable and meaningful.

The term reliability refers to the consistency with which a test measures what it is supposed to measure. One very important form of reliability is inter-rater reliability. Ideally, two or more raters, using the same scale and looking at the same behaviors, would give the student the same score. Perfection is usually not possible, but the two ratings should be close. This is particularly true in assessing communication behavior. Two teachers of public speaking should be quite similar in their ratings of a student speech. With clear objectives and some training, this can be accomplished.
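
As a rough illustration of how inter-rater agreement might be quantified, the sketch below computes simple percent agreement and Cohen's kappa for two hypothetical raters scoring the same set of speeches on a 1-4 scale. A real study would more likely rely on an established statistics package, and the ratings here are invented for illustration.

```python
# Minimal sketch: percent agreement and Cohen's kappa for two raters
# scoring the same speeches on a 1-4 scale. Ratings are hypothetical.

from collections import Counter

rater_a = [3, 4, 2, 3, 4, 1, 3, 2, 4, 3]
rater_b = [3, 4, 2, 2, 4, 1, 3, 3, 4, 3]

n = len(rater_a)
observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement estimated from each rater's marginal category frequencies.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
chance_agreement = sum(
    (counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b)
)
kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)

print(f"Percent agreement: {observed_agreement:.0%}")
print(f"Cohen's kappa:     {kappa:.2f}")
```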

Another form of reliability that is useful to consider is temporal reliability. This is the stability of test scores over time. If a test produces one score the first time it is given and a very different score when given again three days later, there is cause to doubt its reliability. A test must provide assurance that whatever is measured is not a one-time event. Ideally, student communication behavior does improve over time and with instruction. Thus, the student’s scores on the same test should not remain the same but improve over time. Overall, then, reliability is achieved by attempting to improve consistency between people using the test and by making sure that the test is consistent over time.

Validity refers to the degree to which a test measures what it is supposed to measure. This is where clear curriculum objectives apply. If objectives are clearly defined, it will be relatively easy to select or design a test that will tap those objectives. This is referred to as content or face validity. The usual reason for testing speaking and listening skills is to predict whether the student will be able to speak and listen effectively in real life, or at least in situations other than those inherent in the test. Assessment officers want to be able to predict whether or not a student will be successful. If these officers can predict future results, then they know the test works. For example, if a student is assessed to be an effective public speaker in the classroom, and then also becomes a successful public speaker in professional life, the officers can have confidence in the validity of the classroom assessment procedure.

Bias is another source of distortion in test results. Since the “rightness” of a response in a speaking and listening skill test depends frequently on the situation and the people involved in the test, bias can be a real problem. The teacher or tester should work to remove the effects of cultural, racial, or sexual bias from the assessment procedure so that each student has an equal chance at being successful. Bias is described below in more detail as it can easily distort the results of any assessment process, and it is a factor that many educators do not consider as closely as validity and reliability.

Teachers of communication have traditionally relied heavily on their own observations and subjective judgments of student development. At the same time, research on those subjective faculty judgments suggests there may be some important problems with the accuracy of those judgments (Stiggins et al., 1985). Further, faculty are often not adequately trained in assessment and, with a few exceptions, they are not provided with technical assistance to help them with their assessment efforts. There are many potential sources of bias in assessment that can creep into the educational testing and decision making process if the rater (faculty member) is not trained to avoid cultural and/or sexual bias. For these reasons, communication educators (and all educators) need to know about test bias and methods of overcoming it in assessing performance.

Bias in assessment tests occurs when some characteristic of the test interacts with some characteristic of the test taker in such a way as to distort the meaning of the test score for a particular group or examinee (Stiggins et al., 1985). Distortions can result when assessment items, procedures, or exercises are more familiar and understandable to members of one group – with its cultural and linguistic experience – than to members of another group; such distortions can negatively impact assessment results. Oral communication skill assessment procedures have to be particularly sensitive to cultural bias.

In general, characteristics of assessment that may distort test scores and bias test results include ambiguous test items, items developed without attention to cultural differences, the test administration environment, and test scoring procedures. Ambiguous items are most problematic when individuals from diverse cultural groups differ in their assessment procedure familiarity and their ability to understand the objective of poorly designed items. Those who understand task requirements of tests due to prior experience will score higher, not because they know more, but because they have more highly developed test-taking skills.

Bias can occur in a procedure if subjective judgments play a role (as they do in virtually all communication assessment). This is particularly problematic when rating speaking skills. The potential for rater bias is a function of experiences, training, and attitudes of the rater, clarity and precision of the scoring criteria and standards, and the extent to which the scorer has internalized those standards. As noted earlier, rater training is critical to reducing bias in judgments. The training usually involves familiarizing the raters with the rating scale, then repeated trials with communication samples (a public speech, for example), so the raters begin to view the same behavior in the same way. Stated simply, raters can be trained to rate behaviors reliably. Otherwise, different levels of experience, comfort with testing, and/or tension due to evaluation anxiety may occur across assessment situations, creating the potential for the students’ real competence to be misjudged. Reducing the possibility of test bias can be achieved by having minority group members carefully review test content and exercises, conducting a technical test score and/or item analysis in order to identify differential patterns of examinee responses (Jensen & Harris, 1999), and/or using alternative modes of assessment as a means of reducing bias.
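
As a much simplified illustration of the item-analysis approach mentioned above, this sketch compares per-item pass rates across two examinee groups and flags large gaps for review. The data, group labels, and flagging threshold are hypothetical, and genuine differential item functioning analyses rely on more rigorous statistical models.

```python
# Minimal sketch: flagging items whose pass rates differ sharply between
# two groups of examinees. Data and the 0.20 threshold are hypothetical.

group_results = {
    "group_1": {"item_1": [1, 1, 0, 1, 1], "item_2": [1, 0, 1, 1, 0]},
    "group_2": {"item_1": [1, 1, 1, 0, 1], "item_2": [0, 0, 1, 0, 0]},
}

def pass_rate(responses: list[int]) -> float:
    """Proportion of examinees who answered the item correctly."""
    return sum(responses) / len(responses)

for item in group_results["group_1"]:
    rates = {g: pass_rate(r[item]) for g, r in group_results.items()}
    gap = abs(rates["group_1"] - rates["group_2"])
    flag = "REVIEW" if gap > 0.20 else "ok"
    print(f"{item}: {rates} gap={gap:.2f} [{flag}]")
```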

Cultural Factors in Assessment of Communication

In many countries of the world, including the United States, schools educate students from a wide range of cultures. Given that fact, cultural factors in assessment must be considered as faculty devise an assessment program. Though it is beyond the scope of this chapter to fully detail all the cultural factors that can impact an assessment program, a few salient points must be made. For an extensive description of cultural factors that could be considered, see Gipps (1999). She describes in detail the social, political, and economic forces that drive assessment. She argues that assessment is a social activity and that it can only be understood by taking account of the social, cultural, economic, and political contexts in which it operates. She concludes that faculty and administrators need to think widely as they plan and develop an assessment program.

Next, assessment often occurs in programs that enroll students whose first language is not the language of the classroom. This multicultural dimension obviously needs to be considered in providing a fair and accurate assessment. In the United States, more research on this factor has been done at the K-12 level (kindergarten through twelfth grade) than in higher education. For example, Abedi (2004) and Banks (2004) both describe cultural and linguistic factors in K-12 assessment and provide strategies for addressing them.

Assessment of Communication Around the World

Assessment is a worldwide phenomenon, and comparing indicators of educational effectiveness among countries can result in consternation or even alarm if one’s own ratings do not compare favorably. For example, multiple sources of evidence (e.g., Lederman, 2007) indicate that the United States is falling behind other countries on a wide range of educational assessments. American students appear to know less mathematics, to be less literate in reading, and to have weaker understanding of other subject areas than students in some other countries (Ryan, 2013). For example, Rotberg (2006) described large-scale testing in eight other countries and compared both the content and results with those of the United States. Because the U.S. ranked toward the bottom of the list, many American educators have become involved in developing educational reform programs (e.g., Goals 2000: Educate America Act, Improving America’s Schools Act, School to Work Opportunities Act, No Child Left Behind) with a renewed commitment to national and international assessments that monitor progress and improve the nation’s comparative standing.

Indeed, a debate is raging among American higher education officials and state and federal policy makers about the wisdom and practicality of creating a system that would allow public comparison, both nationally and internationally, of how successfully individual colleges and/or programs are educating their students (see Tremblay, Lalancette, & Roseveare, 2012). These authors describe an effort by the Organization for Economic Cooperation and Development (OECD), which convened a group of testing experts and higher education policy makers to discuss the possibility of creating a common international system to measure the learning outcomes of individual colleges and university systems, along the lines of the test that OECD countries now administer to 15-year-olds, the Program for International Student Assessment.

From this description, two points can be drawn. First, the educational assessment movement, both nationally and internationally, is expanding, not contracting. Second, oral communication is not currently a part of these efforts. Few countries address the teaching of oral communication as intentionally as the United States and other English-speaking countries. For example, speaking, listening, and communication are part of the Oxford Cambridge and RSA Examination (2011) administered to secondary and university students in the United Kingdom. In fact, assessment tests such as these are more systematic and more fully developed than those in the United States. Similar assessment procedures are used in other British Commonwealth countries such as Australia, Canada, and New Zealand. Beyond these countries, little evidence can be found of communication assessment in non-English speaking countries.

Applications of Assessment Results

The purpose of assessment should be continuous improvement through self-evaluation of a class, a major program, a department, or a university. Carrying out assessment procedures is not sufficient in itself; the final step must also be taken, the step that accrediting commissions and governing bodies call “closing the loop.” Closing the loop means applying the results so that improvement actually occurs. This improvement can take a number of forms, and this section explores the application of assessment results.

Program Development

Individual communication programs can be more effectively developed through a comprehensive examination of curriculum, educational experiences, and the amount of student learning that occurs. This evaluation and development may be part of a campus-wide effort (Backlund, Hay, Harper, & Williams, 1990) and/or focus on departmental initiatives (Makay, 1997; Shelton, Lane, & Waldhart, 1999). Work in this area is increasingly associated with mandates from state agencies and accreditation boards. Program improvement demands a close relationship between program goals, measurement of the progress toward these goals, analysis of these measurements, and a feedback mechanism for communicating the results back to the goal setters (Graham, Bourland-Davis, & Fulmer, 1997). Program assessment provides an opportunity for departmental members to exhibit to their administrators the unique contribution of their departments and to fend off threats of budget cuts or program elimination.

Program Planning

Assessment should also be part of the overall institutional planning and budget allocation process. While this seems like a natural link, some individuals (both faculty and administrators) have viewed assessment as something disconnected from the rest of the university’s processes. The administration needs to make clear the links between strategic planning, budget allocation, curriculum improvement, and the assessment process. Assessment supports these other areas by providing data to (1) support budget requests, (2) track progress on plan implementation, and (3) point out needs for curriculum improvement. If these links are evident and systematically followed, assessment results will be seen as significant by both faculty and students.

Faculty Development

Faculty members have not always responded well to feedback. A primary goal of assessment is to gather data, both quantitative and qualitative, about the quality of an educational program. A crucial factor in a program’s quality is the people responsible for instruction in that program – the faculty – so it is essential that they be on board with assessment efforts. One major source of faculty resistance to assessment is the distinct possibility that the process may turn up data that are not complimentary to an individual faculty member. Not all assessment results are positive, but if that information can be used constructively to improve the educational product delivered to students, then faculty may be able to embrace it. The tricky part of this task is maintaining the distinction between using feedback for improvement and using it to reward faculty through promotion, tenure, and other reviews. If that distinction is maintained, faculty can and will reflect individually and collaboratively on the quality of instruction delivered to students.

General Education

Recognition of the important role of communication in general education has increased markedly in the past decade. The Association of American Colleges and Universities (AACU) recognizes this in its LEAP initiative. Liberal Education and America’s Promise (LEAP) is a “national advocacy, campus action, and research initiative” that champions liberal education (An Introduction, n.d., p. 1). Its goal is to help students develop a sense of social responsibility as well as strong intellectual and practical skills that span all areas of study, such as communication. Two of the four LEAP learning outcomes – Intellectual and Practical Skills, and Personal and Social Responsibility – are well suited to the typical oral communication course in general education. In fact, written and oral communication is specifically identified under Intellectual and Practical Skills.

This renewed national interest in communication as part of general education provides an opportunity for the communication discipline to expand its range in university general education programs. With that expansion comes an opportunity to increase the application of assessment principles and tools. The National Communication Association publishes The Competent Speaker (Morreale et al., 1993), which can be used to assess the public speaking ability of general education students. A similar assessment procedure exists for interpersonal communication in the Conversational Skills Rating Scale (Spitzberg, 1995). Assessment procedures and tools for other communication student learning outcomes still need to be developed, and communication faculty can fill that need.
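As a purely hypothetical illustration of how ratings from such an instrument might be summarized in a general education assessment report, the sketch below aggregates rubric ratings for a single speech. The item names, rating scale, and cut score are invented for illustration and are not taken from The Competent Speaker or the Conversational Skills Rating Scale.

# Hypothetical rubric aggregation sketch; items, scale, and cut score are
# illustrative only and do not reproduce any published instrument.
HYPOTHETICAL_ITEMS = ["organization", "language", "delivery", "adaptation to audience"]
PASSING_MEAN = 2.0  # assumed cut score on an invented 1-3 scale

def summarize_speech_ratings(ratings):
    # ratings: dict mapping each rubric item to a 1-3 rating for one speech
    mean_score = sum(ratings[item] for item in HYPOTHETICAL_ITEMS) / len(HYPOTHETICAL_ITEMS)
    return {"mean": mean_score, "meets_standard": mean_score >= PASSING_MEAN}

print(summarize_speech_ratings(
    {"organization": 3, "language": 2, "delivery": 2, "adaptation to audience": 3}))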

Pleasing Internal and External Audiences

The final and probably most problematic application of assessment results is accountability. Accrediting commissions, state legislators, university administrators, and other groups want to know whether they are getting the most efficient and effective use of educational dollars, and they expect data to support the effectiveness of educational programs. The data can and will be used to justify funding (or lack of it) for program approval and expansion, as well as for faculty support. In addition, assessment results could easily be used to compare institutions, departments, and even individual faculty performance. This, of course, can lead to resistance and stress; it can also lead to rewards. The issue needs to be handled with clarity and forthrightness on a campus so that faculty develop and maintain trust in the process. To increase that trust, individuals need to be informed about how the results will be used (Manning, 1986).

The point of any assessment program is to use the results to develop a more effective educational program. The results of a single assessment process can serve multiple applications. Care in specifying those applications is necessary to ensure that the process is developed in a manner that will achieve them and that the stakeholders concerned fully understand and trust the process.

Conclusion

Assessment has received mixed reviews from faculty and members of the communication discipline. The rhetorical framing, by some, has reflected indifference and avoidance; some see assessment as interference by a big brother who does not understand what is being accomplished in the discipline. Others have embraced assessment and used it to improve both their teaching and student learning. Whichever viewpoint one adopts, it must be acknowledged that assessment is now, and will remain, a visible and powerful factor in higher education and in the communication discipline. Thus, embracing the process and making the most of communication assessment seems to be a sound strategy.

University administrators and faculty should be proactive in learning how to use discipline-appropriate assessment strategies and techniques, whether or not a legislative mandate requires it (Adelman, 1985). This is a time for critical analysis, not blind enthusiasm or deaf rejection. Once faculty members become convinced of its utility, they will not only endorse assessment but will use and sustain it as a vital part of academic discourse. Then institutions will be free to actually improve instruction, not just document it.

References

Abedi, J. (2004). The No Child Left Behind Act and English language learners: Assessment and accountability issues. Educational Researcher, 33, 4–14. doi:10.3102/0013189X033001004.

Adelman, C. (1985). To imagine an adverb. In C. Adelman (Ed.), Assessment in American higher education: Issues and contexts (pp. 73–80). Washington, DC: Office of Educational Research & Improvement, US Department of Education.

Association of American Colleges and Universities (n.d.). An introduction to LEAP: Liberal Education and America’s Promise. Retrieved from http://www.aacu.org/leap/introduction-to-leap.

Astin, A. W., & Antonio, A. L. (2012). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education (2nd ed.). Plymouth, UK: Rowman & Littlefield Publishers.

Backlund, P. M. (1993). Using student ratings of faculty in the instructional development process. Association of Communication Administrators Bulletin, 22, 7–13.

Backlund, P., Bach, B., Procopio, C., Mello, B., & Sypher, H. (2011). NCA program review standards: Background, application, and data. Communication Education, 60, 279–295. doi:10.1080/03634523.2011.553723

Backlund, P., Detwiler, T., Arneson, P., & Danielson, M. (2010). Assessing communication knowledge, attitudes, and skills. In P. Backlund & G. Wakefield (Eds.), A communication assessment primer (pp. 1–14). Washington, DC: National Communication Association.

Backlund, P., Hay, E. A., Harper, S., & Williams, D. (1990). Assessing the outcomes of college: Implications for speech communication. Journal of the Association for Communication Administration, 72, 13–20.

Backlund, P., & Morreale, S. P. (1994). History of the Speech Communication Association’s assessment efforts and present role of the committee on assessment and testing. In S. Morreale, M. Brooks, R. Berko, & C. Cooke (Eds.), 1994 SCA summer conference proceedings and prepared remarks (pp. 9–16). Annandale, VA: Speech Communication Association Publications.

Banks, J. A. (Ed.). (2004). Handbook of research on multicultural education. San Francisco, CA: Jossey-Bass.

Chesebro, J. (1990, July). A national context. Paper presented at the meeting of the Speech Communication Association Conference on Assessment, Denver, CO.

Christ, W. G. (Ed.). (1994). Assessing communication education: A handbook for media, speech, and theatre educators. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Cox, K., Imrie, B. W., & Miller, A. (2013). Student assessment in higher education: A Handbook for assessing performance. New York, NY: Routledge.

Elliott, J. L., Shin, H., Thurlow, M. L., & Ysseldyke, J. E. (1995). A perspective on education and assessment in other nations: Where are students with disabilities? (Synthesis Report No. 19). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://education.umn.edu/NCEO/OnlinePubs/Synthesis19.html

Ewell, P. T. (1987). Assessment: Where are we? Change, 19, 23–28.

Fain, P. (2015, September 10). Keeping Up With Competency. Retrieved from https://www.insidehighered.com/news/2015/09/10/amid-competency-based-education-boom-meeting-help-colleges-do-it-right.

Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24, 355–392. doi:10.3102/0091732X024001355

Graham, B., Bourland-Davis, P. G., & Fulmer, H. W. (1997). Using the internship as a tool for assessment: A case study. Journal of the Association for Communication Administration, 5, 198–205.

Griffin, P., & Care, E. (Eds.) (2015). Assessment and teaching of 21st century skills: Methods and approach. New York, NY: Springer Dordrecht.

Jensen, K. K., & Harris, V. (1999). The public speaking portfolio. Communication Education, 48, 211–227. doi:10.1080/03634529909379170

Lane, S. D. (1998). The development of a skill-mastery assessment for a basic speech course. Journal of the Association for Communication Administration, 6, 97–107.

Lederman, D. (2007, September 19). A worldwide test for higher education? Retrieved from https://www.insidehighered.com/news/2007/09/19/international.

Makay, J. J. (1997). Assessment in communication programs: Issues and ideas administrators must face. Journal of the Association for Communication Administration, 5, 62–68.

Manning, T. E. (1986). The why, what, and who of assessment: The accrediting association perspective. In Proceedings of the ETS conference on assessing the outcomes of higher education (pp. 31–38). New York, NY: Educational Testing Service.

Mead, N. (1982, April). Assessment of listening and speaking performance. Paper presented at the National Symposium on Education Research, Washington, DC.

Morreale, S. P., & Backlund, P. (Eds.). (1996). Large scale assessment in oral communication: K-12 and higher education (2nd ed.). Annandale, VA: Speech Communication Association Publications.

Morreale, S. P., & Backlund, P. (1998). Assessment: You’ve come a long way, baby. Popular Measurement, 1, 20–24.

Morreale, S., Backlund, P., Hay, E., & Moore, M. (2011). A major review of the assessment of oral communication. Communication Education, 60, 255–278. doi:10.1080/03634523.2010.516395

Morreale, S., Brooks, M., Berko, R., & Cooke, C. (Eds.). (1994). 1994 SCA summer conference proceedings and prepared remarks. Annandale, VA: Speech Communication Association Publications.

Morreale, S. P., Moore, M. R., Taylor, K. P., Surges-Tatum, D., & Hulbert-Johnson, R. (Eds.). (1993). The competent speaker: Speech performance evaluation form. Annandale, VA: Speech Communication Association Publications.

Morreale, S., Spitzberg, B., & Barge, K. (2000). Human communication: Motivation, knowledge, and skills. Belmont, CA: Wadsworth.

Neer, M. R., & Aitken, J. E. (1997). Using cognitive assessment testing to evaluate and improve a university program in communication studies. Journal of the Association for Communication Administration, 5, 95–109.

Oxford Cambridge and RSA Examination (2011). Cambridge, England: Oxford Cambridge and RSA Examination Company Limited.

Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass.

Rotberg, I. C. (2006). Assessment around the world. Educational Leadership, 64, 58–63.

Rubin, R. (1985). The validity of the communication competency assessment instrument. Communication Monographs, 52, 173–185. doi:10.1080/03637758509376103

Rubin, R. B. (1994). Assessment of the cognitive component of communication competence. In S. Morreale, M. Brooks, R. Berko, & C. Cooke (Eds.), 1994 SCA summer conference proceedings and prepared remarks (pp. 73–86). Annandale, VA: SCA Publications.

Ryan, J. (2013, December 3). American schools vs. the world: Expensive, unequal, bad at math. The Atlantic. Retrieved from http://www.theatlantic.com/education/archive/2013/12/american-schools-vs-the-world-expensive-unequal-bad-at-math/281983/

Shelton, M. W., Lane, D. R., & Waldhart, E. S. (1999). A review and assessment of national education trends in communication instruction. Communication Education, 48, 228–237. doi:10.1080/03634529909379171

Spitzberg, B. H. (1995). Conversational skills rating scale: An instructional assessment of interpersonal competence. Annandale, VA: Speech Communication Association Publications.

Stiggins, R. J., Backlund, P. M., & Bridgeford, N. (1985). Avoiding bias in the assessment of communication skills. Communication Education, 34, 135–141. doi:10.1080/03634528509378595

Tremblay, K., Lalancette, D., & Roseveare, D. (2012). Assessment of higher education learning outcomes: Feasibility study report, Volume 1 – Design and implementation. Paris, France: Organization for Economic Cooperation and Development.
