Jeff Kerssen-Griep and Clayton L. Terry

12 Communicating Instructional Feedback: Definitions, Explanations, Principles, and Questions

Abstract: Feedback is instructional communication that references learners’ performance relative to a desired performance standard and helps them correct, affirm, and restructure what they know of their subjects and themselves. A complex social intervention, feedback’s effects are achieved and its meanings interpreted by participants relative to their cultural, organizational, situational, and relational contexts. Although little one-size-fits-all guidance has emerged after decades of research, feedback modeling and guidance have developed in sophistication and nuance as scholars increasingly account for the braided influences of many forces that matter to its success in particular contexts. Applying a communication lens (message, source, perceiver, process, and context) to organize and re-examine the feedback scholarship corpus, this chapter unpacks the theories and mechanisms, findings, principles, and challenges evident in this scholarship’s evolution. Topics include feedback’s changing conceptual definitions and its evolving theoretical explanations and research exigencies, as well as the feedback principles and communication guidelines best supported by current explanations.

Keywords: instructional feedback, formative, summative, directive, facilitative, self-regulation, student motivation, feedback intervention theory, information processing

Being able to help direct and shape another’s learning is the communicative heart of teaching. By definition, instruction must include adding information designed to improve a learner’s understandings and abilities; skilled instruction means being able to do so in ways that are genuinely insightful for, useful to, and ultimately welcomed and integrated by that student.

Translating that intent into effect proves no simple task (Sargeant, Mann, Sinclair, Vleuten, & Metsemakers, 2008; Taras, 2003). Among other issues, instructors routinely puzzle over how to manage the tension between the need to evaluate their students’ work and yet develop harmonious mentoring relationships with them, all within challenging educational contexts that prioritize teachers’ frequent summative assessments of students over messages that guide their formative development (Boud & Molloy, 2013). Orrell (2006), for instance, found only 22% convergence between instructors’ mainly summative/defensive feedback practices and their espoused beliefs that feedback primarily should guide and facilitate students’ own learning and self-evaluation practices. Teachers working under such conditions can feel that offering feedback is a frustrating zero-sum game: critique performance and dim the motivation or relationship, or protect longer-term mentoring and thus sacrifice some of a learner’s potential self-regulation or learning gains.

Feedback is a key means to guide activity, motivate learning, focus outcomes, explain misunderstandings, identify and correct errors, establish goals, and promote critical reflection, thus improving learners’ knowledge and skill acquisition and developing independent, curious learners (Bandura, 1991; Bangert-Drowns, Kulik, Kulik, & Morgan, 1991; Bartram & Roe, 2008; Ferguson, 2011; Hattie & Timperley, 2007; Narciss & Huth, 2004; Orsmond & Merry, 2011; Shute, 2008; Thurlings, Vermeulen, Bastiaens, & Stijnen, 2013). Done well, instructional feedback generally improves individuals’ learning and self-regulation (Azevedo & Bernard, 1995; Guzzo, Jette, & Katzell, 1985; Kluger & DeNisi, 1996), often outperforming incentives or simple training in that regard (Gilbert, 1978). Its effects extend to interactions among peers and within teams. Students’ peer feedback is increasingly touted in teacher education as means to help students develop self-regulated engagement in learning itself, for instance (Cartney, 2010; Maringe, 2010; Nicol, 2010). On teams, effective performance and process feedback given to individuals can aid team process, performance, and satisfaction (Gabelica, Van den Bossche, Segers, & Gijselaers, 2012; London & Sessa, 2006).

Still, the full scope of effective feedback’s communicative process has proven difficult to capture conceptually and advise practically, with surprisingly little one-size-fits-all guidance available after all this time (Cohen, 1985; Evans, 2013; Krause-Jensen, 2010; Shute, 2008). Especially given evidence that students often do not treat feedback as its givers intend (Bloxham & Campbell, 2010; Fisher, Cavanaugh, & Bowles, 2011; Handley & Cox, 2007), feedback’s effects on performance have understandably been questioned (Perera, Lee, Win, Perera, & Wijesuriya, 2008). For example, Kluger and DeNisi’s (1996) exhaustive meta-analysis of the feedback research corpus famously demonstrated that feedback interventions actually reduced learner performance around one-third of the time (see also Balzer, Doherty, & O’Connor, 1989; Bangert-Drowns et al., 1991; Martens, de Brabander, Rozendaal, Boekaerts, & van der Leeden, 2010; Mory, 2004; Smith & King, 2004; Turner, Husman, & Schallert, 2002). Students prefer detailed, personalized responses to their academic performance, often finding fault with feedback’s content, clarity, focus, and timing (Higgins, Hartley, & Skelton, 2001; Huxham, 2007), while instructors tend to blame students for not internalizing or applying the feedback they receive (Higgins, Hartley, & Skelton, 2002; Lew, Alwis, & Schmidt, 2010). A truer assessment of causes is more complicated.

In trying to establish which forces affect feedback’s ultimate success, researchers have drawn critique for being absorbed by a too-narrow “transmission” model blind to consequential aspects of the feedback process (Evans, 2013; Nicol & Macfarlane-Dick, 2006). Much recent literature shows researchers grappling with how to integrate a widening host of variables and explanations governing instructional feedback’s impacts. For example, learners’ willingness to engage with and act on feedback messages may depend on those interventions’ emotional or self-identity impacts; on aspects of the relationship between teacher and learners; on that learning context’s complexity; on the students’ and teachers’ abilities to receive and give skilled feedback; on the students’ own experiences and beliefs about teaching and learning; or on students’ lack of requisite domain knowledge, willingness to persist, or access to metacognitive templates for dealing with failure, to name just a few such forces (Kluger & DeNisi, 1996; Price, Handley, Millar, & O’Donovan, 2010; Quinton & Smallbone, 2010; Värlander, 2008; Vermeer, Boekaerts, & Seegers, 2001; Young, 2000). Acknowledging feedback’s complex host of variables, Krause-Jensen (2010) noted that because any effective intervention is responsive to the particular people and circumstances involved, feedback researchers ultimately need to examine and inform teachers’ intelligent choice-making agency in the face of conditions that shift as much as they remain constant.

Given its communicative essence, the feedback process is best viewed as a complex social accomplishment rather than as a purely personal message or simple structural reality (Butler & Winne, 1995; Cavanaugh, 2013). Yet, because people are complicated agents, there is much variation in whether and how learners make use even of skillfully designed and effectively communicated feedback interventions (Sargeant et al., 2008; Taras, 2003). Complications arise in part because all communication invokes an often tacit mix of identity, relational, contextual, and cultural negotiations amidst whatever content is being overtly discussed (Hecht, Warren, Jung, & Krieger, 2005; Imahori & Cupach, 2005; Ting-Toomey, 2005). The face-threatening status, power, and risk elements inherent in most teaching-learning situations tend to complicate rather than simplify such negotiations during feedback as all participants exercise their agency while enmeshed, enabled, and constrained by their particular identities and contextual forces.

This chapter unpacks how feedback scholarship has been evolving over time and details what currently is understood about feedback’s process, best practices, and continuing questions, including key conceptual definitions and explanations, research exigencies, and guidelines supported by existing explanations. By detailing current feedback theories, findings, principles, and challenges, this chapter reveals how feedback modeling has become increasingly sophisticated and feedback guidance more learner-focused as scholars continue accounting for the complex, braided influences of so many forces consequential to its success.

Definitions: Which Phenomena Have Been Examined as “Feedback?”

Instructional feedback is an interactive process whose workings are culturally, organizationally, relationally, and cognitively bound. Initially, feedback was the name given to the process of monitoring a system’s output and putting that information back into the system as a means to adjust and control it (Boud & Molloy, 2013). Thus, although the concept is rooted in cybernetics and applicable to human and non-human systems operations alike, educational researchers routinely examine feedback within the context of instruction (Kowitz & Smith, 1987; Mory, 2004).
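
To make the cybernetic origin of the term concrete, the sketch below models the classic control-loop sense of feedback: a system’s output is compared against a reference standard, and the resulting gap is fed back to adjust the next output. It is a purely illustrative sketch (the function name, gain value, and numbers are assumptions for demonstration, not anything drawn from the feedback literature itself).

```python
# Minimal illustrative sketch of the cybernetic sense of "feedback":
# output is monitored, compared against a standard, and the gap is
# fed back into the system to adjust its next action.

def feedback_loop(current_output: float, standard: float, gain: float = 0.5) -> float:
    """Return a correction proportional to the gap between output and standard."""
    gap = standard - current_output  # the discrepancy the system works to reduce
    return gain * gap                # correction fed back into the system

output, standard = 2.0, 10.0
for step in range(5):
    output += feedback_loop(output, standard)
    print(f"step {step}: output = {output:.2f}")  # output converges toward the standard
```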

Feedback long has been viewed as a means to prompt better accuracy and response from learners (Kulhavy & Wager, 1993), though definitional understandings of that process have gained sophistication over time as students’ interpretive role has become better understood and incorporated (Mory, 2004; Sadler, 1989). At its most basic level, instructional feedback is information provided about an aspect of another’s understanding or performance, often noting the accuracy of a learner’s response to an instructional prompt (Cohen, 1985; Hattie & Timperley, 2007). Feedback in education originally was portrayed as a simple control mechanism involving expert information transfer, as a process of “telling,” as sufficient and automatic in itself for students’ improvement to occur if only the message itself was unambiguous enough to be adopted. Feedback models assumed that students “know what action to take when provided with diagnostic information about their performance” (Boud & Molloy, 2013, p. 703). These early understandings presumed that simply noting students’ correctness for them should cause improvements in students’ learning; later modeling incorporates many forces as consequential to feedback’s effectiveness (Boud & Molloy, 2013).

Although some thus have defined as “feedback” any dialogue that supports learning (Askew & Lodge, 2000) or any message presented to a learner after a learner’s response (Wager & Wager, 1985), most researchers today argue that to qualify as feedback, a message must reference not only a learner’s actual performance but also the gap between that performance and some desired performance standard (Johnson & Johnson, 1993; Ramaprasad, 1983; Shute, 2008). Some argue that a message must be received mindfully in order to truly qualify as feedback, that whatever is offered must cue the learner’s awareness of that performance-standard gap (Bangert-Drowns et al., 1991; Nicol & Macfarlane-Dick, 2006; Sadler, 1989). Perhaps the strictest definitional standard is applied by those who insist a message becomes feedback only when it impacts students’ learning by reducing the distance between their performance and the standard (Draper, 2009; Wiliam, 2011). Over the longer term, for example, corrective feedback messages that facilitated students’ self-repair of errors outperformed feedback that simply corrected errors for them (Nassaji, 2011), bolstering this view that learners’ active response is a key condition of interactions that merit the “feedback” label (Thurlings et al., 2013).

From stimulus message to complicated social accomplishment

Definitions and modeling also are coming to portray the feedback process as a complex social intervention rather than simply as helpful information provided to students about their work, highlighting the nuanced communicative work associated with feedback’s complicated, often impositional role in students’ learning processes (Boud & Molloy, 2013; Kerssen-Griep, 2001; Kluger & DeNisi, 1996). Feedback increasingly is defined as a process learners use to facilitate their own learning, rather than as a one-way stimulus that shapes and controls outcomes in others (Boud & Molloy, 2013). With these general parameters in mind, scholars currently note several key feedback distinctions (e.g., see Shute, 2008) that help frame more specific questions and findings.

Formative and Summative Feedback

Despite recent calls for less distinction and greater harmony between the principles applied to the two forms (Black & McCormick, 2010), formative feedback traditionally has been distinguished from summative feedback (Espasa & Meneses, 2010; Martens et al., 2010). Whereas summative feedback such as a grade or test score shows students how they ultimately measured up to the standards applied to a learning experience, formative feedback aims to increase students’ knowledge and ability, ideally by verifying whether a student’s performance is on or off-target and by providing enough information for them to modify their existing understandings or actions (Gibbs & Simpson, 2004). Formative feedback’s effectiveness depends on students being able and willing to receive and utilize it with enough lead time and opportunity to implement it (Shute, 2008). Mastering how to convey both feedback intentions is key to skilled instructional communication at any level.

Corrective/Directive, Facilitative, and Sustainable Feedback

Many also note the distinction traditionally drawn between corrective feedback that directly tells students what to revise or fix, and facilitative feedback whose suggestions are shaped to inform and guide students’ own re-workings of their understandings and performance (Black & Wiliam, 1998; Shute, 2008). Research generally indicates that corrective/directive feedback better suits novice, struggling, or low-achieving learners, whereas facilitative feedback is the better choice with more advanced learning tasks and motivated, higher-achieving students (Moreno, 2004; Vygotsky, 1987).

According to Thurlings et al. (2013), other scholars have reframed corrective vs. facilitative feedback instead as a continuum between poles of feedback message directness versus politeness (McLaren, DeLeeuw, & Mayer, 2011), or as degrees of message explicitness versus implicitness, with directly stated feedback generally leading to slower but more accurate error correction (Baker & Bricker, 2010). Other research in this realm has distinguished degrees to which feedback interventions exacerbate or mitigate identity threats (i.e., face threats) for the people involved (Kerssen-Griep, Trees, & Hess, 2008; Trees, Kerssen-Griep, & Hess, 2009).

Recently, some have recast the “corrective versus facilitative” distinction as a matter of intent or perception rather than as reflecting truly different types of feedback messages, claiming that all attempts to refine students’ understandings through feedback – regardless of how directly or indirectly stated – are actively processed by learners who each make of them whatever sense they will (Archer, 2010). Acknowledging that learners are complex choice-making agents rather than simply passive reactors to feedback indeed argues for defining all feedback as a facilitative “challenge tool” (Evans, 2013, p. 71) regardless of the giver’s intent or message strategy, rather than viewing any feedback as only a simple corrective stimulus merely because it was intended to be one.

Labeled sustainable feedback (Hounsell, 2007), this conception does not encourage viewing any feedback as directly corrective in its effect, nor as having only short-term effects. Rather, feedback is examined for its degree of impact on equipping students to learn for their lifetime. Sustainable feedback done well pictures students in dialogues with teachers and others where they learn about quality performance standards, engage in feedback with varied sources at multiple learning stages, care about and learn to monitor and evaluate their own learning, and develop the learning-planning and goal-setting skills they need (Boud & Molloy, 2013; Carless, Salter, Yang, & Lam, 2011).

Feedback’s evolving definition

Scholars recently have acknowledged that learners actively construct their own understandings via multi-faceted communication rather than simply internalizing a teacher’s message as a stimulus. This conceptualization has contributed to the evolution of feedback’s definition in at least four ways. First, feedback shifts from being studied as a teacher’s unilateral message-making act to instead being understood as the meanings students co-construct in communicating with their teachers and others about their learning and work. Second, feedback is seen as coming from the multiple learning sources that color how students understand their work relative to a standard; feedback no longer is seen as the sole purview of teachers. Third, feedback is not seen as happening individualistically, but rather as occurring in context and subject to students’ collective sense-making about what they experience within that setting. Finally, feedback is seen cumulatively, as a designed sequence of development happening over time rather than only as a discrete, behavioral act (Boud & Molloy, 2013). Understanding feedback’s perceivers in this constructivist way also helps reveal how teachers learn alongside students through the feedback process, as well as how communities of practice and learning emerge through what can be seen as feedback-centered dialogues about learning (Evans, 2013).

Differing Feedback Targets: Performance, Process, Self-Regulation

Whether intended as formative or summative or pictured as directive, facilitative, or sustainable, feedback also can vary regarding which learner phenomena it overtly addresses. Perhaps the most obvious target, performance or task feedback involves an external agent providing at least knowledge-of-results information, often in real time, about learners’ responses to a task or problem compared against a standard (Cavanaugh, 2013; Shute, 2008).

Feedback interventions tend to have greater direct effect on cognitive than on physical learning tasks (Shute, 2008). Any such feedback has been shown most effective when it is received mindfully, which is facilitated best when the feedback is heard as task-focused rather than person-focused (Bangert-Drowns et al., 1991; Kluger & DeNisi, 1996). Performance feedback is offered to teams and among peers as well as between teachers and individual students (Gabelica et al., 2012; Gielen, Dochy, & Onghena, 2011; Van der Pol, Van den Berg, Admiraal, & Simons, 2008).

More tacit learning phenomena also are subject to reinforcement and change via feedback. Process feedback, for example, helps students learn about the cognitive, motivational, and procedural underpinnings of their task performance, either individually or as part of a group (Bartram & Roe, 2008). Offering information about both a group’s performance and its interpersonal decision-making process can improve members’ identification with a group, for example (Sivunen, 2006). Adding feedback that addresses the validity of students’ task, cognitive, and achievement perceptions relative to outside standards also improves performance more than outcome-only feedback can do (Balzer et al., 1989). Offering students feedback about the learning process itself (e.g., about their progress in using learning strategies) helps learners diagnose their development, perform better, and enhance their sense of self-efficacy (Butler & Winne, 1995; Schunk & Swartz, 1992).

Finally, self-regulation feedback helps learners address how they engage tasks, set and adjust goals, deliberate learning strategies, manage motivations, and monitor effects of their engagement. Learners’ plans for engaging a task create desired performance criteria against which they monitor their own achievements (Boekaerts, 2006; Butler & Winne, 1995; Vermunt & Verloop, 1999). Goal-directed feedback (Shute, 2008) can target individual students’ motivations by giving them information about their progress toward a desired target rather than commenting on how they performed an individual task. Externally provided feedback is “an inherent catalyst” for learners’ self-regulated actions: attending to it offers students grounds to monitor the gap between their desired and actual learning engagement (Evans, 2013, p. 87).

Note that thinking about feedback’s overt “targets” is a bit misleading here, as learners certainly interpret process and self-regulation meanings from performance feedback messages, for example. Still, it remains useful to define feedback as containing multi-faceted messages for students’ sense-making about their performance, learning process, self-regulation, self-efficacy, and self-identity.

Conceived this way, feedback interactions clearly offer learners much more information than simply corrected answers: they note timeliness and precision, effort and ability; they comment on identity, intelligence, relationship, and status; they motivate and guide subsequent learning efforts (Mory, 2004). Feedback thus helps learners affirm, augment, correct, and restructure information they hold regarding themselves and their task, domain knowledge, cognitive tactics, and metacognitive awareness (Alexander, Schallert, & Hare, 1991), evoking emotional as well as cognitive and behavioral engagements from participants (Värlander, 2008). Together these processes present an intriguing puzzle for researchers to comprehend and apply, as they have done in a rich corpus of work over many decades.

Theories and Research Findings Grouped by Research Focus

Mirroring wider trends in social science research, feedback investigations over time generally have progressed from being componential, behavioral, and causal in their assumptions to being more systemic, social, and interpretive in orientation. Invoked to serve these evolving examinations, theoretical explanations themselves have moved from rhetorical and psychological realms to integrate interactional, relational, organizational, and socio-cultural forces as scholars make sense of feedback’s operations in light of people’s agency exercised in context. Although not a homogenous trend in the literature, feedback scholarship is shifting from examining its isolated components to exploring more intricately systemic explanations of the process as scholars work to understand feedback’s consequential interconnections with other forces and social systems.

Understanding feedback as a type of instructional communication helps frame it as an interactive learning process involving standards-based advisory messages and shared meanings created among participants operating within the cultural, organizational, interpersonal, and situational contexts they negotiate. Applying this communication frame thus highlights traditional process components that feedback scholars have analyzed alone and together: feedback interventions (Kluger & DeNisi, 1996) can be seen as messages offered with particular aims by sources and interpreted by perceivers in light of process and context dynamics that matter to their meaning-making. This section unpacks research findings and explanations that reflect this ongoing evolution in thought about feedback’s mechanisms and effects, sorted according to which aspect of the feedback process was examined as the primary “lever” regulating its overall enactment and success. The section reports more componential research first, followed by later, more systemic investigations of feedback’s mechanisms and concepts.

Message-Centered Feedback Research

Much feedback research has focused on feedback message characteristics themselves as most consequential. Such scholars have investigated feedback messages’ valence, content, and delivery variables in some detail.

Feedback message valence

Feedback’s impacts may be influenced by whether the message is positive, neutral, or negative in nature. Some research has found no performance differences due to this factor (e.g., Martens et al., 2010), but many others have sought to explain key nuances among these message valence distinctions.

Negative feedback seems the riskiest territory, although, when done correctly, it can motivate learners to accomplish tasks they do not want to do (Van-Dijk & Kluger, 2001). Negative feedback – along with positive feedback – can be a potent motivating force when learners hear it as reinforcing an aspect of their identity they seek to have affirmed (Hattie & Timperley, 2007). Still, offering negative feedback demands substantial social and rhetorical skills to navigate successfully with students (Hattie & Timperley, 2007; Trees et al., 2009).

Positive feedback may be fraught with just as many challenges, however. Thurlings and colleagues (2013) noted that while research perspectives from behaviorism to social constructivism have been consulted to justify positive (e.g., Cavanaugh, 2013; Ferguson, 2011) or at least neutral, non-hurtful, data-based feedback balanced with the grade (e.g., Fund, 2010; Li, Liu, & Steckelberg, 2010; Pokorny & Pickford, 2010; Shute, 2008), the use of praise has shown mixed or even damaging impacts on student performance, motivation, and learning (Kluger & DeNisi, 1996). Praise’s sometimes limiting effects have been blamed on how it may interfere with students’ attributions to effort (key to motivation) or amplify their unhelpful self-attention (Evans, 2013). Positive feedback works when it offers good information value about task performance efforts themselves, but is counterproductive when it diverts students’ finite cognitive energies away from fueling task performance improvement and onto defending their self instead (Kluger & DeNisi, 1996). For this cognitive reason, praising exactly how a student completed a task functions better than simply telling students what good learners they are (Cohen, Steele, & Ross, 1999; Deci, Koestner, & Ryan, 1999; Senko & Harackiewicz, 2005). Overall, the effects of feedback message valence generally depend on how valence operates in conjunction with other forces in the feedback situation (Hattie & Timperley, 2007).

Feedback message content features: Explicitness, elaboration, and complexity

Although students often may distinguish feedback’s content based on whether it addresses their work’s substance or merely its grammatical form (Higgins et al., 2002), scholars have examined many finer distinctions among feedback’s contents. These include investigating how explicit, how elaborative, and how complex feedback messages are (Dempsey, Driscoll, & Swindell, 1993). Ashwell (2000) found a form-plus-substance feedback combination most helped students improve their writing, for example.

Feedback messages can be distinguished on a hierarchy of message explicitness tactics, from least to most explicit about desired responses. Least explicitly informative is simple verification of a learner’s response correctness or incorrectness, known variously as knowledge of results (KOR), knowledge correction response (KCR), recasts, or try-again feedback. Slightly more explicit error-flagging (or location of mistakes) feedback points out particular mistakes in students’ solutions, and a prompting answers strategy (PAS, or elicitations strategy) uses further tactics and hints that spur learners to come up with better responses without offering them model or correct answers (Shute, 2008). Providing learners with model or correct answers is the most explicitly informative of these feedback tactics.
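
One way to picture this least-to-most-explicit hierarchy is as an ordered scale that an instructor or a tutoring system might consult when choosing a response. The sketch below is a hypothetical illustration only; the enum and labels simply echo the tactic names described above and do not reflect any standard implementation.

```python
from enum import IntEnum

class Explicitness(IntEnum):
    """Feedback tactics ordered from least to most explicit, mirroring the
    hierarchy described above (labels are illustrative, not a standard API)."""
    VERIFICATION = 1     # KOR/KCR-style: marks a response right or wrong only
    ERROR_FLAGGING = 2   # points out where the particular mistakes are
    PROMPTING = 3        # hints/elicitations that spur a better response (PAS)
    CORRECT_ANSWER = 4   # provides the model or correct answer itself

def describe(tactic: Explicitness) -> str:
    notes = {
        Explicitness.VERIFICATION: "verifies correctness without locating errors",
        Explicitness.ERROR_FLAGGING: "locates particular mistakes in the solution",
        Explicitness.PROMPTING: "prompts the learner toward a better response",
        Explicitness.CORRECT_ANSWER: "supplies the model or correct answer",
    }
    return f"{tactic.name} (level {tactic.value}): {notes[tactic]}"

for tactic in Explicitness:
    print(describe(tactic))
```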

Though an important component of feedback, verification-only messages such as KCR and KOR generally produce less learning than more informative messages do (Bangert-Drowns et al., 1991; Ferreira, Moore, & Mellish, 2007; Mory, 2003). Even early research noted that learning benefits accrued when answer verification was accompanied at least by an explanation of what makes it correct (Gilman, 1969). Although some have proposed a threshold hypothesis in favor of offering explicit, minimal feedback (i.e., correct answers only) containing little added cognitive load to distract students (Phye, 1979), feedback combinations that prompt (i.e., PAS) rather than give correct answers generally have produced the best learning outcomes by offering a motivating combination of reference information and autonomy (Shute, 2008). Most research on explicitness today is finding that combinations of such tactics work best, often including elaboration and complexity strategies (discussed next) applied in particular contexts.

In addition to information directness, feedback content researchers also have examined message elaboration tactics alone and in conjunction with other message variables as essential to effective feedback (Kulhavy & Stock, 1989). Elaboration tactics offer several types of information as cues guiding students’ attempts to review instruction and self-correct (Butler & Winne, 1995; Dempsey et al., 1993; Van der Kleij, Eggen, Timmers, & Veldkamp, 2012). Elaborations can vary in being task-, instruction-, or information-based; in whether or not they are response-contingent or offer correct answers; and in whether they provide examples and/or elaborate primarily on the learning’s topic, the learner’s response, or the type of error made (Kulhavy & Stock, 1989; Shute, 2008). More elaborated feedback can involve response verification and error flagging along with cues about proceeding, usually without correct answers provided (Narciss & Huth, 2004).

Message elaboration tactics, especially response-specific feedback, often spur gains in motivation, learning efficiency, and learning itself when combined with verification messages (Bangert-Drowns et al., 1991; Butler & Winne, 1995; Shute, 2008). For example, Murphy (2010) found that giving language-learning students elaborated feedback and then KCR and opportunity to revise led to better learning outcomes than came from providing only KCR. Models by Narciss and Huth (2004) and Butler and Winne (1995) explained such effects cognitively, noting that elaborated feedback better helps students analyze their tasks and errors and come to recognize the important task cues and activities they engage around performance, boosting their ability to self-regulate their learning. These researchers and others caution, however, that shaping effective feedback also depends on tailoring it to the instructional context and learner characteristics, complicating the picture. Elaboration tactics may be more important for low-ability than for high-ability students’ learning, for example (Hanna, 1976).

Finally, message complexity also operates as a feedback content variable. Complexity sometimes is combined with explicitness and (especially) elaboration tactics (Dempsey et al., 1993). In reality, either of those tactics sometimes can simplify a received message’s complexity – see Shute (2008) regarding how elaboration tactics aid in understanding complex feedback interventions – meriting complexity’s exploration here as a distinct message variable. Feedback messages can be more elaborate and/or explicit without necessarily being more complex as well, as when a teacher “elaborates” to aid students’ learning by offering a simply stated insight about their performance. Complexity may overlap elaboration in referring to the type and amount of information provided for learners (Shute, 2008), but it diverges from elaboration in also denoting how multifaceted a feedback intervention might be. The most complex intervention in one study, for example, combined answer verification with correct answer provision, explained why the incorrect answer was wrong, and pointed to the answer-relevant part of the text passage being examined (Kulhavy, White, Topp, Chan, & Adams, 1985).

Shute’s (2008) extensive review of feedback complexity research mainly highlights how difficult it can be to attribute performance to any one variable; scholars report feedback complexity as both a factor and a non-factor in feedback’s impact on learning (Mason & Bruning, 2001; Shute, 2008). Sleeman, Kelly, Martinak, Ward, and Moore (1989) found that more and less complex feedback interventions used to tutor algebra did not significantly differ in their impacts on students’ performance, although both outperformed a “no tutoring” condition. Inconclusive research about feedback complexity may be due to factors such as learners’ cognitive and non-cognitive needs and the topic or skill type being taught (Shute, 2008).

Feedback message delivery mode

Setting aside message content concerns, feedback’s mode of transmission also has been examined as consequential to its effects. Although some have found no difference in the effects of written versus spoken (on tape) feedback delivery (Morra & Asis, 2009), much of the literature advises a written feedback mode as potentially less at risk of misinterpretation by its recipients (Shute, 2008). Others correctly note the need for greater research focus on skilled oral feedback practices (Black & McCormick, 2010; Trees et al., 2009). With students increasingly seen as agents actively harvesting feedback information from their instructor, it is not realistic simply to advise teachers away from oral feedback modes (see, e.g., Shute, 2008), since much more oral than written advisory communication is available to perceive in most learning situations. Oral delivery mode’s effects on the feedback process merit greater attention and understanding with an eye toward guidance.

Source-Centered Feedback Research

Since all feedback providers must have sufficient standing with learners to offer effective instruction, source-focused feedback research has examined whether feedback’s success might be affected by aspects of a provider’s identity, such as whether that person is a supervisor, peer, or consultant. Many such studies have discovered other feedback variables that trump or influence any pure effect of a provider’s identity (e.g., Brinko, 1993; Ivancevich & McMahon, 1982).

One exception is source credibility research. Rooted in the Aristotelian ethos and composed of competence, character, and caring components (McCroskey & Teven, 1999), credibility represents a person’s perceived believability and status to correct and advise another’s performance (Frymier & Thompson, 1992). Perceptions of a feedback provider’s credibility rely on a host of social variables (Finn et al., 2009), and spurring such perceptions can be essential to that encounter’s success (Cusella, 1984; Schrodt et al., 2009; Witt & Kerssen-Griep, 2011). Using affiliative and identity-supportive communication tactics can help enhance feedback providers’ credibility in students’ eyes (Trees et al., 2009) by helping students devote cognitive energy to the task rather than divert it to manage identity concerns (Kluger & DeNisi, 1996).

Perceiver-Centered Feedback Research

Much feedback research acknowledges the important interpretive role played by its “receivers,” more aptly thought of as “perceivers” to cue how learners exercise agency in attending and responding to advisory communication. Butler and Winne (1995) wrote that:

… considering feedback merely in terms of the information it contains is too simplistic. Rather, learners interpret such information according to reasonably stable and relatively potent systems of beliefs concerning subject areas, learning processes, and the products of learning. These beliefs influence students’ perceptions of cues, their generation of internal feedback, and their processing of externally provided feedback. In the last case, beliefs filter and may even distort the message that feedback is intended to carry. Moreover, characteristics of information in elaborated feedback … influence how a learner will use feedback. (pp. 253–254)

In particular, scholars have examined feedback recipients’ emotions and attitudes, cognitions, goals, and motivations as consequential to feedback’s success.

Feedback perceivers’ attitudes and emotions

In part due to students’ psychological and emotional investment in creating work to be evaluated (Fritz & Morris, 2000), the strong emotions often associated with feedback can easily impair relevant cognition and affect performance, resilience, and self-regulation (Boud & Falchikov, 2007; Poulos & Mahony, 2008; Värlander, 2008; Yorke, 2003). Emotions manifest in individual students’ affective receptivity to feedback, including attitudes governing whether they feel the feedback is useful, retainable, confidential enough, and conveyed without intimidation or threat (King, Schrodt, & Weisel, 2009).

DeNisi and Kluger (2000; see also Kluger & DeNisi, 1996) explained emotion’s effects on feedback by conceiving of a three-level performance goal hierarchy, powered by the learner’s new awareness that a gap exists between actual and desired performance or understanding. Instructional feedback meant to activate a productive task-level cognitive response first must run the gauntlet of self-level and goals-level cognitive processes that assess the feedback for perceived threats to self-identity or to important goals. Thus, they note that negative emotional responses such as self-doubt and frustration divert finite attention when learners perceive feedback as a critique of the self, leaving little remaining cognitive energy to activate the intended task-level improvement response. Understanding whether and why students attribute successes and failures to their effort, ability, luck, or circumstance has emotional and performative consequences for their learning, with effort the best target (Dweck, 2000; Evans, 2013).

Feedback perceivers’ cognitions

In addition to noting learners’ cognitive attention (previous section), scholars have examined several other learner-cognitive keys to feedback’s effectiveness. Bangert-Drowns and colleagues’ (1991) five-cycle model, for example, highlights the importance of feedback’s mindful reception by the learner. Although a host of demographic and other mundane-seeming factors can interfere with that quality of feedback reception (see e.g. Evans & Waring, 2011; Hounsell, McCune, Hounsell, & Litjens, 2008; Shute, 2008), scholarship has focused especially on cognitive features associated with learners’ degrees of expertise in a content area as well as their expertise as learners per se.

Feedback’s success depends in part on how well it is tailored to the learner’s existing knowledge level about its content or topic. Novice learners have been shown to benefit most from directive, correct-response, explanatory feedback messages, for example (Moreno, 2004; Shute, 2008). Some research also shows content-area novices may benefit more from feedback that is immediate rather than delayed, while more accomplished learners can do better with delayed, facilitative feedback better matched to their more sophisticated cognitive learning template for that content (Shute, 2008).

Aside from content-specific knowledge, feedback messages also interact with students’ degrees of experience and expertise as learners per se. Previous learning experiences and intellectual maturity play roles in feedback’s reception, perhaps due to the presence of more advanced cognitive schemas for processing information and accomplishing learning in accustomed topic areas (Bloxham & Campbell, 2010; Evans, 2013; Vickerman, 2009; Weaver, 2006). Prior knowledge and other factors can complicate as well as facilitate feedback’s reception, however, as entrenched misconceptions can be more resistant to change than novices’ cognitions might be (Chinn & Brewer, 1993; Fritz & Morris, 2000). Whereas students’ beliefs about the correctness of their work (i.e., their “response certitude”) have shown little impact on feedback’s effects (Shute, 2008), students’ beliefs about the learning environment and about the feedback’s necessity, appropriateness, and feasibility indeed can affect their reception and processing of it (Smither, London, & Reilly, 2005).

Feedback perceivers’ goals

Learners’ goals help them frame and respond to learning events that occur; they have cognitive and affective components (Yorke, 2003). Learning goals should match learners’ expectations (e.g., Bloxham & Campbell, 2010). According to expectancy value theory (Bandura, 1991; Shute, 2008), when learners feel self-efficacious and perceive their goals as attainable, they invest in pursuing achievement and avoidance goals to the degree that they value the task and expect success at it. Easily attaining goals pitched too low or failing to attain goals that are pitched too far beyond a learner’s capabilities may discourage the continued effort needed to learn, and in fact can cue the opposite effect (Fisher & Ford, 1998; Shute, 2008; Wingate, 2010). Personally meaningful, appropriately challenging goals help motivate optimal learning when performance feedback reveals learners’ progress toward attaining those goals (Cavanaugh, 2013; Gabelica et al., 2012; Malone, 1981). The most effective goal combination may be students’ judgments that their effort is adequate and that they need to improve; Draper (2009) found such learners demonstrated the most rational engagement with feedback’s contents.
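
That expectancy-value claim is often summarized with a simple multiplicative shorthand; the rendering below is a common textbook formalization rather than an equation taken from the sources cited here, with M standing for motivation to invest effort, E for the learner’s expectancy of success, and V for the subjective value of the task (if either term approaches zero, motivation collapses):

```latex
% Common expectancy-value shorthand (illustrative, not from the cited sources):
M \;\propto\; E \times V
```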

Feedback messages themselves give information key to setting new goals and recalibrating existing ones (Bartram & Roe, 2008). Learners monitor their progress with goals and expectations in mind, which can engender interesting affective and motivational responses to feedback. Carver and Scheier (1990) theorized that even learners progressing exactly at the rate they expected to sometimes can experience neutral or even negative affect about their achievement, affecting their subsequent engagement. “[S]tudents’ interpretations of tasks influence the goals they establish and the cues they attend to and act on as they engage with those tasks” (Butler & Winne, 1995, p. 255).

Feedback perceivers’ motivations and self-regulation

Feedback interacts with learners’ goals, beliefs, and emotions to shape their self-regulated learning (SRL), which itself helps guide feedback’s success. A more sophisticated conception of feedback’s effects, SRL attends to how feedback messages interact with students’ development as independent, expert learners who monitor and adjust their activities and effort to reduce discrepancies they see between their desired and actual performance (Butler & Winne, 1995). This feedback conception suits the focus of scholars concerned with sustainable feedback practices (Boud, 2000; Carless et al., 2011; Hounsell, 2007). SRL scholars note the importance of learners learning how to interpret the feedback they receive and improve based upon it (Sadler, 1989), even cautioning against guidance so explicit as to encourage learners’ dependence on feedback providers (Evans, 2013).

Process-Centered Feedback Research

Feedback scholarship also attends to process components in combination. Such works have examined process issues associated with communicating performance standards, feedback timing, facework, and students’ empowered participation in the feedback process, among other process variables.

Norm-referenced versus self-referenced feedback

Norm-referenced feedback compares a person’s performance against others’ performances, whereas self-referenced feedback judges performance against that person’s own abilities (Shute, 2008). Self-referenced feedback (even norm-referenced guidance communicated in self-referenced terms) generally produces learning-helpful attributions to effort, especially for novice learners (McColskey & Leary, 1985; Shute, 2008). On the other hand, Kluger and DeNisi’s (1996) meta-analytic review found that norm-referenced feedback comparing students’ performance to others depresses poor performers’ motivations, helpful attributions, and expectations about future performance. Comparisons against self or criteria are preferred.
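
The distinction can be made concrete with a brief sketch: the same score produces a different message depending on whether it is referenced to the class or to the learner’s own earlier work. This is purely illustrative; the function names and numbers are assumptions for demonstration, not instruments from the studies cited above.

```python
from statistics import mean

def norm_referenced(score: float, cohort_scores: list[float]) -> str:
    """Frame a learner's score against peers' performance (invites social comparison)."""
    cohort_avg = mean(cohort_scores)
    return f"You scored {score:.0f}; the class average was {cohort_avg:.0f}."

def self_referenced(score: float, own_previous: list[float]) -> str:
    """Frame a learner's score against their own earlier work (frames progress)."""
    prior_avg = mean(own_previous)
    change = score - prior_avg
    return (f"You scored {score:.0f}, {change:+.0f} compared with your own "
            f"earlier average of {prior_avg:.0f}.")

score = 72
print(norm_referenced(score, cohort_scores=[85, 90, 78, 88]))
print(self_referenced(score, own_previous=[60, 65, 68]))
```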

Feedback timing

One of the more disputed issues in the literature has involved whether to provide learners immediate or delayed feedback. Advice’s sequential placement has been found consequential even in studies about communicating social support and guidance, for example (Feng, 2014; Goldsmith, 2000). In education, Shute’s (2008) review intriguingly notes that many feedback field studies show evidence of immediate feedback’s merits (especially for efficiently acquiring verbal and math facility and some motor and procedural skills), while laboratory studies often support delayed feedback’s worth. Immediate feedback may aid some learners’ retention by explicitly helping them link outcomes to causes. Alternatively, Kulhavy and Anderson’s (1972) interference-perseveration hypothesis supports delayed feedback by claiming that initial errors do not compete with feedback’s corrections when they are given time to be forgotten, and that delayed feedback may enable more conceptual knowledge transfer and tap mindful cognitive and metacognitive processing not as readily available immediately after a learning event, better utilizing a learner’s self-efficacy and autonomy (Shute, 2008).

Several variable interactions are newsworthy here, including Clariana, Wagner, and Roher Murphy’s (2000) finding that immediate feedback may benefit simpler learning tasks, with delayed feedback best for more difficult tasks. While feedback should offer guidance about the learners’ learning process and next step (Nicol & Macfarlane-Dick, 2006; Thurlings et al., 2013), understanding whether and when feedback’s timing might be thwarting a learner’s own in-process problem-solving may be key to calibrating when to delay outside messaging, as when answers are provided too soon, before learners begin examining their own knowledge (Shute, 2008). Mathan and Koedinger’s (2002) review concluded that feedback’s effectiveness may rely more on the learner’s capability and the nature of the task than on feedback’s timing. Immediate error correction may benefit task acquisition, for example, but delayed feedback may better suit the more complex fluency building that comes next (Bangert-Drowns et al., 1991). Conclusions about feedback’s timing clearly are mixed and subject to the involvement of other forces.

Skilled communication of feedback interventions

DeNisi and Kluger (2000) illustrated the importance of managing the affective aspect of communicating difficult messages. Based on Kluger and DeNisi’s (1996) cognitive modeling of feedback reception (i.e., feedback intervention theory), feedback should be communicated in fair, considerate ways that support rather than threaten perceivers’ egos, so that they focus on task performance itself rather than divert finite cognitive energies to repair or protect their self-concept (Evans, 2013; Li & De Luca, 2014). This can be very difficult to achieve when students themselves routinely make unhelpful social comparisons between themselves and peers (Hattie & Timperley, 2007), especially when receiving feedback as a team or group (Gabelica et al., 2012).

Although feedback messages do implicate learners’ identity (i.e., face) needs for autonomy, competence, and relatedness (Kerssen-Griep, 2001; Martens et al., 2010), feedback providers’ perceived oral and written facework skill (i.e., saving face, mitigating face threats) during feedback has been associated with recipients’ positive perceptions of providers’ credibility and the usefulness of their feedback. Learners’ attentiveness, responsiveness, mastery learning orientation, and intrinsic motivation also are predicted by skilled facework’s perceived presence, as are learners’ own less defensive responses to instructional feedback, facework perhaps helping them perceive feedback as coming from expert allies commenting on task performance rather than critiquing their self (Kerssen-Griep, Hess, & Trees, 2003; Kerssen-Griep & Witt, 2012, 2015; Witt & Kerssen-Griep, 2011).

Students’ empowered participation

Beyond succeeding when its communication mitigates threats to learners’ identities, feedback messaging, along with other factors, also can help students perceive a productive teacher-student relationship, which affects feedback’s reception and adoption. Feedback seen as too controlling or critical impedes performance improvement efforts (Fedor, Davis, Maslyn, & Mathieson, 2001) and stunts collaborative teacher-student relationship-building. According to Pokorny and Pickford (2010), perceiving a less controlling relationship with a feedback provider keys students’ best sense-making about their performance, goals, and achievement strategies.

Feedback also improves learning when it empowers students to monitor their own learning needs and discover how to meet them (Balzer et al., 1989; Butler & Winne, 1995). This effect is due in part to granting learners a degree of control over their participation in the learning process. Student participation in assessment has been successfully spurred by soliciting their desired feedback areas (Bloxham & Campbell, 2010), involving students in feedback-about-peer-feedback exercises (Kim, 2009), and initiating learners’ reflection about and planning based on early feedback received (Cramp, 2011; Schalkwyk, 2010). Although critical theorists caution that vertical social hierarchies still are operating in such situations (Reynolds & Trehan, 2000), constructivist and SRL researchers in particular note that feedback succeeds when it involves students in something that feels like dialogue to them and that informs learners’ self-judgment, self-regulatory, and knowledge construction practices (Carless et al., 2011; Evans, 2013; Mory, 2003).

Socio-Cultural-Centered Feedback Research

Finally, feedback scholarship increasingly examines how key cultural forces are woven into the feedback process, and with what consequences and possibilities. Concerns include understanding cultural influences on the expectations and status differentials that have consequences for feedback provision and processing, as well as for students navigating differing communities of practice in schools. Cohen et al. (1999), for example, created a feedback procedure they called “wise criticism” to use in situations where racial and status differences exist between privileged feedback providers and the recipients of those messages. They found that “wise” feedback – invoking high standards while credibly assuring students that the provider was invested in their work and confident in their ability to meet those standards – outperformed providers’ attempts to buffer criticism with friendliness or with praise alone; it also reduced the students’ perceptions of evaluator bias, mitigating one effect of cultural difference through communication.

Socio-cultural values and norms generally frame how individuals understand and process social and academic experiences and develop their learning and cognitive styles (Evans, 2013; Vickerman, 2009). At a pragmatic level, second-language learners, for example, often recognize all but the most subtle cues in feedback (Baker & Bricker, 2010). However, to be effective for complicated learning tasks, instruction and feedback must account for culturally embedded contextual and learner characteristics that frame their encoding and decoding of feedback messages (Narciss & Huth, 2004). Individualist-cultured learners tend to seek, offer, and understand direct feedback messages, whereas collectivist-cultured students often prefer interpreting indirect feedback and prefer that feedback not comment on individuals, for instance (De Luque & Sommer, 2000). Evans (2013) argued for better understanding how students’ academic and social circumstances co-operate to shape their expectations, interpretations, and navigation of instructional feedback encounters. “Teachers need to view feedback from the perspective of the individuals engaged in the learning and become proactive in providing information addressing the three feedback questions [i.e., when, how, and at what level to offer] and developing ways for students to ask these questions of themselves” (Hattie & Timperley, 2007, p. 101).

Culture-interested scholars encourage approaching and educating students primarily as feedback partners in dialogue (Carless et al., 2011; Fluckiger, Vigil, Tixier, Pasco, & Danielson, 2010) or as active agents who may need training in the process of feedback itself (Sadler, 1989). Carefully designed multi-stage learning assessments (Carless et al., 2011; Handley, Price, & Millar, 2008) can dovetail with feedback training to foster students’ senses of autonomy and competence as learners, though with effects still moderated by individual and cultural differences (Evans, 2013; Seifert, 2010).

With these influences and mitigations in mind, feedback scholars increasingly question whether standard feedback practices themselves – even those designed to develop self-regulation – unnecessarily force cultural assimilation on students (Nicol, 2008), and how institutional and instructional practices could do more to embrace and utilize students’ home cultures (Ball, 2010).

Research Conclusions

In 2008, Shute concluded a comprehensive review by noting how variable, inconsistent, and even contradictory the connections among feedback phenomena, mechanisms, and principles remained, even in the face of so much research on the topic. Although feedback scholarship is moving in ever more comprehensive directions, much of that inconsistency remains. Increasing attention to consequential details in combination and to key situational components is helping scholars gradually solve the riddle of correcting and helping shape learners’ work in ways that do more good than harm.

Principles for Feedback Practice

Skilled feedback should spur learners’ interest in their task, simplify that task, mark the distance between their work and the standard, offer direction and focus to help them bridge that gap, specify expectations, and reduce the risks and frustrations inherent in the learning activity (Bransford, Brown, & Cocking, 2000). Although it is clear that many forces must be accounted for to create such feedback outcomes (Boud & Molloy, 2013; Evans, 2013; Shute, 2008), specific, clear, and learner-tailored feedback targeting task improvement rather than learner ability has been supported as a general guideline (Butler, 1987; Kluger & DeNisi, 1996; Kulhavy et al., 1985; Narciss & Huth, 2004; Shute, 2008; Thurlings et al., 2013). Feedback generally should provide suggestions about effortful improvement rather than focus on the intelligence or other perceived-trait attributes of the learner (Boud, 2000; DeNisi & Kluger, 2000; Evans, 2013; Hattie & Timperley, 2007; Kluger & DeNisi, 1996; Shute, 2008). Feedback scholarship’s increasing attention to key process, contextual, and socio-cultural variables, however, continues highlighting feedback’s essence as a complex social intervention and complicates prescriptions for doing it; there is no universal “top ten list” of feedback behaviors to apply without regard to audience, culture, and context.

With those forces in mind, one forward-looking way to frame feedback guidance is in terms of how well it increases students’ likelihood to own and process the corrective communication; i.e., to develop a skilled, resilient, and self-regulated approach to judging and adjusting their own learning (Nicol & Macfarlane-Dick, 2006). In this view, skilled instructors are those who empower students to recognize in feedback ways to facilitate their own meaning-making; to notice and acquire new learning opportunities; to improve their self-management, -prioritizing, -filtering, and -perspective-building; to buttress their accountability, resilience, and grit; and to achieve their personal learning goals (Duckworth, Peterson, Matthews, & Kelly, 2007; Evans, 2013; Poulos & Mahony, 2008). Feedback has most value when it comes from multiple sources (teachers, peers, experts) at incremental stages of assignments, and when it raises students’ awareness of quality performance, supports their skilled goal setting and learning planning, and helps them develop capacities to monitor and evaluate their own learning (Boud & Molloy, 2013). Intriguingly, this sustainable feedback approach rests on the presumption that all students already self-regulate their own learning, differing only by degree; some students have gotten better at self-regulated learning practices than others have, and weaker students are the ones who most need chances to enhance their feelings of being in control (Nicol, 2008). This section thus offers general guidance synthesized from recent comprehensive reviews and reconsiderations of feedback message, source, perceiver, and process factors (Boud & Molloy, 2013; Evans, 2013; Li & De Luca, 2014; Shute, 2008; Thurlings et al., 2013). Each is a potential means to help develop learners’ short- and long-term will and capacity to seek out, make sense of, own, value, and utilize corrective communication.

Message Principles

Feedback providers’ messages should be explicit regarding assessment requirements and what counts as quality work, and they should offer engagement with examples of good work. Feedback messages about performance should clarify learning goals, signpost key areas to address, and reduce learners’ uncertainty about how they are doing relative to those factors. Messages that invite learners into dialogues about and active work with assessment criteria themselves also can spur their processes of learning and repair, as well as encourage their longer-term reflection and focus on self-regulation (Evans, 2013; Thurlings et al., 2013).

The literature offers many tests of very particular message strategies, but more generally it is worth knowing that for many learners (see “Perceiver Principles” below), elaborated feedback offered in clear, specific, manageable units often enhances learning more than simple verification messages do. Messages should note areas of strength and provide information on how to improve, most often without an overall grade attached. Messages must be rhetorically sensitive, mitigating potential face (i.e., identity) threats (Trees et al., 2009) and avoiding wording likely to be read as personal insult – or as personal praise, since learners often are wary of or distracted by both. Hearing feedback as a comment on their self rather than on their work often diverts the cognitive energy learners need for feedback to improve learning or performance itself.

Source Principles

Aside from cultivating multiple sources of formative feedback for their learners (Bouzidi & Jaillet, 2009), feedback providers should adopt consistent interpersonal and structural practices known to enhance rather than erode providers’ trustworthiness, expertise, and credibility in learners’ eyes. Cohen and colleagues’ (1999) “wise criticism” practices merit special mention as means to preserve a feedback provider’s credible mentor standing when working with learners across racial or other categorical status differentials: the provider’s feedback intervention must clearly invoke high standards plus a personal investment in evaluating the learner’s work, and also should communicate confidence in the student’s capacity to reach those performance standards.

Perceiver Principles

Several practices can help tailor feedback’s potential complexity to individuals’ learning needs, abilities, and preferences (Knight & Yorke, 2003). Instructors must provide enough intellectual pressure without signaling too much control or focusing too much on a student’s person or relative position (Evans, 2013). The best instruction seeks to nurture students’ productive learning goal orientation (i.e., mastery, or trying to achieve academic goals) rather than a performance goal orientation (ego, or trying to please others) toward their work. Key means of doing this include measuring students’ performance against criteria or their own previous work rather than against their peers’ performance, targeting learners’ effort rather than their innate ability, and framing their mistakes as welcome keys to advancing their learning (Dweck, 2000).

Some advice is tailored to particular types of learner. Specific, goal-directed feedback best suits learners who are focused on pleasing others or uninterested in task mastery per se. Directive (or corrective), elaborated feedback is important for novices, whereas simple verification feedback may be sufficient and offer important autonomy for high-achieving learners (Shute, 2008). Challenging facilitative feedback (including hints, cues, or prompts) also can engage high-achieving or motivated learners (Shute, 2008). Mitigate impacts on perceivers’ cognitive load by offering feedback via multiple communication modes when possible and suited to the learners.

With perceivers in mind, feedback interventions should inhale as well as exhale. Providers and learners together should query feedback practices to clarify meanings, expectations, misconceptions, and future actions. Train students how to peer-assess and self-assess, since giving feedback often impacts a student’s future performance even more than receiving feedback does (Kim, 2009). Overall, clarify learners’ active role in the feedback process by involving them in assessment design whenever possible, and by giving students agency to develop their own action plans after feedback (Evans, 2013).

Process Principles

Some process variables also can enhance feedback’s effectiveness, especially when they occur within assessment and feedback designs that respect and encourage constructive use of study time (Evans, 2013; Shute, 2008). Feedback’s timing can be one such variable. Providers should not interrupt actively engaged learners to offer feedback – for example, by providing answers too soon, before learners try solving problems on their own (Shute, 2008). Immediate feedback often is best for lower-achieving students or others engaged in work that is difficult for them; it also often suits fixing errors in real time and learning verbal, procedural, and some motor skills (Shute, 2008). Delayed feedback often suits higher-achieving students or learners engaged in work that is simple for them, and it often outperforms immediate feedback when the goal is more sophisticated transfer (rather than simple acquisition) of learning (Shute, 2008). All feedback communication practices should reinforce feedback’s integrated, ongoing, developmental role in students’ self-regulated learning process (Boud & Molloy, 2013; Evans, 2013).

Exigencies and Future Research

Knowledge Gaps

Scholars recently have lamented insufficient attention to the ways cognitive styles and abilities shape learners’ responses to feedback, noting, for example, the need to incorporate recent neuroscience findings about cognition and learning (Evans, 2013). Others call for more attention to the details of particular feedback process features, such as its ordering (Evans, 2013), how different lengths of feedback message delay might affect performance (Cavanaugh, 2013), or how and why feedback interventions supporting declarative, conceptual, or procedural learning processes might differ from or overlap one another (Shute, 2008). Black and McCormick (2010) and Kerssen-Griep and Witt (2015), among others, advocate greater emphasis on helping feedback providers better understand and navigate the multitude of social forces at play especially in common oral feedback encounters, rather than learn primarily about written feedback practices. Such scholarship could help feedback participants more mindfully reflect on what they omit from their commentary, and why (Evans, 2013).

At the system level, Shute (2008) and others note major research gaps in understanding the interactions among task characteristics, instructional contexts, and student characteristics that mediate feedback effects. Empirical evidence remains scarce about which feedback strategies best suit which situations and contexts, including how newer electronic assessment feedback interventions affect student performance (Evans, 2013). In addition, scholars still know too little about feedback’s role within schooling that is subject to more diverse and more continuous forms of assessment than ever before (Boud & Molloy, 2013). How does this changed academic environment influence the opportunities, abilities, actions, and outcomes achieved (and achievable) by feedback participants, including school administrators, psychologists, and coaches as well as teachers, peers, and other experts (Cavanaugh, 2013)?

Method Concerns

Published critiques of existing research designs center on key process variables, including the need to define and operationalize more consistently such frequently used terms as sufficient, effective, and even feedback itself – understood as something genuinely more complex than a simple message stimulus cleanly transmitted to passive receivers (Boud & Molloy, 2013; Kluger & DeNisi, 1996; Nicol & Macfarlane-Dick, 2006; Thurlings et al., 2013). One-time, small-scale case studies dominate the literature, and self-judgment data often stand in for less subjective measures of key variables (Evans, 2013; Thurlings et al., 2013). Amid calls for more observational feedback studies is advocacy for larger samples, better experimental control, and more follow-up studies (Cavanaugh, 2013; Thurlings et al., 2013). In general, scholars seek richer and more objectively measured examinations of feedback’s complexity as enacted and experienced in particular contexts.

Responsive Future Research

Develop and train teachers and scholars in a learner-centered conception of feedback communication

Understanding and studying learners as feedback agents rather than as simple message receivers will improve how feedback researchers, providers, and perceivers conceive of learning and examine feedback, especially feedback practices that enhance students’ sustained, self-directed independence in learning (Black & McCormick, 2010; Boud & Molloy, 2013; Evans, 2013). This research re-focus also will help participants navigate feedback interactions as two-way dialogues about interpretations and expectations, teach learners more about how to elicit the sorts of information they need, and highlight additional learner characteristics (e.g., their autonomy, competence, and relatedness identity needs) that matter to their effective feedback-seeking behaviors (Boud & Molloy, 2013; Evans, 2013; Li & De Luca, 2014; Shute, 2008). One word of caution here is that expanding feedback’s definition to include learners’ sense-making from available information (including from feedforward and feed-up processes; e.g., Evans, 2013) could cause the phenomenon to lose some of its identity as a unique learning activity. Is all learner sense-making from standards-based information a form of feedback when seen from this perspective? What might that expansion in feedback’s conceptual scope cost – and gain – scholars and practitioners consulting the literature?

Understand learners’ identity, relational, and socio-cultural sense-making about feedback communication

Research needs to better integrate, contextualize, and account for the multitude of forces that combine to affect feedback’s reception, processing, ownership, and use (Boud & Molloy, 2013; Evans, 2013). Heeding Vygotsky’s (1987) warning not to artificially separate people’s intellectual from their motivational and emotional aspects, scholarship should better synthesize consequential feedback components, such as student goal-setting’s impacts on other variables (Cavanaugh, 2013) or the roles that learners’ affect, the setting, and learner characteristics together play in mediating feedback outcomes (Shute, 2008). Examining the feedback process as an embedded system can help scholars comprehend its complexity in context rather than reduce it to a matter of isolated variables (Thurlings et al., 2013).

Examine discipline-specific and cross-disciplinary aspects of effective feedback communication

Finally, feedback communication scholarship should seek any overarching principles as well as practices and insights anchored in the particular character of learning within specific disciplines. Feedback in the humanities and the sciences, for example, may differ – and overlap – for explainable reasons, and feedback within certain learning realms may normally be offered via electronic means (or peer groups, or repeated designs) (Li & De Luca, 2014). Feedback participants understandably differ in how well they navigate between communities of practice (Wenger, 2000). Understanding the interplay of socio-cultural, contextual, processual, relational, communication, affective, and cognitive forces that help students self-regulate their learning across multiple settings will be key to effective feedback program design, especially collaborative program designs (Evans, 2013). Feedback scholarship continues attempting to make this very complex social intervention process appear as simple as it can be, and no simpler.

References

Alexander, P. A., Schallert, D. L., & Hare, V. C. (1991). Coming to terms: How researchers in learning and literacy talk about knowledge. Review of Educational Research, 61, 315–343.

Archer, J. C. (2010). State of the science in health professional education: Effective feedback. Medical Education, 44, 101–108. doi:10.1111/j.1365-2923.2009.03546.x

Ashwell, T. (2000). Patterns of teacher response to student writing in a multiple-draft composition classroom: Is content feedback followed by form feedback the best method? Journal of Second Language Writing, 9, 227–257.

Askew, S., & Lodge, C. (2000). Gifts, ping-pong, and loops – Linking feedback and learning. In S. Askew (Ed.), Feedback for learning (pp. 1–18). London, England: RoutledgeFalmer.

Azevedo, R., & Bernard, R. M. (1995). A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 13, 111–127.

Baker, W., & Bricker, R. H. (2010). The effects of direct and indirect speech acts on native English and ESL speakers’ perception of teacher written feedback. System, 38, 75–84.

Ball, E. C. (2010). Annotation an effective device for student feedback: A critical review of the literature. Nurse Education in Practice, 10, 138–143. doi:10.1016/j.nepr.2009.05.003

Balzer, W. K., Doherty, M. E., & O’Connor, R. Jr. (1989). Effects of cognitive feedback on performance. Psychological Bulletin, 106, 410–433.

Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior and Human Decision Processes, 50, 248–287.

Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. T. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61, 213–238.

Bartram, D., & Roe, R. A. (2008). Individual and organizational factors in competence acquisition. In W. Nijhof (Ed.), The learning potential of the workplace (pp. 71–96). Rotterdam, Holland: Sense Publishers.

Black, P., & McCormick, R. (2010). Reflections and new directions. Assessment & Evaluation in Higher Education, 35, 493–499. doi:10.1080/02602938.2010.493696

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5, 7–74. doi:10.1080/0969595980050102

Bloxham, S., & Campbell, L. (2010). Generating dialogue in assessment feedback: Exploring the use of interactive cover sheet. Assessment & Evaluation in Higher Education, 35, 291–300. doi:10.1080/02602931003650045

Boekaerts, M. (2006). Self‐regulation and effort investment. Handbook of child psychology volume 4: Child psychology in practice (pp. 345–377). New York, NY: Wiley.

Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22, 151–167. doi:10.1080/713695728

Boud, D., & Falchikov, N. (2007). Developing assessment for informing judgement. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education: Learning for the longer term (pp. 181–197). London, England: Routledge.

Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38, 698–712.

Bouzidi, L., & Jaillet, A. (2009). Can online peer assessment be trusted? Educational Technology & Society, 12, 257–268.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school (Rev. Ed.). Washington, DC: National Academies Press.

Brinko, K. T. (1993). The practice of giving feedback to improve teaching: What is effective? The Journal of Higher Education, 64, 574–593.

Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245–281.

Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36, 395–407. doi:10.1080/03075071003642449

Cartney, P. (2010). Exploring the use of peer assessment as a vehicle for closing the gap between feedback given and feedback used. Assessment & Evaluation in Higher Education, 35, 551–564.

Carver, C. S., & Scheier, M. F. (1990). Origins and function of positive and negative affect: A control-process view. Psychological Review, 97, 19–35.

Cavanaugh, B. (2013). Performance feedback and teachers’ use of praise and opportunities to respond: A review of the literature. Education and Treatment of Children, 36, 111–136.

Chinn, C. A., & Brewer, W. F. (1993). The role of anomalous data in knowledge acquisition: A theoretical framework and implications for science instruction. Review of Educational Research, 63, 1–49.

Clariana, R. B., Wagner, D., & Roher Murphy, L. C. (2000). Applying a connectionist description of feedback timing. Educational Technology Research and Development, 48, 5–21.

Cohen, G. L., Steele, C. M., & Ross, L. D. (1999). The mentor’s dilemma: Providing critical feedback across the racial divide. Personality and Social Psychology Bulletin, 25, 1302–1318.

Cohen, V. B. (1985). A reexamination of feedback in computer-based instruction: Implications for instructional design. Educational Technology, 25, 33–37.

Cramp, A. (2011). Developing first-year engagement with written feedback. Active Learning in Higher Education, 12, 113–124. doi:10.1177/1469787411402484

Cusella, L. P. (1984). The effects of feedback source, message, and receiver characteristics on intrinsic motivation. Communication Quarterly, 32, 211–221.

Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125, 627–668.

De Luque, M. F., & Sommer, S. M. (2000). The impact of culture on feedback-seeking behavior: An integrated model and propositions. Academy of Management Review, 25, 829–849.

Dempsey, J., Driscoll, M., & Swindell, L. (1993). Text-based feedback. In J. Dempsey & G. Sales (Eds.), Interactive instruction and feedback (pp. 21–54). Englewood Cliffs, NJ: Educational Technology Publications.

DeNisi, A. S., & Kluger, A. N. (2000). Feedback effectiveness: Can 360-degree appraisals be improved? The Academy of Management Executive, 14, 129–139.

Draper, S. W. (2009). What are learners actually regulating when given feedback? British Journal of Educational Technology, 40, 306–315.

Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92, 1087–1101. doi:10.1037/0022-3514.92.6.1087

Dweck, C. S. (2000). Self-theories: Their role in motivation, personality, and development. Philadelphia, PA: Psychology Press.

Espasa, A., & Meneses, J. (2010). Analysing feedback processes in an online teaching and learning environment: An exploratory study. Higher Education, 59, 277–292.

Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83, 70–120.

Evans, C., & Waring, M. (2011). Enhancing feedback practice: A personal learning styles pedagogy approach. In S. Rayner & E. Cools (Eds.), Style differences in cognition, learning, and management: Theory, research and practice (pp. 188–203). New York, NY: Routledge.

Fedor, D. B., Davis, W. D., Maslyn, J. M., & Mathieson, K. (2001). Performance improvement efforts in response to negative feedback: The roles of source power and recipient self-esteem. Journal of Management, 27, 79–97.

Feng, B. (2014). When should advice be given? Assessing the role of sequential placement of advice in supportive interactions in two cultures. Communication Research, 41, 913–934.

Ferguson, P. (2011). Student perceptions of quality feedback in teacher education. Assessment & Evaluation in Higher Education, 36, 51–62. doi:10.1080/02602930903197883

Ferreira, A., Moore, J. D., & Mellish, C. (2007). A study of feedback strategies in foreign language classrooms and tutorials with implications for intelligent computer-assisted language learning systems. International Journal of Artificial Intelligence in Education, 17, 389–422.

Finn, A. N., Schrodt, P., Witt, P. L., Elledge, N., Jernberg, K. A., & Larson, L. M. (2009). A meta-analytical review of teacher credibility and its associations with teacher behaviors and student outcomes. Communication Education, 58, 516–537.

Fisher, R., Cavanaugh, J., & Bowles, A. (2011). Assisting transition to university: Using assessment as a formative learning tool. Assessment & Evaluation in Higher Education, 36, 225–237.

Fisher, S. L., & Ford, J. K. (1998). Differential effects of learner effort and goal orientation on two learning outcomes. Personnel Psychology, 51, 397–420.

Fluckiger, J., Vigil, Y., Tixier, Y., Pasco, R., & Danielson, K. (2010). Formative feedback: Involving students as partners in assessment to enhance learning. College Teaching, 58, 136–140. doi:10.1080/87567555.2010.484031

Fritz, C. O., & Morris, P. E. (2000). When further learning fails: Stability and change following repeated presentation of text. British Journal of Psychology, 91, 493–511. doi:10.1348/000712600161952

Frymier, A. B., & Thompson, C. A. (1992). Perceived teacher affinity-seeking in relationship to perceived teacher credibility. Communication Education, 41, 388–399.

Fund, Z. (2010). Effects of communities of reflecting peers on student‐teacher development – including in‐depth case studies. Teachers and Teaching: Theory and Practice, 16, 679–701. doi:10.1080/13540602.2010.517686

Gabelica, C., Van den Bossche, P., Segers, M., & Gijselaers, W. (2012). Feedback, a powerful lever in teams: A review. Educational Research Review, 7, 123–144.

Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3–31.

Gielen, S., Dochy, F., & Onghena, P. (2011). An inventory of peer assessment diversity. Assessment and Evaluation in Higher Education, 36, 137–155.

Gilbert, T. (1978). Human competence: Engineering worthy performance. New York, NY: McGraw-Hill.

Gilman, D. A. (1969). Comparison of several feedback methods for correcting errors by computer-assisted instruction. Journal of Educational Psychology, 60, 503–508.

Goldsmith, D. J. (2000). Soliciting advice: The role of sequential placement in mitigating face threat. Communication Monographs, 67, 1–19.

Guzzo, R. A., Jette, R. D., & Katzell, R. A. (1985). The effects of psychologically based intervention programs on worker productivity: A meta-analysis. Personnel Psychology, 38, 275–291.

Handley, K., & Cox, B. (2007). Beyond model answers: learners’ perceptions of self-assessment materials in e-learning applications. Association for Learning Technology Journal, 15, 21–36.

Handley, K., Price, M., & Millar, J. (2008). Engaging students with assessment feedback. Final report for FDTL project 144/03. Retrieved from http://www.brookes.ac.uk/aske/documents/FDTL_FeedbackProjectReportApril2009.pdf

Hanna, G. S. (1976). Effects of total and partial feedback in multiple-choice testing upon learning. Journal of Educational Research, 69, 202–205.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.

Hecht, M. L., Warren, J. R., Jung, E., & Krieger, J. L. (2005). A communication theory of identity: Development, theoretical perspective, and future directions. In W. B. Gudykunst, (Ed.), Theorizing about intercultural communication (pp. 257–278). Thousand Oaks, CA: Sage.

Higgins, R., Hartley, P., & Skelton, A. (2001). Getting the message across: The problem of communicating assessment feedback. Teaching in Higher Education, 6, 269–274. doi:10.1080/13562510120045230

Higgins, R., Hartley, P., & Skelton, A. (2002). The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27, 53–64. doi:10.1080/03075070120099368

Hounsell, D. (2007). Towards more sustainable feedback to students. In D. Boud & N. Falchikov (Eds.), Rethinking assessment in higher education: Learning for the longer term (pp. 101–113). London, England: Routledge.

Hounsell, D., McCune, V., Hounsell, J., & Litjens, J. (2008). The quality of guidance and feedback to students. Higher Education Research and Development, 27, 55–67. doi:10.1080/07294360701658765

Huxham, M. (2007). Fast and effective feedback: Are model answers the answer? Assessment & Evaluation in Higher Education, 32, 601–611.

Imahori, T. T., & Cupach, W. R. (2005). Identity management theory: Facework in intercultural relationships. In W. B. Gudykunst, (Ed.), Theorizing about intercultural communication (pp. 195–210). Thousand Oaks, CA: Sage.

Ivancevich, J. M., & McMahon, J. T. (1982). The effects of goal setting, external feedback, self generated feedback on outcome variables: A field experiment. Academy of Management Journal, 25, 359–372.

Johnson, D. W., & Johnson, R. T. (1993). Cooperative learning and feedback in technology-based instruction. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 133–157). Englewood Cliffs, NJ: Educational Technology.

Kerssen-Griep, J. (2001). Teacher communication activities relevant to student motivation: Classroom facework and instructional communication competence. Communication Education, 50, 256–273.

Kerssen-Griep, J., Hess, J., & Trees, A. (2003). Sustaining the desire to learn: Dimensions of perceived instructional facework related to student involvement and motivation to learn. Western Journal of Communication, 67, 357–381.

Kerssen-Griep, J., Trees, A. R., & Hess, J. A. (2008). Attentive facework during instructional feedback: Key to perceiving mentorship and an optimal learning environment. Communication Education, 57, 312–332.

Kerssen-Griep, J., & Witt, P. L. (2012). Instructional feedback II: How do instructor immediacy cues and facework tactics interact to predict student motivation and fairness perceptions? Communication Studies, 63, 498–517.

Kerssen-Griep, J., & Witt, P. L. (2015). Instructional feedback III: How do instructor facework tactics and immediacy cues interact to predict student perceptions of being mentored? Communication Education, 64, 1–24. doi:10.1080/03634523.2014.978797

Kim, M. (2009). The impact of an elaborated assessee’s role in peer assessment. Assessment and Evaluation in Higher Education, 34, 105–114. doi:10.1080/02602930801955960

King, P. E., Schrodt, P., & Weisel, J. (2009). The instructional feedback orientation scale: Conceptualizing and validating a new measure for assessing perceptions of instructional feedback. Communication Education, 58, 235–261.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284.

Knight, P., & Yorke, M. (2003). Assessment, learning and employability. Maidenhead, UK: SRHE/ Open University Press.

Kowitz, G. T., & Smith, J. C. (1987). The four faces of feedback. Performance & Instruction, 26, 33–36.

Krause-Jensen, J. (2010). Seven birds with one magic bullet: Designing assignments that encourage student participation. Learning and Teaching: The International Journal of Higher Education in the Social Sciences, 3, 51–68.

Kulhavy, R. W., & Anderson, R. C. (1972). Delay-retention effect with multiple-choice tests. Journal of Educational Psychology, 63, 505–512.

Kulhavy, R. W., & Stock, W. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1, 279–308.

Kulhavy, R. W., & Wager, W. (1993). Feedback in programmed instruction: Historical context and implications for practice. In J. Dempsey & G. Ales (Eds.), Interactive instruction and feedback (pp. 3–20). Englewood Cliffs, NJ: Educational Technology Publications.

Kulhavy, R. W., White, M. T., Topp, B. W., Chan, A. L., & Adams, J. (1985). Feedback complexity and corrective efficiency. Contemporary Educational Psychology, 10, 285–291.

Lew, M. D., Alwis, W. A. M., & Schmidt, H. G. (2010). Accuracy of students’ self‐assessment and their beliefs about its utility. Assessment & Evaluation in Higher Education, 35, 135–156.

Li, J., & De Luca, R. (2014). Review of assessment feedback. Studies in Higher Education, 39, 378–393.

Li, L., Liu, X., & Steckelberg, A. L. (2010). Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology, 41, 525–536.

London, M., & Sessa, V. I. (2006). Group feedback for continuous learning. Human Resource Development Review, 5, 1–27.

Malone, T. W. (1981). Toward a theory of intrinsically motivating instruction. Cognitive Science, 5, 333–370.

Maringe, F. (2010). Leading learning: Enhancing the learning experience of university students through anxiety auditing. Education, Knowledge, and Economy, 4, 15–31. doi:10.1080/17496891003696470

Martens, R., De Brabander, C., Rozendaal, J., Boekaerts, M., & Van der Leeden, R. (2010). Inducing mind sets in self‐regulated learning with motivational information. Educational Studies, 36, 311–327.

Mason, B. J., & Bruning, R. (2001). Providing feedback in computer-based instruction: What the research tells us. Center for Instructional Innovation, University of Nebraska–Lincoln.

Mathan, S. A., & Koedinger, K. R. (2002). An empirical assessment of comprehension fostering features in an intelligent tutoring system. In S. A. Cerri, G. Gouarderes, & F. Paraguacu (Eds.), Intelligent tutoring systems, 6th International Conference, ITS 2002 (Vol. 2363, pp. 330–343). New York, NY: Springer-Verlag.

McColskey, W., & Leary, M. R. (1985). Differential effects of norm-referenced and self-referenced feedback on performance expectancies, attribution, and motivation. Contemporary Educational Psychology, 10, 275–284.

McCroskey, J. C., & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66, 90–103.

McLaren, B. M., DeLeeuw, K. E., & Mayer, R. E. (2011). Polite web-based intelligent tutors: Can they improve learning in classrooms? Computers & Education, 56, 574–584.

Moreno, R. (2004). Decreasing cognitive load for novice students: Effects of explanatory versus corrective feedback in discovery-based multimedia. Instructional Science, 32, 99–113.

Morra, A. M., & Asis, M. I. (2009). The effect of audio and written teacher responses on EFL student revision. Journal of College Reading and Learning, 39, 68–81. doi:10.1080/10790195.2009.10850319

Mory, E. H. (2003). Feedback research revisited. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 745–783). New York, NY: MacMillan Library Reference.

Mory, E. H. (2004). Feedback research revisited. Handbook of Research on Educational Communications and Technology, 2, 745–783.

Murphy, P. (2010). Web-based collaborative reading exercises for learners in remote locations: The effects of computer-mediated feedback and interaction via computer-mediated communication. ReCALL, 22, 112–134.

Narciss, S., & Huth, K. (2004). How to design informative tutoring feedback for multimedia learning. In H. M. Niegemann, D. Leutner, & R. Brunken (Eds.), Instructional design for multimedia learning (pp. 181–195). Munster, NY: Waxmann.

Nassaji, H. (2011). Immediate learner repair and its relationship with learning targeted forms in dyadic interaction. System, 39, 17–29.

Nicol, D. (2008). Transforming assessment and feedback: Enhancing integration and empowerment in the first year. Scotland, UK: Quality Assurance Agency.

Nicol, D. (2010). From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35, 501–517. doi:10.1080/02602931003786559

Nicol, D. J., & Macfarlane‐Dick, D. (2006). Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31, 199–218. doi:10.1080/03075070600572090

Orrell, J. (2006). Feedback on learning achievement: Rhetoric and reality. Teaching in Higher Education, 11, 441–456.

Orsmond, P., & Merry, S. (2011). Feedback alignment: Effective and ineffective links between tutors’ and students’ understanding of coursework feedback. Assessment and Evaluation in Higher Education, 36, 125–136.

Perera, J., Lee, N., Win, K., Perera, J., & Wijesuriya, L. (2008). Formative feedback to students: the mismatch between faculty perceptions and student expectations. Medical Teacher, 30, 395–399. doi:10.1080/01421590801949966

Phye, G. D. (1979). The processing of informative feedback about multiple-choice test performance. Contemporary Educational Psychology, 4, 381–394.

Pokorny, H., & Pickford, P. (2010). Complexity, cues and relationships: Student perceptions of feedback. Active Learning in Higher Education, 11, 21–30.

Poulos, A., & Mahony, M. J. (2008). Effectiveness of feedback: The students’ perspective. Assessment & Evaluation in Higher Education, 33, 143–154.

Price, M., Handley, K., Millar, J., & O’Donovan, B. (2010). Feedback: All that effort but what is the effect? Assessment and Evaluation in Higher Education, 35, 277–289. doi:10.1080/02602930903541007

Quinton, S., & Smallbone, T. (2010). Feeding forward: Using feedback to promote student reflection and learning – a teaching model. Innovations in Education and Teaching International, 47, 125–135.

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28, 4–13.

Reynolds, M., & Trehan, K. (2000). Assessment: a critical perspective. Studies in Higher Education, 25, 267–278.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144. doi:10.1007/BF00117714

Sargeant, J., Mann, K., Sinclair, D., Vleuten, C. V. D., & Metsemakers, J. (2008). Understanding the influence of emotions and reflection upon multi-source feedback acceptance and use. Advances in Health Sciences Education, 13, 275–288. doi:10.1007/s10459-006-9039-x

Schalkwyk, S. (2010). Early assessment: Using a university-wide student support initiative to effect real change. Teaching in Higher Education, 15, 299–310.

Schrodt, P., Witt, P. L., Turman, P. D., Myers, S. A., Barton, M., & Jernberg, K. (2009). Instructor credibility as a mediator of instructors’ prosocial communication behaviors and students’ learning outcomes. Communication Education, 58, 350–371.

Schunk, D. H., & Swartz, C. W. (1992, April). Goals and feedback during writing strategy instruction with gifted students. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.

Seifert, T. (2010). Understanding student motivation. Educational Research, 46, 137–149. doi:10.1080/0013188042000222421

Senko, C., & Harackiewicz, J. M. (2005). Regulation of achievement goals: The role of competence feedback. Journal of Educational Psychology, 97, 320–336.

Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 78, 153–189.

Sivunen, A. (2006). Strengthening identification with the team in virtual teams: The leaders’ perspective. Group Decision and Negotiation, 15, 345–366.

Sleeman, D. H., Kelly, A. E., Martinak, R., Ward, R. D., & Moore, J. L. (1989). Studies of diagnosis and remediation with high school algebra students. Cognitive Science, 13, 551–568.

Smith, C. D., & King, P. E. (2004). Student feedback sensitivity and the efficacy of feedback interventions in public speaking performance improvement. Communication Education, 53, 203–216.

Smither, J., London, M., & Reilly, R. (2005). Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Personnel Psychology, 58, 33–66.

Taras, M. (2003). To feedback or not to feedback in student self-assessment. Assessment and Evaluation in Higher Education, 28, 549–565. doi:10.1080/02602930301678

Thurlings, M., Vermeulen, M., Bastiaens, T., & Stijnen, S. (2013). Understanding feedback: A learning theory perspective. Educational Research Review, 9, 1–15.

Ting-Toomey, S. (2005). The matrix of face: An updated face-negotiation theory. In W. B. Gudykunst, (Ed.), Theorizing about intercultural communication (pp. 71–92). Thousand Oaks, CA: Sage.

Trees, A. R., Kerssen-Griep, J., & Hess, J. A. (2009). Earning influence by communicating respect: Facework’s contributions to effective instructional feedback. Communication Education, 58, 397–416.

Turner, J. E., Husman, J., & Schallert, D. L. (2002). The importance of students’ goals in their emotional experiences of academic failure: Investigating the precursors and consequences of shame. Educational Psychologist, 37, 79–89.

Van der Kleij, F. M., Eggen, T. J. H. M., Timmers, C. F., & Veldkamp, B. P. (2012). Effects of feedback in a computer-based assessment for learning. Computers & Education, 58, 263–272.

Van der Pol, J., Van den Berg, B. A. M., Admiraal, W. F., & Simons, P. R. J. (2008). The nature, reception, and use of online peer feedback in higher education. Computers & Education, 51, 1804–1817. doi:10.1016/j.compedu.2008.06.001

Van-Dijk, D., & Kluger, A. N. (2001, April). Goal orientation versus self-regulation: Different labels or different constructs? Paper presented at the 16th annual convention of the Society for Industrial and Organizational Psychology, San Diego, CA.

Värlander, S. (2008). The role of students’ emotions in formal feedback situations. Teaching in Higher Education, 13, 145–156. doi:10.1080/13562510801923195

Vermeer, H., Boekaerts, M., & Seegers, G. (2001). Motivational and gender differences: Sixth-grade students’ mathematical problem-solving behaviour. Journal of Educational Psychology, 92, 308–315. doi:10.1037/0022-0663.92.2.308

Vermunt, J. D., & Verloop, N. (1999). Congruence and friction between learning and teaching. Learning and Instruction, 9, 257–280.

Vickerman, P. (2009). Student perspectives on formative peer assessment: An attempt to deepen learning? Assessment and Evaluation in Higher Education, 34, 221–230. doi:10.1080/02602930801955986

Vygotsky, L. S. (1987). The collected works of L. S. Vygotsky. New York, NY: Plenum.

Wager, W., & Wager, S. (1985). Presenting questions, processing responses, and providing feedback in CAI. Journal of Instructional Development, 8, 2–8.

Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment and Evaluation in Higher Education, 31, 379–394. doi:10.1080/02602930500353061

Wenger, E. (2000). Communities of practice and social learning systems. Organization, 7, 225–246. doi:10.1177/135050840072002

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37, 3–14.

Wingate, U. (2010). The impact of formative feedback on the development of academic writing. Assessment and Evaluation in Higher Education, 35, 519–533. doi:10.1080/02602930903512909

Witt, P. L., & Kerssen-Griep, J. (2011). Instructional feedback I: The interaction of facework and immediacy on students’ perceptions of instructor credibility. Communication Education, 60, 75–94.

Yorke, M. (2003). Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education, 45, 477–501. doi:10.1023/A:1023967026413

Young, P. (2000). “I might as well give up”: Self-esteem and mature students’ feelings about feedback on assignments. Journal of Further and Higher Education, 24, 409–418. doi:10.1080/030987700750022325
