Chapter 3
How Do I Make My Reports More Useful to Consumers?

In Chapter 2 we reviewed the legal mandates for reports and evaluations. In Chapter 3, we focus on the concept of usefulness, including why making our reports more useful should be a major goal of our writing. We will outline our recommendations for how to make reports more useful for consumers, specifically parents and teachers. In doing this, we build on the research literature touched on in Chapter 1 as well as the ethical standards of our profession. As a reminder, thus far we have highlighted that our reports should address specific referral concerns and help those who work with children to do their jobs better. In essence, useful reports support these two themes.

Research indicates that consumers characterize useful psychoeducational reports as clear and understandable. For example, understandable reports communicate assessment data using language that is easily understood by the consumer (Harvey, 1997, 2006; Rafoth & Richmond, 1983; Weddig, 1984). Useful reports also clearly answer the referral questions, focus on strengths as well as needs, and provide concrete and feasible recommendations for educational planning (Brenner, 2003; Cornwall, 1990; Eberst & Genshaft, 1984; Teglasi, 1983; Wiener, 1985, 1987; Wiener & Kohler, 1986).

These assumptions of report accessibility and usefulness are reflected in the ethical and professional standards of our field. Both NASP’s Principles for Professional Ethics (NASP, 2010) and the Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2013) maintain that assessment findings should be presented in language clearly understood by the recipients and that the report should support the recipients in their work or interactions with the child.

Write Your Report with the Audience in Mind

In order to thoroughly examine the concept of usefulness, we need to consider our audience: the report consumers. Although psychological reports are often considered a type of technical writing, we believe they are best conceptualized as stories. In this, they share with other stories the need for a clear plot that pulls the reader along, and for characters, both main and supporting, who despite their flaws are portrayed in the most accurate and positive light possible. Stories are often complicated, as are children, but it is our job to pull the threads together to communicate a comprehensive view of the child and come to specific conclusions. Beyond a clear plot, the structure and language you use to tell a story depend on your audience. Given this, the first question we must ask ourselves about our reports is: Who is our intended audience?

Many individuals, including parents, school administrators, outside professionals, and sometimes legal counsel, read assessment reports hoping to better understand a child’s cognitive, academic, emotional, or behavioral functioning. However, we strongly argue that the most important consumers of a psychoeducational report are the student’s parents and teachers. Teachers and parents usually initiate psychoeducational assessments and implement the assessment recommendations (Gilman & Medway, 2007). If our reports are to positively influence their interactions with students and assist with educational planning, we need to know how to make the information in written reports more useful and understandable for these critical consumers.

Physicians most often write to other physicians, and attorneys most often write to other attorneys. This allows them to use a shared vocabulary and style designed to communicate information economically to insiders. Their writing is efficient for a specific audience but nearly impossible for those outside these professions to decipher. Psychoeducational reports are different in that they are in large part intended for consumers outside of the psychological profession, including, most importantly, teachers and parents. Unfortunately, many reports appear to be written with the wrong audience in mind. Given the amount of boilerplate legal language and technical psychological jargon used, perhaps many school psychologists are writing for other school psychologists or for attorneys with a background in special education law. These reports often become compliance documents that are very difficult to read, rather than useful communication tools.

This is not a problem unique to school psychology, nor is it a new phenomenon. In Tallent and Reiss’s 1959 survey of over 700 Veterans Administration mental health workers, the greatest criticism of psychological reports was that they were not written “with a useful purpose in mind” (Tallent & Reiss, 1959, p. 444). This problem persists despite a consensus in the literature that useful reports: (a) are understandable by the consumer, (b) provide specific answers to individualized referral questions, (c) focus on strengths as well as needs, and (d) give specific and feasible recommendations (Brenner, 2003; Cornwall, 1990; Eberst & Genshaft, 1984; Teglasi, 1983; Wiener, 1985, 1987; Wiener & Costaris, 2012; Wiener & Kohler, 1986). To return to our story metaphor, they tell a story to a particular audience, offer a balanced depiction of the protagonist, and come to a concrete conclusion. In the following sections we will explore each one of these characteristics in detail.

Useful Reports Clearly Answer the Referral Questions

In Chapter 1, we defined assessment as the process of gathering information to inform decisions. This process involves collecting and evaluating data for the purpose of responding to stakeholders’ questions and concerns, identifying needs and strengths, and making meaningful recommendations. Perhaps we should have clarified that this is the definition of a useful assessment. To be perceived as useful, we believe that an assessment and corresponding report have to directly respond to the concerns of parents and teachers.

It makes intuitive sense that a psychoeducational evaluation would be designed to answer the questions posed by those who referred the child. However, more often than not, we see evaluations designed around a fixed protocol of tests geared toward a specific disability classification. For example, all students suspected of having a learning disability are assessed using Test A, Test B, Test C, and Test D, regardless of individual differences, strengths, or concerns. Although this system of assessment may gather a great deal of data, it may not be useful or helpful in supporting those who live and work with the child to better understand the child’s needs. Since a primary goal of a useful report is to answer the referral questions, we must identify these questions early in the evaluation process. We can then design our assessment to gather specific data to answer the referral questions and then clearly answer these questions in our report.

To develop these questions, we recommend two strategies. First, we recommend a thorough review of records, including an examination of cumulative school records and work samples. Understanding the child’s educational history and progress will help identify suspected areas of disability and need. Second, collaboratively create the assessment plan with the parents, teachers, and other service providers requesting the evaluation. This can take place at a meeting with all parties present or through individual conversations with each person. Through these two steps, you will clarify the concerns to be addressed in your assessment and identify suspected areas of disability and need. As reviewed in Chapter 2, this facilitates meeting the legal mandate of conducting a comprehensive assessment. In Chapter 4 we will review how to write referral questions and use those questions and answers as a structure for your report.

Useful Reports Focus on Strengths as Well as Needs

Addressing strengths as well as problems and pathology has become a common recommendation in psychology and related professions (Snyder, Ritschel, Rand, & Berg, 2006). In addition, there is considerable support for addressing strengths in reports (Brenner, 2003; Brown-Chidsey & Steege, 2005; Cornwall, 1990; Eberst & Genshaft, 1984; Teglasi, 1983; Wiener, 1985, 1987; Wiener & Costaris, 2012; Wiener & Kohler, 1986). Philosophically, we strongly support this approach, and in our own practice we have attempted to integrate a strengths perspective into our evaluations and the reports we write. As Snyder and his colleagues explain, “Each individual exists and functions in a variety of contexts by using a combination of strengths and weaknesses” (2006, p. 34). To be comprehensive, we believe that it is important to account for the full range of a child’s functioning and not just focus on challenges and problems.

Including strengths and resources in a comprehensive evaluation has several advantages. Conversations about strengths encourage greater participation in the assessment process and subsequent interventions (Epstein, Hertzog, & Reid, 2001). A focus on strengths also encourages practitioners to work on enhancing skills and learning while simultaneously reducing problems or symptoms (Epstein et al., 2001; Nickerson, 2007).

Integrating client strengths into your evaluation can be accomplished by working closely with the referral sources to formulate referral questions that consider assets and strengths as well as problems (Snyder, Ritschel, Rand, & Berg, 2006). It is also important to incorporate a schema for considering strengths into our assessments. In addition to the models of RIOT and the Rule of Two (Leung, 1993; Levitt & Merrell, 2009), discussed in Chapter 2, Snyder et al. (2006) have suggested a four-quadrant matrix that guides practitioners to assess for both assets and weaknesses within the client and the environment (Snyder & Elliott, 2005; Wright & Fletcher, 1982). This matrix can help guide our assessment questions and the data we collect, as well as the themes that we develop from our assessment results. In essence, the matrix simply encourages you to include the student’s personal and environmental assets as well as challenges.

In the following example, Max is a newly assessed fifth-grader who will soon be transitioning to middle school on a new campus. His psychoeducational report may be his first introduction to his new teachers, so a goal is to clearly discuss his learning disability and our recommendations in the report. But it is equally important that the report reflects Max’s strengths as well as his weaknesses. In this example, information about his personal assets helps the reader understand that although Max reads at a mid-third-grade level, he is very intelligent and mature for his age.

Table 3.1 Incorporating Strengths as Well as Needs into an Evaluation

Environmental Assets | Environmental Challenges
Personal Assets | Personal Challenges

Example: Including Student Strengths in Your Report

Later in the report, his strengths were used to write the following recommendation:

Useful Reports Provide Concrete and Feasible Recommendations for Educational Planning

Results from several studies suggest that consumers such as teachers and parents have a very practical view of reports. Although they might find our results and diagnostic conclusions interesting and perhaps useful, they are most concerned about the implications of those conclusions and our recommendations. Many reports lack meaningful recommendations, either because the results do not actually lend themselves to practical recommendations or because the authors have avoided offering recommendations out of what we believe is a mistaken fear that doing so will legally bind the team to accept their recommendations and provide the services those suggestions imply.

In their report writing guide, Wolber and Carne (2002) assert that individualized and prescriptive recommendations are the hallmark of a useful report. Research has supported their claim. As early as 1945, Cason argued that reports utilizing vague generalizations and lacking appropriate and specific recommendations hindered communication between teachers and psychologists (Cason, 1945). In 1964, Mussman conducted an informal evaluation of report content with a small group of teachers. The teachers surveyed placed high value on the recommendation section. Brandt and Giebink (1968) corroborated Mussman’s findings. In their investigation of experienced elementary and high school teachers, they found that reports with specific recommendations were preferred over reports with abstract recommendations. Hagborg and Aiello-Coultier (1994) evaluated teachers’ perceptions of psychoeducational reports they received on their students. The vast majority of the teachers rated themselves as satisfied to very satisfied with the school psychologist’s report. However, teachers also indicated they viewed the report as an opportunity to receive specific suggestions and strategies and that the recommendations provided were often too few and too abstract. These findings were consistent with the results of Davidson and Simmons’ (1991) qualitative survey of special and general education teachers, and special education teachers in training. The respondents in this study maintained that much of the information in reports was irrelevant to their work with students. They saw a need for a more collaborative approach to the assessment, use of classroom terminology and constructs, and clear recommendations for intervention.

Parents have also identified clear and individualized recommendations as a highly valued assessment outcome. In their study, Tidwell and Wetter (1978) had parents complete a questionnaire designed to determine their expectations and satisfaction with their child’s psychoeducational evaluation. The participants identified the recommendations and suggestions for helping them work with their child as the primary motivation for the evaluation and the most valued part of the psychoeducational report.

Consider the following recommendation, which one school district near us requires in all reports: “Share and discuss evaluation results with the IEP team and discuss appropriate educational interventions.” What is remarkable, given the research showing that useful recommendations are what consumers value most in reports, is that this is the only recommendation provided. Consider this from the perspective of a worried parent or teacher waiting for information that will help them understand and help a child. You have spent several hours assessing the child, they have waited anywhere from a few weeks to two months for the results, and in the end you have nothing to recommend other than that the IEP members should read your report and then talk about it. This is not useful and, at a minimum, would be incredibly frustrating for many of the IEP team members. A neurologist colleague of ours likens this to diagnosing someone with a seizure disorder and offering only the recommendation that the patient discuss it with his general practitioner (J. Donnelly, personal communication, 2012).

It is clear that, legally, a “group of qualified individuals” including the parent establish eligibility for special education services and, if needed, develop an Individualized Education Plan (IEP). However, this does not imply that the person or persons who conducted the assessment should not make meaningful recommendations that contribute to the IEP or, if the child is not eligible for special education services, make appropriate and helpful recommendations to the general education teacher or parents. Indeed, we have to ask ourselves why anyone would value our services if, in the end, we have nothing useful to say about the child we have assessed.

The avoidance of meaningful recommendations seems a solution in search of a problem. We have been told that this type of generic, pass-the-buck non-recommendation is a precautionary measure so that the school district will not be obligated to provide whatever an evaluator recommends. This is incorrect: legally, there is no obligation to provide what an evaluator recommends (McBride, Dumont, & Willis, 2011). There is an obligation to consider the recommendations provided in evaluation reports, though it is up to the IEP team to decide what services and accommodations are appropriate to include in the IEP. Indeed, there is an ethical aspect to providing useful recommendations. For example, the 1997 version of the ethical standards of the National Association of School Psychologists was very clear that there is an ethical obligation to write reports that “assist the student or client” and “emphasize recommendations and interpretations” (National Association of School Psychologists, 1997, p. 29). In the most recent NASP Principles for Professional Ethics (2010), the Professional Competence and Responsibility standards imply throughout that making recommendations is part of the school psychologist’s practice.

When teachers perceive reports as useful, they are also more likely to follow the recommendations stemming from a psychoeducational evaluation (Andrews & Gutkin, 1994; Gilman & Medway, 2007). Interestingly, and as a segue to our next theme, in their evaluation of the persuasiveness of psychoeducational reports, Andrews and Gutkin (1994) demonstrated that increasing text understanding, familiarity, and message quality within a report enhanced its influence and increased teacher compliance with recommendations. In other words, clear and understandable writing can influence teachers’ consideration of and cooperation with your recommendations. We will cover writing good recommendations in Chapter 4, but for now, remember that teachers and parents place high value on recommendations that are detailed and appropriate for implementation in the school environment (Bagnato, 1980; Mussman, 1964; Salvagno & Teglasi, 1987).

Here are a few examples. These recommendations are linked to the assessment results, are student specific, and reflect an understanding of the classroom environment and curriculum.

Examples: Include Useful Recommendations in Your Report

Useful Reports Are Clear and Understandable

Making the information from psychological assessments accessible and understandable to consumers is important for practical, legal, and ethical reasons. The National Assessment of Adult Literacy (NAAL; United States Department of Education, 2003) assessed the English literacy of over 19,000 adults in a sample representative of the U.S. population ages 16 and older. The NAAL classified participants’ literacy into five descriptive categories. Adults categorized as Non-literate possessed no English literacy skills. Adults in the Below Basic category had the most simple and concrete literacy skills, such as the ability to locate easily identifiable information in short, commonplace text. A Basic categorization meant that adults could perform simple, everyday literacy activities, such as reading and understanding information in short, common texts. The Intermediate and Proficient levels described adults who could perform moderately complex to complex literacy activities such as reading, summarizing, making inferences, and synthesizing text. Based on these data, the NAAL concluded that the prose literacy skills of 43% and the quantitative literacy skills of 55% of the adult population in this country were at the Basic level or below.

The results of the NAAL survey have ethical and legal implications for report writing. At least half the adult population (and we are certain this is a very conservative estimate) does not possess the literacy skills to read and comprehend the information provided in a typical psychoeducational report. Not only does this undermine our purpose if parents are among the main consumers of our reports, but it can be argued that it violates their rights under IDEA (Harvey, 1997; Weddig, 1984).

IDEA (2004) provides clear guidelines regarding parents’ right to be involved in the evaluation process and to participate in the decisions that follow. Yet, to involve parents in this process authentically, they must have access to adequate information that is understandable, which is one of the key tenets of informed consent (Lidz, 2006). If parents cannot read or understand the psychoeducational report, they cannot be informed participants in their child’s educational planning and decision making (Weddig, 1984). One barrier is that the written information in reports is often communicated in a way that few parents can understand. The ethical and legal obligation to ensure that parents have access to the information they need to authentically and fully participate in the evaluation and decision-making process falls on us as professionals.

Readability Impacts the Usefulness of Your Reports

One of the most important ways of making the information in psychological reports more understandable is to increase the readability of the text. Readability refers to how easily written material can be read and comprehended. Reader competence, level of education, and word and sentence difficulty are all factors correlated with the readability and comprehension of text (Klare, 1976). There are several ways to determine readability, but most formulas use a combination of vocabulary difficulty and sentence complexity to estimate how easy it is to read a passage or document. Readability formulas have been in use since the mid-1930s; measures such as the Flesch Reading Ease and the Flesch-Kincaid Grade Level estimate the amount of formal schooling needed to read and comprehend a text (Flesch, 1948; Klare, 1976).

Readability is important because researchers going back several decades have consistently found that increasing the readability of text such as a magazine or newspaper article, through using simpler vocabulary and sentence structures, both increased the number of people who actually read the material and improved the comprehension of those readers (Lostutter, 1947; Murphy, 1947; Schramm, 1947; Swanson, 1948). Later, Klare and colleagues found that improving readability increased both the amount read and the amount learned among Air Force recruits (Klare, Mabry, & Gustafson, 1955; Klare, Shuford, & Nichols, 1957). It is interesting to note that Lostutter (1947) found that the reading ease of newspaper articles had little to do with the education level of the writer but rather reflected the conventions and culture of the newspaper industry. This implies that perhaps psychological reports are not written at a graduate level because the authors have advanced degrees and knowledge, but rather they are following the conventions of the profession.

Here is one of our favorite examples of this concept in action: Klopfer (1960), an early report writing researcher, wrote, “It is my contention that any statement found in psychological reports could be made comprehensible to any literate individual of at least average intelligence.” This sentence has a Flesch-Kincaid Reading Level of 17.8, meaning that a graduate education level would be needed to read and understand this statement with ease. No offense to Dr. Klopfer, but by using simpler vocabulary and sentence structure, this sentence can be rewritten to “I think a psychological report can be written so most people can understand it.” This sentence now has a Flesch-Kincaid Reading Level of 9.2, meaning a typical high school freshman could read this sentence with ease. The content of the sentence is still intact; it is just presented in much more understandable language.
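If you are curious how these grade-level estimates are produced, the small Python sketch below shows the arithmetic behind the Flesch-Kincaid Grade Level: a weighted combination of average sentence length and average syllables per word. The syllable counter is a crude vowel-group heuristic of our own, so the numbers it produces will approximate, but not exactly reproduce, the figures reported by word processors or published readability tools:

import re

def count_syllables(word):
    # Rough heuristic: count vowel groups; real readability tools use
    # pronunciation dictionaries, so this is only an approximation.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # drop a likely silent final "e"
    return max(count, 1)

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid Grade Level formula
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

klopfer = ("It is my contention that any statement found in psychological "
           "reports could be made comprehensible to any literate individual "
           "of at least average intelligence.")
rewrite = ("I think a psychological report can be written so most people "
           "can understand it.")
print(round(flesch_kincaid_grade(klopfer), 1))  # about 17 with this rough heuristic
print(round(flesch_kincaid_grade(rewrite), 1))  # about 8 with this rough heuristic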

Despite the advantages of increasing the readability of reports, most research has found that a very high level of literacy skills is needed to comprehend the average psychological report. For example, in Weddig’s (1984) analysis of traditional psychological reports, the mean readability was at the 14.5-grade level. Several years later, Harvey (1997) found similar readability levels and suggested that school psychology graduate students receive direct training in how to make their written reports more accessible for the reader.

In a later exploratory study of the variables affecting report clarity and understanding, Harvey (2006) offered the same suggestion. Harvey evaluated the language used in psychological report examples and models collected from 20 textbooks commonly used to train school psychology graduate students. Ironically, the majority of the textbooks recommended writing clear and understandable reports, yet the sample excerpts and reports they provided had a mean Flesch-Kincaid Reading Level of 18.5.

We believe that reports can be written in such a way as to increase their readability and enhance their accessibility for parents and teachers. For example, in Weddig’s (1984) research, a traditional psychological report written at the 15th-grade level was rewritten at a 6th-grade level by replacing professional terminology with behavioral descriptions and eliminating information considered irrelevant to educational decision making. Parents reading the modified (i.e., 6th-grade level) report scored significantly higher on a comprehension test than those reading the traditional report (i.e., 15th-grade level). This finding supports the recommendation that lowering the readability level of psychological reports will facilitate parental understanding while maintaining adequate coverage and validity of the assessment information.

Increasing the readability of reports is helpful for all consumers but especially critical for parents, who may not have the literacy skills for, or familiarity with, the psychological language we use in our reports. This is not an easy task, and there is no single readability level we hold up as the goal. Both of us have actively worked on increasing our report readability. On average, we write reports at approximately an 11th- to 12th-grade Flesch-Kincaid Reading Level (FKRL), and we both strive to lower our FKRLs further, with a goal of approximately 9th to 10th grade. So, how do we actually do this? We offer four suggestions to help increase the readability of your written reports. First, reduce the professional or technical jargon. Second, cut extraneous words and use an active voice. Third, carefully consider the length, including the amount and quality of information included in your reports. Fourth, use a report structure that integrates data and highlights relevant assessment findings.

Increase Readability by Reducing Professional Jargon

Psychological jargon, or the use of technical terminology in reports, has frequently been identified as a hindrance to understanding by both psychologists and non-psychologists (Harvey, 2006; Ownby, 1997; Tallent, 1993; Weddig, 1984). Books and guidelines on report writing commonly recommend avoiding jargon and technical terms whenever you can.

Although the language used within written reports can be controlled and altered by the writer, the lack of consensus among school psychology practitioners and test publishers regarding technical terms such as working memory, auditory processing, or learning disabled makes this task a formidable one (Harvey, 2006; Rafoth & Richmond, 1983). For example, in her survey of practicing psychologists, Harvey (2006) requested a narrative and numerical definition of the term average intelligence. The results suggested that the participating psychologists did not have a standard definition for the term, noting that 30% of respondents disagreed with the most commonly used numerical definition and provided qualitatively different narrative definitions. Rafoth and Richmond (1983) investigated the level of understanding and perceived usefulness of psychological terms by school psychology students, education students, and practicing special education teachers. Although educators with a special education background rated some technical language as more useful, in general, disagreement was noted in the usefulness of some frequently used terms such as hyperactive, grade level, and developmental delay.

In many of our workshops, we ask the participants to define a technical term often seen in psychoeducational reports, such as visual processing. Although this activity clearly lacks scientific rigor, the results have been the same in dozens of trainings. The participants struggle with the task. They usually stop making direct eye contact with us, perhaps fearing we may call on them to answer the question. Rarely does a participant want to share a definition in front of a large group of professional peers. What are the implications of these findings, both the research-based and the anecdotal? Perhaps, as a field, we do not have a professional consensus on what many of these technical terms mean. This dilemma should lead us to question our belief that these terms have an unambiguous meaning and are somehow self-explanatory for the reader of our report.

We recommend that you be cautious in using this kind of language, and when you need to use technical language, make sure to explain each term in the context of the student and evaluation. Rewrite technical information using easily understood language and vocabulary and provide clear behavioral examples when using technical terms. If this seems a formidable task for you, remember it will be no easier for the reader. Put simply, if you cannot clearly define a term or provide examples of what it looks like, we recommend that you do not use it.

Consider the following examples of John and Kelly, two kindergarteners. The technical jargon is presented first and then rewritten in easily understood language with clear behavioral examples. As you read, ask yourself which description is more helpful. Which description provides information the IEP team can use to make decisions about services, goals, and accommodations?

Examples: Rewriting Technical Jargon

Rewrite technical information using easily understood language and provide clear behavioral examples:

Increase the Readability of Psychoeducational Reports by Cutting Words and Using Active Voice

Although it is perhaps unusual to cite George Orwell in a book written for psychologists, Orwell’s advice on writing is useful. In “Politics and the English Language,” an essay collected in Shooting an Elephant and Other Essays (1950), Orwell proposed six rules for writing. We paraphrase four of them that we think are especially useful for authors of reports:

  1. Never use a long word when you can use a short one instead.
  2. If it is possible to cut a word out, cut it out.
  3. Never use the passive voice where you can use the active voice.
  4. Never use a scientific word or jargon if you can think of an everyday English equivalent.

Numbers 1 and 4 relate to our discussion of professional jargon. Numbers 2 and 3 can also go a long way toward making your writing more readable. Cutting words is easy once you develop an eye for it. Some examples are small. For example, why say “school context” or “preschool setting” when “school” or “preschool” will do just as well? Other examples involve cutting the unnecessary language out of sentences. Consider this sentence:

John has not experienced any significant changes related to his health and development in recent years.

That’s not a terrible sentence, but we can easily cut words and make it clearer:

John has not had any significant changes in his health in recent years.

Or, better yet, use positive phrasing and say:

John is healthy.

Notice that we have gone from 16 words to 3. If John’s health were actually poor and there had been no changes, we would, of course, have phrased this differently. But most of the time, writers are simply referring to a health status that is unremarkable in the sense that the child is healthy and has no significant health or medical problems. In that case, “John is healthy” will almost always work just fine.

Here is a 33-word sentence:

John demonstrated satisfactory progress in all areas of personal growth and study habits, with the exception of “completes and returns homework on time,” in which he received a grade of E (excellent progress).

One revision could reduce this to 31 words, which is not a big savings:

John earned grades of “satisfactory progress” in all areas of study habits on his report card except “completes and returns homework on time,” where he earned a grade of “excellent progress.”

But if we consider the essential meaning of this sentence, we could say:

John earned grades of “satisfactory” to “excellent progress” in all areas of study habits on his latest report card.

Another way to cut words in reports is to remember Stephen King’s writing adage: “The road to hell is paved with adverbs” (King, 2000). Adverbs and adjectives are sometimes referred to as “intensifiers” and are used when you think the reader needs an extra boost to get your point. If Stephen King can get by without adverbs while writing horror fiction, then writers of psychological reports can get by without them, too.

Passive voice is when the object of an active sentence (I interviewed John) is made into the subject (John was interviewed). Too much passive voice will deaden your writing and make it harder to read. We acknowledge that passive voice is difficult to avoid in psychoeducational reports, especially when you choose not to use the personal pronouns I or me. The question is one of degree. Use passive voice sparingly and avoid it when you can. An example of a sentence written in passive voice that could easily be changed to active voice is:

In the preschool setting, John’s difficulty with managing anger, frustration, and fear was noted.

The first question is, who noted these behaviors? In this case let’s assume it was the teachers. The conversion to active voice would be:

In preschool, John’s teachers observed that he had difficulty managing anger, frustration, and fear.

Note that we also removed “setting,” which is an example of cutting unnecessary words. We include two more examples to really drive the point home:

Passive: His behavior at school is similar to his behavior at home, according to John’s parents.

Active: John’s parents said his behavior at school is similar to his behavior at home.

Passive: John was adopted at birth.

Active: Mr. and Mrs. Lopez adopted John at birth.

Although we prefer active voice, we will opt for passive voice rather than refer to ourselves in the third person as “this psychologist” or “the interviewer.” To the average reader of our reports, this will simply seem odd. For us, it seems too clinical. We are writing the report for an audience: “I” conducted the assessment and wrote the report, so it seems peculiar to refer to myself in the third person as “the psychologist.” However, if you are part of a multidisciplinary assessment and report, simply use active voice when clarifying who conducted each part of the evaluation. Our final examples provide an idea of what these options might look like:

Passive voice, third person: John was observed during recess by the school psychologist.

Active voice, third person: The school psychologist observed John at recess.

Active voice, first person: I observed John at recess.

You can use your word processing software to help reduce passive voice. Most word processing programs allow you to set the spelling and grammar function to flag sentences written in passive voice. When you run a spelling and grammar check, the grammar function will highlight the passive sentences, and the readability statistics we reviewed earlier will report the percentage of passive sentences. Again, there is no perfect percentage, but we have each set a personal goal of 20% or fewer passive sentences in our reports.
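For readers who want a feel for what such a check is doing, here is a minimal sketch in Python, assuming a crude rule of thumb we made up for illustration: a form of “to be” followed by a word that looks like a past participle. Commercial grammar checkers use much more sophisticated parsing, so treat the number this produces only as a rough screen, not as the exact statistic your word processor reports:

import re

# Crude passive-voice heuristic: a form of "to be" followed, possibly after
# an adverb, by a word that looks like a past participle. Real grammar
# checkers parse sentences fully; this only approximates the statistic.
PASSIVE = re.compile(
    r"\b(?:am|is|are|was|were|be|been|being)\s+(?:\w+ly\s+)?\w+(?:ed|en)\b",
    re.IGNORECASE,
)

def passive_sentence_percentage(text):
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    passive = sum(1 for s in sentences if PASSIVE.search(s))
    return 100.0 * passive / len(sentences) if sentences else 0.0

excerpt = (
    "John was observed during recess by the school psychologist. "
    "I interviewed John on March 3. "
    "His difficulty with managing frustration was noted by his teachers."
)
print(f"{passive_sentence_percentage(excerpt):.0f}% passive sentences")  # prints 67%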

Increase the Readability of Psychoeducational Reports by Considering the Length, Including Amount and Quality of Information

As we noted in Chapter 1, we have observed a trend in the area where we practice toward very long reports of 30-plus pages. Reports of this length can be difficult for anyone to read, let alone parents and teachers. Interestingly, some of the research on readability we discussed earlier in this chapter found that a newspaper story nine paragraphs long lost 3 out of 10 readers by the fifth paragraph, making a case for shorter rather than longer reports (Lostutter, 1947). Lengthy reports often contain considerable generic, rather than individualized, information about the child being assessed and include a level of detail that is rarely helpful to consumers. Another problem we have observed is that these long reports do not integrate or interpret information in a way that is easy to follow, thus lacking a coherent plot and breaking the first rule of a good story. In addition to our own observations, several researchers have found that longer reports are not preferred or considered more useful than shorter ones (Brenner, 2003; Brown-Chidsey & Steege, 2005; Donders, 1999).

There is no magic number of pages for reports, and both children and evaluations vary in their complexity. Research on the preferences of both parents and teachers suggests that integration of information and recommendations, not report length, matters most (Wiener, 1985, 1987; Wiener & Kohler, 1986). Both of us have done many complex evaluations of children with significant referral concerns, often involving information from multiple sources. Rarely have these reports exceeded 20 pages, a number that strikes us as probably longer than necessary. On average, we write reports of between 5 and 10 pages. Some have suggested that once reports become longer than five to seven pages, the length itself may become a barrier to understanding the information (Wiener, personal communication cited in Mastoras, Climie, McCrimmon, & Schwean, 2011). Our own experience bears this out. Although we are obviously sophisticated readers of reports, we often lose patience with longer reports and find ourselves flipping through the pages searching for a summary or some concise presentation of what is meaningful.

Report length is sometimes a function of adding unnecessary detail about every aspect of the evaluation. This “throwing data on a page” obscures our primary task as assessors, which is to make meaning of the data gathered and communicate this meaning in a way that is helpful. Part of what makes this information difficult to understand is that it is not really about the child being assessed but rather consists of generic language, often cut and pasted from test manuals, that does not communicate anything meaningful to the readers of the report. Another problem is that scores and observations are often described in great technical detail, which unfortunately can obscure what the data actually mean for those who work with a child.

The research finding that integration of information is a key consumer preference also suggests that much of the generic technical information and boilerplate claims about the legality of your evaluation could easily be left out of reports or, in some cases, placed in an appendix at the end of the report. This would both shorten most reports and place the focus on the student-specific data and information presented. A simple recommendation is, to the maximum extent possible, to focus the content of a report on the specific child assessed. This seems like a straightforward point, but we frequently find ourselves wading through large amounts of generic material in reports that could have been written about any child. We have both read independent educational evaluation (IEE) reports on students that were over 250 pages long. In these reports, the vast majority of the information presented has nothing to do with the student who was assessed. There are pages of generic information about every assessment tool, including technical information about each subtest. Readers would have to wade through pages of this material to get to a sentence or two about the child and then somehow remember that information as they work through the next 20 or so pages. This shock-and-awe method of presentation is clearly an extreme example, though it highlights the need to keep our audience in mind and focus on the essential information. Consider the following example, pulled directly from a report from a local school district, in which Leslie’s performance on the Test of Auditory Processing–3 was presented. Read the example, and then try to answer the questions that follow.

Example: Data’s Meaning Lost in the Presentation

Ask Yourself:

  • Are these data interpreted in a way that consumers can understand?
  • Are there data from multiple sources?
  • Will this information help the IEP team write goals and accommodations?

If you answered “No” to any of these questions (and we hope you answered “No” to all of them), then you have remembered some fundamentals of this book: Interpretation and integration of data is the psychologist’s job, not the reader’s, and our report is the foundation for the IEP team’s decision making. The previous example is a classic illustration of the meaning of evaluation data being lost in the presentation. There are many reasons for this, including overuse of technical language and jargon, inclusion of unnecessary information, exclusion of any meaningful interpretation of the evaluation results, and poorly structured information and formatting. The responsibility for interpretation and integration falls on the school psychologist. In the following example, we present the same data but link performance on the standardized measure to the classroom and home.

Example: Data Interpreted and Integrated

This rewrite interprets the test results in the context of all the evaluation data, without over-interpreting single-index scores (a topic for another book altogether), providing meaningful information for the reader. In doing this it also shortens the presentation from 465 words at a 13.3 FKRL to 196 words at a 12.1 FKRL.

Of course, other psychologists, special education teachers, administrators, and perhaps attorneys will also read our reports. In some cases, people who have a more extensive knowledge of evaluation will want access to technical information regarding the instruments we have used or the scores we reference. However, it is important to remember that these people are not our primary audience, so rather than make this kind of detailed technical information the focus of our reports we recommend that you be prepared to answer these questions, if asked, or perhaps place the information in appendixes or attachments at the end of your report.

Increase the Readability of Your Report by Using a Report Structure That Integrates Data and Highlights Relevant Evaluation Findings

As we demonstrated in Leslie’s example, one of our major concerns about many of the reports we read is that the data are not integrated in a way that allows the reader to clearly understand the major findings or “big ideas.” To return to our analogy of reports as stories, the plot points are unclear. In the next section, we discuss the advantages and disadvantages of different report models and propose a question-driven, theme-based report model as the most efficient report writing structure for focusing your writing on the important assessment elements and facilitating the integration of assessment data.

Report writing structures or models refer to the organizational style used to present assessment results. Like the chapters and subheadings of a book or article, the structure of a report can contribute to a clear and understandable presentation of the results by guiding the readers’ thinking (Ownby, 1997; Tallent, 1976; Wiener, 1987). To establish which report writing models have traditionally been taught, we reviewed common graduate school assessment training texts (e.g., Flanagan & Harrison, 2005; Salvia & Ysseldyke, 2001; Sattler, 1992, 2001, 2008) and report writing guides (e.g., Bradley-Johnson & Johnson, 2006; Lichtenberger, Mather, Kaufman, & Kaufman, 2004; Ownby, 1997; Tallent, 1993; Wolber & Carne, 2002).

Report writing is the culmination of the assessment process; however, in our review, the majority of assessment texts did not mention report writing and only Sattler (1992, 2001, 2008) devoted an entire chapter to report writing strategies. Sattler (2008) did not recommend a specific report format, though he strongly recommended assessment findings be synthesized and clearly presented. A well-written report requires the writer to integrate and analyze the assessment data. All of the reviewed guides recommended using an organizational structure to support an integrated and cohesive report. Based on our review, we have grouped report writing models into (a) test-based, (b) domain-based, and (c) referral-based.

Test-based reports are characterized by a sequenced presentation of each assessment tool (usually standardized tests) and typically use test titles as headings. The test’s purpose and the student’s quantitative test results are the primary information provided. In Sattler’s (1992) opinion, test-based reports are the most common and the easiest for novice report writers to write because they often lack the more difficult-to-write qualitative interpretive information. The difficulty with a test-based report is that a well-written report requires the writer to integrate and analyze the assessment data, not simply report test performance data. Teglasi’s (1983) statement that “tests do not tell the results; rather, the psychologist reports the results after using the tests to arrive at interpretation, conclusions, and recommendations” (p. 472) highlights why a test-based structure has been discouraged by many authors (e.g., Ownby, 1997; Tallent, 1993; Wolber & Carne, 2002), including ourselves. If standardized testing is an integral part of your assessment, then test scores clearly have value. However, test scores alone are not useful and, separated from the assessor’s interpretation, can be misleading for many report consumers. Presenting scores and score descriptors alone, as in the TAPS-3 narrative earlier in this chapter, provides no interpretation and leaves the work of integrating what the test data mean to the reader.

Domain-based reports present evaluation data in specific assessment categories such as Cognitive Abilities, Academic Achievement, and Social–Emotional Functioning. A pattern of strengths and weaknesses is often presented under each domain. Traditionally, the summary is the section that integrates the report sections into a whole. Many authors classify domain-based reports as a traditional report writing model for school psychologists (Batsche, 1983; Ross-Reynolds, 1990). In their most recently published guide to report writing, Bradley-Johnson and Johnson (2006) used domain-based reports as the model and exemplar. Although others’ styles were reviewed, the psychological report examples by two sets of authors (i.e., Lichtenberger, Mather, Kaufman, & Kaufman, 2004; Ownby, 1997) were entirely domain based.

We acknowledge that a domain-based structure is preferable to a test-based structure, though we do not recommend the use of a domain-based model for two reasons. First, the reader often has to wade through pages of information prior to receiving key information or a summary. This can leave the reader to reconstruct the report domains into a whole. Often we see school psychologists use the headings of a domain-based report, but then revert to a test-based presentation under each domain heading. Second, in domain-based reports, the categories or domains can predetermine the areas of importance, which can make the evaluation and report less focused on unique concerns and needs.

Referral-based reports are organized around assessment-based answers to specific referral questions. Referral-based reports are a product of a referral-based evaluation where report writing is integrated into the assessment process. Assessment tools are chosen and assessment data are interpreted in the context of the referral questions and then presented by statements or thematic headings within the report. The conceptualization of a psychological assessment as a consultation-based inquiry focused on answering specific referral questions is a fundamental shift from the traditional idea of an evaluation as a search for disability or profile of students’ strengths and weaknesses. If you are unsure what types of reports you are writing, Table 3.2 will help you identify the style based on the headings you are using.

Table 3.2 If Your Headings Look Like This . . .

REPORT HEADING | REPORT STRUCTURE
Woodcock Johnson IV: Test of Cognitive Abilities | Test-Based
Cognitive Ability | Domain-Based
Cognitive Ability, with test subheadings (1. Woodcock Johnson IV: Test of Cognitive Abilities; 2. Kaufman Brief Intelligence Test–2) | Domain/Test-Based
What are Michael’s cognitive strengths and weaknesses and how do they impact his ability to access the grade-level curriculum? | Referral-Based

Unfortunately, research investigating effective report writing models has been limited, with the majority of the literature focused on professional opinions of best practice (e.g., Lichtenberger, Mather, Kaufman, & Kaufman, 2004; Ownby, 1997; Tallent, 1993; Wolber & Carne, 2002); however, a small number of studies have considered the impact of different report formats.

Mussman (1964) conducted the first published research investigating different report structures. A brief handwritten report containing a statement of the referral question, a description of the student’s performance, and recommendations related to the referral question was compared to a more traditional typewritten report containing information regarding the behavior and appearance of the student during the evaluation, test scores and analyses, student interview information, and recommendations. Twenty-five teachers read a number of reports from one of the formats and then completed an opinion questionnaire regarding the reports they read. Although the study suffered from several limitations, Mussman (1964) encouraged school psychologists to self-evaluate the usefulness of their report-writing techniques.

Bagnato (1980) hypothesized that the report format and the synthesis of assessment information would influence teachers’ ability to match assessment information to appropriate classroom-based intervention goals. Preschool-level special education teachers read two different report formats and were asked to make judgments about appropriate educational goals. Both formats presented the same assessment information; the translated report included a thorough description of how the assessment data were linked to classroom objectives, while the traditional report did not include linkages to classroom-based practices. The teachers who received the translated report were significantly more accurate in linking the assessment information to curriculum goals than the teachers who received the traditional report. Bagnato (1980) recommended that the report writer synthesize the comprehensive assessment data, and then organize and present the information around functional domains (i.e., domain-based) rather than the tests given (test-based). Bagnato also recommended that strengths and skill deficits be presented in clear behavioral terms to facilitate the creation of individualized objectives. Lastly, Bagnato recommended the inclusion of detailed educational and behavioral recommendations. These report attributes were viewed as more relevant to intervention planning and the creation of individualized education goals.

Wiener (1985, 1987) and Wiener and Kohler (1986) investigated the comprehension of parents, elementary and secondary school teachers, and school administrators for different psychological report formats. Over the course of these investigations, the researchers randomly assigned participants to read one of three report formats. The Short Form was a single-page report including the reason for referral, assessment results, and succinct recommendations. The Psychoeducational Report format was a domain-based report with information clustered under specific headings. The report was written in behavioral terms with the sources of information clearly identified. Technical terminology was avoided or clearly defined, and recommendations were specific and carefully explained. The Question-and-Answer report utilized a referral-based format, including the same information, terminology, and recommendations as the Psychoeducational Report, though the Reason for Referral section consisted of a list of referral questions drawn from interviews with teachers and parents. Each question was directly addressed within the report. The researchers found that the educators and parents comprehended and preferred the Psychoeducational and Question-and-Answer formats over the Short Form report. Wiener (1987) hypothesized that the domain- and referral-based reports were more coherent, facilitating connections between the assessment information presented and the readers’ prior knowledge. Wiener recommended that school psychologists include clear and specific examples of technical terms and concepts, as well as elaborate descriptions of the student’s current functioning and recommendations.

Salvagno and Teglasi (1987) examined the perceived helpfulness of a test-based versus an observation-based psychoeducational report. Although the teachers’ ratings of helpfulness were not significantly different for the two report formats, interpretive information was consistently rated as more helpful than test-based quantitative statements. Teachers preferred interpretations that reflected the school psychologists’ integration and synthesis of the assessment data and recommendations that were specific and easily implemented by the teacher.

Most recently, Carriere, Kennedy, and Hass (2013) investigated teacher comprehension and usefulness of different report structures. Teachers were randomly assigned to read a test-based, domain-based, or referral-based report, and then completed a questionnaire about their understanding of the report content, as well as their beliefs regarding the usefulness of the report itself. The results confirmed that the report writing model impacted teachers’ comprehension of the information in the psychoeducational report. Report models emphasizing the integration of assessment data resulted in the highest comprehension scores, with teachers who read the referral-based report having significantly higher comprehension of report data than those who read the test-based report.

Although the research is limited, our review suggests a distinct pattern of preferences among educators and parents. Parents and educators prefer reports that synthesize and integrate the comprehensive assessment data (Salvagno & Teglasi, 1987; Wiener, 1985, 1987; Wiener & Kohler, 1986). Information presented around specific areas of functioning and written in behavioral terms was preferred over a test-based presentation of information (Bagnato, 1980; Carriere, Kennedy, & Hass, 2013). Teachers also placed high value on recommendations that are detailed and appropriate for implementation in the schools (Bagnato, 1980; Mussman, 1964; Salvagno & Teglasi, 1987).

Referral-Based Reports Synthesize Fundamental Research Findings with Best Practice

At this point, it will not come as a shock that we both strongly believe a referral-based or question-driven, theme-based report is the most useful and effective model. We teach and practice this model because we believe it is the best way for us to write useful and legally defensible reports. We both see our report writing as a work in progress, and neither of us believes that our reports are perfect. In fact, we read each other’s reports and often have lively discussions about how to make them better. Yet we both read and write a lot of reports, and we do our best to practice what we preach. It was our commitment to making this vital part of school psychological practice more useful for parents and teachers, and our audiences’ and students’ enthusiasm for our trainings and workshops, that led us to write this book.

We credit Batsche (1983) with developing a report writing structure that combines the fundamental research findings with best practice. His report writing model is geared toward enhancing the collaborative process between the school psychologist and the referring person. Batsche’s Referral-Based Consultative Assessment/Report Writing Model was designed to clarify and answer specific referral questions through a consultative process with the referring person. These referral questions drive the assessment process, including data collection, report writing, recommendations, and intervention planning (Batsche, 1983; Ross-Reynolds, 1990). The Batsche model consists of six steps: (1) reviewing existing data on the student, (2) consulting with the referring person, (3) collaboratively developing referral questions, (4) selecting assessment procedures validated to answer the referral questions, (5) integrating assessment data to answer the referral questions, and (6) developing recommendations in response to the assessment results. These steps not only guide the assessment process, but they are also reflected throughout the structure of the written report.

This report model synthesizes the guidelines and recommendations we present in this book and aligns with NASP’s Blueprint for Training and Practice III, which advocates that assessment activities be directly connected to prevention and intervention and focused on enhancing students’ academic and social-emotional competencies (NASP, 2006). We value collaborative consultation, integrated data-based decision making, and linking assessment to prevention and intervention. We also believe this model aligns well with the needs and preferences of consumers: clear answers to referral questions and specific recommendations are built into its structure. In Chapter 4, we go step-by-step through a referral-based report structure, explaining how to frame both your evaluation and report using this model.
